GNOME Bugzilla – Bug 631368
libsoup 2.32.0 sometimes fails to load pages through a proxy
Last modified: 2014-06-23 16:22:14 UTC
I apologize if this is a duplicate; I filed before but can't find my bug anywhere. libsoup 2.32.0 with libwebkit 1.2.4 causes some pages not to load properly, rendering only a blank page (for instance bugs.archlinux.org and gmail.com). Manually refreshing the page brings it up. Reverting to libsoup 2.30.2 solves the problem. I posted about this on the Arch Linux forums and it appears to be common. All browsers that use libwebkit (other than Chromium) suffer from this.
are you using a proxy?
Yes, I'm behind privoxy. At least one Arch user is reporting that they're not behind a proxy and it's still happening to them.
It turns out that it is this version of libsoup and privoxy not playing well together. I didn't check before because other users said they weren't using any proxy and still had the problem.
Yes, I'm one of those users :) Epiphany 2.32 renders blank pages, usually after PHP search scripts (forums mostly), but it happens on all kinds of web pages. I'm not behind a proxy.
Happens here also, mainly when visiting the Arch Linux bug tracker and forums. It could be the server configuration, but it could also be related to the https redirect. We haven't changed anything server-side in the last few months, and I can't remember having this problem with GNOME 2.30, so I assume it's a bug in libsoup.
A few more things:
- This happens on other distros than Arch as well (so most likely not a packaging/distro issue).
- It broke somewhere between 2.30.2 (good) and 2.31.6 (bad), but I couldn't bisect it because of a huge number of "skips" due to crashes while bisecting.
- Like Skottish, I'm using privoxy, and deactivating the proxy solves the problem.
if you're not behind a proxy, then you have a different bug, and you should verify that it is *libsoup* 2.32, as opposed to epiphany 2.32 or webkit 1.2.4 that's causing the problem before filing a new libsoup bug
I've filed a separate bug report: https://bugzilla.gnome.org/show_bug.cgi?id=631525
On Fedora 14 beta (webkitgtk 1.3.4; privoxy 3.0.16; libsoup 2.31.90; epiphany 2.30.5) it appears that the proxy is simply ignored when browsing with Epiphany. Using the same proxy settings, ads are blocked in Firefox but not in Epiphany. Is that a separate bug? Anything I can do to help debug?
After some technical difficulties, I'm live again. So what can I, as a fairly knowledgeable non-developer, do to help you solve this problem? It's seriously annoying to even try to use libsoup 2.32 behind a simple privoxy setup. Reverting to 2.30 works, but it doesn't solve the problem. I want to help. What can I do?
as a non-developer, nothing unfortunately; i can reproduce the bug, so it just needs to be fixed now, but this requires some tricky rewrites in the connection-handling code.
Is anyone seeing this with non-SSL pages, or is it just SSL+proxy? And when people say that libsoup 2.30 works, is libsoup the only thing that's changing, or is that with an older version of gnutls or anything else as well? I thought I could reproduce this; if you run tests/proxy-test in the libsoup source tree over and over, it will occasionally fail, with one or more of the https tests giving errors. But I've now tested with older versions of libsoup and older versions of gnutls and still see the error there too (even though I'm sure I never saw this error before this summer either...)
I've tried downgrading only libsoup (I broke deps so it wouldn't bring down a lot of apps) and then it worked fine. I'm pretty sure that I hadn't encountered this problem prior to upgrading to GNOME 2.32, because I use only Epiphany as my web browser (and Chromium as a backup in certain rare situations). I'm seeing this with non-SSL pages also, and without a proxy.
I've only downgraded libsoup. I'm on Arch Linux and the system has seen many changes, and downgrading only libsoup always corrects the problem.
skottish: but is it correct that the problem only happens for you with https pages? gmail and bugs.archlinux.org both use https...
so, the bug that was making libsoup's proxy-test occasionally fail turns out to be... an apache bug. https://issues.apache.org/bugzilla/show_bug.cgi?id=45444 that would only have affected libsoup's internal tests, not real-world use, so now I'm not convinced this bug is any different from bug 631525, which is now fixed in master. Is anyone still seeing problems when loading through a proxy?
I'm at d4cc1a0cdd62fc3669f4ec5a8974c61aee9e9ce2 and the issue is still there.
@Dan Winship I've only seen this happen with https, and it happens every time, everywhere: Arch, gmail, my bank, my ISP... literally everywhere.
Same problem here, but kind of weird:
http://localhost --> 301 --> https://... doesn't work
http://localhost --> 302 --> https:// works
But http://wiki.archlinux.org sends a 302 to https://wiki.archlinux.org first and then a 301 to .../index.php/...
Reproducible: as often as I wish.
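For reference, here is a minimal sketch of how one could trace those hops outside the browser. It assumes the libsoup 2.x synchronous API and privoxy listening on its default localhost:8118 (both assumptions, adjust for your setup), and it disables libsoup's own redirect handling so each 301/302 is printed individually:

/* redirect-trace.c: follow a redirect chain by hand through a proxy and
 * print each hop's status code. Minimal sketch against the libsoup 2.x
 * sync API; the proxy URI (privoxy's default localhost:8118) and the
 * start URL are assumptions.
 *
 * Build roughly with:
 *   gcc redirect-trace.c $(pkg-config --cflags --libs libsoup-2.4)
 */
#include <libsoup/soup.h>
#include <stdio.h>

int main (int argc, char **argv)
{
    const char *url = argc > 1 ? argv[1] : "http://wiki.archlinux.org/";
    int hop;

    g_type_init ();   /* needed with the GLib of that era; deprecated/no-op later */

    SoupURI *proxy = soup_uri_new ("http://localhost:8118/");
    SoupSession *session = soup_session_sync_new_with_options (
        SOUP_SESSION_PROXY_URI, proxy, NULL);

    char *next = g_strdup (url);
    for (hop = 0; hop < 10 && next; hop++) {
        SoupMessage *msg = soup_message_new ("GET", next);
        /* handle redirects ourselves so we can see each hop */
        soup_message_set_flags (msg, SOUP_MESSAGE_NO_REDIRECT);
        soup_session_send_message (session, msg);

        printf ("%s -> %u %s\n", next, msg->status_code, msg->reason_phrase);

        g_free (next);
        next = NULL;
        if (SOUP_STATUS_IS_REDIRECTION (msg->status_code)) {
            const char *loc = soup_message_headers_get_one (
                msg->response_headers, "Location");
            if (loc) {
                /* resolve a possibly relative Location against the request URI */
                SoupURI *new_uri = soup_uri_new_with_base (
                    soup_message_get_uri (msg), loc);
                next = soup_uri_to_string (new_uri, FALSE);
                soup_uri_free (new_uri);
            }
        }
        g_object_unref (msg);
    }

    soup_uri_free (proxy);
    g_object_unref (session);
    return 0;
}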
If it helps at all, Arch just pushed gnutls 2.10.3 into their testing repository. This breaks libwebkit-based browsers fairly badly on my machine (actually, it totally kills jumanji). A useful, recurring message is:
"Problem occurred while loading the URL https://bbs.archlinux.org/ SSL handshake failed: A record packet with illegal version was received."
So, here are the combinations:
gnutls 2.8.6 and libsoup 2.30.2: works as expected
gnutls 2.8.6 and libsoup 2.32.2: results in this original bug report
gnutls 2.10.3 and libsoup 2.30.2: results in the error message in this post
gnutls 2.10.3 and libsoup 2.32.2: same behavior as the original bug report
libsoup 2.30 is known to be incompatible with gnutls 2.10. bug 622857
I'm also using a libwebkit browser and notice a peculiar behavior when I'm using it with tsocks (SOCKS5 proxy): when any https site is loaded, there is a long pause before the page is rendered. The pause can be bypassed by hitting any key or wiggling the mouse... the page will then render normally. This behavior has been the same since I started using this browser about 4-6 months ago (roughly). I'm using tsocks via "tsocks <browsername>". Nothing fancy.
Name: tsocks           Version: 1.8beta5-2
Name: libsoup          Version: 2.32.2-1
Name: gnutls           Version: 2.10.4-1
Let me know if there's other testing I can help with. Thanks! Scott
I don't have patches and haven't solved any of the issues; I'm only posting to provide my observations as five months have passed. The situation is getting worse as the development libraries under Arch and some extra dev areas advance (JGC has posted here and I'm using his repos to see whether this has been solved). This is the current stack:
libwebkit 1.3.12
gnutls 2.10.4
libsoup 2.33.90
glib-networking 2.28.0
Before, it was an (annoyingly) simple refresh to get pages to work. Then the 'Moved Temporarily' messages started with the newer libraries. Now those messages are occasionally preceded by 303 error messages. Every step requires greater user interaction and gives less functionality.
Oh yes, and 301 errors are also falsely appearing. Refresh. Refresh. Website.
I was about to file a new bug but then remembered this one. Current state for me is thus: Epiphany version 3.0.2; libsoup 2.34.1; OS: Fedora 15 beta.
1. Install Privoxy as a local proxy server
2. Set localhost:8118 as the http and https proxy in network settings
3. Visit bugzilla.gnome.org
Actual result: literal output from the 302 Found response: "The document has moved <a href="https://bugzilla.gnome.org/">here</a>".
Expected result: automatic loading of https://bugzilla.gnome.org/
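For anyone who wants to take Epiphany out of the equation, here is a minimal sketch along the same lines, again assuming the libsoup 2.x synchronous API and the localhost:8118 privoxy setup from the steps above. Unlike the hop-by-hop trace earlier in this thread, this one lets libsoup follow redirects itself, so with the bug present you would expect a 302 final status (or the literal "document has moved" body) instead of the https page:

/* proxy-fetch.c: fetch a page through a local proxy with libsoup's
 * default redirect handling, to reproduce the problem outside the
 * browser. Minimal sketch; localhost:8118 matches the privoxy setup
 * from the steps above.
 */
#include <libsoup/soup.h>
#include <stdio.h>

int main (void)
{
    g_type_init ();   /* era-appropriate; deprecated/no-op in newer GLib */

    SoupURI *proxy = soup_uri_new ("http://localhost:8118/");
    SoupSession *session = soup_session_sync_new_with_options (
        SOUP_SESSION_PROXY_URI, proxy, NULL);

    SoupMessage *msg = soup_message_new ("GET", "http://bugzilla.gnome.org/");
    soup_session_send_message (session, msg);   /* follows redirects itself */

    char *final_uri = soup_uri_to_string (soup_message_get_uri (msg), FALSE);
    printf ("final URI:    %s\n", final_uri);
    printf ("final status: %u %s\n", msg->status_code, msg->reason_phrase);
    printf ("body length:  %ld\n", (long) msg->response_body->length);

    g_free (final_uri);
    g_object_unref (msg);
    soup_uri_free (proxy);
    g_object_unref (session);
    return 0;
}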
Created attachment 193272 [details] [review] Fix processing of requeued messages. I'm unsure about whether this really is the right fix, but it solves the problem described in this bug for me.
I can confirm that the patch works. I really hope that it's sane. Regardless of whether it is or not, thanks Thierry. It's nice to see some activity in this thread.
The patch doesn't fix the issue I was having in comment 22 above, but it doesn't seem to break anything that I can find. I guess my issue is unrelated, but it's still minor. Scott
*** Bug 653130 has been marked as a duplicate of this bug. ***
Thanks for figuring this out. Fixed in master.
Created attachment 193449 [details] [review]
Call soup_message_clean_response() when restarting a message
When a message got restarted, we were leaving the previous response state in the message, which is bad for various reasons. In particular, this caused a problem with non-keepalive redirections of https URLs through proxies (!), because after the second CONNECT succeeded, it would see that the message already had a status_code set, and so it thought the message had been cancelled or something while it was processing the CONNECT.
proxy-test now has a regression test for this case.
Based on patches and analysis from DongJae Kim and Thierry Reding.
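For anyone following along, the gist is that a message's stale response state has to be wiped before it is re-sent. Below is a rough illustration of the idea, not the actual patch: restart_message() is a hypothetical stand-in for libsoup's internal requeue path, while soup_message_cleanup_response() and soup_session_requeue_message() are the public libsoup 2.x calls assumed here.

/* Rough illustration only, not the committed fix. restart_message() is a
 * hypothetical helper standing in for libsoup's internal requeue path.
 */
static void
restart_message (SoupSession *session, SoupMessage *msg)
{
    /* Wipe the previous response (status code, reason phrase, response
     * headers/body) so that later stages -- e.g. the code that issues a
     * new CONNECT to the proxy for an https redirect -- don't see a
     * leftover status_code and conclude the message already finished
     * or was cancelled.
     */
    soup_message_cleanup_response (msg);

    /* ...then queue it again as usual. */
    soup_session_requeue_message (session, msg);
}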
I'd like to extend my gratitude to all of those who worked on this over the last nine months. Thank you all. Your work is appreciated. Take care, skottish
The patch above, applied to libsoup 2.34.3, corrected the problem with privoxy. Unfortunately, libsoup 2.36.0 is far more unstable than 2.32.0 was when I opened this report. There are constant segfaults. In some cases the crashes are guaranteed (like my bank); in others, like in comment 20, it's somewhat random. For the moment a patched libsoup 2.34.3 is working, but that's going to go away soon enough as its dependencies keep moving forward. Please let me know which libraries are interesting to you and whether you want a GDB trace or anything else.
oh, i think libsoup 2.36 exposes a bug in earlier glib-networking that will cause a crash under some circumstances. glib-networking 2.30 should work
Thanks for the response and hopefully my continued follow-ups suggest to all of you that I really want this to work. I'm actually using glib-networking 2.30.0 since 28 Sep 2011:
~ > pp -Q libsoup glib-networking gnutls libproxy glib2 libgcrypt
libsoup 2.36.0-1
glib-networking 2.30.0-1
gnutls 3.0.3-1
libproxy 0.4.7-1
glib2 2.30.0-1
libgcrypt 1.5.0-1
If it helps (without full debugging symbols):
[Thread debugging using libthread_db enabled]
[New Thread 0x7fffeb4a1700 (LWP 2789)]
[New Thread 0x7fffaab86700 (LWP 2790)]
[New Thread 0x7fffa89ed700 (LWP 2792)]
(jumanji:2786): GLib-GObject-CRITICAL **: g_object_ref: assertion `G_IS_OBJECT (object)' failed
(jumanji:2786): GLib-GObject-WARNING **: invalid (NULL) pointer instance
(jumanji:2786): GLib-GObject-CRITICAL **: g_signal_connect_data: assertion `G_TYPE_CHECK_INSTANCE (instance)' failed
(jumanji:2786): GLib-GObject-WARNING **: invalid (NULL) pointer instance
(jumanji:2786): GLib-GObject-CRITICAL **: g_signal_connect_data: assertion `G_TYPE_CHECK_INSTANCE (instance)' failed
(jumanji:2786): GLib-GObject-CRITICAL **: g_type_instance_get_private: assertion `instance != NULL && instance->g_class != NULL' failed
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff5f8d1d4 in soup_socket_is_ssl () from /usr/lib/libsoup-2.4.so.1
(gdb) where
+ Trace 228700
jumanji is another random libwebkit-based browser (which I happen to be very fond of). I can reproduce these results with midori. I can stop it by disabling privoxy or by reverting to the patched version mentioned above.
glib-networking 2.30.1 and libsoup 2.36.1 seem to have, at least with quick and obvious tests of sites that I knew were broken, solved the problem. Thanks.