Bug 639768 - Starvation issues when requesting a lot of connections in a short period of time
Status: RESOLVED FIXED
Product: libsoup
Classification: Core
Component: HTTP Transport
Version: 2.31.x
OS: Other Linux
Priority: Normal  Severity: normal
Target Milestone: ---
Assigned To: libsoup-maint@gnome.bugs
Depends on:
Blocks:
 
 
Reported: 2011-01-17 17:42 UTC by Sergio Villar
Modified: 2011-01-18 17:35 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
Fix for the bug (1.17 KB, patch)
2011-01-17 17:45 UTC, Sergio Villar
reviewed

Description Sergio Villar 2011-01-17 17:42:47 UTC
I initially observed this behaviour when reloading planet.gnome.org with epiphany. After some initial progress the browser suddenly seems to stop loading resources.

It seems that there is a problem in the async queue's run_queue() method. The problem is that if we issue too many requests in a short period of time, the run_queue() loop will try to clean up connections ASAP without even running the currently queued items that have established connections (because as soon as it finds a prunable connection it breaks out of the loop).

This way, every queue item with an established connection located (in the queue) after an item with a prunable connection won't get executed, because the loop will never reach it. You might think that pending connections will eventually succeed and become available, but it seems that if we issue a lot of them in a row, servers start to either refuse or block our connection requests indefinitely.
Comment 1 Sergio Villar 2011-01-17 17:45:04 UTC
Created attachment 178535 [details] [review]
Fix for the bug

With this patch, all the queue items are processed on each queue run; only after doing that do we consider cleaning up connections.
Comment 2 Dan Winship 2011-01-18 16:41:23 UTC
Comment on attachment 178535 [details] [review]
Fix for the bug

>-	     item && !should_prune;
>+	     item;

Looking through git history, the "&& !should_prune" was added in http://git.gnome.org/browse/libsoup/commit/?id=0fe9af2d15c4bc8dcd3413ace0c8e3f6dbe4b487, way back in 2.4.0...

I think the idea behind breaking out early to do pruning was that we wanted the queue to be roughly FIFO, and so if we find that one of the earlier messages would be sendable if we pruned, then we want to prune and then send it, rather than continuing down the queue and eventually hitting max-conns before getting to the prune_connections step.

However, that's less important now, since now we notice connections that were closed by the server without even having to go through the pruning step. So this patch is good.

>-		if (msg->method != SOUP_METHOD_CONNECT)
>-			process_queue_item (item, &should_prune, TRUE);
>+		if (msg->method != SOUP_METHOD_CONNECT) {
>+			gboolean prune_maybe;
>+			process_queue_item (item, &prune_maybe, TRUE);
>+			if (prune_maybe && !should_prune)
>+				should_prune = TRUE;

You don't actually need to change this; process_queue_item() will set should_prune TRUE if it should be TRUE, but it doesn't ever set it FALSE.

OK to commit after reverting that part.
Comment 3 Sergio Villar 2011-01-18 17:35:55 UTC
Committed in master fce7b8b57