Bug 143061 - timeouts in event loop are calculated in O(n) on each iteration
Product: glib
Classification: Platform
Component: mainloop
Hardware/OS: Other Linux
Importance: Normal / normal
Target Milestone: ---
Assigned To: gtkdev
Depends on:
Reported: 2004-05-24 13:13 UTC by itamar
Modified: 2018-05-24 10:32 UTC
See Also:
GNOME target: ---
GNOME version: ---

The source code ;-) (12.11 KB, text/plain)
2004-07-29 19:46 UTC, Tristan Van Berkom

Description itamar 2004-05-24 13:13:29 UTC
g_main_context_prepare() calculates the timeout to be used for next iteration of
the event loop. The problem is that it iterates over all the sources each time.
This is very inefficient. 

In Twisted's default event loop, timeouts are one-off, not repeating. We keep a
sorted list of the times at which the next timeout events will happen, so
checking how long to wait is just O(1): look at the entry with the lowest value
in the sorted order. Twisted is in Python, but once you hit a hundred timeouts,
a glib-based Twisted event loop is *slower* than the default
timeout-calculating code, even though glib is in C.

Given that timeouts in glib are repeating you'd need a different algorithm, but
I'd hope that a faster one can be found...
Comment 1 itamar 2004-05-24 13:20:47 UTC
I tested with 2.2.x but have been looking at the code in HEAD; the code in the
2.2.x branch seems to be basically the same.
Comment 2 Federico Mena Quintero 2004-07-08 23:31:49 UTC
Do you often have hundreds of timeouts active in a program?

It would be nice if someone could create a patch for this.
Comment 3 itamar 2004-07-09 14:55:44 UTC
Yes, though mostly in server rather than client programs. For example, an IMAP
server might have a timeout per connection.
Comment 4 Tristan Van Berkom 2004-07-29 19:46:42 UTC
Created attachment 30041 [details]
The source code ;-)

This source should do the trick, basically: it's a single GSource which
manages a number of timeouts while keeping them in a sorted list. It would be
interesting to know how much of a performance difference this can make.

Another benefit is that you can adjust your timeout interval without 
recreating a GSource (which is the reason I wrote it in the first place).
Comment 5 itamar 2004-07-31 01:05:13 UTC
Arguably glib should work that way by default - there's no reason people should
have to recreate fast algorithms if the core can be made to DTRT.
Comment 6 Tristan Van Berkom 2004-08-02 15:39:20 UTC
Point taken. ATM, if you have a large handful of GIOWatches, the entire
list must be "check"ed when the polling times out; this could be fixed along
with timeouts in a uniform fashion if all sources were sorted by expiration time.
It shouldn't be very hard to accomplish (GSource could be made to hold an
expiration time value and the `context->source_list' would need to be sorted
in g_main_context_prepare; then g_main_context_check would only have
to handle the first few sources which return TRUE).
I'd like to take a crack at it, but unfortunately I probably won't find the time
to hack around in gmain.c (maybe a month from now, but I won't bet on it ;-).
Comment 7 Emmanuele Bassi (:ebassi) 2007-06-14 12:18:43 UTC
just for reference: the code above contains a small race condition, which happens when you remove a timeout from the timeout pool source while the pool is still spinning. this is solved using flags or reference counting on the timeout. Clutter is using a pool like this (thanks Tristan for the reference code, by the way :-)) and we have a working implementation of timeout pools (Clutter applications make heavy use of timeout sources).
Comment 8 GNOME Infrastructure Team 2018-05-24 10:32:34 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to GNOME's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: