Bug 750397 - CRITICAL: Race condition in GstBus

Status: RESOLVED FIXED
Product: GStreamer
Classification: Platform
Component: gstreamer (core)
Version: 1.4.5
Hardware/OS: Other / All
Importance: Normal critical
Target Milestone: 1.9.1
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Duplicates: 767601
Blocks: 767979
Reported: 2015-06-04 13:26 UTC by Denis T
Modified: 2016-10-13 20:10 UTC

Attachments:
- gstbus: Fix races posting and popping messages. (1.55 KB, patch) - 2015-06-06 12:13 UTC, Jan Schmidt [rejected]
- poll: Take a mutex when the control state changes before signalling (3.19 KB, patch) - 2016-06-22 16:29 UTC, Sebastian Dröge (slomo) [none]
- poll: Take a mutex when the control state changes before signalling (7.55 KB, patch) - 2016-06-23 13:04 UTC, Sebastian Dröge (slomo) [rejected]
- poll: Warn if control_pending becomes negative, or read_control() is called before write_control() (1.37 KB, patch) - 2016-06-23 13:04 UTC, Sebastian Dröge (slomo) [none]
- poll: Add warnings to gst_poll_{read,write}_control() about ordering and how often they can be called (1.86 KB, patch) - 2016-06-23 13:04 UTC, Sebastian Dröge (slomo) [rejected]
- bus: Ensure to never call gst_poll_read_control() before write_control() finished (5.01 KB, patch) - 2016-06-23 13:04 UTC, Sebastian Dröge (slomo) [none]
- poll: Warn if control_pending becomes negative, or read_control() is called before write_control() (1.43 KB, patch) - 2016-06-23 13:50 UTC, Sebastian Dröge (slomo) [rejected]
- bus: Ensure to never call gst_poll_read_control() before write_control() finished (5.08 KB, patch) - 2016-06-23 15:24 UTC, Sebastian Dröge (slomo) [rejected]
- Proposed fix for both Linux and Windows. (1.03 KB, patch) - 2016-06-25 05:59 UTC, Matt Gruenke [none]
- Addresses race condition on Windows, possible with patch 330346. (1.71 KB, patch) - 2016-06-27 15:18 UTC, Matt Gruenke [none]
- Addresses race conditions on both platforms and fixes gst_poll_read_control() when control_pending == 0. (2.05 KB, patch) - 2016-06-28 02:19 UTC, Matt Gruenke [none]
- Fix for Bug 750397 (Race condition in GstBus). (5.10 KB, patch) - 2016-06-28 19:43 UTC, Matt Gruenke [none]
- Fix for Bug 750397 (Race condition in GstBus). (11.54 KB, patch) - 2016-06-29 05:35 UTC, Matt Gruenke [none]
- Simple diff of the changes in attachment 330519, for ease of reviewing. (6.25 KB, patch) - 2016-06-29 05:46 UTC, Matt Gruenke [needs-work]
- poll: Warn if the return value of gst_poll_read_control() is unused (1.12 KB, patch) - 2016-06-29 13:09 UTC, Sebastian Dröge (slomo) [committed]
- poll: set_controllable(), restart() and set_flushing() are only valid for non-timer GstPolls (3.19 KB, patch) - 2016-06-29 13:09 UTC, Sebastian Dröge (slomo) [committed]
- poll: Clarify when FALSE is returned from read/write_control() (1.97 KB, patch) - 2016-06-29 13:10 UTC, Sebastian Dröge (slomo) [committed]
- systemclock: Improve GstPoll handling and don't check for impossible errno values (2.74 KB, patch) - 2016-06-29 13:14 UTC, Sebastian Dröge (slomo) [committed]
- bus: Make sure to always read the control after popping a message (1.96 KB, patch) - 2016-06-29 13:14 UTC, Sebastian Dröge (slomo) [committed]
- poll: Fix various race conditions with read_control() and write_control() (7.85 KB, patch) - 2016-06-29 17:02 UTC, Sebastian Dröge (slomo) [committed]

Description Denis T 2015-06-04 13:26:29 UTC
The gst_bus_post() function has a race condition. When one thread posts messages to a bus and another one reads them from this bus, the following situation is possible:

1. The first thread posts a message using gst_bus_post and GST_BUS_PASS. This leads to gst_atomic_queue_push being called (line 346 gstbus.c). Then it calls gst_poll_write_control (line 347 gstbus.c). gst_poll_write_control calls raise_wakeup (gstpoll.c, line 180). raise_wakeup increments control_pending variable and then calls WAKE_EVENT macro, as control_pending was 0 before increment. But wait! Imagine that WAKE_EVENT is not called now, because the CPU has just passed execution to the second thread, which calls gst_bus_timed_pop_filtered.

2. This second thread calls gst_bus_timed_pop_filtered, which calls gst_atomic_queue_pop (line 499 gstbus.c). We have 1 message in the queue already. Right after this it also calls gst_poll_read_control (gstpoll.c, line 1541), which calls release_wakeup (gstpoll.c, line 196), which decrements set->control_pending value, making it zero again. This means, that RELEASE_EVENT macro is called. It reads a byte from the socket (note that there's nothing in the socket yet).

3. Now the first thread continues its execution. It calls WAKE_EVENT, which writes a byte to the socket.

At this moment we have an empty bus queue, control_pending == 0, but a byte in the socket! This means that every call to gst_bus_timed_pop_filtered from this moment on will lead to an immediate wakeup of gst_poll_wait, as the socket has data to read. But we have no messages in the queue! Now code like the following will execute without any delays, even if there are no messages in the queue:

do
{
  GstMessage* msg = gst_bus_timed_pop_filtered(bus, 100 * GST_MSECOND, GST_MESSAGE_ANY);
  //Process message
}
while(1);


This mutex fixes the issue:

    case GST_BUS_PASS:
      /* pass the message to the async queue, refcount passed in the queue */
      GST_DEBUG_OBJECT (bus, "[msg %p] pushing on async queue", message);
      g_mutex_lock (&bus->priv->queue_lock); //PROPOSED FIX
      gst_atomic_queue_push (bus->priv->queue, message);
      gst_poll_write_control (bus->priv->poll);
      g_mutex_unlock (&bus->priv->queue_lock); //PROPOSED FIX
      GST_DEBUG_OBJECT (bus, "[msg %p] pushed on async queue", message);


Probably GST_BUS_ASYNC case has race condition, too.
Comment 1 Vijay Jayaraman 2015-06-04 19:12:00 UTC
Hi,

I noticed a similar issue today where my application's CPU usage shot up from typically 15% to 50-60%. Debugging further, the following debug output shows up all the time when the CPU usage shoots up.

0:06:59.338361149 21782   0x4b0600 INFO                 GST_BUS gstbus.c:549:gst_bus_timed_pop_filtered:<bus1> we got woken up, recheck for message
0:06:59.338832148 21782   0x4b0600 INFO                 GST_BUS gstbus.c:549:gst_bus_timed_pop_filtered:<bus1> we got woken up, recheck for message
0:06:59.339566815 21782   0x4b0600 INFO                 GST_BUS gstbus.c:549:gst_bus_timed_pop_filtered:<bus1> we got woken up, recheck for message
0:06:59.340383149 21782   0x4b0600 INFO                 GST_BUS gstbus.c:549:gst_bus_timed_pop_filtered:<bus1> we got woken up, recheck for message
0:06:59.340860000 21782   0x4b0600 INFO                 GST_BUS gstbus.c:549:gst_bus_timed_pop_filtered:<bus1> we got woken up, recheck for message

As you can see, gst_bus_timed_pop_filtered wakes up almost every millisecond and this causes the CPU usage to shoot up. When this happened, I tried pausing my player and this message still fills up the log.

I tried patching gstreamer with the proposed patch and I had to use it in the GST_BUS_ASYNC case as well and the issue seems to have disappeared.

Note: The original issue where the CPU shot up was not 100% reproducible but it happened most of the time if I let the pipeline play long enough.

I am using gstreamer 1.4.0. Anybody else seeing this issue?

Thanks,
-Vijay
Comment 2 Jan Schmidt 2015-06-06 11:28:19 UTC
Seems plausible, since the control sockets are non-blocking. A related race exists on Windows using the event objects there: it's possible the event will be set after the corresponding message is consumed, since there's no lock to make it atomic.
Comment 3 Jan Schmidt 2015-06-06 12:13:06 UTC
Created attachment 304684 [details] [review]
gstbus: Fix races posting and popping messages.

Wrap message posting and writing to the GstPoll control socket
in the bus mutex so the operation can't race against a client
attempting to pop a message.
Comment 4 Sebastian Dröge (slomo) 2015-06-06 12:41:29 UTC
Doesn't this make the lockless queue used in GstBus a bit useless now?
Comment 5 Jan Schmidt 2015-06-07 01:10:32 UTC
Thinking about this more, this seems like more of a problem with GstPoll than the bus - it's not safe to write_control and read_control simultaneously from different threads when read_control can successfully decrement the control_pending atomic int to 0 and then fail to read a byte.
Comment 6 Jonathan Matthew 2015-10-15 12:46:23 UTC
I hit this pretty regularly in rhythmbox, where the crossfading player backend uses gst_bus_timed_pop during preroll, and it messes up the glib mainloop source (used during normal operation) which causes the whole UI to freeze.

Would retrying the read until successful be acceptable? At that point we know that either there is a byte to read already, or we're racing another thread that will write it soon. Unfortunately, I don't think this would work for the win32 implementation - it looks like ResetEvent will succeed whether the event is signaled or not.

I've tried 'while (RELEASE_EVENT (set) == 0) {}' out a bit on linux and it does appear to fix the problem for me.
Comment 7 James Stevenson 2015-10-20 14:59:06 UTC
I can confirm we are also seeing this bug in GStreamer 1.2 and 1.4.
Comment 8 James Stevenson 2015-10-20 15:11:03 UTC
Another possible fix for this issue, without using the lock, would be to reverse the order of the statements when writing to the queue.

So instead of

     gst_atomic_queue_push (bus->priv->queue, message);
     gst_poll_write_control (bus->priv->poll);

change the order to

     gst_poll_write_control (bus->priv->poll);
     gst_atomic_queue_push (bus->priv->queue, message);

This makes sure the GstPoll control state is > 0 before an item is removed from the queue. However, it also has the cost of a possible premature wakeup of something sleeping on the bus.

Any thoughts or suggestions on doing that?
Comment 9 Matt Gruenke 2015-10-31 01:00:56 UTC
I've got a fair amount of runtime on a simple fix.  Just clear the O_NONBLOCK flag of bus->priv->poll->control_read_fd.fd.  I'm not sure why it was ever set, in the first place, but I believe it's not necessary.  In fact, you might get rid of it on the write side, as well.

Without this change, I'm not certain that even holding bus->priv->queue_lock (the fix proposed in comment #1) will always suffice, on all platforms.  I don't know whether any guarantees are made about how fast a byte is transferred from the write side of a socketpair() to the read side.  If it's not guaranteed to be atomic, then perhaps a nonblocking read could still fail, even after the writer relinquishes the mutex.

However, if read_fd.fd is kept in blocking mode, then the atomic counter (bus->priv->poll->control_pending) will serve as a reliable arbiter of whether to do the read.  So, even if the read does block for a bit, it's guaranteed that there's a byte on the way.

That said, acquiring the mutex (bus->priv->queue_lock) is certainly a good idea (and I haven't even considered whether it's necessary for Windows).  Mutexes are cheap (unless there's contention) and bus messages tend to be infrequent.  So, I really don't see a good argument not to do both.
Comment 10 Matt Gruenke 2015-10-31 01:10:08 UTC
(In reply to Jan Schmidt from comment #5)
> it's not safe to write_control and read_control simultaneously from
> different threads when read_control can successfully decrement the
> control_pending atomic int to 0 and then fail to read a byte.

It's plausible even writing and reading from the same thread could leave GstPoll in a bad state, depending on how a given platform implements socketpair().  Clearing O_NONBLOCK on control_read_fd.fd fixes this.
Comment 11 Sebastian Dröge (slomo) 2016-06-22 16:29:28 UTC
Created attachment 330201 [details] [review]
poll: Take a mutex when the control state changes before signalling

The previous code was racy as too much protection was expected from the atomic
integer. What could have happened is:

Thread 1:
  - release()
  - --control_pending == 0

Thread 2:
  - raise()
  - control_pending++ == 0
  - WAKE_EVENT()

Thread 1:
  - RELEASE_EVENT()

The final state would be that control_pending > 0 but no event is pending. As
such, only when release() would be called another time everything could get
into a consistent state again. However e.g. if the GstPoll is used by a bus
GSource, the source would never wake up again and would have no opportunity to
call release() again.

The same thing can happen in the other way around:

Thread 1:
  - raise()
  - control_pending++ == 0

Thread 2:
  - release()
  - --control_pending == 0
  - RELEASE_EVENT()

Thread 1:
  - WAKE_EVENT()

Here the final state would be that control_pending == 0 but an event is
pending. Future calls to raise() would add *yet another* event (except on
Windows, where the event can only be signalled or not), but release() would
only release one of them. This results in there always being an event pending.
With GstBus this causes an empty bus to cause gst_bus_timed_pop_filtered() to
wake up immediately, see no message is there, sleep again, wake up again
immediately, repeat forever.

As a solution we add a mutex for the case when control_pending == 0 before
WAKE_EVENT() and RELEASE_EVENT(), and check if control_pending has changed
after we acquired the mutex. If it did, we shouldn't do anything anymore.

While this mutex makes gst_bus_post() not completely lockless anymore, it is
only taken whenever we change from an empty to a non-empty bus and back. That
is, not very often ideally.
Comment 12 Sebastian Dröge (slomo) 2016-06-22 16:34:00 UTC
Comment on attachment 304684 [details] [review]
gstbus: Fix races posting and popping messages.

This also solved the problem, but only for the bus and takes too many mutexes when not needed
Comment 13 Matt Gruenke 2016-06-22 16:47:24 UTC
(In reply to Sebastian Dröge (slomo) from comment #11)
> As a solution we add a mutex for the case when control_pending == 0 before
> WAKE_EVENT() and RELEASE_EVENT(), and check if control_pending has changed
> after we acquired the mutex. If it did, we shouldn't do anything anymore.

I've got a large amount of runtime, on the fix I mentioned in comment #9.  We probably have about a couple thousands streams running, at any given time.  They're each restarted several times per day.
Comment 14 Nicolas Dufresne (ndufresne) 2016-06-22 16:47:32 UTC
Review of attachment 330201 [details] [review]:

This change is better, but yet, still tricky. Could we try and exercise this into a test, it would also provide us a better understanding of the setup that cause issues.
Comment 15 Sebastian Dröge (slomo) 2016-06-22 16:50:25 UTC
(In reply to Nicolas Dufresne (stormer) from comment #14)
> Review of attachment 330201 [details] [review] [review]:
> 
> This change is better, but yet, still tricky. Could we try and exercise this
> into a test, it would also provide us a better understanding of the setup
> that cause issues.

There's no way to reliably test this really, you would need a stress test that runs for many minutes... or a virtual machine where you can step each thread manually to get them into a conflicting situation.

For understanding the setup, read the commit message. It all makes sense to me now and is actually relatively easy ;)
Comment 16 Matt Gruenke 2016-06-22 16:51:43 UTC
(In reply to Matt Gruenke from comment #13)
> 
> I've got a large amount of runtime, on the fix I mentioned in comment #9.

I forgot to mention that I'm not using any additional mutex or locking (contrary to the last paragraph of that comment).  I've only tested this on Linux.
Comment 17 Sebastian Dröge (slomo) 2016-06-22 16:55:21 UTC
(In reply to Matt Gruenke from comment #16)
> (In reply to Matt Gruenke from comment #13)
> > 
> > I've got a large amount of runtime, on the fix I mentioned in comment #9.
> 
> I forgot to mention that I'm not using any additional mutex or locking
> (contrary to the last paragraph of that comment).  I've only tested this on
> Linux.

If I understand your proposed change correctly, you can still run into the first situation I mentioned in the commit message.

And AFAIU the blocking mode of the sockets is necessary so that we can be sure that always either 1 or 0 bytes are in the socket. If you would do a non-blocking read, you could end up reading nothing, then having 1 byte arriving in the socket shortly after, and then be in an inconsistent state again where next time a second byte is written to the socket.
Comment 18 Matt Gruenke 2016-06-22 19:31:02 UTC
(In reply to Sebastian Dröge (slomo) from comment #17)
> (In reply to Matt Gruenke from comment #16)
> > (In reply to Matt Gruenke from comment #13)
> > > 
> > > I've got a large amount of runtime, on the fix I mentioned in comment #9.
> > 
> > I forgot to mention that I'm not using any additional mutex or locking
> > (contrary to the last paragraph of that comment).  I've only tested this on
> > Linux.
> 
> If I understand your proposed change correctly, you can still run into the
> first situation I mentioned in the commit message.

The atomic counter is a reliable arbiter of whether to do the read.  Adding a mutex provides no benefit, on Linux.


> And AFAIU the blocking mode of the sockets is necessary so that we can be
> sure that always either 1 or 0 bytes are in the socket. If you would do a
> non-blocking read, you could end up reading nothing, then having 1 byte
> arriving in the socket shortly after, and then be in an inconsistent state
> again where next time a second byte is written to the socket.

Yes.  This was the bug.  The socketpair in GstPoll is put in O_NONBLOCK mode, for some reason.  I think it's wrong.  Even with a mutex, I think the bug might still happen, unless you make the read block.

Again, I haven't examined the Windows code path.
Comment 19 Sebastian Dröge (slomo) 2016-06-22 19:51:27 UTC
Can you provide a patch for what you're proposing? Either I misunderstand you, or it doesn't solve the problem. Independent of Windows or not (which also needs special consideration).
Comment 20 Sebastian Dröge (slomo) 2016-06-22 20:07:07 UTC
FWIW, I believe my patch also has one problem left (that was not mentioned so far) on non-Windows. I'll investigate more tomorrow to find a solution for both platforms.
Comment 21 Sebastian Dröge (slomo) 2016-06-23 07:36:05 UTC
*** Bug 767601 has been marked as a duplicate of this bug. ***
Comment 22 Sebastian Dröge (slomo) 2016-06-23 13:04:00 UTC
Created attachment 330253 [details] [review]
poll: Take a mutex when the control state changes before signalling

The previous code was racy as too much protection was expected from the atomic
integer. What could have happened is:

Thread 1:
  - release()
  - --control_pending == 0

Thread 2:
  - raise()
  - control_pending++ == 0
  - WAKE_EVENT()

Thread 1:
  - RELEASE_EVENT()

The final state would be that control_pending > 0 but no event is pending. As
such, only when release() would be called another time everything could get
into a consistent state again. However e.g. if the GstPoll is used by a bus
GSource, the source would never wake up again and would have no opportunity to
call release() again.

The same thing can happen in the other way around:

Thread 1:
  - raise()
  - control_pending++ == 0

Thread 2:
  - release()
  - --control_pending == 0
  - RELEASE_EVENT()

Thread 1:
  - WAKE_EVENT()

Here the final state would be that control_pending == 0 but an event is
pending. Future calls to raise() would add *yet another* event (except on
Windows, where the event can only be signalled or not), but release() would
only release one of them. This results in there always being an event pending.
With GstBus this causes an empty bus to cause gst_bus_timed_pop_filtered() to
wake up immediately, see no message is there, sleep again, wake up again
immediately, repeat forever.

Note that this is also forbidden by the GstPoll API: read_control() must not
be called before write_control() has returned. Follow-up patches to GstBus and
GstBufferPool will fix this.

As a solution we add a mutex for the case when control_pending == 0 before
WAKE_EVENT() and RELEASE_EVENT(), and then have a second, lock-protected
control_pending counter that reflects the real situation while everything
is locked around the socket access.

While this mutex makes gst_bus_post() not completely lockless anymore, it is
only taken whenever we change from an empty to a non-empty bus and back. That
is, not very often ideally.
Comment 23 Sebastian Dröge (slomo) 2016-06-23 13:04:06 UTC
Created attachment 330254 [details] [review]
poll: Warn if control_pending becomes negative, or read_control() is called before write_control()

Both is forbidden by the GstPoll API and leads to undefined behaviour, race
conditions and worse.
Comment 24 Sebastian Dröge (slomo) 2016-06-23 13:04:13 UTC
Created attachment 330255 [details] [review]
poll: Add warnings to gst_poll_{read,write}_control() about ordering and how often they can be called

read() must be called after the corresponding write() has returned, and only
that number of times as there were writes(), not more.
Comment 25 Sebastian Dröge (slomo) 2016-06-23 13:04:19 UTC
Created attachment 330256 [details] [review]
bus: Ensure to never call gst_poll_read_control() before write_control() finished

This means that we will always use gst_poll_wait() now when using
gst_bus_timed_pop_filtered() or any other of the gst_bus_pop*() functions.

Also check if both functions are actually succeeding and print a warning
otherwise.
Comment 26 Sebastian Dröge (slomo) 2016-06-23 13:50:12 UTC
Created attachment 330259 [details] [review]
poll: Warn if control_pending becomes negative, or read_control() is called before write_control()

Both is forbidden by the GstPoll API and leads to undefined behaviour, race
conditions and worse.
Comment 27 Sebastian Dröge (slomo) 2016-06-23 15:24:29 UTC
Created attachment 330269 [details] [review]
bus: Ensure to never call gst_poll_read_control() before write_control() finished

This means that we will always use gst_poll_wait() now when using
gst_bus_timed_pop_filtered() or any other of the gst_bus_pop*() functions.

Also check if both functions are actually succeeding and print a warning
otherwise.
Comment 28 Sebastian Dröge (slomo) 2016-06-23 15:31:35 UTC
I think except for GstBufferPool (bug #767979), this is all fixed now.
Comment 29 Matt Gruenke 2016-06-24 03:48:30 UTC
(In reply to Sebastian Dröge (slomo) from comment #28)
> I think except for GstBufferPool (bug #767979), this is all fixed now.

I disagree.  I think your diff solves nothing, as you're acquiring the mutex too late.  The entire problem comes down to an emergent discrepancy between set->control_pending and the pipe occupancy.

If a mutex were to solve this problem, you'd need to lock it before testing/modifying set->control_pending.  And then, what would be the point of set->control_pending_locked?

But, I'm not sure even that would fix the problem, so long as gstpoll.c contains the following:

    fcntl (control_sock[0], F_SETFL, O_NONBLOCK);
    fcntl (control_sock[1], F_SETFL, O_NONBLOCK);

Because, even if a mutex is locked before set->control_pending is tested/modified, is the pipe guaranteed to be atomic?  In other words, if one thread reads the set->control_read_fd.fd immediately after another thread's write has returned, is the reader guaranteed to get the byte?  On all platforms that support socketpair()?  If not, no mutex would completely avoid the need to make it blocking.

However, if you simply make the control pipe block, then it solves the inconsistency between the pipe and the atomic counter, and you can even remove the mutex locking you just added.

Consider the following sequence:

0. set->control_pending is currently 0
1. Writer increments set->control_pending
2. Reader decrements set->control_pending
3. Reader performs a read on set->control_read_fd.fd and blocks
4. Write writes set->control_write_fd.fd
5. Reader unblocks and returns

Consistency between set->control_pending and the pipe are maintained.  And without any mutex locking.

The reason I haven't posted a patch is that I didn't modify gstpoll.c.  For my purposes, I just reached inside the GstBus, before we ever read it, and ensure that O_NONBLOCK has been cleared from its poll instance.

As I said, I've got probably thousands of pipeline years of runtime on this fix, and probably hundreds of thousands of pipeline setups & teardowns.  Given how quickly I encountered the original problem (within days and maybe a few hundred pipeline setups/teardowns), any inadequacies of my fix should've become apparent by now.
Comment 30 Sebastian Dröge (slomo) 2016-06-24 07:44:16 UTC
(In reply to Matt Gruenke from comment #29)
> (In reply to Sebastian Dröge (slomo) from comment #28)
> > I think except for GstBufferPool (bug #767979), this is all fixed now.
> 
> I disagree.  I think your diff solves nothing, as you're acquiring the mutex
> too late.  The entire problem comes down to an emergent discrepancy between
> set->control_pending and the pipe occupancy.
> 
> If a mutex were to solve this problem, you'd need to lock it before
> testing/modifying set->control_pending.  And then, what would be the point
> of set->control_pending_locked?

The point of the control_pending_locked is to actually detect this discrepancy properly. control_pending is always correct at the time the atomic operation happens, after that things go out of sync. So we now have additionally a locked counter that basically is the "fill level" of the socket.

By this we make sure that always the socket contains exactly one byte when it should, and none otherwise. Which is needed especially for the Windows part, as there you only have an on/off switch basically.

> But, I'm not sure even that would fix the problem, so long as gstpoll.c
> contains the following:
> 
>     fcntl (control_sock[0], F_SETFL, O_NONBLOCK);
>     fcntl (control_sock[1], F_SETFL, O_NONBLOCK);
> 
> Because, even if a mutex is locked before set->control_pending is
> tested/modified, is the pipe guaranteed to be atomic?  In other words, if
> one thread reads the set->control_read_fd.fd immediately after another
> thread's write has returned, is the reader guaranteed to get the byte?  On
> all platforms that support socketpair()?  If not, no mutex would completely
> avoid the need to make it blocking.

GstPoll API requires that a read() only happens after a write() has returned, just that GstBus ignored that condition. Now we always first wait on the socket with poll() before doing anything else (in GstBus).

Making the socket blocking is not going to solve anything here additional to that. And it won't work on Windows because there the signalling is inherently non-blocking.

> However, if you simply make the control pipe block, then it solves the
> inconsistency between the pipe and the atomic counter, and you can even
> remove the mutex locking you just added.
> 
> Consider the following sequence:
> 
> 0. set->control_pending is currently 0
> 1. Writer increments set->control_pending
> 2. Reader decrements set->control_pending
> 3. Reader performs a read on set->control_read_fd.fd and blocks
> 4. Write writes set->control_write_fd.fd
> 5. Reader unblocks and returns
> 
> Consistency between set->control_pending and the pipe are maintained.  And
> without any mutex locking.

Can you give me an example sequence that fails with my patch? This one works in both variants that are now possible due to the mutex.

As explained above, making things blocking does not solve anything that isn't solved now already... and it won't work for Windows.

> The reason I haven't posted a patch is that I didn't modify gstpoll.c.  For
> my purposes, I just reached inside the GstBus, before we ever read it, and
> ensure that O_NONBLOCK has been cleared from its poll instance.
> 
> As I said, I've got probably thousands of pipeline years of runtime on this
> fix, and probably hundreds of thousands of pipeline setups & teardowns. 
> Given how quickly I encountered the original problem (within days and maybe
> a few hundred pipeline setups/teardowns), any inadequacies of my fix
> should've become apparent, by now.

Nobody noticed that GstBus was broken like this for more than 7 years (if my GIT archaeology was correct), and even now that we know the problem it only happens for a few people. I don't think "it works for me" is a good argument in this case :)



Nonetheless I think it would make sense to refactor GstPoll after this is fixed in a minimally invasive way to have a more useful API (so that e.g. the pattern in GstBufferPool can work reliable). That's going to be a longer term project though as we need to ensure it actually works reliable on all platforms.
Comment 31 Matt Gruenke 2016-06-25 05:59:33 UTC
Created attachment 330346 [details] [review]
Proposed fix for both Linux and Windows.

The attached patch makes both Linux and Windows block on the signalling of the control fd/Event.

This solves the race condition, without the addition of any mutex locking or any assumptions about how quickly writes to the socketpair will propagate to the read fd.

Needs testing.
Comment 32 Matt Gruenke 2016-06-25 08:00:57 UTC
(In reply to Sebastian Dröge (slomo) from comment #30)
> (In reply to Matt Gruenke from comment #29)
> > (In reply to Sebastian Dröge (slomo) from comment #28)
> > > I think except for GstBufferPool (bug #767979), this is all fixed now.
> > 
> > I disagree.  I think your diff solves nothing, as you're acquiring the mutex
> > too late.  The entire problem comes down to an emergent discrepancy between
> > set->control_pending and the pipe occupancy.
> > 
> > If a mutex were to solve this problem, you'd need to lock it before
> > testing/modifying set->control_pending.  And then, what would be the point
> > of set->control_pending_locked?
> 
> The point of the control_pending_locked is to actually detect this
> discrepancy properly. control_pending is always correct at the time the
> atomic operation happens, after that things go out of sync. So we now have
> additionally a locked counter that basically is the "fill level" of the
> socket.

But that's not the problem.  The problem is inconsistency between set->control_pending and the socket.  Adding another counter doesn't help.  Again, if you want to solve this with a mutex, you need to lock it before testing/modifying set->control_pending and release it after WAKE_EVENT()/RELEASE_EVENT().

set->control_pending did a fine job of tracking the fill level of the socket - the issue was a race condition that your mutex is being acquired too late to prevent.


> By this we make sure that always the socket contains exactly one byte when
> it should, and none otherwise. Which is needed especially for the Windows
> part, as there you only have an on/off switch basically.

The socketpair is also being used as an on/off switch, since it should contain only 0 or 1 bytes.


> > But, I'm not sure even that would fix the problem, so long as gstpoll.c
> > contains the following:
> > 
> >     fcntl (control_sock[0], F_SETFL, O_NONBLOCK);
> >     fcntl (control_sock[1], F_SETFL, O_NONBLOCK);
> > 
> > Because, even if a mutex is locked before set->control_pending is
> > tested/modified, is the pipe guaranteed to be atomic?  In other words, if
> > one thread reads the set->control_read_fd.fd immediately after another
> > thread's write has returned, is the reader guaranteed to get the byte?  On
> > all platforms that support socketpair()?  If not, no mutex would completely
> > avoid the need to make it blocking.
> 
> GstPoll API requires that a read() only happens after a write() has
> returned, just that GstBus ignored that condition. Now we always first wait
> on the socket with poll() before doing anything else (in GstBus).

Even requiring the write to complete before the read won’t necessarily guarantee that the read won’t fail.  What guarantees are there on the propagation time between a write to control_write_fd.fd and when it can be read from control_read_fd.fd?

I sense you're reluctant to remove O_NONBLOCK.  Do you think it’s needed, or even beneficial?  Why?


> Making the socket blocking is not going to solve anything here additional to
> that. And it won't work on Windows because there the signalling is
> inherently non-blocking.

Actually, because Events are synchronization primitives, and this one is created with bManualReset=TRUE, it needn't be so.  In the patch I just submitted, RELEASE_EVENT() now blocks, on both platforms.


> Can you give me an example sequence that fails with my patch? This one works
> in both variants that are now possible due to the mutex.

Just as without the patch:

0. set->control_pending is initially 0.
1. Writer increments set->control_pending
2. Reader decrements set->control_pending
3. Reader gets the mutex, finds that set->control_pending_locked is
   still 0, and skips the read.
4. Writer gets the mutex, increments set->control_pending_locked, and
   performs the write.

Now, set->control_pending is 0, but set->control_pending_locked is 1 and a byte is waiting in set->control_read_fd.fd that will never be read, due to the skew between them.  Consequently, the fd/Event will remain signaled, causing select()/WSAEventSelect() always to immediately return.  This is equivalent to the original bug.
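The interleaving above is deterministic, so it can be replayed single-threaded. A minimal sketch (all names hypothetical; plain ints stand in for the atomic counter, the patch's locked counter, and the socketpair() fill level):

```c
#include <assert.h>

typedef struct {
  int control_pending;        /* the atomic counter */
  int control_pending_locked; /* the mutex-protected counter from the patch */
  int socket_bytes;           /* bytes sitting in the socketpair() */
} FakeSet;

/* Steps 1-2: the atomic operations happen before either thread locks */
static void writer_atomic_inc (FakeSet *s) { s->control_pending++; }
static void reader_atomic_dec (FakeSet *s) { s->control_pending--; }

/* Step 3: reader holds the mutex, sees the locked counter at 0, skips read */
static void reader_locked_part (FakeSet *s)
{
  if (s->control_pending_locked > 0) {
    s->control_pending_locked--;
    s->socket_bytes--;        /* read one byte */
  }                           /* else: the read is skipped */
}

/* Step 4: writer holds the mutex, bumps the locked counter, writes */
static void writer_locked_part (FakeSet *s)
{
  s->control_pending_locked++;
  s->socket_bytes++;          /* write one byte */
}

static FakeSet replay_race (void)
{
  FakeSet s = { 0, 0, 0 };
  writer_atomic_inc (&s);     /* 1. */
  reader_atomic_dec (&s);     /* 2. */
  reader_locked_part (&s);    /* 3. */
  writer_locked_part (&s);    /* 4. */
  return s;
}
```

After the replay, the byte written in step 4 is stranded in the socket, matching the stuck-signaled state described above.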


> > As I said, I've got probably thousands of pipeline years of runtime on this
> > fix, and probably hundreds of thousands of pipeline setups & teardowns. 
> > Given how quickly I encountered the original problem (within days and maybe
> > a few hundred pipeline setups/teardows), any inadequacies of my fix
> > should've become apparent, by now.
> 
> Nobody noticed that GstBus was broken like this for more than 7 years (if my
> GIT archaeology was correct), and even now that we know the problem it only
> happens for a few people. I don't think "it works for me" is a good argument
> in this case :)

I don't know when it was introduced, but we didn't encounter the bug in gstreamer-0.10.25, which we’d been using until last year.  When we upgraded to 1.2.4, we noticed it quite easily.

We use gst_bus_timed_pop(), though I haven’t investigated whether it occurs with g_main_loop().   Since the bug is intermittent and merely causes excessive CPU utilization (at worst), it’s not nearly as noticeable or problematic as bugs which result in segfaults, connection loss, or data corruption.
Comment 33 Jan Schmidt 2016-06-25 11:33:17 UTC
It seems to me that removing non-blocking mode from the socket is sufficient on Linux, but I don't think so on Windows.

A socket *can* end up containing multiple bytes, but an Event can only be set once. Consider a scenario with multiple threads submitting control events, and a single reader.

0. set->control_pending is initially 0.
1. Writer 1 increments set->control_pending
2. Writer 1 sets the Event
3. Reader decrements set->control_pending
4. Writer 2 increments set->control_pending
5. Writer 2 sets the Already-Set event again
6. Reader clears the Event

Now the control variable is set but the Event never will be. I think we can use a Semaphore object instead on Windows to avoid that.
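The difference between a boolean Event and a counting semaphore in that sequence can be shown with a deterministic replay. This is a sketch, not Windows code: plain ints model the Event's signaled state (SetEvent/ResetEvent) and a semaphore's count (ReleaseSemaphore/wait), side by side:

```c
#include <assert.h>

typedef struct {
  int control_pending;
  int event_set;   /* boolean Event: a second SetEvent() is absorbed */
  int sem_count;   /* counting semaphore: every release is counted */
} Model;

static Model replay_two_writers (void)
{
  Model m = { 0, 0, 0 };
  m.control_pending++;  /* 1. Writer 1 increments          */
  m.event_set = 1;      /* 2. Writer 1 sets the Event      */
  m.sem_count++;        /*    (semaphore: release)         */
  m.control_pending--;  /* 3. Reader decrements            */
  m.control_pending++;  /* 4. Writer 2 increments          */
  m.event_set = 1;      /* 5. sets the already-set Event: wakeup lost */
  m.sem_count++;        /*    (semaphore: counts to 2, nothing lost)  */
  m.event_set = 0;      /* 6. Reader clears the Event      */
  m.sem_count--;        /*    (semaphore: one wait)        */
  return m;
}
```

With the Event, the counter ends at 1 while the Event is unsignaled; with the semaphore, the count still records the outstanding wakeup.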
Comment 34 Matt Gruenke 2016-06-26 01:36:01 UTC
(In reply to Jan Schmidt from comment #33)
> It seems to me that removing non-blocking mode from the socket is sufficient
> on Linux, but I don't think so on Windows.

I agree on all points.  Thank you.


> I think we can use a Semaphore object instead on Windows to avoid that.

Good point.  So, CreateSemaphore() with lInitialCount=0 and lMaximumCount=999 (or whatever the semaphore's largest supported value is).  Then, replace SetEvent() with ReleaseSemaphore()?

The tricky part is that the concepts of selecting on it and acquiring it are intertwined.  I'll have a look and see if I can make it work.

Otherwise, the simplest fix is going to be putting a mutex around both control_pending and WAKE_EVENT()/RELEASE_EVENT().  On Windows, at least.
Comment 35 Matt Gruenke 2016-06-26 02:25:30 UTC
(In reply to Matt Gruenke from comment #34)

> The tricky part is that the concepts of selecting on it and acquiring it are
> intertwined.  I'll have a look and see if I can make it work.

I'm thinking that, whenever a normal wait is interrupted by the wakeup_event, we'd just call ReleaseSemaphore() on it.  Then, RELEASE_EVENT() would do another wait on it, to make it block like the socketpair().
Comment 36 Sebastian Dröge (slomo) 2016-06-26 07:49:41 UTC
(In reply to Jan Schmidt from comment #33)
> Now the control variable is set but the Event never will be. I think we can
> use a Semaphore object instead on Windows to avoid that.

Yes, that should work; AFAIU they also work with WaitForMultipleObjects().

Main question then is how to implement release_wakeup_all() with all this.
Comment 37 Sebastian Dröge (slomo) 2016-06-26 09:07:11 UTC
I missed that you can apparently wait on semaphores the same way on Windows; that's why I didn't consider this a useful alternative. It all still has to work with how the GLib main context works.
Note that this was the main reason why I dismissed the blocking option for Linux: without it, implementing the same external behaviour on Windows would be quite tricky... And we need consistent external behaviour for our APIs across platforms.

I'll take a look at implementing that on Monday when I have a real keyboard in front of me again, and run it through my testcase.
Comment 38 Sebastian Dröge (slomo) 2016-06-26 10:44:55 UTC
I have to check some details of the Windows APIs and run through all the scenarios, but besides working for this case, it might also solve the buffer pool issue directly, with some extra care in the implementation. It does seem to change behaviour slightly in some (IMHO unsupported) corner cases, replacing undefined behaviour that might happen to work with a deadlock.
Comment 39 Sebastian Dröge (slomo) 2016-06-26 10:51:04 UTC
Comment on attachment 330346 [details] [review]
Proposed fix for both Linux and Windows.

This goes in the right direction but has problems on both platforms still if I'm not mistaken. I'll update it tomorrow with a new patch on top that contains the details
Comment 40 Sebastian Dröge (slomo) 2016-06-27 08:54:08 UTC
For the semaphore: as you mentioned, because waiting and decrementing are combined on Windows, it won't be usable as-is. We have no control over what GMainContext does with the semaphore in relation to calling dispatch. It would possibly wait multiple times on the same semaphore before calling dispatch.

Also, gst_poll_wait() would then decrement the semaphore, removing the item from the bus another time... while on POSIX, waiting has no effect on the "fill level" of the socket; only removing an item does.

We would fundamentally change the external behaviour of GstPoll here between different platforms, which is not a solution. (Which also means that we can't make it blocking on POSIX but not on Windows)


For the blocking implementation, if I'm not mistaken we will need to read() whenever the counter goes to <= 0 (not just == 0), and we need to write() whenever it was <= 0 before. Otherwise we might end up with control_pending==0, release_wakeup(), control_pending goes to -1, we don't read/block but instead return TRUE as if reading was successful.


Also if we block, for release_all_wakeup() we would need to know the exact fill level of the socket to read exactly that many bytes... Some counter, which would need to be protected by a mutex because of the reasons we talked about all the time here now.
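The "<= 0" rule described above can be sketched in a few lines (hypothetical names, not the actual gstpoll.c code; socket_bytes stands in for the socketpair() fill level, and a real read() at fill level 0 would block until the matching write() lands):

```c
#include <assert.h>

typedef struct {
  int counter;       /* control_pending, mutex-protected in real code */
  int socket_bytes;  /* fill level of the socketpair() */
} Ctl;

/* write() whenever the counter was <= 0 before incrementing */
static void raise_wakeup_sketch (Ctl *c)
{
  if (c->counter++ <= 0)
    c->socket_bytes++;   /* write() one byte */
}

/* read() whenever the counter goes to <= 0 after decrementing */
static void release_wakeup_sketch (Ctl *c)
{
  if (--c->counter <= 0)
    c->socket_bytes--;   /* read() one byte (blocking in the real code) */
}
```

With this pairing, the socket holds exactly one byte whenever the counter is positive and none otherwise, even across release-before-raise orderings (where the read would simply block until the write arrives).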
Comment 41 Sebastian Dröge (slomo) 2016-06-27 09:17:12 UTC
For the GMainContext related problems, it could make sense to implement it all like how you would implement an external GstBus handler: install a sync_handler, call g_main_context_wakeup() from there, consider the GSource ready if gst_bus_peek() returns non-NULL, and get around all the fd mess there.

That doesn't solve the problem though, as we still need to keep GstPoll behaviour consistent across platforms.
Comment 42 Sebastian Dröge (slomo) 2016-06-27 12:50:50 UTC
After some further discussions on IRC, the solution here for the time being is likely going to be to move GstBus (to simplify the code and not have it work around GstPoll weirdness) and GstBufferPool (to actually fix the code) away from GstPoll, and use mutexes and condition variables instead. The advantage of using lockless data structures here is basically non-existent, and it seems better to have it actually working reliably and correctly before optimizing. At a later point, someone could consider optimizing this again if the need arises.

For GstPoll itself it probably makes sense to go with my patches for the time being as they seem safe without changing the behaviour of the API.


The usage of GstPoll in GstSystemClock seems safe though, so we can keep that one. It also has an advantage over mutexes and condition variables there because of nanosecond vs. microsecond precision (pthread condition variables do support nanoseconds, but GLib truncates to microseconds... we could optionally use pthread directly here, though).
Comment 43 Matt Gruenke 2016-06-27 15:18:33 UTC
Created attachment 330446 [details] [review]
Addresses race condition on Windows, possible with patch 330346.

This contains separate fixes for Windows and Linux.

For Windows, it's necessary to fix the race condition between testing set->control_pending and performing WAKE_EVENT()/RELEASE_EVENT().  This is accomplished by acquiring and holding set->lock, for both of these operations.  We could optimize the Linux version by making this Windows-specific.

On Linux, the socketpair() used for control should not be made O_NONBLOCK.  If there's any propagation delay between set->control_write_fd.fd and set->control_read_fd.fd, even the above mutex will not be sufficient to prevent a race condition.  There's no benefit to using O_NONBLOCK, here.  Only liabilities.
Comment 44 Matt Gruenke 2016-06-27 15:22:45 UTC
(In reply to Sebastian Dröge (slomo) from comment #42)

> For GstPoll itself it probably makes sense to go with my patches for the
> time being as they seem safe without changing the behaviour of the API.

As I stated before, I think your patches don't fix the race condition, here.  Please consider attachment 330446 [details] [review].

If you take it, then I think GstBus can safely use GstPoll, for the time being.
Comment 45 Sebastian Dröge (slomo) 2016-06-27 16:04:24 UTC
One thing that this still does not solve without e.g. my GstBus patch is explained above: control_pending==0, release_wakeup(), control_pending goes to -1, we don't read/block but instead return TRUE as if reading was successful. And the next raise_wakeup() would also do nothing, as control_pending==-1. For consistent behaviour, though, release_wakeup() would have had to block, as it does otherwise.

If you always write() when control_pending<=0 before incrementing it, and always read() when control_pending<=0 after decrementing, this should be solved for the case of blocking sockets. For Windows this would still not work.


(In reply to Matt Gruenke from comment #32)

> Just as without the patch:
> 
> 0. set->control_pending is initially 0.
> 1. Writer increments set->control_pending
> 2. Reader decrements set->control_pending
> 3. Reader gets the mutex, finds that set->control_pending_locked is
>    still 0, and skips the read.
> 4. Writer gets the mutex, increments set->control_pending_locked, and
>    performs the write.

Not exactly:

3. Reader gets the mutex, --control_pending_locked == -1 (!) so reading is skipped
4. Writer gets the mutex, control_pending_locked++ == -1 (!) so writing is skipped

It's however exactly the wrong case I mentioned above for your patch.


Note however that this situation can't happen together with my GstBus patch in both cases, or in your patch (that is, the combination of them... please attach complete patches in the future) if you also check for negative numbers.

> > Nobody noticed that GstBus was broken like this for more than 7 years (if my
> > GIT archaeology was correct), and even now that we know the problem it only
> > happens for a few people. I don't think "it works for me" is a good argument
> > in this case :)
> 
> I don't know when it was introduced, but we didn't encounter the bug in
> gstreamer-0.10.25, which we’d been using until last year.  When we upgraded
> to 1.2.4, we noticed it quite easily.

In 0.10, GstBus did not use the lockless stuff.


If I'm not mistaken, the only thing that your patch brings additionally over mine is that more mutexes are taken, and that the explicit gst_poll_wait() (which can have a timeout) is replaced by an implicit blocking (which can't) in read_control(). I don't like either of those, btw, which is why I propose to get away from the poll stuff here (which does not really bring us anything anyway once we add the mutex).


What I'm mostly worried about here is that we end up in a situation where we actually block without any way of interruption, which previously was just an error and was reported as such (and ignored by GstBus).
Comment 46 Sebastian Dröge (slomo) 2016-06-27 16:34:28 UTC
Note that by considering negative values we actually implement a real semaphore here, one that can be polled by gst_poll_wait() in an asynchronous way (while also considering other sockets at the same time).

If we make the socket blocking (and INFINITE wait on Windows), we have a blocking semaphore.

If we make the socket non-blocking (or 0-wait on Windows, which we currently don't do) then we have a non-blocking semaphore, and errno tells us if there was just no data (EWOULDBLOCK) or some other error. In case of EWOULDBLOCK we could then decide to retry, or use gst_poll_wait() again (maybe together with other sockets).


The only thing that is important is that the counter and the socket/event have a consistent state, which we can AFAICS only get by using a mutex around them (for the Windows case, the POSIX case should also get away with just blocking sockets).

Note also that making it blocking breaks the GstPoll API: gst_poll_read_control() is supposed to return FALSE when no bytes were there to be read (which it currently does not always do because of bugs, see above). As such, we should only really consider making it blocking if there is really no other way around it.
Comment 47 Matt Gruenke 2016-06-28 02:19:26 UTC
Created attachment 330467 [details] [review]
Addresses race conditions on both platforms and fixes gst_poll_read_control() when control_pending == 0.

Updates attachment 330446 [details] [review], by changing release_wakeup() to return TRUE and decrement control_pending only when > 0.  Furthermore, RELEASE_EVENT() is called only when control_pending == 1.

Note: this is effectively non-blocking, as it will only attempt to read from control_read_fd.fd after a byte has been written to control_write_fd.fd.
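The shape of that change can be sketched with a pthread mutex (hypothetical names; the real patch uses set->lock and the WAKE_EVENT()/RELEASE_EVENT() macros, with socket_bytes here standing in for the socketpair() fill level):

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int control_pending = 0;
static int socket_bytes = 0;  /* fill level of the socketpair() */

/* write one byte only on the 0 -> 1 transition, with the lock held
 * across both the counter test and the write */
static int raise_wakeup_sketch (void)
{
  pthread_mutex_lock (&lock);
  if (control_pending == 0)
    socket_bytes++;           /* WAKE_EVENT(): blocking write() */
  control_pending++;
  pthread_mutex_unlock (&lock);
  return 1;
}

/* decrement only when > 0; read only when this was the last pending */
static int release_wakeup_sketch (void)
{
  int result = 0;
  pthread_mutex_lock (&lock);
  if (control_pending > 0) {
    if (--control_pending == 0)
      socket_bytes--;         /* RELEASE_EVENT(): blocking read() */
    result = 1;
  }
  pthread_mutex_unlock (&lock);
  return result;
}
```

Because the counter and the I/O are updated under the same lock, the socket holds exactly one byte while control_pending > 0 and none otherwise, and a release with nothing pending returns FALSE instead of going negative.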
Comment 48 Matt Gruenke 2016-06-28 02:25:31 UTC
(In reply to Sebastian Dröge (slomo) from comment #46)

> Note also that making it blocking breaks the GstPoll API:
> gst_poll_read_control() is supposed to return FALSE when no bytes were there
> to be read (which it currently also does not always do because bugs, see
> above). As such, we should only really consider making it blocking if there
> is really really no other way around it.

I think attachment 330467 [details] [review] fixes that.

Please understand that, in spite of removing O_NONBLOCK, the intention is still that RELEASE_EVENT() shouldn't block for any appreciable amount of time.  We only avoid O_NONBLOCK to ensure the read() succeeds.
Comment 49 Sebastian Dröge (slomo) 2016-06-28 07:08:40 UTC
(In reply to Matt Gruenke from comment #48)

> Please understand that, in spite of removing O_NONBLOCK, the intention is
> still that RELEASE_EVENT() shouldn't block for any appreciable amount of
> time.  We only avoid O_NONBLOCK to ensure the read() succeeds.

The thing is that the API specifies explicitly that it should not succeed if there is no data to be read, as such we must not block.
Comment 50 Matt Gruenke 2016-06-28 07:11:23 UTC
(In reply to Sebastian Dröge (slomo) from comment #49)
> (In reply to Matt Gruenke from comment #48)
> 
> > Please understand that, in spite of removing O_NONBLOCK, the intention is
> > still that RELEASE_EVENT() shouldn't block for any appreciable amount of
> > time.  We only avoid O_NONBLOCK to ensure the read() succeeds.
> 
> The thing is that the API specifies explicitly that it should not succeed if
> there is no data to be read, as such we must not block.

Yes, I get it.  I think my latest patch is consistent with that objective.  If you believe otherwise, please indicate why.
Comment 51 Sebastian Dröge (slomo) 2016-06-28 07:17:32 UTC
It still removes the O_NONBLOCK, but the way it is written now it indeed can't run into the read() without a previous write() first. So I guess it's fine now; it's just that behaviour is still slightly different on Windows, where no waiting would happen in any case.

Note that we don't really need the atomic integer operations anymore.
Comment 52 Sebastian Dröge (slomo) 2016-06-28 07:20:52 UTC
Comment on attachment 330467 [details] [review]
Addresses race conditions on both platforms and fixes gst_poll_read_control() when control_pending == 0.

Can you attach a *complete* patch (in git format-patch format) that a) also blocks the same on Windows and b) uses normal integer operations?
Comment 53 Matt Gruenke 2016-06-28 07:58:04 UTC
(In reply to Sebastian Dröge (slomo) from comment #52)
> Comment on attachment 330467 [details] [review] [review]
> Addresses race conditions on both platforms and fixes
> gst_poll_read_control() when control_pending == 0.

Thank you.


> Can you attach a *complete* patch (in git format-patch format) that a) also
> blocks the same on Windows and b) uses normal integer operations?

I'm happy to do it, but I'd just point out that the Windows implementation doesn't have the same potential for delay between when an event is set and when it can be reset.

That said, I see the maintainability benefits of making RELEASE_EVENT() behave consistently.  So, I'll do this, unless you decide otherwise.
Comment 54 Sebastian Dröge (slomo) 2016-06-28 08:12:34 UTC
Yes, please do. Thanks!
Comment 55 Matt Gruenke 2016-06-28 19:43:27 UTC
Created attachment 330504 [details] [review]
Fix for Bug 750397 (Race condition in GstBus).

This addresses slightly different race conditions on Linux and Windows, and
fixes gst_poll_read_control() when control_pending == 0.

On Linux, the socketpair() used for control should not be made O_NONBLOCK.
If there's any propagation delay between set->control_write_fd.fd and
set->control_read_fd.fd, even the mutex now held will not be sufficient to
prevent a race condition.  There's no benefit to using O_NONBLOCK, here.
Only liabilities.

For Windows, it's necessary to fix the race condition between testing
set->control_pending and performing WAKE_EVENT()/RELEASE_EVENT().  This is
accomplished by acquiring and holding set->lock, for both of these operations.
We could optimize the Linux version by making this Windows-specific.

For consistency with the Linux implementation, Windows' RELEASE_EVENT()
has also been made to block, although it should never happen.

Also, changed release_wakeup() to return TRUE and decrement control_pending
only when > 0.  Furthermore, RELEASE_EVENT() is called only when
control_pending == 1.

Finally, changed control_pending to use normal, non-atomic arithmetic
operations, since it's now protected by set->lock.

Note: even though the underlying signaling mechanisms are blocking,
release_wakeup() is effectively non-blocking, as it will only attempt to read
from control_read_fd.fd after a byte has been written to control_write_fd.fd or WaitForSingleObject() after it's been signaled.
Comment 56 Sebastian Dröge (slomo) 2016-06-28 20:56:18 UTC
Review of attachment 330504 [details] [review]:

Looks good to me except for minor things

::: gst/gstpoll.c
@@ +172,3 @@
 #else
 #define WAKE_EVENT(s)       (SetLastError (0), SetEvent ((s)->wakeup_event), errno = GetLastError () == NO_ERROR ? 0 : EACCES, errno == 0 ? 1 : 0)
+#define RELEASE_EVENT(s)    (WaitForSingleObject ((s)->wakeup_event, INFINITE), ResetEvent ((s)->wakeup_event))

You probably want to check for errors here too, and set errno like in WAKE_EVENT

@@ +206,3 @@
+  if (set->control_pending > 0) {
+    /* release, only if this was the last pending. */
+    if (set->control_pending-- == 1) {

--set->control_pending == 0 seems more clear (in symmetry with set->control_pending++ == 0 above)

@@ +211,3 @@
+    }
+    else
+      result = TRUE;

Curly braces here while we're at it :)

@@ +230,3 @@
+    GST_LOG ("%p: releasing %d", set, old);
+    if (!RELEASE_EVENT (set))
+      GST_WARNING ("%p: failed to release event", set);

Before this was a loop to call RELEASE_EVENT() until it succeeded
Comment 57 Sebastian Dröge (slomo) 2016-06-28 21:00:22 UTC
Review of attachment 330504 [details] [review]:

::: gst/gstpoll.c
@@ +206,1 @@
+  if (set->control_pending > 0) {

There is also an asymmetry here now: release_wakeup() might not change the value, raise_wakeup() will always change the value. If I'm not missing anything, this will cause release_wakeup() followed by raise_wakeup() to put one additional byte into the socket that is never going to be read.

I think in release_wakeup() you always have to decrement for symmetry between the two. Basically:

if (--set->control_pending == 0) {
  result = RELEASE_EVENT (set);
} else if (set->control_pending < 0) {
  result = FALSE;
} else {
  result = TRUE;
}
Comment 58 Sebastian Dröge (slomo) 2016-06-28 21:03:47 UTC
Actually ignore the last comment: reading that fails should have no effect (which also means that you need to ++ again if RELEASE_EVENT() failed) and must be retried.
Comment 59 Matt Gruenke 2016-06-28 21:04:43 UTC
(In reply to Sebastian Dröge (slomo) from comment #56)
> Review of attachment 330504 [details] [review] [review]:
> 
> Looks good to me except for minor things

> @@ +206,3 @@
> +  if (set->control_pending > 0) {
> +    /* release, only if this was the last pending. */
> +    if (set->control_pending-- == 1) {
> 
> --set->control_pending == 0 seems more clear (in symmetry with set->control_pending++ == 0 above)

I was going for a 1:1 substitution of the atomic operations, but I can change it.


> @@ +211,3 @@
> +    }
> +    else
> +      result = TRUE;
> 
> Curly braces here while we're at it :)

I was trying to be consistent with what I saw elsewhere in the file.  Will fix.


> @@ +230,3 @@
> +    GST_LOG ("%p: releasing %d", set, old);
> +    if (!RELEASE_EVENT (set))
> +      GST_WARNING ("%p: failed to release event", set);
> 
> Before this was a loop to call RELEASE_EVENT() until it succeeded

Here's my only real objection.  There's no reason to expect it'll fail, unless someone passes in a bad GstPoll.  And if it does fail, there's no reason to think that looping will result in success.  So, I'd rather try once and warn, than introduce a hang.

If you prefer, I can emit a critical warning.
Comment 60 Sebastian Dröge (slomo) 2016-06-28 21:34:41 UTC
(In reply to Matt Gruenke from comment #59)

> If you prefer, I can emit a critical warning.

Yes, that sounds indeed better.


So the only other problem then is that, if either RELEASE_EVENT() or WAKE_EVENT() fails, the counter should have the same value as before. Otherwise the counter and the fill level of the socket will diverge again.
Comment 61 Sebastian Dröge (slomo) 2016-06-28 21:44:09 UTC
RELEASE_EVENT() can actually "fail" with EAGAIN or EINTR and should be retried then instead of causing a critical warning. So it might be better to handle those two accordingly, and only do a critical warning in other cases (and in any case keep the counter consistent!).
Comment 62 Sebastian Dröge (slomo) 2016-06-29 04:22:29 UTC
So what I think has to happen here is that:
1) If FALSE is returned in release because the counter is already 0, errno should be set to EWOULDBLOCK
2) If RELEASE_EVENT() or WAKE_EVENT() returns FALSE and EAGAIN/EINTR, or RELEASE_EVENT() reads 0 bytes, or WAKE_EVENT() writes 0 bytes, the counter should either be set to the previous value or we need to loop. Both seems acceptable.
3) If RELEASE_EVENT() or WAKE_EVENT() fail otherwise, the counter should be set to its previous value, FALSE should be returned (and errno contain the error). This should probably also cause a g_warning().
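Rules 1-3 can be sketched for the release side (hypothetical names, not the actual gstpoll.c code; release_event_stub() stands in for RELEASE_EVENT() and reports failure via errno):

```c
#include <errno.h>

static int control_pending = 1;  /* pretend one wakeup is pending */
static int stub_failures = 0;    /* how many transient failures to fake */

/* stands in for RELEASE_EVENT(); fakes EINTR a few times, then succeeds */
static int release_event_stub (void)
{
  if (stub_failures > 0) {
    stub_failures--;
    errno = EINTR;
    return 0;
  }
  return 1;
}

static int release_wakeup_sketch (void)
{
  if (control_pending > 0) {
    if (control_pending == 1) {
      /* rule 2: loop on EAGAIN/EINTR; rule 3: give up on anything
       * else, leaving the counter at its previous value */
      while (!release_event_stub ()) {
        if (errno != EAGAIN && errno != EINTR)
          return 0;             /* counter untouched, errno set */
      }
    }
    control_pending--;          /* decrement only once the read succeeded */
    return 1;
  }
  errno = EWOULDBLOCK;          /* rule 1: nothing pending */
  return 0;
}
```

The key invariant is that control_pending only changes after the I/O succeeded, so a failed read or write can never make the counter and the socket fill level diverge.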
Comment 63 Matt Gruenke 2016-06-29 05:35:37 UTC
Created attachment 330519 [details] [review]
Fix for Bug 750397 (Race condition in GstBus).

Updates previous patch with requested changes.
Comment 64 Matt Gruenke 2016-06-29 05:41:30 UTC
(In reply to Sebastian Dröge (slomo) from comment #61)
> RELEASE_EVENT() can actually "fail" with EAGAIN or EINTR and should be
> retried then instead of causing a critical warning. So it might be better to
> handle those two accordingly, and only do a critical warning in other cases
> (and in any case keep the counter consistent!).

According to the manpage, Linux won't fail with EAGAIN unless O_NONBLOCK was set.  Regardless, I've handled this as requested (no harm in it, really, since EINTR should be handled by retrying).  I can imagine EAGAIN, or read() returning 0, occurring on some POSIX-like system, so it seems reasonable to handle those cases.
Comment 65 Matt Gruenke 2016-06-29 05:46:55 UTC
Created attachment 330521 [details] [review]
Simple diff of the changes in attachment 330519 [details] [review], for ease of reviewing.

Since the diff viewer shows my 3 commits separately, this should be easier to review.
Comment 66 Sebastian Dröge (slomo) 2016-06-29 06:09:20 UTC
Review of attachment 330521 [details] [review]:

On Windows, you need to set errno (and reset it before), like the old code did. We need it to distinguish between EWOULDBLOCK and disaster. This should be done for all 3 operations: set, reset and wait.

Otherwise looks perfect. Thanks for your work :)
Comment 67 Sebastian Dröge (slomo) 2016-06-29 13:09:25 UTC
Created attachment 330572 [details] [review]
poll: Warn if the return value of gst_poll_read_control() is unused

This might fail even under correct usage, e.g. if read_control() is called
from one thread before write_control() has finished in another. It has to be
retried then, or other measures have to be taken, depending on how it is used
by the surrounding code.
Comment 68 Sebastian Dröge (slomo) 2016-06-29 13:09:56 UTC
Created attachment 330574 [details] [review]
poll: set_controllable(), restart() and set_flushing() are only valid for non-timer GstPolls

On timer GstPolls it will cause the control socket state to become
inconsistent, as one fewer read_control() than write_control() would be
needed.

Similarly, read_control() and write_control() are only valid on timer
GstPolls.
Comment 69 Sebastian Dröge (slomo) 2016-06-29 13:10:27 UTC
Created attachment 330576 [details] [review]
poll: Clarify when FALSE is returned from read/write_control()

And also mention what the expected values of errno are going to be.

write_control() will only ever return FALSE if there was a critical error. It
will never return because of EINTR, EAGAIN or EWOULDBLOCK.

read_control() will return FALSE if there was no byte to read, in which case
errno would be EWOULDBLOCK.
In all other cases there was a critical error.
Comment 70 Sebastian Dröge (slomo) 2016-06-29 13:14:06 UTC
Created attachment 330577 [details] [review]
systemclock: Improve GstPoll handling and don't check for impossible errno values

Also, just read/write the control every time; GstPoll itself is already
optimized to only do I/O when switching between empty and one byte.
Comment 71 Sebastian Dröge (slomo) 2016-06-29 13:14:54 UTC
Created attachment 330578 [details] [review]
bus: Make sure to always read the control after popping a message

It might happen that we popped the message before the write of the control
happened. In this case we just have to retry again a bit later; failing to do
so will leave an additional byte in the control and cause the GSource /
gst_poll_wait() to always wake up again immediately.
Comment 72 Sebastian Dröge (slomo) 2016-06-29 15:43:46 UTC
All these other patches are on top of yours, btw; once yours is updated to set errno on Windows, we can merge it all :) And then hopefully get rid of the mutex once it's all working, at least for Linux. Somehow.
Comment 73 Sebastian Dröge (slomo) 2016-06-29 17:02:48 UTC
Created attachment 330603 [details] [review]
poll: Fix various race conditions with read_control() and write_control()

This addresses slightly different race conditions on Linux and Windows, and
fixes gst_poll_read_control() when control_pending == 0.

On Linux, the socketpair() used for control should not be made O_NONBLOCK.
If there's any propagation delay between set->control_write_fd.fd and
set->control_read_fd.fd, even the mutex now held will not be sufficient to
prevent a race condition.  There's no benefit to using O_NONBLOCK, here.
Only liabilities.

For Windows, it's necessary to fix the race condition between testing
set->control_pending and performing WAKE_EVENT()/RELEASE_EVENT().  This is
accomplished by acquiring and holding set->lock, for both of these operations.
We could optimize the Linux version by making this Windows-specific.

For consistency with the Linux implementation, Windows' RELEASE_EVENT()
has also been made to block, although it should never happen.

Also, changed release_wakeup() to return TRUE and decrement control_pending
only when > 0.  Furthermore, RELEASE_EVENT() is called only when
control_pending == 1.

Finally, changed control_pending to use normal, non-atomic arithmetic
operations, since it's now protected by set->lock.

Note: even though the underlying signaling mechanisms are blocking,
release_wakeup() is effectively non-blocking, as it will only attempt to read
from control_read_fd.fd after a byte has been written to control_write_fd.fd
or WaitForSingleObject() after it's been signaled.
Comment 74 Sebastian Dröge (slomo) 2016-06-29 17:03:56 UTC
Comment on attachment 330521 [details] [review]
Simple diff of the changes in attachment 330519 [details] [review], for ease of reviewing.

I've updated your patch now to set EBADF as errno if something goes wrong on Windows. That's probably the closest that makes sense (write() and read() can both return that, unlike the previous EACCES).
Comment 75 Matt Gruenke 2016-06-29 17:07:18 UTC
(In reply to Sebastian Dröge (slomo) from comment #74)
> I've updated your patch now to set EBADF as errno if something goes wrong on
> Windows. That's probably the closest that makes sense (write() and read()
> can both return that, unlike the previous EACCES).

Okay, so do you need anything more from me?


(In reply to Sebastian Dröge (slomo) from comment #72)

> And then hopefully get rid of the mutex once it's all
> working, at least for Linux. Somehow.

Well, the non-atomic conversion and conditional update of control_pending will impede that effort.  I think it's possible to revert it back to being atomic, but I haven't been paying attention to whether we'd get burned by anything else.

At this point, I think it's safer to leave the mutex in there.  It should be cheap, especially by comparison with calls to select() and when we actually read() and write().
Comment 76 Sebastian Dröge (slomo) 2016-06-29 18:00:05 UTC
(In reply to Matt Gruenke from comment #75)
> (In reply to Sebastian Dröge (slomo) from comment #74)
> > I've updated your patch now to set EBADF as errno if something goes wrong on
> > Windows. That's probably the closest that makes sense (write() and read()
> > can both return that, unlike the previous EACCES).
> 
> Okay, so do you need anything more from me?

No, thanks. I didn't want you to waste time on such trivial things ;)

> (In reply to Sebastian Dröge (slomo) from comment #72)
> 
> > And then hopefully get rid of the mutex once it's all
> > working, at least for Linux. Somehow.
> 
> Well, the non-atomic conversion and conditional update of control_pending
> will impede that effort.  I think it's possible to revert it back to being
> atomic, but I haven't been paying attention to whether we'd get burned by
> anything else.
> 
> At this point, I think it's safer to leave the mutex in there.  It should be
> cheap, especially by comparison with calls to select() and when we actually
> read() and write().

Yes, probably. I leave that as an exercise for someone else when it becomes a problem / important enough. It will have to keep the API in any case, or become a new API.
Comment 77 Sebastian Dröge (slomo) 2016-06-29 20:49:07 UTC
Attachment 330572 [details] pushed as 0bfc9fb - poll: Warn if the return value of gst_poll_read_control() is unused
Attachment 330574 [details] pushed as 254955d - poll: set_controllable(), restart() and set_flushing() are only valid for non-timer GstPolls
Attachment 330576 [details] pushed as cda3f12 - poll: Clarify when FALSE is returned from read/write_control()
Attachment 330577 [details] pushed as 149127b - systemclock: Improve GstPoll handling and don't check for impossible errno values
Attachment 330578 [details] pushed as 4ff2721 - bus: Make sure to always read the control after popping a message
Attachment 330603 [details] pushed as cd06aea - poll: Fix various race conditions with read_control() and write_control()
Comment 78 Xavier Claessens 2016-10-12 20:28:53 UTC
Are those patches backported to 1.8? I think I'm hitting this bug as well.

My app runs on Windows 7; it creates a thread with a GMainContext and attaches the source returned by gst_bus_create_watch() to it, with gst_bus_async_signal_func as the callback. Then I connect to the "message::state-changed" signal.

When doing a series of "play->pause->seek->play" in a loop, I always end up in a state where I don't receive the "state-changed" PAUSED->PLAYING signal. This leaves my app stuck in an inconsistent state where it thinks the seek never finishes. (That code is similar to GstPlayer.)

To be sure the problem is not on my side, I even added a 1s timeout source on the context to verify that the thread polling the bus message queue is not stuck. That source always keeps firing normally.

Using gst_bus_set_sync_handler() I do see the PAUSED->PLAYING "state-changed" message, so I guess I'm hitting the race described here. Is that possible?
Comment 79 Sebastian Dröge (slomo) 2016-10-13 08:18:40 UTC
They are not backported as they are rather non-trivial.
Comment 80 Xavier Claessens 2016-10-13 16:23:04 UTC
Ok, I backported them into my project's branch, and it seems to work well.
Comment 81 James Stevenson 2016-10-13 17:10:19 UTC
An alternative workaround for this would be to use gst_bus_set_sync_handler(). This sidesteps the race conditions entirely: you roll your own bus that you can sleep on, using a queue such as GAsyncQueue.

This makes it work across multiple GStreamer versions (e.g. 1.2 to current) without having to rebuild the GStreamer libraries.
Comment 82 Nicolas Dufresne (ndufresne) 2016-10-13 20:10:50 UTC
We know this very well; similarly, you can use a g_idle source if you have a main loop. It remains ugly.