Bug 734412 - multiqueue: The buffering logic can lead to a pipeline stuck in PAUSED forever
Status: RESOLVED FIXED
Product: GStreamer
Classification: Platform
Component: gstreamer (core)
Version: unspecified
Hardware/OS: Other / All
Importance: Normal / normal
Target Milestone: 1.4.2
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks: 734445
 
 
Reported: 2014-08-07 10:44 UTC by Thibault Saunier
Modified: 2014-11-25 18:16 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
multiqueue: Not post BUFFERING message if one of the singlequeue is full (3.82 KB, patch)
2014-08-07 10:44 UTC, Thibault Saunier (reviewed)
Real pipeline dump where the issue happens (153.46 KB, image/svg+xml)
2014-08-07 10:45 UTC, Thibault Saunier
multiqueue: Not post BUFFERING message if one of the singlequeue doesn't need it (5.08 KB, patch)
2014-08-11 11:14 UTC, Thibault Saunier (needs-work)
multiqueue: Not post BUFFERING message if one of the singlequeue doesn't need it (5.08 KB, patch)
2014-08-11 11:59 UTC, Thibault Saunier (needs-work)
/me facepalm (5.13 KB, patch)
2014-08-11 12:34 UTC, Thibault Saunier (committed)

Description Thibault Saunier 2014-08-07 10:44:39 UTC
Basically what happens is that, in the pipeline represented in the attached file, the audio single queue of multiqueue1 empties while the video one is full, and tsdemux is blocked trying to push on the full one. As the audio queue is emptied, multiqueue posts a buffering message on the bus, which leads to the application setting the pipeline state to PAUSED. But as one of the multiqueue's single queues is full, nothing happens anymore.

This happens in fast backward playback with HLS; playing backward can introduce a lot of latency for video, which explains why the audio single queue empties so fast compared with the video track.
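
For context, the hang depends on the standard way applications react to buffering messages: pause while buffering is below 100%, resume at 100%. A minimal sketch of that conventional bus handler (generic GStreamer usage, not code from this bug report):

#include <gst/gst.h>

/* Conventional buffering handling on the bus: pause the pipeline while
 * buffering and go back to PLAYING once it reports 100%.  If no further
 * BUFFERING message can ever arrive, the pipeline stays PAUSED forever. */
static gboolean
bus_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
{
  GstElement *pipeline = user_data;

  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_BUFFERING) {
    gint percent;

    gst_message_parse_buffering (msg, &percent);
    gst_element_set_state (pipeline,
        percent < 100 ? GST_STATE_PAUSED : GST_STATE_PLAYING);
  }
  return TRUE;
}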
Comment 1 Thibault Saunier 2014-08-07 10:44:42 UTC
Created attachment 282767 [details] [review]
multiqueue: Not post BUFFERING message if one of the singlequeue is full

Imagine the following 'pipeline'

                --------------
            p1/| 'fullqueue'  |--- 'laggy' downstream
  ---------  / |              |
-| demuxer |   | multiqueue   |
  ---------  \ |              |
            p2\| 'emptyqueue' |--- 'fast' downstream
                --------------

In the case where downstream of one single queue (fullqueue) has (a lot of)
latency (for example for reverse playback with video), we can end up having
the other SingleQueue (emptyqueue) emptied before the fullqueue gets
unblocked. In the meantime, the demuxer tries to push on fullqueue and
blocks there.

In that case the current code will post a BUFFERING message on the bus when
emptyqueue gets emptied, which leads to the application setting the pipeline
state to PAUSED. So now we end up in a situation where 'laggy downstream' is
prerolled and will not unblock anymore because the pipeline is set to
PAUSED, the fullqueue does not have a chance to be emptied, and
the emptyqueue cannot get filled anymore, so no more BUFFERING messages
will be posted and the pipeline is stuck in PAUSED forever.

Making sure that we do not try to "buffer" if one of the single queues is
full prevents this situation from happening while still leaving the
opportunity for buffering in all other cases.
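
For illustration, a minimal self-contained sketch of the check this first patch adds (the types and names below are simplified stand-ins, not the actual gstmultiqueue.c internals):

#include <glib.h>

typedef struct {
  gboolean is_full;             /* singlequeue reached its size limit */
} SingleQueue;

/* Only start buffering when no sibling queue is full: a full sibling
 * means the demuxer is blocked pushing into it, so buffering could
 * never reach 100% and the pipeline would stay PAUSED. */
static gboolean
should_post_buffering (SingleQueue * queues, guint n_queues)
{
  guint i;

  for (i = 0; i < n_queues; i++)
    if (queues[i].is_full)
      return FALSE;
  return TRUE;
}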
Comment 2 Thibault Saunier 2014-08-07 10:45:46 UTC
Created attachment 282768 [details]
Real pipeline dump where the issue happens
Comment 3 Sebastian Dröge (slomo) 2014-08-11 07:55:21 UTC
Review of attachment 282767 [details] [review]:

Or maybe if one of the queues is full and we would post a buffering message, we should instead allow for the full queue to grow until buffering is done?

Otherwise your patch looks good, just not sure if that's the correct solution as it basically disables buffering completely for such pipelines.

::: plugins/elements/gstmultiqueue.c
@@ +828,3 @@
     gst_single_queue_flush_queue (sq, full);
+
+    GST_MULTI_QUEUE_MUTEX_LOCK (mq);

Why do you move the lock?
Comment 4 Thibault Saunier 2014-08-11 08:06:45 UTC
Review of attachment 282767 [details] [review]:

> Otherwise your patch looks good, just not sure if that's the correct solution as it basically disables buffering completely for such pipelines.

It does not disable buffering completely; it just will not buffer in a quite rare case where one single queue is emptied while the other is full. It will most probably buffer once that full single queue gets a little bit emptied.

::: plugins/elements/gstmultiqueue.c
@@ +828,3 @@
     gst_single_queue_flush_queue (sq, full);
+
+    GST_MULTI_QUEUE_MUTEX_LOCK (mq);

Because gst_single_queue_flush_queue does not require the lock to be taken, and that function now takes the lock right before calling update_buffering(), taking the lock before calling the method here would deadlock.
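
In other words, the mutex behind GST_MULTI_QUEUE_MUTEX_LOCK is not recursive, so the caller must not hold it across a callee that takes it again. A simplified illustration of the ordering (hypothetical names, not the real multiqueue code):

#include <glib.h>

static GMutex qlock;            /* stand-in for the multiqueue lock */

static void
update_buffering_locked_inside (void)
{
  g_mutex_lock (&qlock);        /* the callee now takes the lock itself */
  /* ... recompute and maybe post the buffering state ... */
  g_mutex_unlock (&qlock);
}

static void
flush_path (void)
{
  /* Taking qlock here, before the call, would self-deadlock because
   * GMutex is not recursive.  Hence the lock is moved after the call. */
  update_buffering_locked_inside ();

  g_mutex_lock (&qlock);
  /* ... the part of the flush path that really needs the lock ... */
  g_mutex_unlock (&qlock);
}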
Comment 5 Sebastian Dröge (slomo) 2014-08-11 08:19:51 UTC
(In reply to comment #4)
> Review of attachment 282767 [details] [review]:
> 
> > Otherwise your patch looks good, just not sure if that's the correct solution as it basically disables buffering completely for such pipelines.
> 
> It does not disable buffering completely; it just will not buffer in a quite
> rare case where one single queue is emptied while the other is full. It
> will most probably buffer once that full single queue gets a little bit
> emptied.

Ok, so we would start buffering when that single queue gets a bit emptied... but is something going to send buffering=100% once it's full again on probably the next buffer? And does it even make sense to buffer for such a short amount of time then? You would always jump between buffering and no-buffering.
Comment 6 Thibault Saunier 2014-08-11 08:25:06 UTC
(In reply to comment #5)
> (In reply to comment #4)
> > Review of attachment 282767 [details] [review] [details]:
> > 
> > > Otherwise your patch looks good, just not sure if that's the correct solution as it basically disables buffering completely for such pipelines.
> > 
> > It does not disable buffering completely; it just will not buffer in a quite
> > rare case where one single queue is emptied while the other is full. It
> > will most probably buffer once that full single queue gets a little bit
> > emptied.
> 
> Ok, so we would start buffering when that single queue gets a bit emptied...
> but is something going to send buffering=100% once it's full again on probably
> the next buffer? And does it even make sense to buffer for such a short amount
> of time then? You would always jump between buffering and no-buffering.

It will send buffering=100% when the most emptied queue is ready.
Comment 7 Thibault Saunier 2014-08-11 11:14:45 UTC
Created attachment 283076 [details] [review]
multiqueue: Not post BUFFERING message if one of the singlequeue doesn't need it

Imagine the following 'pipeline'

                --------------
            p1/| 'fullqueue'  |--- 'laggy' downstream
  ---------  / |              |
-| demuxer |   | multiqueue   |
  ---------  \ |              |
            p2\| 'emptyqueue' |--- 'fast' downstream
                --------------

In the case where downstream of one single queue (fullqueue) has (a lot of)
latency (for example for reverse playback with video), we can end up having
the other SingleQueue (emptyqueue) emptied before the fullqueue gets
unblocked. In the meantime, the demuxer tries to push on fullqueue and
blocks there.

In that case the current code will post a BUFFERING message on the bus when
emptyqueue gets emptied, which leads to the application setting the pipeline
state to PAUSED. So now we end up in a situation where 'laggy downstream' is
prerolled and will not unblock anymore because the pipeline is set to
PAUSED, the fullqueue does not have a chance to be emptied, and
the emptyqueue cannot get filled anymore, so no more BUFFERING messages
will be posted and the pipeline is stuck in PAUSED forever.

Making sure that we do not try to "buffer" if one of the single queues
does not need buffering prevents this situation from happening while still
leaving the opportunity for buffering in all other cases.

This implements new logic where all singlequeues must need buffering for
the multiqueue to actually report that buffering is needed, taking the
maximum buffering level of the single queues as the reference point.
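
A self-contained sketch of that revised decision (simplified stand-in types again, not the actual update_buffering() code in gstmultiqueue.c):

#include <glib.h>

typedef struct {
  gboolean needs_buffering;     /* below its low watermark */
  gint percent;                 /* current buffering level, 0..100 */
} SingleQueue;

/* Buffering is only reported when every singlequeue needs it; the
 * reported level is the maximum across all singlequeues. */
static gboolean
multiqueue_needs_buffering (SingleQueue * queues, guint n_queues,
    gint * percent)
{
  gint max_percent = 0;
  guint i;

  for (i = 0; i < n_queues; i++) {
    if (!queues[i].needs_buffering)
      return FALSE;             /* one satisfied queue vetoes buffering */
    max_percent = MAX (max_percent, queues[i].percent);
  }
  *percent = max_percent;
  return TRUE;
}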
Comment 8 Sebastian Dröge (slomo) 2014-08-11 11:25:13 UTC
Review of attachment 283076 [details] [review]:

::: plugins/elements/gstmultiqueue.c
@@ +924,3 @@
       mq->buffering = TRUE;
       mq->percent = percent;
       post = TRUE;

Does this make sense? You set multiqueue to buffering mode...

@@ +933,3 @@
+        post = FALSE;
+
+        break;

... but don't post a buffering message. Wouldn't it be clearer to just store the maximum percentage all the time, and also only have mq->buffering==TRUE if we actually posted a message that would trigger buffering?

It seems like this here only makes the code even harder to understand, even if it probably does the right thing :)
Comment 9 Thibault Saunier 2014-08-11 11:59:41 UTC
Created attachment 283082 [details] [review]
multiqueue: Not post BUFFERING message if one of the singlequeue doesn't need it

Imagine the following 'pipeline'

                --------------
            p1/| 'fullqueue'  |--- 'laggy' downstream
  ---------  / |              |
-| demuxer |   | multiqueue   |
  ---------  \ |              |
            p2\| 'emptyqueue' |--- 'fast' downstream
                --------------

In the case where downstream of one single queue (fullqueue) has (a lot of)
latency (for example for reverse playback with video), we can end up having
the other SingleQueue (emptyqueue) emptied before the fullqueue gets
unblocked. In the meantime, the demuxer tries to push on fullqueue and
blocks there.

In that case the current code will post a BUFFERING message on the bus when
emptyqueue gets emptied, which leads to the application setting the pipeline
state to PAUSED. So now we end up in a situation where 'laggy downstream' is
prerolled and will not unblock anymore because the pipeline is set to
PAUSED, the fullqueue does not have a chance to be emptied, and
the emptyqueue cannot get filled anymore, so no more BUFFERING messages
will be posted and the pipeline is stuck in PAUSED forever.

Making sure that we do not try to "buffer" if one of the single queues
does not need buffering prevents this situation from happening while still
leaving the opportunity for buffering in all other cases.

This implements new logic where all singlequeues must need buffering for
the multiqueue to actually report that buffering is needed, taking the
maximum buffering level of the single queues as the reference point.
Comment 10 Thibault Saunier 2014-08-11 12:01:03 UTC
(In reply to comment #8)
> Review of attachment 283076 [details] [review]:
> 
> ::: plugins/elements/gstmultiqueue.c
> @@ +924,3 @@
>        mq->buffering = TRUE;
>        mq->percent = percent;
>        post = TRUE;
> 
> Does this make sense? You set multiqueue to buffering mode...
> 
> @@ +933,3 @@
> +        post = FALSE;
> +
> +        break;
> 
> ... but don't post a buffering message. Wouldn't it be clearer to just store
> the maximum percentage all the time, and also only have mq->buffering==TRUE if
> we actually posted a message that would trigger buffering?
> 
> It seems like this here only makes the code even harder to understand, even if
> it probably does the right thing :)

That last patch fixes the mistake pointed out here.
Comment 11 Sebastian Dröge (slomo) 2014-08-11 12:29:03 UTC
Review of attachment 283082 [details] [review]:

::: plugins/elements/gstmultiqueue.c
@@ +896,3 @@
+{
+  gint percent;
+  gboolean post = FALSE;

FALSE and ...

@@ +931,3 @@
+    }
+
+    if (post && percent < mq->low_percent) {

... it will never be TRUE here.
Comment 12 Thibault Saunier 2014-08-11 12:34:18 UTC
Created attachment 283090 [details] [review]
/me facepalm

That new version should finally be correct :)
Comment 13 Sebastian Dröge (slomo) 2014-08-13 15:25:50 UTC
commit 78e22645444fa3784e0bcf4e2b1d0858be3af4fb
Author: Thibault Saunier <thibault.saunier@collabora.com>
Date:   Thu Aug 7 12:18:04 2014 +0200

    multiqueue: Not post BUFFERING message if one of the singlequeue doesn't need it
    
    Imagine the following 'pipeline'
    
                    --------------
                p1/| 'fullqueue'  |--- 'laggy' downstream
      ---------  / |              |
    -| demuxer |   | multiqueue   |
      ---------  \ |              |
                p2\| 'emptyqueue' |--- 'fast' downstream
                    --------------
    
    In the case where downstream of one single queue (fullqueue) has (a lot of)
    latency (for example for reverse playback with video), we can end up having
    the other SingleQueue (emptyqueue) emptied before the fullqueue gets
    unblocked. In the meantime, the demuxer tries to push on fullqueue and
    blocks there.

    In that case the current code will post a BUFFERING message on the bus when
    emptyqueue gets emptied, which leads to the application setting the pipeline
    state to PAUSED. So now we end up in a situation where 'laggy downstream' is
    prerolled and will not unblock anymore because the pipeline is set to
    PAUSED, the fullqueue does not have a chance to be emptied, and
    the emptyqueue cannot get filled anymore, so no more BUFFERING messages
    will be posted and the pipeline is stuck in PAUSED forever.

    Making sure that we do not try to "buffer" if one of the single queues
    does not need buffering prevents this situation from happening while still
    leaving the opportunity for buffering in all other cases.

    This implements new logic where all singlequeues must need buffering for
    the multiqueue to actually report that buffering is needed, taking the
    maximum buffering level of the single queues as the reference point.
    
    https://bugzilla.gnome.org/show_bug.cgi?id=734412
Comment 14 Philippe Normand 2014-11-25 18:06:33 UTC
Sorry I'm a bit late reacting... but in WebKit we rely on BUFFERING messages to keep some of the player's internal state variables up-to-date. So this patch breaks HLS playback in WebKit unless we explicitly enable the buffering flag of playbin.

Is this the right approach or should WebKit not rely on those messages for HLS at all?
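
For reference, a sketch of how an application turns on playbin's buffering flag (assuming GST_PLAY_FLAG_BUFFERING is (1 << 8), its value in gst-plugins-base; the GstPlayFlags enum is not installed in a public header, so applications usually mirror the value):

#include <gst/gst.h>

/* Mirrors GST_PLAY_FLAG_BUFFERING from gst-plugins-base's playbin. */
#define PLAY_FLAG_BUFFERING (1 << 8)

static void
enable_playbin_buffering (GstElement * playbin)
{
  guint flags = 0;

  g_object_get (playbin, "flags", &flags, NULL);
  g_object_set (playbin, "flags", flags | PLAY_FLAG_BUFFERING, NULL);
}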
Comment 15 Sebastian Dröge (slomo) 2014-11-25 18:16:47 UTC
You should still always get buffering messages for HLS if I'm not mistaken. Unless you disable buffering yourself. And unless buffering is already 100% in the very beginning, in which case the buffering message would be just useless noise.