GNOME Bugzilla – Bug 734412
multiqueue: The buffering logic can lead to a pipeline stuck in PAUSED forever
Last modified: 2014-11-25 18:16:47 UTC
Basically what happens is that, in the pipeline represented in the attached file, the audio singlequeue of multiqueue1 empties while the video one is full, and tsdemux is blocked trying to push on the full one. As the audio singlequeue empties, multiqueue posts a buffering message on the bus, which leads to the application setting the pipeline state to PAUSED. But since one of the multiqueue singlequeues is full, nothing happens anymore. This happens in fast backward playback with HLS: playing backward can introduce a lot of latency on the video side, which explains why the audio singlequeue empties so much faster than the video one.
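For context, an application typically reacts to these messages from a bus watch roughly like the sketch below (a minimal illustration, not code from the actual application; bus_cb and pipeline are placeholder names). Pausing whenever the reported percentage is below 100 is exactly the reaction that wedges the pipeline once multiqueue can no longer post a follow-up BUFFERING message:

#include <gst/gst.h>

/* Minimal sketch of the usual application-side buffering handling:
 * pause while buffering (< 100%), resume once buffering is done. */
static gboolean
bus_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
{
  GstElement *pipeline = GST_ELEMENT (user_data);

  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_BUFFERING) {
    gint percent;

    gst_message_parse_buffering (msg, &percent);
    gst_element_set_state (pipeline,
        percent < 100 ? GST_STATE_PAUSED : GST_STATE_PLAYING);
  }
  return TRUE;                  /* keep watching the bus */
}

Installed with gst_bus_add_watch (gst_element_get_bus (pipeline), bus_cb, pipeline); once the pipeline sits in PAUSED with one singlequeue full, no further BUFFERING message arrives and nothing ever sets it back to PLAYING.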
Created attachment 282767 [details] [review]
multiqueue: Not post BUFFERING message if one of the singlequeue is full

Imagine the following 'pipeline'

               --------------
           p1 /| 'fullqueue' |--- 'laggy' downstream
  --------- /  |             |
 -| demuxer |  | multiqueue  |
  --------- \  |             |
           p2 \| 'emptyqueue'|--- 'fast' downstream
               --------------

In the case where downstream of one singlequeue (fullqueue) has (a lot of) latency (for example for reverse playback with video), we can end up with the other singlequeue (emptyqueue) emptied before fullqueue gets unblocked. In the meantime, the demuxer tries to push on fullqueue and blocks there.

In that case the current code will post a BUFFERING message on the bus when emptyqueue gets emptied, which leads to the application setting the pipeline state to PAUSED. So now we end up in a situation where 'laggy downstream' is prerolled and will not unblock anymore because the pipeline is set to PAUSED, fullqueue does not get a chance to be emptied, and emptyqueue cannot get filled anymore, so no more BUFFERING messages will be posted and the pipeline is stuck in PAUSED for eternity.

Making sure that we do not try to "buffer" if one of the singlequeues is full prevents this situation from happening, while still leaving the opportunity for buffering in all other cases.
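In code terms, the patch's idea is roughly the following (a sketch only; sq->is_full and the helper name are simplified stand-ins for the actual gstmultiqueue.c internals, not the patch as written):

/* Return TRUE if any singlequeue is full, i.e. upstream is most
 * likely blocked pushing into it. */
static gboolean
some_queue_is_full (GstMultiQueue * mq)
{
  GList *tmp;

  for (tmp = mq->queues; tmp; tmp = tmp->next) {
    GstSingleQueue *sq = (GstSingleQueue *) tmp->data;

    if (sq->is_full)
      return TRUE;
  }
  return FALSE;
}

/* In update_buffering(): skip posting BUFFERING in that case, since
 * pausing the pipeline would prevent the full queue from draining. */
  if (some_queue_is_full (mq))
    return;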
Created attachment 282768 [details] Real pipeline dump where the issue happens
Review of attachment 282767 [details] [review]:

Or maybe if one of the queues is full and we would post a buffering message, we should instead allow the full queue to grow until buffering is done? Otherwise your patch looks good, just not sure if that's the correct solution, as it basically disables buffering completely for such pipelines.

::: plugins/elements/gstmultiqueue.c
@@ +828,3 @@
   gst_single_queue_flush_queue (sq, full);
+
+  GST_MULTI_QUEUE_MUTEX_LOCK (mq);

Why do you move the lock?
Review of attachment 282767 [details] [review]:

> Otherwise your patch looks good, just not sure if that's the correct
> solution as it basically disables buffering completely for such pipelines.

It does not disable buffering completely, it will just not buffer in a quite
rare case where one singlequeue is emptied while the other is full. It will
most probably buffer once that full singlequeue gets a little bit emptied.

::: plugins/elements/gstmultiqueue.c
@@ +828,3 @@
   gst_single_queue_flush_queue (sq, full);
+
+  GST_MULTI_QUEUE_MUTEX_LOCK (mq);

Because gst_single_queue_flush_queue does not require the lock to be taken,
and now in that function we take the lock right before calling
update_buffering(), so it would deadlock to take the lock before calling the
method here.
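The deadlock mentioned here is the usual non-recursive mutex problem: the multiqueue lock is a plain GMutex, so taking it in the caller and again inside update_buffering() would block forever (recursive locking of a GMutex is undefined). A stripped-down illustration, not the actual gstmultiqueue.c code:

#include <glib.h>

static GMutex lock;

static void
update_buffering (void)
{
  g_mutex_lock (&lock);         /* second lock attempt never returns */
  /* ... recompute and maybe post buffering ... */
  g_mutex_unlock (&lock);
}

static void
flush_with_lock_held (void)
{
  g_mutex_lock (&lock);         /* caller already holds the lock ... */
  update_buffering ();          /* ... so this deadlocks */
  g_mutex_unlock (&lock);
}

Hence the patch takes GST_MULTI_QUEUE_MUTEX_LOCK (mq) only after gst_single_queue_flush_queue() has returned.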
(In reply to comment #4)
> Review of attachment 282767 [details] [review]:
>
> > Otherwise your patch looks good, just not sure if that's the correct
> > solution as it basically disables buffering completely for such pipelines.
>
> It does not disable buffering completely, it will just not buffer in a quite
> rare case where one singlequeue is emptied while the other is full. It will
> most probably buffer once that full singlequeue gets a little bit emptied.

Ok, so we would start buffering when that single queue gets a bit emptied...
but is something going to send buffering=100% once it's full again, probably
on the next buffer? And does it even make sense to buffer for such a short
amount of time then? You would always jump between buffering and no-buffering.
(In reply to comment #5)
> (In reply to comment #4)
> > Review of attachment 282767 [details] [review]:
> >
> > > Otherwise your patch looks good, just not sure if that's the correct
> > > solution as it basically disables buffering completely for such pipelines.
> >
> > It does not disable buffering completely, it will just not buffer in a quite
> > rare case where one singlequeue is emptied while the other is full. It will
> > most probably buffer once that full singlequeue gets a little bit emptied.
>
> Ok, so we would start buffering when that single queue gets a bit emptied...
> but is something going to send buffering=100% once it's full again, probably
> on the next buffer? And does it even make sense to buffer for such a short
> amount of time then? You would always jump between buffering and no-buffering.

It will send buffering=100% when the most emptied queue is ready again.
Created attachment 283076 [details] [review]
multiqueue: Not post BUFFERING message if one of the singlequeue doesn't need it

Imagine the following 'pipeline'

               --------------
           p1 /| 'fullqueue' |--- 'laggy' downstream
  --------- /  |             |
 -| demuxer |  | multiqueue  |
  --------- \  |             |
           p2 \| 'emptyqueue'|--- 'fast' downstream
               --------------

In the case where downstream of one singlequeue (fullqueue) has (a lot of) latency (for example for reverse playback with video), we can end up with the other singlequeue (emptyqueue) emptied before fullqueue gets unblocked. In the meantime, the demuxer tries to push on fullqueue and blocks there.

In that case the current code will post a BUFFERING message on the bus when emptyqueue gets emptied, which leads to the application setting the pipeline state to PAUSED. So now we end up in a situation where 'laggy downstream' is prerolled and will not unblock anymore because the pipeline is set to PAUSED, fullqueue does not get a chance to be emptied, and emptyqueue cannot get filled anymore, so no more BUFFERING messages will be posted and the pipeline is stuck in PAUSED for eternity.

Making sure that we do not try to "buffer" if one of the singlequeues does not need buffering prevents this situation from happening, while still leaving the opportunity for buffering in all other cases.

This implements a new logic where all singlequeues need to require buffering for the multiqueue to actually report that buffering is needed, taking the maximum buffering level among the singlequeues as the reference point.
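The new logic amounts to something like the following sketch (simplified: single_queue_fill_percent() is a hypothetical helper, and the field names only approximate the real gstmultiqueue.c structures):

/* Report buffering only if *every* singlequeue needs it, and use the
 * maximum fill level among them as the reported percentage. */
static gboolean
compute_buffering (GstMultiQueue * mq, gint * out_percent)
{
  GList *tmp;
  gint max_percent = 0;
  gboolean all_need_buffering = TRUE;

  for (tmp = mq->queues; tmp; tmp = tmp->next) {
    GstSingleQueue *sq = (GstSingleQueue *) tmp->data;
    gint percent = single_queue_fill_percent (sq);

    if (percent >= mq->high_percent)
      all_need_buffering = FALSE;       /* this queue is fine */
    if (percent > max_percent)
      max_percent = percent;            /* least-starved queue wins */
  }

  *out_percent = max_percent;
  return all_need_buffering;
}

Because the maximum is used, a full singlequeue keeps the reported level high, so the problematic scenario above never enters buffering in the first place.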
Review of attachment 283076 [details] [review]:

::: plugins/elements/gstmultiqueue.c
@@ +924,3 @@
       mq->buffering = TRUE;
       mq->percent = percent;
       post = TRUE;

Does this make sense? You set multiqueue to buffering mode...

@@ +933,3 @@
+        post = FALSE;
+
+        break;

... but don't post a buffering message. Wouldn't it be clearer to just store the maximum percentage all the time, and also only have mq->buffering==TRUE if we actually posted a message that would trigger buffering? It seems like this here only makes the code even harder to understand, even if it probably does the right thing :)
Created attachment 283082 [details] [review]
multiqueue: Not post BUFFERING message if one of the singlequeue doesn't need it

Imagine the following 'pipeline'

               --------------
           p1 /| 'fullqueue' |--- 'laggy' downstream
  --------- /  |             |
 -| demuxer |  | multiqueue  |
  --------- \  |             |
           p2 \| 'emptyqueue'|--- 'fast' downstream
               --------------

In the case where downstream of one singlequeue (fullqueue) has (a lot of) latency (for example for reverse playback with video), we can end up with the other singlequeue (emptyqueue) emptied before fullqueue gets unblocked. In the meantime, the demuxer tries to push on fullqueue and blocks there.

In that case the current code will post a BUFFERING message on the bus when emptyqueue gets emptied, which leads to the application setting the pipeline state to PAUSED. So now we end up in a situation where 'laggy downstream' is prerolled and will not unblock anymore because the pipeline is set to PAUSED, fullqueue does not get a chance to be emptied, and emptyqueue cannot get filled anymore, so no more BUFFERING messages will be posted and the pipeline is stuck in PAUSED for eternity.

Making sure that we do not try to "buffer" if one of the singlequeues does not need buffering prevents this situation from happening, while still leaving the opportunity for buffering in all other cases.

This implements a new logic where all singlequeues need to require buffering for the multiqueue to actually report that buffering is needed, taking the maximum buffering level among the singlequeues as the reference point.
(In reply to comment #8)
> Review of attachment 283076 [details] [review]:
>
> ::: plugins/elements/gstmultiqueue.c
> @@ +924,3 @@
>        mq->buffering = TRUE;
>        mq->percent = percent;
>        post = TRUE;
>
> Does this make sense? You set multiqueue to buffering mode...
>
> @@ +933,3 @@
> +        post = FALSE;
> +
> +        break;
>
> ... but don't post a buffering message. Wouldn't it be clearer to just store
> the maximum percentage all the time, and also only have mq->buffering==TRUE
> if we actually posted a message that would trigger buffering?
>
> It seems like this here only makes the code even harder to understand, even
> if it probably does the right thing :)

The last patch fixes the mistake pointed out here.
Review of attachment 283082 [details] [review]:

::: plugins/elements/gstmultiqueue.c
@@ +896,3 @@
+{
+  gint percent;
+  gboolean post = FALSE;

FALSE here and ...

@@ +931,3 @@
+  }
+
+  if (post && percent < mq->low_percent) {

... it will never be TRUE here.
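The fix is to derive post from the computed state instead of hard-coding FALSE, along these lines (a sketch reusing the hypothetical compute_buffering() helper from the earlier sketch, not the committed code):

static void
update_buffering (GstMultiQueue * mq)
{
  gint percent;
  gboolean post;

  /* TRUE only when every singlequeue needs buffering; percent is the
   * maximum fill level across the queues */
  post = compute_buffering (mq, &percent);

  if (mq->buffering) {
    /* already buffering: keep posting until we reach the high mark */
    if (percent >= mq->high_percent) {
      mq->buffering = FALSE;
      percent = 100;
    }
    post = TRUE;
  } else if (post && percent < mq->low_percent) {
    /* all queues dropped below the low watermark: start buffering */
    mq->buffering = TRUE;
  } else {
    post = FALSE;
  }

  if (post)
    gst_element_post_message (GST_ELEMENT_CAST (mq),
        gst_message_new_buffering (GST_OBJECT_CAST (mq), percent));
}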
Created attachment 283090 [details] [review]
/me facepalm

That new version should finally be correct :)
commit 78e22645444fa3784e0bcf4e2b1d0858be3af4fb
Author: Thibault Saunier <thibault.saunier@collabora.com>
Date:   Thu Aug 7 12:18:04 2014 +0200

    multiqueue: Not post BUFFERING message if one of the singlequeue doesn't need it

    Imagine the following 'pipeline'

                   --------------
               p1 /| 'fullqueue' |--- 'laggy' downstream
      --------- /  |             |
     -| demuxer |  | multiqueue  |
      --------- \  |             |
               p2 \| 'emptyqueue'|--- 'fast' downstream
                   --------------

    In the case where downstream of one singlequeue (fullqueue) has (a lot
    of) latency (for example for reverse playback with video), we can end
    up with the other singlequeue (emptyqueue) emptied before fullqueue
    gets unblocked. In the meantime, the demuxer tries to push on
    fullqueue and blocks there.

    In that case the current code will post a BUFFERING message on the bus
    when emptyqueue gets emptied, which leads to the application setting
    the pipeline state to PAUSED. So now we end up in a situation where
    'laggy downstream' is prerolled and will not unblock anymore because
    the pipeline is set to PAUSED, fullqueue does not get a chance to be
    emptied, and emptyqueue cannot get filled anymore, so no more
    BUFFERING messages will be posted and the pipeline is stuck in PAUSED
    for eternity.

    Making sure that we do not try to "buffer" if one of the singlequeues
    does not need buffering prevents this situation from happening, while
    still leaving the opportunity for buffering in all other cases.

    This implements a new logic where all singlequeues need to require
    buffering for the multiqueue to actually report that buffering is
    needed, taking the maximum buffering level among the singlequeues as
    the reference point.

    https://bugzilla.gnome.org/show_bug.cgi?id=734412
Sorry I'm a bit late reacting... but in WebKit we rely on BUFFERING messages to keep some of the player's internal state variables up to date. So this patch breaks HLS playback in WebKit unless we explicitly enable the buffering flag of playbin. Is this the right approach, or should WebKit not rely on those messages for HLS at all?
You should still always get buffering messages for HLS if I'm not mistaken. Unless you disable buffering yourself. And unless buffering is already 100% in the very beginning, in which case the buffering message would be just useless noise.
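For reference, explicitly enabling playbin's buffering flag, as mentioned above, looks roughly like this (a sketch; GstPlayFlags is not installed as a public header, so the "buffering" flag value is looked up by nick at runtime, which only works after playbin has been instantiated and the type is registered):

#include <gst/gst.h>

static void
enable_playbin_buffering (GstElement * playbin)
{
  GFlagsClass *klass;
  GFlagsValue *value;
  guint flags;

  g_object_get (playbin, "flags", &flags, NULL);

  klass =
      G_FLAGS_CLASS (g_type_class_ref (g_type_from_name ("GstPlayFlags")));
  value = g_flags_get_value_by_nick (klass, "buffering");
  if (value)
    g_object_set (playbin, "flags", flags | value->value, NULL);
  g_type_class_unref (klass);
}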