GNOME Bugzilla – Bug 476514
[multiqueue] Doesn't forward EOS event in all cases
Last modified: 2016-11-12 11:27:16 UTC
The following pipeline never finishes:

$ gst-launch gnomevfssrc location=http://replaygain.hydrogenaudio.org/ref_pink.wav ! queue ! wavparse ! fakesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock

The reason seems to be that wavparse never sends an EOS event downstream. gst_wavparse_chain returns GST_FLOW_UNEXPECTED when wavparse thinks it has reached EOS. This causes the queue's src pad task to pause, so the EOS event is never forwarded by the queue. Wavparse doesn't generate an EOS event itself either.
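For context, this is roughly the pattern involved - a minimal hand-written sketch only, not the actual gstwavparse.c code: a chain-based parser that decides it has consumed all its payload and signals that by returning GST_FLOW_UNEXPECTED (GStreamer 0.10 naming; GST_FLOW_EOS in 1.x). The element, struct and field names below are made up for illustration.

/* Minimal sketch, hypothetical element: a chain function that signals
 * "I have everything I need" via its flow return. */
static GstFlowReturn
my_parse_chain (GstPad * pad, GstBuffer * buf)
{
  MyParse *parse = MY_PARSE (gst_pad_get_parent (pad));
  GstFlowReturn ret;

  if (parse->dataleft == 0) {
    /* All declared payload has been consumed. Returning UNEXPECTED here
     * travels back through gst_pad_push() into the upstream queue, whose
     * source-pad task then pauses - and the EOS event sitting behind the
     * data in that queue is never pushed downstream. */
    gst_buffer_unref (buf);
    gst_object_unref (parse);
    return GST_FLOW_UNEXPECTED;
  }

  /* normal case: parse the buffer and push the result downstream */
  ret = gst_pad_push (parse->srcpad, buf);

  gst_object_unref (parse);
  return ret;
}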
Created attachment 95511 [details] [review] proposed patch
Created attachment 95515 [details] [review] proposed patch
Created attachment 95518 [details] [review] wavparse-eos.diff

What about this patch instead? This prevents two EOS events when in loop mode. Does this fix your bug?
Well, it does for me... committed :) Please reopen if you still have problems with this.

2007-09-13  Sebastian Dröge  <slomo@circular-chaos.org>

	* gst/wavparse/gstwavparse.c: (gst_wavparse_perform_eos),
	(gst_wavparse_loop), (gst_wavparse_chain):
	Add EOS logic for the push-based mode too. Fixes #476514.
I'm not sure whether this fix is correct. The EOS event should be generated by the element driving the pipeline (in this case: not wavparse), and should be passed on downstream until it reaches a sink. If this doesn't happen, it needs to be fixed upstream IMHO.
Ok, then let's reopen this bug...
(In reply to comment #5)
> I'm not sure whether this fix is correct. The EOS event should be generated
> by the element driving the pipeline (in this case: not wavparse), and should
> be passed on downstream until it reaches a sink. If this doesn't happen, it
> needs to be fixed upstream IMHO.

So in this case wavparse shouldn't be responsible for determining when the stream ends, and therefore it shouldn't return GST_FLOW_UNEXPECTED in gst_wavparse_chain until it receives an EOS event from upstream.
> So in this case wavparse shouldn't be responsible for determining when the
> stream ends, and therefore it shouldn't return GST_FLOW_UNEXPECTED in
> gst_wavparse_chain until it receives an EOS event from upstream.

I _think_ returning GST_FLOW_UNEXPECTED to tell upstream that processing should stop is okay in this case (even if unusual).
So it's probably either queue or gnomevfssrc that doesn't forward the EOS event? From a short look at a GST_DEBUG=5 log, both forward the event, but wavparse doesn't forward it any further.
Hm, no... queue receives the event but doesn't forward it.
Tim, do you have any idea what could be wrong in queue? :)
Yep, adding the hack below to gstqueue.c makes the above pipeline work. Would be nice if someone with deeper knowledge about queue could look into this.

Index: plugins/elements/gstqueue.c
===================================================================
RCS file: /cvs/gstreamer/gstreamer/plugins/elements/gstqueue.c,v
retrieving revision 1.199
diff -u -p -a -u -p -r1.199 gstqueue.c
--- plugins/elements/gstqueue.c	13 Sep 2007 17:15:38 -0000	1.199
+++ plugins/elements/gstqueue.c	14 Sep 2007 09:55:56 -0000
@@ -758,6 +758,9 @@ gst_queue_handle_sink_event (GstPad * pa
       STATUS (queue, pad, "after flush");
       goto done;
     }
+    case GST_EVENT_EOS:
+      gst_pad_push_event (queue->srcpad, event);
+      goto done;
     default:
       if (GST_EVENT_IS_SERIALIZED (event)) {
         /* serialized events go in the queue */
queue2 suffers from the same bug btw...
(In reply to comment #12)
> Yep, adding the hack below to gstqueue.c makes the above pipeline work. Would
> be nice if someone with deeper knowledge about queue could look into this.

EOS is a serialized event - in queue it is handled by the default: case, and should be placed in the queue and output in correct serial order.

The problem from my perspective is wavparse returning UNEXPECTED when it hasn't yet received EOS - the consequence, as you've noted, is that queue won't send anything more. Instead, I think wavparse should feel free to ignore any further data it receives once it has seen what it thinks is the end of the file, but it should definitely not take it upon itself to return UNEXPECTED - it should just pass the EOS downstream and let that shut the pipeline down.
> Instead, I think wavparse should feel free to ignore any further data it
> receives once it has seen what it thinks is the end of the file, but it
> should definitely not take it upon itself to return UNEXPECTED - it should
> just pass the EOS downstream and let that shut the pipeline down.

Definitely? While your suggestion would work, of course, I think it would be unnecessarily limiting in terms of what can be done in future. I wonder if it might even be bad from a design point of view: it makes it impossible for elements in the pipeline to decide when streaming should stop, putting this burden onto the application, with all that entails (like shutting down the source first to make it send EOS, then waiting for the EOS message before shutting down the rest of the pipeline, etc.). And what if you have a source that provides data forever? (Not so much an issue for wavparse, but in general.)

I think we should allow elements in the pipeline to return FLOW_UNEXPECTED (or a new flow return, I don't really care which) to signal upstream that processing should stop now.
A chain-based element should be allowed to return FLOW_UNEXPECTED, which makes upstream perform its EOS logic (push EOS or post SEGMENT_DONE). It looks like queue is not handling this correctly yet.
(In reply to comment #16)
> A chain-based element should be allowed to return FLOW_UNEXPECTED, which
> makes upstream perform its EOS logic (push EOS or post SEGMENT_DONE). It
> looks like queue is not handling this correctly yet.

So that implies that queue should not shut down the source pad on UNEXPECTED, but should simply ensure that it is returned upstream and then await the EOS?
UNEXPECTED means that downstream does not want any more buffers. It's still fine to push an EOS, FLUSH or NEWSEGMENT. Queue should probably get rid of all buffers up to a queued NEWSEGMENT or EOS, and not shut down the push thread.

I'm thinking about the case where you have httpsrc -> queue -> mad and you requested a SEGMENT seek in time from 0 to 3 seconds; mad does not know the byte end position, so it asks to seek from byte 0 to the end. When it has pushed up to 3 seconds it can return UNEXPECTED, which should make httpsrc perform the EOS logic (post SEGMENT_DONE), at which point the app can do another SEGMENT seek.
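The behaviour described above could look roughly like this in the queue's pushing code. This is only a sketch of the idea, not the actual gstqueue.c change; the helper my_queue_pop_locked() and the unexpected flag are hypothetical names.

/* Sketch only: keep the push thread alive after downstream returned
 * UNEXPECTED, discard further buffers, but still push events like EOS
 * and NEWSEGMENT so they reach downstream. */
static GstFlowReturn
my_queue_push_one (MyQueue * queue)
{
  GstMiniObject *item = my_queue_pop_locked (queue);  /* hypothetical helper */
  GstFlowReturn ret = GST_FLOW_OK;

  if (GST_IS_EVENT (item)) {
    /* events such as EOS and NEWSEGMENT must still go downstream */
    gst_pad_push_event (queue->srcpad, GST_EVENT (item));
  } else if (queue->unexpected) {
    /* downstream already told us it wants no more buffers: drop them */
    gst_mini_object_unref (item);
  } else {
    ret = gst_pad_push (queue->srcpad, GST_BUFFER (item));
    if (ret == GST_FLOW_UNEXPECTED) {
      /* remember it, but don't pause the task: an EOS is still coming
       * from upstream and has to be forwarded */
      queue->unexpected = TRUE;
      ret = GST_FLOW_OK;
    }
  }
  return ret;
}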
	* plugins/elements/gstqueue.c: (gst_queue_locked_enqueue),
	(gst_queue_handle_sink_event), (gst_queue_chain),
	(gst_queue_push_one), (gst_queue_handle_src_query),
	(gst_queue_sink_activate_push), (gst_queue_src_activate_push):
	* plugins/elements/gstqueue.h:
	When downstream returns UNEXPECTED from pushing a buffer, don't try
	to push more buffers but allow pushing of EOS and NEWSEGMENT.
	Add some more debug info here and there. Fixes #476514.
This problem applies to queue2 too, so reopening.

$ gst-launch gnomevfssrc location=http://replaygain.hydrogenaudio.org/ref_pink.wav ! queue2 ! wavparse ! fakesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Hm, works fine for me with queue2 now...
I'm using the latest stable gst packages from Debian plus the patched queue; it hangs as follows for me:

0:00:02.083651000  2792  0x8112578 LOG    wavparse gstwavparse.c:1604:gst_wavparse_stream_data:<wavparse0> Got buffer. timestamp:0:00:04.969070295 , duration:0:00:00.030929705, size:2728
0:00:02.083677000  2792  0x8112578 LOG    wavparse gstwavparse.c:1620:gst_wavparse_stream_data:<wavparse0> offset: 441028 , end: 441028
0:00:02.083688000  2792  0x8112578 LOG    wavparse gstwavparse.c:1504:gst_wavparse_stream_data:<wavparse0> offset: 441028 , end: 441028 , dataleft: 0
0:00:02.083697000  2792  0x8112578 DEBUG  wavparse gstwavparse.c:1628:gst_wavparse_stream_data:<wavparse0> found EOS
0:00:02.083708000  2792  0x8112578 LOG    queue2_dataflow gstqueue2.c:1390:gst_queue_loop:<queue20> pause task, reason: unexpected

Do you have the earlier wavparse workaround reverted?
Hum, right... I still had the old wavparse on that machine. You're right, with queue2 it still doesn't EOS correctly :/
Slomo, maybe because you applied that wavparse patch also. Should that be reverted now?
The wavparse patch _was_ applied; I reverted it in CVS as it's the wrong behaviour (although it doesn't hurt for this bug). With queue2 the EOS behaviour is definitely still broken in current, clean CVS; queue(1) is fixed.
Wim, can you make the same change in queue2? Is it also needed in multiqueue?
	* gst/playback/gstqueue2.c: (update_buffering),
	(gst_queue_locked_flush), (gst_queue_locked_enqueue),
	(gst_queue_handle_sink_event), (gst_queue_chain),
	(gst_queue_push_one), (gst_queue_sink_activate_push),
	(gst_queue_src_activate_push), (gst_queue_src_activate_pull):
	Also fix #476514 for queue2.
It seems to be needed in multiqueue as well.
Wim, should this be fixed before releasing new core & base?
no, I don't think this is going to be done for the release. It's not a regression, and I think it can wait.
It applies to multiqueue too, but is not that easy to fix there.
It's more complicated in multiqueue, mostly because the queue locking is done in the lower-level object and we need to grab it to handle the EOS case.
Maybe a test case would be a good first step towards getting this fixed?
This seems relevant:

commit fbdf4dcedad8692f1e3d8838551188987e462e74
Author: Wim Taymans <wim.taymans@collabora.co.uk>
Date:   Tue Jan 26 12:43:09 2010 +0100

    multiqueue: handle UNEXPECTED flowreturn better

    When we receive an UNEXPECTED flowreturn from downstream, we must not
    shutdown the pushing thread because upstream will at some point push an
    EOS that we still need to push further downstream.
    To achieve this, convert the UNEXPECTED return value to OK.
    Add a fixme so that we implement the right logic to propagate the
    flowreturn upstream at some point.
    Also clean up the unit test a little.
    Fixes #608136
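The core of the change described in that commit message is conceptually just the conversion below (a simplified sketch, not the literal gstmultiqueue.c hunk; srcpad and buffer stand in for the real variables):

/* Simplified sketch of the idea in the commit above: don't let an
 * UNEXPECTED return from downstream stop the pushing thread, because the
 * EOS from upstream still has to be pushed further downstream. */
result = gst_pad_push (srcpad, buffer);
if (result == GST_FLOW_UNEXPECTED) {
  /* FIXME (as noted in the commit): propagate the flow return upstream
   * properly at some point; for now keep the thread running. */
  result = GST_FLOW_OK;
}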
Does this still apply to 1.x ?
This thing is almost 9 years old. A lot has changed in multiqueue since then. If it's still a problem someone will run into it sooner or later and open a new bug about it. Let's close this.