GNOME Bugzilla – Bug 773940
splitmuxsink deadlocks when used along with other sinks
Last modified: 2018-11-03 15:14:07 UTC
The pipeline deadlocks when splitmuxsink is used in parallel with another sink. Example pipeline:

gst-launch-1.0 videotestsrc ! x264enc ! tee name=split \
  split.! queue ! splitmuxsink muxer=mpegtsmux location=/root/vinod/apple_tv_ad__.ts \
  split.! queue ! mpegtsmux ! udpsink host=10.0.100.3 port=9000

When udpsink is replaced with another splitmuxsink, the pipeline works perfectly fine. This issue is present in 1.6.4, 1.8.2, 1.10.0 and master.
When enough buffering is added to the queue on the udpsink path, the pipeline works:

gst-launch-1.0 videotestsrc ! x264enc ! tee name=split \
  split.! queue ! splitmuxsink muxer=mpegtsmux location=/root/vinod/apple_tv_ad__.ts max-size-time=10000000000 \
  split.! queue max-size-buffers=4000 max-size-bytes=10487860 max-size-time=10000000000 ! mpegtsmux ! udpsink host=10.0.100.3 port=9000

I tried the following things:
1. Changed the multiqueue max-size-bytes and max-size-time in the splitmuxsink code, but no luck.
2. Increasing or decreasing the queue size has no impact on the pipeline.
3. Increasing the queue size in the udpsink path removed the deadlock.
4. Tried to find the deadlock using gdb, but was not able to identify it.

Can someone help with steps to identify the issue? Is this deadlock caused by the different delays involved in the different sinks?

Regards,
Vinod
The problem you're encountering is that splitmuxsink consumes an entire GOP, up to the 2nd keyframe, before it can preroll. You need enough buffering in both tee paths for that to happen, or the pipeline can't preroll.
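Concretely, that means the queue on the non-splitmuxsink branch must be able to absorb at least one full GOP while splitmuxsink holds it back. A minimal sizing sketch, assuming the keyframe interval is controlled via x264enc's key-int-max at a known framerate (the numbers below are illustrative, not taken from the original report):

```shell
# Rough queue sizing for the parallel (udpsink) branch.
# Assumptions: 30 fps input, x264enc key-int-max=60 -> one GOP spans 2 s.
FPS=30
KEY_INT_MAX=60
GOP_NS=$(( KEY_INT_MAX * 1000000000 / FPS ))   # GOP duration in nanoseconds
# splitmuxsink gathers a whole GOP before prerolling, so give the parallel
# queue at least one GOP of slack, plus a second of margin.
QUEUE_NS=$(( GOP_NS + 1000000000 ))
echo "queue max-size-time=${QUEUE_NS}"
```

The resulting value would then go on the queue in front of mpegtsmux/udpsink, e.g. `queue max-size-buffers=0 max-size-bytes=0 max-size-time=3000000000`, so that the time limit (and not the queue's default buffer-count or byte limits) is the binding constraint.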
Hello Jan,

Thank you. If the queue size is configured slightly greater than the GOP size, the pipeline works. But the pipeline works perfectly fine without configuring the queues when there are multiple splitmuxsinks. How does the pipeline preroll properly in that case?

Regards,
Vinod
When you use only splitmuxsink in each branch, then each splitmuxsink internally buffers a full GOP, which lets them all preroll.
I think the only thing we could do with this report is to make splitmuxsink pass through the first buffers of the first GOP for each fragment, which would then let it buffer less in the internal multiqueue. It would have to do that carefully, though. At the moment, it gathers the entire GOP before releasing it.
Hi Jan,

Will the async-handling property of splitmuxsink allow the sink to preroll unconditionally? I think this internally does exactly what you are describing. The following pipeline worked for me:

gst-launch-1.0 videotestsrc ! x264enc ! tee name=split \
  split.! queue ! splitmuxsink muxer=mpegtsmux async-handling=true location=/root/vinod/apple_tv_ad__.ts max-size-time=1000000000 \
  split.! queue ! mpegtsmux ! udpsink host=10.0.100.3 port=9000

Please correct me if I am wrong.

~ Vinod
Yes, setting async-handling on splitmuxsink hides the prerolling of the filesink inside splitmuxsink from the rest of the pipeline, so the other elements can reach PLAYING state even though the filesink inside splitmuxsink hasn't.
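For reference, a sketch of that workaround assembled as a shell string, using the locations and addresses from the report above (this only prints the command; whether the trade-off described below is acceptable depends on the application):

```shell
# Pipeline with async-handling=true on splitmuxsink, so the rest of the
# pipeline can reach PLAYING without waiting for splitmuxsink's internal
# sink to preroll. This script only assembles and prints the command line.
PIPELINE="videotestsrc ! x264enc ! tee name=split \
  split.! queue ! splitmuxsink muxer=mpegtsmux async-handling=true \
    location=/root/vinod/apple_tv_ad__.ts max-size-time=1000000000 \
  split.! queue ! mpegtsmux ! udpsink host=10.0.100.3 port=9000"
echo "gst-launch-1.0 $PIPELINE"
```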
Will that make any major difference to pipeline functionality?
The difference is that the pipeline will reach PLAYING state before the rendering sink inside splitmuxsink actually has. Whether that's important depends on whether you need that sink to have actually prerolled when the pipeline reaches PLAYING, or whether it's OK for that to happen later.
-- GitLab Migration Automatic Message -- This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/322.