GNOME Bugzilla – Bug 760880
hlssink: stops fragmentizing the stream so that segment file grows infinitely
Last modified: 2018-05-06 13:04:08 UTC
i have a pipeline with a custom h264 and aac source, parsers, mpegtsmux, a queue and hlssink. there's also a tee piping the ts to a tcpclientsink and an appsink. fragment duration is set to 2s. after maybe 100 or 2500 correctly fragmented segments, one segment eventually grows until the disk is full, while the other transfers (tcp and rtsp) keep running correctly. my videosource returns FALSE on the force-keyframe events. with buffer probes installed, i do observe keyframe buffers both leaving the h264parse element and going into hlssink, so it's not caused by missing i-frames. i'm going to do some more debugging on this today.
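for reference, this is a minimal sketch of the kind of buffer probe used for that keyframe check, assuming it sits on hlssink's sink pad; the pad lookup and the logging are illustrative, not the actual debugging code:

#include <gst/gst.h>

/* buffer probe: a buffer without the DELTA_UNIT flag is a keyframe */
static GstPadProbeReturn
keyframe_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);

  if (!GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT))
    g_print ("keyframe at pts %" GST_TIME_FORMAT " on %s\n",
        GST_TIME_ARGS (GST_BUFFER_PTS (buf)), GST_PAD_NAME (pad));

  return GST_PAD_PROBE_OK;
}

static void
install_keyframe_probe (GstElement *hlssink)
{
  GstPad *sinkpad = gst_element_get_static_pad (hlssink, "sink");

  gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_BUFFER,
      keyframe_probe_cb, NULL, NULL);
  gst_object_unref (sinkpad);
}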
Since you don't handle the force-keyframe events at all, the question is why does it work in the first place? Did you find a way to reproduce this with stock GStreamer elements?
tim, the force keyframe events do come back to hlssink and are being handled correctly:

1:15:11.138381438 375 0xb38db0 INFO hlssink gsthlssink.c:476:gst_hls_sink_ghost_event_probe:<hlssink> setting index 2164
1:15:11.138646993 375 0xb38db0 INFO hlssink gsthlssink.c:297:gst_hls_sink_handle_message:<hlssink> COUNT 2164
1:15:11.139036030 375 0xb38db0 INFO hlssink gsthlssink.c:502:schedule_next_key_unit:<hlssink> sending upstream force-key-unit, index 2165 now 1:15:08.349824081 target 1:15:10.349824081

in this example, segment 2163 was the one growing wild. i am not sure if i can reproduce it, probably not though. i just have to find out why multifilesink doesn't start a new file. it might even be a multifilesink issue.
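for completeness, a minimal sketch of how such an upstream force-key-unit event can be picked up in a custom source, e.g. via an upstream event probe on its src pad; the probe and the IDR request comment are assumptions for illustration, not the actual source code:

#include <gst/gst.h>
#include <gst/video/video-event.h>

/* install with GST_PAD_PROBE_TYPE_EVENT_UPSTREAM on the source's src pad */
static GstPadProbeReturn
force_key_unit_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);

  if (gst_video_event_is_force_key_unit (event)) {
    GstClockTime running_time;
    gboolean all_headers;
    guint count;

    gst_video_event_parse_upstream_force_key_unit (event,
        &running_time, &all_headers, &count);
    g_print ("force-key-unit #%u requested for running time %" GST_TIME_FORMAT "\n",
        count, GST_TIME_ARGS (running_time));
    /* a real source would ask its encoder for an IDR frame here instead of
     * letting the event fail */
  }

  return GST_PAD_PROBE_OK;
}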
so it ran the whole night, 500000 segments without error. the only thing i changed was setting the fragment duration down to 1 second (in the hope that more segments would trigger the issue faster) and adding a few lines of debug output to multifilesink. could this be a missing lock in gst_multi_file_sink_open_next_file? i see multifilesink->files gets modified without protection. technically there should only be one thread involved, but who knows.
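purely as an illustration of that suspicion (not the actual multifilesink code): if the files list really were touched from more than one thread, the usual GStreamer pattern would be to guard it with the element's object lock, roughly like this hypothetical helper:

#include <gst/gst.h>

/* hypothetical helper, not taken from gstmultifilesink.c */
static void
remember_old_file (GstElement *sink, GSList **files, const gchar *filename)
{
  GST_OBJECT_LOCK (sink);
  *files = g_slist_append (*files, g_strdup (filename));
  GST_OBJECT_UNLOCK (sink);
}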
Created attachment 319541 [details]
log results of buffer probes

i've got new suspicions about what may be causing this issue: i placed buffer probes at the srcpad of my h264source element, at the srcpad of h264parse and at hlssink's sinkpad (after the mpegtsmux). it turns out that, at a regular pace, there are always buffers whose timestamps are dozens of seconds late compared to the surrounding ones. i have no idea where those buffers originate, because i can't find any other occurrences of those specific pts values in the logs. i know h264 has reordering, but certainly not to this degree.
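a minimal sketch of such a timestamp probe, assuming a single pad and a hypothetical 5 second threshold for what counts as "late":

#include <gst/gst.h>

/* warns when a buffer's pts is more than 5 seconds behind the previous one;
 * the static state means this sketch is meant for a single pad only */
static GstPadProbeReturn
pts_jump_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  static GstClockTime last_pts = GST_CLOCK_TIME_NONE;
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstClockTime pts = GST_BUFFER_PTS (buf);

  if (GST_CLOCK_TIME_IS_VALID (pts) && GST_CLOCK_TIME_IS_VALID (last_pts)
      && pts + 5 * GST_SECOND < last_pts)
    g_print ("%s: pts %" GST_TIME_FORMAT " is far behind previous %" GST_TIME_FORMAT "\n",
        GST_PAD_NAME (pad), GST_TIME_ARGS (pts), GST_TIME_ARGS (last_pts));

  if (GST_CLOCK_TIME_IS_VALID (pts))
    last_pts = pts;

  return GST_PAD_PROBE_OK;
}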
Could you point us to the line numbers in the log where you see the problem?
Comment on attachment 319541 [details]
log results of buffer probes

there are hundreds of occurrences; the first one that popped up when i randomly scrolled somewhere is around line 2210, pts 0:00:13.360000006 surrounded by pts values of 50s and above. somewhat interesting is that the "late" pts are themselves also monotonically increasing and always seem to keep the same offset to the real pts. this offset varies every time i run the pipeline; i've observed values between 12s and 60s.
Something to try: maybe you can do

  yoursrc ! tee name=t \
    t. ! queue ! gdppay ! filesink location=src.gdp \
    t. ! queue ! h264parse ! mpegtsmux ! hlssink

and with a bit of luck (if it's not a race condition) we can then replay the input and reproduce the problem.
http://dreambox.guru/dreamvideosource.gdp

last-but-one correctly chopped segment: http://dreambox.guru/segment00084.ts
last, infinitely growing segment (killed after 10x the regular segment duration): http://dreambox.guru/segment00085.ts

you can clearly see a discontinuity after 1-2 seconds. i'm going to have to find out what that is.
unfortunately,

  gst-launch-1.0 filesrc location=dreamvideosource.gdp ! gdpdepay ! queue ! h264parse ! mpegtsmux ! queue ! hlssink target-duration=2

creates 98 perfectly chopped segments.
this must have something to do with the fact that the pipeline is not built up front and running straight through, but that the different branches and sinks are added to the tee during playback.
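for context, a minimal sketch of how a branch is typically attached to a running tee (request a pad, link, then sync the branch's state with the pipeline); the element names and the queue ! hlssink branch layout are assumptions, not the actual application code:

#include <gst/gst.h>

/* element names and branch layout are placeholders */
static void
attach_hls_branch (GstPipeline *pipeline, GstElement *tee)
{
  GstElement *queue = gst_element_factory_make ("queue", NULL);
  GstElement *hls = gst_element_factory_make ("hlssink", NULL);
  GstPad *teepad, *sinkpad;

  gst_bin_add_many (GST_BIN (pipeline), queue, hls, NULL);
  gst_element_link (queue, hls);

  teepad = gst_element_get_request_pad (tee, "src_%u");
  sinkpad = gst_element_get_static_pad (queue, "sink");
  gst_pad_link (teepad, sinkpad);
  gst_object_unref (sinkpad);

  /* bring the new branch up to the running pipeline's state; getting this
   * order wrong is a classic source of timestamp/segment weirdness on
   * dynamically added branches */
  gst_element_sync_state_with_parent (queue);
  gst_element_sync_state_with_parent (hls);
}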
I've also been seeing this behaviour; did you ever get to the bottom of it?
Frankly, I don't remember exactly, but I think something I changed upstream in the pipeline resolved this.
OK, then I'd just close this. Feel free to reopen if something turns up again.