Bug 760880 - hlssink: stops fragmentizing the stream so that segment file grows infinitely
Status: RESOLVED FIXED
Product: GStreamer
Classification: Platform
Component: gst-plugins-bad
Version: 1.6.2
OS: Other Linux
Importance: Normal critical
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2016-01-20 09:59 UTC by Andreas Frisch
Modified: 2018-05-06 13:04 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
log results of buffer probes (1.40 MB, text/x-log)
2016-01-22 08:32 UTC, Andreas Frisch

Description Andreas Frisch 2016-01-20 09:59:03 UTC
I have a pipeline with a custom H.264 and AAC source, parsers, mpegtsmux, a queue and hlssink. There's a tee piping the TS to a tcpclientsink and an appsink as well. The fragment duration is set to 2 s.
After maybe 100 or 2500 correctly fragmented segments, one segment will eventually grow until the disk is full. The other transfers (TCP and RTSP) keep running correctly.
My video source returns FALSE on the force-keyframe events.
With buffer probes installed, I do observe keyframe buffers both leaving the h264parse element and going into hlssink, so it's not caused by missing I-frames.

I'm going to do some more debugging on this today.
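
(For reference, a minimal sketch of the kind of buffer probe described above, checking for keyframes arriving at hlssink's sink pad. This is not the reporter's code; the element name "hlssink0" is an assumption.)

#include <gst/gst.h>

/* Buffer probe: a buffer without the DELTA_UNIT flag is a keyframe. */
static GstPadProbeReturn
keyframe_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);

  if (!GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT))
    g_print ("keyframe entering hlssink, pts %" GST_TIME_FORMAT "\n",
        GST_TIME_ARGS (GST_BUFFER_PTS (buf)));

  return GST_PAD_PROBE_OK;
}

/* Attach the probe once the pipeline has been built. */
static void
install_keyframe_probe (GstElement *pipeline)
{
  GstElement *hlssink = gst_bin_get_by_name (GST_BIN (pipeline), "hlssink0");
  GstPad *sinkpad = gst_element_get_static_pad (hlssink, "sink");

  gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_BUFFER,
      keyframe_probe, NULL, NULL);

  gst_object_unref (sinkpad);
  gst_object_unref (hlssink);
}
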
Comment 1 Tim-Philipp Müller 2016-01-20 10:06:23 UTC
Since you don't handle the force-keyframe events at all, the question is rather why it works in the first place.

Did you find a way to reproduce this with stock GStreamer elements?
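
(A hedged sketch of how a custom source's src-pad event handler could honour the upstream force-key-unit events that hlssink sends, rather than returning FALSE. This is not the reporter's code; request_keyframe_from_encoder() is a hypothetical hook into the application's encoder.)

#include <gst/gst.h>
#include <gst/video/video.h>

/* Hypothetical hook into the application's encoder. */
static void
request_keyframe_from_encoder (GstObject *src, GstClockTime running_time)
{
  /* ask the encoder for an IDR frame here */
}

/* src-pad event handler of the custom source element */
static gboolean
my_src_event (GstPad *pad, GstObject *parent, GstEvent *event)
{
  if (GST_EVENT_TYPE (event) == GST_EVENT_CUSTOM_UPSTREAM &&
      gst_video_event_is_force_key_unit (event)) {
    GstClockTime running_time;
    gboolean all_headers;
    guint count;

    gst_video_event_parse_upstream_force_key_unit (event,
        &running_time, &all_headers, &count);
    g_print ("force-key-unit: count %u, all_headers %d, running time %"
        GST_TIME_FORMAT "\n", count, all_headers,
        GST_TIME_ARGS (running_time));

    request_keyframe_from_encoder (parent, running_time);

    gst_event_unref (event);
    return TRUE;
  }

  return gst_pad_event_default (pad, parent, event);
}
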
Comment 2 Andreas Frisch 2016-01-20 10:29:32 UTC
Tim, the force-keyframe events do come back to hlssink and are handled correctly:

1:15:11.138381438   375   0xb38db0 INFO                 hlssink gsthlssink.c:476:gst_hls_sink_ghost_event_probe:<hlssink> setting index 2164
1:15:11.138646993   375   0xb38db0 INFO                 hlssink gsthlssink.c:297:gst_hls_sink_handle_message:<hlssink> COUNT 2164
1:15:11.139036030   375   0xb38db0 INFO                 hlssink gsthlssink.c:502:schedule_next_key_unit:<hlssink> sending upstream force-key-unit, index 2165 now 1:15:08.349824081 target 1:15:10.349824081

In this example, segment 2163 was the one growing wild.

I am not sure if I can reproduce it, probably not though.
I just have to find out why multifilesink doesn't start a new file. It might even be a multifilesink issue.
Comment 3 Andreas Frisch 2016-01-21 07:58:49 UTC
So it ran the whole night: 500000 segments without error.
The only thing I changed was setting the fragment duration down to 1 second (in the hope that more segments would trigger the issue faster) and adding a few lines of debug output to multifilesink.
Could this be a missing lock in gst_multi_file_sink_open_next_file? I see multifilesink->files gets modified without protection. Technically there should be only one thread, but who knows.
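
(Purely illustrative, not the actual multifilesink code: the kind of locking pattern one would look for when a per-element list such as multifilesink->files can be touched from more than one thread. The type and function names are made up.)

#include <gst/gst.h>

typedef struct {
  GstElement element;
  GSList *files;                /* filenames of the fragments written so far */
} MyMultiFileSink;

static void
my_multi_file_sink_add_file (MyMultiFileSink *sink, const gchar *filename)
{
  /* Take the element's object lock around the shared list so a concurrent
   * reader or cleanup routine never sees it half-updated. */
  GST_OBJECT_LOCK (sink);
  sink->files = g_slist_append (sink->files, g_strdup (filename));
  GST_OBJECT_UNLOCK (sink);
}
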
Comment 4 Andreas Frisch 2016-01-22 08:32:55 UTC
Created attachment 319541 [details]
log results of buffer probes

I've got new suspicions about what may be causing this issue:
I placed buffer probes on the src pad of my h264source element, on the src pad of h264parse and on hlssink's sink pad (after the mpegtsmux).
It turns out that, at regular intervals, there are buffers whose timestamps are dozens of seconds late compared to the others.
I have no idea where those buffers originate, because I can't find any other occurrences of those specific PTS values in the logs.
I know H.264 has reordering, but certainly not to this degree.
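
(A rough sketch of the kind of PTS-monitoring probe described above, not the reporter's actual code: it logs any buffer whose timestamp jumps backwards by more than a threshold relative to the previous buffer on the same pad. The 5-second threshold is arbitrary.)

#include <gst/gst.h>

#define BACKWARD_JUMP_THRESHOLD (5 * GST_SECOND)

/* Buffer probe: complain whenever a buffer's PTS is far behind the
 * previous buffer seen on the same pad. */
static GstPadProbeReturn
pts_monitor_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstClockTime *last_pts = user_data;
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstClockTime pts = GST_BUFFER_PTS (buf);

  if (GST_CLOCK_TIME_IS_VALID (pts) && GST_CLOCK_TIME_IS_VALID (*last_pts) &&
      pts + BACKWARD_JUMP_THRESHOLD < *last_pts)
    g_print ("late buffer on %s: pts %" GST_TIME_FORMAT " after %"
        GST_TIME_FORMAT "\n", GST_PAD_NAME (pad),
        GST_TIME_ARGS (pts), GST_TIME_ARGS (*last_pts));

  if (GST_CLOCK_TIME_IS_VALID (pts))
    *last_pts = pts;

  return GST_PAD_PROBE_OK;
}

/* Attach one probe (with its own PTS state) to each pad of interest. */
static void
monitor_pad (GstPad *pad)
{
  GstClockTime *last_pts = g_new (GstClockTime, 1);

  *last_pts = GST_CLOCK_TIME_NONE;
  gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
      pts_monitor_probe, last_pts, g_free);
}
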
Comment 5 Tim-Philipp Müller 2016-01-22 09:50:42 UTC
Could you point us to the line numbers in the log where you see the problem?
Comment 6 Andreas Frisch 2016-01-22 10:02:17 UTC
Comment on attachment 319541 [details]
log results of buffer probes

There are hundreds of occurrences; the first one that popped up when I randomly scrolled somewhere is around line 2210, pts 0:00:13.360000006, surrounded by PTS values of 50 s and more.
Somewhat interesting is that the "late" PTS values are themselves also monotonically increasing and always seem to keep the same offset from the real PTS.
This offset varies every time I run the pipeline; I've observed values between 12 s and 60 s.
Comment 7 Tim-Philipp Müller 2016-01-22 11:08:03 UTC
Something to try: maybe you can do

  yoursrc ! tee name=t  \
      t. ! queue ! gdppay ! filesink location=src.gdp  \
      t. ! queue ! h264parse ! mpegtsmux ! hlssink

and with a bit of luck (if it's not a race condition) we can then replay the input and reproduce the problem.
Comment 8 Andreas Frisch 2016-01-22 12:25:07 UTC
http://dreambox.guru/dreamvideosource.gdp

Last-but-one correctly chopped segment:
http://dreambox.guru/segment00084.ts
Last, infinitely growing segment (killed after 10x the regular segment duration):
http://dreambox.guru/segment00085.ts
You can clearly see a discontinuity after 1-2 seconds.
I'm going to have to find out what that is.
Comment 9 Andreas Frisch 2016-01-22 12:49:38 UTC
Unfortunately,

  gst-launch-1.0 filesrc location=dreamvideosource.gdp ! gdpdepay ! queue ! h264parse ! mpegtsmux ! queue ! hlssink target-duration=2

creates 98 perfectly chopped segments.
Comment 10 Andreas Frisch 2016-01-25 13:17:58 UTC
This must have something to do with the fact that the pipeline does not run as one fixed graph from the start: the different branches and sinks are added to the tee during playback (see the sketch below).
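
(For context, a sketch of how a branch is typically attached to a tee while the pipeline is already playing, since the reporter suspects this dynamic attachment is involved. Element names and the exact branch layout are assumptions; error handling and property setup are omitted.)

#include <gst/gst.h>

/* Attach a queue ! hlssink branch to a tee that is already carrying TS data. */
static void
attach_hls_branch (GstElement *pipeline, GstElement *tee)
{
  GstElement *queue = gst_element_factory_make ("queue", NULL);
  GstElement *hlssink = gst_element_factory_make ("hlssink", NULL);
  GstPad *teepad, *queuepad;

  gst_bin_add_many (GST_BIN (pipeline), queue, hlssink, NULL);
  gst_element_link (queue, hlssink);

  /* Bring the new elements up to the running pipeline's state before
   * data starts flowing into them. */
  gst_element_sync_state_with_parent (queue);
  gst_element_sync_state_with_parent (hlssink);

  /* Request a new source pad from the tee and link it to the branch
   * (gst_element_request_pad_simple() in newer GStreamer). */
  teepad = gst_element_get_request_pad (tee, "src_%u");
  queuepad = gst_element_get_static_pad (queue, "sink");
  gst_pad_link (teepad, queuepad);

  gst_object_unref (queuepad);
  gst_object_unref (teepad);
}
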
Comment 11 minfrin 2017-01-01 22:14:49 UTC
I've also been seeing this behaviour; did you ever get to the bottom of it?
Comment 12 Andreas Frisch 2017-01-01 22:30:49 UTC
Frankly, I don't remember exactly, but I think something that I changed upstream in the pipeline resolved this.
Comment 13 Vivia Nikolaidou 2018-05-06 13:04:08 UTC
OK, then I'd just close this. Feel free to reopen if something turns up again.