Bug 688310 - New HLS sink element using the GstBaseAdaptiveSink base class

Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: gst-plugins-bad
Version: 1.x
OS: Other Linux
Importance: Normal enhancement
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Duplicates: 668093 (view as bug list)
Depends on: 668093
Blocks:
Reported: 2012-11-14 11:49 UTC by Andoni Morales
Modified: 2018-11-03 13:13 UTC


Attachments
IRC discussion from 29-November-2012 ca. 10:30h (6.65 KB, text/plain)
2012-11-30 20:34 UTC, Tim-Philipp Müller

Description Andoni Morales 2012-11-14 11:49:53 UTC
This is a port of hlssink using the base class for adaptive sinks in #668093.

This sink adds the following features with respect to the one for 0.10:
  * support for multi-bitrate streams
  * support for byte-ranges media segments (added in version 4 of the protocol)
  * application interface (similar to appsink)
  * builds on the GstBaseAdaptiveSink base class

You can find it in the baseadaptive branch:

https://github.com/ylatuya/gst-plugins-bad/tree/baseadaptive

Usage:

gst-launch videotestsrc ! tee name=t ! \
   queue ! x264enc bitrate=1000 ! mpegtsmux ! queue ! hlssink name=s \
   t. ! queue ! x264enc bitrate=2000 ! mpegtsmux ! queue ! s. \
   t. ! queue ! x264enc bitrate=3000 ! mpegtsmux ! queue ! s.
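
For reference, the same multi-bitrate pipeline can also be set up from application code. This is only a minimal sketch using gst_parse_launch; it assumes the hlssink from the baseadaptive branch above is installed and takes request pads for the extra bitrates:

  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    GstElement *pipeline;
    GError *error = NULL;

    gst_init (&argc, &argv);

    /* Same three-bitrate pipeline as the gst-launch example above */
    pipeline = gst_parse_launch (
        "videotestsrc ! tee name=t ! "
        "queue ! x264enc bitrate=1000 ! mpegtsmux ! queue ! hlssink name=s "
        "t. ! queue ! x264enc bitrate=2000 ! mpegtsmux ! queue ! s. "
        "t. ! queue ! x264enc bitrate=3000 ! mpegtsmux ! queue ! s.", &error);
    if (pipeline == NULL) {
      g_printerr ("Failed to create pipeline: %s\n", error->message);
      g_clear_error (&error);
      return 1;
    }

    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    /* Run forever; a real application would watch the bus for errors/EOS */
    g_main_loop_run (g_main_loop_new (NULL, FALSE));
    return 0;
  }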
Comment 1 Andoni Morales 2012-11-25 11:20:50 UTC
Can anyone review this, please? I am open to any kind of change, modification, whatever... but I wouldn't like to have it waiting forever in Bugzilla as has happened in the past.
I think this implementation is substantially better than the current one, supporting a much wider range of use cases.
Comment 2 Tim-Philipp Müller 2012-11-30 20:34:07 UTC
Created attachment 230341 [details]
IRC discussion from 29-November-2012 ca. 10:30h

For posterity: see attached IRC discussion
Comment 3 Olivier Crête 2013-04-16 20:40:07 UTC
@andoni: Did you ever get around to updating your branches to use multifilesink/etc.? We should try to get support for this, DASH, etc. upstream soonish.
Comment 4 Olivier Crête 2013-04-17 23:41:43 UTC
*** Bug 668093 has been marked as a duplicate of this bug. ***
Comment 5 Olivier Crête 2013-04-17 23:42:20 UTC
I had a look at your code, I think it's a good start and has some important features over the current hlssink, such as supporting multiple bitrates.

Some comments:

1. I'm not sure why you have a base class and separate dashsink and hlssink. I think it would make more sense to have a single "fragmentsink" that can produce both, since it is very possible to make a stream that is both DASH and HLS at the same time (and then to write both manifests). If you do that, then you no longer need an external base class.

2. As discussed on bug #688310, you should use multifilesink; it might make sense to allow the user to replace this multifilesink with his own element.

3. I'm not sure listening on the ForceKeyUnit is the best way to split the fragments, it might make sense to instead listen to FLAG_HEADER and FLAG_DELTA_UNIT. This way, one can use it from a pre-encoded file.
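
(For reference, a minimal sketch of this buffer-flag approach, assuming GStreamer 1.x flags and an encoder such as x264enc upstream of the sink; the probe and function names are only illustrative:)

  #include <gst/gst.h>

  /* Watch buffer flags on an encoded video stream: buffers without
   * GST_BUFFER_FLAG_DELTA_UNIT are keyframes and can start a new
   * fragment; GST_BUFFER_FLAG_HEADER marks codec headers that must be
   * kept with the following fragment. */
  static GstPadProbeReturn
  fragment_boundary_probe (GstPad * pad, GstPadProbeInfo * info,
      gpointer user_data)
  {
    GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);

    if (GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_HEADER))
      g_print ("header buffer, belongs to the next fragment\n");
    else if (!GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT))
      g_print ("keyframe at %" GST_TIME_FORMAT ", possible fragment boundary\n",
          GST_TIME_ARGS (GST_BUFFER_PTS (buf)));

    return GST_PAD_PROBE_OK;
  }

  static void
  watch_fragment_boundaries (GstElement * encoder)
  {
    GstPad *pad = gst_element_get_static_pad (encoder, "src");

    gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
        fragment_boundary_probe, NULL, NULL);
    gst_object_unref (pad);
  }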

4. I'm also not incredibly happy about seeing decodebin in there; if we just muxed the stream, we shouldn't need to re-demux it. This may mean that we need to enrich the caps of muxed streams to know what they contain, or forward something along the lines of an encoding profile down the pipeline.

5. DASH allows for more complex cases than just N streams representing the same content: it allows for separate files for separate languages, etc. It might make sense to think of a more complex API to create these kinds of streams.
Comment 6 Andoni Morales 2013-04-18 08:45:58 UTC
1) I think we should keep 2 different elements, even though in some special cases you can produce DASH and HLS content from the same stream. HLS is a subset of DASH, which means you can only have HLS playlists for certain kinds of streams (mpeg-ts). DASH, on the other hand, is codec-agnostic: for now there are defined profiles for MP4 and mpeg-ts with several levels of complexity, but more can be added in the future (such as for WebM).
We could at some point have a bin that does both HLS and DASH, but with a restricted set of input caps. The base class has a write-to-disk property that can be disabled for one of them, meaning it will only write the playlist while fragments are written by the other sink.
Also note that HLS's recommended fragment duration is 10 seconds, while in DASH fragments are around 2 seconds long. So producing both kinds of content from the same stream is a special case and not the default operation mode.

2) I am not in favour of replacing the code that writes fragments to disk with multifilesink, because it will make things harder in the future for features such as encryption or Smooth Streaming live, but that's what was agreed, so let's do it.

3) ForceKeyUnit is the only way to split fragments correctly. In an HLS stream segments are 10 seconds long, so you let the encoder generate keyframes when needed, except every 10 seconds, where you force one. The only way to know when this keyframe was forced is by listening to this event. We will also need to listen to FLAG_DELTA_UNIT to create I-frame playlists in HLS, which is not implemented yet.
We could also change this behaviour and let the sink cut a fragment every X seconds, using the closest FLAG_DELTA_UNIT as a reference for encoded streams, but that needs to be an option.
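
(For reference, a minimal sketch of the force-keyframe side of this, using the libgstvideo event helpers; the function name and the 10-second policy are only illustrative:)

  #include <gst/gst.h>
  #include <gst/video/video.h>

  /* Ask the encoder for a keyframe at 'running_time', e.g. every 10
   * seconds for HLS.  all_headers=TRUE asks it to resend SPS/PPS so the
   * new fragment is self-contained. */
  static void
  request_fragment_boundary (GstElement * encoder, GstClockTime running_time)
  {
    GstEvent *event;

    event = gst_video_event_new_upstream_force_key_unit (running_time,
        TRUE /* all_headers */, 0 /* count */);
    gst_element_send_event (encoder, event);
  }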

4) I am not happy about it either, but that's the only way I found to get the stream info needed for the DASH media representation, such as width, height, codec type, etc. It should be possible to add this to the caps, but it won't work for encoded streams.
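
(Until the core grows something better, the fields that parsers and encoders do put on encoded-video caps can at least be read directly; a minimal sketch:)

  #include <gst/gst.h>

  /* Read codec name, width and height from the current caps of an
   * encoded video pad, as far as they are present. */
  static void
  print_stream_info (GstPad * pad)
  {
    GstCaps *caps = gst_pad_get_current_caps (pad);
    GstStructure *s;
    gint width = 0, height = 0;

    if (caps == NULL)
      return;

    s = gst_caps_get_structure (caps, 0);
    gst_structure_get_int (s, "width", &width);
    gst_structure_get_int (s, "height", &height);
    g_print ("codec: %s, %dx%d\n", gst_structure_get_name (s), width, height);

    gst_caps_unref (caps);
  }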

5) We need to improve the API for alternative renditions. When adding a new stream to the sink there is no way to let it know whether it's a new bitrate or a variant stream of an existing one. The sink relies on on-demand pads to make it work with gst-launch, but we should additionally provide an API to add pads with some metadata, such as the stream bitrate and the stream information (name of the alternative rendition or language for audio streams).
Comment 7 Olivier Crête 2013-04-19 16:28:00 UTC
2) Why is multifilesink wrong for Smooth Streaming live? For encryption, it might make sense to add an extension point (like an insertbin) before the sink.
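
(A rough sketch of that extension-point idea with the insertbin from gst-plugins-bad; the "myencrypter" element is hypothetical and only stands in for an application-provided element:)

  #include <gst/gst.h>
  #include <gst/insertbin/gstinsertbin.h>

  static void
  on_inserted (GstInsertBin * insertbin, GstElement * element,
      gboolean success, gpointer user_data)
  {
    g_print ("element %s inserted: %s\n", GST_OBJECT_NAME (element),
        success ? "ok" : "failed");
  }

  /* Link ... ! extension ! hlssink, then let the application append its
   * own processing (e.g. an encrypter) into the insertbin at runtime. */
  static GstElement *
  make_extension_point (void)
  {
    GstElement *extension = gst_insert_bin_new ("pre-sink-extension");
    GstElement *encrypter = gst_element_factory_make ("myencrypter", NULL);

    if (encrypter != NULL)
      gst_insert_bin_append (GST_INSERT_BIN (extension), encrypter,
          on_inserted, NULL);

    return extension;
  }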

3) Why do you care if the keyframe is forced or not? Can't you just wait for the first keyframe after the 10-second mark? Why does using FLAG_DELTA_UNIT need to be an option?

4) We really need to fix that in the core.
Comment 8 Andoni Morales 2013-04-20 09:43:03 UTC
(In reply to comment #7)
> 2) Why is multifilesink wrong for Smooth Streaming live? For encryption, it
> might make sense to add an extension point (like an insertbin) before the sink.

In Smooth Streaming live there is a special atom which includes the timestamps of the next 2 incoming fragments. The playlist is not reloaded by clients, and that's how clients know when they should fetch the next fragment. It's easier to keep the whole fragment in memory and rewrite this part once enough fragments have been queued than to read the fragment back from disk.
For encryption it's not that important because most ciphers are block ciphers, but you could have a signal in the sink emitting 'fragment-added' so that applications could apply their own encryption.
The point of using multifilesink is that you shouldn't have to take care of delimiting fragments (it already does that using the GstForceKeyUnit event) and you shouldn't need to write code that writes buffers to disk... But in the end you still need to keep track of fragment boundaries in the sink (for timestamps and fragment sizes) and you still need code that writes buffers to disk (for the playlists), so there is no real benefit and it adds more code to the sink.
But that's been discussed several times and I can live with it :)


> 3) Why do you care if the keyframe is forced or not? Can't you just wait for
> the first keyframe after the 10 second mark? Why does using FLAG_DELTA_UNIT
> need to be an option ?
You do care about the GstForceKeyUnit events because they fully delimit the boundaries of a fragment. Containers also do their part of the fragmentation when they receive a GstForceKeyUnit event: for instance, mpeg-ts ensures that the fragment starts with an SPS/PPS, and MP4 creates a Movie Fragment, adding the Movie Fragment headers and flushing the buffers queued for the fragment.
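
(For reference, the event can be recognised and parsed in the sink with the libgstvideo helpers; a minimal sketch of an event handler picking up the fragment boundary:)

  #include <gst/gst.h>
  #include <gst/video/video.h>

  /* In the sink's event handler: detect the downstream force-key-unit
   * event forwarded by the muxer and use its running time as the start
   * of the next fragment.  Returns TRUE if the event was a boundary. */
  static gboolean
  is_fragment_boundary (GstEvent * event, GstClockTime * boundary)
  {
    GstClockTime timestamp, stream_time, running_time;
    gboolean all_headers;
    guint count;

    if (!gst_video_event_is_force_key_unit (event))
      return FALSE;

    if (!gst_video_event_parse_downstream_force_key_unit (event, &timestamp,
            &stream_time, &running_time, &all_headers, &count))
      return FALSE;

    *boundary = running_time;
    return TRUE;
  }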
Comment 9 Olivier Crête 2013-04-26 19:06:48 UTC
Is there a Smooth Streaming sink somewhere? Or was that done application-side?

I rebased your branch over the latest git and ported it to GLib 2.32:

http://cgit.collabora.com/git/user/tester/gst-plugins-bad.git/log/?h=baseadaptive
Comment 10 Andoni Morales 2013-04-27 12:10:18 UTC
No smooth streaming yet, but it should be really easy to add, as it would only require writing a manifest renderer (at least for on-demand). Live is only a bit trickier.

I think there might be a GType collision with the fragmented plugin (hlsdemux) for GstFragment. We should make sure that this plugin also uses the baseadaptive library and the GstFragment from it. Maybe we can also rename baseadaptive to adaptivestreaming...
Comment 11 GStreamer system administrator 2018-11-03 13:13:39 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/82.