GNOME Bugzilla – Bug 607742
API: add gst_event_new_{upstream,downstream}_force_key_unit() etc.
Last modified: 2011-11-29 09:40:45 UTC
Should we add this to the video library, so that we can also document it?

GstEvent *
gst_event_new_force_key_unit (void)
{
  return gst_event_new_custom (GST_EVENT_CUSTOM_DOWNSTREAM,
      gst_structure_new ("GstForceKeyUnit", NULL));
}

Should we also have this for the codec implementations?

gboolean
gst_event_is_force_key_unit (GstEvent *event)
{
  const GstStructure *s = gst_event_get_structure (event);

  return gst_structure_has_name (s, "GstForceKeyUnit");
}
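For illustration, this is roughly how a downstream element could react to such a custom event in a 0.10-style sink pad event handler; MyEncoder, MY_ENCODER and the force_key_unit flag are made-up names, only the event plumbing is real:

static gboolean
my_encoder_sink_event (GstPad *pad, GstEvent *event)
{
  MyEncoder *enc = MY_ENCODER (gst_pad_get_parent (pad));
  gboolean ret;

  if (GST_EVENT_TYPE (event) == GST_EVENT_CUSTOM_DOWNSTREAM &&
      gst_event_is_force_key_unit (event)) {
    /* hypothetical flag: ask the codec for a sync point on the next frame */
    enc->force_key_unit = TRUE;
  }

  ret = gst_pad_event_default (pad, event);
  gst_object_unref (enc);
  return ret;
}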
We probably want more parameters to this function for the timestamp, etc., and also for the "all-headers" parameter (bug #606689).
And we also want a downstream and upstream version of the function.
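One possible shape for the two variants at this stage (just the custom-event plumbing, no extra parameters yet; this is a sketch, not the final API):

GstEvent *
gst_event_new_downstream_force_key_unit (void)
{
  return gst_event_new_custom (GST_EVENT_CUSTOM_DOWNSTREAM,
      gst_structure_new ("GstForceKeyUnit", NULL));
}

GstEvent *
gst_event_new_upstream_force_key_unit (void)
{
  return gst_event_new_custom (GST_EVENT_CUSTOM_UPSTREAM,
      gst_structure_new ("GstForceKeyUnit", NULL));
}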
We may also want to map it to RTCP messages in rtpbin.
My avpf branch already does RFC 4585 PLI, but not RFC 5104 FIR as used by Google (because in that case rtpbin needs to know when the requested info has been received). http://git.collabora.co.uk/?p=user/tester/gst-plugins-good.git;a=shortlog;h=refs/heads/avpf-timing
Some patches might already depend on it (bug #644151, bug #644154, bug #644150). Should they wait for the new API, or can they be pushed as they are now?
(In reply to comment #5)
> Some patches might already depend on it (bug #644151, bug #644154, bug
> #644150). Should they wait for the new API, or can they be pushed as they
> are now?

Marked them as dependencies. Personally I would like to see this bug solved. If there are patches for all those elements, it sounds like there is a good understanding of the API that is needed. Also, events can be extended later (at the cost of having a gst_new_force_key_frame_event_full() method or such).
While we're at it, maybe we should add some params to say which is the last valid reference, so that the encoder may not need a new keyframe but can instead just use that older reference. The advantage of using the same event is that encoders that don't support this can fall back to a new keyframe. We may also want a way to specify a partial keyframe (i.e. if only some slices were damaged).
I would also add a "count" field (number of GstforceKeyUnit events sent) for when this event is used by a source element to sync downstream element at a constant interval, like for fragmented streaming, where you you want a fragment each 250 frames but you also want to know the index of this fragment.
(In reply to comment #7)
> (In reply to comment #5)
> > Some patches might already depend on it (bug #644151, bug #644154, bug
> > #644150). Should they wait for the new API, or can they be pushed as they
> > are now?
>
> Marked them as dependencies. Personally I would like to see this bug solved.

I will update them to use gst_event_is_force_key_unit() instead.
(In reply to comment #9)
> I would also add a "count" field (number of GstForceKeyUnit events sent) for
> when this event is used by a source element to sync downstream elements at a
> constant interval, like for fragmented streaming, where you want a fragment
> every 250 frames but you also want to know the index of that fragment.

Hmm, not sure if I understand that correctly. This needs well-defined semantics (e.g. how it will change when seeking). Too bad we can't just use the sequence number.
I'm not sure I see the whole picture of use cases for this event, but to me it's useful in 2 cases:

* a downstream element requesting a new sync point, like in real-time protocols after some packet loss.
* an upstream element forcing a new keyframe periodically.

In the second use case, the "count" (sequence, or even buffer offset) is an important piece of information if you are in a multi-bitrate scenario, either because you dynamically add a new "bitrate" or because you add a new sink. If this sink has to name the new fragment (or generate a playlist similar to the other sinks), it needs to know the sequence number of this fragment, which is the only information shared between all the sinks.

I don't really see how seeks would be involved here. Does any encoder forward seek events upstream?
> While we're at it, maybe we should add some params to say ... (snip)
> We may also want a way to specify ... (snip)

In order to move this forward: what is the essential stuff that such an event should *always* have?

Reading http://cgit.freedesktop.org/gstreamer/gst-plugins-base/tree/docs/design/draft-keyframe-force.txt it sounds like the upstream event just has a name, while the downstream event should at least carry timing info and the all-headers flag.

If this is right, I guess we could do something like this (using CUSTOM_BOTH then):

GstEvent * gst_event_new_force_key_unit (void);
void gst_event_set_force_key_unit_{running_time,time_stamp,whatever} (event, ...);

We can then add more API for other "nice to have" things later. Thoughts?
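Usage with that shape would look roughly like the sketch below. The setter names are only illustrative expansions of the proposal above, not existing API:

GstEvent *event;

/* proposed constructor + setters from this comment, never actual GStreamer API */
event = gst_event_new_force_key_unit ();
gst_event_set_force_key_unit_running_time (event, running_time);
gst_event_set_force_key_unit_all_headers (event, TRUE);
gst_pad_push_event (srcpad, event);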
all-headers is valid both downstream and upstream... I regret a bit making all-headers default to false if it's not present (it should probably have been true). I'm not sure the code that currently exists would work with BOTH events (although I may be mistaken).
The code that currently exists would have to be updated for the new, "official" event anyway so that's no problem IMHO. FWIW I like Tim's proposal. Let's get this done! :)
Created attachment 190233 [details] [review] Add new API for the GstForceKeyUnit event
That's my proposal for the new API. I don't have enough knowledge of the use cases for the upstream event, and more fields might be needed for it.
What's the count for? I'd replace "all-headers" with "no-headers"; I now think the default should be all-headers.
(In reply to comment #18)
> What's the count for?

As a downstream event, this event is very useful for fragmented streaming: it makes it possible to synchronise multi-bitrate streams by sending a GstForceKeyUnit event downstream from the source element at a fixed interval (e.g. creating fragments of 10 seconds for X different bitrates from the same source). Some of the fragmented streaming technologies rely on producing the same playlist for the n different bitrates, like the HTTP Live Streaming draft from Apple. This event ensures for all the streams that all fragments will have the same duration and will be cut at the same position, but the sink that writes the playlist needs to know the fragment's count/index.

The "count" field would store the number of forced keyframes, and implicitly the number of fragments that have been created since the start of the live stream. If at some point of the live stream a new bitrate needs to be added dynamically (or one of the encoders needs to be restarted or went out of sync), the new sink will know the index of the incoming fragment from the count field and will be able to create the same playlist as the other sinks that are using the same source.
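To illustrate, a playlist-writing sink could recover the fragment index roughly like this ("count" is only the field name proposed here, and start_new_fragment() is a hypothetical helper):

const GstStructure *s = gst_event_get_structure (event);
guint count = 0;

if (gst_structure_has_name (s, "GstForceKeyUnit") &&
    gst_structure_get_uint (s, "count", &count)) {
  /* cut the stream here and use 'count' as the fragment index in the playlist */
  start_new_fragment (sink, count);
}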
OK, the count field makes sense, just make sure it's well documented.
Created attachment 191233 [details] [review]
Add API for GstForceKeyUnit event

Changed INT64 to UINT64 for setting clock times in the structure and fixed the check in is_force_key_unit.
Suggested additions to the upstream event for slice loss:

"picture-id" UINT (corresponds to frame_num in H.264, see H.271)
"first-mb-lost" UINT
"last-mb-lost" UINT

All of these are optional; that way, an element that doesn't understand slice loss can repair with a full keyframe. I should mention that I haven't implemented any of this or tested to see if it really works.
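If those fields were added, an upstream event carrying slice-loss info could be built along these lines (purely a sketch of the proposal above; none of these fields exist in the current event):

event = gst_event_new_custom (GST_EVENT_CUSTOM_UPSTREAM,
    gst_structure_new ("GstForceKeyUnit",
        "picture-id", G_TYPE_UINT, picture_id,      /* frame_num of the damaged picture */
        "first-mb-lost", G_TYPE_UINT, first_mb,     /* first damaged macroblock */
        "last-mb-lost", G_TYPE_UINT, last_mb,       /* last damaged macroblock */
        NULL));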
Based on my work on https://bugzilla.gnome.org/show_bug.cgi?id=654799, I propose the following changes to the API:

* add a running_time field to the upstream event
* add a count field to the upstream event

These two would make it easier to create chunks (like in HLS) when muxing multiple streams. The running_time would let you specify what exactly goes in the next chunk: encoders would wait until running_time before pushing the event and forcing a new keyframe. The count field is needed in case you want to start a playlist at an alternative bitrate mid-stream and you need to specify at what $count the playlist starts.

If people agree, I'm willing to do the changes to move this forward a bit.
Would it be possible to not increase the "count" if the request is just being re-sent (without having received valid data in between)? Or would that be an abuse? My use case is that for RTCP FIR requests there is a sequence number that needs to be increased when a new request is sent, but not when re-sending the same one. If I can't use the "count" field for that, then I need to introduce a new GstReceivedKeyframe event, which sounds very evil.
It should be allowed to re-send events with the same count, yes.
Created attachment 196677 [details] [review]
Additions + fixes on top of the original patch

Here are some fixes I did on top of the original patch. I made the _parse functions accept NULL and I added running_time and count as args to gst_video_event_new_upstream_force_key_unit. With running_time in the upstream event it's now possible to ask for a keyframe at an accurate time, whereas before, sending an upstream event could only mean "give me a keyframe ASAP". In addition to being nicer (imo), this makes the implementation in muxers easier.
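With those additions, a muxer wanting the next fragment boundary at a specific running time could do roughly the following (mux->sinkpad, next_fragment_running_time and fragment_count are hypothetical names; argument order assumed to be running_time, all_headers, count as in the patch):

/* ask upstream for a key unit at the running time where the next
 * fragment should start; all_headers = TRUE, count = fragment index */
gst_pad_push_event (mux->sinkpad,
    gst_video_event_new_upstream_force_key_unit (next_fragment_running_time,
        TRUE, fragment_count));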
Is this ready now? We've gone 2 months without a comment.
(In reply to comment #27)
> Is this ready now? We've gone 2 months without a comment.

I have some minor changes on top of my last patch and I also implemented the API in h264parse. It's all on my github repos now (github.com/alessandrod); I'll rebase and attach new patches soonish.

My idea was to wait for the next -base release and commit this. Considering this release cycle is taking a bit longer than usual though, we could maybe just commit it now. Another idea would be to move the API to -bad (now that bad has a video lib too...) and iterate on it there for a while.
What exactly is the h264parse code doing with the event? Will it add the SPS/PPS before the next keyframe if all_headers==TRUE and there's no new SPS/PPS before the next keyframe? Also, are there already some changes to basevideoencoder to handle the new, complex event?
Created attachment 201621 [details] [review] Key unit event handling for gst-omx
(In reply to comment #30)
> Created an attachment (id=201621) [details] [review]
> Key unit event handling for gst-omx

This also has the changes for basevideoencoder.
Any other progress here?
commit 848711706bb4e039287e8206417771afbad33c6f
Author: Alessandro Decina <alessandro.d@gmail.com>
Date:   Tue Nov 29 08:49:53 2011 +0100

    libgstvideo: minor fixes to key unit events

    Make out args to gst_video_event_parse_{downstream|upstream}_force_key_unit
    optional, update libgstvideo.def and fix docs a bit.

    API: gst_video_event_new_upstream_force_key_unit
    API: gst_video_event_new_downstream_force_key_unit
    API: gst_video_event_is_force_key_unit
    API: gst_video_event_parse_upstream_force_key_unit
    API: gst_video_event_parse_downstream_force_key_unit

    https://bugzilla.gnome.org/show_bug.cgi?id=607742

commit df44e771f138bbe0634bd74403ea02e4e4173ce3
Author: Andoni Morales Alastruey <ylatuya@gmail.com>
Date:   Sun Jun 5 01:49:38 2011 +0200

    libgstvideo: Add force key unit events
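For reference, minimal usage of the API as committed (sinkpad and running_time are placeholders; check the libgstvideo headers for the exact signatures):

/* upstream: an element such as a payloader requesting a key unit */
gst_pad_push_event (sinkpad,
    gst_video_event_new_upstream_force_key_unit (running_time,
        TRUE /* all-headers */, 0 /* count */));

/* downstream: an encoder or parser handling the event on its sink pad */
if (gst_video_event_is_force_key_unit (event)) {
  GstClockTime timestamp, stream_time, running_time;
  gboolean all_headers;
  guint count;

  if (gst_video_event_parse_downstream_force_key_unit (event,
          &timestamp, &stream_time, &running_time, &all_headers, &count)) {
    /* make sure the next output frame is a key unit,
     * re-sending headers first if all_headers is TRUE */
  }
}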
Slomo: can we close this now, or do we still need to commit the gst-omx and basevideoencoder patches?
Close this now, this bug is against -base :) I'll take care of the basevideoencoder and gst-omx changes.