GNOME Bugzilla – Bug 641729
[vp8enc] Add properties to control the keyframe distance
Last modified: 2011-02-08 23:53:53 UTC
Created attachment 180308 [details] [review] add property 'min-keyframe-distance'

The following patches add the properties 'min-keyframe-distance' and 'keyframe-placement-mode'. The spec doesn't document it, but min_kf_distance can only be 0 or equal to max_kf_distance; otherwise the encoder will not init. [1] Would a 'fixed-keyframe-distance' property make more sense instead? Or should I keep 'min-keyframe-distance' in case it can be used with a different combination of values in the future?

[1] http://www.webmproject.org/tools/vp8-sdk/structvpx__codec__enc__cfg.html#a0a7b5444ecb09745cbe8d5af17553846
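For reference, a minimal sketch of how the properties map onto the vpx_codec_enc_cfg fields from the SDK (illustrative only, not the patch itself):

  #include <vpx/vpx_encoder.h>

  static void
  configure_keyframes (vpx_codec_enc_cfg_t *cfg, unsigned int max_dist)
  {
    cfg->kf_mode = VPX_KF_AUTO;
    cfg->kf_max_dist = max_dist;
    /* Undocumented constraint: kf_min_dist must be either 0 or equal
     * to kf_max_dist, otherwise vpx_codec_enc_init() rejects the
     * configuration. */
    cfg->kf_min_dist = 0; /* or max_dist; nothing in between works */
  }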
Created attachment 180309 [details] [review] add property 'keyframe-placement-mode'
If min-keyframe-distance can only be 0 or max, wouldn't it make sense to turn it into a third mode? So you'd have "auto", "disabled" and "fixed" instead.
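Something like this, as a sketch (the names are just a suggestion, not taken from any patch):

  /* Hypothetical enum for the proposed property; the three values
   * mirror the modes suggested above. */
  typedef enum {
    GST_VP8_ENC_KF_MODE_AUTO,     /* encoder decides keyframe placement */
    GST_VP8_ENC_KF_MODE_DISABLED, /* never force a keyframe */
    GST_VP8_ENC_KF_MODE_FIXED     /* keyframe exactly every N frames */
  } GstVP8EncKfMode;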
(In reply to comment #2)
> If min-keyframe-distance can only be 0 or max, wouldn't it make sense to
> turn it into a third mode? So you'd have "auto", "disabled" and "fixed"
> instead.

That makes more sense. Should I go for this option? This is the relevant part of the code in libvpx:
http://review.webmproject.org/gitweb?p=libvpx.git;a=blob;f=vp8/vp8_cx_iface.c;h=b23bd951df6a7ddcb3fde2d6a51c5f372b593690;hb=HEAD#l168
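In rough form, the check there amounts to something like this (paraphrasing, not a verbatim quote; see the link for the real code):

  #include <vpx/vpx_encoder.h>

  /* Paraphrase of the linked validation: any kf_min_dist other than
   * 0 or kf_max_dist is rejected. */
  static vpx_codec_err_t
  validate_kf_distance (const vpx_codec_enc_cfg_t *cfg)
  {
    if (cfg->kf_min_dist != 0 && cfg->kf_min_dist != cfg->kf_max_dist)
      return VPX_CODEC_INVALID_PARAM;
    return VPX_CODEC_OK;
  }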
Created attachment 180311 [details] [review] add property 'keyframe-placement-mode'

Added a new mode 'fixed' to force the keyframe distance.
Created attachment 180312 [details] [review] add property 'keyframe-placement-mode'

Changed the placement of the enum.
Created attachment 180313 [details] [review] add property 'keyframe-placement-mode'

Fixed a typo: the default value should be 'auto'.
I can't think of a reason why a fixed GOP length would ever be preferred over automatic placement. Perhaps you could enlighten me?
For the same reason you need it with H.264 when you are doing fragmented streaming: having equispaced keyframes produces independently decodable fragments of the same duration and lets you synchronise streams of different qualities. I have also submitted a patch for libvpx, because it doesn't respect the fixed keyframe distance. [1]

[1] http://review.webmproject.org/#change,1708
You don't need or want equispaced keyframes for fragmented streaming.
For a live multi-bitrate setup you need it. How would you produce a live stream with X different qualities and synchronise them? Note that it should support adding a new stream with a different quality on the fly, synchronised with the existing ones, or re-synchronising an encoder that had to be restarted. And everything is running in a distributed system where the encoders don't know each other and there is a single point of synchronisation: the live source.

For me the obvious option is knowing when the next keyframe will be inserted with respect to the frame offset produced by the source. By setting a fixed keyframe distance you know that all the encoders are going to produce a keyframe every 25 frames, for example, and then you can synchronise the encoders on a multiple of 25. The same goes for the segmenters: knowing that you produce segments of 10 seconds, you can synchronise them on multiples of 25 * 10 frames.

If this option is supported by the encoder and documented in the SDK, why shouldn't it be available?
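To make the arithmetic concrete, an illustrative sketch (plain C; the numbers match the example above, the names are made up):

  #include <stdio.h>

  int main (void)
  {
    const int kf_dist = 25;                   /* fixed keyframe distance (frames) */
    const int seg_sec = 10;                   /* segment length in seconds */
    const int seg_frames = kf_dist * seg_sec; /* 250-frame segments */
    int offset = 1234;                        /* frame offset from the live source */

    /* A restarted encoder joins at the next keyframe boundary ... */
    int next_kf = ((offset + kf_dist - 1) / kf_dist) * kf_dist;
    /* ... and a segmenter cuts at the next multiple of 25 * 10 frames. */
    int next_seg = ((offset + seg_frames - 1) / seg_frames) * seg_frames;

    printf ("next keyframe at frame %d, next segment at frame %d\n",
            next_kf, next_seg);
    return 0;
  }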
> How would you produce a live stream with X different qualities and
> synchronise them? Note that it should support adding a new stream with a
> different quality on the fly, synchronised with the existing ones, or
> re-synchronising an encoder that had to be restarted. And everything is
> running in a distributed system where the encoders don't know each other
> and there is a single point of synchronisation: the live source.

The implementation details of flumotion are uninteresting to me. However, I do know that it uses gdppay/gdpdepay internally, which means that you can send a GstForceKeyUnit event periodically to all of the encoders, which seems like a darn good way to implement whatever keyframe structure you want and be guaranteed that it's synchronized across all encoders.

We're trying to move toward a more uniform encoder interface in GStreamer. Adding an encoder-specific feature is a step in the wrong direction.
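Sending such an event could look roughly like this (a sketch assuming the 0.10 GstForceKeyUnit custom-event convention; treat the structure fields as an approximation):

  #include <gst/gst.h>

  /* Sketch: ask an encoder for a keyframe via the GstForceKeyUnit
   * custom upstream event. */
  static gboolean
  request_keyframe (GstElement *encoder)
  {
    GstStructure *s = gst_structure_new ("GstForceKeyUnit",
        "all-headers", G_TYPE_BOOLEAN, TRUE, NULL);
    GstEvent *event = gst_event_new_custom (GST_EVENT_CUSTOM_UPSTREAM, s);
    /* gst_element_send_event() pushes upstream events on a source pad,
     * so the encoder itself gets to handle the event. */
    return gst_element_send_event (encoder, event);
  }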
That's a much better answer. Thanks