GNOME Bugzilla – Bug 582166
videoencoder: Implement QoS support
Last modified: 2017-10-25 10:18:55 UTC
Hi,

---steps to reproduce:
gst-launch-0.10 -v videotestsrc ! "video/x-raw-yuv, framerate=(fraction)25/1, format=(fourcc)I420, width=1920, height=1080" ! ffenc_ljpeg buffer-size=2000000 ! queue ! ffdec_mjpeg ! queue ! videoscale ! "video/x-raw-yuv, width=384, height=216" ! identity ! xvimagesink

---error:
It seems to drop 99% of the frames.

---more infos:
It works (no drops) if I replace xvimagesink with fakesink:
gst-launch-0.10 -v videotestsrc ! "video/x-raw-yuv, framerate=(fraction)25/1, format=(fourcc)I420, width=1920, height=1080" ! ffenc_ljpeg buffer-size=2000000 ! queue ! ffdec_mjpeg ! queue ! videoscale ! "video/x-raw-yuv, width=384, height=216" ! fakesink sync=1

Maybe something is not working with the QoS. (Note that I do not know another decoder than ffdec_mjpeg that can decode lossless JPEG.)

Julien
fakesink never drops data. The ffmpeg encoders don't implement QoS yet, which is likely the reason why this pipeline cannot keep realtime performance.
So why does it work with a smaller video size at the source?

gst-launch-0.10 -v videotestsrc ! "video/x-raw-yuv, framerate=(fraction)25/1, format=(fourcc)I420, width=1000, height=900" ! ffenc_ljpeg buffer-size=2000000 ! queue ! ffdec_mjpeg ! queue ! videoscale ! "video/x-raw-yuv, width=384, height=216" ! identity ! xvimagesink

(With fakesink, even if the CPU usage is high, it seems to be realtime.)
With a smaller video size, there is enough CPU power left to do this in realtime.
What kind of things would the encoder do? Also, aren't the elements upstream of the encoder already handling QoS?
> What kind of things would the encoder do?

It could simply skip input frames instead of encoding them, if downstream (a sink syncing to the clock) has indicated via upstream QoS events that they would be too late anyway.

> Also, aren't the elements upstream of the encoder already handling QoS?

Which elements? Sinks? They will just drop stuff that's too late, but that would kind of defeat the purpose if the problem is that the encoder is too slow.
Same for muxers of course - the element that's causing the most overhead should drop stuff.
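For readers following along, here is a minimal sketch of the mechanism being discussed (GStreamer 1.x API; my_element_src_event is a placeholder, not code from the patches in this bug): a sink that syncs to the clock reports lateness by sending QoS events upstream, and an element that wants to shed work can parse them like this.

#include <gst/gst.h>

/* Sketch: learning about lateness from the QoS events a clock-synced sink
 * sends upstream. "my_element" is a placeholder element. */
static gboolean
my_element_src_event (GstPad * pad, GstObject * parent, GstEvent * event)
{
  if (GST_EVENT_TYPE (event) == GST_EVENT_QOS) {
    GstQOSType type;
    gdouble proportion;
    GstClockTimeDiff diff;
    GstClockTime timestamp;

    gst_event_parse_qos (event, &type, &proportion, &diff, &timestamp);
    /* A positive diff means the buffer with this timestamp was rendered
     * that many nanoseconds late; timestamp + diff is roughly the earliest
     * running time that would still have been on time, so work on anything
     * older can be skipped by the element causing the overhead. */
  }

  /* Forward the event further upstream as usual. */
  return gst_pad_event_default (pad, parent, event);
}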
Maybe this should just be moved to -base for GstVideoEncoder?
Yes
Note, this should probably be enabled per codec, since it would break Theora/Ogg timing, IIRC?
Not 100% sure. Ogg requires a fixed framerate because of the way timestamps are encoded in the granulepos. But if you have a fixed framerate and just a few frames are missing in some places, it can still handle that. It might just not be according to the spec.
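As a rough illustration of the granulepos constraint mentioned above (a simplified sketch of the Theora mapping, not text or code from this bug): the granulepos packs the last keyframe's frame number plus the offset since that keyframe, and the timestamp is recovered purely by counting frames against the nominal framerate, so skipped frames shift every later timestamp.

#include <gst/gst.h>

/* Simplified sketch of the Theora granulepos -> timestamp mapping.
 * granuleshift, fps_n and fps_d come from the stream headers. */
static GstClockTime
granulepos_to_time (guint64 granulepos, guint granuleshift, gint fps_n, gint fps_d)
{
  guint64 keyframe_no = granulepos >> granuleshift;
  guint64 delta = granulepos & ((G_GUINT64_CONSTANT (1) << granuleshift) - 1);
  guint64 frame_no = keyframe_no + delta;

  /* Time is simply the frame count divided by the nominal framerate, so a
   * missing frame shifts all later timestamps by one frame duration. */
  return gst_util_uint64_scale (frame_no, GST_SECOND * fps_d, fps_n);
}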
Main question is just whether the (our) decoder can differentiate between 'we lost some data/frames, resync needed' and 'we skipped a few frames, but data integrity is maintained, we can just continue decoding'.
(In reply to comment #11)

> Main question is just whether the (our) decoder can differentiate between 'we lost some data/frames, resync needed' and 'we skipped a few frames, but data integrity is maintained, we can just continue decoding'.

I would expect this to be transparent to the decoder. A few frames will be duplicated or have a longer duration, but that would be about it, no?
I started implementing this. I didn't have a chance to test it extensively, so it's not yet ready for merging, but I'd appreciate some feedback. I basically re-implemented the same logic as in gstvideodecoder.c.
Created attachment 361500: videoencoder test: properly name the encoder variable

The element is an encoder, so calling it 'dec' makes things confusing.
Created attachment 361501: videoencoder: implement QoS

It allows encoders to detect and drop input frames which are already late, to increase the chance of the pipeline catching up. The QoS logic and code are directly copied from gstvideodecoder.c.
Attachment 361500 pushed as 9264a29 - videoencoder test: properly name the encoder variable
Attachment 361501 pushed as bcca3b9 - videoencoder: implement QoS
Does this enable QoS by default? If so, I'm not sure that's a good idea. I think it should be opt-in. Decoders + video filters having QoS enabled is already leading to confusion in transcoding pipelines.
The QoS events are handled by default, yes, but the base class just keeps stats and forwards the event. It's up to the subclass to actually drop late frames (by calling gst_video_encoder_finish_frame() with frame->output_buffer == NULL). Maybe we could add a qos property on the encoder base class that the subclass could check before dropping? Or QoS events wouldn't be handled at all if qos=FALSE, and then gst_video_encoder_get_max_encode_time() would always return G_MAXINT64.
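To make that concrete, here is a minimal sketch of a subclass handle_frame() using the two calls mentioned above (my_enc_handle_frame is a placeholder, not one of the attached patches): a frame is dropped by finishing it while output_buffer is still NULL.

#include <gst/video/video.h>

/* Sketch: QoS-aware frame handling in a GstVideoEncoder subclass. */
static GstFlowReturn
my_enc_handle_frame (GstVideoEncoder * encoder, GstVideoCodecFrame * frame)
{
  /* Time left to encode this frame according to the QoS stats; negative
   * means the frame is already late, G_MAXINT64 means no QoS data yet. */
  GstClockTimeDiff deadline =
      gst_video_encoder_get_max_encode_time (encoder, frame);

  if (deadline < 0) {
    GST_DEBUG_OBJECT (encoder, "dropping late frame %u",
        frame->system_frame_number);
    /* output_buffer is still NULL, so this marks the frame as dropped. */
    return gst_video_encoder_finish_frame (encoder, frame);
  }

  /* ... actually encode the frame and set frame->output_buffer ... */

  return gst_video_encoder_finish_frame (encoder, frame);
}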
I think a property is something we want. Right now some decoders always do it, and some decoders never do it and offer no control over it; the encoder inherits this little gap. But I'd file a separate bug.
I opened bug #789467 about adding the qos property.
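Assuming that bug lands the property as a plain boolean named "qos" on the GstVideoEncoder base class, applications would opt in like with any other GObject property; a hypothetical helper:

#include <gst/gst.h>

/* Sketch: opting in to QoS-based frame dropping, assuming a boolean "qos"
 * property exists on the encoder (as proposed in bug #789467). */
static void
enable_encoder_qos (GstElement * encoder)
{
  if (g_object_class_find_property (G_OBJECT_GET_CLASS (encoder), "qos"))
    g_object_set (encoder, "qos", TRUE, NULL);
}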