GNOME Bugzilla – Bug 720104
videorate: Should be able to duplicate frames in a live stream
Last modified: 2018-11-03 11:26:58 UTC
Currently the videorate component only inserts a set of duplicated frames when it receives a new buffer whose timestamp lies in the past relative to the frames it should already have produced. If you request a framerate of 20 fps and your input stream is, say, 1 fps, then each time videorate receives a frame it sends a burst of 19 duplicates of the previous frame, with proper timestamps spread over the last second. While this behaviour is fine for non-live streams, for live streams it creates a lag in your pipeline: there you expect a new frame to be inserted every 50 ms (for a target of 20 fps).
Currently, the only way to reasonably use videorate with a live stream is with drop-only=true and average-period!=0.
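For reference, a minimal sketch of that workaround in C; the element handle and the one-second averaging period are illustrative assumptions, not values from this report:

/* Sketch of the drop-only workaround: videorate never duplicates,
 * only drops, and averages the observed rate over a window.
 * The one-second period is an arbitrary example value. */
GstElement *videorate = gst_element_factory_make ("videorate", NULL);

g_object_set (videorate,
    "drop-only", TRUE,              /* never duplicate frames */
    "average-period", GST_SECOND,   /* non-zero averaging window (ns) */
    NULL);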
We may want to detect automatically if the stream is live based on the latency query.
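A hedged sketch of how such a check could look, using the standard latency query (this is an assumption about how the detection might be wired up, not code from the element):

/* Possible liveness detection via the latency query: a live
 * upstream answers the query with live = TRUE. */
static gboolean
sink_is_live (GstPad * sinkpad)
{
  GstQuery *query = gst_query_new_latency ();
  gboolean live = FALSE;

  if (gst_pad_peer_query (sinkpad, query)) {
    GstClockTime min_latency, max_latency;

    gst_query_parse_latency (query, &live, &min_latency, &max_latency);
  }
  gst_query_unref (query);

  return live;
}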
Ideally you would implement timeouts in videorate: once a timeout expires, it just duplicates the last frame and considers the missing frame too late and gone.
Created attachment 271605 [details] [review]
videorate: Duplication of the last received buffer on timeout (release 1.2.3)

I have been working on duplicating the last frame on timeout, where the timeout is computed as a user-defined number of lost frames multiplied by the duration of a frame. If a real buffer arrives while the duplication process is ongoing, we stop duplicating. Here is the patch that does this. Can you have a look at it and make some suggestions?
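As a reading aid, the timeout described above would amount to something like the following sketch; the max_lost_frames and from_rate_* field names are assumptions made for this illustration, not verified against the patch:

/* Sketch of the timeout computation: a user-defined number of
 * lost frames times the input frame duration. */
GstClockTime frame_duration, timeout;

frame_duration = gst_util_uint64_scale (GST_SECOND,
    videorate->from_rate_denominator, videorate->from_rate_numerator);
timeout = videorate->max_lost_frames * frame_duration;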
Does your patch apply on master? It won't be possible to add it to 1.2, since we don't allow new features in stable branches (unless we have a good reason to do so).
Review of attachment 271605 [details] [review]:

::: gst/videorate/gstvideorate.c
@@ +256,3 @@
+ "Maximum time in number of lost buffers"
+ " - will be multiplied with the buffer's duration", 0, 10, 0,
+ G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | G_PARAM_CONSTRUCT));

Timeout shall match the expected framerate/frame duration, not a user-driven input.

@@ +677,3 @@
+ /* The waiting time is expressed in microseconds, so we have to divide
+  * the timeout (expressed in nanoseconds) by 1000 */
+ end_time = g_get_monotonic_time () + videorate->timeout / 1000;

You should use the pipeline clock. Also gst_clock_id_wait_async().
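For illustration, scheduling the deadline on the pipeline clock with gst_clock_id_wait_async() could look roughly like this; the clock_id and duplicate_pending fields and the schedule_duplicate_timeout() helper are assumptions for this sketch, not names from the patch:

/* Sketch: schedule the duplication deadline on the pipeline clock
 * instead of g_get_monotonic_time(). */
static gboolean
duplicate_timeout_cb (GstClock * clock, GstClockTime time,
    GstClockID id, gpointer user_data)
{
  GstVideoRate *videorate = user_data;

  /* Signal the streaming thread to duplicate the last buffer;
   * pushing directly from this clock callback would be unsafe. */
  g_atomic_int_set (&videorate->duplicate_pending, TRUE);
  return TRUE;
}

static void
schedule_duplicate_timeout (GstVideoRate * videorate,
    GstClockTime running_time)
{
  GstClock *clock = gst_element_get_clock (GST_ELEMENT (videorate));
  GstClockTime base_time =
      gst_element_get_base_time (GST_ELEMENT (videorate));

  if (clock == NULL)
    return;                     /* no clock selected yet */

  /* GstClock deadlines are absolute: base time plus the running
   * time at which the duplicate is due. */
  videorate->clock_id = gst_clock_new_single_shot_id (clock,
      base_time + running_time + videorate->timeout);
  gst_clock_id_wait_async (videorate->clock_id, duplicate_timeout_cb,
      videorate, NULL);
  gst_object_unref (clock);
}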
(In reply to comment #6)
> Review of attachment 271605 [details] [review]:
>
> ::: gst/videorate/gstvideorate.c
> @@ +256,3 @@
> + "Maximum time in number of lost buffers"
> + " - will be multiplied with the buffer's duration", 0, 10, 0,
> + G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS | G_PARAM_CONSTRUCT));
>
> Timeout shall match the expected framerate/frame duration, not a user-driven
> input.

By choosing a timeout that matches the expected framerate/frame duration, if the upstream buffers do not arrive at regular intervals (e.g. if a decoder precedes the videorate element) and the downstream element is capable of buffering some frames, we might insert duplicate frames and drop the real buffers even though this is not actually necessary. The quality of the stream would thus be highly degraded.

I was thinking of computing a timeout in such a way that a buffer generated beyond this time would automatically be considered too late by the downstream element (by using the PTS, the segment values, the running time of the pipeline clock, etc.). Is such a calculation even possible?

> @@ +677,3 @@
> + /* The waiting time is expressed in microseconds, so we have to divide
> +  * the timeout (expressed in nanoseconds) by 1000 */
> + end_time = g_get_monotonic_time () + videorate->timeout / 1000;
>
> You should use the pipeline clock. Also gst_clock_id_wait_async().

I am now using the pipeline clock, and I trigger a periodic notification that I reinitialise at each buffer reception.
To be more specific, here is my use case: I would like to insert black images (or whatever predefined image) on timeout in case of a signal loss, and insert the real frames as soon as we get the signal back, without stopping the pipeline. Duplicating frames seemed to be a good way to start, as we can duplicate the last frame until a time threshold is met and only then insert the black/predefined image.

In my pipeline, the videorate element is preceded by a decoder that doesn't deliver the frames at a regular pace, i.e. the time between the reception of two consecutive frames is not equal to the frame's duration. Here is an example (frame_duration = 40 ms):

Reception time
1. 0:00:03.010784018 got buffer with timestamp 0:00:03.124476778
2. 0:00:03.332974336 got buffer with timestamp 0:00:03.164476778
3. 0:00:03.373752503 got buffer with timestamp 0:00:03.204476778
4. 0:00:03.410815013 got buffer with timestamp 0:00:03.284476778
5. 0:00:03.427801185 got buffer with timestamp 0:00:03.324476778

We can see here that packet no. 2 arrives about 320 ms later than packet no. 1, while the third and fourth packets arrive at a 40 ms interval, and finally that packet no. 5 arrives 17 ms after packet no. 4. If I set a timeout according to the expected frame rate (that is, inserting a frame every 40 ms), duplicate frames will have been inserted before packet no. 2 is received. If the output frame rate equals the input frame rate, packets no. 2-5 will then be dropped because their timestamps are already too old. This also means that we had a regular buffer injection thanks to the timeout mechanism, followed by at least 100 ms with no buffer injection at all due to the incoming buffers being dropped. This doesn't seem to be a correct way of handling live streams: we've replaced real buffers with duplicate ones without getting round the lag problem.

Do you see a solution for this use case? What would be a more appropriate approach? For a regular output I guess I could modify the videorate element such that, instead of dropping frames 2-5, it inserts them with the next theoretical timestamps, but this artificially extends the duration of the input signal, and I am not familiar enough with GStreamer to grasp the impact on the next elements in my pipeline.
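Regarding the "too late" calculation asked about in the previous comment, GStreamer does let you compare a buffer's running time against the clock. A hedged sketch, where the latency parameter stands in for the downstream latency obtained from a latency query (an assumption, not part of the patch):

/* Sketch of a "would this buffer be too late downstream?" test
 * using the segment, the PTS and the pipeline clock. */
static gboolean
buffer_is_too_late (GstElement * element, const GstSegment * segment,
    GstClockTime pts, GstClockTime latency)
{
  GstClock *clock = gst_element_get_clock (element);
  GstClockTime buffer_rt, clock_rt;
  gboolean late = FALSE;

  if (clock == NULL)
    return FALSE;

  /* Running time at which the buffer is due to be rendered */
  buffer_rt = gst_segment_to_running_time (segment, GST_FORMAT_TIME, pts);

  /* Current running time of the pipeline */
  clock_rt = gst_clock_get_time (clock) -
      gst_element_get_base_time (element);

  if (GST_CLOCK_TIME_IS_VALID (buffer_rt))
    late = clock_rt > buffer_rt + latency;

  gst_object_unref (clock);
  return late;
}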
So I ran into this problem, which is described in great detail by Andreea, but the attached patch no longer cleanly applies to git master. I've attached a new version of Andreea's patch which is compatible with 1.7.90, for future reference. I did not, however, make the changes suggested by Sebastian and Nicolas. Furthermore, Andreea's patch uses the incoming framerate to determine whether the timeout should fire, but this does not solve the problem for forced low framerates (e.g. when the input framerate is 1 fps and the output is 25 fps, this patch does not help at all, since you would still wait a multiple (the "timeout" property) of input frame durations). In that case, I suggest we use the output framerate to set the timer instead, as sketched below. I'll start thinking about how to tackle that use case. Thanks and credits @Andreea for the patch!
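A minimal sketch of that suggestion, deriving the timer period from the output framerate; the to_rate_* and timeout field names are assumptions for this illustration:

/* Sketch: derive the duplication period from the *output* framerate
 * so that a 1 fps input forced to 25 fps still produces a duplicate
 * every 40 ms. */
GstClockTime output_frame_duration = gst_util_uint64_scale (GST_SECOND,
    videorate->to_rate_denominator, videorate->to_rate_numerator);

/* e.g. re-arm the clock wait with this period instead of a
 * multiple of the input frame duration */
videorate->timeout = output_frame_duration;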
Created attachment 323592 [details] [review]
videorate: Duplication of the last received buffer on timeout (git master 1.7.90)
Btw, such a timeout implementation exists in the GstAggregator base class. The current implementation for videorate is incorrect, as stated before. It should also take the upstream latency into account.
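For context, GstAggregator's timeout behaviour is driven by its latency configuration: in a live pipeline, the subclass's aggregate() vfunc is invoked with timeout = TRUE once the configured deadline passes. A minimal illustration, where the 40 ms value is an arbitrary assumption (one frame at 25 fps):

/* GstAggregator subclasses inherit a "latency" property (in
 * nanoseconds).  In a live pipeline, once this deadline expires the
 * base class calls aggregate() with timeout = TRUE, letting the
 * subclass produce output with whatever data it has. */
g_object_set (aggregator, "latency", (guint64) 40 * GST_MSECOND, NULL);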
-- GitLab Migration Automatic Message --
This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/97.