Bug 703348 - glimagesink: Add support for synchronizing the video to VSYNC
Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: gst-plugins-base
Version: git master
Hardware/OS: Other Linux
Importance: Normal enhancement
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
 
 
Reported: 2013-06-30 12:42 UTC by Sebastian Dröge (slomo)
Modified: 2018-11-03 11:25 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description Sebastian Dröge (slomo) 2013-06-30 12:42:50 UTC
Could be done similarly to what is done in the audio sinks: there would be a new thread that just renders on each VSYNC and gets the latest frame to be rendered from the streaming thread.
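A rough sketch of that threading model (plain pthreads, not actual glimagesink/GStreamer code; Frame, submit_frame(), render_frame() and the simulated wait_for_vsync() are hypothetical placeholders):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical frame handle; in glimagesink this would be a buffer/texture. */
typedef struct { int id; } Frame;

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static Frame *latest_frame = NULL;  /* most recent frame from the streaming thread */
static bool running = true;

/* Stand-in for a real VSYNC wait (GLX video sync, Wayland frame callback, ...):
 * simulated here as a fixed 60 Hz sleep so the sketch is self-contained. */
static void
wait_for_vsync (void)
{
  struct timespec t = { 0, 16666667 };
  nanosleep (&t, NULL);
}

/* Stand-in for the actual draw + buffer swap. */
static void
render_frame (Frame *frame)
{
  printf ("rendering frame %d\n", frame->id);
}

/* Called on the streaming thread for every incoming buffer: only swap the
 * pending frame, never block on rendering. */
void
submit_frame (Frame *frame)
{
  pthread_mutex_lock (&lock);
  latest_frame = frame;
  pthread_mutex_unlock (&lock);
}

/* Dedicated render thread: wakes once per VSYNC and draws whatever frame is
 * most recent at that moment, much like the audio sink's ring buffer thread
 * pulls the newest samples. */
void *
render_thread (void *user_data)
{
  (void) user_data;
  while (running) {
    wait_for_vsync ();
    pthread_mutex_lock (&lock);
    Frame *frame = latest_frame;
    pthread_mutex_unlock (&lock);
    if (frame)
      render_frame (frame);
  }
  return NULL;
}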
Comment 1 Edward Hervey 2013-07-01 05:30:13 UTC
It might be good to mention the existing/future elements that could benefit from this. The ones that come to mind are all the GL-based ones and wayland. Maybe some fb-based ones too? Do any others come to mind? Specific hardware sinks? v4l2?

Also note that for this to work efficiently you need some kind of time reference. "On each VSYNC" is very vague: is it before? Just in time? When it happens (so that what you return will only be shown on the next VSYNC)? etc.

The bare minimum would be to have feedback from the system on when a certain frame was actually displayed (frame F1 that was handed over at time T1 was displayed at time P1, F2 from T2 at P2, ...). Based on that, you can calculate an average latency between T (the time when the callback asks for a frame) and P (the time when that frame is actually presented). This is vaguely similar to the audio sink's segment latency.
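A sketch of that bookkeeping (hypothetical helper names, not an existing GStreamer API; times in nanoseconds): record_presentation() would be fed by the windowing system's frame-presented feedback, and average_latency() is what the sink would report upstream.

#include <stdint.h>

#define LATENCY_WINDOW 32   /* number of T->P samples to average over */

static int64_t samples[LATENCY_WINDOW];
static unsigned n_samples = 0;
static unsigned next_slot = 0;

/* Called when the system reports that the frame submitted at submit_time (T)
 * was actually presented at present_time (P). */
void
record_presentation (int64_t submit_time, int64_t present_time)
{
  samples[next_slot] = present_time - submit_time;
  next_slot = (next_slot + 1) % LATENCY_WINDOW;
  if (n_samples < LATENCY_WINDOW)
    n_samples++;
}

/* Average submit-to-presentation latency, analogous to the audio sink's
 * segment latency. Returns 0 until feedback has arrived. */
int64_t
average_latency (void)
{
  int64_t sum = 0;

  if (n_samples == 0)
    return 0;
  for (unsigned i = 0; i < n_samples; i++)
    sum += samples[i];
  return sum / (int64_t) n_samples;
}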

If you do not have the above, the presentation will always be off by some unknown/variable latency (and therefore brings no advantage over the current "push-based" video sink systems).

Bonus points if the underlying system gives you the *actual* rate at which frames will be presented (from the hardware). This allows you to fine-tune which frame you will be presenting at a given time (and also to adjust the afore-mentioned latency more consistently). Note that you could estimate this from the rate at which your "vsync" callback happens, but that is not as accurate (it can block or be delayed) as getting the actual rate.
It would be awesome to finally display 24000/1001 progressive (film) content on a 60/1 display "spot on" :)
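To illustrate how the known refresh timing and measured latency would be combined (a hypothetical helper, not part of any existing sink API), the render thread could pick the source frame whose timestamp is nearest to the moment the upcoming VSYNC actually reaches the screen:

#include <stdint.h>

/* Given the stream's frame duration (e.g. 1001 * 1000000000 / 24000 ns for
 * 23.976 fps film), the time the next VSYNC fires and the measured
 * submit-to-display latency, return the index of the source frame whose
 * timestamp is nearest to the moment that VSYNC is actually presented.
 * All times are nanoseconds relative to the start of the segment. */
int64_t
frame_index_for_vsync (int64_t frame_duration,
                       int64_t next_vsync_time,
                       int64_t display_latency)
{
  int64_t presentation_time = next_vsync_time + display_latency;

  /* Round to the nearest frame rather than truncating, so the 3:2-style
   * cadence of film on a 60 Hz display stays as even as possible. */
  return (presentation_time + frame_duration / 2) / frame_duration;
}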

Other notes:
* It would be great to have a generic clock extrapolated from all this information (just like the audio sink clocks), one that can be slaved or can be master.
* Make sure the "prepare" vmethod still gets called before a buffer enters the queue, and drop the ones you already know will not be rendered (because they arrive too late); see the sketch after this list. For this new audio-like system to work, the action called in that separate thread needs to be almost instantaneous.
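A sketch of that late-frame check (hypothetical names, not an existing vmethod; times in nanoseconds), run on the streaming thread after "prepare" and before the buffer enters the render queue:

#include <stdbool.h>
#include <stdint.h>

/* Returns true if the frame can still make it onto the screen, false if it
 * should be dropped because the earliest moment it could be shown is already
 * past the end of its display interval. */
bool
frame_still_useful (int64_t frame_running_time,
                    int64_t frame_duration,
                    int64_t next_vsync_time,
                    int64_t display_latency)
{
  int64_t earliest_presentation = next_vsync_time + display_latency;

  /* Dropping here keeps the per-VSYNC callback almost instantaneous: it never
   * has to inspect or skip stale buffers itself. */
  return frame_running_time + frame_duration > earliest_presentation;
}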
Comment 2 Robert Swain 2014-07-01 13:47:44 UTC
Just a note here that sometimes you may actually want to render at a faster rate, say the display's refresh rate. It would be good to have an option to use (or even configure) such a rate to drive the rendering in a thread separate from the incoming streaming thread.

The use case for this is when you want to update the rendered view more often than the stream's frame rate.
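A sketch of that alternative (hypothetical, not an existing property): drive the render loop from a fixed, configurable rate instead of VSYNC, redrawing the latest submitted frame on every tick.

#include <time.h>

/* Render loop driven by a user-chosen rate (render_fps) rather than VSYNC.
 * The body of each iteration would redraw the most recent frame, exactly as
 * in the VSYNC-driven loop sketched earlier. */
void
render_loop_fixed_rate (double render_fps)
{
  double period_s = 1.0 / render_fps;
  struct timespec period;

  period.tv_sec = (time_t) period_s;
  period.tv_nsec = (long) ((period_s - (double) period.tv_sec) * 1e9);

  for (;;) {
    /* redraw latest frame here */
    nanosleep (&period, NULL);
  }
}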
Comment 3 GStreamer system administrator 2018-11-03 11:25:41 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further in the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/87.