GNOME Bugzilla – Bug 739681
GstGL: let applications use the glimagesink's output texture
Last modified: 2015-03-18 15:39:38 UTC
glimagesink should provide an API to retrieve the current buffer to render. An app would call this API to use the texture in its own GL scene.

We already have an API to share textures between the two GL contexts, the one from GstGL and the one from the application. See "gst_context_set_gl_display" + "gst_context_new ("gst.gl.app_context", ...)" + "gst_gl_context_new_wrapped" + "gst_gl_display_new (disp)" (a rough sketch of this setup follows at the end of this comment). So this is already awesome, but we could go further! Indeed, we are currently forced to use one of these two approaches:

1: Use fakesink or appsink and do custom handling to pass the buffer to the app. This way we lose the benefits of GstVideoSink, and we still have to put a gleffects or glcolorscale element before those sinks in order to have a GstGLBufferPool.

2: Implement a custom sink in the app, like WebKit does with webkitvideosink. Sometimes that is fine when more granularity is needed, e.g. cluttersink with its GstGL patches.

But there is a third way which would be enough in most cases and much easier for users to set up: we should be able to just use videotestsrc ! glimagesink and retrieve the last buffer through, for example, an action signal. When the app connects to this signal, glimagesink would simply not create an internal window. The app would trigger this signal when it is ready to render, and call gst_video_gl_texture_upload_meta_upload right after.
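For reference, here is a rough, non-authoritative sketch of that context-sharing setup, in the spirit of the gst-plugins-bad GL examples. The native GL context handle, the GST_GL_PLATFORM_GLX / GST_GL_API_OPENGL constants and the GST_GL_TYPE_CONTEXT macro spelling are assumptions that depend on the platform and GStreamer release, and a platform-specific display constructor (e.g. gst_gl_display_x11_new_with_display) may be needed instead of plain gst_gl_display_new:

#include <gst/gst.h>
#include <gst/gl/gl.h>

/* Wrapped handles for the application's own display and GL context. */
static GstGLDisplay *gl_display = NULL;
static GstGLContext *app_context = NULL;

/* Answer glimagesink's NEED_CONTEXT queries from a sync bus handler. */
static GstBusSyncReply
on_sync_message (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_NEED_CONTEXT) {
    const gchar *type = NULL;
    gst_message_parse_context_type (msg, &type);

    if (g_strcmp0 (type, GST_GL_DISPLAY_CONTEXT_TYPE) == 0) {
      GstContext *ctx = gst_context_new (GST_GL_DISPLAY_CONTEXT_TYPE, TRUE);
      gst_context_set_gl_display (ctx, gl_display);
      gst_element_set_context (GST_ELEMENT (msg->src), ctx);
      gst_context_unref (ctx);
    } else if (g_strcmp0 (type, "gst.gl.app_context") == 0) {
      GstContext *ctx = gst_context_new ("gst.gl.app_context", TRUE);
      GstStructure *s = gst_context_writable_structure (ctx);
      gst_structure_set (s, "context", GST_GL_TYPE_CONTEXT, app_context, NULL);
      gst_element_set_context (GST_ELEMENT (msg->src), ctx);
      gst_context_unref (ctx);
    }
  }
  return GST_BUS_PASS;
}

/* Call once, with the application's GL context already created. */
static void
setup_gl_sharing (GstPipeline *pipeline, guintptr native_gl_context)
{
  GstBus *bus;

  gl_display = gst_gl_display_new ();
  app_context = gst_gl_context_new_wrapped (gl_display, native_gl_context,
      GST_GL_PLATFORM_GLX, GST_GL_API_OPENGL);

  bus = gst_pipeline_get_bus (pipeline);
  gst_bus_set_sync_handler (bus, on_sync_message, NULL, NULL);
  gst_object_unref (bus);
}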
Isn't that what the client-reshape and client-draw signals are for? Otherwise, using the last-sample property would also work:

sample = gst_base_sink_get_last_sample (sink)
buffer = gst_sample_get_buffer (sample)
gst_video_frame_map (frame, info, buffer, _READ | _GL)
tex = *(guint *) frame.data[0]
gst_video_frame_unmap (frame)
gst_sample_unref (sample)
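Expanded into a self-contained helper, the pseudocode above would look roughly like the following. This is only a sketch: it assumes the sample reaching the sink is backed by GstGLMemory (i.e. there is a GL-capable element upstream), so that mapping with GST_MAP_GL makes the first plane's data pointer hold the texture id:

#include <gst/gst.h>
#include <gst/video/video.h>
#include <gst/gl/gl.h>

static guint
get_last_texture (GstElement *glimagesink)
{
  GstSample *sample = NULL;
  GstCaps *caps;
  GstVideoInfo info;
  GstVideoFrame frame;
  guint tex = 0;

  /* "last-sample" is a GstBaseSink property, so any sink exposes it */
  g_object_get (glimagesink, "last-sample", &sample, NULL);
  if (!sample)
    return 0;

  caps = gst_sample_get_caps (sample);
  if (caps && gst_video_info_from_caps (&info, caps) &&
      gst_video_frame_map (&frame, &info, gst_sample_get_buffer (sample),
          GST_MAP_READ | GST_MAP_GL)) {
    tex = *(guint *) frame.data[0];  /* texture id of the GL-backed buffer */
    gst_video_frame_unmap (&frame);
  }

  gst_sample_unref (sample);
  return tex;
}

Note that the texture is only safe to use from a GL context that shares with the one owning it, and only as long as the sample is kept alive (here, until gst_sample_unref).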
You raised good points :) "client-draw" is triggered by the sink. What I am suggesting is actually the opposite: the application triggers it from the thread where its GL context is current, so that it can use the texture right afterwards, which keeps the management easy.

The "last-sample" property is actually closer to what I suggest, but it does not make it easy to tell glimagesink to use a surfaceless context (http://cgit.freedesktop.org/gstreamer/gst-plugins-bad/tree/gst-libs/gst/gl/egl/gstglcontext_egl.c#n419); it would require another property to set that up.

Having an action signal would also make it possible to define our own parameters and a return value. For example, the user would have to connect to it before starting the pipeline, otherwise we would return FALSE or something. We could also check that the user triggers the signal from the thread where the GL app context is current, just to prevent them from doing something wrong. And we could call "gst_video_gl_texture_upload_meta_upload" internally in this action to make the user's life easier, i.e. forward the "gst_video_gl_texture_upload_meta_upload" parameters directly to this action signal.

Maybe I am missing something... What do you think?
(In reply to comment #2)
> The "last-sample" property is actually closer to what I suggest, but it does
> not make it easy to tell glimagesink to use a surfaceless context
> (http://cgit.freedesktop.org/gstreamer/gst-plugins-bad/tree/gst-libs/gst/gl/egl/gstglcontext_egl.c#n419);
> it would require another property to set that up.

It probably makes sense to create a glappsink in order to separate out the GstGLWindow usage.

> Having an action signal would also make it possible to define our own
> parameters and a return value. For example, the user would have to connect to
> it before starting the pipeline, otherwise we would return FALSE or something.

Or we just wait for the buffer to be consumed? Whatever appsink does probably makes sense.

> We could also check that the user triggers the signal from the thread where
> the GL app context is current, just to prevent them from doing something wrong.

Might be easy to do with the current API, or it might require tracking GstGLContexts in a GPrivate.

> And we could call "gst_video_gl_texture_upload_meta_upload" internally in this
> action to make the user's life easier, i.e. forward the
> "gst_video_gl_texture_upload_meta_upload" parameters directly to this action
> signal.

I'm actually working on making GstGLUpload output different buffers based on the input/output caps, so that one can select whether you want GLMemory-filled buffers, EGLImages, GLUploadMeta, GL_TEXTURE_EXTERNAL, or whatever we come up with.
Any more updates on this? The ideas sound interesting. I'm actually looking for an easy approach to getting the GL texture into my own app.
I am going to work on this during the GStreamer hackfest 2015 this weekend.

I realized that we can make it work with the existing "client-draw" callback. If the user provides a wrapped GL context (created with "gst_gl_context_new_wrapped") through a glimagesink property or through GstContext, and also uses the "client-draw" callback, then we need to make glimagesink not create any visible window. The existing signal is also good because it is blocking, so we benefit from QoS, for example, and we make sure to stay synchronized with audio and subtitles.

I already have some initial work for this, and I will also make a minimal example with SDL.

I also wonder if we should add a GstSample* as one of the inputs. Currently the signature is:

gboolean client_draw_callback (GstElement* object, GstGLContext*, guint texture_id, guint width, guint height, gpointer user_data);

(Passing the id, width and height explicitly still saves the user from having to call gst_video_frame_map just to get the size and texture id.) Having the GstBuffer* as well would allow some custom work.

I also do not like the name of the signal, "client-draw"; I would like to rename it to "new-sample" (like the appsink signal) or to "push-sample".
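As an illustration only (not part of the bug, and assuming the signature quoted above plus the convention that returning TRUE tells the sink the application did the drawing itself), connecting to that signal could look like this:

#include <gst/gst.h>
#include <gst/gl/gl.h>

/* Runs in the sink's GL thread with its GL context current; the texture is
 * only guaranteed valid until the callback returns. */
static gboolean
on_client_draw (GstElement *sink, GstGLContext *context,
    guint texture, guint width, guint height, gpointer user_data)
{
  g_print ("drawing texture %u (%ux%u)\n", texture, width, height);
  /* draw the texture with the application's own GL code here */
  return TRUE;
}

It would be connected with g_signal_connect (sink, "client-draw", G_CALLBACK (on_client_draw), NULL) before setting the pipeline to PLAYING.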
commit 7dd3a2ec9e27473bde9f636581fd7e71d59d1df3
Author: Julien Isorce <j.isorce@samsung.com>
Date:   Sat Mar 14 15:38:28 2015 +0000

    gl/examples: add sdlshare2 that uses glimagesink to output textures

    https://bugzilla.gnome.org/show_bug.cgi?id=739681

commit f8fca66fb94618ef8c624e321da7adc7af6b469e
Author: Julien Isorce <j.isorce@samsung.com>
Date:   Sat Mar 14 16:30:42 2015 +0000

    glimagesink: keep window invisible when sharing output

    https://bugzilla.gnome.org/show_bug.cgi?id=739681

commit 0150255a46da756f42e6f2e3776e2d39ed70dcec
Author: Julien Isorce <j.isorce@samsung.com>
Date:   Sat Mar 14 15:16:55 2015 +0000

    glimagesink: provide GstSample in client-draw signal

    Instead of providing the texture and size directly.
    And apply changes to examples.

    https://bugzilla.gnome.org/show_bug.cgi?id=739681

In the end I kept the name "client-draw" for the signal, but I changed the signature to use a GstSample. The sdlshare2 example also demonstrates how to use glimagesink to output textures in SDL with context sharing set up. (For GLX this only makes the window invisible; we should use a pbuffer for real offscreen rendering. We already have this offscreen support for EGL, see #704807.)
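To illustrate the reworked callback after these commits (a hedged sketch only; the exact signal signature should be checked against the glimagesink documentation of the release in use, and this mirrors the sdlshare2 example only loosely):

#include <gst/gst.h>
#include <gst/video/video.h>
#include <gst/gl/gl.h>

static gboolean
on_client_draw_sample (GstElement *sink, GstGLContext *context,
    GstSample *sample, gpointer user_data)
{
  GstVideoInfo info;
  GstVideoFrame frame;
  GstCaps *caps = gst_sample_get_caps (sample);
  GstBuffer *buffer = gst_sample_get_buffer (sample);

  if (caps && gst_video_info_from_caps (&info, caps) &&
      gst_video_frame_map (&frame, &info, buffer, GST_MAP_READ | GST_MAP_GL)) {
    guint texture = *(guint *) frame.data[0];

    /* render 'texture' here; the sink blocks until this callback returns,
     * which keeps the texture valid and preserves QoS/synchronisation */

    gst_video_frame_unmap (&frame);
  }

  return TRUE;
}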
Great, I'll test this as soon as I can. But just to be sure... this only works in git master (1.5), right?

I'm also curious why the mutexes are needed. The previous examples didn't have those, right?
(In reply to Arnaud Loonstra from comment #7)
> Great, I'll test this as soon as I can. But just to be sure... this only
> works in git master (1.5), right?

Right.

> I'm also curious why the mutexes are needed. The previous examples didn't
> have those, right?

I think this is needed to ensure the "client" draws a valid texture. The texture ID is stored in a GstSample which is unreffed/freed after the signal emission, so the sink basically blocks until the "client" is done painting the texture on its side. AFAIU :)