Bug 697112 - GLTextureUploadMeta: No support for multi-texture formats
Status: RESOLVED FIXED
Product: GStreamer
Classification: Platform
Component: gst-plugins-base
Version: git master
OS: Other Linux
Priority: High
Severity: blocker
Target Milestone: 1.1.1
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2013-04-02 14:53 UTC by Gwenole Beauchesne
Modified: 2013-04-17 08:53 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description Gwenole Beauchesne 2013-04-02 14:53:50 UTC
Hi,

The GstSurfaceMeta is not really usable as is. I think it would be much simpler to make gst_surface_meta_create_converter() take a GstBuffer instead of a GstSurfaceMeta, unless it is possible to retrieve the parent GstBuffer from a GstMeta. Likewise, GstSurfaceMeta::create_converter would also take a GstBuffer argument. That way, we would have similar semantics to the older 0.10 GstSurfaceBuffer.

No patch yet, as I am undecided whether to (i) simply replace the GstSurfaceMeta argument with a GstBuffer, or (ii) add a GstBuffer argument after the GstSurfaceMeta * argument. I would favor option (i) for now.

However, for OpenGL use cases, there is still the question of how we get the original GL context. Would the client application or library actually allow exposure of its internal GL context, from which the GL texture was created?
Comment 1 Víctor Manuel Jáquez Leal 2013-04-02 15:01:44 UTC
Gwenole,

During this last hackfest, a new API was defined for this purpose: 

It's a meta associated with the buffer that offers upload to a texture:

http://cgit.freedesktop.org/gstreamer/gst-plugins-base/commit/?id=5f79a8cb933e1b2a1fad54d854ff89495e621333

I started the implementation for gst-vaapi 1.0
Comment 2 Gwenole Beauchesne 2013-04-02 15:16:09 UTC
Hi, sorry, I could not attend the hackfest, but my usual comments apply: a single GL texture is not an optimal way to convey the underlying HW surface. In particular, I don't like the semantics of "upload" to a GL texture, even if the source surface already lies in HW memory.

Alternatively, if you really want to expose a texture, please make provisions for it to actually be N (up to 3) textures, with additional fields to express the structure of that buffer (multiplanar). A good example of how all planar YUV formats could be factored out is the existing Wayland/Weston implementation and the new VA/EGL proposal here: http://cgit.freedesktop.org/~gb/mesa/ (21.VA_pixel_buffer).
Comment 3 Sebastian Dröge (slomo) 2013-04-02 18:25:58 UTC
At the hackfest people kept telling me that all those hardware codec APIs are exposing this functionality as a single GL_RGBA or GL_RGB texture... before that I actually had it complicated like that with support for multiple textures, etc :)

Do you have examples where multiple textures are used?
Comment 4 Tim-Philipp Müller 2013-04-09 21:57:40 UTC
Gwenole: ping?
Comment 5 Sebastian Dröge (slomo) 2013-04-10 09:48:36 UTC
So what we would have in the end is support for:
- RGBA or RGB with one texture;
- NV12/NV21 with one texture (a custom texture format that just needs to be copied, not touched by shaders) or with two textures (which need to be converted to RGBA via shaders if there is no special YUV support);
- I420/etc. with one texture (again a custom, copy-only texture format) or with three textures (converted to RGBA via shaders if there is no special YUV support).

Since what can be produced depends on the "provider" of the GLTextureUploadMeta (i.e. the decoder):
a) We need to negotiate the format with the caps, e.g. select I420
b) Inside the meta we would store the exact format, or at least the number of textures (1 vs. 2 or 3)
c) The user of the meta would know the format from the caps and the exact representation from the meta, and would need to handle 1 and 2 (or 3) textures properly.

Is there any other possible texture representation for the formats, other than 1 texture (independent of number of planes and just needs to be copied) and 1 texture per plane (where each plane contains one of RGB, RGB, Y, U, V or UV)?
Comment 6 Gwenole Beauchesne 2013-04-10 17:09:11 UTC
Some background about my request, which actually comes from other projects like clutter-gst or xbmc. Basically, they want a way to expose each individual plane as a separate texture, and actually as an EGLImage, so that they can easily fit into an existing rendering pipeline like the one you would have for SW-decoded frames.

The original concern was that OES_EGL_image_external involves black magic in the driver, and the extension knows nothing about the underlying structure (progressive vs. top/bottom-field), the color conversion matrix, etc. Sure, there are extensions to that... extension (e.g. from TI), but nothing standardized yet, AFAIK.

So, the idea is to negotiate the capabilities from the provider but also from the consumer. The provider (decoder) may have hard constraints, e.g. it can only emit to NV12, and you can't make it render to "foreign" storage. This means that the provider is responsible for allocating the storage in this case. The consumer (renderer) may also have additional constraints or capabilities, depending on the underlying hardware.

I foresee the following usage models:

1) Hardware/driver can sample from YUV textures, i.e. a single texture represents the decoded frame.
a. The renderer uses default color conversion matrices and doesn't care about interlaced content;
b. The renderer can use additional extensions to control those;
c. The renderer will be able to get Y/U/V/A components as R/G/B/A components.

(a) and (b) are driver-side, (c) could be an application-side usage model.

2) Hardware/driver can't sample from YUV textures, i.e. you have one texture per plane (obsolete GL_LUMINANCE/GL_LUMINANCE_ALPHA, or modern EXT_texture_rg). For NV12 you would need 2 textures: one for the Y plane, with Y mapped to the r component, and one for the UV plane, with U/V mapped to the r/g components. This is more flexible, as you can handle bob-deinterlacing or control the color conversion matrix with standard GLSL if there is no appropriate extension for it.

3) Fallback: the storage is in RGB/RGBA format. In this case, the conversion from YUV to RGB is either explicit (upload in client application), or implicit (case 1.a -- in driver/HW).

I think we could probably expose as many metas as we have supported combos. Note: in my vision of things, the entity being transported would be EGLImage(s) instead of texture IDs.
Comment 7 Sebastian Dröge (slomo) 2013-04-10 17:25:35 UTC
For EGLImage see gst-plugins-bad/gst-libs/gst/egl (which is also not complete yet, but works fine for eglglessink and gst-omx already). Ideally I'd like to use EGLImages instead of stupid texture IDs everywhere too, yes. Just that not every API can handle that unfortunately.

I'll reply to everything else later, thanks :)
Comment 8 Gwenole Beauchesne 2013-04-11 04:40:22 UTC
(In reply to comment #7)
> For EGLImage see gst-plugins-bad/gst-libs/gst/egl (which is also not complete
> yet, but works fine for eglglessink and gst-omx already). Ideally I'd like to
> use EGLImages instead of stupid texture IDs everywhere too, yes. Just that not
> every API can handle that unfortunately.

Actually, if you are talking about GLX, then I would like to deprecate it both in libva and gstreamer-vaapi. It has a terrible design, but it was useful back when the hard constraint was "make it work with AMD/XvBA, which can only render to an RGBA texture". Now that they have published a Mesa/VDPAU implementation, this opens the door to EGL (and VA-API). There is still NVIDIA/VDPAU to cope with, but they are working on something too.

Since GstSurfaceMeta is the existing interface for GStreamer 1.0.x, I would personally focus on it first. In order to avoid an API change there, I think I will just implement the GstSurfaceMeta API but customize the returned struct/info to contain extra room for a GstBuffer pointer I can use later on, i.e. a GstVaapiSurfaceMeta. Then, for 1.2, we can use the more correct APIs.

I think the first user for it would be WebKit. I will try to ping Damien or Rob for clutter-gst, but they would really want to handle only EGLImages. And I don't think their opinion has changed, since this is the most sensible way to do it anyway. :)
Comment 9 Sebastian Dröge (slomo) 2013-04-11 05:13:31 UTC
However, GstSurfaceMeta is only in gst-plugins-bad and is not something we would like to have in the stable API; it is going to be removed sooner or later. Something like the current GLTextureUploadMeta is what will be there in the future, as it allows things to be properly negotiated inside the pipeline.

So you're saying too that all existing APIs that want to provide a GL texture ID only do RGB/RGBA textures, and nothing actually uses the possibility of generating multiple textures? And that this would only be interesting for EGLImage (where it is supported already)? Or should the GLTextureUploadMeta be changed as outlined in comment 5?
Comment 10 Gwenole Beauchesne 2013-04-11 05:21:08 UTC
No, I am saying that generating an RGBA texture was an inefficient and dirty way to do it, based on an old constraint (AMD/XvBA). For instance, VDPAU/GLX already permits exposing up to 3 textures, and actually 6 textures if you account for interlaced content. Ah, and this reminds me I forgot about that case: APIs that make it possible to expose each individual plane for a specific field. :)
Comment 11 Sebastian Dröge (slomo) 2013-04-11 06:02:11 UTC
Ok, thanks :) I'll update the GLTextureUploadMeta to support this; for EGLImage all of this should be handled already.
Comment 12 Víctor Manuel Jáquez Leal 2013-04-11 09:03:23 UTC
> I think the first user for it would be WebKit.

https://bugs.webkit.org/show_bug.cgi?id=86410
Comment 13 Sebastian Dröge (slomo) 2013-04-17 08:52:48 UTC
commit 74f6376c53551cbb7af77f1230c672c55ff512a0
Author: Sebastian Dröge <sebastian.droege@collabora.co.uk>
Date:   Wed Apr 17 10:35:22 2013 +0200

    videometa: Extend GstVideoGLTextureUploadMeta
    
    https://bugzilla.gnome.org/show_bug.cgi?id=697112

commit e8ad67f2b4def992a2d8da23087acc343c9fb9da
Author: Sebastian Dröge <sebastian.droege@collabora.co.uk>
Date:   Wed Apr 17 10:48:31 2013 +0200

    eglglessink: Update for GLTextureUploadMeta and EGL API changes

commit a6e50140b723734bcf99d4c2af79d808010ca0ab
Author: Sebastian Dröge <sebastian.droege@collabora.co.uk>
Date:   Wed Apr 17 10:40:48 2013 +0200

    egl: Use new types from libgstvideo instead of defining our own