GNOME Bugzilla – Bug 677012
gst-plugins-gl: Port to GStreamer 1.0
Last modified: 2013-06-29 08:12:56 UTC
Created attachment 215165 [details] [review] Update versioning for gstreamer-1.0

This patch updates the version requirements in the build files for building with the 1.0 API:
- rename GST_MAJORMINOR to GST_API_VERSION in build files
- remove -lgstinterfaces from _LDADD flags

Note: this only affects the build files; no code changes have occurred.
I think we don't really know yet what The Plan is with -gl and GStreamer 1.0, but there's an aspiration to do it a bit differently and integrate better. Until that's figured out, we should probably leave -gl at 0.10.
*** Bug 677072 has been marked as a duplicate of this bug. ***
Created attachment 215529 [details] [review] GstGLFilter: update for 1.0
Created attachment 215534 [details] [review] GstGLColorspace: update for 1.0
In order to avoid spamming bugzilla with 50+ patches, I have a git branch with the changes for 1.0 (it includes the above patches): https://github.com/ystreet/gst-plugins-gl Currently the gltestsrc, glupload, gldownload, gleffects and glimagesink elements have been ported and minimally tested. (Note: you have to change the Makefile.am files and gstopengl.c to actually compile it at the moment.)
Created attachment 221783 [details] [review] port to 1.0 Everything but the libvisual-gl element has been ported; that element is waiting on bug #681719 being fixed. The code can also be downloaded from the git repo above.
Created attachment 223417 [details] [review] prevent double unref in differencematte For lack of a better place to put these. The test was failing because of this
Created attachment 223418 [details] [review] fix GL ES2 code
Created attachment 223419 [details] [review] fix window API definitions
Created attachment 225243 [details] [review] port to 1.0 v2 pt1 This is the first of two parts (bugzilla 1MB limit). This patch adds support for upload/download within GLMemory on top of the features of the previous patch.
Created attachment 225244 [details] [review] port to 1.0 v2 pt2
I was wondering if any of the core developers had thoughts on this port, if it was headed in the right direction or not. I'd like to display 10-bit video on 10-bit displays (think for medical imaging and graphics pros), and the best way to do this is through OpenGL. Doing it through gst-plugins-gl would be the cleanest way. (also maybe Matthew or someone could rename this bug to be something meaningful, like "port gst-plugins-gl to 1.0")
For reference, here is a message Matthew posted to the list about this port: http://lists.freedesktop.org/archives/gstreamer-devel/2012-June/036164.html I'm not sure how closely he followed that plan in his patches above and his code here: https://github.com/ystreet/gst-plugins-gl
I haven't looked at the code yet, but since Tim's comment there has been quite some work to introduce a better way to share the context across elements. If this port does not make use of these new mechanisms yet (I have not read the code), it should. This includes creating an allocator instead of a custom video/x... format, and introducing a GstMeta plus everything required to map/unmap the data.
So, status update 1 year on for all :) Most of this work is on the wip-platform branch which I plan to merge soon.

What works:
1. GL element -> GL element* (using GstGLMemory/GstVideoMeta)
2. non-GL element -> GL element (and vice versa)
3. runtime backend handling (required for wayland vs X11) and, as a consequence/convenience, runtime GL API selection
4. GL 3.0 context in GLX (shiny as of 2 days ago; none of the other backends have that) - see the sketch after this comment
5. most of the elements have been updated so that they don't rely on deprecated GL functionality (preemptive GL 3+), or at least can easily be changed to work with it
6. GLES 2.0 code
7. new backend: wayland
8. win32
9. removed dependency on GLEW (has some side effects, e.g. none of the GL tokens are defined anymore)

What doesn't work:
1. interoperability with any of the other GL/VAAPI/VDPAU/... plugins (we fall back to the slow download/upload; maybe GstVideoGLTextureUploadMeta and its non-existent download counterpart? or meta transforms?)
2. marshalling based on GMainContext in all backends. Currently only the wayland backend has it.
3. some of the examples (they didn't work before so meh)
4. more unit testing
5. the cmake build files (autotools all the way), codeblocks/vcproj files
6. GL + GLES (it may never)
7. libvisual (waiting on GstAudioVisualizer)

Unknown:
1. the cocoa backend (it works with GNUStep, no idea about OSX)

What needs to be done:
1. all the stuff that doesn't work/is unknown
2. remove dependency on GLU (only required by some elements)
3. matrix routines for the switch to GL 3

*by GL element I mean the GL elements in gst-plugins-gl.
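To make the GLX item above concrete, here is a minimal sketch of how a versioned GL 3.0 context is requested through GLX_ARB_create_context; the helper name and the fbconfig/share_ctx handling are illustrative and not taken from the wip-platform branch.

#include <GL/glx.h>   /* GLX_ARB_create_context tokens come in via GL/glxext.h */

/* Illustrative only: request a GL 3.0 context instead of whatever
 * glXCreateContext() would hand back. */
static GLXContext
create_gl3_context (Display * dpy, GLXFBConfig fbconfig, GLXContext share_ctx)
{
  static const int attribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 0,
    None
  };
  PFNGLXCREATECONTEXTATTRIBSARBPROC create_context_attribs =
      (PFNGLXCREATECONTEXTATTRIBSARBPROC)
      glXGetProcAddressARB ((const GLubyte *) "glXCreateContextAttribsARB");

  if (!create_context_attribs)
    return NULL;                /* driver does not expose GLX_ARB_create_context */

  return create_context_attribs (dpy, fbconfig, share_ctx,
      True /* direct rendering */, attribs);
}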
In the long run, what about folding the code into gst-plugins-bad and then evolving the plugins to gst-plugins-good?
Also take a look at gst-plugins-bad/gst-libs/gst/egl and -bad/ext/eglgles for some inspiration. Starting from that, we should try to create a generic GL/GLES sink that can work with any windowing system (EGL/GLX/WGL/EAGL/Cocoa/etc), does not pull in any of the complex and large code of gst-plugins-gl, but properly interoperates with it.
(In reply to comment #16) > In the long run, what about folding the code into gst-plugin-bad and then > evolving them to gst-plugins-good? From looking at the git log, it looks like it was originally in with other plugins but got split out for some reason. I'm easy either way. (In reply to comment #17) > Also take a look at gst-plugins-bad/gst-libs/gst/egl and -bad/ext/eglgles for > some inspiration. I have, it was interesting. > We should try to create a generic GL/GLES sink that can work > with any windowing system (EGL/GLX/WGL/EAGL/Cocoa/etc) from that, Agreed. > that does not > pull in any of the complex and large code of gst-plugins-gl but properly > interoperates with that. This point and the previous point seem to contradict each other. If you're going to support all the GL implementations within one element, its gonna be hairy :). The only files (in gst-libs) that do not get used with glimagesink are the gl baseclasses (filter and mixer) and I can't think of what you can remove.
(In reply to comment #18) > (In reply to comment #16) > > In the long run, what about folding the code into gst-plugin-bad and then > > evolving them to gst-plugins-good? > > From looking at the git log, it looks like it was originally in with other > plugins but got split out for some reason. I'm easy either way. I would like to put it back in gst-plugins-bad. > > that does not > > pull in any of the complex and large code of gst-plugins-gl but properly > > interoperates with that. > > This point and the previous point seem to contradict each other. If you're > going to support all the GL implementations within one element, its gonna be > hairy :). The only files (in gst-libs) that do not get used with glimagesink > are the gl baseclasses (filter and mixer) and I can't think of what you can > remove. Ok, fine then :) There's just all this threading complexity that is not really needed for a simple sink.
(In reply to comment #19) > > > that does not > > > pull in any of the complex and large code of gst-plugins-gl but properly > > > interoperates with that. > > > > This point and the previous point seem to contradict each other. If you're > > going to support all the GL implementations within one element, its gonna be > > hairy :). The only files (in gst-libs) that do not get used with glimagesink > > are the gl baseclasses (filter and mixer) and I can't think of what you can > > remove. > > Ok, fine then :) There's just all this threading complexity that is not really > needed for a simple sink. The threading complexity is exactly the same as with eglglessink. You have a single gl thread + all the gst element threads and all gl activity happens in the gl thread. Without it, ... ! glfilter ! queue ! glimagesink would blow up (I know there are GL extensions to work around/facilitate that (and EGLImage)). The way I see it, you have a simple sink that is slow across queue's because it has to upload/download the image data or you add a little bit of complexity to avoid hitting a very slow path (that could be avoided).
(In reply to comment #20) > (In reply to comment #19) > > > > that does not > > > > pull in any of the complex and large code of gst-plugins-gl but properly > > > > interoperates with that. > > > > > > This point and the previous point seem to contradict each other. If you're > > > going to support all the GL implementations within one element, its gonna be > > > hairy :). The only files (in gst-libs) that do not get used with glimagesink > > > are the gl baseclasses (filter and mixer) and I can't think of what you can > > > remove. > > > > Ok, fine then :) There's just all this threading complexity that is not really > > needed for a simple sink. > > The threading complexity is exactly the same as with eglglessink. You have a > single gl thread + all the gst element threads and all gl activity happens in > the gl thread. Without it, ... ! glfilter ! queue ! glimagesink would blow up > (I know there are GL extensions to work around/facilitate that (and EGLImage)). > > The way I see it, you have a simple sink that is slow across queue's because it > has to upload/download the image data or you add a little bit of complexity to > avoid hitting a very slow path (that could be avoided). Yeah you need a thread in any case in the sink to separate all the GL work from anything else (and also for another reason, that later). But what you don't always need is magic to dispatch processing into that GL thread (as you only can do GL stuff in that thread), e.g. when using EGLImage or just upload normal memory as a texture. But yeah, overall I agree. What exactly is it that you pass between elements, e.g. between a GL filter and sink? Texture IDs plus a way to dispatch work into the actual GL thread? Who owns this thread, who starts/stops it? The other reason for having another thread is that you ideally want this thread to only display something every vsync. So you would just loop there forever in vsync intervals and always just render the frame that is to be rendered at this very moment (and of course do all other work asap, like uploading memory to GL). Which could mean that for a 120fps stream you skip approximately every second frame, or for a 20fps stream you render every frame multiple times. Currently this is not implemented anywhere public, but that's how video sinks should really work. Oh and also see gst-plugins-base/gst-libs/gst/video/gstvideometa.h for the GstVideoGLTextureUploadMeta.
(In reply to comment #21) > (In reply to comment #20) > > (In reply to comment #19) > > The way I see it, you have a simple sink that is slow across queue's because it > > has to upload/download the image data or you add a little bit of complexity to > > avoid hitting a very slow path (that could be avoided). > > Yeah you need a thread in any case in the sink to separate all the GL work from > anything else (and also for another reason, that later). But what you don't > always need is magic to dispatch processing into that GL thread (as you only > can do GL stuff in that thread), e.g. when using EGLImage or just upload normal > memory as a texture. But yeah, overall I agree. > > What exactly is it that you pass between elements, e.g. between a GL filter and > sink? Texture IDs plus a way to dispatch work into the actual GL thread? Who > owns this thread, who starts/stops it? Well, essentially everything is passed at the moment using GstGLDisplay. The essential stuff is the gl api we're using (GL, GLES, and the version (can be parsed from glGetString)), the marshalling stuff and some way that elements can call GL. At the moment, there is a function table that elements use for calling GL that is filled at context creation time. The reason that exists is because it is essentially impossible to link to both libGL and libGLESv2 (some of the exported symbols are the same) and technically on windows the function pointers are context specific so we grab the function pointers at runtime. The thread is created within GstGLDisplay at the moment which is created by the most downstream element that supports creating the context/thread. So that means glimagesink unconditionally creates the thread. In this pipeline: ... ! glfilter_a ! glfilter_b ! ximagesink glfilter_b would create the context and glfilter_a would query it from _b. No-one really owns the context, it is created and destroyed with pad (de)activation. When an element is done with the context, it explicitly loses its ref. Each element keeps track of the GL objects so there is no need for GstGLDisplay to. > The other reason for having another thread is that you ideally want this thread > to only display something every vsync. So you would just loop there forever in > vsync intervals and always just render the frame that is to be rendered at this > very moment (and of course do all other work asap, like uploading memory to > GL). Which could mean that for a 120fps stream you skip approximately every > second frame, or for a 20fps stream you render every frame multiple times. > Currently this is not implemented anywhere public, but that's how video sinks > should really work. > > > Oh and also see gst-plugins-base/gst-libs/gst/video/gstvideometa.h for the > GstVideoGLTextureUploadMeta.
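A rough sketch of the function-table idea described above, assuming a GlVTable structure and gl_vtable_load() helper that are made up for illustration; the real GstGLDisplay code resolves many more entry points, and on WGL/EGL it goes through wglGetProcAddress/eglGetProcAddress since pointers there can be context specific.

#include <glib.h>
#include <gmodule.h>

/* Illustrative only: resolve GL entry points at runtime from whichever
 * library the chosen backend uses, instead of linking libGL or libGLESv2
 * directly (linking both at once is essentially impossible). */
typedef struct {
  void (*GenTextures) (int n, unsigned int *textures);
  void (*BindTexture) (unsigned int target, unsigned int texture);
  void (*TexImage2D) (unsigned int target, int level, int internalformat,
      int width, int height, int border, unsigned int format,
      unsigned int type, const void *data);
  /* ... one pointer per GL function the elements actually call ... */
} GlVTable;

static gboolean
gl_vtable_load (GlVTable * vt, const gchar * libname)
{
  GModule *module = g_module_open (libname, G_MODULE_BIND_LAZY);

  if (!module)
    return FALSE;

  return g_module_symbol (module, "glGenTextures", (gpointer *) &vt->GenTextures)
      && g_module_symbol (module, "glBindTexture", (gpointer *) &vt->BindTexture)
      && g_module_symbol (module, "glTexImage2D", (gpointer *) &vt->TexImage2D);
}

An element would then call something like gl->GenTextures (1, &tex) through the table instead of glGenTextures() directly, so the same code path works for GL and GLES.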
GstGLDisplay would be something that should be implemented with GstContext I guess... So how exactly is data-flow between two GL elements happening? How is the upstream GL element passing its stuff to downstream?
(In reply to comment #23) > GstGLDisplay would be something that should be implemented with GstContext I > guess... A stripped down version yes (currently it has some fbo stuff and some sink specific stuff). > So how exactly is data-flow between two GL elements happening? How is the > upstream GL element passing it's stuff to downstream? GstGLMemory + GstGLBufferpool. GstGLMemory holds one rgba texture id. It also allows non-GL elements to read and write to a data pointer that is up/downloaded as needed. Where the element wants the image data and whether it needs up/downloading is based on the map flags _READ, _WRITE and _GL.
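A simplified sketch of the mapping behaviour just described; GST_MAP_GL's value, the GlMemStub type and the upload/download helpers are all hypothetical here, the point being only that the map flags pick the view (texture vs. system memory) and trigger the lazy transfer.

#include <gst/gst.h>
#include <GL/gl.h>

/* hypothetical custom map flag, alongside GST_MAP_READ / GST_MAP_WRITE */
#define GST_MAP_GL (GST_MAP_FLAG_LAST << 1)

typedef struct {
  GLuint tex_id;        /* RGBA texture backing this memory */
  gpointer data;        /* system-memory copy */
  gboolean cpu_dirty;   /* data has writes not yet uploaded */
  gboolean gpu_dirty;   /* texture has writes not yet downloaded */
} GlMemStub;

/* hypothetical helpers that run on the GL thread */
static void upload_to_texture (GlMemStub * m)     { m->cpu_dirty = FALSE; }
static void download_from_texture (GlMemStub * m) { m->gpu_dirty = FALSE; }

static gpointer
gl_mem_map (GlMemStub * mem, GstMapFlags flags)
{
  if (flags & GST_MAP_GL) {
    if (mem->cpu_dirty)
      upload_to_texture (mem);      /* sync CPU writes into the texture */
    if (flags & GST_MAP_WRITE)
      mem->gpu_dirty = TRUE;
    return &mem->tex_id;            /* GL elements get the texture id */
  }
  if ((flags & GST_MAP_READ) && mem->gpu_dirty)
    download_from_texture (mem);    /* sync GPU writes into system memory */
  if (flags & GST_MAP_WRITE)
    mem->cpu_dirty = TRUE;
  return mem->data;                 /* non-GL elements get raw pixels */
}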
(In reply to comment #24) > > So how exactly is data-flow between two GL elements happening? How is the > > upstream GL element passing it's stuff to downstream? > > GstGLMemory + GstGLBufferpool. > > GstGLMemory holds one rgba texture id. It also allows non-GL elements to read > and write to a data pointer that is up/downloaded as needed. Where the element > wants the image data and whether it needs up/downloading is based on the map > flags _READ, _WRITE and _GL. That sounds like a good solution, yes. Do you also have support for other color formats, especially YUV? Or only for uploading/downloading maybe? How would this integrate with EGLImage or things like vaapi (which implements the GstVideoGLTextureUploadMeta)?
Small note here: I remember that the GL display passing in 0.10 for those elements used a pad query, but it did not work when an element gets dynamically plugged (from upstream to downstream), limiting the use of those elements to static pipelines. Also, at the time, it was not clear how an application would share its GL display with the elements (if that is needed). Obviously GstContext is what should be used, if that is not already the case in this port.
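For reference, this is roughly what the GstContext-based lookup looks like with the query API that later shipped in GStreamer 1.2; the context type string "gst.gl.GLDisplay" is made up for this example.

#include <gst/gst.h>

/* Sketch only: ask the peer for a shared GL display context before creating
 * our own. The application can also answer via gst_element_set_context(). */
static GstContext *
query_gl_display (GstPad * srcpad)
{
  GstQuery *query = gst_query_new_context ("gst.gl.GLDisplay");
  GstContext *ctx = NULL;

  if (gst_pad_peer_query (srcpad, query)) {
    gst_query_parse_context (query, &ctx);
    if (ctx)
      gst_context_ref (ctx);    /* parse_context does not give us a ref */
  }

  gst_query_unref (query);
  return ctx;   /* NULL means: create our own display and advertise it */
}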
(In reply to comment #25) > (In reply to comment #24) > > > > So how exactly is data-flow between two GL elements happening? How is the > > > upstream GL element passing it's stuff to downstream? > > > > GstGLMemory + GstGLBufferpool. > > > > GstGLMemory holds one rgba texture id. It also allows non-GL elements to read > > and write to a data pointer that is up/downloaded as needed. Where the element > > wants the image data and whether it needs up/downloading is based on the map > > flags _READ, _WRITE and _GL. > > That sounds like a good solution, yes. Do you also have support for other color > formats, especially YUV? Or only for uploading/downloading maybe? At the moment the YUV formats supported are I420, YV12, YUY2, UYVY and AYUV which is implemented using glsl shaders (in GstGLUpload). > How would this integrate with EGLImage or things like vaapi (which implements > the GstVideoGLTextureUploadMeta)? Well, I haven't really had too much of a look at those but my first thoughts are that GstGLMemory would be a fallback and elements should look for other things first (like EGLImage and TextureUploadMeta) using caps features.
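For illustration, a stripped-down I420-to-RGB fragment shader of the kind such an upload path uses; this is not the actual GstGLUpload shader, the coefficients are plain BT.601 and the 16-235 video-range scaling is omitted for brevity.

#include <glib.h>

/* Illustrative only: sample the three I420 planes from separate textures
 * and convert to RGB in the fragment shader. */
static const gchar *i420_to_rgb_frag =
  "uniform sampler2D Ytex, Utex, Vtex;\n"
  "varying vec2 v_texcoord;\n"
  "void main () {\n"
  "  float y = texture2D (Ytex, v_texcoord).r;\n"
  "  float u = texture2D (Utex, v_texcoord).r - 0.5;\n"
  "  float v = texture2D (Vtex, v_texcoord).r - 0.5;\n"
  "  gl_FragColor = vec4 (y + 1.402 * v,\n"
  "                       y - 0.344 * u - 0.714 * v,\n"
  "                       y + 1.772 * u, 1.0);\n"
  "}\n";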
(In reply to comment #27) > (In reply to comment #25) > > (In reply to comment #24) > > > > > > So how exactly is data-flow between two GL elements happening? How is the > > > > upstream GL element passing it's stuff to downstream? > > > > > > GstGLMemory + GstGLBufferpool. > > > > > > GstGLMemory holds one rgba texture id. It also allows non-GL elements to read > > > and write to a data pointer that is up/downloaded as needed. Where the element > > > wants the image data and whether it needs up/downloading is based on the map > > > flags _READ, _WRITE and _GL. > > > > That sounds like a good solution, yes. Do you also have support for other color > > formats, especially YUV? Or only for uploading/downloading maybe? > > At the moment the YUV formats supported are I420, YV12, YUY2, UYVY and AYUV > which is implemented using glsl shaders (in GstGLUpload). Also in the downloader and the sink? Are the uploader/downloader required as elements are transparently used inside the filter/etc elements? > > How would this integrate with EGLImage or things like vaapi (which implements > > the GstVideoGLTextureUploadMeta)? > > Well, I haven't really had too much of a look at those but my first thoughts > are that GstGLMemory would be a fallback and elements should look for other > things first (like EGLImage and TextureUploadMeta) using caps features. Makes sense, yes.
(In reply to comment #28) > (In reply to comment #27) > > (In reply to comment #25) > > > (In reply to comment #24) > > > That sounds like a good solution, yes. Do you also have support for other color > > > formats, especially YUV? Or only for uploading/downloading maybe? > > > > At the moment the YUV formats supported are I420, YV12, YUY2, UYVY and AYUV > > which is implemented using glsl shaders (in GstGLUpload). > > Also in the downloader and the sink? Are the uploader/downloader required as > elements are transparently used inside the filter/etc elements? GstGLUpload is an object that an element uses to upload stuff to GL. It used to be an actual element in 0.10 but does not exist in this port. The idea being that the element/memory only creates the object when needed.
Sounds like a perfect plan then, I like it :) What's the current status btw, and what do you think about integrating this into gst-plugins-bad?
(In reply to comment #30) > Sounds like a perfect plan then, I like it :) > > What's the current status btw, and what do you think about integrating this > into gst-plugins-bad? See https://bugzilla.gnome.org/show_bug.cgi?id=677012#c15 for the current status, and sure, I'm easy either way on the integration with gst-plugins-bad. It would make it a bit more visible.
(In reply to comment #15) > What doesn't work: > 1. interoperability with any of the other GL/VAAPI/VDPAU/... plugins (we fall > back to the slow download/upload. maybe GstVideoGLTextureUploadMeta and its > non-existant download counterpart? or meta transforms ?) GstVideoGLTextureUploadMeta + potentially specific support for some things, especially EGLImage if EGL is used. > 2. marshalling based on GMainContext in all backends. Currently only the > wayland backend has it. What does this mean? Does it *depend* on a main loop running on the default main context, or does it have its own main context where it runs a loop in some thread and there does the marshalling? And it's only implemented for Wayland so far, and not existing for other backends? > 6. GL + GLES (it may never) You mean using GL and GLES in the same process and having GL elements interoperate with GLES elements? Interesting idea but that really sounds complicated... and not that important? ;) > 7. libvisual (waiting on GstAudioVisualizer) What's missing there? > Unknown: > 1. the cocoa backend (it works with GNUStep, no idea about OSX) And an EAGL backend for iOS :) What about the backends for WGL, EGL and GLX? Also, what about support for GstVideoMeta (i.e. arbitrary strides) and GstVideoCropMeta? You can steal some code for that from eglglessink (and GL_UNPACK_ROW_LENGTH probably is of use here too). You can probably also steal some YUV conversion shaders from eglglessink (I think more formats are supported there and they're also a bit more optimized, at least last time I looked). Do the GL elements handle ALLOCATION queries properly, i.e. sharing the GL memory allocator and potential pools with each other through that query? I asked others on IRC and it seems having the GL stuff in gst-plugins-bad would be a good idea. Giving more visibility and after feature-equality we can get rid of eglglessink.
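As a reference point for the GstVideoMeta/GL_UNPACK_ROW_LENGTH suggestion, a small sketch of uploading a plane whose stride is larger than width * bytes-per-pixel; the function is illustrative only, and GLES 2 would need an extension or a CPU-side repack instead.

#include <gst/video/video.h>
#include <GL/gl.h>

/* Sketch only: honour an arbitrary stride (as carried by GstVideoMeta and
 * exposed through GstVideoFrame) without repacking in system memory. */
static void
upload_rgba_plane (GLuint tex, const GstVideoFrame * frame)
{
  gint width  = GST_VIDEO_FRAME_WIDTH (frame);
  gint height = GST_VIDEO_FRAME_HEIGHT (frame);
  gint stride = GST_VIDEO_FRAME_PLANE_STRIDE (frame, 0);   /* bytes per row */

  glBindTexture (GL_TEXTURE_2D, tex);
  glPixelStorei (GL_UNPACK_ROW_LENGTH, stride / 4);        /* RGBA: 4 bytes/px */
  glTexSubImage2D (GL_TEXTURE_2D, 0, 0, 0, width, height,
      GL_RGBA, GL_UNSIGNED_BYTE, GST_VIDEO_FRAME_PLANE_DATA (frame, 0));
  glPixelStorei (GL_UNPACK_ROW_LENGTH, 0);                 /* restore default */
}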
(In reply to comment #32) > (In reply to comment #15) > > > What doesn't work: > > 1. interoperability with any of the other GL/VAAPI/VDPAU/... plugins (we fall > > back to the slow download/upload. maybe GstVideoGLTextureUploadMeta and its > > non-existant download counterpart? or meta transforms ?) > > GstVideoGLTextureUploadMeta + potentially specific support for some things, > especially EGLImage if EGL is used. > > > 2. marshalling based on GMainContext in all backends. Currently only the > > wayland backend has it. > > What does this mean? Does it *depend* on a main loop running on the default > main context, or does it have its own main context where it runs a loop in some > thread and there does the marshalling? And it's only implemented for Wayland so > far, and not existing for other backends? The GstGLWindow (where the context-specific stuff happens) has its own mainloop that is used for marshalling. Yes, only the wayland backend has it, the others rely on platform specific code. > > 6. GL + GLES (it may never) > > You mean using GL and GLES in the same process and having GL elements > interoperate with GLES elements? Interesting idea but that really sounds > complicated... and not that important? ;) > > > 7. libvisual (waiting on GstAudioVisualizer) > > What's missing there? To be honest, I haven't had time to look at that. However last I looked GstAudioVisualizer explicitly mapped the output buffer, not giving us the chance to say we wanted the data on the GPU. > > Unknown: > > 1. the cocoa backend (it works with GNUStep, no idea about OSX) > > And an EAGL backend for iOS :) What about the backends for WGL, EGL and GLX? WGL and GLX work fine. EGL works under x11, I haven't really tested anywhere else. I had a brief stint at getting it to work with android but ran into issues getting the libraries to link properly. > Also, what about support for GstVideoMeta (i.e. arbitrary strides) and > GstVideoCropMeta? You can steal some code for that from eglglessink (and > GL_UNPACK_ROW_LENGTH probably is of use here too). You can probably also steal > some YUV conversion shaders from eglglessink (I think more formats are > supported there and they're also a bit more optimized, at least last time I > looked). GstVideoMeta and GstVideoCropMeta are unsupported at the moment. I have eyed off those shaders. > Do the GL elements handle ALLOCATION queries properly, i.e. sharing the GL > memory allocator and potential pools with each other through that query? Nope. One pool between each pair of elements at the moment. > I asked others on IRC and it seems having the GL stuff in gst-plugins-bad would > be a good idea. Giving more visibility and after feature-equality we can get > rid of eglglessink. Sounds good.
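A minimal sketch of the GMainContext-based marshalling described above, assuming the GL thread runs a GMainLoop on gl_context; the names here are illustrative and not the GstGLWindow API.

#include <glib.h>

/* Illustrative only: run a callback on the thread that iterates the GL
 * thread's GMainContext, and block the calling (element) thread until the
 * callback has finished. */
typedef struct {
  GMutex lock;
  GCond cond;
  gboolean done;
  void (*func) (gpointer data);
  gpointer data;
} InvokeData;

static gboolean
do_invoke (gpointer user_data)
{
  InvokeData *inv = user_data;

  inv->func (inv->data);        /* runs with the GL context current */

  g_mutex_lock (&inv->lock);
  inv->done = TRUE;
  g_cond_signal (&inv->cond);
  g_mutex_unlock (&inv->lock);

  return G_SOURCE_REMOVE;
}

/* callable from any streaming thread */
static void
gl_invoke_sync (GMainContext * gl_context, void (*func) (gpointer), gpointer data)
{
  InvokeData inv;

  g_mutex_init (&inv.lock);
  g_cond_init (&inv.cond);
  inv.done = FALSE;
  inv.func = func;
  inv.data = data;

  g_main_context_invoke (gl_context, do_invoke, &inv);

  g_mutex_lock (&inv.lock);
  while (!inv.done)
    g_cond_wait (&inv.cond, &inv.lock);
  g_mutex_unlock (&inv.lock);

  g_mutex_clear (&inv.lock);
  g_cond_clear (&inv.cond);
}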
(In reply to comment #33) > > What does this mean? Does it *depend* on a main loop running on the default > > main context, or does it have its own main context where it runs a loop in some > > thread and there does the marshalling? And it's only implemented for Wayland so > > far, and not existing for other backends? > > The GstGLWindow (where the context-specific stuff happens) has its own mainloop > that is used for marshelling. Yes, only the wayland backend has it, the others > rely on platform specfic code. And your plan is to make it the same for all backends? > > > Unknown: > > > 1. the cocoa backend (it works with GNUStep, no idea about OSX) > > > > And an EAGL backend for iOS :) What about the backends for WGL, EGL and GLX? > > WGL and GLX work fine. EGL works under x11, I haven't really tested anywhere > else. I had a brief stint at getting it to work with android but ran into > issues getting the libraries to link properly. Once it's merged in -bad, I'll make it work on Android :) > > Also, what about support for GstVideoMeta (i.e. arbitrary strides) and > > GstVideoCropMeta? You can steal some code for that from eglglessink (and > > GL_UNPACK_ROW_LENGTH probably is of use here too). You can probably also steal > > some YUV conversion shaders from eglglessink (I think more formats are > > supported there and they're also a bit more optimized, at least last time I > > looked). > > GstVideoMeta and GstVideoCropMeta are unsupported at the moment. I have eyed > off those shaders. Any comments on the shaders and also the Meta support would be appreciated, I'd like to learn :) Same goes for questions about how it works currently in eglglessink (which I really like to get rid of in favor of a great unified GL sink). > > Do the GL elements handle ALLOCATION queries properly, i.e. sharing the GL > > memory allocator and potential pools with each other through that query? > > Nope. One pool between each pair of elements at the moment. How is it negotiated between the elements other than the ALLOCATION query? And do you plan to have the same pool for a chain of GL elements in the future?
(In reply to comment #34) > (In reply to comment #33) > > > > What does this mean? Does it *depend* on a main loop running on the default > > > main context, or does it have its own main context where it runs a loop in some > > > thread and there does the marshalling? And it's only implemented for Wayland so > > > far, and not existing for other backends? > > > > The GstGLWindow (where the context-specific stuff happens) has its own mainloop > > that is used for marshelling. Yes, only the wayland backend has it, the others > > rely on platform specfic code. > > And your plan is to make it the same for all backends? Yes. It will hopefully make the marshalling code more consistent. > > > > Unknown: > > > > 1. the cocoa backend (it works with GNUStep, no idea about OSX) > > > > > > And an EAGL backend for iOS :) What about the backends for WGL, EGL and GLX? > > > > WGL and GLX work fine. EGL works under x11, I haven't really tested anywhere > > else. I had a brief stint at getting it to work with android but ran into > > issues getting the libraries to link properly. > > Once it's merged in -bad, I'll make it work on Android :) Cool :) > > > Also, what about support for GstVideoMeta (i.e. arbitrary strides) and > > > GstVideoCropMeta? You can steal some code for that from eglglessink (and > > > GL_UNPACK_ROW_LENGTH probably is of use here too). You can probably also steal > > > some YUV conversion shaders from eglglessink (I think more formats are > > > supported there and they're also a bit more optimized, at least last time I > > > looked). > > > > GstVideoMeta and GstVideoCropMeta are unsupported at the moment. I have eyed > > off those shaders. > > Any comments on the shaders and also the Meta support would be appreciated, I'd > like to learn :) Same goes for questions about how it works currently in > eglglessink (which I really like to get rid of in favor of a great unified GL > sink). Will do. > > > Do the GL elements handle ALLOCATION queries properly, i.e. sharing the GL > > > memory allocator and potential pools with each other through that query? > > > > Nope. One pool between each pair of elements at the moment. > > How is it negotiated between the elements other than the ALLOCATION query? And > do you plan to have the same pool for a chain of GL elements in the future? It does use the allocation query, it just doesn't pass on the pool between multiple elements. It probably could though. (In reply to comment #32) > (In reply to comment #15) > > 6. GL + GLES (it may never) > > You mean using GL and GLES in the same process and having GL elements > interoperate with GLES elements? Interesting idea but that really sounds > complicated... and not that important? ;) Correct, however choosing between GL and GLES via an env variable is on the radar.
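For completeness, this is roughly what answering the ALLOCATION query with a shared pool looks like on the downstream side; the helper signature is simplified and gst_buffer_pool_new() stands in for a GL-specific pool class.

#include <gst/gst.h>

/* Sketch only: the downstream element puts its pool into the ALLOCATION
 * query so the upstream GL element can allocate from it instead of creating
 * its own. */
static gboolean
propose_gl_pool (GstQuery * query, GstCaps * caps, guint size)
{
  GstBufferPool *pool = gst_buffer_pool_new ();   /* real code: GL pool class */
  GstStructure *config = gst_buffer_pool_get_config (pool);

  gst_buffer_pool_config_set_params (config, caps, size, 2, 0);
  if (!gst_buffer_pool_set_config (pool, config)) {
    gst_object_unref (pool);
    return FALSE;
  }

  /* min 2 buffers, no maximum; the query keeps its own reference */
  gst_query_add_allocation_pool (query, pool, size, 2, 0);
  gst_object_unref (pool);
  return TRUE;
}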
Matthew, is the wip-platform branch ready to be merged now or ready to have other people starting to do something with it? I'd like to merge this to gst-plugins-bad as soon as you're ready :)
Found a bug in GstGLMemory btw: it should do refcounting for the mapping/unmapping. You can read-map multiple times, and should only remove the memory after the last read-map has been unmapped.
Will fix that later if nobody is faster :)
(In reply to comment #36) > Matthew, is the wip-platform branch ready to be merged now or ready to have > other people starting to do something with it? I'd like to merge this to > gst-plugins-bad as soon as you're ready :) I just merged my wip-platform + your pull request into the 1.0 branch. (In reply to comment #37) > Found a bug in GstGLMemory btw, it should do refcounting for the > mapping/unmapping. You can read-map multiple times, and only should remove the > memory after the last read-map was unmapped. What is 'memory' here? GstGLMemory? The data pointer in system memory? The data pointer in system memory is allocated and freed with the GstGLMemory. The only thing that is performed in unmap is flag setting when unmapping from a write operation.
Indeed, that actually is correct and a very good idea to do it that way :) I've merged everything into gst-plugins-gl on fd.o now... and will merge it later into gst-plugins-bad. Your changes overall look like a great improvement!