Bug 431252 - Very simple & universal way to make serious hw acceleration.
Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: don't know
Version: git master
Hardware: Other
OS: All
Priority: Normal
Severity: enhancement
Target Milestone: NONE
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
 
 
Reported: 2007-04-19 06:35 UTC by Baybal Ni
Modified: 2012-10-16 20:38 UTC
See Also:
GNOME target: ---
GNOME version: Unversioned Enhancement


Attachments
opengl test plugin tarball (316.37 KB, application/x-compressed-tar)
2007-04-19 10:15 UTC, Edward Hervey

Description Baybal Ni 2007-04-19 06:35:09 UTC
What about something like glpixelshaderimagesink? It's quite convenient to implement hardware acceleration (or even decoding) using small pixel-shader programs. Modern graphics hardware is capable of serious number crunching, so why not use it in a more convenient way?
Comment 1 Edward Hervey 2007-04-19 10:15:12 UTC
Created attachment 86627 [details]
opengl test plugin tarball
Comment 2 Edward Hervey 2007-04-19 10:16:06 UTC
The attached plugin is a proof of concept that uses the GPU to do some processing. It takes RGB video frames, uses the GPU to apply a light shading effect, and feeds them back into the pipeline.

Example pipeline: videotestsrc ! opengltest ! ximagesink
Comment 3 Baybal Ni 2007-04-19 14:35:50 UTC
Then why not make a fully functional GLX backend? HDTV will come soon, 3D codecs even sooner.
Comment 4 Baybal Ni 2007-04-19 14:44:05 UTC
We need some real, working acceleration for comfortable playback, or at least some skeleton code.
Comment 5 Baybal Ni 2007-04-19 15:02:41 UTC
I think the lack of acceleration is a serious problem for GStreamer.
Hardware acceleration is everywhere:
Microsoft - GDI, DirectDraw
Apple - nsglboost, Cocoa, QuickTime...
A range of GUI toolkits - GTK+, Qt, Tk...
VLC - has a lot of really heavily offloading acceleration backends
NVIDIA - nvxvmc, PureVideo
DirectFB
SiS - officially declared that their acceleration API is fully based on standard pixel shaders
Sun - announced that their next OS will have its own native hardware acceleration API
And even the techno-terrorists from OpenBSD have some acceleration interfaces.

So why should GStreamer stay at the bottom?
Comment 6 Edward Hervey 2007-04-19 15:18:08 UTC
Pavel: are you trying to make a point, do you have a specific request... or are you just ranting? If you can't come up with anything productive, I'm going to close this bug.

When it comes to acceleration, there are already quite a lot of *accelerated* (which is a VERY vague term) plugins: xvimagesink (uses the XVideo extension for video output), glsink (uses OpenGL for video output, which, while we're at it, is EXACTLY what Apple uses for *accelerated* video output), directfbsink, ...

Comment 7 Baybal Ni 2007-04-20 07:28:48 UTC
Make a direct GLX sink.
Make some code that can accelerate operations such as deinterlacing, resizing, YUV->RGB, RGB->YUV and motion compensation through GL pixel shaders.
Comment 8 Edward Hervey 2007-04-20 08:41:21 UTC
Right, so what you're asking for is exactly on par with the code I posted further up... except that I think limiting this processing to the sink is not great.

why shouldn't I be able to use the GPU to:
* resize
* deinterlace
* motion-compensate
* do colour-balance
* do colourspace conversion
* ...

... in the middle of my pipeline? This would be helpful for a LOT more tasks than just playback. And the code above is the proof of concept that we can do this as an input/output element and not just as a sink. Doing it as a sink is simple; Elisa (elisa.fluendo.com) has already managed that, for example.

Also, if you use the glsink GStreamer element, it already does rescaling internally, for example.
Comment 9 Jan Schmidt 2007-05-24 14:23:54 UTC
The problem with the approach in the example code above is that glReadPixels is notoriously slow. On modern cards with a PCI-X bus, it's less so, but on older cards, reading back from the gfx card has always been a slow process. 

It's certainly not feasible to read the frames back and forth for several processing steps - it very quickly saturates the bus.

In general, though, I think we can concoct a method where a GLX context is passed between elements using buffer_alloc, so that multiple processing steps can be performed in a chain and the results are read back only at the end: either one element takes the job of allocating the GLX context and passing it upstream, then reading the pixels back at the end, or each element in the chain recognises that downstream wants image/x-raw-rgb and takes the task on itself.
Comment 10 Edward Hervey 2007-05-30 09:38:22 UTC
(In reply to comment #9)
> The problem with the approach in the example code above is that glReadPixels is
> notoriously slow. On modern cards with a PCI-X bus, it's less so, but on older
> cards, reading back from the gfx card has always been a slow process. 

  What other methods exist to read data back from the GPU? If you check out modern software that offers GPU acceleration, they all recommend a PCI-X bus.

> 
> It's certainly not feasible to read the frames back and forth for several
> processing steps - it very quickly saturates the bus.

  Totally agree. The current code was just a proof of concept to show it's possible to actually do some processing on the GPU and re-use the processed data as a GStreamer element in the middle of a pipeline.

> 
> In general, I think we can concoct a method where a glxcontext is passed
> between elements using buffer_alloc somehow though, so that multiple processing
> steps can be performed in a chain with the results being read back at the end,
> either by an element which has the job of allocated the glx context and passing
> it upstream, then reading the pixels at the end, or by having each element in
> the chain recognise that downstream wants image/x-raw-rgb and taking the task
> on itself.
> 

  Using buffer_alloc to share the context could be a good idea, but it might require a specific MIME type (application/glx-context ?).
  Solving all these issues would best be done with a base class for OpenGL processing elements (derived straight from GstElement to offer maximum flexibility). That base class would take care of setting up/tearing down the GLX context, doing the buffer allocation, and knowing when to send the data to the GPU (because the previous element was not already there) or retrieve it (because downstream elements are not GPU-based), etc.

  Also, we need to take into account conflicts between several GLX contexts running simultaneously (gpu-proc ! non-gpu-proc ! gpu-proc), potential conflicts with OpenGL-accelerated desktops, optimizations when using OpenGL sinks, etc.
Comment 11 Baybal Ni 2007-05-31 09:28:02 UTC
Good idea. We could use GLX inside AIGLX.
Comment 12 Tristan Brindle 2007-06-18 18:54:59 UTC
Would it be possible to write pixel shader routines in liboil? 

That way it would be possible to use the GPU for resizing/deinterlacing/colourspace conversion/all sorts of other things, and fall back to SSE/MMX/AltiVec/whatever if the GPU isn't fast enough or doesn't support what you're trying to do.

Also, with regard to hardware acceleration, many cards now support XvMC for (parts of?) the MPEG-2 decoding process, and according to Wikipedia some VIA cards can now do H.264 decoding this way too (as newer NVIDIA cards can on Windows). An "xvmcimagesink" element would be a welcome addition to GStreamer, if only I were smart and/or knowledgeable enough to write one!
Comment 13 Baybal Ni 2007-06-20 04:27:23 UTC
liboil is under the BSD license.
Comment 14 Adam Lofts 2007-12-08 15:49:29 UTC
I'm having a go at this. The plan is to create a pipeline like videotestsrc ! glfxsrc ! ... elements which process gl textures ... ! glfxsink ! xvimagesink.

At the moment I am able to round-trip videotestsrc ! glfxsrc ! glfxsink ! xvimagesink using a buffer_alloc function in glfxsink, but only a couple of frames are shown. Progress is slow since I don't know GStreamer too well. The code is at: http://alofts.co.uk/git/gstgpu.git
Comment 15 Baybal Ni 2008-10-23 12:56:59 UTC
As far as I know, VIA uses pixel shaders as codecs too.
Comment 16 Tim-Philipp Müller 2012-02-18 13:49:29 UTC
I don't really see the point in keeping this bug open. It's too generic to be useful, and covers too many different things IMHO.

GPU-accelerated video decoding/encoding is probably best handled using va-api/vdpau etc.

Shaders for image manipulation / gst-plugins-gl need re-thinking for 0.11/1.0. If anyone wants to do so, now might be a good time. Whether this bug helps with that or not, I don't know.
Comment 17 Mart Raudsepp 2012-02-19 02:51:46 UTC
Maybe something like http://dummdida.blogspot.com/2012/02/gst-plugins-cl-opencl-plugins-for.html fits under this as well; that's someone who has made plugins that use OpenCL kernels in a GStreamer pipeline to do accelerated buffer manipulations. Making sure that things like that, spanning multiple elements (without bouncing between CPU and GPU memory again), are (more) easily achievable in 1.0 may be the most important thing at this point, and from there getting to a GL texture or something similar quickly, including in playbin-ish pipelines.
Comment 18 Tim-Philipp Müller 2012-10-16 20:38:34 UTC
Closing this; see comment #16. This is (still) being worked on, and hopefully there will be a comprehensive solution, but I don't think we need to keep this bug open for that to happen; there are other, more specific bugs.