GNOME Bugzilla – Bug 339896
Use gstreamer as a video input source
Last modified: 2008-09-22 18:54:47 UTC
It would be nice if someone could write a PVideoInputDevice plugin implementation which would somehow use gstreamer as a backend. The details of how it would work could be discussed here.
I'm still reading the documentation, but I already have some ideas about how the plugin could work:

(1) it would need a fakesink, from which we would get the data to put into PWLIB's data flow (out of gstreamer's -- which is evil): http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/section-data-spoof.html

(2) the pipeline would probably look something like this: whateversrc -> converter -> filter -> fakesink
  (i) the filter would be configured to accept only whatever format ekiga wants
  (ii) the plugin would create the filter -> fakesink part itself, but leave the rest to gstreamer's autoplugging abilities, which will configure the converter automagically

(3) a PVideoInputDevice is supposed to return the list of possible devices as strings (ouch), so I suppose the plugin will internally store a map from those strings to more complete information. I.e. when asked which devices are available, it will search for them, detect say a "Foo" device in v4lsrc and a "Bar" device in dv1394src, store this as a map, and return only the strings. When asked to open the "Foo" device, the map will lead it directly to the right element. I don't know exactly how this is done, but hopefully reading the documentation will make it clearer.

(4) the Start and Stop methods would just put the pipeline into the playing or paused state.
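To make point (3) concrete, here is a minimal sketch of what such a map could look like (all the names below are mine, not actual plugin code):

#include <map>
#include <string>

/* what the user-visible device string maps to internally */
struct DeviceInfo {
  std::string element;  /* gstreamer source element, e.g. "v4lsrc" */
  std::string device;   /* value for that element's "device" property */
};

/* filled during detection, consulted when Open ("Foo") is called */
static std::map<std::string, DeviceInfo> known_devices;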
Ok, the idea would be to have a base GstElement which would be something like:

videoscale ! video/x-raw-yuv,width=176,height=144 ! fakesink

(with probably an ffmpegcolorspace filter thrown in somewhere too). The plugin would detect all the possible sources to link at the left of this base GstElement and let the user choose one (how, exactly?). On opening, we build a pipeline as "usersrc ! baseelement", and work from there.
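As a rough sketch (0.10 API; the names are made up), building that base element could look like:

GstElement *bin = gst_bin_new ("ekiga_base");
GstElement *scaler = gst_element_factory_make ("videoscale", NULL);
GstElement *filter = gst_element_factory_make ("capsfilter", NULL);
GstElement *sink = gst_element_factory_make ("fakesink", NULL);
GstCaps *caps = gst_caps_new_simple ("video/x-raw-yuv",
                                     "width", G_TYPE_INT, 176,
                                     "height", G_TYPE_INT, 144,
                                     NULL);

g_object_set (filter, "caps", caps, NULL);
gst_caps_unref (caps);
/* make fakesink emit "handoff" so we can grab the frames */
g_object_set (sink, "signal-handoffs", TRUE, NULL);
gst_bin_add_many (GST_BIN (bin), scaler, filter, sink, NULL);
gst_element_link_many (scaler, filter, sink, NULL);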
Just an implementation detail, but I would recommend not using capsfilter + fakesink, but rather implementing your own sink class based on GstBaseSink. This isn't much harder than using fakesink and is much cleaner from an implementation point of view (IMHO).
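For the record, a very condensed sketch of what such a sink could look like with the 0.10 boilerplate macros (all names are made up; properties and error handling omitted):

#include <gst/gst.h>
#include <gst/base/gstbasesink.h>

typedef struct { GstBaseSink parent; } EkigaSink;
typedef struct { GstBaseSinkClass parent_class; } EkigaSinkClass;

GST_BOILERPLATE (EkigaSink, ekiga_sink, GstBaseSink, GST_TYPE_BASE_SINK);

/* called once per frame, from the streaming thread */
static GstFlowReturn
ekiga_sink_render (GstBaseSink *bsink, GstBuffer *buffer)
{
  /* hand GST_BUFFER_DATA (buffer) / GST_BUFFER_SIZE (buffer) over to pwlib */
  return GST_FLOW_OK;
}

static void ekiga_sink_base_init (gpointer klass) { }

static void
ekiga_sink_class_init (EkigaSinkClass *klass)
{
  GST_BASE_SINK_CLASS (klass)->render = ekiga_sink_render;
}

static void
ekiga_sink_init (EkigaSink *sink, EkigaSinkClass *klass)
{
  gst_base_sink_set_sync (GST_BASE_SINK (sink), TRUE);
}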
Well, that will require reading the GStreamer Plugin Writer's Guide, which is what I'm going to do now that I've finished the GStreamer Application Development Manual ;-) It sounds cleaner indeed. Any pointers on how to autodetect the possible sources to link to it?
Hmmm... I'll have to call gst_init[_check] somewhere -- it can't be in ekiga since I'm in a plugin. And if the plugin does it itself, what happens when bug #339897 is solved too? Won't it hurt to have two plugins call gst_init*?
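Unless I'm misreading the documentation, calling gst_init again after the first time is a no-op, so two plugins shouldn't step on each other; a defensive guard is cheap anyway (sketch, my names):

static gboolean
ensure_gst_is_initialized (void)
{
  static gboolean initialized = FALSE;
  GError *error = NULL;

  if (!initialized) {
    initialized = gst_init_check (NULL, NULL, &error);
    if (error != NULL)
      g_error_free (error);
  }
  return initialized;
}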
I find the following line of code fantastic:

pipeline = gst_parse_launch ("v4lsrc ! videoscale ! video/x-raw-yuv,width=176,height=144 ! xvimagesink", NULL);

Now the question is: if I create only the "videoscale ! video/x-raw-yuv,width=176,height=144 ! myfakesink" pipeline, how do I find elements to plug at the left? And more precisely, how do I find out that v4lsrc can be put there, and how do I show the user that we can access "Webcam of this brand"?
> And more precisely, how do I find out that v4lsrc can be put there, and how
> do I show the user that we can access "Webcam of this brand"?

I wouldn't worry about whether v4lsrc can be put there or not; just insert an ffmpegcolorspace element (and other converters like a videorate element, if needed), so that you can plug almost anything on the left. The converters should operate in passthrough mode if no conversions are required. To find out what webcams are available, you can use the GstPropertyProbe interface on v4lsrc (bad and hacky, but it sort of works most of the time), or query this information externally, e.g. via HAL or some other library.
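For example, probing v4lsrc for devices looks roughly like this (0.10; error handling omitted):

#include <gst/interfaces/propertyprobe.h>

GstElement *src = gst_element_factory_make ("v4lsrc", NULL);
GValueArray *devices =
  gst_property_probe_probe_and_get_values_name (GST_PROPERTY_PROBE (src),
                                                "device");
if (devices != NULL) {
  guint i;
  for (i = 0; i < devices->n_values; i++) {
    const gchar *device =
      g_value_get_string (g_value_array_get_nth (devices, i));
    g_print ("possible device: %s\n", device);
  }
  g_value_array_free (devices);
}
gst_object_unref (src);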
The problem isn't only "what to put on the left", but also "how to detect and list what can be put on the left". The v4lsrc wasn't the only one I had in mind (my DV cam would be great too).
This looks very very interesting : http://ronald.bitfreak.net/cupid.php and especially : http://ronald.bitfreak.net/images/gst-rec-2.png
Created attachment 72214 [details] Header file for experimental plugin
Created attachment 72215 [details] Implementation file for experimental plugin
Ok, that code is quite buggy:
- the myBuffer (and MY_BUFFER_SIZE) is hackish;
- there's something wrong with v4l device detection, which forces one to explicitly ask ekiga to redetect devices before it works;
- using this plugin makes CPU usage go to about 35% (with the usual V4L plugin it's around 5%);
- many Get/Set methods (colour, hue, etc.) aren't implemented.

My hope is that by making it public, others will at least have pointers to solutions.

PS1: notice that the v4l detection needs gstreamer and gstreamer-plugins-base from cvs to work (almost) correctly.
PS2: I compile it with:
g++ -shared -DPTRACING -I../pwlib_PNoCollisionDictionary `pkg-config --cflags --libs gstreamer-plugins-base-0.10` -lgstinterfaces-0.10 vidinput_gst.cxx -o gst_pwplugin.so
(it uses my code from bug #332442)
Notice that I used fakesink instead of writing my own sink: I got lazy. This can still change at some point, should the need arise.
Created attachment 74388 [details] Implementation file for experimental plugin This one adds a default AVC device if the right gstreamer plugin is present. Notice that I couldn't test it since my firewire setup is (as is terribly usual...) broken. If someone finds the time to test with a working setup, that would be nice.
Created attachment 77072 [details] Implementation file for experimental plugin

Well, this is a small update to make it compile with recent libs. The current V4L plugin of ekiga uses about 40% of my processor when showing my webcam image. This plugin uses about 70% of my processor showing the same thing, and about 80% showing the video test. I don't understand why this plugin stresses my box that much; could a gstreamer guru have a quick look? Surely I made a big mistake somewhere...
That's quite a lot of CPU usage... have you tried a capture-and-display pipeline with gst-launch to see how much that takes? (When using v4lsrc or v4l2src, use sync=false on the video sink in gst-launch.) The only thing I can see right away is that you do a lot of memcpy()ing in the code. If I'm not mistaken, you copy each frame at least twice: once in OnHandoff and once in GetFrameDataNoDelay(). It would be much better if you could pass the data around without copying (i.e. pass the GstBuffer or a similar structure around). At least the first memcpy() should be easy to avoid by passing the GstBuffer instead. I don't know if this is what eats so much CPU (it seems unlikely if your frames aren't very big), but it's still something that should be avoided.
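A sketch of the no-copy handoff (0.10; the variable names are mine, and some locking between the streaming thread and pwlib's reader would still be needed):

static GstBuffer *last_frame = NULL;  /* protect with a mutex in real code */

static void
on_handoff (GstElement *fakesink, GstBuffer *buffer, GstPad *pad,
            gpointer user_data)
{
  if (last_frame != NULL)
    gst_buffer_unref (last_frame);
  last_frame = gst_buffer_ref (buffer);  /* take a reference, no memcpy */
}

/* GetFrameDataNoDelay can then memcpy GST_BUFFER_DATA (last_frame)
 * straight into opal's buffer: one copy per frame instead of two. */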
This pipeline fails:
gst-launch v4lsrc ! video/x-raw-yuv,format=\(fourcc\)I420,width=176,height=144,framerate=\(fraction\)80/16 ! xvimagesink

This one takes 2% of CPU:
gst-launch v4lsrc ! videoscale ! video/x-raw-yuv,width=176,height=144 ! xvimagesink

I'll try to have a look at the memcpy()ing; there will probably be some left anyway, since we're crossing from gstreamer's streaming framework into pwlib's... Thanks for your reply, and sorry to reply this late; I've been quite busy recently...
Ok, I had a look, and the memcpy from my internal buffer to the view buffer shouldn't hurt. On the other hand, the following probably does:
- there are three buffers involved: (B1) the GstBuffer which gets the data in the first place; (B2) the myBuffer of my class, which serves as an intermediary between gstreamer and pwlib; (B3) the buffer in opal to which the data is finally (from my code's point of view) passed;
- GetFrameData calls GetFrameDataNoDelay, but sleeps if called too often;
- GetFrameDataNoDelay does a memcpy for the B2->B3 move;
- OnHandoff is called from the "handoff" signal of the fakesink, and does a memcpy for the B1->B2 move.

I think I'll try to make OnHandoff do nothing if called too often (I think sleeping in a signal handler is evil -- tell me if I'm wrong): that should avoid useless memcpying. That, or I'll add a myBufferWasRead switch to know whether the frame in B2 needs updating.
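The switch idea would amount to something like this inside OnHandoff (sketch; myBufferWasRead as above, GStreamer 0.10 macros):

/* skip the B1->B2 memcpy while the previous frame is still unread */
if (!myBufferWasRead)
  return;
memcpy (myBuffer, GST_BUFFER_DATA (buffer),
        MIN (MY_BUFFER_SIZE, GST_BUFFER_SIZE (buffer)));
myBufferWasRead = FALSE;  /* GetFrameDataNoDelay sets it back to TRUE */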
Created attachment 80634 [details] Implementation file for experimental plugin

Last version: I put in the myBufferWasRead boolean so I only memcpy when needed, and made capture_duration a *static* int, thinking it would solve the issue. Unfortunately, it doesn't... is there some way I could know/set how fast gstreamer captures, to see if that matters?
Created attachment 80635 [details] Header file for experimental plugin Some things were added to avoid memcpying too much.
Ok, I did a test run: OnHandoff was called 1665 times, but the data was only needed 29 times; the rest was wasted. That means this function is called more than *fifty* times too often! This leads me to think that gstreamer was grabbing frames way too fast; and indeed, trying:

gst-launch videotestsrc ! fakesink

gave me a 100% cpu load too!

So now the question is: how do I make gstreamer reasonable about the fps? Isn't there some magical thing I could add to make it less CPU-eager?
> This leads me think that gstreamer was getting frames way too fast ; and
> indeed, trying :
> gst-launch videotestsrc ! fakesink
> gave me a 100% cpu load too!
>
> So now the question is : how do I make gstreamer reasonable about the fps ?
> Isn't there some magical thing I could add to make it less CPU-eager ?

This is expected behaviour for videotestsrc/fakesink; try:

gst-launch-0.10 videotestsrc ! fakesink sync=true

With v4lsrc it (should) work differently: in the 'normal' decoding/videotestsrc case, the sync/waiting is done in the sink; v4lsrc, however, is regarded as a 'live source', and the sync/waiting will be done in the source and not in the sink. You'd typically want to force a framerate on v4l*src by putting filtercaps with a specific framerate after it, otherwise it might just negotiate to the highest it can do (or thinks it can do).
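For instance (the 15/1 value here is only an example; use whatever the camera actually supports):

gst-launch-0.10 v4lsrc ! video/x-raw-yuv,framerate=\(fraction\)15/1 ! ffmpegcolorspace ! xvimagesink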
wtay just told me about the sync option on irc. What filtercap do you suggest ?
Created attachment 81657 [details] Implementation file for experimental plugin

Ok, this version does away with the CPU-eagerness. Now I just have to figure out why the v4lsrc plugin isn't detected at startup: it means one has to go to the preferences, redetect devices, then choose the webcam again... each time! We're getting there!
Could it be that the call to gst_init launches asynchronous work which isn't finished when the first call to get the list of devices occurs?

PS: pushing severity & priority down... this is ridiculous for a simple enhancement plugin.
Created attachment 81755 [details] Implementation file for experimental plugin

Ok, this version tries to be as smart as Damien's version for audio: it doesn't use static-string filters like before (well, it still uses one, see below). Here is what works and what doesn't:
- videotest works great;
- v4l works great when it starts;
- however, switching the device to it can freeze ekiga for a long time before it starts;
- the detection problem is still there;
- obviously my attempt to make SetFrameSize work is a failure: it doesn't spit out any warning, but if I didn't set the right size on opening (this is the last static-string filtering I do), things won't work. Which raises the question of why ekiga changes the size *after* opening -- and the answer is that ekiga gets the device with CreateOpenedDevice and only modifies it afterwards, which makes sense! So how does one do that with gstreamer? (See the sketch below.)
- I have no idea whether the dv1394 video is supposed to work: I've been unable to use my camcorder either with kino or with ekiga... which probably means my firewire setup is just broken again :-(

Comments and help welcome!
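What I have in mind for SetFrameSize is swapping the caps on the named capsfilter while the pipeline exists (sketch, 0.10 API; whether renegotiation then actually happens is exactly the open question):

GstElement *filter = gst_bin_get_by_name (GST_BIN (pipeline), "ekiga_filter");
GstCaps *caps = gst_caps_new_simple ("video/x-raw-yuv",
    "format", GST_TYPE_FOURCC, GST_MAKE_FOURCC ('I', '4', '2', '0'),
    "width", G_TYPE_INT, width,
    "height", G_TYPE_INT, height,
    NULL);

g_object_set (filter, "caps", caps, NULL);
gst_caps_unref (caps);
gst_object_unref (filter);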
Well, my firewire setup got better since I switched back to good old debian, so I can sadly say my dv1394 code doesn't work wonders. The v4lsrc code doesn't seem to either, which is more annoying (I must have done something stupid... I've done that a lot these last days!). The device detection problem and the device-switching one are still there too :-/ The videotest works great, though, so not all is wrong.
The v4lsrc code works, but takes ages to open the device. Bad cold, but it seems I didn't do everything that wrong :-)
The following works:
gst-launch-0.10 dv1394src ! decodebin ! ffmpegcolorspace ! videoscale ! video/x-raw-yuv,format=\(fourcc\)I420,width=176,height=144 ! fakesink

But my code, which:
- builds a pipeline with dv1394src ! decodebin ! ffmpegcolorspace
- builds a videoscale element
- builds a capsfilter element (with those caps)
- builds a fakesink
- puts all of this in a bin
- tries to link the four elements in the bin...
...fails at that last step. Am I missing something obvious? I'm wondering if I shouldn't go back to the "all in a string, then gst_parse_launch it" method.

Notice that the problem with finding v4lsrc & dv1394src at runtime gave me an interesting result lately: it wasn't possible to detect them even afterwards! Is there a way to detect when gstreamer finishes its initialization?
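One guess about the failing link (just a guess, sketched against the 0.10 API): decodebin only creates its source pads once data starts flowing, so any static link out of it fails at construction time, while gst-launch gets away with the same pipeline because it performs delayed linking. If that's the cause, linking from the "new-decoded-pad" signal should help:

static void
on_new_decoded_pad (GstElement *decodebin, GstPad *pad,
                    gboolean last, gpointer user_data)
{
  GstElement *next = GST_ELEMENT (user_data);  /* e.g. ffmpegcolorspace */
  GstPad *sinkpad = gst_element_get_static_pad (next, "sink");

  if (!gst_pad_is_linked (sinkpad))
    gst_pad_link (pad, sinkpad);
  gst_object_unref (sinkpad);
}

/* at construction time: */
g_signal_connect (decodebin, "new-decoded-pad",
                  G_CALLBACK (on_new_decoded_pad), colorspace);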
Created attachment 82958 [details] Header file for experimental plugin
Created attachment 82959 [details] Implementation file for experimental plugin

This version has good v4lsrc and videotestsrc support. Known problems:
- the dv1394src image is weird (investigating);
- the detection sometimes fails (v4lsrc and dv1394src plugins not found!?).
Created attachment 82988 [details] Screenshot showing dv1394src problem

Here is a screenshot of ekiga using the dv1394src source; the pipeline is:
dv1394src ! decodebin ! videoscale name=ekiga_scaler ! capsfilter caps=video/x-raw-yuv,format=(fourcc)I420,width=176,height=144 name=ekiga_filter ! fakesink name=ekiga_sink
Created attachment 83043 [details] Implementation file for experimental plugin

This implementation now uses the property probe support I just finished implementing. It still has the following two bugs:
1) the v4lsrc and dv1394src plugins aren't detected at startup: I still have to click in the preferences to get my devices. Could it be a threading issue, where I begin detecting things before gstreamer is fully initialized?
2) the dv1394src plugin gives an awful image (see the screenshot).
Gaaahhh...

(1) ekiga uses:
dv1394src ! decodebin ! videoscale name=ekiga_scaler ! capsfilter caps=video/x-raw-yuv,format=(fourcc)I420,width=176,height=144 name=ekiga_filter ! fakesink name=ekiga_sink
It works, but gives an awful image.

(2) from a terminal:
gst-launch-0.10 dv1394src ! decodebin ! videoscale name=ekiga_scaler ! capsfilter caps=video/x-raw-yuv,format=\(fourcc\)I420,width=176,height=144 name=ekiga_filter ! xvimagesink
==> failure with:
ERROR: from element /pipeline0/dv1394src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(1569): gst_base_src_loop (): /pipeline0/dv1394src0: streaming task paused, reason not-linked (-1)

(3) from a terminal:
$ gst-launch-0.10 dv1394src ! decodebin ! xvimagesink
==> works and gives a nice image

Where is the logic in all of this!? Perhaps a gstreamer guru could help...
Created attachment 83247 [details] Header file for experimental plugin

The new version works with v4lsrc, dv1394src and videotestsrc. Remaining issues:
- v4lsrc takes ages to open the device;
- v4lsrc and dv1394src aren't detected correctly on startup.
Created attachment 83248 [details] Implementation for experimental plugin
Grmph. If I leave v4lsrc's autoprobe on, it takes ages but works. If I turn it off, it just doesn't work. And I'm talking about command-line gst-launch-0.10 here, not my plugin... perhaps I should report a bug on it! No new information on the "no v4lsrc nor dv1394src on startup" issue...
Ok, I reported the v4lsrc issue as bug #414489: since my plugin works perfectly with videotestsrc and dv1394src, the problem lies in v4lsrc. That leaves only the startup detection issue for this bug.
I'd like to test the plugin, but am too lazy to figure out how to amend configure.ac, configure.in, the Makefiles and so on (because I know others already did it). Could you post all the necessary files? Otherwise, would it be possible to check the plugin in and disable it by default? That would make testing even easier.
I compile the plugin by hand with a one-line script on my debian/unstable:

g++ -shared -DPTRACING -I../pwlib_PNoCollisionDictionary `pkg-config --cflags --libs gstreamer-plugins-base-0.10` -lgstinterfaces-0.10 vidinput_gst.cxx -o gst_pwplugin.so

where the -I../pwlib_PNoCollisionDictionary is there to find the header posted in bug #332442; nothing is needed to find pwlib, because the required -I is already pulled in by the gstreamer flags. Installing is just copying the result next to the other pwlib video plugins... Run ekiga, enjoy (or not...). Does that help?
The plugin compiles. I copied gst_pwplugin.so to pwlib/devices/videoinput, but no 'gst' video plugin shows up in ekiga. Looking around for some pwlib test program (like gst-inspect), I tried 'make plugintest', but that fails because there is no $HOME/pwlib/make/ptlib.mak. I figured out that I needed to modify the Makefile manually (?!?) to point to the correct location of the sources, but then 'make debug' fails with lots of errors. How can I debug the loading of the gst plugin?
There's no makefile to modify... I certainly hope you didn't compile pwlib yourself, because that generally leads to big problems. You could try running ekiga with "-d 4" and looking at the log, or use strace.
I committed a piece of code to get gstreamer video input in ekiga to my post3 branch.