GNOME Bugzilla – Bug 709970
rpi: Add support for camera source
Last modified: 2018-05-07 15:41:11 UTC
The work is in the branch https://github.com/tjormola/gst-omx/commits/tjormola/camera of the git remote https://github.com/tjormola/gst-omx.git.
Pushed some minor updates. When reviewing, could you especially try to read, understand, verify and comment on the possible deadlock bug in gstbasesrc.c, as described here: https://github.com/tjormola/gst-omx/blob/761fa7e36c62eca5de1847fc56cc3f7a07c7eb81/omx/gstomxcamerasrc.c#L1785. Thanks.
Thanks a lot for your contribution!

Am I looking at the wrong branch, or is there no GstOMXBaseSrc?
Am I right to assume you did the testing on an RPi? Any other device?
I haven't read the source-related parts of the OMX spec, but I assume your code works.
Can you have a look at bug #678402 and check if that API works for OMX camera devices?

As for the code itself:
Don't do gst_omx_camera_src_format_caps, use gst_caps_new_simple().
Why the stop() -> close() -> close_unlocked() chain of calls; could it all be the same function?
Same comment for the enable/disable/start/stop_capturing functions and their _unlocked variants.
Don't use g_new() for the GstVideoInfo; put it on the stack or in the object structure.
Don't implement get_times(); it's only for fake sources, and OMX sources are real sources.
In the change_state function, the start part should happen before chaining up.
I'm not a big fan of the color enhancement as a string; does it make sense to make it into two separate integer properties instead?
Do you know if it's possible to change any of the properties at runtime?
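As an illustration of the gst_caps_new_simple() suggestion, a minimal sketch assuming fixed I420 output; the helper name and the width/height/framerate values are placeholders, not the element's actual defaults:

#include <gst/gst.h>

/* Sketch: build the fixed raw-video caps with gst_caps_new_simple() instead
 * of a custom helper. Width, height and framerate are placeholder values. */
static GstCaps *
make_i420_caps (void)
{
  return gst_caps_new_simple ("video/x-raw",
      "format", G_TYPE_STRING, "I420",
      "width", G_TYPE_INT, 1920,
      "height", G_TYPE_INT, 1080,
      "framerate", GST_TYPE_FRACTION, 30, 1,
      NULL);
}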
Thanks for your comments. First, I think I need to clear things up a bit. I guess I didn't document the goals of my work properly; all the bits are scattered as comments in the code, but there's no proper summary, so let's do it right here. The patch in its current state is very preliminary and several things need to be addressed before I consider proposing this for merging.

1. Feature: On RPi, it should be possible to tunnel the camera via the encoder component and get e.g. H.264 or MJPEG encoded video passed downstream in the pipeline, if such caps were requested.

2. Feature: Probe the camera output port for supported uncompressed video output formats (even RPi supports other formats than the currently hard-coded YUV I420) and return dynamically generated caps in get_caps() based on the real hardware support. Some kind of mapping between the OpenMAX OMX_COLOR_FORMATTYPE and GStreamer data types needs to be implemented (a rough sketch of the probing loop follows after this list). Also for RPi, we need to probe the encoder output port and add those output formats to the list of supported caps in order to support feature 1 above. Then the buffer filling logic needs to be updated to support both uncompressed and compressed formats and the way buffers are passed in each case (e.g. on RPi, for uncompressed formats you get several OMX buffers from the camera for each video frame, from which the frame needs to be reconstructed, so zero-copy passing of the frame data downstream isn't possible; from the encoder you get one buffer per frame, so it might be possible to do zero-copy).

3. Feature: On RPi, we also need to activate the camera preview output port (OMX port 70) and tunnel it to the OMX null sink component. According to the documentation, usage of the preview port feeds the closed algorithms in the firmware doing white balancing etc. So in order to get the best possible results, the preview port needs to be active even if the data is eventually discarded on the firmware side.

4. Feature: According to the RPi/Broadcom camera component docs, we should set up a callback notifying us when the camera is ready after opening, by using an OMX_CONFIG_REQUESTCALLBACKTYPE based config request, which then causes an OMX_EventParamOrConfigChanged event to be fired when the device has been opened and is ready for use. Currently this isn't done. At least for me the camera works as-is even without checking, but the clean way is to do it as documented by the vendor, of course. The best place to handle the event is the generic event handler in gstomx.c, since there's nothing camera-specific about it.

5. Feature: Make it possible to change camera configuration (e.g. image filter, brightness and so on) while live. The camera configuration code maybe needs a bitmask describing which config options have changed, and should apply only those. Each property setter alters the mask for that property/config mapping and requests a config change. This is executed right away if the camera is live and OK, or delayed/queued if applying the configuration isn't yet possible; all the delayed config changes are then executed in one go during initialization (see the bookkeeping sketch after this list).

6. I consider myself a novice C programmer and I'd really like someone to check that the error and concurrency handling is OK.

7. On my system there's a deadlock happening in gstbasesrc.c which prevents us from using gst_base_src_start_complete() in the GstBaseSrc->start() virtual method. Even though it works without making this call, it's against the GstBaseSrc docs, which state that gst_base_src_start_complete() should be called.
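To illustrate the port probing in feature 2, a rough sketch of the standard OpenMAX IL enumeration loop; map_omx_format() is a hypothetical helper, the port number is only an example (71 is the RPi camera video output port), and error handling is omitted:

#include <string.h>
#include <OMX_Core.h>
#include <OMX_Video.h>
#include <gst/video/video.h>

/* Hypothetical helper mapping OMX_COLOR_FORMATTYPE to a GstVideoFormat;
 * only a couple of well-known mappings are shown. */
static GstVideoFormat
map_omx_format (OMX_COLOR_FORMATTYPE fmt)
{
  switch (fmt) {
    case OMX_COLOR_FormatYUV420PackedPlanar:
    case OMX_COLOR_FormatYUV420Planar:
      return GST_VIDEO_FORMAT_I420;
    case OMX_COLOR_FormatYUV420SemiPlanar:
      return GST_VIDEO_FORMAT_NV12;
    default:
      return GST_VIDEO_FORMAT_UNKNOWN;
  }
}

/* Enumerate the color formats an output port supports by increasing nIndex
 * until the component stops returning OMX_ErrorNone (OMX_ErrorNoMore). */
static void
probe_port_formats (OMX_HANDLETYPE comp, OMX_U32 port)
{
  OMX_VIDEO_PARAM_PORTFORMATTYPE param;
  OMX_U32 i;

  for (i = 0;; i++) {
    memset (&param, 0, sizeof (param));
    param.nSize = sizeof (param);
    param.nVersion.s.nVersionMajor = OMX_VERSION_MAJOR;
    param.nVersion.s.nVersionMinor = OMX_VERSION_MINOR;
    param.nPortIndex = port;    /* e.g. 71, the RPi camera video output port */
    param.nIndex = i;

    if (OMX_GetParameter (comp, OMX_IndexParamVideoPortFormat, &param)
        != OMX_ErrorNone)
      break;

    /* param.eColorFormat would now be mapped and appended to the caps */
    map_omx_format (param.eColorFormat);
  }
}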
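And for the deferred configuration in feature 5, a minimal sketch of the bookkeeping only, with hypothetical flag and struct names; the actual OMX_SetConfig() calls are left out:

#include <glib.h>

/* Hypothetical bookkeeping for deferred camera configuration changes: each
 * property setter marks its flag; changes are applied immediately when the
 * camera is live, or in one go during initialization otherwise. */
typedef enum {
  CONFIG_PENDING_BRIGHTNESS   = 1 << 0,
  CONFIG_PENDING_IMAGE_FILTER = 1 << 1,
  CONFIG_PENDING_SHARPNESS    = 1 << 2
} ConfigPendingFlags;

typedef struct {
  guint pending;      /* bitmask of ConfigPendingFlags */
  gboolean is_live;   /* TRUE once the camera component is up and running */
  gint brightness;
  /* ... cached values for the other properties ... */
} CameraConfig;

static void
apply_pending_config (CameraConfig *cfg)
{
  if (cfg->pending & CONFIG_PENDING_BRIGHTNESS) {
    /* OMX_SetConfig (..., OMX_IndexConfigCommonBrightness, ...) would go here */
  }
  /* ... handle the other flags ... */
  cfg->pending = 0;
}

/* Called from the GObject property setter. */
static void
set_brightness (CameraConfig *cfg, gint value)
{
  cfg->brightness = value;
  cfg->pending |= CONFIG_PENDING_BRIGHTNESS;
  if (cfg->is_live)
    apply_pending_config (cfg);   /* otherwise applied later at init time */
}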
> Am I looking at the wrong branch, or is there no GstOMXBaseSrc?

There's the base class GstOMXSrc defined in gstomxsrc.[ch]. It's just a skeleton right now and not really needed functionality-wise, but the plugin loader code in gstomx.c assumes all the GstOMX* plugins are based on some GstOMX* base class, so I decided to create a minimal base class for sources instead of trying to hack the plugin loader code, since I don't really know it that well.

> Am I right to assume you did the testing on an RPi? Any other device?

That's correct. After finishing all the features (see below), I'm planning to test it against the camera component implementation in Bellagio, and hopefully it's easy to make it work. I don't have any other OpenMAX camera implementations available for testing, sorry.

> I haven't read the source-related parts of the OMX spec, but I assume your code works.

I do get YUV I420 video out of the RPi using this code, so it works in that sense. But whether it's the correct way to do things or not is another story ;) And for one part I know it's not (see feature 4), even if it happens to work for me.

> Can you have a look at bug #678402 and check if that API works for OMX camera
> devices?

I'm no expert, but I think enumeration of the devices is not possible using OpenMAX. However, querying the output formats supported by the device (i.e. the caps, in GStreamer speak) is possible once you know the port number. Feature 2 described above (not yet implemented) would make use of this.

> As for the code itself:
> Don't do gst_omx_camera_src_format_caps, use gst_caps_new_simple().

This will get fixed when feature 2 is implemented.

> Why the stop() -> close() -> close_unlocked() chain of calls; could it all be the same
> function?
> Same comment for the enable/disable/start/stop_capturing functions and their
> _unlocked variants.

Yes, there's probably some redundancy here. Basically I wanted to isolate the GstBaseSrc virtual methods from the functions driving the hardware through the OMX API. As for enable/disable versus start/stop capturing, I think these could stay separate: start/stop are light operations and could be done multiple times in a row without shutting down the device in between. But if there's no real-life use case for this, I guess it doesn't matter. I don't fully understand the lifecycle of a GStreamer pipeline, so I decided to make it so that e.g. pausing video and then resuming live playback is efficient.

> Don't use g_new() for the GstVideoInfo; put it on the stack or in the object
> structure.

It's actually ending up in the object structure. I'm just using a locally defined variable when allocating and parsing the GstVideoInfo structure, and if it's found to be OK, the pointer gets saved to the object structure. I guess I could use the object structure variable all the time to make it more evident (see the sketch below).

> Don't implement get_times(); it's only for fake sources, and OMX sources are real sources.

OK. I think the GStreamer docs weren't doing that good a job of describing when get_times() is required, so I implemented it just in case. Will remove.

> In the change_state function, the start part should happen before chaining up.

OK, that's what I initially did, but back then I was experiencing the deadlock situation and doing things in this order helped a bit, so I just left it like that. At that time I didn't know what was triggering the deadlock (the call to gst_base_src_start_complete()), but now that it's worked around elsewhere, there's no reason not to fix this.
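Regarding the GstVideoInfo point above, a minimal sketch of what keeping the embedded structure in the object could look like; the struct name is a placeholder and only the relevant member is shown:

#include <gst/video/video.h>

/* Placeholder object layout: the GstVideoInfo is embedded in the object
 * structure instead of being heap-allocated with g_new(). */
typedef struct {
  /* ... other members ... */
  GstVideoInfo info;
} GstOMXCameraSrcSketch;

static gboolean
update_video_info (GstOMXCameraSrcSketch *self, GstCaps *caps)
{
  /* Fills the embedded structure in place; no separate allocation needed. */
  return gst_video_info_from_caps (&self->info, caps);
}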
> I'm not a big fan of the color enhancement as a string; does it make sense to
> make it into two separate integer properties instead?

That's how the raspivid tool implements this setting, so I just copied their way of doing things. No problem making it two separate integer properties (a rough sketch follows below).

> Do you know if it's possible to change any of the properties at runtime?

Good point. I haven't tested it, but I guess it should be possible if you think about the applications OpenMAX was designed for. It's pretty obvious that when shooting a video on a mobile phone, it should be possible to tune e.g. image sensitivity modes while live. This should be implemented; I marked it as feature 5. And actually, I'd like to write a simple GUI test program which shows the live video and has controls for all the properties, so that you could change them and see the result in real time. It could be added to gst-omx under examples/camera if an optional dependency on GTK3 is OK, or otherwise kept as a standalone program.
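For reference, a rough sketch of how the color enhancement could be exposed as two integer properties; the property names, ranges and defaults are placeholders only:

#include <gst/gst.h>

enum
{
  PROP_0,
  PROP_COLOR_EFFECT_U,
  PROP_COLOR_EFFECT_V
};

/* Hypothetical helper installing the two placeholder properties; a 0-255
 * range with a neutral default of 128 is assumed here. */
static void
install_color_enhancement_props (GObjectClass *gobject_class)
{
  g_object_class_install_property (gobject_class, PROP_COLOR_EFFECT_U,
      g_param_spec_int ("color-effect-u", "Color effect U",
          "U channel value for the color enhancement effect",
          0, 255, 128, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
  g_object_class_install_property (gobject_class, PROP_COLOR_EFFECT_V,
      g_param_spec_int ("color-effect-v", "Color effect V",
          "V channel value for the color enhancement effect",
          0, 255, 128, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
}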
For encoding tunnelling, I assume that you can get the raw video (for preview/display on screen) and the encoded one at the same time? In that case, it might make sense to have something using GstBaseCamerabinSrc (which is still in -bad and still has some API issues).
Committed some updates, the highlight being that H.264 is now supported on RPi.

> 1. Feature: On RPi, it should be possible to tunnel the camera via the encoder
> component and get e.g. H.264 or MJPEG encoded video passed downstream in the
> pipeline, if such caps were requested.

and

> 2. Feature: Probe the camera output port for supported uncompressed video
> output formats

and

> As for the code itself:
> Don't do gst_omx_camera_src_format_caps, use gst_caps_new_simple().

https://github.com/tjormola/gst-omx/commit/8dea32751058a78431d998b6cf772f9c20380c0e

> 3. Feature: On RPi, we also need to activate the camera preview output port

https://github.com/tjormola/gst-omx/commit/5e347c6f00b02b28d3e17ffee51100c0a5bc46da

> Why the stop() -> close() -> close_unlocked() chain of calls; could it all be the same
> function?
> Same comment for the enable/disable/start/stop_capturing functions and their
> _unlocked variants.

https://github.com/tjormola/gst-omx/commit/a43064e63a7c89a3fbe1bbde7da63e5fd17be2d0

> Don't implement get_times(); it's only for fake sources, and OMX sources are real sources.

https://github.com/tjormola/gst-omx/commit/8ae3bd7bb9b9cfd53b105e47e4720c7088356ca8

> In the change_state function, the start part should happen before chaining up.

https://github.com/tjormola/gst-omx/commit/454daa88e4df374dd79ce3ccfe0253c2f9ac635f

> I'm not a big fan of the color enhancement as a string; does it make sense to
> make it into two separate integer properties instead?

https://github.com/tjormola/gst-omx/commit/2d8002ad52d329de86527e446bb74771e4526010

Still TODO:
- Config callback stuff
- Use the config callback when opening the device
- Test with the Bellagio camera component
- Rework property setting and the application of camera configuration so that properties can be changed while live
- Write a GTK3 test tool
- Is it possible to implement zero-copy for non-fragmented frames?
- Investigate the deadlock issue
- Cleanups and little fixes (assertions etc.)
I think that before this can be merged, most of the code should be put into the base class, and only the video and camera specific code should stay in the subclass. It also probably needs some integration with gstomx.conf.

(In reply to comment #5)
> > I'm not a big fan of the color enhancement as a string; does it make sense to
> > make it into two separate integer properties instead?
> https://github.com/tjormola/gst-omx/commit/2d8002ad52d329de86527e446bb74771e4526010

Maybe the GstPhotography interface from gst-plugins-bad can help with that a bit.

> Still TODO:
> - Test with the Bellagio camera component

I wouldn't worry about that; Bellagio is usually more broken than anything else.

> - Is it possible to implement zero-copy for non-fragmented frames?

What do you mean by non-fragmented frames?
Also, some squashing of the commits would be good, so that in the end we only have a handful of commits to merge instead of your complete development history :)
Perhaps we can close this one now that gst-rpicamsrc works fairly well?
No, I still would like to have OpenMAX IL sources and sinks (and filters) :) Not for RPi, but in general
Hi Tuomas, any update ?
Closing this bug report as no further information has been provided. Please feel free to reopen this bug report if you can provide the information that was asked for in a previous comment. Thanks!