GNOME Bugzilla – Bug 687182
playbin: autoplugging s/w and h/w accelerated deinterlacers
Last modified: 2018-11-03 11:22:38 UTC
The current implementation of playbin autoplugs the deinterlace element, which does the deinterlacing (in AUTO mode), if the incoming caps have "interlace-mode" != "progressive". But this creates a problem when h/w accelerated elements like the vaapi plugins are plugged into the pipeline. For example, the output of vaapidecode is not meant to be deinterlaced with the deinterlace element. If vaapidecode is autoplugged into the pipeline, vaapipostproc (which performs post-processing functions like deinterlacing for vaapi) should replace the "deinterlace" element.
A video-filter property on playsink would simplify things, I guess, so that we can select/set the postprocessing element on playsink at decoder/sink selection time in playbin. Another way could be to sort the video filters from playsink and autoplug a postprocessing element if it has more than one caps feature in common with the currently configured sink (instead of just creating the deinterlace element).
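For illustration, a minimal sketch of the application side of such a property (assuming playbin/playsink gain a "video-filter" property; the URI and the element choice are just placeholders):

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstElement *playbin, *postproc;

  gst_init (&argc, &argv);

  playbin = gst_element_factory_make ("playbin", NULL);
  postproc = gst_element_factory_make ("vaapipostproc", NULL);

  if (playbin != NULL && postproc != NULL) {
    /* hand the postprocessing element to playbin/playsink, which would
     * plug it into the video chain instead of a generic deinterlace */
    g_object_set (playbin, "video-filter", postproc,
        "uri", "file:///path/to/interlaced-video.ts", NULL);
  }

  if (playbin != NULL)
    gst_object_unref (playbin);
  return 0;
}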
Related bugs: https://bugzilla.gnome.org/show_bug.cgi?id=679031 https://bugzilla.gnome.org/show_bug.cgi?id=696045
I had the thought that we could introduce a Hardware class, if that is sufficient. It would help implement basic features like enabling/disabling hardware acceleration.
(In reply to comment #3)
> I had the thought that we could introduce a Hardware class, if that is
> sufficient. It would help implement basic features like enabling/disabling
> hardware acceleration.

Do you mean implementing a new public interface? Or something else in playbin?
I think an interface would be useful for this, used in the same way as GstColorBalance in playbin right now
Okay, one idea:

Implement a GstVideoPostProc interface that all video postprocessing elements (videobalance, videoconvert, videoscale etc.) would implement.

This should have the following virtual methods:

-- get_supported_capsfeatures(): should return a list of supported caps features.
-- get_supported_vpp_functions(): should return a list of supported vpp functions, as strings or enum values (e.g. vaapipostproc supports deinterlacing, scaling, denoising etc. in a single element).
-- has_vpp_function(): should return TRUE if the element supports a specific vpp function.
-- has_hardware_acceleration(): should return TRUE if the element can provide h/w acceleration (or something like GstColorBalanceType).

These details should be enough to make reasonable autoplugging decisions, I guess. It would then be the job of playsinkvideoconvert to select the vpp elements based on the supplied list of vpp elements implementing the interface and the supplied videosink element.

Does it seem weird? :)
(In reply to comment #6)
> Okay, one idea:
>
> Implement a GstVideoPostProc interface that all video postprocessing elements
> (videobalance, videoconvert, videoscale etc.) would implement.
>
> This should have the following virtual methods:
>
> -- get_supported_capsfeatures(): should return a list of supported caps
> features.

Well, the CAPS query does that already.

> -- get_supported_vpp_functions(): should return a list of supported vpp
> functions, as strings or enum values (e.g. vaapipostproc supports
> deinterlacing, scaling, denoising etc. in a single element).
>
> -- has_vpp_function(): should return TRUE if the element supports a specific
> vpp function.
>
> -- has_hardware_acceleration(): should return TRUE if the element can provide
> h/w acceleration (or something like GstColorBalanceType).
>
> These details should be enough to make reasonable autoplugging decisions, I
> guess. It would then be the job of playsinkvideoconvert to select the vpp
> elements based on the supplied list of vpp elements implementing the interface
> and the supplied videosink element.

Makes sense otherwise, yes. The capabilities (supported functions) could be done similar to what GstPhotography does.
(In reply to comment #7)
> (In reply to comment #6)
>
> Makes sense otherwise, yes. The capabilities (supported functions) could be
> done similar to what GstPhotography does.

Thanks. Do we also need to retrieve the vpp methods from the interface? (For example, there are different methods to do the deinterlacing: GST_DEINTERLACE_SCALER_BOB, GST_DEINTERLACE_LINEAR etc.) GstPhotography implements everything as object properties, which forces the elements implementing the interface to have a compatible property.
Maybe the configurations for the different functions can be hidden behind a GstStructure API, and some way to query what could be set there?
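To make that concrete, a minimal sketch (only the GstStructure API is real here; the interface call in the comment is hypothetical, as are the structure and field names):

#include <gst/gst.h>

static void
configure_deinterlace_example (void)
{
  GstStructure *config;

  /* per-function configuration wrapped in a GstStructure */
  config = gst_structure_new ("deinterlace",
      "method", G_TYPE_STRING, "linear",
      "fields", G_TYPE_STRING, "all", NULL);

  /* a hypothetical interface call would hand the configuration to the
   * element, e.g. gst_video_postproc_set_config (vpp, type, config),
   * plus a matching query for which fields/values are accepted */

  gst_structure_free (config);
}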
A simple scenario would be:

struct _GstVideoPostprocInterface {
  GTypeInterface iface;

  gboolean (*get_vpp_types)       (GstVideoPostproc *vpp,
                                   GstVideoPostprocType *vpp_type);
  gboolean (*has_hw_acceleration) (GstVideoPostproc *vpp);

  /*< private >*/
  gpointer _gst_reserved[GST_PADDING];
};

/* public methods */
gboolean gst_video_postproc_get_vpp_types       (GstVideoPostproc *vpp,
                                                 GstVideoPostprocType *vpp_type);
gboolean gst_video_postproc_has_hw_acceleration (GstVideoPostproc *vpp);
gboolean gst_video_postproc_has_vpp_type        (GstVideoPostproc *vpp,
                                                 GstVideoPostprocType vpp_type);

-- no vpp_method implementation at the moment
-- one read-only property "vpp-type" on the interface
And GstVideoPostprocType would be a GFlags value. What do you think?
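A usage sketch of that proposed API (none of these symbols exist in GStreamer; they are the proposed interface methods from above, and GST_VIDEO_POSTPROC_DEINTERLACE is an assumed flag value), roughly how playsink could use it:

/* Sketch only: the GST_VIDEO_POSTPROC* macros, GstVideoPostprocType and
 * the gst_video_postproc_*() calls are the proposed API, not existing
 * GStreamer code. */
static gboolean
element_can_hw_deinterlace (GstElement * element)
{
  GstVideoPostprocType types = 0;

  if (!GST_IS_VIDEO_POSTPROC (element))
    return FALSE;

  if (!gst_video_postproc_get_vpp_types (GST_VIDEO_POSTPROC (element), &types))
    return FALSE;

  return (types & GST_VIDEO_POSTPROC_DEINTERLACE) != 0 &&
      gst_video_postproc_has_hw_acceleration (GST_VIDEO_POSTPROC (element));
}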
Created attachment 248387 [details] [review] Add new videopostproc interface.
Created attachment 248390 [details] [review] colorbalance: Fix the type in base_init() This is a minor typo fix in colorbalance interface implementation.
Hi,

(In reply to comment #12)
> Created an attachment (id=248387) [details] [review]
> Add new videopostproc interface.

If GstVideoPostprocType is a GFlags value, are we guaranteed that we will never support more than 32 types of VPP functions?

Anyway, I would immediately add the following two: rotation, sharpening. :)
(In reply to comment #14)
> Hi,
>
> (In reply to comment #12)
> > Created an attachment (id=248387) [details] [review]
> > Add new videopostproc interface.
>
> If GstVideoPostprocType is a GFlags value, are we guaranteed that we will
> never support more than 32 types of VPP functions?
>
> Anyway, I would immediately add the following two: rotation, sharpening. :)

Let's wait for comments from others ;)
Comment on attachment 248390 [details] [review]
colorbalance: Fix the type in base_init()

commit c9e65dbccc52ee7a2861d2c96a9ff456fc22e6ea
Author: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Date:   Thu Jul 4 17:09:00 2013 +0300

    colorbalance: Fix the typo in base_init().
Review of attachment 248387 [details] [review]:

Some name nitpicking... GstVideoPostProcessor maybe? And instead of vpp write it out?

Shouldn't there be something to enable/disable the different postproc functions, i.e. enable/disable a denoise filter?

::: gst-libs/gst/video/videopostproc.c
@@ +63,3 @@
+  /* Video post processing operations like deinterlace, colorbalance etc */
+  g_object_interface_install_property (g_class,
+      g_param_spec_flags ("vpp-type",

-typeS. Maybe call it supported-*-types?

Why is this a property but HW acceleration is a vfunc?

::: gst-libs/gst/video/videopostproc.h
@@ +59,3 @@
+  GST_VIDEO_POSTPROC_DENOISE = (1 << 2),
+  GST_VIDEO_POSTPROC_SCALE = (1 << 3),
+  GST_VIDEO_POSTPROC_COLORSPACE = (1 << 4),

This would probably not be just about colorspace conversion but also subsampling conversion, component order conversion, plane layout conversion, etc... like what videoconvert does?
(In reply to comment #17)
> Review of attachment 248387 [details] [review]:
>
> Shouldn't there be something to enable/disable the different postproc
> functions, i.e. enable/disable a denoise filter?

Do you mean adding properties for that? We need this feature anyway, because, for example, vaapipostproc can do many vpp functions in a single pass. Something to enable/disable each filter would be a nice feature.

> Why is this a property but HW acceleration is a vfunc?

Adding more properties to the interface forces the elements to add a compatible property, and gst-inspect will then display all those (usually unnecessary) properties. In my thinking vpp-type was the main feature that should be displayed by gst-inspect. Actually I was thinking about the vpp methods too :) But as long as we only have vpp-type and has_hw_acceleration, I have no objection to adding it as a property. Do you think it is better to keep both of them as properties?

> ::: gst-libs/gst/video/videopostproc.h
> @@ +59,3 @@
> +  GST_VIDEO_POSTPROC_DENOISE = (1 << 2),
> +  GST_VIDEO_POSTPROC_SCALE = (1 << 3),
> +  GST_VIDEO_POSTPROC_COLORSPACE = (1 << 4),
>
> This would probably not be just about colorspace conversion but also
> subsampling conversion, component order conversion, plane layout conversion,
> etc... like what videoconvert does?

So do we need to split each of them? Then we need to answer Gwenole's question :) "only add a maximum of 32 vpp features". Otherwise just keep it as a simple GST_VIDEO_POSTPROC_CONVERT?

Note: personally I prefer GstVideoPostProc instead of GstVideoPostProcessor, but I have no objection to changing it :)
Just for the record, I don't think anything related to this new feature should be committed before we have sorted out the mess that playbin/playsink is in right now. (But that shouldn't prevent anyone from discussing API of course.)
(In reply to comment #19)
> Just for the record, I don't think anything related to this new feature should
> be committed before we have sorted out the mess that playbin/playsink is in
> right now. (But that shouldn't prevent anyone from discussing API of course.)

Sorry, what kind of mess? Is it the issue mentioned in https://bugzilla.gnome.org/show_bug.cgi?id=701997 ? Also, the interface itself does not touch playbin.
(In reply to comment #18)
> (In reply to comment #17)
> > Review of attachment 248387 [details] [review]:
> > Shouldn't there be something to enable/disable the different postproc
> > functions, i.e. enable/disable a denoise filter?
>
> Do you mean adding properties for that? We need this feature anyway, because,
> for example, vaapipostproc can do many vpp functions in a single pass.
> Something to enable/disable each filter would be a nice feature.

I will even say that, in some situations, it would also be possible to get two output surfaces with different pixel formats... :) I have a use case for that, but not something intended to be used through the whole GStreamer pipeline.

BTW, I don't know if we can derive a clean & generic API, because for each VPP function we'd need to pass different settings. Here is what I want on the gst-vaapi side, BTW: https://bugzilla.gnome.org/attachment.cgi?id=248332
(In reply to comment #21) Anyway, this interface is not intended to do something like colorbalance or photography, which deal with the actual vpp operations themselves. It is just to retrieve the supported vpp information, which could be helpful for things like autoplugging.
(In reply to comment #22)
> (In reply to comment #21)
>
> Anyway, this interface is not intended to do something like colorbalance or
> photography, which deal with the actual vpp operations themselves. It is just
> to retrieve the supported vpp information, which could be helpful for things
> like autoplugging.

OK, so if the interface is only used for advertising capabilities, then we should probably name it something like GstVideoProcessingCapsInterface? :) Because, on the contrary, GstColorBalanceInterface does actually perform the parametrization of that filter.
But then we could also just omit this interface and have a GstVideoDeinterlaceInterface, GstVideoDenoiseInterface, etc ;) Or just put the name of the postprocessing "function" in the element factory classification.
So the questions we need clarification on are:

--- Is there any other way that is better than implementing an interface? (As per the current design the interface can provide three capabilities: 1: has_hw_acceleration, 2: which vpps are supported, 3: an API to disable a specific vpp.)
--- If we proceed with the interface:
    --- shall we stick with GFlags values?
    --- what is a proper name for the interface?
    --- is it enough to have GST_VIDEO_POSTPROC_CONVERT instead of GST_VIDEO_POSTPROC_COLORSPACE?
Maybe we should step back a bit and first try to understand which problem we're trying to solve. If it's just about autoplugging, using the GstElementFactory classification should be sufficient. If it's about enabling/disabling postproc functions, you need a real interface. Should it be one interface per postproc function with function-specific parameters? Should it be one generic interface with a generic configuration interface?
I guess there is no GStreamer design objection to adding more than one classification to a single element, something like "Filter/Effect/Video/Deinterlace, Filter/Converter/Video/Scaler, Filter/Effect/Video/Colorbalance" :)

Basically, we only need to enable autoplugging of a deinterlacer in the current playbin at the moment. If I understood correctly, all the other code for autoplugging elements like videoconvert, videobalance etc. would be untouched in this case. So just doing an element factory classification comparison + caps features comparison could be okay, if we are not going for a generalized vpp-element autoplugging feature. But IMHO it would be good to have a vpp interface if we need to enable autoplugging for each element like videobalance, videoconvert etc.

Another thing we should consider is the playback startup delay of playbin. Gwenole might have some profiling information regarding this since he was complaining about it :) So it would be good to avoid string comparisons.

@Gwenole: What do you think about a vpp filter enable/disable option? I guess it would be good to have such a feature since vaapipostproc does many things in one pass, as you said before.
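As a sketch of the classification-comparison part mentioned above (this is just the straightforward string-based form of the idea, using existing GstElementFactory API; the helper name is made up):

#include <string.h>
#include <gst/gst.h>

/* Returns TRUE if the factory's klass metadata advertises a deinterlacer,
 * e.g. "Filter/Effect/Video/Deinterlace". */
static gboolean
factory_is_deinterlacer (GstElementFactory * factory)
{
  const gchar *klass =
      gst_element_factory_get_metadata (factory, GST_ELEMENT_METADATA_KLASS);

  return klass != NULL && strstr (klass, "Deinterlace") != NULL;
}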
(In reply to comment #27)
> Another thing we should consider is the playback startup delay of playbin.
> Gwenole might have some profiling information regarding this since he was
> complaining about it :) So it would be good to avoid string comparisons.

Some profiler results for this would indeed be useful, yes :)

(In reply to comment #27)
> I guess there is no GStreamer design objection to adding more than one
> classification to a single element, something like
> "Filter/Effect/Video/Deinterlace, Filter/Converter/Video/Scaler,
> Filter/Effect/Video/Colorbalance" :)
>
> Basically, we only need to enable autoplugging of a deinterlacer in the
> current playbin at the moment. If I understood correctly, all the other code
> for autoplugging elements like videoconvert, videobalance etc. would be
> untouched in this case. So just doing an element factory classification
> comparison + caps features comparison could be okay, if we are not going for a
> generalized vpp-element autoplugging feature. But IMHO it would be good to
> have a vpp interface if we need to enable autoplugging for each element like
> videobalance, videoconvert etc.
>
> @Gwenole: What do you think about a vpp filter enable/disable option? I guess
> it would be good to have such a feature since vaapipostproc does many things
> in one pass, as you said before.

OK, so maybe an interface like the one you proposed plus something to enable/disable specific functions. And if additional configuration of a postproc function is required, a different interface is used (GstColorBalance for example).

Instead of flags you could also use strings, btw, and make them GQuarks internally, which would then be just a pointer comparison. See the GstStructure API for example.
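The string/GQuark trick reduces to something like this (GLib only; purely illustrative, the capability names are made up):

#include <glib.h>

/* Intern the capability names once; later checks are plain integer
 * (GQuark) comparisons instead of strcmp(). */
static GQuark deinterlace_quark;
static GQuark denoise_quark;

static void
init_capability_quarks (void)
{
  deinterlace_quark = g_quark_from_static_string ("deinterlace");
  denoise_quark = g_quark_from_static_string ("denoise");
}

static gboolean
is_deinterlace_capability (GQuark capability)
{
  return capability == deinterlace_quark;
}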
The reason I used a GFlags value was its bitwise capability: we only needed the gst_video_postproc_get_vpp_types() and gst_video_postproc_has_vpp_type() implementations, since the element implementing the VideoPostProc interface can provide the OR'ed values. Anyway, I guess we can easily switch to a GEnum value instead of using things like GQuarks.
Created attachment 250193 [details] [review] Add new videopostprocessor interface. Some minor name changes and an enable/disable vfunction, as proposed in the previous comments.
Any further update here?
There was some work recently on making playbin handle the case correctly where this postprocessing is handled by the video sink. I think we should handle that case first and then come back to this bug afterwards.
(In reply to comment #32)
> There was some work recently on making playbin handle the case correctly where
> this postprocessing is handled by the video sink. I think we should handle
> that case first and then come back to this bug afterwards.

Do you have any pointers to the work you mentioned? Some bug reports, commits or any relevant IRC log?
(In reply to comment #32)
> There was some work recently on making playbin handle the case correctly where
> this postprocessing is handled by the video sink. I think we should handle
> that case first and then come back to this bug afterwards.

I think there is new infrastructure in place for GStreamer >= 1.4 to support the GstColorBalance interface in sinks and not add videobalance, but I haven't seen anything for deinterlacing. Besides, it would be great if it could also add an intermediate element that handles HW colorbalance, for instance, but that would be another bug report.

The reason I ask is that if a sink supports video/x-raw(meta:GstVideoGLTextureUploadMeta) but not GstColorBalance, the autoplugging machinery would fail because it would try to add the videobalance element, which obviously does not support video/x-raw(ANY).

sree's proposal makes sense to me and could possibly handle those things at the playsink level too with ease.
Deinterlacing should probably be negotiated with the interlacing-related fields on the raw video caps, and then, based on the input and sink caps, a deinterlace element should be chosen that can work with the available caps features.
(In reply to comment #35)
> Deinterlacing should probably be negotiated with the interlacing-related
> fields on the raw video caps, and then, based on the input and sink caps, a
> deinterlace element should be chosen that can work with the available caps
> features.

Yes, this is what I wanted once and tried ("interlace-mode=(string)progressive"), but it miserably failed to auto-plug anything IIRC. :) Not tried recently, to be honest.
I didn't say that it works already :) But that seems like the way it should be implemented; it probably just needs fixes everywhere.
(In reply to comment #35)
> Deinterlacing should probably be negotiated with the interlacing-related
> fields on the raw video caps, and then, based on the input and sink caps, a
> deinterlace element should be chosen that can work with the available caps
> features.

Which means we are only concerned about deinterlacing... What about other vpp elements, like videoscale? I would say that in an ideal situation playbin should only select h/w vpp elements if available (if the sink element has already been selected by decodebin based on caps features).

Reference comments: 27 && 28
I think there could be an easy way to do this in playsink. Right now we always configure the decoder+sink combination together from decodebin, so before creating the deinterlacer in playsink we will always have a sink element configured with playsink. So just attach the deinterlacer that has the largest number of memory caps features in common with the sink element.

The only thing is that the vaapipostproc element should advertise its factory Klass as "Filter/Converter/Video; Filter/Effect/Video/Deinterlace".

Am I missing something?
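For clarity, the "most caps features in common" selection boils down to a metric roughly like this (a simplified sketch, not the actual patch):

/* Sketch: count how many caps features two caps have in common, skipping
 * ANY features so they don't inflate the count. */
static guint
count_common_caps_features (GstCaps * caps1, GstCaps * caps2)
{
  guint i, j, count = 0;

  for (i = 0; i < gst_caps_get_size (caps1); i++) {
    GstCapsFeatures *f1 = gst_caps_get_features (caps1, i);

    if (gst_caps_features_is_any (f1))
      continue;

    for (j = 0; j < gst_caps_get_size (caps2); j++) {
      GstCapsFeatures *f2 = gst_caps_get_features (caps2, j);

      if (!gst_caps_features_is_any (f2) &&
          gst_caps_features_is_equal (f1, f2))
        count++;
    }
  }

  return count;
}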
(In reply to comment #39)
> I think there could be an easy way to do this in playsink. Right now we always
> configure the decoder+sink combination together from decodebin, so before
> creating the deinterlacer in playsink we will always have a sink element
> configured with playsink. So just attach the deinterlacer that has the largest
> number of memory caps features in common with the sink element.
>
> The only thing is that the vaapipostproc element should advertise its factory
> Klass as "Filter/Converter/Video; Filter/Effect/Video/Deinterlace".
>
> Am I missing something?

Yes, playsink can also be used independently of playbin.

However, in the reconfigure function of playsink you have the video sink and the (raw) video caps. So there you can do such a selection based on the caps features and caps... and also check if the sink can directly handle deinterlacing already.
(In reply to comment #40)
> Yes, playsink can also be used independently of playbin.
>
> However, in the reconfigure function of playsink you have the video sink and
> the (raw) video caps. So there you can do such a selection based on the caps
> features and caps... and also check if the sink can directly handle
> deinterlacing already.

That is exactly the place I mentioned, right? gen_video_deinterlace_chain() is always invoked from the do_reconfigure() method.
Created attachment 289965 [details] [review] playsink: autoplug the deinterlacer based on capsfeatures
Bypassing the deinterlace chain creation if the sink element is capable of handling the interlaced content by itself can be done in another patch IMHO.
The latter would need some changes in the existing sinks, e.g. xvimagesink claims to not care about interlacing but really just handles progressive. Not sure what to do about that in a backwards compatible way.

Can that code you have there be refactored a bit so that almost the same code can be reused inside gstplaybin2.c? It looks very similar.
(In reply to comment #44)
> The latter would need some changes in the existing sinks, e.g. xvimagesink
> claims to not care about interlacing but really just handles progressive. Not
> sure what to do about that in a backwards compatible way.
>
> Can that code you have there be refactored a bit so that almost the same code
> can be reused inside gstplaybin2.c? It looks very similar.

IIUC, all the caps feature comparison code you and I wrote for playbin has been taken over to playsink too. Is there still more to copy?
Yeah my point is that we shouldn't copy that code :) Put it into a gstplaybackutils.c inside gst-plugins-base/gst/playback.
(In reply to comment #44)
> The latter would need some changes in the existing sinks, e.g. xvimagesink
> claims to not care about interlacing but really just handles progressive. Not
> sure what to do about that in a backwards compatible way.

The sink elements must provide a caps template field "interlace-mode = { interleaved }", or the same in the CAPS query, if they are capable of handling interlaced content, right? If so, why would we need to change the xvimagesink code?

Do we have any sink element that is capable of handling interlaced content?
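As an illustration (not taken from any existing sink; the format list is just an example), a sink that can genuinely display interlaced content could advertise it in its pad template roughly like this:

#include <gst/gst.h>

/* Sketch: a sink pad template that explicitly lists the interlace modes
 * the sink can handle. */
static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink",
    GST_PAD_SINK,
    GST_PAD_ALWAYS,
    GST_STATIC_CAPS ("video/x-raw, "
        "format = (string) { NV12, I420 }, "
        "interlace-mode = (string) { progressive, interleaved }"));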
(In reply to comment #46)
> Yeah my point is that we shouldn't copy that code :) Put it into a
> gstplaybackutils.c inside gst-plugins-base/gst/playback.

Okay, makes sense! But it would be good to add it as another patch since it touches playbin too.
Yes, a separate patch that comes before the current one :)

(In reply to comment #47)
> (In reply to comment #44)
> > The latter would need some changes in the existing sinks, e.g. xvimagesink
> > claims to not care about interlacing but really just handles progressive.
> > Not sure what to do about that in a backwards compatible way.
>
> The sink elements must provide a caps template field
> "interlace-mode = { interleaved }", or the same in the CAPS query, if they are
> capable of handling interlaced content, right? If so, why would we need to
> change the xvimagesink code?
>
> Do we have any sink element that is capable of handling interlaced content?

v4l2sink can probably handle interlaced content, and vaapisink should be able to do that too, no?
(In reply to comment #49)
> v4l2sink can probably handle interlaced content, and vaapisink should be able
> to do that too, no?

vaapisink has render-function flags to specify TOP and BOTTOM fields. But there are other things that are handled by the decoder, for example reusing the same surface for the top and bottom fields by waiting for the second field to arrive. The rest of the deinterlacing is handled by vaapipostproc.
Created attachment 290002 [details] [review] gst/playback: Add gstplaybackutils.{h,c} to deploy the common subroutines
Created attachment 290003 [details] [review] gstplaybackutils: Skip 'ANY' capsfeature while finding the count of common capsfeatures
Created attachment 290004 [details] [review] playsink: autoplug the deinterlacer based on capsfeatures
Created attachment 290134 [details] [review] playsink: Avoid deinterlace chain creation if the sink is capable of handling the interlaced content by itself
(In reply to comment #54)
> Created an attachment (id=290134) [details] [review]
> playsink: Avoid deinterlace chain creation if the sink is capable of
> handling the interlaced content by itself

This last patch needs rework. The other three patches are enough to enable the autoplugging of a h/w accelerated deinterlacer. The required patches have landed in gstreamer-vaapi as well, so testing with gstreamer-vaapi (master branch) is possible.

The fourth patch can be treated as a separate feature IMHO; I will get back to it later. It would be really helpful if someone could review/commit the others.
Created attachment 290892 [details] [review] playsink: Avoid deinterlace chain creation if the sink is capable of handling the interlaced content by itself
Any chance of getting a review for this?
Most GStreamer developers are now returning from the holiday season. Give them some time to catch up and this will be reviewed soon.
I see slomo merged a couple of the cleanup patches finally. It would be really good to get the auto-plugging merged before 1.6, even if the new interface isn't there yet.
Review of attachment 290892 [details] [review]:

::: gst/playback/gstplaysink.c
@@ +3246,3 @@
+  const gchar *s;
+  guint tmpl_caps_size, i, j;
+  GstPlaySinkInterlaceMode in_mode, sink_mode;

Seems like these can be used uninitialised below.

@@ +3308,3 @@
+        sink_mode != MODE_INTERLACED)
+      need_deinterlacer = TRUE;
+  }

I'm not sure what the difference between these 2 cases is in practice. If playsink has the deinterlace flag set, we need a deinterlacer if the input is interlaced or unspecified and the sink wants progressive, vs. otherwise only if the input is explicitly interlaced? Add a comment explaining the reasoning?

@@ +3747,3 @@
         playsink->vischain->vissinkpad);
+    gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (playsink->vischain->
+            srcpad), NULL);

Everything here and below seems like accidental whitespace changes.
Review of attachment 290004 [details] [review]:

The "maximum number of caps features in common" metric is a bit weird, and the whitespace changes need to go, but otherwise this seems OK.

::: gst/playback/gstplaysink.c
@@ +1500,3 @@
   }

+  /* find the deinterlacer which has more number of common capsfeatures

more -> "the largest" /randomenglishnitpick

This seems like a funny metric for deciding on the best match, but I'm not sure of a better one.

@@ +1506,3 @@
+      gst_element_factory_list_get_elements
+      (GST_ELEMENT_FACTORY_TYPE_MEDIA_VIDEO |
+      GST_ELEMENT_FACTORY_TYPE_MEDIA_IMAGE, GST_RANK_NONE);

Should we ever autoplug elements with rank NONE?

@@ +3640,3 @@
         playsink->vischain->vissinkpad);
+    gst_ghost_pad_set_target (GST_GHOST_PAD_CAST (playsink->
+            vischain->srcpad), NULL);

Whitespace changes below to remove.
-- GitLab Migration Automatic Message -- This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/75.