GNOME Bugzilla – Bug 733056
videoparsers: Send codec specific headers as bus messages for stream analyzer
Last modified: 2018-11-03 13:24:51 UTC
This is a placeholder for discussing the implementation of bus messages for every codec-specific header parsed by the parser plugins in gst-plugins-bad. Codecanalyzer in gst-devtools will be one of the main users of this feature.

Method 1: We already have a codec-specific meta for mpeg2. The downstream element can use the allocation query to inform the upstream element whether it needs (i.e. can handle) codec metas, but that relies on the upstream element initiating an allocation query. So implement something similar for every parser. That is enough from Codecanalyzer's perspective. Do we really need bus messages?

Method 2: Send all parsed headers to the bus as messages. This is the idea proposed by Edward and Stefan. Open questions:
-- a new message for each header, or a new message for each frame carrying all parsed headers of that frame?
-- a new field in the message to indicate the frame number would be useful.
-- the frame order on the parser's source pad is different from the frame order on the decoder's source pad. How do we communicate to the application both a frame's original order number (the order in which frames are encoded) and its output order number (the order in which frames should be displayed)?
I'd vote for having a 'message' property on the parsers that defaults to FALSE. In the codec-analyzer UI, one would listen for the element-added signal and set it to TRUE. Then the parsers would post the messages over the bus as the media is analyzed, and the UI can populate itself as the messages come in. I'd also leave it to the UI to eventually page analysis data out to disk (e.g. if processing a long video).

Granularity: do the parsers know what makes a frame? Also, for audio there is no frame. I'd add timestamps (as good as we can).

Ordering: upstream ordering (that's stable and known by the parser).
(In reply to comment #1)
> I'd vote for having a 'message' property on the parsers, that defaults to
> FALSE. In the codec-analyzer-ui, one would listen for the element-added signal
> and set it to TRUE. Then the parsers would post the messages over the bus, as
> the media is analyzed. The UI can populate the UI as the messages come in.
> Also I'd leave it to the UI to ev. page out analysis data to disk (e.g. if
> processing a long video).
>
> Granularity: do the parsers know what makes a frame?

Yes, videoparsers know what makes a frame.

> also for audio there is no frame. I'd add timestamps (as good as we can)
>
> Ordering: Upstream ordering (thats stable and know by the parser).

Sorry, I didn't get it. Anyway, this is somewhat different from the actual bug report. What I asked is: how do we correlate the parsed info of frame-x with the decoded frame-x? We cannot be sure that frame-x at the parser's input is the same as the frame-x we are rendering if there are B-frames in the stream.
Why not just grab them as buffers from the pipeline? Sending them as messages seems not very useful
(In reply to comment #3)
> Why not just grab them as buffers from the pipeline? Sending them as messages
> seems not very useful

Because you would need to parse them in the app again? If we want to show them as pretty-printed structures, explaining the fields, the parser would emit them in a parsed form.
They should be in parsed form as GstMeta on the buffers anyway, like what is done for mpegvideoparse currently. And there's a patch to do the same for h264parse.
(In reply to comment #3)
> Why not just grab them as buffers from the pipeline? Sending them as messages
> seems not very useful

Do you mean to stay with only the codec meta? I am actually okay with that approach. One advantage of messages is that we can post a message whenever the parser finishes parsing a particular header, without waiting for the complete frame to be parsed. But then we might need a boxed type for every codec-specific header.

Also, I plan to add support for parsing all the headers (including optional ones like copyright_extension_mpeg2), which is unnecessary for a usual GStreamer pipeline but needed for a codec analyzer. Maybe we can keep the codec-specific meta minimal and, as Stefan proposed, add a send-message property to baseparse that enables extensive parsing. WDYT?

Otherwise we can just add more fields to the codec meta and add a property to baseparse to enable extensive parsing, which means the meta will have a long list of codec-specific structures if and only if we set extensive-parse=TRUE.
I'm more in favor of having it parsed/put in a GstMeta. For headers that aren't processing-critical (such as the copyright extension, user data, etc.), you could skip parsing and just indicate their location/size in the buffer. That would allow downstream elements (or applications intercepting the buffers) to parse them out, without being expensive for the parser.
Or just negotiate it via caps or the allocation query. Things like avdec_h264 for example wouldn't need any meta at all, mxfdemux only the parsed SPS/PPS and vaapidecode probably everything parsed completely inside the meta (yes I know that vaapidecode will never use the meta, just theoretically :) ).
(In reply to comment #8)
> Or just negotiate it via caps or the allocation query. Things like avdec_h264
> for example wouldn't need any meta at all, mxfdemux only the parsed SPS/PPS and
> vaapidecode probably everything parsed completely inside the meta (yes I know
> that vaapidecode will never use the meta, just theoretically :) ).

Then it should be via caps negotiation. The allocation-query method is not reliable, since we cannot always expect the upstream element to initiate an allocation query. E.g. "filesrc ! parser ! fakesink" won't initiate any allocation query.

Any suggestions for the caps-template naming for the different headers? How about headers: {seq, seq_ext, pic, pic_ext, slice, etc.}? Another way could be a single caps field (say, parsing-level) with low, medium and high values to differentiate different levels of parsing.
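As caps strings, the two naming schemes floated here might look something like the following. Both field names (`headers` and `parsing-level`) are hypothetical proposals from this comment, not upstream syntax:

```
# Scheme 1: explicit list of headers to parse
video/mpeg, mpegversion=(int)2, parsed=(boolean)true, headers=(string){ seq, seq-ext, pic, pic-ext, slice }

# Scheme 2: a single coarse-grained field
video/mpeg, mpegversion=(int)2, parsed=(boolean)true, parsing-level=(string)high
```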
Now I am leaning more towards the allocation-query method. I was thinking of adding a caps feature "meta:Gst##Codec##VideoMeta" to each parser, but that seems to pollute the existing parser implementations more. Requesting the necessary header parsing via the allocation query seems a bit clearer. One extra step needed is to initiate an allocation query with NULL caps explicitly from the parser plugin, probably from the start() virtual method. Any comments?
This has been implemented in the following way (only for mpeg2 video):
-- New caps feature "meta:GstMpegVideoMeta" for mpegvideoparse; the requirement for GstMpegVideoMeta is derived from the negotiated caps.
-- Individual header parsing is requested via the allocation query, and the parser always initiates an allocation query with NULL caps to avoid depending on the upstream element to initiate one.
Created attachment 281770 [details] [review] mpegvideometa: Define a Capsfeature name for MpegVideoMeta
Created attachment 281771 [details] [review] videoparsers: mpeg2: find GstMpegVideoMeta requirement through caps negotiation
Created attachment 281772 [details] [review] videoparsers: mpeg2: Allow downstream to request for individual header parsing
All the above patches are on top of the patches I have provided in https://bugzilla.gnome.org/show_bug.cgi?id=704865 .

If everything is fine, then I will remove the sinkpad_allocation_query() virtual method implementation (which was previously used for GstMpegVideoMeta negotiation) in another patch, together with the necessary changes (some renames like need-slice-header to parse-slice-header, etc.).

Also, I can provide another patch to the vdpau plugin to support the new caps feature.
I think the general approach makes sense as discussed on IRC.

(In reply to comment #11)
> --Individual header parsing will be requested via allocation query. And the
> parser will always initiate an allocation query with NULL caps to avoid the
> dependency of upstream-element initiation of allocation query.

Why an ALLOCATION query with NULL caps? You should know the caps at that point already. You would do the ALLOCATION query right after setting the caps on the srcpad.
(In reply to comment #15)
> All the above patches are on top of the patches i have provided in
> https://bugzilla.gnome.org/show_bug.cgi?id=704865 .

Initially, I wanted a mechanism to let the downstream element select what level of meta information it needs (frame, slice, macroblock). However, I am not too sure now. E.g. for the h264parse element, we'd ideally need slice-level information if we want proper MVC parsing (substreaming, etc.).

Or, could we think about a flags-based protocol? I.e. xxx_{FRAME,SLICE,MACROBLOCK} flags would be simpler to merge from all downstream requirements, up to and including the parser element itself.

WDYT?
The level of detail would be selected via the ALLOCATION query. For the meta fields in the query you can add an arbitrary GstStructure with whatever information you want.
(In reply to comment #16)
> Why an ALLOCATION query with NULL caps? You should know the caps at that point
> already. You would do the ALLOCATION query right after setting the caps on the
> srcpad.

The parser only sets the src caps once it finishes parsing the first frame. We cannot delay until that point, so we initiate the allocation query from the start() virtual method. Anyway, we can still use the negotiated caps here. Is there any requirement for these caps to be fixed?
(In reply to comment #17)
> Or, could we think about a flags based protocol? i.e.
> xxx_{FRAME,SLICE,MACROBLOCK} would be simpler to merge from all downstream
> requirements, up to including the parser element itself.
>
> WDYT?

If the downstream element needs a codec-specific meta from the upstream parser, it should be requested via caps negotiation through specific caps features. This is implemented for mpegvideoparse and tested with Codecanalyzer.

If the downstream element needs specific headers parsed and the corresponding meta for those headers, it should be requested through the allocation query. For mpegvideo the structure name is "Gst.Meta.MpegVideo" and individual headers should be requested through structure members like "parse-slice-header", "parse-copy-right-ext", etc.
I didn't get any bugzilla mails for both comments from Sebastian and Gwenole ! Don't know how many are missing like this. :(
Created attachment 281983 [details] [review] videoparsers: mpeg2: Allow downstream to request for individual header parsing
(In reply to comment #22)
> Created an attachment (id=281983) [details] [review]
> videoparsers: mpeg2: Allow downstream to request for individual header parsing

This patch modifies the older one to send the negotiated caps with the allocation query instead of passing NULL.
(In reply to comment #19)
> parser will only set the src caps once it finishes the parsing of first frame.
> We cannot delay till that point. So initiating the allocation query from
> start() virtual method. Anyway we can still use the negotiated caps here. Is
> there any requirement to make this caps as a fixed one?

I see, that makes sense. And no requirement to make the caps fixed :)
Anything else preventing these patches from being pushed? Please push the patches in #704865 first; that should be okay as per https://bugzilla.gnome.org/show_bug.cgi?id=704865#c6
I don't know if anybody reviewed the patches yet. I didn't :)
(In reply to comment #26)
> I don't know if anybody reviewed the patches yet. I didn't :)

Aha, okay :( The whole patch set can be tested using codecanalyzer, which has an internal copy of the codec parsers and video parsers in sync with upstream plus the patches I have provided in #733056, #704865 and #733872. This codecanalyzer repository exists only for ease of development and will not be bit-exact with gst-devtools/codecanalyzer :) https://github.com/sreerenjb/codecanalyzer
Created attachment 282741 [details] [review] mpegvideometa: Define a Capsfeature name for MpegVideoMeta
Created attachment 282742 [details] [review] videoparsers: mpeg2: find GstMpegVideoMeta requirement through caps negotiation
Created attachment 282743 [details] [review] videoparsers: mpeg2: Allow downstream to request for individual header parsing
Added the bug-report link to each patch; for the last patch, the allocation-query initiation has moved from start() to handle_frame().
Review of attachment 282743 [details] [review]:

I wonder if we could skip the allocation query and just use a caps field, e.g.:

mpeg2videoparse ! "video/mpeg\(meta:Mpeg2Video\),headers=sequence+slice" ! ...

If it's omitted, then everything is provided. At the same time, I don't fully understand the need. In stateless CODECs, like VA-API and V4L2 stateless, we need everything. Is this granularity needed for a CODEC parser?
-- GitLab Migration Automatic Message -- This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/158.