Bug 570144 - [ffdec] decodes unlinked streams

Status: RESOLVED DUPLICATE of bug 646638
Product: GStreamer
Classification: Platform
Component: gst-libav
Version: git master
Hardware/OS: Other / All
Priority/Severity: Normal / minor
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2009-02-01 21:54 UTC by LRN
Modified: 2013-07-17 10:27 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
  Almost a megabyte of debug information (28.69 KB, application/x-7z-compressed), 2009-03-05 10:45 UTC, LRN
  Log of reading ogg file with theora-encoded video (277.69 KB, application/x-7z-compressed), 2009-03-05 11:51 UTC, LRN
  Log for theora (with default logger) (393.67 KB, application/x-7z-compressed), 2009-03-05 13:43 UTC, LRN

Description LRN 2009-02-01 21:54:02 UTC
Even if an application does not link some of the pads exposed via the "pad-added" signal, decodebin2 still decodes the data for those pads.

Because of that, decoding time increases significantly, since data from the unwanted (unlinked) pads is still being decoded to raw form.

In theory, a decoder (or the bin it is placed in) should check whether its output is required by downstream elements and skip decoding when it is not (see the sketch below). In practice this doesn't happen.
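
A minimal sketch of such a check, assuming GStreamer 0.10 semantics (MyDec and its srcpad field are hypothetical names for a decoder and its source pad):

static GstFlowReturn
my_dec_chain (GstPad *sinkpad, GstBuffer *buf)
{
  MyDec *dec = MY_DEC (GST_PAD_PARENT (sinkpad));

  if (!gst_pad_is_linked (dec->srcpad)) {
    /* Nothing downstream consumes this stream: drop the compressed
     * buffer instead of running the expensive decode. */
    gst_buffer_unref (buf);
    return GST_FLOW_NOT_LINKED;
  }

  /* ... actual decoding and gst_pad_push() would go here ... */
  return GST_FLOW_OK;
}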

Workaround: connect to the "autoplug-select" signal and do not allow decodebin2 to autoplug unwanted streams any further. This prevents it from attaching decoders to the pipeline, so no decoding takes place for those streams (a sketch follows below). Whether a stream is unwanted can only be decided by analyzing the proposed GstElementFactory, because the GstCaps may not contain the necessary information. For example, for audio/video in an mp4 container, the first GstCaps passed has zero structures and its name is "video/quicktime", so it could be mistaken for video, while in reality it is just the container, which *must* be allowed to be autoplugged for the other streams to appear.

Pipeline that exposes this bug: uridecodebin -> audioconvert -> appsink
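
A minimal sketch of this workaround (GStreamer 0.10; decodebin2's GstAutoplugSelectResult enum is private to the element, so its numeric return values are mirrored by hand here, and skip_video_cb is an illustrative name):

/* Return values understood by decodebin2's "autoplug-select" signal. */
enum {
  AUTOPLUG_SELECT_TRY    = 0,  /* plug this factory and continue */
  AUTOPLUG_SELECT_EXPOSE = 1,  /* expose the pad without decoding */
  AUTOPLUG_SELECT_SKIP   = 2   /* do not plug this factory */
};

static gint
skip_video_cb (GstElement *bin, GstPad *pad, GstCaps *caps,
               GstElementFactory *factory, gpointer user_data)
{
  const gchar *klass = gst_element_factory_get_klass (factory);

  /* Refuse video decoders; demuxers and parsers are still plugged,
   * so the audio streams can appear. */
  if (g_strrstr (klass, "Video"))
    return AUTOPLUG_SELECT_SKIP;
  return AUTOPLUG_SELECT_TRY;
}

/* During setup: */
g_signal_connect (uridecodebin, "autoplug-select",
                  G_CALLBACK (skip_video_cb), NULL);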
Comment 1 Edward Hervey 2009-03-05 07:44:20 UTC
A simpler way to expose this would have been to construct a pipeline by hand and not link one of the pads.

Ex: take a QuickTime file with AAC and H.264 and do "uridecodebin ! audio/x-raw-int ! fakesink". This will only link the AAC decoder; ffdec_h264 will not be linked.

And as far as I can tell ... it will not decode anything, since we're not providing ffmpeg with any output buffer (we get the buffers from downstream, and it's not linked). That doesn't mean it's not calling any ffmpeg code! But at least it (1) is not calling the CPU-intensive parts and (2) still has enough information that, if we decide to connect a pad all of a sudden, it will be able to carry on decoding from that point on.

Unless you come up with some logs proving it's actually decoding something, this bug will be closed.
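
For reference, the quoted pipeline can also be built programmatically (0.10 API; the file URI is a placeholder):

GError *err = NULL;
GstElement *pipeline = gst_parse_launch (
    "uridecodebin uri=file:///path/to/file.mov ! audio/x-raw-int ! fakesink",
    &err);
if (pipeline == NULL) {
  g_printerr ("parse error: %s\n", err->message);
  g_error_free (err);
} else {
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
}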
Comment 2 LRN 2009-03-05 09:23:20 UTC
Here's my code, which suffers from the unwanted-decoding phenomenon.
The fix is in the GStreamerAutoplugSelectCallback() function: it prevents the autoplugger from plugging video decoder factories. If you comment out the check and let it always return 0, video streams will be decoded (despite the fact that they are not linked in the OnNewStream handler).

What's wrong with it?

Also, what kind of logging would uncover whether or not the video data is decoded? A debug log (at which level?), or would a message-bus log be enough?

class MyApplicationClass : public foobar
{
public:
  MyApplicationClass(char *filename); 
  
  bool importinit();

  int import();

  void OnNewStream(GstElement *uridecodebin, GstPad *pad);
  int WriteData(GstBuffer *buffer);

  ...<cut> a bunch of functions

private:

   GstElement           *mPipeline;
   GstBus               *mBus;
   GstElement           *mDec;

   GstStaticCaps         mStaticCaps;

   char                 *mName;

   ...<cut> internal stuff
};

MyApplicationClass::MyApplicationClass(char *filename)
:  foobar(filename)
{
   mName = filename;

   mPipeline = mDec = NULL;

   memset(&mStaticCaps,0,sizeof(mStaticCaps));
   mStaticCaps.string = (               
     "audio/x-raw-int, "             
     "signed = (boolean) { TRUE }, "
     "width = (int) { 16, 32 }, "    
     "depth = (int) { 16, 24 }, "    
     "rate = (int) [ 1, MAX ], "     
     "channels = (int) [ 1, MAX ], " 
     "endianness = (int) BYTE_ORDER" 
     );
}

bool MyApplicationClass::importinit()
{
   GstPad *audiopad = NULL;

   // Create a pipeline
   mPipeline = gst_pipeline_new("pipeline");
   // Get its bus; messages are polled from it below
   mBus = gst_pipeline_get_bus(GST_PIPELINE(mPipeline));   

   // Set up source location
   gchar *name_utf8 = g_strdup(mName);
   gchar *name_with_proto_utf8 = g_strdup_printf("file:///%s",name_utf8);
   g_free(name_utf8);
   
   // Create uridecodebin and set up signal handlers
   mDec = gst_element_factory_make("uridecodebin", "decoder");
   g_signal_connect(mDec, "pad-added", G_CALLBACK(GStreamerNewStreamCallback), (gpointer)this);
   g_signal_connect(mDec, "autoplug-select", G_CALLBACK(GStreamerAutoplugSelectCallback), (gpointer)this);
   
   g_object_set(G_OBJECT(mDec), "uri", name_with_proto_utf8, NULL);

   // Add everything into the pipeline
   gst_bin_add_many(GST_BIN(mPipeline), mDec, NULL);

   // Preroll the pipeline (PAUSED)
   GstStateChangeReturn statechange = gst_element_set_state(mPipeline, GST_STATE_PAUSED);

   while (true)
   {
      GstMessage *msg = gst_bus_pop(mBus);
      if (msg)
      {
        //OnBusMessage returns FALSE on GST_MESSAGE_ASYNC_DONE or GST_MESSAGE_EOS and does message logging.
        if (!OnBusMessage(mBus,msg)) 
          break;
      }
   }

   return true; // After some more pondering, the application will call the import() method
}

gint GStreamerAutoplugSelectCallback(GstElement *element, GstPad *pad, GstCaps *caps, GstElementFactory *factory, gpointer data)
{
   // Check factory class
   const gchar *fclass = gst_element_factory_get_klass(factory);
   // Skip video decoding (2 = skip this factory; see decodebin2's
   // "autoplug-select" documentation)
   if (g_strrstr(fclass,"Video"))
      return 2;
   // 0 = let decodebin2 try this factory
   return 0;
}

void GStreamerNewStreamCallback(GstElement *uridecodebin, GstPad *pad, gpointer data)
{
   MyApplicationClass *handle = (MyApplicationClass*)data;
   handle->OnNewStream(uridecodebin, pad);
}

void MyApplicationClass::OnNewStream(GstElement *uridecodebin, GstPad *pad)
{
   GstCaps *caps;
   GstStructure *str;

   caps = gst_pad_get_caps(pad);
   str = gst_caps_get_structure(caps, 0);

   // Check stream type
   const gchar *name = gst_structure_get_name(str);
   gboolean quit = FALSE;
   if (!g_strrstr(name, "audio"))
   {
      quit = TRUE;
   }
   gst_caps_unref (caps);
   if (quit)
     return;

   // Create a unique name (mScs is the stream array from the <cut> internals)
   gchar *mAConvName = g_strdup_printf("aconv-%d",mScs->len);
   // Create an audioconvert element
   GstElement *mConv = gst_element_factory_make("audioconvert", mAConvName);
   // Get its sink pad
   GstPad *convpad = gst_element_get_static_pad(mConv, "sink");
   // Add it to the pipeline
   gst_bin_add(GST_BIN(mPipeline), mConv);

   GstPadLinkReturn ret = gst_pad_link(pad, convpad);
   gst_object_unref(convpad);
   if (GST_PAD_LINK_FAILED(ret))
   {
      g_print("Failed to link uridecodebin to audioconvert - %d\n",ret);
      g_free(mAConvName);
      // gst_bin_remove() drops the bin's (only) ref, destroying mConv
      gst_bin_remove(GST_BIN(mPipeline), mConv);
      return;
   }

   gchar *mASinkName = g_strdup_printf("asink-%d",mScs->len);
   GstElement *mSink = gst_element_factory_make("appsink", mASinkName);
   // Set up sink properties
   caps = gst_static_caps_get(&mStaticCaps);
   gst_app_sink_set_caps(GST_APP_SINK(mSink), caps);
   gst_caps_unref(caps);
   gst_base_sink_set_sync(GST_BASE_SINK(mSink), FALSE);
   gst_app_sink_set_drop(GST_APP_SINK(mSink), FALSE);
   gst_app_sink_set_emit_signals(GST_APP_SINK(mSink), FALSE);
   gst_app_sink_set_max_buffers(GST_APP_SINK(mSink), 10);

   gst_bin_add(GST_BIN(mPipeline), mSink);

   if (!gst_element_link(mConv, mSink))
   {
      g_free(mAConvName);
      g_free(mASinkName);
      // gst_bin_remove() drops the bin's refs, destroying the elements
      gst_bin_remove(GST_BIN(mPipeline), mConv);
      gst_bin_remove(GST_BIN(mPipeline), mSink);
      // Without this return, the code below would touch removed elements
      return;
   }

   // Bring the newly added elements to PAUSED to match the pipeline
   GstStateChangeReturn statechange;
   statechange = gst_element_set_state(mConv, GST_STATE_PAUSED);
   statechange = gst_element_set_state(mSink, GST_STATE_PAUSED);

   // Next it stores mSink internally, nothing interesting.
   ...
}

int MyApplicationClass::import()
{
  GstStateChangeReturn statechange = gst_element_set_state(mPipeline, GST_STATE_PLAYING);

  gboolean doit = TRUE;
  while (doit)
  {
    doit = FALSE;
    GstMessage *msg = gst_bus_pop(mBus);
    while (msg != NULL)
    {
      OnBusMessage(mBus,msg);
      msg = gst_bus_pop(mBus);
    }
    ...<cut> for each stream found in source and for which we've made sink and audioconv (see OnNewStream) do
    {
      ...<cut> get mSink for that stream from internal storage
      GstBuffer *incbuf = gst_app_sink_pull_buffer(GST_APP_SINK(mSink));
      if (incbuf)
      {
        doit = TRUE;
        int res = WriteData(incbuf);
        gst_buffer_unref(incbuf);
      }
    }
  }
  return 0; // status for the caller; the <cut> parts presumably compute a real result
}
Comment 3 LRN 2009-03-05 10:45:38 UTC
Created attachment 130105 [details]
Almost a megabyte of debug information

Yes, I am positive that MY code (without the "autoplug-select" fix) makes a pipeline that actually decodes H.264 video from an mp4 audio/video file - I enabled debug logging and found messages like this in it:
GStreamer ffmpeg - Debug: in file 'gstffmpegdec.c' at line 1524 in function 'gst_ffmpegdec_video_frame' : after decode: len 10306, have_data 208
I've also set up gst_pad_add_data_probe() on both of uridecodebin's pads (audio and video), and only the audio pad triggers the probe. I think decodebin2 decodes the video internally and it never reaches uridecodebin.

Attached is a chunk of the logfile taken at the moment of decoding (i.e. while the pipeline is in the 'playing' state). It's a bit ugly and has strange artefacts in it, but I hope it will be useful.
I can also attach the full debug log up to that point, but it's 235 MB uncompressed (4.4 MB compressed), so that would not be a good idea (looking through it would be painful).
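
For reference, a data probe of this kind could be installed like this (0.10 API; probe_cb is an illustrative name):

/* Probe callback: fires for each buffer or event travelling over the pad. */
static gboolean
probe_cb (GstPad *pad, GstMiniObject *obj, gpointer user_data)
{
  if (GST_IS_BUFFER (obj))
    g_print ("buffer on %s:%s\n", GST_DEBUG_PAD_NAME (pad));
  return TRUE;  /* TRUE lets the data continue downstream */
}

/* In the "pad-added" handler: */
gst_pad_add_data_probe (pad, G_CALLBACK (probe_cb), NULL);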
Comment 4 Edward Hervey 2009-03-05 10:59:55 UTC
OK, so the reason it's decoding is that direct rendering is disabled on your decoders.

Enable it and come back with more debug logs.
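
The property can be flipped on a decoder instance roughly like this (sketch: "direct-rendering" is the gst-ffmpeg decoder property in 0.10, and "decoder" stands for the ffdec_* element instance):

g_object_set (G_OBJECT (decoder), "direct-rendering", TRUE, NULL);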
Comment 5 LRN 2009-03-05 11:51:31 UTC
Created attachment 130121 [details]
Log of reading ogg file with theora-encoded video

from gstffmpegdec.c, line 700+:

  if (ffmpegdec->direct_rendering) {
    GST_DEBUG_OBJECT (ffmpegdec, "trying to enable direct rendering");
    if (oclass->in_plugin->capabilities & CODEC_CAP_DR1) {
      if (oclass->in_plugin->id == CODEC_ID_H264) {
        GST_DEBUG_OBJECT (ffmpegdec, "disable direct rendering setup for H264");
        /* does not work, many stuff reads outside of the planes */
        ffmpegdec->current_dr = FALSE;
        ffmpegdec->extra_ref = TRUE;
      } else {
        GST_DEBUG_OBJECT (ffmpegdec, "enabled direct rendering");
        ffmpegdec->current_dr = TRUE;
      }
    } else {
      GST_DEBUG_OBJECT (ffmpegdec, "direct rendering not supported");
    }
  }

This is exactly my case: I'm reading an mp4 with H.264 video and AAC audio. So this seems to be a coincidence: to test whether uridecodebin decodes unplugged streams or not (I had such doubts from the very beginning, since at some point I thought about decoding streams one by one, not simultaneously), I picked a media file that would expose that for sure - a one-hour recording of a concert, encoded in high-profile H.264. I figured decoding high-profile H.264 video would burden my CPU significantly and take a long time, so it would be obvious that the video stream was indeed being decoded. But judging by the ffmpegdec source, direct rendering is always disabled for H.264, so this test case is inappropriate.

OK, I've tried a theora-encoded video file with vorbis audio (ogg container). Same thing - the video is being decoded. Attaching a log file.

Since you've pointed me at the buffering issue, I've tried to read the log myself. It seems to me that theoradec happily goes all the way down to theora_handle_data_packet(), calls theora_decode_packetin() to decode the data, and only then realizes that there's no buffer because the pad is not linked.
Comment 6 LRN 2009-03-05 12:11:17 UTC
I've tried xvid video too (decoded by gst-ffmpeg). I won't send a log file; I'll just tell you that it does the same thing as theora - it decodes and then finds no buffer to put the result in ("no output buffer"). With direct rendering enabled.
Comment 7 Edward Hervey 2009-03-05 12:15:26 UTC
Could you please attach debug logs of:

* the theora/ogg case
* the xvid case

Also, could you provide them with the default debug handler? The one you're using is missing some information and, due to the lack of colors, is insanely hard to parse.

Comment 8 LRN 2009-03-05 13:43:10 UTC
Created attachment 130130 [details]
Log for theora (with default logger)
Comment 9 LRN 2009-03-05 13:46:05 UTC
http://lrn.no-ip.info/other/gst.debug.xvid.default.7z - Log for xvid (with default logger), too large to be attached.
Comment 10 LRN 2009-03-05 16:19:05 UTC
>gst-launch-0.10 uridecodebin uri=file:///c:/test.avi ! audio/x-raw-int ! progressreport update-freq=1 ! wavenc ! filesink location=c:/test.avi.slow.wav
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
progressreport0 (00:00:01): 55 / 1350 seconds ( 4.1 %)
progressreport0 (00:00:02): 114 / 1350 seconds ( 8.4 %)
progressreport0 (00:00:03): 179 / 1350 seconds (13.3 %)
progressreport0 (00:00:04): 238 / 1350 seconds (17.6 %)
progressreport0 (00:00:05): 316 / 1350 seconds (23.4 %)
progressreport0 (00:00:06): 400 / 1350 seconds (29.6 %)
progressreport0 (00:00:07): 473 / 1350 seconds (35.0 %)
progressreport0 (00:00:08): 538 / 1350 seconds (39.9 %)
progressreport0 (00:00:09): 611 / 1350 seconds (45.3 %)
progressreport0 (00:00:10): 681 / 1350 seconds (50.4 %)
progressreport0 (00:00:11): 756 / 1350 seconds (56.0 %)
progressreport0 (00:00:12): 833 / 1350 seconds (61.7 %)
progressreport0 (00:00:13): 896 / 1350 seconds (66.4 %)
progressreport0 (00:00:14): 975 / 1350 seconds (72.2 %)
progressreport0 (00:00:15): 1036 / 1350 seconds (76.7 %)
progressreport0 (00:00:16): 1103 / 1350 seconds (81.7 %)
progressreport0 (00:00:17): 1166 / 1350 seconds (86.4 %)
progressreport0 (00:00:18): 1217 / 1350 seconds (90.1 %)
progressreport0 (00:00:19): 1289 / 1350 seconds (95.5 %)
progressreport0 (00:00:19): 1350 / 1350 seconds (100.0 %)
Got EOS from element "pipeline0".
Execution ended after 19515625000 ns.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

>gst-launch-0.10 filesrc location=c:/test.avi ! avidemux ! acmmp3dec ! progressreport update-freq=1 ! wavenc ! filesink location=c:/test.avi.fast.wav
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
progressreport0 (00:00:01): 155 / 1350 seconds (11.5 %)
progressreport0 (00:00:02): 344 / 1350 seconds (25.5 %)
progressreport0 (00:00:03): 536 / 1350 seconds (39.7 %)
progressreport0 (00:00:04): 732 / 1350 seconds (54.2 %)
progressreport0 (00:00:05): 924 / 1350 seconds (68.4 %)
progressreport0 (00:00:06): 1112 / 1350 seconds (82.4 %)
progressreport0 (00:00:07): 1301 / 1350 seconds (96.4 %)
progressreport0 (00:00:07): 1350 / 1350 seconds (100.0 %)
Got EOS from element "pipeline0".
Execution ended after 7062500000 ns.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

$ md5sum.exe /c/test.avi.slow.wav
862d0dad1fa0df37357d1ebdc21432cd */c/test.avi.slow.wav

$ md5sum.exe /c/test.avi.fast.wav
862d0dad1fa0df37357d1ebdc21432cd */c/test.avi.fast.wav

The uridecodebin run takes almost three times as long yet produces a bit-identical WAV, so the extra time is spent decoding the unlinked video stream.
Comment 11 Sebastian Dröge (slomo) 2009-09-10 08:38:28 UTC
What's the status of this bug?
Comment 12 Tobias Mueller 2010-03-30 18:12:27 UTC
Edward, could you get back to the reporter, who has provided the logs you requested? TIA!
Comment 13 Edward Hervey 2010-03-30 19:07:00 UTC
An easier solution would be to add a mode to decodebin2/uridecodebin to remove unused elements based on the output caps.

That way, if you only wanted audio, you could set "audio/x-raw-int;audio/x-raw-float" on the caps property of decodebin2, select that mode, and decodebin2 would remove the unused elements once it sees an unrecognized stream.
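
In application code this could look roughly like the following sketch ("dec" stands for the decodebin2/uridecodebin instance; the "caps" property already exists in 0.10, while the element-removal mode described above is the proposed new part):

GstCaps *caps = gst_caps_from_string ("audio/x-raw-int;audio/x-raw-float");
g_object_set (G_OBJECT (dec), "caps", caps, NULL);
gst_caps_unref (caps);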
Comment 14 Sebastian Dröge (slomo) 2011-05-19 17:21:58 UTC
This is related to bug #646638
Comment 15 Edward Hervey 2013-07-17 09:50:01 UTC
Should we mark this as a duplicate of #646638? Or make it depend on it?
Comment 16 Sebastian Dröge (slomo) 2013-07-17 10:27:56 UTC
Duplicate; the videodecoder base class would handle that then.

*** This bug has been marked as a duplicate of bug 646638 ***