Bug 685730 - audiodecoder: Timestamp and buffer tracking not threadsafe and robust
Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: gst-plugins-base
Version: git master
OS: Other Linux
Importance: Normal critical
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
 
 
Reported: 2012-10-08 14:38 UTC by Sebastian Dröge (slomo)
Modified: 2018-11-03 11:22 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description Sebastian Dröge (slomo) 2012-10-08 14:38:08 UTC
audiodecoder keeps a queue of pending input buffers that is filled before calling handle_frame() and drained in finish_frame(). For elements that have a separate streaming thread on the srcpad (gst-omx, gst-amc, probably others) this can lead to buffers being removed from the queue while the subclass is still inside handle_frame() and using them.


Also, the current approach of just dropping all frames from the queue when -1 is passed to finish_frame() breaks timestamp tracking if the decoder still has input queued when it outputs something.
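
To make the race concrete, here is a minimal, purely illustrative sketch of the gst-omx/gst-amc pattern described above, where a separate srcpad task calls gst_audio_decoder_finish_frame() while the sinkpad thread may still be inside handle_frame(). Only the GstAudioDecoder API calls are real; the queues and function names are assumptions, not code from either element.

#include <gst/gst.h>
#include <gst/audio/gstaudiodecoder.h>

/* Hypothetical queues connecting the element to a hardware codec. */
static GAsyncQueue *codec_input_queue;
static GAsyncQueue *codec_output_queue;

/* Runs on a separate srcpad streaming thread: decoded buffers coming back
 * from the codec are pushed out via gst_audio_decoder_finish_frame().
 * This can execute while the sinkpad thread below is still inside
 * handle_frame(), which is when the base class's pending-frame queue can
 * change under the subclass's feet. */
static void
output_loop (GstAudioDecoder * dec)
{
  GstBuffer *decoded = g_async_queue_try_pop (codec_output_queue);

  if (decoded != NULL)
    gst_audio_decoder_finish_frame (dec, decoded, 1);
}

/* Runs on the sinkpad streaming thread: handle_frame() only hands the
 * input to the codec; the actual output happens asynchronously above. */
static GstFlowReturn
my_handle_frame (GstAudioDecoder * dec, GstBuffer * inbuf)
{
  if (inbuf == NULL)            /* NULL input means "drain" */
    return GST_FLOW_OK;

  /* While this buffer is being queued, output_loop() may already be
   * inside finish_frame() and touching the same internal bookkeeping. */
  g_async_queue_push (codec_input_queue, gst_buffer_ref (inbuf));
  return GST_FLOW_OK;
}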
Comment 1 Julien Isorce 2013-08-23 14:37:04 UTC
Is this why there is no OMX audio decoder ported to GstAudioDecoder in gst-omx?
Comment 2 Sebastian Dröge (slomo) 2013-08-23 15:15:01 UTC
(In reply to comment #1)
> Is this why there is no OMX audio decoder ported to GstAudioDecoder in gst-omx?

No, that's only because nobody has done it yet (well, not entirely true: I know of someone who at least started, but who hasn't shared the code with me yet).
Comment 3 Edward Hervey 2015-03-10 06:41:16 UTC
As far as I can see, the threading issue seems to be solved: the code dealing with the frame list and other computations is protected by the audio decoder stream lock in _handle_frame() and _finish_frame().
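
For reference, a minimal sketch of how a subclass's output path serializes against handle_frame() via the stream lock mentioned above. GST_AUDIO_DECODER_STREAM_LOCK/UNLOCK and gst_audio_decoder_finish_frame() are the real base-class API; the surrounding function is illustrative.

#include <gst/gst.h>
#include <gst/audio/gstaudiodecoder.h>

/* Illustrative only: the base class's frame list and timestamp
 * bookkeeping are only consistent while the stream lock is held. */
static void
output_locked (GstAudioDecoder * dec, GstBuffer * decoded)
{
  GST_AUDIO_DECODER_STREAM_LOCK (dec);
  gst_audio_decoder_finish_frame (dec, decoded, 1);
  GST_AUDIO_DECODER_STREAM_UNLOCK (dec);
}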

That being said, the timestamp handling is completely bust.

What I'm seeing on an amcaudiodecoder instance (omxqcomaudiodecodermultiaac) is the following:

The stream is 48000 Hz stereo AAC.

* Incoming buffers have an average duration of 0:00:00.021333333.
* They are handed to the Android media codec subsystem with proper timestamps (increasing by that duration each time).
* In the output loop thread we get decoded buffers whose timestamps increase by 0:00:00.042541667 (i.e. the implementation seems to provide one output buffer for every two input buffers).
* The _finish_frame algorithm ignores any timestamp we could have put on the decoded buffer and tries to do some magic ... resulting in the final output buffers being spaced by 0:00:00.021333333.

=> Outgoing buffers have the wrong timestamp/duration, and the resulting audio is garbled in many situations on Android (see the duration arithmetic sketched below).
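
For the numbers above: 0:00:00.021333333 corresponds exactly to 1024 samples at 48000 Hz, so the input looks like one AAC frame per buffer (the 1024 figure is inferred from the duration, not reported by the codec). A quick sketch of the conversion:

#include <gst/gst.h>

int
main (void)
{
  guint rate = 48000;
  guint64 samples = 1024;       /* inferred: 0.021333333 s * 48000 Hz */
  GstClockTime dur;

  gst_init (NULL, NULL);

  /* duration = samples / rate, in nanoseconds */
  dur = gst_util_uint64_scale (samples, GST_SECOND, rate);

  g_print ("%" GST_TIME_FORMAT "\n", GST_TIME_ARGS (dur));  /* 0:00:00.021333333 */
  return 0;
}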

I know there's a tolerance property which could mitigate this issue, but it seems completely wrong for audiodecoder not to be able to take into account hints (such as the buffer timestamp) that the implementation can provide.
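
For reference, "tolerance" is a standard GstAudioDecoder property expressed in nanoseconds; a minimal example of raising it, where "mydec" is only a placeholder element name:

#include <gst/gst.h>

/* Illustrative only: bump the timestamp tolerance to 40 ms on some
 * audio decoder element ("mydec" is a placeholder name). */
static void
raise_tolerance (void)
{
  GstElement *dec = gst_element_factory_make ("mydec", NULL);

  if (dec != NULL) {
    g_object_set (dec, "tolerance", (gint64) (40 * GST_MSECOND), NULL);
    gst_object_unref (dec);
  }
}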

A solution would be to have the base audio decoder compare the proposed timestamp *AND* number of samples against the total number of samples output so far (tracked in priv->base_ts and priv->samples_out).
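
A rough sketch of the comparison being proposed, as I read it. priv->base_ts and priv->samples_out are the base-class fields named above; the drift threshold and the resync behaviour are assumptions, not existing code.

#include <gst/gst.h>

/* Illustrative-only check: compare the timestamp the subclass put on the
 * decoded buffer against what base_ts + samples_out predicts, and resync
 * when they drift apart by more than the configured tolerance. */
static void
check_subclass_timestamp (GstClockTime base_ts, guint64 samples_out,
    guint rate, GstClockTime proposed_ts, GstClockTime tolerance)
{
  GstClockTime expected_ts;
  GstClockTimeDiff drift;

  expected_ts = base_ts + gst_util_uint64_scale (samples_out, GST_SECOND, rate);
  drift = GST_CLOCK_DIFF (expected_ts, proposed_ts);

  if (ABS (drift) > (GstClockTimeDiff) tolerance) {
    /* Trust the subclass timestamp: this is where the base class would
     * reset base_ts/samples_out (details omitted in this sketch). */
  }
}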

Marking this as a blocker since we should have proper audio playback on Android for 1.6.
Comment 4 Sebastian Dröge (slomo) 2015-03-15 15:08:32 UTC
That's clearly a bug, but it's unrelated to this specific bug report. finish_frame(2) should do exactly what you want here, independent of all the other things being completely broken :)

I think the only thing missing for you is implementing this for AAC too: http://cgit.freedesktop.org/gstreamer/gst-plugins-bad/tree/sys/androidmedia/gstamcaudiodec.c#n950
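
The linked gstamcaudiodec.c code works out how many codec frames a decoded output buffer covers so that count can be passed to finish_frame(). A hedged approximation of that idea follows; AAC's 1024 samples per frame is a known codec property, but the S16 assumption and everything else here is illustrative and not the exact code at the link.

#include <gst/gst.h>

/* Illustrative approximation: estimate how many AAC codec frames a
 * decoded buffer covers, assuming interleaved S16 output. */
static gint
estimate_aac_frames (gsize decoded_bytes, gint channels)
{
  const gsize bytes_per_sample = sizeof (gint16) * channels;
  const gsize samples_per_frame = 1024;        /* AAC frame length */
  gsize samples = decoded_bytes / bytes_per_sample;

  /* Round up so a partial frame is still accounted for. */
  return (gint) ((samples + samples_per_frame - 1) / samples_per_frame);
}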
Comment 5 Edward Hervey 2015-05-11 08:24:15 UTC
My problem with that logic is that it implies the decoder knows how many frames are in the output. That's great if you have a very specific implementation where you know that, but it's cumbersome to deal with if you don't (as in the case of the generic Android audio decoder wrapper).

Most "generic" abstraction API won't provide you with that information but do/should provide you with timing information. Ergo my proposal.

The alternative is providing helper functions to figure out the frame size (for all codecs and formats (sic)) and praying the implementation doesn't do anything smart ...
Comment 6 Sebastian Dröge (slomo) 2015-05-11 08:33:48 UTC
That's why I think we should remove this requirement from GstAudioDecoder completely. It already breaks quite a bit with the Apple AudioToolbox and the Android MediaCodec decoders. Especially for the latter, there are some devices out there that always output 1024 samples per output buffer (or fewer at EOS), independent of the codec frame size. Handling this properly would currently require the subclass to have an output adapter and chunk things in multiples of the codec frame size.
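
A minimal sketch of the output-adapter workaround described here, assuming the subclass knows the codec frame size in bytes; the function and parameter names are illustrative.

#include <gst/gst.h>
#include <gst/base/gstadapter.h>
#include <gst/audio/gstaudiodecoder.h>

/* Illustrative workaround: accumulate decoder output in a GstAdapter and
 * only push chunks that are whole multiples of the codec frame size, so
 * gst_audio_decoder_finish_frame() can be given an exact frame count. */
static GstFlowReturn
push_adapted_output (GstAudioDecoder * dec, GstAdapter * adapter,
    GstBuffer * decoded, gsize frame_bytes)
{
  GstFlowReturn ret = GST_FLOW_OK;

  gst_adapter_push (adapter, decoded);

  while (ret == GST_FLOW_OK && gst_adapter_available (adapter) >= frame_bytes) {
    GstBuffer *out = gst_adapter_take_buffer (adapter, frame_bytes);

    ret = gst_audio_decoder_finish_frame (dec, out, 1);
  }

  return ret;
}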
Comment 7 GStreamer system administrator 2018-11-03 11:22:15 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further in the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/74.