Bug 755220 - Feature requests for playbin/decodebin/uridecodebin and/or successors
Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: gst-plugins-base
Version: git master
Hardware/OS: Other Linux
Importance: Normal enhancement
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on: 758960
Blocks: (none)
Reported: 2015-09-18 13:33 UTC by Carlos Rafael Giani
Modified: 2018-11-03 11:41 UTC


Attachments
Patch for ensuring that 100% buffering is always posted (1.42 KB, patch), 2016-02-04 17:18 UTC, Carlos Rafael Giani

Description Carlos Rafael Giani 2015-09-18 13:33:35 UTC
Hello,

These are some notes on my experience with these bins and on features that should be included in them or in successor elements.


1. Proper prebuffering/preparing in playbin
Right now, gapless playback in playbin relies on downstream elements having enough data buffered to cover the transition. In other words, playbin *doesn't* prepare the next stream before the current one ends. If that preparation isn't done very quickly, the downstream queues will eventually run out of data. In addition, if the stream is, for example, h.264 data and a hardware h.264 decoder is used, an initial delay can occur, causing a noticeable stutter during the transition. This can be seen on embedded hardware such as i.MX6 devices.
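
For reference, here is a rough sketch of how this currently looks at the application level, using playbin's "about-to-finish" signal (the URIs are placeholders and error handling is omitted). The next URI is only set when the current one is nearly over, so nothing is really prepared in advance:

/* Minimal gapless-playback sketch using playbin's "about-to-finish" signal.
 * URIs are placeholders; error handling is omitted. */
#include <gst/gst.h>

static void
on_about_to_finish (GstElement *playbin, gpointer user_data)
{
  const gchar **next_uri = user_data;

  /* Runs from a streaming thread shortly before the current URI ends. */
  if (*next_uri != NULL) {
    g_object_set (playbin, "uri", *next_uri, NULL);
    *next_uri = NULL;   /* only queue the next URI once */
  }
}

int
main (int argc, char **argv)
{
  const gchar *next_uri = "file:///path/to/second.ogg";
  GstElement *playbin;
  GstBus *bus;
  GstMessage *msg;

  gst_init (&argc, &argv);

  playbin = gst_element_factory_make ("playbin", NULL);
  g_object_set (playbin, "uri", "file:///path/to/first.ogg", NULL);
  g_signal_connect (playbin, "about-to-finish",
      G_CALLBACK (on_about_to_finish), &next_uri);

  gst_element_set_state (playbin, GST_STATE_PLAYING);

  bus = gst_element_get_bus (playbin);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg != NULL)
    gst_message_unref (msg);

  gst_element_set_state (playbin, GST_STATE_NULL);
  gst_object_unref (bus);
  gst_object_unref (playbin);
  return 0;
}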

I based my code on the concat element that has been in GStreamer since version 1.6. With it, such prebuffering is possible. The downside is that many of the steps playbin normally takes care of have to be done manually.
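
For illustration, a rough sketch of the basic concat wiring (audio only, built with gst_parse_launch for brevity; the URIs are placeholders, and dynamic pad ordering, caps renegotiation between files, seeking etc. are exactly the manual work mentioned above):

/* Sketch of a concat-based chain for two audio streams, built with
 * gst_parse_launch for brevity. URIs are placeholders; dynamic pad
 * ordering, renegotiation between files, and seeking are not handled. */
#include <gst/gst.h>

static GstElement *
build_concat_pipeline (void)
{
  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "concat name=c ! audioconvert ! audioresample ! autoaudiosink "
      "uridecodebin uri=file:///path/to/first.ogg  ! audioconvert ! c. "
      "uridecodebin uri=file:///path/to/second.ogg ! audioconvert ! c. ",
      &error);

  if (pipeline == NULL) {
    g_printerr ("parse error: %s\n", error->message);
    g_clear_error (&error);
  }
  return pipeline;
}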

Prebuffering/preparing does introduce a new set of issues and requirements. First, preparing the next stream in the background causes an increase in CPU usage, since in that situation *two* streams are active (though only one is producing output). This is particularly relevant on embedded devices with limited CPU power. It is therefore necessary to (a) prioritize the threads of the streams (give more CPU time to the current stream) and (b) throttle the computations in the second stream, for example by limiting its input data rate with a pad probe and a condition variable inside that probe.
Second, if both the current and the next stream receive data over the network, the current stream must be given priority. This can be solved in a similar way to (b) above, by throttling the data rate in the next stream.
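
A minimal sketch of (b), a blocking buffer probe plus a condition variable; where the probe is installed and what drives the throttle flag are left open here, a real player would tie that to the current stream's queue levels:

/* Sketch of throttling the "next" stream with a blocking buffer probe and
 * a condition variable. The installation point and the policy that toggles
 * the throttle flag are assumptions for illustration. */
#include <gst/gst.h>

typedef struct {
  GMutex lock;
  GCond cond;
  gboolean throttled;
} Throttle;

static GstPadProbeReturn
throttle_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  Throttle *t = user_data;

  /* Block the next stream's streaming thread while throttling is on. */
  g_mutex_lock (&t->lock);
  while (t->throttled)
    g_cond_wait (&t->cond, &t->lock);
  g_mutex_unlock (&t->lock);

  return GST_PAD_PROBE_OK;
}

/* Called by the player logic when the current stream needs more CPU/bandwidth. */
static void
throttle_set (Throttle *t, gboolean throttled)
{
  g_mutex_lock (&t->lock);
  t->throttled = throttled;
  g_cond_broadcast (&t->cond);
  g_mutex_unlock (&t->lock);
}

/* Install on e.g. the source pad of the next stream's source element. */
static void
throttle_install (Throttle *t, GstElement *next_source)
{
  GstPad *pad = gst_element_get_static_pad (next_source, "src");

  g_mutex_init (&t->lock);
  g_cond_init (&t->cond);
  t->throttled = FALSE;

  gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, throttle_probe, t, NULL);
  gst_object_unref (pad);
}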


2. Make embedded devices first-class citizens during development

Improvements in these bins and/or the development of successors must involve embedded hardware during development, since many of the situations where deadlocks, "clicks", stuttering etc. occur are not reproducible on more powerful machines such as PCs. I have seen many such issues on a single-core Sitara AM3352 machine running at 720 MHz, while the PC build showed none of them.


3. Buffering improvements

"Buffering" in the bins is currently ill-defined and hardly introspectible. Stats such as the current buffer level are inaccessible. Buffer sizes/durations cannot be adjusted during playback. The buffering information must be a lot more detailed and be adjustable on the fly with more fine-grained controls. This is for example necessary with polluted Wi-Fi environments and HTTP transmissions of VBR data such as FLAC or Ogg Vorbis. Initial buffer size estimations can be way off with VBR, leading to frequent interruptions. A player's buffer logic could then resize the buffer based on observations made earlier. But right now, this is not doable with playbin/(uri)decodebin.


Comments are welcome. So are additional ideas for improvements.
Comment 1 Tim-Philipp Müller 2015-09-23 14:55:36 UTC
Thanks for your comment, that's very useful.

1) is probably something that could be fixed within the current design as well if one wanted to, but of course it would be nicer to get it right from the start when designing something new.


I'm not quite sure what there is to do about 2), to be honest. You seem to be under the misconception that the current decodebin/playbin were written for the desktop while disregarding embedded use cases. This is not the case; they were developed specifically with embedded use cases in mind. Of course there are bugs that only happen on low-powered embedded devices and not on "the desktop", just like there are bugs that tend to manifest themselves only on "the desktop". Most of these things are not related to the general decodebin/playbin design per se. Furthermore, these elements are used successfully in millions of embedded devices. There will always be bugs; they have to be debugged and fixed. It's not really a design issue IMHO.

3) Indeed.
Comment 2 Carlos Rafael Giani 2016-02-04 16:30:19 UTC
Since I have some additions to (3), I continue here:


* Currently, there is no way to find out whether a stream being decoded by decodebin is going to send out buffering messages (at least not without manually searching for queue elements etc.). This is essential, however, because otherwise the PLAYING state can be reached before the very first buffering message arrives, leading to a brief bit of playback followed by silence (until buffering finishes). With a "will-use-buffering" signal, applications could know that buffering is going to happen and make sure not to switch to PLAYING right away. Instead, they could initially switch only to PAUSED and, if "will-use-buffering" was emitted, hold off on PLAYING until buffering has completed.

Such a signal could for example be emitted whenever decodebin sets a queue's use-buffering property to TRUE.
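
To illustrate, a rough sketch of how an application might use such a signal (which does not exist today; this is purely the proposal) together with the usual buffering messages:

/* Sketch of how an application might use the *proposed* (non-existent)
 * "will-use-buffering" signal: start in PAUSED, and only go to PLAYING
 * once it is known that either no buffering will happen at all or the
 * first 100% buffering message has arrived. */
#include <gst/gst.h>

typedef struct {
  GstElement *pipeline;
  gboolean will_buffer;   /* set when the proposed signal fires */
} App;

/* Hypothetical handler; it would be connected to decodebin with
 * g_signal_connect (decodebin, "will-use-buffering", ...). */
static void
on_will_use_buffering (GstElement *decodebin, gpointer user_data)
{
  App *app = user_data;

  app->will_buffer = TRUE;
}

static gboolean
on_bus_message (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  App *app = user_data;

  switch (GST_MESSAGE_TYPE (msg)) {
    case GST_MESSAGE_BUFFERING: {
      gint percent;

      gst_message_parse_buffering (msg, &percent);
      if (percent >= 100)
        gst_element_set_state (app->pipeline, GST_STATE_PLAYING);
      break;
    }
    case GST_MESSAGE_STATE_CHANGED:
      if (GST_MESSAGE_SRC (msg) == GST_OBJECT (app->pipeline) &&
          !app->will_buffer) {
        GstState new_state;

        gst_message_parse_state_changed (msg, NULL, &new_state, NULL);
        /* No buffering announced: safe to play once PAUSED is reached. */
        if (new_state == GST_STATE_PAUSED)
          gst_element_set_state (app->pipeline, GST_STATE_PLAYING);
      }
      break;
    default:
      break;
  }
  return TRUE;
}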


* Under certain circumstances, it appears that a multiqueue never emits buffering messages even though use-buffering is set to TRUE. It is difficult to reproduce, and very much looks like a race condition. It is not clear if just having use-buffering set to TRUE is a guarantee that queue2/multiqueue will *always* post a BUFFERING message. If this isn't the case, then I propose we add a "dummy" message that signals 100%, and document that when use-buffering is set to TRUE, there will *always* be at least one buffering message. (Both in queue2 and multiqueue)
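
For context, the canonical application-side handling that would rely on this guarantee; if the final 100% message never arrives, the pipeline stays PAUSED forever, which is exactly the failure mode described above:

/* Canonical buffering handling: pause while buffering, resume at 100%.
 * If queue2/multiqueue never posts the final 100% message, the pipeline
 * remains stuck in PAUSED. */
#include <gst/gst.h>

static void
handle_buffering_message (GstElement *pipeline, GstMessage *msg)
{
  gint percent;

  gst_message_parse_buffering (msg, &percent);

  if (percent < 100)
    gst_element_set_state (pipeline, GST_STATE_PAUSED);
  else
    gst_element_set_state (pipeline, GST_STATE_PLAYING);
}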
Comment 3 Carlos Rafael Giani 2016-02-04 17:18:54 UTC
Created attachment 320459 [details] [review]
Patch for ensuring that 100% buffering is always posted

Here is a proposal for fixing the 100% buffering issue. It is not final; most importantly, certain locks that I have seen inside multiqueue are not taken here, and I am not sure whether they have to be in this particular case.

One potential side effect of this is that a 100% buffering message could be posted more than once. This should be safe.

Another question is whether queue2 would also benefit from something like this (or whether it even needs it).
Comment 4 GStreamer system administrator 2018-11-03 11:41:35 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further in the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/224.