Bug 726425 - Add new API to notify minimum buffering needed downstream
Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: gstreamer (core)
Version: git master
OS: Other Linux
Priority: Normal
Severity: enhancement
Target Milestone: 1.3.1
Assigned To: GStreamer Maintainers
Depends on:
Blocks:
 
 
Reported: 2014-03-15 17:54 UTC by Andoni Morales
Modified: 2018-05-04 08:46 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description Andoni Morales 2014-03-15 17:54:57 UTC
We need a new API to notify the pipeline of the minimum buffering needed by the downstream elements, as with HLS, where you need at least the duration of two fragments buffered to handle bandwidth changes gracefully and adapt correctly.

In the current state, the pipeline starts playback right after fetching the first fragment, which means the next fragment must be fetched in less than 10 seconds to avoid starvation. With DASH or Smooth Streaming, queueing 6 seconds of 2-second fragments gives you 3 control points (one every 2 seconds) to correct the download bitrate, while in HLS with 10-second fragments you have to wait another 10 seconds to adapt your download bitrate.
Comment 1 Sebastian Dröge (slomo) 2014-03-16 09:43:44 UTC
I think this could be limited to the adaptive streaming use case, and we could just have a custom sticky downstream event for that... which would be handled by multiqueue (only if use-buffering=true) to modify its limits.

That wouldn't be the cleanest API, but I'm not sure what a more generic API for more use cases would look like.


We need to fix this before 1.4.0 as currently there's a regression here.
Comment 2 Sebastian Dröge (slomo) 2014-03-16 16:29:21 UTC
So we need something in core for this and it should probably be a sticky event. I would propose a GST_EVENT_TYPE_BUFFERING event that contains our standard queue properties: max-size-{buffers,time,bytes}.

This could also be used by other network sources later to tell the queue2 inside uridecodebin to choose a potentially better value for the buffering than the hardcoded 2 seconds.

Any opinions?
Comment 3 Andoni Morales 2014-03-23 14:27:55 UTC
It looks good to me, and I don't see a better way of handling it.
Comment 4 Nicolas Dufresne (ndufresne) 2014-05-18 23:06:10 UTC
This seems like a fair solution, but I'd like to know how we solve the problem of buffer pool sizes. The buffering event's travel will end at the queue, and the queue will be in the middle of an allocation query that may impact its capacity to queue the required amount.

Should we simply get the queue to add that information to the minimum buffers for the pool configuration? If so, we need this event to be sent before the allocation query happens (which might mean before the caps event). Would that be an issue?
Comment 5 Sebastian Dröge (slomo) 2014-05-19 06:23:27 UTC
It's not really a blocker anymore after Thiago's changes to the adaptive streaming demuxers.

For your question, those two issues are orthogonal. What I'm talking about here is buffering as in making sure that at least a minimum amount of data is buffered (i.e. think of buffering messages here). What you mean is more like the existing buffer-size event, which is very closely related to the buffer counts in the allocation query.
Comment 6 Sebastian Dröge (slomo) 2018-05-04 08:46:49 UTC
Let's worry about this if it ever becomes a pressing issue again for some use-case.