Bug 766929 - Adaptive demuxer CANNOT estimate bandwidth precisely under QOS service
Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: gst-plugins-bad
Version: git master
OS: Other Windows
Importance: Normal normal
Target Milestone: NONE
Assigned To: GStreamer Maintainers
Depends on:
Blocks:
 
 
Reported: 2016-05-27 00:09 UTC by WeiChungChang
Modified: 2017-07-12 14:53 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
Experimental environment & result (138.00 KB, application/msword), 2016-05-27 00:09 UTC, WeiChungChang
Experimental environment & result (300.64 KB, application/pdf), 2016-05-27 00:15 UTC, WeiChungChang

Description WeiChungChang 2016-05-27 00:09:51 UTC
Created attachment 328624 [details]
Experimental environment & result

Dear all:
 
I found a potential issue with the adaptive demuxer's bitrate calculation.
 
The experiment below shows that the bitrate calculation is unstable.
 
1. Set the media content server's QoS limit to a maximum of 400 kbit/s.
 
2. Play back DASH media with two representations, with bandwidths of 256 kbit/s and 1500 kbit/s.
 
3. We expect playback to select the 256 kbit/s profile, since it is the one not greater than the QoS bandwidth of 400 kbit/s. However, as the attached figure shows, the actual result depends on the size of the internal buffer.
 
Notice that the vertical axis of the NG case shows that the ESTIMATED bitrate can sometimes exceed 1500 kbit/s; in fact it does so in 8 of 32 measurements. This means the current bandwidth calculation is flawed: in some cases it OVER-estimates the network bandwidth.
 
The only difference between the two cases is the internal buffer applied to the GStreamer pipeline.
 
The attached file also gives the internal buffer settings for the NG and OK cases.
In the NG case, the internal buffer can hold at most about 5 to 6 seconds of media.
In the OK case, it can buffer almost the entire DASH period.
 
Why does this issue happen? The root cause MAY be as in the example below.
 
1. Assume each DASH segment has a fixed duration of 3 seconds. Since the QoS bandwidth (400 kbit/s) is larger than 256 kbit/s and we select the 256 kbit/s representation at the beginning, the internal buffer fills up quickly: it takes about (256*5)/(400-256) = 8.889 s to fill it (in fact, 256 kbit * 3 is the MAXIMUM size each segment could be; segments are usually smaller).
 
2. Once the internal buffer is full, the adaptive stream's source can NOT push data into multiqueue0 and gets stuck there. Notice that the time spent waiting to push data is NOT included in the bandwidth calculation.
 
3. The QoS server does NOT receive any data request from the client for a while (for example, 0.8 s), since the client's source is stuck pushing data into mq0. Once the client can push data into mq0 again, the QoS server thinks it can deliver the rest of the 400-kbit quota in the remaining 0.2 s of its window, and it indeed does.
 
The GStreamer player then starts to download the next fragment and completes receiving it within 0.2 s. So it believes the bandwidth is over 256K/0.2 s > 1280 kbit/s. This is why, in the NG case, you see some wrongly estimated bandwidths far larger than the real QoS limit of 400 kbit/s.
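The buffer-fill time in step 1 above can be checked with a small sketch. All numbers are the hypothetical values from this report (400 kbit/s QoS cap, 256 kbit/s representation, an internal buffer holding about 5 seconds of media), not measurements:

```python
def time_to_fill(buffer_media_s, rep_kbps, qos_kbps):
    # The buffer holds buffer_media_s seconds of media encoded at
    # rep_kbps; it fills at qos_kbps and drains at rep_kbps
    # (real-time playback), so the net fill rate is the difference.
    buffer_kbit = buffer_media_s * rep_kbps
    return buffer_kbit / (qos_kbps - rep_kbps)

print(round(time_to_fill(5, 256, 400), 3))  # -> 8.889
```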
 
I think the problem is that we estimate the bandwidth as: after downloading a fragment of N bytes in T seconds, Bandwidth = N*8/T.
 
This has a problem handling burst reads, where T is very small (less than 1 second). QoS services usually enforce their limit per second, so a burst read gives a wrong estimate of the current bandwidth. Over-estimating the network bandwidth this way can eventually lead to buffer UNDER-RUN.
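A sketch of that per-fragment estimator shows the burst sensitivity. The fragment size and timings are the hypothetical values from the scenario above, not GStreamer code:

```python
def per_fragment_bandwidth_kbps(n_bytes, t_seconds):
    # The estimator described above: Bandwidth = N*8/T over the
    # wall time of a single fragment download.
    return n_bytes * 8 / 1000.0 / t_seconds

# Steady case: a 256-kbit (32000-byte) fragment delivered at the
# real 400 kbit/s cap takes 0.64 s.
steady = per_fragment_bandwidth_kbps(32000, 0.64)
# Burst case from the report: the same fragment arrives in 0.2 s
# because the QoS server dumps a whole 1-second quota at once.
burst = per_fragment_bandwidth_kbps(32000, 0.2)
print(steady, burst)  # -> 400.0 1280.0
```

The burst measurement reports more than three times the real QoS limit, which is exactly the over-estimation visible in the NG case.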
 
But if we instead estimate the bandwidth by triggering the estimator every second and counting how many bytes (N) were downloaded in that second, so that Bandwidth = N*8, this issue could be avoided.
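The proposed per-second scheme could be sketched like this. The class and method names are illustrative only, not GStreamer API:

```python
class PeriodicBandwidthEstimator:
    """Count bytes over fixed 1-second ticks instead of timing
    each fragment download (sketch of the proposal above)."""
    def __init__(self, window_s=1.0):
        self.window_s = window_s
        self.bits = 0

    def on_bytes(self, n_bytes):
        # Called whenever data arrives, burst or not.
        self.bits += n_bytes * 8

    def on_tick(self):
        # Called once per window; returns kbit/s and resets the counter.
        kbps = self.bits / 1000.0 / self.window_s
        self.bits = 0
        return kbps

est = PeriodicBandwidthEstimator()
# The 0.8 s stall and the 0.2 s burst both fall inside the same
# 1-second tick, so the burst is averaged out: 256 kbit counted
# over 1 s gives 256 kbit/s, well under the 400 kbit/s QoS cap.
est.on_bytes(32000)   # 32000 bytes = 256 kbit
print(est.on_tick())  # -> 256.0
```

Because the stall time is inside the measurement window rather than excluded from it, the estimate stays close to the rate the QoS server actually enforces.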
 
Please evaluate whether we can estimate the bandwidth by the mentioned method.
Comment 1 WeiChungChang 2016-05-27 00:15:05 UTC
Created attachment 328625 [details]
Experimental environment & result
Comment 2 Edward Hervey 2017-01-16 13:17:49 UTC
Not sure how to solve this in the playbin2/decodebin2 use-case.

This should be properly fixed with playbin3/decodebin3/urisourcebin.

Could you retry with playbin3?
Comment 3 Edward Hervey 2017-07-12 14:41:03 UTC
ping ?
Comment 4 Tim-Philipp Müller 2017-07-12 14:53:35 UTC
Let's close this. Please re-open if you're still having issues with playbin3/decodebin3, thanks!