GNOME Bugzilla – Bug 772330
adaptivedemux: Improve bitrate estimation
Last modified: 2016-10-31 14:33:51 UTC
+++ This bug was initially created as a clone of Bug #733959 +++
Created attachment 336753 [details] [review]
adaptivedemux: Calculate values before queue2

In order to calculate the *actual* bitrate for downloading a fragment we need to take into account the time since we requested the fragment.

Without this, the bitrate calculations (previously reported by queue2) would be biased, since they wouldn't take into account the request latency (that is, the time between the moment we request a specific URI and the moment we receive the first byte of that request).

Examples where it would be biased are high-bandwidth but high-latency networks. If you download 5MB in 500ms, but it takes 200ms to get the first byte, queue2 would report 80Mbit/s (5MB in 500ms), but taking the request latency into account it is only 57Mbit/s (5MB in 700ms).

While this would not cause too many issues if the above fragment represented a much longer duration (5s of content), it would cause issues with short ones (say 1s, or keyframe-only requests which are even shorter), where the code would expect to be able to download up to 80Mbit/s, whereas if we take the request time into account it is much lower (and we would therefore end up doing late requests).

Also calculate the request latency for debugging purposes and further usage (it could allow us to figure out the maximum request rate, for example).

https://bugzilla.gnome.org/show_bug.cgi?id=733959
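For illustration, a minimal sketch of the calculation described above (this is not the actual patch; the function name and parameters are illustrative only):

/* Sketch: bitrate measured from the moment the URI was requested, so the
 * request latency (time to first byte) is included in the elapsed time. */
#include <gst/gst.h>

static guint64
effective_bitrate (GstClockTime request_time, GstClockTime last_byte_time,
    guint64 bytes_downloaded)
{
  GstClockTime elapsed = last_byte_time - request_time;

  if (elapsed == 0)
    return 0;

  /* bits transferred per second */
  return gst_util_uint64_scale (bytes_downloaded * 8, GST_SECOND, elapsed);
}

With the numbers above: 5MB is 40Mbit, and 40Mbit / 0.7s is roughly 57Mbit/s, versus 40Mbit / 0.5s = 80Mbit/s when the 200ms request latency is ignored. The same latency figure also bounds the request rate: at 200ms to the first byte, sequential requests can start at most about 5 fragments per second, which is why short fragments and keyframe-only requests are hit hardest.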
Review of attachment 336753 [details] [review]:

::: gst-libs/gst/adaptivedemux/gstadaptivedemux.c
@@ +2375,3 @@
+      stream->last_latency =
+          gst_adaptive_demux_get_monotonic_time (stream->demux) -
+          (stream->download_start_time * GST_USECOND);

While it is currently true here that we only receive the SEGMENT event right before the first buffer, this does not seem guaranteed. There's nothing that forbids sending the SEGMENT event right away (or sending multiple ones).

It would probably be better to watch out for the actual first buffer.
(In reply to Sebastian Dröge (slomo) from comment #2)
> Review of attachment 336753 [details] [review] [review]:
>
> ::: gst-libs/gst/adaptivedemux/gstadaptivedemux.c
> @@ +2375,3 @@
> +      stream->last_latency =
> +          gst_adaptive_demux_get_monotonic_time (stream->demux) -
> +          (stream->download_start_time * GST_USECOND);
>
> While it is currently true here that we only receive the SEGMENT event right
> before the first buffer, this does not seem guaranteed. There's nothing that
> forbids sending the SEGMENT event right away (or sending multiple ones).
>
> It would probably be better to watch out for the actual first buffer

True, I could move that to the buffer part (if bytes_downloaded == 0).
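For illustration, a rough sketch of what measuring the latency on the first buffer (rather than on the SEGMENT event) could look like inside gstadaptivedemux.c; the probe name and the bytes_downloaded field follow the names used in this thread and are assumptions about the surrounding code:

/* Sketch only: record the request latency once, on the first buffer of the
 * fragment, instead of on the SEGMENT event. */
static GstPadProbeReturn
_src_pad_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstAdaptiveDemuxStream *stream = user_data;

  /* Only the first buffer of a fragment: the time elapsed since the URI was
   * requested is the request latency (time to first byte). */
  if ((GST_PAD_PROBE_INFO_TYPE (info) & GST_PAD_PROBE_TYPE_BUFFER) &&
      stream->bytes_downloaded == 0) {
    stream->last_latency =
        gst_adaptive_demux_get_monotonic_time (stream->demux) -
        (stream->download_start_time * GST_USECOND);
  }

  return GST_PAD_PROBE_OK;
}

Measuring on the first buffer is robust against elements that send the SEGMENT event earlier than the data, or that send more than one.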
Created attachment 336854 [details] [review]
adaptivedemux: Calculate values before queue2

In order to calculate the *actual* bitrate for downloading a fragment we need to take into account the time since we requested the fragment.

Without this, the bitrate calculations (previously reported by queue2) would be biased, since they wouldn't take into account the request latency (that is, the time between the moment we request a specific URI and the moment we receive the first byte of that request).

Examples where it would be biased are high-bandwidth but high-latency networks. If you download 5MB in 500ms, but it takes 200ms to get the first byte, queue2 would report 80Mbit/s (5MB in 500ms), but taking the request latency into account it is only 57Mbit/s (5MB in 700ms).

While this would not cause too many issues if the above fragment represented a much longer duration (5s of content), it would cause issues with short ones (say 1s, or keyframe-only requests which are even shorter), where the code would expect to be able to download up to 80Mbit/s, whereas if we take the request time into account it is much lower (and we would therefore end up doing late requests).

Also calculate the request latency for debugging purposes and further usage (it could allow us to figure out the maximum request rate, for example).

https://bugzilla.gnome.org/show_bug.cgi?id=733959
Review of attachment 336854 [details] [review]:

::: gst-libs/gst/adaptivedemux/gstadaptivedemux.c
@@ +2655,3 @@
   stream->uri_handler = uri_handler;
   stream->queue = queue;
+  stream->src_probe_id = src_probe_id;

This seems unused.
Review of attachment 336854 [details] [review]: Looks good to me as well.
commit 4aadf50012cb057422f29644b41fcaa30c1a2dd2
Author: Edward Hervey <edward@centricular.com>
Date:   Sun Oct 2 09:34:51 2016 +0200

adaptivedemux: Calculate values before queue2

In order to calculate the *actual* bitrate for downloading a fragment we need to take into account the time since we requested the fragment.

Without this, the bitrate calculations (previously reported by queue2) would be biased, since they wouldn't take into account the request latency (that is, the time between the moment we request a specific URI and the moment we receive the first byte of that request).

Examples where it would be biased are high-bandwidth but high-latency networks. If you download 5MB in 500ms, but it takes 200ms to get the first byte, queue2 would report 80Mbit/s (5MB in 500ms), but taking the request latency into account it is only 57Mbit/s (5MB in 700ms).

While this would not cause too many issues if the above fragment represented a much longer duration (5s of content), it would cause issues with short ones (say 1s, or keyframe-only requests which are even shorter), where the code would expect to be able to download up to 80Mbit/s, whereas if we take the request time into account it is much lower (and we would therefore end up doing late requests).

Also calculate the request latency for debugging purposes and further usage (it could allow us to figure out the maximum request rate, for example).

https://bugzilla.gnome.org/show_bug.cgi?id=733959
https://bugzilla.gnome.org/show_bug.cgi?id=772330