GNOME Bugzilla – Bug 667838
jitterbuffer: don't produce lost-events for expired packets
Last modified: 2012-12-13 11:38:18 UTC
Created attachment 205152 [details] [review] patch

In the scenario where you have a gap of, say, 10 seconds in a steady flow of packets (500 packets with a duration of 20 ms each), the jitterbuffer will idle until it receives the first buffer after the gap, but will then go on to produce 499 lost-events to "cover up" the gap. This is obviously wrong, since the last possible time for the earliest lost-events to be played out has long since expired. However, the fact that the jitterbuffer has a "length", represented by its own latency combined with the total latency downstream, allows for covering up at least some of this gap. So with a "length" of 200 ms, having just received packet 500, the jitterbuffer should still create a timeout for packet 491, whose time expires at 10.02 seconds, especially since it might actually arrive in time! But waiting for packet 100, whose time expired at 2 seconds (remembering that the current time is 10), is useless...

The patch will "move up" the packet it should be waiting for, based on the current time and the jitterbuffer "length". I am going to attach a test-case and the GstTestClock that is essential for this kind of test.
Created attachment 205153 [details] test the jitterbuffer-lost-event testcases
Created attachment 205154 [details] GstTestClock This is a test-clock written by Ole-Andre Ravnaas. It basically allows you to take control of time in your pipeline, collecting and releasing id-waits. Great for writing deterministic tests.
TestClock looks neat, I've wanted to write one of these for a long time. I thought the lost-events were produced when the next packet was expected? Not when the next known packet is? Or did I miss something?
(In reply to comment #3)
> TestClock looks neat, I've wanted to write one of these for a long time.
It is! I hope it will find its way into -core or -base so that it can benefit all of mankind. It even comes with its own test-suite. :)
> I thought the lost-events were produced when the next packet was expected? Not when the next known packet is? Or did I miss something?
You only missed the fact that there needs to be a gap for a lost-event to be produced, i.e. seq-num space between two buffers. So without that other buffer arriving, no lost-event is produced. This patch is trying to fix the fact that this gap can be huge, in which case the jitterbuffer currently does the wrong thing, IMO.
Comment on attachment 205152 [details] [review] patch Test is good, but this patch is not quite right yet. Discovered some issues with it, so will write some more tests to expose its weaknesses, and submit a new one.
Created attachment 206937 [details] test Two more tests added and some improvements.
Created attachment 206938 [details] [review] patch This patch now makes all the tests pass.
Since this has gotten so little love, I thought I would spell out the problem this patch fixes: if you are using rtpmanager and you get into a hold/resume situation, where packets stop arriving, you are basically screwed without this patch... :)
Created attachment 223420 [details] Missing types and functions referenced in the original patch. Sebastian Rasmussen discovered that there were pieces missing from this patch. There are some functions and types referenced in the test clock that aren't part of the test clock implementation itself. I've created a new attachment, testutils.zip, with a header and implementation of the missing pieces.
commit 9c94f1187c1cdc13e2ea3bb019052c0887df0584
Author: Havard Graff <havard.graff@tandberg.com>
Date:   Fri Jan 13 01:11:31 2012 +0100

    jitterbuffer: bundle together late lost-events

    The scenario where you have a gap in a steady flow of packets of say
    10 seconds (500 packets of with duration of 20ms), the jitterbuffer
    will idle up until it receives the first buffer after the gap, but
    will then go on to produce 499 lost-events, to "cover up" the gap.

    Now this is obviously wrong, since the last possible time for the
    earliest lost-events to be played out has obviously expired, but the
    fact that the jitterbuffer has a "length", represented with its own
    latency combined with the total latency downstream, allows for
    covering up at least some of this gap.

    So in the case of the "length" being 200ms, while having received
    packet 500, the jitterbuffer should still create a timeout for packet
    491, which will have its time expire at 10,02 seconds, specially
    since it might actually arrive in time! But obviously, waiting for
    packet 100, that had its time expire at 2 seconds, (remembering that
    the current time is 10) is useless...

    The patch will create one "big" lost-event for the first 490 packets,
    and then go on to create single ones if they can reach their playout
    deadline.

    See https://bugzilla.gnome.org/show_bug.cgi?id=667838
Tests are broken, please port. I committed and disabled them so they are not forgotten.

commit 50391c77731544b61f14d9c65cb9b4a1f00d9028
Author: Wim Taymans <wim.taymans@collabora.co.uk>
Date:   Thu Dec 13 12:36:20 2012 +0100

    check: add (but disable) more rtp jitterbuffer tests

    Tests need to be ported to 1.0 before they can be enabled but added
    here so they don't get forgotten.

    See https://bugzilla.gnome.org/show_bug.cgi?id=667838