GNOME Bugzilla – Bug 751179
rtpjitterbuffer: jitter buffer does not send do-lost events downstream when packets don't arrive in time
Last modified: 2018-11-03 15:01:02 UTC
If the jitter buffer is used with mode "synced" or "none", it does not deliver the expected results when RTP packets arrive too late. Here is how to reproduce it.

Sender:

  gst-launch-1.0 audiotestsrc wave=saw ! opusenc audio=true ! rtpopuspay ! udpsink host=127.0.0.1 port=54001 -v
  Setting pipeline to PAUSED ...

Receiver:

  gst-launch-1.0 udpsrc port=54001 caps="application/x-rtp, media=(string)audio, clock-rate=(int)48000, encoding-name=(string)OPUS" ! rtpjitterbuffer do-lost=true mode=synced latency=200 ! rtpopusdepay ! opusdec plc=true ! autoaudiosink

Once both are running, start the network simulator in another console:

  sudo tc qdisc add dev lo root netem delay 1000ms

This simulates a delay of one second on localhost. To remove it again, use:

  sudo tc qdisc del dev lo root netem

Expected behavior: while the simulated delay is active, I'd hear PLC kick in, and once the delay is removed, regular audio continues.

Actual behavior: after the simulated delay is added, no audio can be heard. There is sudden silence, and no PLC kicks in. Once the simulated delay is removed, PLC is active for quite a while, *then* regular audio follows.

The problem is that the rtpjitterbuffer isn't pushing anything downstream once the delay is enabled. Once a packet arrives in time (that is, after the delay is removed), it suddenly pushes many do-lost events downstream. It should, however, send a do-lost event downstream if a packet doesn't arrive soon enough, right? Apparently, some internal timeout timer isn't firing correctly.

I looked through the bug reports, and the ones I think are related are https://bugzilla.gnome.org/show_bug.cgi?id=738363 and https://bugzilla.gnome.org/show_bug.cgi?id=720655 . Also, I am not even 100% sure that I am correct about the expected behavior. Can anybody confirm that the way I described it is indeed how it should behave?
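In case it helps, this is how I watch for the do-lost events on the receiver side: a minimal C sketch of a pad probe on the jitterbuffer's src pad. The "seqnum" and "timestamp" field names of the GstRTPPacketLost structure are my assumption and may need adjusting.

#include <gst/gst.h>

/* Log every GstRTPPacketLost event that passes the given pad. */
static GstPadProbeReturn
lost_event_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);

  if (GST_EVENT_TYPE (event) == GST_EVENT_CUSTOM_DOWNSTREAM) {
    const GstStructure *s = gst_event_get_structure (event);

    if (s != NULL && gst_structure_has_name (s, "GstRTPPacketLost")) {
      guint seqnum = 0;
      guint64 timestamp = 0;

      /* Field names assumed, not taken from the element's documentation. */
      gst_structure_get_uint (s, "seqnum", &seqnum);
      gst_structure_get_uint64 (s, "timestamp", &timestamp);
      g_print ("lost packet: seqnum %u, timestamp %" GST_TIME_FORMAT "\n",
          seqnum, GST_TIME_ARGS (timestamp));
    }
  }
  return GST_PAD_PROBE_OK;
}

/* Attach the probe to the src pad of the rtpjitterbuffer element. */
static void
watch_lost_events (GstElement *jitterbuffer)
{
  GstPad *srcpad = gst_element_get_static_pad (jitterbuffer, "src");

  gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
      lost_event_probe, NULL, NULL);
  gst_object_unref (srcpad);
}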
I think there's no timer for that. A packet can only really be considered lost when you have seen a newer packet. How would you otherwise know that there actually was going to be a packet?
This depends on whether or not the jitter buffer expects a steady flow of data. If so, packets would have to arrive within the configured jitterbuffer latency (default 200 ms). And this is what the comments in the .c file say:

  "The rtpjitterbuffer will wait for missing packets up to a configurable time limit using the #GstRtpJitterBuffer:latency property. Packets arriving too late are considered to be lost packets. If the #GstRtpJitterBuffer:do-lost property is set, lost packets will result in a custom serialized downstream event of name GstRTPPacketLost."

This does sound like a timeout to me?
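For reference, these are the two properties involved, set from code instead of gst-launch. This is only a minimal sketch of my receiver; the element name "jbuf" is something I added for illustration.

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstElement *pipeline, *jbuf;
  GMainLoop *loop;
  GError *error = NULL;

  gst_init (&argc, &argv);

  pipeline = gst_parse_launch (
      "udpsrc port=54001 caps=\"application/x-rtp, media=(string)audio, "
      "clock-rate=(int)48000, encoding-name=(string)OPUS\" ! "
      "rtpjitterbuffer name=jbuf mode=synced ! rtpopusdepay ! "
      "opusdec plc=true ! autoaudiosink", &error);
  if (pipeline == NULL) {
    g_printerr ("parse error: %s\n", error->message);
    return 1;
  }

  /* Wait at most 200 ms for missing packets and emit lost events. */
  jbuf = gst_bin_get_by_name (GST_BIN (pipeline), "jbuf");
  g_object_set (jbuf, "do-lost", TRUE, "latency", 200, NULL);
  gst_object_unref (jbuf);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  loop = g_main_loop_new (NULL, FALSE);
  g_main_loop_run (loop);
  return 0;
}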
Comments from IRC:

<slomo> wtay_: https://bugzilla.gnome.org/show_bug.cgi?id=751179 i think we discussed this before, but what's your opinion on that? a bug? ;)
<wtay_> slomo, I think it makes sense to push a do-lost when you waited for it for the complete latency of the jitterbuffer
<wtay_> slomo, you would need to guess when the packet would need to arrive (or last packet + latency, if you can't determine packet spacing)
<wtay_> and I guess you can only do this once (unless you decide that it's ok to push out lost events after each estimated packet spacing interval)
<slomo> wtay_: ok, can you add a comment about that. i think it should also not report 1000000 lost packet events after a gap, but maybe accumulate them somehow ;)

A timeout for a packet's expected arrival time already exists for retransmissions; that can probably be reused here.
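To make the idea from IRC concrete, a conceptual sketch of that deadline, assuming the "last packet + estimated spacing + latency" rule; this is not the actual rtpjitterbuffer code, and the names and types are mine.

#include <gst/gst.h>

typedef struct {
  GstClockTime last_pts;        /* PTS of the last packet that arrived    */
  GstClockTime packet_spacing;  /* estimated duration between packets     */
  GstClockTime latency;         /* configured jitterbuffer latency        */
} LostDeadline;

/* Running time at which a lost event should be pushed if no newer packet
 * has shown up by then. Falls back to last packet + latency when the
 * packet spacing cannot be determined, as suggested on IRC. */
static GstClockTime
lost_deadline (const LostDeadline *d)
{
  GstClockTime spacing =
      d->packet_spacing != GST_CLOCK_TIME_NONE ? d->packet_spacing : 0;

  return d->last_pts + spacing + d->latency;
}

static gboolean
next_packet_is_lost (const LostDeadline *d, GstClockTime now)
{
  return now >= lost_deadline (d);
}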
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new report on our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/193.