Bug 743945 - rtpjitterbuffer: add "do-deadline" property
Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: gst-plugins-good
Version: 1.4.5
Hardware/OS: Other Linux
Importance: Normal enhancement
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2015-02-03 17:22 UTC by Miguel París Díaz
Modified: 2018-11-03 14:57 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
Patch v1 (4.01 KB, patch)
2015-02-03 17:22 UTC, Miguel París Díaz

Description Miguel París Díaz 2015-02-03 17:22:10 UTC
Created attachment 296048 [details] [review]
Patch v1

I have noticed that rtpjitterbuffer adds an initial delay of "latency" ms by retaining the first packets.
This behaviour can be useful in some cases, but not in other cases where media establishment time is critical.
To solve this, I have added a new property to configure the behaviour of the jitter buffer: "do-deadline".
By default it is TRUE, so the element keeps the same behaviour as until now.
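The behaviour described above can be sketched with a toy model (plain Python for illustration only; `first_packet_output_time` and its parameters are invented names, not GStreamer API):

```python
# Toy model of the proposed "do-deadline" behaviour (not GStreamer code).
# With do_deadline=True the first packet is held back for `latency_ms`;
# with do_deadline=False it is pushed downstream as soon as it arrives.

def first_packet_output_time(arrival_ms, latency_ms, do_deadline=True):
    """Return the time at which the first received packet is pushed."""
    if do_deadline:
        # Current behaviour: wait the full latency window in case this
        # packet turns out to be reordered (i.e. not the real first one).
        return arrival_ms + latency_ms
    # Proposed behaviour: push immediately, accepting the risk that an
    # earlier packet may still arrive and have to be dropped.
    return arrival_ms

print(first_packet_output_time(0, 200, do_deadline=True))   # 200
print(first_packet_output_time(0, 200, do_deadline=False))  # 0
```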
Comment 1 Sebastian Dröge (slomo) 2015-02-03 19:19:19 UTC
I think that's a good idea; I was wondering about the same thing a few days ago. It does not seem to make sense to have the jitterbuffer always wait for the full latency... it should output what it can as soon as it can, and only wait up to the latency when packets are missing.
Comment 2 Tim-Philipp Müller 2015-02-03 19:26:43 UTC
Why would it ever not output packets right away if none are missing?

I mean, why do we need a property at all for this?
Comment 3 Sebastian Dröge (slomo) 2015-02-03 19:38:40 UTC
I don't know, maybe someone uses the jitterbuffer with a sink set to sync=false. The output would be very jittery then whenever a packet went missing :)
Comment 4 Tim-Philipp Müller 2015-02-03 19:45:54 UTC
If someone cares about jitter they shouldn't be using sync=false. And didn't the jitterbuffer send out stuff asap at some point in the past anyway?
Comment 5 Miguel París Díaz 2015-02-03 19:56:05 UTC
Reading the current code, I understand that for the first received packet it is not possible to know whether it is the "real" first one, whether previous packets were lost, or whether it arrived out of order.
From this point I had two possibilities to improve the case where media
establishment time is critical:
  a) Wait rtx-delay-reorder instead of the latency (http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-good-plugins/html/gst-plugins-good-plugins-rtpjitterbuffer.html#GstRtpJitterBuffer--rtx-delay-reorder)
  b) Not wait at all, because I don't want to lose time waiting for any packet (this is the solution I implemented).

On the other hand, I want to be careful because there may be cases where the current behaviour is needed; I don't want to break them, which is why I added this new property.

Anyway, to avoid guessing: this behaviour was implemented by Wim, so he could tell us why he did it ;).
Comment 6 Nicolas Dufresne (ndufresne) 2015-02-03 21:00:36 UTC
Isn't the solution to that to make the jitterbuffer dynamic?
Comment 7 Miguel París Díaz 2015-02-03 21:14:51 UTC
That is another solution, followed by some implementers such as Chrome in its WebRTC implementation.
I think it is more time-consuming, although we could do it calmly in the medium term if we can use the proposed "do-deadline" property in the meantime.
Comment 8 Olivier Crête 2015-02-03 22:25:26 UTC
The "do-deadline" property you're proposing doesn't make sense. If the latency is not "used" in the jitterbuffer, the same latency will happen at the sink, so the first packet will arrive at the output at the exact same time. 

If you want lower latency, just reduce the "latency" property; if it's not enough, increase it (well, or make it into a dynamic JB...).

@tim @slomo: The JB will immediately push buffers whose seqnum follows the previous one, but for the first buffer it has to wait until the latency deadline to make sure it was not reordered.
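The rule Olivier describes might be sketched like this (a toy model in plain Python; `push_delay_ms` is an invented helper, not the real rtpjitterbuffer logic, which tracks per-packet deadlines rather than a fixed wait):

```python
# Toy sketch of the first-buffer rule: in-order buffers are pushed
# immediately, but the very first buffer must wait out the latency
# deadline because it may have been reordered.

def push_delay_ms(seqnum, prev_seqnum, latency_ms):
    """Delay before a buffer may be pushed downstream (simplified)."""
    if prev_seqnum is None:
        # First buffer: we cannot know whether an earlier packet is
        # still in flight, so wait until the latency deadline.
        return latency_ms
    if seqnum == (prev_seqnum + 1) % 65536:
        # Seqnum follows the previous one: push immediately.
        return 0
    # Gap detected: wait (at most the latency window, simplified here)
    # for the missing packets to show up or be declared lost.
    return latency_ms

print(push_delay_ms(10, None, 200))  # 200: first buffer waits
print(push_delay_ms(5, 4, 200))      # 0: in-order, pushed right away
```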
Comment 9 Miguel París Díaz 2015-02-04 10:45:07 UTC
I also think that a dynamic JB is a better solution, but it is more complex.
So, I think my proposal is a simple, direct solution for use cases with a hard requirement to establish communication in the least possible time, and for people who want to avoid the effort of managing a dynamic JB because reordering happens rarely in their systems and they already handle it properly.

Moreover, a very important point is that this solution does not change the default behaviour, so current users will not be affected.
Comment 10 Sebastian Dröge (slomo) 2015-02-04 10:56:14 UTC
Thanks Olivier, that makes perfect sense.


Maybe dynamic behaviour should be a property on the JB then, to allow people to switch between the two modes? Or there could be a new rtpdynamicjitterbuffer element, with a property on rtpbin to select which one should be used?

If I understand the issue correctly now, the do-deadline property indeed does not make much sense as is.
Comment 11 Olivier Crête 2015-02-04 22:20:12 UTC
I'm not sure exactly how a dynamic JB would work. The problem is that there is a range of possible algorithms, and I'm not sure we should pick one and implement it inside the rtpjitterbuffer element. I understand some people have used rtpbin/rtpjitterbuffer and implemented such algorithms externally (in non-free code), just tweaking the "latency" property as required. The way I understand it, different algorithms work best for different use cases; RTSP streaming vs. VoIP calls probably have different tradeoffs.

@miguel: The problem is that your patch doesn't make the first buffer show up to the screen/speaker any faster.
Comment 12 Olivier Crête 2015-02-04 22:21:50 UTC
@miguel: If you just want to know when a new ssrc has been discovered, you may want to look at the "on-new-ssrc", "on-ssrc-validated" and "on-ssrc-active" signals on the rtpsession.
Comment 13 Miguel París Díaz 2015-02-04 22:46:10 UTC
@olivier: about this: "The problem is that your patch doesn't make the first buffer show up to the screen/speaker any faster."

I do not understand why. I have checked it and it works for my case.
I should have given you this info:
 - I am using the jitterbuffer in RTP_JITTER_BUFFER_MODE_SYNCED mode, so the PTSs are based only on the rtptime — is that right, or am I wrong?
 - I do not render the media to a screen/speaker. I have a live pipeline that redirects RTP streams, and its sinks are set to sync=FALSE.
Comment 14 Olivier Crête 2015-02-04 23:08:38 UTC
@miguel: You probably want to put sync=true on the sinks for the outgoing RTP stream. Otherwise, as soon as you have one lost packet in the input stream,
you will cause a big time gap of duration "latency" in the output stream, followed by the content of the queue inside the jitterbuffer in quick succession. That is bad, as it makes further dropped packets on the outgoing stream more likely (you suddenly increase the packet rate just when you have a loss potentially caused by congestion). It also means that the receiver of the output stream will need a latency of at least "latency" to avoid drops (as this is the jitter you're introducing). And if you put sync=true, then this patch is not useful, as the sink will block until you've exhausted the full pipeline latency anyway.
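The gap-then-burst pattern described above can be illustrated with a toy timeline (plain Python; the numbers and the `output_times` helper are invented for illustration and greatly simplify the real element):

```python
# Toy timeline: packets arrive every `interval_ms`; one is lost.  With
# sync=false downstream, in-order packets are forwarded on arrival, but
# everything queued behind the gap is held until the lost packet's
# latency deadline and then flushed in one unpaced burst.

def output_times(n_packets, interval_ms, lost_seq, latency_ms):
    """Return (seqnum, output_time_ms) pairs for the surviving packets."""
    flush_at = lost_seq * interval_ms + latency_ms  # lost packet's deadline
    out = []
    for seq in range(n_packets):
        if seq == lost_seq:
            continue  # this packet never arrives
        arrival = seq * interval_ms
        if seq < lost_seq:
            out.append((seq, arrival))          # in order: pushed on arrival
        else:
            out.append((seq, max(arrival, flush_at)))  # held, then burst
    return out

# 6 packets, 20 ms apart, seqnum 2 lost, latency 200 ms:
# packets 0-1 go out at 0 and 20 ms, then a ~220 ms gap, then
# packets 3-5 all at 240 ms in quick succession.
print(output_times(6, 20, 2, 200))
```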
Comment 15 Miguel París Díaz 2015-02-04 23:55:37 UTC
"You probably want to put sync=true on the sinks for the outgoing RTP
stream"
  Why do I use sync=false?
   I want that my pipeline adds the less possible latency (a "live pipeline")

"Otherwise, as soon as you have a one lost packet in the input stream,
you will cause a big time gap between of duration "latency" in the output
stream."
  I use retransmissions and if the packet is definitely lost I manage the situation in a downstream element (e.g.: requesting a key-frame if it is video).
  Moreover, my jitterbuffer is configured with "rtx-max-retries"=0, so only a retransmission is requested (if needed) and the delay is incremented basing on the RTT (if I do not remember bad), so the "gap" is not very high.

"as it will make it more likely to have further
dropped packets on the outgoing stream (as you suddenly increase the packet
rate just when you have a loss potentially caused by congestion."
  You are right, and we assume this situation. Why?
    - Because we want very low latency.
    - Because we use bandwidth estimation and bitrate control. Even if the bandwidth is not the bottleneck, this does not matter.
    - Because we are implementing another behaviour based on not using the jitterbuffer, depay, pay, and based on mangling RTP packets.

In conclusion, you are right that in the case you describe "do-deadline"=FALSE does not make sense, but in other cases such as mine I think it does.
I do not know if I have explained this properly; if not, I am happy to answer questions to clarify my case ;)
Comment 16 Olivier Crête 2015-02-05 01:11:43 UTC
If I understand correctly, you really don't want a jitterbuffer, because you're not trying to eliminate jitter; the only feature you want from rtpjitterbuffer is the rtx stuff? Ideally you want to just forward all packets immediately? I wonder if you need a different element that doesn't actually queue the buffers but instead just keeps observations.

Using the rtpjitterbuffer as it currently is, even if the packet is definitely lost, you're still going to introduce a jitter event (a gap between packets n-1 and n+1) of "latency", which is really not what you want.
Comment 17 Miguel París Díaz 2015-02-05 10:51:56 UTC
The point here is when a timer for a lost packet is scheduled. There are 3 cases:
1) Commented by you: http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/gst/rtpmanager/gstrtpjitterbuffer.c?id=1.4.5#n1984
2) Without retransmissions: http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/gst/rtpmanager/gstrtpjitterbuffer.c?id=1.4.5#n2011
3) Taking into account rtx_retry_period http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/gst/rtpmanager/gstrtpjitterbuffer.c?id=1.4.5#n2748

So you are right, but only in the first case.

In the 3rd case, the packet will be considered lost when a timeout based on the RTT and the jitter expires (http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/gst/rtpmanager/gstrtpjitterbuffer.c?id=1.4.5#n2683), so the delay added is not the "latency".

Moreover, I am using a patch I suggested a few months ago: https://bug739868.bugzilla-attachments.gnome.org/attachment.cgi?id=290320
This patch adds a property to set the maximum number of retransmission retries. So if it is 0, only one retransmission will be requested, and if the related RTP packet does not arrive before the timeout, it is considered lost (applying the 3rd case).
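Case 3 might be approximated like this (a rough Python sketch; the actual formula lives in gstrtpjitterbuffer.c at the links above, and the `rtt + 2 * jitter` combination used here is only an assumed stand-in, not the element's real computation):

```python
# Rough sketch of case 3: with rtx-max-retries=0, one retransmission is
# requested and the packet is declared lost after a timeout derived from
# the measured RTT and jitter, rather than after the full "latency".

def rtx_lost_timeout_ms(rtt_ms, jitter_ms, latency_ms):
    """Deadline after which a requested retransmission is given up on."""
    # Assumed combination for illustration; see the linked source for
    # the real formula used by rtpjitterbuffer.
    timeout = rtt_ms + 2 * jitter_ms
    # The element can never usefully wait longer than the latency window.
    return min(timeout, latency_ms)

print(rtx_lost_timeout_ms(40, 5, 200))    # 50: well below the latency
print(rtx_lost_timeout_ms(300, 10, 200))  # 200: capped at the latency
```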

You can see a set of patches to improve jitterbuffer (in our case): https://bugzilla.gnome.org/show_bug.cgi?id=739868
Comment 18 GStreamer system administrator 2018-11-03 14:57:31 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/156.