GNOME Bugzilla – Bug 711412
rtpjitterbuffer: Automatically calculate RTX properties based on RTT
Last modified: 2013-12-27 15:56:41 UTC
The descriptions of some of the jitterbuffer properties state that a value of -1 results in automatic calculation. This patch implements that for the rtx-delay, rtx-retry-timeout and rtx-retry-period properties.
Created attachment 258933 [details] [review] rtpjitterbuffer: automatically calculate RTX properties based on RTT
Review of attachment 258933 [details] [review]:

> if (priv->rtx_delay == -1)
>   delay = (priv->avg_rtx_rtt / 2) * GST_MSECOND;
> else
>   delay = priv->rtx_delay * GST_MSECOND;

avg_rtx_rtt should be set to some reasonable default value until it has been determined dynamically. Also, the delay should be calculated based on the observed interpacket arrival jitter, which you need to get from the session somehow.

> if (priv->rtx_retry_period == -1)
>   calculated_period = priv->avg_rtx_rtt;
> else
>   calculated_period = priv->avg_rtx_rtt;

That's not right (typo?): both branches assign the same value. The retry period is either rtx_retry_period or something based on the total jitterbuffer size minus avg_rtx_rtt.

> if (priv->rtx_delay == -1)
>   calculated_delay = priv->avg_rtx_rtt / 2;
> else
>   calculated_delay = priv->rtx_delay;

Again, this must be based on the interpacket arrival jitter.
Created attachment 260461 [details] [review] rtpjitterbuffer: automatically calculate RTX properties based on jitter Updated patch that generates downstream events with jitter calculations which are then used to autotune jitterbuffer parameters. Also includes a minor comment typo fix.
Comment on attachment 260461 [details] [review]
rtpjitterbuffer: automatically calculate RTX properties based on jitter

> #define DEFAULT_MODE RTP_JITTER_BUFFER_MODE_SLAVE
> #define DEFAULT_PERCENT 0
> #define DEFAULT_DO_RETRANSMISSION FALSE
>-#define DEFAULT_RTX_DELAY 20
>+#define DEFAULT_RTX_DELAY -1
> #define DEFAULT_RTX_DELAY_REORDER 3
>-#define DEFAULT_RTX_RETRY_TIMEOUT 40
>-#define DEFAULT_RTX_RETRY_PERIOD 160
>+#define DEFAULT_RTX_RETRY_TIMEOUT -1
>+#define DEFAULT_RTX_RETRY_PERIOD -1
>+
>+#define DEFAULT_AUTO_RTX_DELAY 20
>+#define DEFAULT_AUTO_RTX_TIMEOUT 40
>+#define DEFAULT_AUTO_RTX_PERIOD 120

>   /* calculate expected arrival time of the next seqnum */
>   expected = dts + priv->packet_spacing;
>-  delay = priv->rtx_delay * GST_MSECOND;
>+  if (priv->rtx_delay == -1)
>+    if (priv->avg_rtx_rtt == 0)
>+      delay = (priv->avg_jitter * 2 + DEFAULT_AUTO_RTX_DELAY) * GST_MSECOND;
>+    else
>+      delay = (priv->avg_jitter * 2 + priv->avg_rtx_rtt) * GST_MSECOND;

The delay is based only on the measured jitter.

>+  else
>+    delay = priv->rtx_delay;

This is missing a * GST_MSECOND.

>+  int calculated_timeout;
>+  int calculated_period;
>+
>+  if (priv->rtx_retry_timeout == -1)
>+    if (priv->avg_rtx_rtt == 0)
>+      calculated_timeout = DEFAULT_AUTO_RTX_TIMEOUT + priv->avg_jitter * 2;
>+    else
>+      calculated_timeout = priv->avg_rtx_rtt * 2 + priv->avg_jitter * 2;
>+  else
>+    calculated_timeout = priv->rtx_retry_timeout;

This does not depend on the jitter.

>+  if (priv->rtx_retry_period == -1)
>+    if (priv->avg_rtx_rtt == 0)
>+      calculated_period = DEFAULT_AUTO_RTX_PERIOD + priv->avg_jitter * 2;
>+    else
>+      calculated_period = priv->avg_rtx_rtt + priv->avg_jitter * 2;
>+  else
>+    calculated_period = priv->rtx_retry_period;

This is not the period: the period to try retransmission is based on the jitterbuffer size and the RTT.
>@@ -2467,8 +2509,8 @@ do_expected_timeout (GstRtpJitterBuffer * jitterbuffer, TimerData * timer,
>         "running-time", G_TYPE_UINT64, timer->rtx_base,
>         "delay", G_TYPE_UINT, GST_TIME_AS_MSECONDS (delay),
>         "retry", G_TYPE_UINT, timer->num_rtx_retry,
>-        "frequency", G_TYPE_UINT, priv->rtx_retry_timeout,
>-        "period", G_TYPE_UINT, priv->rtx_retry_period,
>+        "frequency", G_TYPE_UINT, calculated_timeout,
>+        "period", G_TYPE_UINT, calculated_period,
>         "deadline", G_TYPE_UINT, priv->latency_ms,
>         "packet-spacing", G_TYPE_UINT64, priv->packet_spacing, NULL));

This is the only place where you use the calculated values. Where it actually matters (a few lines below) you use rtx_retry_period and rtx_retry_timeout, which could be -1.

>--- a/gst/rtpmanager/gstrtpsession.c
>+++ b/gst/rtpmanager/gstrtpsession.c
>@@ -404,6 +404,29 @@ on_sender_timeout (RTPSession * session, RTPSource * src, GstRtpSession * sess)
>       src->ssrc);
> }
>
>+static void
>+on_sending_rtcp (RTPSession * rtp_session, GstBuffer * buffer, gboolean early,
>+    GstRtpSession * session)
>+{
>+  GstPad *recv_rtp_sink;
>+
>+  GST_RTP_SESSION_LOCK (session);
>+  if ((recv_rtp_sink = session->recv_rtp_sink))
>+    gst_object_ref (recv_rtp_sink);
>+  GST_RTP_SESSION_UNLOCK (session);
>+
>+  if (recv_rtp_sink) {
>+    GstEvent *event;
>+    guint jitter;
>+
>+    g_object_get (rtp_session, "average-jitter", &jitter, NULL);
>+    GST_DEBUG ("Sending rtcp, with jitter of %u", jitter);
>+    event = gst_event_new_custom (GST_EVENT_CUSTOM_UPSTREAM,
>+        gst_structure_new ("GstRTPJitterCalculation", "jitter", G_TYPE_UINT,
>+            jitter, NULL));
>+    gst_pad_push_event (recv_rtp_sink, event);
>+    gst_object_unref (recv_rtp_sink);
>+  }
>+}

Now you'll be sending this event whenever the session produces RTCP. You could also produce the event when the session receives RTCP. Neither is probably very optimal for getting the most recent jitter estimation into the jitterbuffer.
I'm starting to think that the jitterbuffer should just calculate the jitter itself (again, yes, but the great plan is to merge the jitterbuffer into RTPSource again at some point).

>--- a/gst/rtpmanager/rtpsession.c
>+++ b/gst/rtpmanager/rtpsession.c
>@@ -2858,6 +2869,9 @@ session_report_blocks (const gchar * key, RTPSource * source, ReportData * data)
>   source->last_rr.lsr = lsr;
>   source->last_rr.dlsr = dlsr;
>
>+  data->total_jitter += jitter;
>+  data->count_jitter++;
>+

This is not where the jitter is calculated. The jitter is calculated in each RTPSource when it receives a packet and rtp_source_process_rtp() is called. For all non-internal SSRCs that are senders (you received packets from the remote SSRC), you need to pass the jitter calculation to the jitterbuffer. The easiest way to do that is when you receive an SR from the SSRC: take the jitter, wrap it in an event and send it to the jitterbuffer.
Created attachment 263000 [details] [review]
[PATCH] rtp: jitterbuffer: Fix inconsistency in waking up

When the jitterbuffer is in slave mode, the src pad task can sometimes run out of buffers. Once that happens, the task waits for new events, but never gets woken up until a lost-buffer timeout occurs. The jitterbuffer can enter a state of not setting up a dropped-buffer timeout while still waiting for new buffers. This change wakes up the src pad task, regardless of wait reason, once a new buffer arrives on the sink pad.
---
 gst/rtpmanager/gstrtpjitterbuffer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
Comment on attachment 263000 [details] [review]
[PATCH] rtp: jitterbuffer: Fix inconsistency in waking up

Totally wrong git ref uploaded.
Created attachment 263001 [details] [review]
[PATCH] rtpjitterbuffer: automatically calculate RTX properties based on RTT

 gst/rtpmanager/gstrtpjitterbuffer.c | 104 +++++++++++++++++++++++++++++++++---
 1 file changed, 96 insertions(+), 8 deletions(-)
Created attachment 263158 [details] [review]
[PATCH] rtpjitterbuffer: automatically calculate RTX properties based on RTT

Fix compilation error of the previous patch.
Created attachment 263900 [details] [review]
[PATCH] rtpjitterbuffer: automatically calculate RTX properties based on RTT

Fix a bug in the calculation of delay and also fix the unit test to work again.
Comment on attachment 263900 [details] [review]
[PATCH] rtpjitterbuffer: automatically calculate RTX properties based on RTT

>+  if (priv->rtx_delay == DEFAULT_RTX_DELAY) {
>+    if (priv->jitter == DEFAULT_JITTER)
>+      delay = DEFAULT_AUTO_RTX_DELAY * GST_MSECOND;
>+    else
>+      delay = priv->jitter * 2 * GST_MSECOND;
>+  } else {
>+    delay = priv->rtx_delay * GST_MSECOND;
>+  }

Here you expect the jitter to be expressed in milliseconds...

>+static void
>+calculate_jitter (GstRtpJitterBuffer * jitterbuffer, guint8 pt, guint32 dts,
>+    guint rtptime)
>+{
>+  guint32 rtparrival, transit;
>+  gint32 diff;
>+  GstRtpJitterBufferPrivate *priv;
>+
>+  priv = jitterbuffer->priv;
>+
>+  if (G_UNLIKELY (dts == GST_CLOCK_TIME_NONE))
>+    goto no_time;
>+
>+  rtparrival = gst_util_uint64_scale_int (dts, priv->clock_rate, GST_SECOND);
>+
>+  transit = rtparrival - rtptime;
>+
>+  if (G_UNLIKELY (priv->transit != -1)) {
>+    if (transit > priv->transit)
>+      diff = transit - priv->transit;
>+    else
>+      diff = priv->transit - transit;
>+  } else
>+    diff = 0;
>+
>+  priv->transit = transit;
>+
>+  priv->jitter += diff - ((priv->jitter + 8) >> 4);
>+  priv->prev_rtptime = priv->last_rtptime;
>+  priv->last_rtptime = rtparrival;

... but here you calculate the jitter in clock-rate units.

>+  if (priv->rtx_retry_period == DEFAULT_RTX_RETRY_PERIOD) {
>+    if (priv->avg_rtx_rtt == 0)
>+      calculated_period = DEFAULT_AUTO_RTX_PERIOD;
>+    else
>+      calculated_period = priv->avg_rtx_rtt * priv->jbuf->window_size;
>+  } else {
>+    calculated_period = priv->rtx_retry_period;
>+  }

window-size is not related to this. It needs to use something related to the size of the jitterbuffer.
Review of attachment 263900 [details] [review]:

Just some coding comments, but don't forget Wim's algorithmic issues first.

::: gst/rtpmanager/gstrtpjitterbuffer.c
@@ +292,3 @@
+  guint32 jitter;
+  GstClockTime prev_rtptime;
+  GstClockTime last_rtptime;

prev_rtptime and last_rtptime are never used.

@@ +700,3 @@
   priv->rtx_retry_period = DEFAULT_RTX_RETRY_PERIOD;
+  priv->transit = DEFAULT_TRANSIT;
+  priv->jitter = DEFAULT_JITTER;

You also need to reset transit and jitter in gst_rtp_jitter_buffer_flush_stop().

@@ +1961,3 @@
+      diff = transit - priv->transit;
+    else
+      diff = priv->transit - transit;

diff = ABS (priv->transit - transit)
Created attachment 264605 [details] [review]
[PATCH] rtpjitterbuffer: automatically calculate RTX properties based on RTT

 gst/rtpmanager/gstrtpjitterbuffer.c | 104 ++++++++++++++++++++++++++++++++----
 1 file changed, 95 insertions(+), 9 deletions(-)
Comment on attachment 264605 [details] [review]
[PATCH] rtpjitterbuffer: automatically calculate RTX properties based on RTT

>+  if (priv->rtx_delay == DEFAULT_RTX_DELAY) {
>+    if (priv->jitter == DEFAULT_JITTER)
>+      delay = DEFAULT_AUTO_RTX_DELAY;
>+    else
>+      delay = priv->jitter * 2;
>+  } else {
>+    delay = priv->rtx_delay;
>+  }

Same problem as the previous patch: the jitter is expected to be in milliseconds here...

>+  rtparrival = gst_util_uint64_scale_int (dts, priv->clock_rate, GST_SECOND);
>+
>+  transit = rtparrival - rtptime;
>+
>+  if (G_UNLIKELY (priv->transit != -1)) {
>+    diff = ABS (transit - priv->transit);
>+  } else
>+    diff = 0;
>+
>+  priv->transit = transit;
>+
>+  priv->jitter += diff - ((priv->jitter + 8) >> 4);

... but here you calculate the jitter in clock-rate units.

>+  if (priv->rtx_retry_period == DEFAULT_RTX_RETRY_PERIOD) {
>+    if (priv->avg_rtx_rtt == 0)
>+      calculated_period = DEFAULT_AUTO_RTX_PERIOD;
>+    else
>+      calculated_period =
>+          priv->avg_rtx_rtt * g_queue_get_length (priv->jbuf->packets);
>+  } else {
>+    calculated_period = priv->rtx_retry_period;
>+  }

The retry period is unrelated to the number of packets currently in the buffer; it should be based on the buffer's max latency minus the round trip.
commit 4b1decacc7a817d5b12126d0d7d2fa6160b1cf81
Author: Wim Taymans <wtaymans@redhat.com>
Date:   Fri Dec 27 16:51:32 2013 +0100

    rtpjitterbuffer: dynamically recalculate RTX parameters

    Use the round-trip time and average jitter to dynamically calculate
    the retransmission interval and expected packet arrival time.

    Based on patches from Torrie Fischer <torrie.fischer@collabora.co.uk>

    Fixes https://bugzilla.gnome.org/show_bug.cgi?id=711412

commit 801be84cb216eb16e63f9e1c31523c0a1aa34afc
Author: Wim Taymans <wtaymans@redhat.com>
Date:   Fri Dec 27 16:50:52 2013 +0100

    rtpjitterbuffer: calculate average jitter

commit 824d35a7bbc6db397b261d128e2c4037249e62bc
Author: Wim Taymans <wtaymans@redhat.com>
Date:   Fri Dec 27 16:48:48 2013 +0100

    rtpsession: use RTT from the Retransmission event

    Place the estimated RTT in the Retransmission event and let the
    session manager use that instead of the hardcoded value.

commit fa594ef83ca0dc6f95b2bab7be95767c54afa769
Author: Wim Taymans <wtaymans@redhat.com>
Date:   Fri Dec 27 15:57:39 2013 +0100

    jitterbuffer: take more accurate running-time for NACK

    Don't use the current time calculated from the timeout loop for when
    we last scheduled the NACK, because it might be unscheduled because of
    a max packet misorder and then we don't accurately calculate the
    current time. Instead, take the current element running time using
    the clock.