Bug 744338 - opusdec: LATENCY query handling looks wrong
Status: RESOLVED FIXED
Product: GStreamer
Classification: Platform
Component: gst-plugins-bad
Version: unspecified
OS: Other Linux
Priority: Normal  Severity: major
Target Milestone: 1.5.1
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2015-02-11 15:57 UTC by Sebastian Dröge (slomo)
Modified: 2015-07-31 19:45 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
fix latency query in FEC case (1.32 KB, patch)
2015-03-04 09:27 UTC, Vincent Penquerc'h
committed Details | Review
Add expected-packet-duration to opusdec (3.73 KB, patch)
2015-07-31 19:45 UTC, Ilya Konstantinov
none Details | Review

Description Sebastian Dröge (slomo) 2015-02-11 15:57:53 UTC
+++ This bug was initially created as a clone of Bug #744106 +++

Currently opusdec does this if inband fec is used:
> gst_audio_decoder_set_latency (dec, 2 * GST_MSECOND + GST_MSECOND / 2,
>          120 * GST_MSECOND)

This is probably not right. The max latency would mean that it can buffer up to 120ms, but that seems unlikely. Maybe 120ms is the maximum delay the decoder can cause, i.e. the minimum latency? And the 2.5ms is the minimum delay, for which we currently have nothing in the LATENCY query?
Comment 1 Vincent Penquerc'h 2015-03-03 12:14:52 UTC
An Opus packet can contain up to 120 ms (in frames of up to 60 ms). So the decoder may output 120 ms at once.
Similarly, the smallest possible packet is 2.5 ms.

In both those cases, this means the first sample may be 120 ms (resp. 2.5 ms) late compared to the last sample.

I'm unsure how that should be accounted for here. This delay is dependent on whatever the encoder produced, so it's not really due to the decoder, but the decoder has to somehow let downstream know...
Comment 2 Sebastian Dröge (slomo) 2015-03-03 12:55:53 UTC
Ok, in that case the minimum latency would be 120ms. And the maximum latency would be the same because there's no other queueing inside opusdec... unless

Can we know from the stream header or something how big packets can become?
Comment 3 Tim-Philipp Müller 2015-03-03 13:10:55 UTC
Is this really decoder latency then? If the first packet contains N ms of audio, will the decoder output N ms of audio samples right away (without waiting for further packets)?
Comment 4 Vincent Penquerc'h 2015-03-03 13:25:15 UTC
(In reply to Tim-Philipp Müller from comment #3)
> Is this really decoder latency then? If the first packet contains N ms of
> audio, will the decoder output N ms of audio samples right away (without
> waiting for further packets)?

I think so. Unless you've decided to use FEC, in which case you have one packet's delay.
Comment 5 Vincent Penquerc'h 2015-03-03 13:27:18 UTC
(In reply to Sebastian Dröge (slomo) from comment #2)
> Can we know from the stream header or something how big packets can become?

No
Comment 6 Nicolas Dufresne (ndufresne) 2015-03-03 13:48:48 UTC
The size of the buffer in time is accounted in source already. If you count it again in the decoder, it will be doubled.
Comment 7 Sebastian Dröge (slomo) 2015-03-03 14:07:02 UTC
So for non-FEC mode the latency should be 0 here as it currently is... for FEC mode you need to set both latencies to the worst-case duration of one packet unless you somehow can get the maximum packet duration from elsewhere.
Comment 8 Nicolas Dufresne (ndufresne) 2015-03-03 14:40:10 UTC
Makes sense. Does the buffer size really vary? If not, we can easily send a latency update message on the first buffer received. We could also do a dynamic upward update if needed (to prevent the pipeline from dropping forever).
Comment 9 Vincent Penquerc'h 2015-03-04 09:12:23 UTC
Buffer sizes vary, yes.
Comment 10 Vincent Penquerc'h 2015-03-04 09:27:44 UTC
Created attachment 298508 [details] [review]
fix latency query in FEC case

It's all a bit confusing, isn't it? I just read the descriptions in part-latency.txt.
Comment 11 Sebastian Dröge (slomo) 2015-03-04 09:58:36 UTC
Comment on attachment 298508 [details] [review]
fix latency query in FEC case

Looks good, thanks :)

Did the new explanations in part-latency.txt make sense to you? If not let's improve them :)
Comment 12 Vincent Penquerc'h 2015-03-04 10:06:14 UTC
(In reply to Sebastian Dröge (slomo) from comment #11)

> Did the new explanations in part-latency.txt make sense to you? If not let's
> improve them :)

        - the minimum latency in the pipeline, meaning the minimum time
          an element synchronizing to the clock has to wait until it can
          be sure that all data for the current running time has been
          received.

To me, this reads "this depends on how late downstream is". The next paragraph says it's latency this element introduces, but the first paragraph seems to be contradicting that (or at least being irrelevant to it and therefore confusing).

The max-latency one is clearer.
Comment 13 Sebastian Dröge (slomo) 2015-03-04 10:14:40 UTC
Do you have suggestions how to improve that? What is meant there is the effect of the minimum latency. Downstream *has* to wait at least that long to make sure buffers arrive in time.
Comment 14 Vincent Penquerc'h 2015-03-04 10:17:35 UTC
Oh, I was seeing "an element" as the element that we were considering the latency for! It becomes a lot clearer now :D

So, how about:

        - the minimum latency in the pipeline, meaning the minimum time
          downstream elements synchronizing to the clock have to wait until they
          can be sure that all data for the current running time has been
          received.
Comment 15 Sebastian Dröge (slomo) 2015-03-04 11:01:05 UTC
Sounds good, please push your patch and also that change to the design docs :)
Comment 16 Vincent Penquerc'h 2015-03-04 11:05:49 UTC
commit b9c521c0ece762df409b50f7d4035ce047d96738
Author: Vincent Penquerc'h <vincent.penquerch@collabora.co.uk>
Date:   Wed Mar 4 09:24:27 2015 +0000

    opusdec: fix latency query in FEC case
    
    The max latency parameter is "the maximum time an element
    synchronizing to the clock is allowed to wait for receiving all
    data for the current running time" (docs/design/part-latency.txt).
    
    https://bugzilla.gnome.org/show_bug.cgi?id=744338

commit 7b48b7a5bcf91bac4ce61c8f352003927098dc04
Author: Vincent Penquerc'h <vincent.penquerch@collabora.co.uk>
Date:   Wed Mar 4 11:02:41 2015 +0000

    docs: clarify min-latency wording in part-latency.txt
    
    https://bugzilla.gnome.org/show_bug.cgi?id=744338
Comment 17 Ilya Konstantinov 2015-07-31 11:34:24 UTC
(In reply to Sebastian Dröge (slomo) from comment #7)
> So for non-FEC mode the latency should be 0 here as it currently is... for
> FEC mode you need to set both latencies to the worst-case duration of one
> packet

$ gst-inspect-1.0 opusenc

  frame-size          : The duration of an audio frame, in ms
                           (2): 2.5              - 2.5
                           (5): 5                - 5
                           (10): 10               - 10
                           (20): 20               - 20
                           (40): 40               - 40
                           (60): 60               - 60

So why are we setting min-latency to 120ms? It seems to account for two packets, while we only delay by one packet (in FEC) and 0 packets (otherwise).
Comment 18 Vincent Penquerc'h 2015-07-31 12:10:20 UTC
IIRC Opus can have packets up to 120 ms (those would have to be made of more than one frame, since a frame can be 60 ms max).
Comment 19 Ilya Konstantinov 2015-07-31 12:34:48 UTC
You are right. According to the RFC:

“Opus can encode frames of 2.5, 5, 10, 20, 40, or 60 ms.  It can also combine multiple frames into packets of up to 120 ms.“

opusenc would not produce packets with more than one frame; we'd have to use the Opus repacketizer for that.

My follow-up to this patch is to add an 'expected-packet-duration' property to opusdec, for when it's lower than 120 ms. This value could be hardcoded (when you also control your opusenc), or discovered by the user's app through proprietary out-of-band signaling (e.g. an SDP extension).
Comment 20 Olivier Crête 2015-07-31 18:17:12 UTC
Could we add something to the caps that matches the maxptime from the SDP, and then have the payloaders/depayloaders convert it into the application/x-rtp caps? That way, the information can be propagated easily.
Comment 21 Ilya Konstantinov 2015-07-31 19:45:51 UTC
Created attachment 308583 [details] [review]
Add expected-packet-duration to opusdec

This is a preliminary patch, and not a very interesting one :-)

My goal was to make FEC usable (since with unconditional 120ms latency, it's not very useful for RTC). Unfortunately, my test app doesn't show any improvement (between iOS devices with packet loss generated by iOS Network Conditioner).

Also, perhaps Olivier's approach of passing it in caps is more "correct", so I'll leave it for others to decide.