Bug 704331 - Clock estimation (aka: Forward QoS)

Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: gstreamer (core)
Version: git master
OS: Other Linux
Importance: Normal enhancement
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Duplicates: 632222 (view as bug list)
Depends on:
Blocks:
Reported: 2013-07-16 15:17 UTC by Edward Hervey
Modified: 2018-11-03 12:18 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description Edward Hervey 2013-07-16 15:17:37 UTC
Problems:
    Currently elements that do remote clock estimation modify the values
    of the buffer timestamps they output.

    This causes several problems:
      * Loss of original information
         An original perfect 25fps video stream ends up having buffers
         no longer spaced by 1/25th of a second.
      * Correction should only be applied when doing synchronization
        in, and only in, the current pipeline and with the same clock.
          If the stream is stored in a file, the time corrections are
          stored with it and the resulting file ends up playing back
          faster/slower than targeted.

Goal: For live systems, split the remote clock estimation/correction
   into separate parts:
     1) Remote clock estimation is done in elements that can do it.
     2) Elements capable of clock estimation do not modify the original
        values of in-band timing and instead only modify the segment
        values such that the running-time of the buffers they output
        is coherent with the running-time of the buffers they received.
     3) Synchronization correction is only done in elements that
        actually wait against the clock.


Proposal:
  New event GST_EVENT_CLOCK_CORRECTION (in-band, sticky)

  Fields:
    GstClockTime time : The running-time of the clock to which this
                        correction applies.
    GstClockTimeDiff offset : The correction to apply to 'time' (can
                              be negative)
    gdouble rate : The long-term rate correction.

  GstEvent *gst_event_new_clock_correction (GstClockTime time,
                                            GstClockTimeDiff offset,
                                            gdouble rate);

  gboolean gst_event_parse_clock_correction (GstEvent *event,
                                             GstClockTime *time,
                                             GstClockTimeDiff *offset,
                                             gdouble *rate);
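
  As an illustration only: GST_EVENT_CLOCK_CORRECTION does not exist in
  GStreamer, but the idea could be prototyped today with a custom sticky
  downstream event. A minimal sketch, assuming a made-up structure name
  "clock-correction" and hypothetical helper names:

    #include <gst/gst.h>

    /* Prototype of the proposed event as a custom sticky downstream
     * event (serialized/in-band), carrying the three proposed fields. */
    static GstEvent *
    make_clock_correction_event (GstClockTime time, GstClockTimeDiff offset,
        gdouble rate)
    {
      GstStructure *s;

      /* time   : running-time the correction applies to
       * offset : correction to apply at 'time' (may be negative)
       * rate   : long-term rate correction */
      s = gst_structure_new ("clock-correction",
          "time", G_TYPE_UINT64, time,
          "offset", G_TYPE_INT64, offset,
          "rate", G_TYPE_DOUBLE, rate, NULL);

      return gst_event_new_custom (GST_EVENT_CUSTOM_DOWNSTREAM_STICKY, s);
    }

    static gboolean
    parse_clock_correction_event (GstEvent * event, GstClockTime * time,
        GstClockTimeDiff * offset, gdouble * rate)
    {
      const GstStructure *s = gst_event_get_structure (event);

      if (s == NULL || !gst_structure_has_name (s, "clock-correction"))
        return FALSE;

      return gst_structure_get (s,
          "time", G_TYPE_UINT64, time,
          "offset", G_TYPE_INT64, offset,
          "rate", G_TYPE_DOUBLE, rate, NULL);
    }

  An element doing remote clock estimation would push such an event on its
  source pad with gst_pad_push_event() whenever its estimation changes.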

Description

  Over time the rate will stabilize. We get a more and more accurate
   estimation of the remote clock rate.

  The rate might change if the nature of the remote source changes (such
   as changing from one participant in an RTP multi-participant conference
   to another). It also appears to change, by a few parts-per-million (ppm),
   on some DVB channels when the nature of the feed changes from
   pre-recorded to live. Finally, it can also change when changing
   DVB channels.

   If rate == 1.0, this means that the remote provider is using a
    clock with the exact same rate as the clock used locally.

   If rate > 1.0, this means that the remote provider is using a
    clock which has a slower rate than the clock used locally.
    Buffers end up being synchronized later (we end up with a
    bigger, and therefore later, target synchronization running-time).

   If rate < 1.0, this means that the remote provider is using a
    clock which has a higher rate than the clock used locally.
    Buffers end up being synchronized earlier (we end up with a
    smaller, and therefore earlier, target synchronization running-time).
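
   For instance, applying the correction formula given further below with
   C = 0: with R = 1.0001 a buffer at running-time 10 s gets a target
   synchronization time of 10.001 s (1 ms later), while with R = 0.9999 it
   gets 9.999 s (1 ms earlier).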


  The offset corresponds to the exact correction that needs to be applied
   for a given running-time.

  Initially more corrections will be needed while the rate stabilizes, so
   this helps ensure that we can still get exact corrections instantly.

  This also helps to do one-shot adjustments quickly. This could happen
   if there is a routing/networking change between the remote device and
   the local device which would affect the travel latency (but not the
   rate).


  The rate of emission of the event is left up to the element providing it,
  but an expected usage would be to:
   * Assume an initial rate of 1.0 without any correction.
   * As soon as a new corrected timestamp diverges too much from the
     previous (or assumed) correction/rate, send an update and remember
     the new correction/rate for future values.

  Until a new correction is sent, any new running-time received will be
  corrected as such:

    T  : clock_correction.time
    C  : clock_correction.offset
    R  : clock_correction.rate
    T2 : running-time > T

    Corrected time for T2 = R * T2 + C

    (and, for the running-times that follow, since the default values are
     R = 1.0 and C = 0.0, no correction is applied when no clock-correction
     event was received).
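
    A minimal sketch of how a synchronizing element might apply this
    (hypothetical helper; only the formula comes from this proposal):

      #include <gst/gst.h>

      /* Apply the last received correction to a running-time,
       * following "Corrected time = R * T2 + C". With the defaults
       * R = 1.0 and C = 0 the value is returned unchanged. */
      static GstClockTime
      apply_clock_correction (GstClockTime running_time, gdouble rate,
          GstClockTimeDiff offset)
      {
        gdouble corrected;

        if (!GST_CLOCK_TIME_IS_VALID (running_time))
          return running_time;

        corrected = rate * (gdouble) running_time + (gdouble) offset;

        return corrected < 0 ? 0 : (GstClockTime) corrected;
      }

    The sink would then wait for base-time + corrected running-time +
    latency, while positions and upstream QoS keep using the uncorrected
    running-time (see comments 4 and 5 below).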


Clarification needed:

 What happens in the case where we have multiple clock estimation
 systems chained one after the other, such as mpeg-ts over rtp?
   1) Does the second one "drop" the first one's observations?
   2) Does the second one disable its own observation system?
   3) Does the second one use the first one's observations to fine-tune
      its own?

 What happens in the case where we multiplex streams with different
 clock estimations, such as several different RTP sources into one
 mpeg-ts program (with one clock/PCR) that we want to transmit live?

   (Note: in the case where we multiplex into different mpeg-ts programs,
    the problem doesn't occur, since we can use different PCR streams,
    with a different clock per program.)

 Do we need a new method for storing/calculating the "corrected"
 running-time? This presumably shouldn't modify gst_segment_to_running_time(),
 since that is used outside of clock-synchronized elements?
Comment 1 Arnaud Vrac 2013-07-17 16:46:50 UTC
I think this proposal would be very useful to work with hardware oscillators that allow changing the hardware clock rate by a few ppm. I don't think it can be done easily right now with the current QoS API. To avoid large swings in the ppm correction values, usually a PID controller is used to converge quickly to a stable correction.
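
A minimal sketch of the kind of PID controller mentioned above (purely illustrative and, per the next comment, outside the scope of this proposal; the gains and the ppm interpretation of the output are assumptions):

  /* Plain C, no GStreamer dependency: turn the measured clock error
   * into a ppm adjustment for a tunable hardware oscillator. */
  typedef struct {
    double kp, ki, kd;   /* gains, to be tuned for the hardware */
    double integral;     /* accumulated error */
    double prev_error;   /* error at the previous update */
  } PidController;

  /* error: measured clock error in seconds, dt: seconds since last call.
   * Returns the ppm adjustment to apply to the oscillator. */
  static double
  pid_update (PidController * c, double error, double dt)
  {
    double derivative = (error - c->prev_error) / dt;

    c->integral += error * dt;
    c->prev_error = error;

    return c->kp * error + c->ki * c->integral + c->kd * derivative;
  }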
Comment 2 Edward Hervey 2013-07-17 16:56:15 UTC
Arnaud, this is not the goal of this proposal. This proposal is for correctly extracting and reporting remote clock estimation (with information contained within the stream as is the case for RTP and DVB) against a local clock *AND* allowing the synchronization to be corrected for *that* stream and *that* clock.

A GstClock *needs* to have a constant monotonic rate, otherwise all the synchronization paradigms in GStreamer fall down.

You might want to open another bug with a further in-depth explanation of the issue you are mentioning.
Comment 3 Olivier Crête 2013-07-29 07:34:16 UTC
It may be interesting to have some measure of the "stability" of the rate in the event (so that a downstream element could ignore it until the rate stabilizes enough). 

I would assume that the first to do clock estimation is probably best placed, so for example if you have mpeg-ts in RTP, the RTP one should be used.

I guess for example, the mpeg-ts muxer would need to "apply" the correction, or we could have an element that just applies the rate correction (ie modifies the timestamps according to the long term rate).

Another question is what happens to the current upstream QoS message, how does that interact with this? Is upstream QoS supposed to just be sent as if there was no forward-qos message ?
Comment 4 Edward Hervey 2013-08-13 15:37:26 UTC
(In reply to comment #3)
> It may be interesting to have some measure of the "stability" of the rate in
> the event (so that a downstream element could ignore it until the rate
> stabilizes enough). 

  I'm tempted to say it's up to the emitter of the event to handle that.

> 
> I would assume that the first to do clock estimation is probably best placed,
> so for example if you have mpeg-ts in RTP, the RTP one should be used.

  That would make sense, though only testing will prove whether that's correct or not :) Both elements will need to have the code for estimation/emission anyway.

> 
> I guess for example, the mpeg-ts muxer would need to "apply" the correction, or
> we could have an element that just applies the rate correction (ie modifies the
> timestamps according to the long term rate).

  Tricky.

  If all streams coming into the muxer have the same clock estimation then it means they all belong to the same "time realm". In that case the muxer only needs to forward the estimation events downstream and it uses the incoming timestamps as if no estimation/correction was to be applied.

  If they belong to different domains ... should it correct everything at that point (and essentially "swallow and apply" those estimations and not forward estimations downstream) ? 
  

> 
> Another question is what happens to the current upstream QoS message, how does
> that interact with this? Is upstream QoS supposed to just be sent as if there
> was no forward-qos message ?

  The time-realm for upstream QoS message would need to remain "uncorrected" running-time.
Comment 5 Edward Hervey 2013-10-27 11:31:46 UTC
Just to clarify. The intent is to provide hints for the synchronization time while not modifying the running-time.

It would only extend the code doing synchronization (i.e. go alongside the places where we synchronize against running_time + latency) in sinks and other objects.

Knowing whether buffers/events correspond to the same moment would still just use the running-time.
Comment 6 Edward Hervey 2018-05-04 09:43:14 UTC
*** Bug 632222 has been marked as a duplicate of this bug. ***
Comment 7 GStreamer system administrator 2018-11-03 12:18:31 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gstreamer/issues/40.