GNOME Bugzilla – Bug 667314
baseaudiosink: allow the internal clock to be calibrated externally
Last modified: 2018-05-04 09:51:53 UTC
Created attachment 204632 [details] [review]
patch

With the current implementation, any calibration set on the provided_clock (except the rate num and denom) is overridden when sync_latency is called. This patch adds a simple check for a pre-existing calibration, and does not set it again if one is found.
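For reference, a minimal sketch of the kind of guard the patch adds to the sync-latency path. This is an illustration, not the literal patch; the sink pointer, the itime variable (the external time obtained from the master clock), and the surrounding context are assumptions:

    /* Sketch: only (re)calibrate the provided clock when no calibration
     * has been set yet.  A freshly created clock reports internal == 0
     * and external == 0 from gst_clock_get_calibration(). */
    GstClockTime internal, external, rate_num, rate_denom;

    gst_clock_get_calibration (sink->provided_clock,
        &internal, &external, &rate_num, &rate_denom);

    if (internal == 0 && external == 0) {
      /* no pre-existing calibration: behave as before */
      gst_clock_set_calibration (sink->provided_clock,
          gst_clock_get_internal_time (sink->provided_clock), itime,
          rate_num, rate_denom);
    }
    /* otherwise keep the externally supplied calibration intact */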
<wtay> I don't understand how the previous code could cause problems, or do you think it is just an optimization? When will it adjust the calibration again when the external clock changes?
<__tim> wtay, it sounds like they're setting/forcing something on the clock, and then the code always resets/overwrites that, and they don't want the code to overwrite already-configured values
<wtay> then I wonder why they need to do that. What's wrong with the default implementation?
We basically want to dictate what time position 0 in the ringbuffer corresponds to. The way to do this is to externally call

    gst_clock_set_calibration (base_sink->provided_clock, 0, time, 1, 1);

Now the ringbuffer will have the same idea about time that our internal thread (the one reading from the ringbuffer) is using. This is really important for the over/under-run checks in the _get_alignment () function, because if the two ideas about time (the ringbuffer's and our reader thread's) differ, it can cause a permanent long delay (over-run) or complete silence (under-run).

We found that if our thread has started before the sink starts, we have to apply the right calibration *before* reading any buffers; otherwise you start filling the buffer under one calibration, and by changing it after a few buffers have been read out, you basically create an enormous headroom in the buffer (with clock time not starting from 0, you could have *years* of headroom...). So it becomes important to be able to set the right calibration *prior* to "sync-latency" having run. We do this in _ringbuffer_start(), which seems like the natural place for it. However, sync-latency will always reset the calibration, so this patch basically tries to preserve an external effort of dictating the start time of the internal ringbuffer.
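For illustration, a self-contained sketch of that external calibration, assuming base_sink is a GstBaseAudioSink (0.10 API) whose provided_clock member is accessible, and time is the external clock time that ringbuffer position 0 should map to:

    #include <gst/gst.h>
    #include <gst/audio/gstbaseaudiosink.h>

    /* Pin ringbuffer position 0 to a known external time by calibrating
     * the sink's provided clock: internal time 0 maps to "time", with a
     * 1/1 rate, so the clock and our reader thread agree on what time 0
     * means. */
    static void
    calibrate_sink_clock (GstBaseAudioSink * base_sink, GstClockTime time)
    {
      gst_clock_set_calibration (base_sink->provided_clock, 0, time, 1, 1);
    }

This has to run before any buffers are read out (e.g. from _ringbuffer_start() as described above); the attached patch then keeps sync-latency from overwriting the calibration afterwards.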
Couldn't you just offset your internal time based on the time at start, so that it also starts from 0? If I understand correctly, the problem here is that the internal clock does not start at 0?
Closing this bug report as no further information has been provided. Please feel free to reopen this bug report if you can provide the information that was asked for in a previous comment. Thanks!