Bug 667314 - baseaudiosink: allow the internal clock to be calibrated externally
Status: RESOLVED INCOMPLETE
Product: GStreamer
Classification: Platform
Component: gst-plugins-base
Version: git master
Hardware/OS: Other / All
Importance: Normal enhancement
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2012-01-04 20:58 UTC by Håvard Graff (hgr)
Modified: 2018-05-04 09:51 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
patch (3.27 KB, patch), 2012-01-04 20:58 UTC, Håvard Graff (hgr)

Description Håvard Graff (hgr) 2012-01-04 20:58:07 UTC
Created attachment 204632 [details] [review]
patch

With the current implementation, any calibration set on the provided_clock
(except the rate numerator and denominator) is overridden when sync_latency
is called.

This patch adds a simple check for a pre-existing calibration, and does not
overwrite it if one is found.
Comment 1 Tim-Philipp Müller 2012-01-05 17:58:13 UTC
<wtay> I don't understand how the previous code could cause problems, or do you think it is just an optimization? When will it adjust calibration again when the external clock changes?
<__tim> wtay, it sounds like they're setting/forcing something on the clock, and then the code always resets/overwrites that, and they don't want the code to overwrite already-configured values
<wtay> then I wonder why they need to do that. what's wrong with the default implementation
Comment 2 Håvard Graff (hgr) 2012-01-06 11:35:30 UTC
We basically want to dictate what time position 0 is in the ringbuffer. The way to do this is to call gst_clock_set_calibration (base_sink->provided_clock, 0, time, 1, 1) externally.

Now the ringbuffer will have the same idea about time that our internal thread (the one reading from the ringbuffer) is using. This is really important for the over/under-run checks in the _get_alignment () function, because if the two ideas about time (the ringbuffer's and our reader thread's) differ, it can cause a permanent long delay (an over-run) or complete silence (an under-run).

Now, we found that if our thread has started before the sink starts, we have to apply the right calibration *before* reading any buffers. Otherwise the buffer starts out with one calibration, and changing it after a few buffers have already been read out basically creates an enormous headroom in the buffer (with clock-time not starting from 0, you could have *years* of headroom...). So it becomes important to be able to set the right calibration *prior* to "sync-latency" being performed. We do this in _ringbuffer_start(), which seems like the natural place for it.

However, sync-latency will always reset the calibration, so this patch basically tries to preserve an external attempt to dictate the start-time of the internal ringbuffer.
Comment 3 Sebastian Dröge (slomo) 2014-12-22 10:09:52 UTC
Couldn't you just offset your internal time based on the time during start, so that it also starts from 0?

If I understand this correctly, the problem here is that the internal clock does not start at 0?
Comment 4 Sebastian Dröge (slomo) 2018-05-04 09:51:53 UTC
Closing this bug report as no further information has been provided. Please feel free to reopen this bug report if you can provide the information that was asked for in a previous comment.
Thanks!