GNOME Bugzilla – Bug 764396
Add GST_EVENT_LATENCY_CHANGED
Last modified: 2018-11-03 12:34:05 UTC
Created attachment 325055 [details] [review] suggested implementation

clipped from #gstreamer:

Let's start with some history: back in 2008 we needed a dynamic jitterbuffer. After discussing back and forth with wtay, we came to the conclusion that the proper way to do this was to:

1. Change the latency property on the jitterbuffer.
2. The jitterbuffer would then post a latency message on the bus.
3. In the bus handler (in the application), this message would be picked up, and wtay added a new method, gst_bin_recalculate_latency, that the application could call on the main pipeline to have the whole pipeline query its latency again (each sink sending upstream latency queries etc.).

This has worked OK for us ever since. In Pexip we terminate the latency in the mixers, not in the sinks, but other than that it works exactly the same way as before.

Lately, however, we have discovered that this has some horrific side effects: with many participants connecting, and latency changing often, this method has us recalculating EVERYONE'S latency when we only want to reconfigure that of a single participant, or more specifically the jitterbuffers with the same CNAME belonging to that participant.

Now, I know the idea of "groups of sinks" has been mentioned before, but we are at a stage now where we really need to be able to recalculate latency only for the affected sinks/mixers, and this is where I have a concrete suggestion: a downstream "latency-changed" event that in turn can trigger an upstream latency query, either from a sink or from a mixer.

If this event was sent from the jitterbuffer in the same place the latency message is posted, all affected downstream mixers/sinks would know "directly" that latency had changed somewhere upstream of them, and could make decisions based on this. For us it would mean re-sending latency queries upstream from the mixers that received this event, and then using the new latency in the mix.
For the sink case it would work the same way, only that with this "mode" the sinks would get their new latency "independent" of the pipeline latency, and the "grouping" would then happen from whichever sources decided to send this event. So in the case of rtpbin, you are already changing latency for all the jitterbuffers with the same CNAME as one "group", and hence all of them would also send the latency event, ensuring lip sync at the mixers/sinks.
And here is how we intend to use it in the jitterbuffer: https://github.com/pexip/gst-plugins-good/commit/5adb429da783814c94893995e9360c3f4332b62f
I'm not convinced that this is the right approach. If you have two branches, one that is "jitterbuffer ! audiodec ! sink" and another that is "jitterbuffer ! videodec ! sink", and the audio decoder is zero-latency but the video decoder isn't, then you need the pipeline (or the application) to do the query on both sinks and take the max of their min latencies (as GstBin currently does on start). With this event, if two branches have different latency, you still can't just recompute at the sink; you still need to do it at the application level? Why not just check the source of the latency message and recompute the latency for the right sinks?
The event allows you to know which sinks are actually affected by the latency change. You can then take this information in the application to recalculate the latency for these specific pipeline branches. That said, you still need knowledge of what the pipeline looks like, so it might not be very useful as is and might not add any new information.
Thanks for the input! I really see this as a useful "puzzle piece", not as a complete implementation in itself. For example, some elements maintain an internal "upstream_latency" variable (like the jitterbuffer); upon receiving this event, it would always be a good idea to re-query for the new value there.

As Olivier points out, this is not going to solve anything by itself, but the fact that you now know which sinks are directly affected by the latency change, simply because they are the ones connected to the element that changed its latency, seems very useful to me as a piece of a perhaps new and more focused way of doing latency in a pipeline.

As an idea: if we added a "group-id" property to basesink, then upon receiving the latency-changed event, basesink could emit a signal to the pipeline, which could then resync all the sinks with the same group-id as the one that emitted the signal.

Note how if one were to attempt the same thing with today's bus approach, the application would need to keep a map of which elements (with the potential of changing their latency) are connected to which sinks. The latency-changed event keeps the information localized by utilizing the pipeline itself.

I am sure there are other approaches as well, but I strongly feel that leaving the bus out of something as "pipeline-internal" as the latency calculation is the right way forward, and that we can then allow for bigger complexities, like having multiple "groups" etc.
Spinning a bit further on the group-id idea(!): by simply making group-id 0 the default for basesink, we would effectively have the same behavior as today, but by then changing that property to 1, you start operating with a second group. Meaning the code could be generic for all cases, and we would not need a "special case" for group-id. So a full "recalculate latency" might call gst_bin_recalculate_group_latency() for each group present.
(In reply to Håvard Graff (hgr) from comment #4)
> I really see this as a useful "puzzle piece", not as a complete
> implementation in itself. Like, some elements maintain an internal
> "upstream_latency" variable (like the jitterbuffer), upon receiving this
> event, it would always be a good idea to re-query for the new value here.

Indeed, this would make aggregator a bit more reliable in that regard, as it could query by itself instead of assuming that the application triggers a complete latency recalculation.

(In reply to Håvard Graff (hgr) from comment #4)
> As an idea, if we now added a property to base-sink, "group-id"

Note that stream-start events also have something called a group-id, which generally refers to a collection of streams that should somehow be synchronized together. It is currently used by playbin to group the main media and an external subtitle stream together, and to distinguish one of these "stream groups" from the next one that appears when you switch to the next media.
> > I really see this as a useful "puzzle piece", not as a complete
> > implementation in itself. Like, some elements maintain an internal
> > "upstream_latency" variable (like the jitterbuffer), upon receiving this
> > event, it would always be a good idea to re-query for the new value here.
>
> Indeed, this would make aggregator a bit more reliable in that regard as it
> could query by itself instead of assuming that the application triggers
> complete latency recalculation.

Which is exactly the use case we have right now!

> Note that stream-start events also have something called group-id. Which
> generally refers to a collection of streams that somehow should be
> synchronized together. Currently used by playbin for group the main media
> and an external subtitle stream together, and for distinguishing one of
> these "stream groups" from the next one that would appear when you switch to
> the next media.

"latency-group", "synchronization-group", "sync-group", "sync-group-id"? :)
-- GitLab Migration Automatic Message -- This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gstreamer/issues/166.