GNOME Bugzilla – Bug 791979
aggregator: only process one event/query at a time
Last modified: 2018-11-03 12:44:04 UTC
Created attachment 366009 [details] [review]
only process one event/query at a time

This is required for implementing non-flushing segmented seeks. See the explanation in the patch.
Review of attachment 366009 [details] [review]:

The patch in itself seems ok, but I'm not sure I understand what it changes. It would still put "processed-event" on the pad, which will cause the next check in gst_aggregator_aggregate_func() to loop again and process the next event. Do you expect the segment-done to be in the same slot in all the queues? To me, it looks like something is missing to stop event processing per pad until the desired event has been received.
Right, the problem this 'solved' was that I am setting a have_segment_done flag when getting segment_done and unsetting it when I get segment_start. With this change it works as long as the queues are 'balanced'. Introducing queues with event sequence numbers is expensive and would be a complicated change. Maybe I can add a generic flag to 'freeze' pad queues instead, wdyt?
Maybe, but I think this is quite specific: would some other non-synchronized event unfreeze it, like a flush? I would just add code specific to segment seeking in the GstAggregator class; if it's all internal, we can make it more generic if a second use case appears.
gst_aggregator_aggregate_func() does:

  while (priv->send_eos && priv->running) {
    ...
    gst_element_foreach_sink_pad (GST_ELEMENT_CAST (self),
        gst_aggregator_do_events_and_queries, NULL);
    ...
  }

gst_aggregator_do_events_and_queries() will call klass->sink_event() (or klass->sink_query()). There is no way for sink_event() to tell gst_aggregator_do_events_and_queries() to stop looping or to not pop an element off the queue. Conceptually, I would need to make sure that the last sink_event() call sees that all pads had a segment_done, before any sink_event() will see a segment_start().
(In reply to Stefan Sauer (gstreamer, gtkdoc dev) from comment #4)
> Conceptually, I would need to make sure that the last sink_event() call
> sees that all pads had a segment_done, before any sink_event() will see a
> segment_start().

I totally agree, but I think we need to add some logic to block processing on the various pads to "synchronise" them for this.
(In reply to Olivier Crête from comment #5)
> (In reply to Stefan Sauer (gstreamer, gtkdoc dev) from comment #4)
> > Conceptually, I would need to make sure that the last sink_event() call
> > sees that all pads had a segment_done, before any sink_event() will see a
> > segment_start().
>
> I totally agree, but I think we need to add some logic to block processing
> on the various pads to "synchronise" them for this.

The logic is in the other patches: https://bugzilla.gnome.org/show_bug.cgi?id=791986. This change is mostly to allow the subclasses to do this.
Let me explain more. The pad queues take *serialized* events. The old code did:

  foreach q in pad_queues:
    foreach d in q:
      break if d is buffer
      handle d

IMHO this will not allow the subclasses to sync on the queue items. Of course we could implement another special case:

  foreach q in pad_queues:
    foreach d in q:
      break if d is buffer
      break if d is event and d.type is segment-done
      handle d

but I don't think this is the right place to do this. The patch removes the "foreach d in q" loop, and this way subclasses can peek at the heads of the queues and act accordingly. We've been saying that for segmented non-flushing seeks we need queues to track the events. We do have queues to track the events already. I think we need a better way to trigger doing stuff depending on what kind of objects are in the queues.
-- GitLab Migration Automatic Message -- This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gstreamer/issues/266.