Bug 793607 - aggregator: flushing state never resets with a tee
Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: gstreamer (core)
Version: 1.13.1
Hardware/OS: Other Linux
Importance: Normal normal
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2018-02-19 17:39 UTC by Florian Zwoch
Modified: 2018-11-03 12:44 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
log with seek (4.90 KB, text/plain), 2018-02-20 09:13 UTC, Florian Zwoch
sample_app (808 bytes, text/x-python), 2018-02-22 12:51 UTC, Florian Zwoch

Description Florian Zwoch 2018-02-19 17:39:33 UTC
I actually noticed this with 1.12.4, but looking at the code makes me believe it affects 1.13.x as well.

I'm not 100% sure how flushing works in detail, so maybe help me out here. I made an element based on GstAggregator and it has worked just fine so far, but I noticed that seeking is broken when I attach a tee element downstream of the aggregator.

The basic use case is to listen on the bus for an EOS message and then seek back to the start (with flushing). But I think it also happened when I tried a regular seek while in PAUSED.
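
In Python/PyGObject that pattern looks roughly like this (a minimal sketch only; compositor stands in for my element, and the pipeline is a placeholder):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Placeholder pipeline; compositor stands in for the custom
# GstAggregator subclass.
pipeline = Gst.parse_launch(
    'videotestsrc num-buffers=100 ! compositor ! autovideosink')

def on_message(bus, msg):
    if msg.type == Gst.MessageType.EOS:
        # Seek back to the start with the FLUSH flag set.
        pipeline.seek_simple(
            Gst.Format.TIME,
            Gst.SeekFlags.FLUSH | Gst.SeekFlags.KEY_UNIT, 0)

bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect('message', on_message)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()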

The result is that the aggregator keeps processing data just fine but does not output any more data after the first seek.

A little bit of tracing led me to this code:

(gstaggregator.c:1985)
  if (flush) {
    priv->pending_flush_start = TRUE;
    priv->flush_seeking = TRUE;
  }

 [..]

(gstaggregator.c:2004)
  if (!evdata.result || !evdata.one_actually_seeked) {
    GST_OBJECT_LOCK (self);
    priv->flush_seeking = FALSE;
    priv->pending_flush_start = FALSE;
    GST_OBJECT_UNLOCK (self);
  }


So when I have a tee downstream I get two seek events (presumably one forwarded upstream from each tee branch, correct?). The first one seems to work just fine; the second one, however, produces much less debug output (not doing any flushing?).
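
One way to confirm the duplicated seek would be an upstream event probe on the aggregator's src pad. Untested sketch, with compositor and fakesinks as stand-ins:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    'videotestsrc ! compositor name=mix ! tee name=t '
    't. ! queue ! fakesink t. ! queue ! fakesink')

def on_upstream_event(pad, info):
    # Print every seek event that reaches the aggregator's src pad.
    if info.get_event().type == Gst.EventType.SEEK:
        print('seek event on aggregator src pad')
    return Gst.PadProbeReturn.OK

mix = pipeline.get_by_name('mix')
mix.get_static_pad('src').add_probe(
    Gst.PadProbeType.EVENT_UPSTREAM, on_upstream_event)

pipeline.set_state(Gst.State.PAUSED)
pipeline.get_state(Gst.CLOCK_TIME_NONE)  # wait for preroll
# If each tee branch forwards the seek, this should print twice.
pipeline.seek_simple(Gst.Format.TIME, Gst.SeekFlags.FLUSH, 0)
pipeline.set_state(Gst.State.NULL)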

Basically the second seek was a success (evdata.result == 1), but I think evdata.one_actually_seeked was also 1, because after that point I kept hitting this:

(gstaggregator.c:555)
  GST_INFO_OBJECT (self, "Not pushing (active: %i, flushing: %i)",
  self->priv->flush_seeking, gst_pad_is_active (self->srcpad));

with "active: 1, flushing: 1". Which indicates that "priv->flush_seeking = FALSE;" was not hit again, leaving its state in flush_seeking.

To verify, I boldly moved the "priv->flush_seeking = FALSE;" line out of the if block so it gets reset regardless of the result, and that did "fix" my original problem.

Surely this is not the correct fix, but maybe your understanding is much better than mine and you already have an idea of what may cause this behavior when a tee is present downstream?

I can attach some traces for this particular case; I just lack access to the right machine at the moment.
Comment 1 Florian Zwoch 2018-02-20 09:13:25 UTC
Created attachment 368609
log with seek

GST_DEBUG=*aggregator*:4
Comment 2 Florian Zwoch 2018-02-22 12:51:44 UTC
Created attachment 368763
sample_app

Here is aggregator.py as a sample app. You can set a video file in there; it gets loaded and decoded twice and fed into a compositor element.

There is an "if" which you can toggle between a working use case and the failure case where a tee filter is used for a second output (just a fakesink).

The app loops the video after EOS. When the tee has a second branch, the video renderer no longer updates and CPU usage increases: decodebin keeps processing the video, but nothing reaches the renderer anymore. EOS is hit sooner this time, and the loop begins again without the video recovering.
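
The structure is roughly this (sketch only, not the exact attachment; USE_TEE and the file path are placeholders):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

USE_TEE = True      # False: working case, True: failure case with tee
VIDEO = 'test.mp4'  # placeholder path

# The same file is decoded twice and fed into a compositor; the tee
# adds a second output branch ending in a fakesink.
tail = ('tee name=t t. ! queue ! autovideosink t. ! queue ! fakesink'
        if USE_TEE else 'autovideosink')
pipeline = Gst.parse_launch(
    'compositor name=mix ! videoconvert ! ' + tail + ' '
    'filesrc location=' + VIDEO + ' ! decodebin ! videoconvert ! mix. '
    'filesrc location=' + VIDEO + ' ! decodebin ! videoconvert ! mix.')

def on_message(bus, msg):
    if msg.type == Gst.MessageType.EOS:
        # Loop the video: flushing seek back to the start after EOS.
        pipeline.seek_simple(
            Gst.Format.TIME,
            Gst.SeekFlags.FLUSH | Gst.SeekFlags.KEY_UNIT, 0)

bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect('message', on_message)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()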
Comment 3 GStreamer system administrator 2018-11-03 12:44:56 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further in the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gstreamer/issues/276.