Bug 746524 - queue: Add property to allow pushing all queued buffers together
Status: RESOLVED OBSOLETE
Product: GStreamer
Classification: Platform
Component: gstreamer (core)
Version: git master
Hardware: Other Linux
Importance: Normal enhancement
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
 
 
Reported: 2015-03-20 12:54 UTC by Jose Antonio Santos Cadenas
Modified: 2018-11-03 12:26 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
queue: Add property to allow pushing all queued buffers together (6.49 KB, patch), 2015-03-20 12:54 UTC, Jose Antonio Santos Cadenas
queue: Add property to allow pushing all queued buffers together (6.74 KB, patch), 2015-03-25 14:41 UTC, Jose Antonio Santos Cadenas
queue: Add property to allow pushing all queued buffers together (6.98 KB, patch), 2015-05-20 08:55 UTC, Miguel París Díaz
queue: Add property to allow pushing all queued buffers together (6.92 KB, patch), 2015-05-20 09:53 UTC, Jose Antonio Santos Cadenas
queue: Add property to allow pushing all queued buffers (7.02 KB, patch), 2015-06-24 14:14 UTC, Jose Antonio Santos Cadenas
queue: Add property to allow pushing all queued buffers (7.00 KB, patch), 2016-01-28 10:37 UTC, Miguel París Díaz
queue: Add property to allow pushing all queued buffers together (7.01 KB, patch), 2016-01-28 11:59 UTC, Jose Antonio Santos Cadenas

Description Jose Antonio Santos Cadenas 2015-03-20 12:54:27 UTC
Created attachment 299946 [details] [review]
queue: Add property to allow pushing all queued buffers together

The attached patch adds a property that allows queue to push a buffer list containing all the buffers currently in the queue. If there are other kinds of data in the queue, order is preserved: the buffer list only contains buffers up to the next event or query.
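
For illustration only, a minimal sketch of that draining logic using public GStreamer API; the function and variable names are hypothetical and are not taken from the attached patch:

#include <gst/gst.h>

/* Collect the leading buffers of a queue of GstMiniObjects into a single
 * GstBufferList, stopping at the first item that is not a buffer so that
 * ordering with events and queries is preserved. The list takes ownership
 * of each popped buffer. */
static GstBufferList *
drain_buffers_to_list (GQueue * items)
{
  GstBufferList *list = gst_buffer_list_new ();

  while (!g_queue_is_empty (items)) {
    GstMiniObject *obj = g_queue_peek_head (items);

    if (!GST_IS_BUFFER (obj))
      break;                    /* an event or query ends the list */

    g_queue_pop_head (items);
    gst_buffer_list_add (list, GST_BUFFER (obj));
  }

  return list;
}

The resulting list would then be pushed downstream in one call with gst_pad_push_list() instead of one gst_pad_push() per buffer.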
Comment 1 Sebastian Dröge (slomo) 2015-03-20 17:11:28 UTC
Review of attachment 299946 [details] [review]:

Generally looks good to me, but I think the impact of this will not be very big. It would only cause different behaviour if the queue is blocked long enough to accumulate more than one buffer.

::: plugins/elements/gstqueue.c
@@ +130,3 @@
 #define DEFAULT_MAX_SIZE_TIME     GST_SECOND    /* 1 second    */
 
+#define DEFAULT_GENERATE_BUFFER_LIST FALSE

You should initialize the struct field to that in gst_queue_init()
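
For reference, a sketch of the requested change, assuming the patch adds a gboolean field named generate_buffer_list to the GstQueue struct (the field name is an assumption based on the property name used later in this bug):

static void
gst_queue_init (GstQueue * queue)
{
  /* ... existing initialization ... */

  /* give the new flag its documented default */
  queue->generate_buffer_list = DEFAULT_GENERATE_BUFFER_LIST;
}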
Comment 2 Jose Antonio Santos Cadenas 2015-03-25 14:40:28 UTC
I think that in some circumstances this could be important. I found cases where we had a strong delay in audio (which was buffered); if all the buffers can be pushed together and the downstream elements can handle them at once (as multiudpsink does), this can improve performance by not having to call the chain function for every single buffer.
Comment 3 Jose Antonio Santos Cadenas 2015-03-25 14:41:11 UTC
Created attachment 300285 [details] [review]
queue: Add property to allow pushing all queued buffers together
Comment 4 Jose Antonio Santos Cadenas 2015-04-29 09:40:25 UTC
Hi, any new comments on this?
Comment 5 Jose Antonio Santos Cadenas 2015-05-13 15:26:29 UTC
Sebastian, you said that this patch looked OK; it would be great if it could be accepted.
Comment 6 Sebastian Dröge (slomo) 2015-05-13 16:15:54 UTC
I'd like to have a second opinion on this :)
Comment 7 Tim-Philipp Müller 2015-05-13 16:21:07 UTC
I don't know, the whole thing smells a bit weird to me. What's the exact use case for this?

If you have a scenario where the individual buffer pushes might be a performance problem, then you should tackle that elsewhere in the pipeline and make that part use buffer lists already. I understand the latency-vs-performance trade-off that involves, of course, but what you're trying to do here feels a bit desperate somehow :)
Comment 8 Jose Antonio Santos Cadenas 2015-05-13 20:52:21 UTC
The use case for this is a pipeline with audio and video coming from the same source and demuxed later, where both streams are sent to many sinks using a tee. Audio buffers are more frequent, and they start to be queued because they cannot be processed fast enough: the time consumed by the push functions becomes a bottleneck. And since buffers are already being queued, latency is not a problem.

As Tim says, this is only useful if all the downstream elements also support buffer lists.
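
To make that concrete, a rough sketch of what "supporting buffer lists" means for a downstream element, using the standard pad API; the element and function names here are hypothetical:

#include <gst/gst.h>

/* A sink pad that accepts a whole GstBufferList in one call instead of
 * one chain call per buffer. */
static GstFlowReturn
my_sink_chain_list (GstPad * pad, GstObject * parent, GstBufferList * list)
{
  guint i, len = gst_buffer_list_length (list);

  for (i = 0; i < len; i++) {
    GstBuffer *buf = gst_buffer_list_get (list, i);
    /* process buf; a sink like multiudpsink can send the whole batch */
  }

  gst_buffer_list_unref (list);
  return GST_FLOW_OK;
}

static void
my_sink_setup_pad (GstPad * sinkpad)
{
  gst_pad_set_chain_list_function (sinkpad, my_sink_chain_list);
}

Elements that do not install a chain_list function automatically fall back to one chain call per buffer, so pushing lists is safe either way; the benefit only shows up where such a function exists.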
Comment 9 Sebastian Dröge (slomo) 2015-05-13 20:57:31 UTC
And it's also only useful if the queue actually has more than a single buffer at the time of pushing. I would be interested in seeing some numbers on how much this improves things in a real use case.

I think what would be more useful is something on queue that allows it to actively buffer up to N nanoseconds/bytes of data and then push that downstream in one buffer list. That way you are guaranteed to get data downstream in that chunk size and don't depend on arbitrary decisions by the scheduler and other uncontrollable events. And it would introduce latency deterministically instead of something completely random.

(All this would have to take the configured latency and the latency query into account though, for live pipelines... and especially have a timeout after which it stops waiting for further data.)
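
A rough sketch of that alternative, leaving out the timeout and the latency-query handling mentioned above; all names are illustrative, this is not existing queue code:

#include <gst/gst.h>

/* Accumulate buffers until a duration or byte threshold is reached, then
 * push everything downstream as one GstBufferList. */
typedef struct
{
  GstBufferList *pending;       /* accumulated so far; init with gst_buffer_list_new() */
  GstClockTime accum_duration;  /* running duration of pending buffers */
  gsize accum_bytes;            /* running size of pending buffers */
  GstClockTime max_duration;    /* configured threshold in nanoseconds */
  gsize max_bytes;              /* configured threshold in bytes */
} Accumulator;

static GstFlowReturn
accumulate_and_maybe_push (Accumulator * acc, GstPad * srcpad, GstBuffer * buf)
{
  acc->accum_bytes += gst_buffer_get_size (buf);
  if (GST_BUFFER_DURATION_IS_VALID (buf))
    acc->accum_duration += GST_BUFFER_DURATION (buf);

  /* the list takes ownership of buf */
  gst_buffer_list_add (acc->pending, buf);

  if (acc->accum_duration >= acc->max_duration || acc->accum_bytes >= acc->max_bytes) {
    GstBufferList *out = acc->pending;

    acc->pending = gst_buffer_list_new ();
    acc->accum_duration = 0;
    acc->accum_bytes = 0;
    return gst_pad_push_list (srcpad, out);
  }

  return GST_FLOW_OK;
}

For a live pipeline, max_duration would also have to be reported as added latency in the LATENCY query, and a timer would be needed so a partially filled list still gets pushed when input pauses.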
Comment 10 Jose Antonio Santos Cadenas 2015-05-14 06:58:31 UTC
But if there is only one buffer in the queue, that means the media is flowing on time and we do not need to speed things up nor increase latency. We only want this to kick in when latency is already there because the CPU cannot handle the buffers on time.
Comment 11 Sebastian Dröge (slomo) 2015-05-14 07:08:05 UTC
The reasons for multiple buffers in the queue are not necessarily that the data flow is too slow.
Comment 12 Jose Antonio Santos Cadenas 2015-05-14 07:12:29 UTC
Yes, in the general case it is not, but in our case (unsynchronized pipelines), when we are queuing it is because something is going slower than it should.
Comment 13 Sebastian Dröge (slomo) 2015-05-14 07:27:46 UTC
No, even in your case it would happen a lot because of the scheduler and not because something is actually too slow.


Anyway, do you have some numbers about how much this improves anything for your use cases?
Comment 14 Jose Antonio Santos Cadenas 2015-05-14 07:41:15 UTC
No numbers for this case yet, but it is something we want to measure. What we have is the perception that audio is no longer being lost.
Comment 15 Sebastian Dröge (slomo) 2015-05-14 08:14:34 UTC
That's interesting, but my feeling is that your patch only covers up another problem elsewhere then. We should look into that a bit more closely :)
Comment 16 Miguel París Díaz 2015-05-18 14:58:16 UTC
Hello Sebastian and Tim,
an easy way to measure the improvement is to compare the total execution time of each case:

1) With generate-buffer-list=true
$ gst-launch-1.0 fakesrc num-buffers=1000000 ! queue generate-buffer-list=true ! tee name=t t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false
Setting pipeline to PAUSED ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:01.074721943
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

2) With generate-buffer-list=false
$ gst-launch-1.0 fakesrc num-buffers=1000000 ! queue generate-buffer-list=false ! tee name=t t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false
Setting pipeline to PAUSED ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:03.213017035
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
Comment 17 Tim-Philipp Müller 2015-05-18 15:13:56 UTC
Yes, but it seems like a bit of a silly setup / test scenario :)

I'm not disputing that it makes a difference performance-wise; I would like to understand a real-world use case where this is the only and best solution available.
Comment 18 Miguel París Díaz 2015-05-20 08:55:50 UTC
Created attachment 303646 [details] [review]
queue: Add property to allow pushing all queued buffers together
Comment 19 Miguel París Díaz 2015-05-20 08:56:43 UTC
Review of attachment 300285 [details] [review]:

Replaced by the newly attached patch.
Comment 20 Miguel París Díaz 2015-05-20 09:38:12 UTC
Review of attachment 303646 [details] [review]:

Does not apply.
Comment 21 Jose Antonio Santos Cadenas 2015-05-20 09:53:01 UTC
Created attachment 303648 [details] [review]
queue: Add property to allow pushing all queued buffers together

New version fixing the issues that Miguel discovered
Comment 22 Jose Antonio Santos Cadenas 2015-06-24 14:14:23 UTC
Created attachment 306005 [details] [review]
queue: Add property to allow pushing all queued buffers
Comment 23 Miguel París Díaz 2016-01-28 10:37:17 UTC
Created attachment 319910 [details] [review]
queue: Add property to allow pushing all queued buffers

Applies on top of gst 1.7.1.
Comment 24 Jose Antonio Santos Cadenas 2016-01-28 11:59:05 UTC
Created attachment 319920 [details] [review]
queue: Add property to allow pushing all queued buffers together
Comment 25 Sebastian Dröge (slomo) 2016-02-16 11:46:28 UTC
This also has another problem, by the way. It would mean that the queue always goes from full to completely empty immediately, basically moving "buffering" to downstream elements while additionally being able to buffer more data again. It could increase the memory usage of the overall pipeline; in the worst case, a single queue could "buffer" twice as much as the max-size-* properties allow.
Comment 26 Jose Antonio Santos Cadenas 2016-02-16 12:34:27 UTC
In our example this is not happening; pushing all the buffers together drains the pipeline correctly. Note that nicesink can handle all the buffers at once.

We have done performance tests showing that this change increases efficiency: if the queue does not have space for new buffers, the thread pushing into it is blocked, and this causes problems for the thread that is reading packets from the network, leading to packet loss because packets are dropped by the kernel.

We are already using this patch in our fork, and our experience is that using this flag increases the capacity of the pipeline. If you think that the patch is going to create problems, we can keep maintaining it in our fork, no worries :)
Comment 27 Sebastian Dröge (slomo) 2016-02-17 14:26:05 UTC
Yeah, it does not happen in your scenario; I'm just recording it here, for posterity, as another reason why this should not be the default behaviour if this patch is merged :)


But you're basically agreeing with my point: "using this flag increases the capacity of the pipeline".
Comment 28 Jose Antonio Santos Cadenas 2016-02-17 14:32:15 UTC
By capacity I mean that it can process more data, and since our pipeline is not synchronized, that is exactly what we want.
Comment 29 GStreamer system administrator 2018-11-03 12:26:20 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity.

You can subscribe and participate further in the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gstreamer/issues/101.