GNOME Bugzilla – Bug 746524
queue: Add property to allow pushing all queued buffers together
Last modified: 2018-11-03 12:26:20 UTC
Created attachment 299946 [details] [review] queue: Add property to allow pushing all queued buffers together

The attached patch adds a property that allows queue to push a buffer list containing all the buffers currently in the queue. If there are other kinds of data in the queue, ordering is preserved: the buffer list only contains buffers up to the next event or query.
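Roughly, the idea looks like this (a minimal sketch, not the code from the attached patch; it assumes the queued items are reachable as a plain GQueue of GstMiniObject pointers):

#include <gst/gst.h>

/* Sketch only: collect consecutive buffers from the head of the queue into a
 * GstBufferList, stopping at the first item that is not a buffer (an event or
 * a query), so ordering with serialized items is preserved. */
static GstBufferList *
drain_buffers_to_list (GQueue * items)
{
  GstBufferList *list = gst_buffer_list_new ();
  GstMiniObject *obj;

  while ((obj = g_queue_peek_head (items)) != NULL) {
    if (!GST_IS_BUFFER (obj))
      break;                    /* next item is an event/query: stop here */

    g_queue_pop_head (items);
    gst_buffer_list_add (list, GST_BUFFER_CAST (obj));
  }

  /* The caller would then push the whole batch with gst_pad_push_list(). */
  return list;
}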
Review of attachment 299946 [details] [review]:

Generally looks good to me, but I think the impact of this will not be too big. It would only cause different behaviour if the queue is blocked long enough to accumulate more than one buffer.

::: plugins/elements/gstqueue.c
@@ +130,3 @@
 #define DEFAULT_MAX_SIZE_TIME GST_SECOND /* 1 second */
+#define DEFAULT_GENERATE_BUFFER_LIST FALSE

You should initialize the struct field to that in gst_queue_init().
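For reference, the review comment maps to something like this (fragments in the style of gstqueue.c; PROP_GENERATE_BUFFER_LIST and the generate_buffer_list field name are assumptions, not necessarily what the patch uses):

#define DEFAULT_GENERATE_BUFFER_LIST  FALSE

/* in gst_queue_class_init(): install the property with the same default */
g_object_class_install_property (gobject_class, PROP_GENERATE_BUFFER_LIST,
    g_param_spec_boolean ("generate-buffer-list", "Generate buffer list",
        "Push all queued buffers downstream together as a single GstBufferList",
        DEFAULT_GENERATE_BUFFER_LIST,
        G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));

/* in gst_queue_init(): mirror the default in the instance struct, as
 * requested in the review */
queue->generate_buffer_list = DEFAULT_GENERATE_BUFFER_LIST;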
I think that in some circumstances this could be important. I have found cases where we had a long delay in audio (which was buffered); if all the buffers can be pushed together and the downstream elements can handle them all at once (as multiudpsink does), this can have a performance impact, since the chain functions do not have to be called for every single buffer.
Created attachment 300285 [details] [review] queue: Add property to allow pushing all queued buffers together
Hi, any new comments on this?
Sebastian, you said that this patch looked OK; it would be great if we could get it accepted.
I'd like to have a second opinion on this :)
I don't know, the whole thing smells a bit weird to me, what's the exact use case for this? If you have a scenario where the individual buffer pushes might be a performance problem, then you should tackle that elsewhere in the pipeline and make that use buffer lists already. I understand the latency-vs-performance trade-off that involves of course, but what you're trying to do here feels a bit desperate somehow :)
The use case for this is a pipeline with audio and video coming from the same source and demuxed later, where both streams are sent to many sinks using a tee. Because the audio buffers are more frequent, they start to pile up in the queue: the time consumed by the push functions becomes a bottleneck and the buffers cannot be processed fast enough. Since buffers are already being queued, latency is not a problem. As Tim says, this is only useful if the downstream elements also support buffer lists.
And it's also only useful if the queue actually has more than a single buffer at the time of pushing. I would be interested in seeing some numbers of how much this improves something in a real use case. I think what would be more useful is something on queue that allows it to actively buffer up to N nanoseconds/bytes of data and then push that downstream in one buffer list. That way you are guaranteed to get data downstream in that chunksize and don't depend on arbitrary decisions by the scheduler and other uncontrollable events. And it would deterministically introduce latency instead of something completely random. (All this would have to take the configured latency and the latency query into account though, for live pipelines... and especially have a timeout when it stops waiting for further data)
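To make that concrete, here is a rough sketch of just the accumulation step (all names are hypothetical, and it ignores the latency query, live-pipeline handling and the timeout mentioned above):

#include <gst/gst.h>

/* Sketch: queue incoming buffers until roughly `threshold` nanoseconds of
 * data has been collected, then push the whole batch downstream as a single
 * GstBufferList. The wait is what introduces the deterministic latency. */
static GstFlowReturn
collect_and_push (GstPad * srcpad, GQueue * pending,
    GstClockTime threshold, GstClockTime * accumulated, GstBuffer * buf)
{
  GstBufferList *list;
  GstClockTime dur = GST_BUFFER_DURATION (buf);

  g_queue_push_tail (pending, buf);
  if (GST_CLOCK_TIME_IS_VALID (dur))
    *accumulated += dur;

  /* Not enough data yet: keep accumulating. */
  if (*accumulated < threshold)
    return GST_FLOW_OK;

  /* Threshold reached: move everything into one buffer list and push it
   * downstream in a single call. */
  list = gst_buffer_list_new ();
  while (!g_queue_is_empty (pending))
    gst_buffer_list_add (list, GST_BUFFER_CAST (g_queue_pop_head (pending)));

  *accumulated = 0;
  return gst_pad_push_list (srcpad, list);
}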
But if there is only one buffer in the queue, that means the media is flowing on time and we do not need to speed things up or increase latency. We only want this to kick in when the latency is already there because the CPU cannot handle the buffers on time.
The reasons for multiple buffers in the queue are not necessarily that the data flow is too slow.
Yes, in the general case no, but in our case (non-synchronized pipelines), when buffers are piling up in the queue it is because something is going slower than it should.
No, even in your case it would happen a lot because of the scheduler and not because something is actually too slow. Anyway, do you have some numbers about how much this improves anything for your use cases?
No numbers for this case yet, but it is something we want to measure. What we have is the perception that audio is no longer being lost.
That's interesting, but my feeling is that your patch only covers up another problem elsewhere then. Should look into that a bit closer :)
Hello Sebastian and Tim, an easy way to measure the improvement (see the total execution time in each case):

1) With generate-buffer-list=true

$ gst-launch-1.0 fakesrc num-buffers=1000000 ! queue generate-buffer-list=true ! tee name=t t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false
Setting pipeline to PAUSED ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:01.074721943
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

2) With generate-buffer-list=false

$ gst-launch-1.0 fakesrc num-buffers=1000000 ! queue generate-buffer-list=false ! tee name=t t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false t. ! multiudpsink sync=false async=false
Setting pipeline to PAUSED ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:03.213017035
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
Yes, but it seems like a bit of a silly setup / test scenario :) I'm not disputing that it makes a difference performance-wise; I would like to understand a real-world use case where this is the only and best solution available.
Created attachment 303646 [details] [review] queue: Add property to allow pushing all queued buffers together
Review of attachment 300285 [details] [review]: Replaced by the newly attached patch.
Review of attachment 303646 [details] [review]: Does not apply.
Created attachment 303648 [details] [review] queue: Add property to allow pushing all queued buffers together New version fixing the issues that Miguel discovered
Created attachment 306005 [details] [review] queue: Add property to allow pushing all queued buffers
Created attachment 319910 [details] [review] queue: Add property to allow pushing all queued buffers Applies on top of gst 1.7.1
Created attachment 319920 [details] [review] queue: Add property to allow pushing all queued buffers together
This also has another problem btw. It would mean that the queue always goes from full to completely empty immediately, basically moving "buffering" to downstream elements while additionally being able to buffer more data again. It could increase memory usage of the overall pipeline, in the worst case you could have a single queue "buffer" twice as much as the max-size-* properties.
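For example, with max-size-buffers=100 the queue could hand a list of 100 buffers downstream and immediately start accepting another 100 buffers while downstream is still working through the list, so roughly twice the configured limit of data can be alive at once.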
In our example this is not happening; pushing all the buffers together drains the pipeline correctly. Keep in mind that nicesink can handle all the buffers at once. We have done performance tests showing that this change increases efficiency: if the queue does not have space for new buffers, the thread pushing into it is blocked, and that causes problems for the thread that is reading packets from the network, leading to packet loss because packets get dropped by the kernel. We are already using this patch in our fork, and our experience is that using this flag increases the capacity of the pipeline. If you think the patch is going to create problems we can keep maintaining it in our fork, no worries :)
Yeah, it does not happen in your scenario; I'm just listing this here for posterity, as another reason why this is not the default behaviour, in case this patch is merged :) But you're basically agreeing with my point: "using this flag increases the capacity of the pipeline".
By capacity I mean that it can process more data, and since our pipeline is not synchronized, that is exactly what we want.
-- GitLab Migration Automatic Message -- This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gstreamer/issues/101.