GNOME Bugzilla – Bug 779131
waylandsink: disable last sample sink feature
Last modified: 2017-02-24 14:26:53 UTC
Created attachment 346564 [details] [review]
waylandsink-disable-last-sample-sink-feature

The "last-sample" basesink feature stores a reference to the last buffer sent for rendering. As a consequence, one buffer is always locked, reducing the number of pool buffers actually available. Disabling this feature improves the buffer pool capacity and prevents some deadlock race conditions.
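For context, a minimal sketch of what disabling the feature amounts to from the sink's side. This is not the attached patch itself: the placement in the element's _init() is an assumption; only the "enable-last-sample" GstBaseSink property is standard API.

static void
gst_wayland_sink_init (GstWaylandSink * sink)
{
  /* ... existing initialisation ... */

  /* Sketch: do not keep a reference to the last rendered buffer.
   * This frees one slot in the pool, but it also disables the
   * basesink last-sample / screenshot support. */
  g_object_set (G_OBJECT (sink), "enable-last-sample", FALSE, NULL);
}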
Wonder if this has undesirable side-effects though, like affecting playbin's screenshot functionality for example.
Oh yes, it probably prevents screenshots. The point is that waylandsink does not really have full control over buffer locking, since this is delegated to the Wayland compositor. From what I have observed, Weston keeps a reference to a buffer even after it is no longer displayed (it is released roughly one frame later), so the pool is always on the verge of having no available buffers. In some cases I observed a deadlock: all buffers were held by Weston or the decoder, and nothing would free any of them.

The other point is that when DMABuf buffers are used, waylandsink does not propose any pool; it is up to the upstream element to provide one. But that upstream element is not aware of the sink/Weston buffer locking, so it is hard to set the min_buffers value correctly. Disabling last-sample solves part of the problem.

But I agree with you: this has a drawback (which I should have mentioned explicitly in the initial comment).
Review of attachment 346564 [details] [review]:

I'll nack this one, sorry. We use "pipeline stall" for what you described as a deadlock here. The right way is to add 1 to the number of requested buffers in propose_allocation().
Do you mean that I should raise a new ticket to increase min_buffers from 2 to 3 in gst_wayland_sink_propose_allocation() to avoid the pipeline stall?

snip:

static gboolean
gst_wayland_sink_propose_allocation (GstBaseSink * bsink, GstQuery * query)
{
  ...
  if (pool) {
    gst_query_add_allocation_pool (query, pool, sink->video_info.size, 2, 0);
    g_object_unref (pool);
  }
  ...
You can re-purpose this one really. What you came up with is a solution to your problem; I'm just saying it isn't the right solution. last-sample should not stall your pipeline, and that's what you should fix. enable-last-sample is ON on every single video sink by default, and it would be very inconsistent not to follow this. Obviously, if you don't use screenshots in your application, turn it off.

Now, 2 should be enough, but slow, since it means a buffer cannot be filled while another is being rendered. So 3 is a much better default (exception: if you add support for interlacing at some point, you may need to increase this a bit). The buffer purposes would be:

1. Upstream being filled
2. Being rendered by the compositor
3. Stuck in last-sample

Arguably, this extra one could have been proposed by upstream, but I think in practice no one does that, so it is better to account for at least 1 buffer being out. In many, many cases you will not notice, as most buffer pools can grow on demand.
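For illustration, a minimal sketch of the change suggested above: bumping min_buffers from 2 to 3 in gst_wayland_sink_propose_allocation() so one buffer can be filled upstream, one held by the compositor, and one kept in last-sample. The surrounding code is abbreviated and its exact shape is an assumption; only gst_query_add_allocation_pool() and the values taken from the snippet above are real.

static gboolean
gst_wayland_sink_propose_allocation (GstBaseSink * bsink, GstQuery * query)
{
  GstWaylandSink *sink = GST_WAYLAND_SINK (bsink);
  GstBufferPool *pool = NULL;

  /* ... pool setup elided, as in the snippet above ... */

  if (pool) {
    /* min_buffers = 3: one being filled upstream, one being rendered
     * by the compositor, one stuck in last-sample.  max_buffers = 0
     * leaves the pool unbounded so it can still grow on demand. */
    gst_query_add_allocation_pool (query, pool, sink->video_info.size, 3, 0);
    g_object_unref (pool);
  }

  /* ... */
  return TRUE;
}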
OK, thank you for the clarification. Since this is definitely not the right way to solve the issue, I am closing this ticket.