GNOME Bugzilla – Bug 792403
compositor not producing buffers unless a buffer has been received despite start-time-selection=zero
Last modified: 2018-01-11 15:13:59 UTC
I'm trying to feed compositor with data from udpsrc; I am expecting compositor to start producing buffers from 0 (since start-time-selection is set to zero), but it doesn't.

GST_DEBUG=agg*:6 gst-launch-1.0 compositor name=comp ! timeoverlay ! xvimagesink udpsrc port=1234 ! "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:0, depth=(string)8, width=(string)320, height=(string)240, colorimetry=(string)BT601-5, payload=(int)96, ssrc=(uint)2807559578, timestamp-offset=(uint)1965208493, seqnum-offset=(uint)21164" ! rtpvrawdepay ! videorate ! queue ! comp.

The display sink will only display something once data is actually coming in, e.g. from:

gst-launch-1.0 videotestsrc is-live=true pattern=18 ! rtpvrawpay ! udpsink host=127.0.0.1 port=1234
As far as I can tell, there are two issues here. First, aggregator needs to query_latency when a pad is added or removed, in order to update "peer_latency_live" if needed. Second, videoaggregator expects at least one configured pad before deciding on an output format, even when the output caps are fully specified; it could be improved to not require this when an output format has indeed been fully specified (see gst_video_aggregator_default_update_src_caps).
That is basically asking for aggregator to work like a source, without any input; I'm not sure a solution to that exists.
Well, as a workaround I can:
* kill the pipeline if udpsrc times out
* connect a 1 px videotestsrc to the compositor

But in any case, I find that start-time-selection=zero slightly implies it can work from ts 0.
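The second workaround could look roughly like the following sketch (untested against this exact setup; the 1x1 caps and pattern are my assumptions, and the original udpsrc branch would be attached to comp. the same way):

```shell
# Sketch of the "tiny dummy source" workaround: a 1x1 live videotestsrc keeps
# the compositor producing output even while no network data is arriving.
gst-launch-1.0 \
  compositor name=comp ! timeoverlay ! xvimagesink \
  videotestsrc is-live=true pattern=black \
    ! video/x-raw,width=1,height=1,framerate=30/1 \
    ! comp.
```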
What it means is that, once it starts producing output buffers, their timestamps will start from zero; it does not mean output starts at time 0.
Considering a use case like OBS (Open Broadcaster Software): you start up the software, which shows an empty composition stage that you can already record, and onto which you can attach sources dynamically. This is why I believe it would still be useful for compositor, in addition to making it a bit easier to use (since the pipeline wouldn't block when starting up, but show the background instead). If this is not possible and not intended for compositor/aggregator, feel free to close this ticket; for my use case, I will just stop my pipeline on a udp timeout event.
(In reply to Florent Thiéry from comment #5)
> Considering a use case like OBS (open broadcaster software), you startup the
> software which shows an empty composition stage which you can already
> record, onto which you can attach sources dynamically.
>
> This is why i believe it would still be useful for compositor, in addition
> to making it a bit easier to use (since the pipeline wouldn't block when
> starting up, but show the background instead).

How is that different from adding a pad with a low z-order, fed by a source producing that background?
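That suggestion could be sketched like this (my assumption on the exact wiring; pad properties are set with gst-launch's child-proxy syntax, `sink_N::zorder`, which recent gst-launch versions support):

```shell
# Sketch: a dedicated background source on the lowest z-order pad (sink_0);
# the "real" input composites above it on sink_1.
gst-launch-1.0 \
  compositor name=comp sink_0::zorder=0 sink_1::zorder=1 \
    ! videoconvert ! autovideosink \
  videotestsrc is-live=true pattern=black ! comp.sink_0 \
  videotestsrc is-live=true pattern=ball  ! comp.sink_1
```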
The difference comes down to the added processing cost of videotestsrc vs. compositor (which is generating the background anyway). Apparently compositor isn't clever enough yet to skip drawing the background when it's fully covered:

/* TODO: If the frames to be composited completely obscure the background,
 * don't bother drawing the background at all. */

https://github.com/GStreamer/gst-plugins-bad/blob/master/gst/compositor/compositor.c#L1129

But that's probably negligible and a perfectly valid workaround; I guess we should close this issue then.
Ok. By the way, I'm quite in favour of adding a performant variant or mode to videotestsrc. Right now it builds new buffers from scratch, so we have read-write output buffers, and we don't cache the fixed parts. For completely fixed output, we could also have a mode pushing non-writable images (like imagefreeze does).
For benchmarks I tend to use filesrc location=/dev/zero ! rawvideoparse, which is way faster than videotestsrc. Having a performant variant of videotestsrc would indeed be better (as it is a full-featured live source, compared to my hack).
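Spelled out, that hack could look roughly like this (the resolution, format and framerate values are placeholders; rawvideoparse needs them fully specified since a plain filesrc negotiates no video caps):

```shell
# /dev/zero bytes reinterpreted as raw I420 frames, pushed as fast as
# possible into a fakesink for throughput benchmarking.
gst-launch-1.0 filesrc location=/dev/zero \
  ! rawvideoparse format=i420 width=320 height=240 framerate=30/1 \
  ! fakesink sync=false
```

Note that all-zero I420 data does not render as black; if the content matters, videotestsrc is still the way to go.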
You could also just produce buffers at 1 fps or so for this purpose, no? :)
although I guess that would affect the reported latency then, so bleh.
(In reply to Tim-Philipp Müller from comment #10)
> you could also just produce buffers at 1fps or so for this purpose, no ? :)

Or a 1x1 px videotestsrc 30 fps transparent AYUV source :p
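For the record, such a source could be sketched as below (my assumption on how to obtain transparency: pattern=solid-color with an ARGB foreground-color whose leading alpha byte is 0x00, so the 1x1 AYUV frame composites as fully transparent):

```shell
# 1x1, 30 fps, fully transparent AYUV source feeding the compositor, leaving
# only the compositor's own background visible.
gst-launch-1.0 \
  compositor name=comp ! videoconvert ! autovideosink \
  videotestsrc is-live=true pattern=solid-color foreground-color=0x00000000 \
    ! video/x-raw,format=AYUV,width=1,height=1,framerate=30/1 \
    ! comp.
```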