GNOME Bugzilla – Bug 673794
udpsink: implement timestamp smoothing / sender throttling (for rtpvrawpay and gigabit ethernet)
Last modified: 2018-11-03 14:46:08 UTC
Created attachment 211668 [details] good

There is a problem with rtpvrawdepay and gigabit ethernet. Everything is fine if we use Fast Ethernet mode:

sender:
kwisp@klochkov ~ $ LANG=en.en GST_PLUGIN_PATH=/usr/local/lib/gstreamer-0.10/ GST_PLUGIN_SYSTEM_PATH=/usr/lib/gstreamer-0.10/ gst-launch-0.10 -v videotestsrc ! video/x-raw-yuv,format=\(fourcc\)I420,width=320,height=240 ! rtpvrawpay ! udpsink host="192.168.136.130" port=5000
Setting pipeline to PAUSED ...
/GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)I420, width=(int)320, height=(int)240, color-matrix=(string)sdtv, chroma-site=(string)mpeg2, framerate=(fraction)30/1
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)I420, width=(int)320, height=(int)240, color-matrix=(string)sdtv, chroma-site=(string)mpeg2, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw-yuv, format=(fourcc)I420, width=(int)320, height=(int)240, color-matrix=(string)sdtv, chroma-site=(string)mpeg2, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstRtpVRawPay:rtpvrawpay0.GstPad:src: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:0, depth=(string)8, width=(string)320, height=(string)240, colorimetry=(string)BT601-5, payload=(int)96, ssrc=(uint)2837818414, clock-base=(uint)2969841640, seqnum-base=(uint)10940
/GstPipeline:pipeline0/GstRtpVRawPay:rtpvrawpay0.GstPad:sink: caps = video/x-raw-yuv, format=(fourcc)I420, width=(int)320, height=(int)240, color-matrix=(string)sdtv, chroma-site=(string)mpeg2, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstRtpVRawPay:rtpvrawpay0: timestamp = 2969841640
/GstPipeline:pipeline0/GstRtpVRawPay:rtpvrawpay0: seqnum = 10940
/GstPipeline:pipeline0/GstUDPSink:udpsink0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:0, depth=(string)8, width=(string)320, height=(string)240, colorimetry=(string)BT601-5, payload=(int)96, ssrc=(uint)2837818414, clock-base=(uint)2969841640, seqnum-base=(uint)10940
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock

receiver:
unit29@calligraphy2:~$ LANG=en.en gst-launch -v udpsrc uri="udp://192.168.136.130:5000" caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:0, depth=(string)8, width=(string)320, height=(string)240, colorimetry=(string)BT601-5, payload=(int)96, ssrc=(uint)3179834474, clock-base=(uint)2843576415, seqnum-base=(uint)64658" ! rtpvrawdepay ! xvimagesink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstRtpVRawDepay:rtpvrawdepay0.GstPad:src: caps = video/x-raw-yuv, width=(int)320, height=(int)240, format=(fourcc)I420, framerate=(fraction)0/1
/GstPipeline:pipeline0/GstRtpVRawDepay:rtpvrawdepay0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:0, depth=(string)8, width=(string)320, height=(string)240, colorimetry=(string)BT601-5, payload=(int)96, ssrc=(uint)3179834474, clock-base=(uint)2843576415, seqnum-base=(uint)64658
/GstPipeline:pipeline0/GstXvImageSink:xvimagesink0.GstPad:sink: caps = video/x-raw-yuv, width=(int)320, height=(int)240, format=(fourcc)I420, framerate=(fraction)0/1

<good>

If we use Gigabit ethernet mode:

sender:
LANG=en.us gst-launch -v videotestsrc ! video/x-raw-yuv,format=\(fourcc\)I420,width=320,height=240 ! rtpvrawpay ! udpsink host="192.168.192.2" port=5000
Setting pipeline to PAUSED ...
/GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)I420, width=(int)320, height=(int)240, color-matrix=(string)sdtv, chroma-site=(string)mpeg2, framerate=(fraction)30/1
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)I420, width=(int)320, height=(int)240, color-matrix=(string)sdtv, chroma-site=(string)mpeg2, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw-yuv, format=(fourcc)I420, width=(int)320, height=(int)240, color-matrix=(string)sdtv, chroma-site=(string)mpeg2, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstRtpVRawPay:rtpvrawpay0.GstPad:src: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:0, depth=(string)8, width=(string)320, height=(string)240, colorimetry=(string)BT601-5, payload=(int)96, ssrc=(uint)2151192507, clock-base=(uint)1835557868, seqnum-base=(uint)14436
/GstPipeline:pipeline0/GstRtpVRawPay:rtpvrawpay0.GstPad:sink: caps = video/x-raw-yuv, format=(fourcc)I420, width=(int)320, height=(int)240, color-matrix=(string)sdtv, chroma-site=(string)mpeg2, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstRtpVRawPay:rtpvrawpay0: timestamp = 1835557868
/GstPipeline:pipeline0/GstRtpVRawPay:rtpvrawpay0: seqnum = 14436
/GstPipeline:pipeline0/GstUDPSink:udpsink0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:0, depth=(string)8, width=(string)320, height=(string)240, colorimetry=(string)BT601-5, payload=(int)96, ssrc=(uint)2151192507, clock-base=(uint)1835557868, seqnum-base=(uint)14436
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock

receiver2:
unit29@calligraphy2:~$ LANG=en.en gst-launch -v udpsrc uri="udp://192.168.192.2:5000" caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:0, depth=(string)8, width=(string)320, height=(string)240, colorimetry=(string)BT601-5, payload=(int)96, ssrc=(uint)2151192507, clock-base=(uint)1835557868, seqnum-base=(uint)14436" ! rtpvrawdepay ! ffmpegcolorspace ! ximagesink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstRtpVRawDepay:rtpvrawdepay0.GstPad:src: caps = video/x-raw-yuv, width=(int)320, height=(int)240, format=(fourcc)I420, framerate=(fraction)0/1
/GstPipeline:pipeline0/GstRtpVRawDepay:rtpvrawdepay0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)RAW, sampling=(string)YCbCr-4:2:0, depth=(string)8, width=(string)320, height=(string)240, colorimetry=(string)BT601-5, payload=(int)96, ssrc=(uint)2151192507, clock-base=(uint)1835557868, seqnum-base=(uint)14436
/GstPipeline:pipeline0/GstFFMpegCsp:ffmpegcsp0.GstPad:src: caps = video/x-raw-rgb, bpp=(int)16, depth=(int)16, endianness=(int)1234, red_mask=(int)63488, green_mask=(int)2016, blue_mask=(int)31, width=(int)320, height=(int)240, framerate=(fraction)0/1, pixel-aspect-ratio=(fraction)1/1
/GstPipeline:pipeline0/GstFFMpegCsp:ffmpegcsp0.GstPad:sink: caps = video/x-raw-yuv, width=(int)320, height=(int)240, format=(fourcc)I420, framerate=(fraction)0/1
/GstPipeline:pipeline0/GstXImageSink:ximagesink0.GstPad:sink: caps = video/x-raw-rgb, bpp=(int)16, depth=(int)16, endianness=(int)1234, red_mask=(int)63488, green_mask=(int)2016, blue_mask=(int)31, width=(int)320, height=(int)240, framerate=(fraction)0/1, pixel-aspect-ratio=(fraction)1/1

<bad>

We need to show video at a higher resolution than 320x240, namely 1024x768. <very bad>

The receiver side is an Intel Atom 1.6 GHz. The gigabit ethernet load peaks at about 6 Mb/sec and the maximum CPU load is about 30%. gstrtpjitterbuffer does not save us either.

mailto: kwispost@gmail.com
Created attachment 211669 [details] bad
Created attachment 211670 [details] very bad
checked for version >=0.10.30
Try to set the kernel buffer size to something bigger than the default. You can use the buffer-size property on udpsrc and udpsink; use something like 0x80000 for high-bitrate streams.
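For reference, a sketch of what that could look like with the pipelines from the original report (0x80000 = 524288 bytes; the kernel may still clamp the request to its rmem_max/wmem_max limits):

  # sender: request a larger kernel send buffer on udpsink
  gst-launch-0.10 -v videotestsrc ! video/x-raw-yuv,format=\(fourcc\)I420,width=320,height=240 \
      ! rtpvrawpay ! udpsink host=192.168.192.2 port=5000 buffer-size=524288

  # receiver: request a larger kernel receive buffer on udpsrc
  # (caps string as printed by the sender's -v output above)
  gst-launch-0.10 -v udpsrc uri=udp://192.168.192.2:5000 buffer-size=524288 \
      caps="application/x-rtp, ..." ! rtpvrawdepay ! xvimagesink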
Unfortunately, it does not completely solve the problem :( See the attachment 'with-maximum-buffer-size'.
Created attachment 211899 [details] with-maximum-buffer-size
As I said, setting the udpsrc 'buffer-size' property to its maximum value does not completely solve the problem. But I also set the system network buffer sizes via sysctl -w (net.rmem_max, wmem_max, rmem_default, wmem_default) to 1524288. (That is an abnormally large network buffer size, to my mind.) With that it works "normally", although small black lines still appear from time to time (roughly once every 2-3 seconds)... I want smoother video. CPU load (Intel Atom 1.6 GHz) with 1024x768 video at 25 fps is about 90% :( As I wrote, on fast ethernet the maximum CPU load (Intel Atom 1.6 GHz) with 1024x768 video at 12 fps is 25-30%.

It is very interesting: if we set --gst-debug=basertpdepayloader:5 on the receiver side, this problem (see the 'bad' and 'very bad' attachments) occurs even on an Intel i7 based machine, and it goes away if we set the abnormally large network buffer size of 1524288. Without --gst-debug=basertpdepayloader:5, the Intel i7 machine works fine with a 131071-byte network buffer. I think there is some bug in udpsrc or rtpvrawdepay at the rates that 1 Gb ethernet allows.
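For completeness, the sysctl keys mentioned above live under net.core, so what was run is presumably something like this (values copied from the comment, executed as root):

  # raise both the maximum and the default socket buffer sizes to ~1.5 MB
  sysctl -w net.core.rmem_max=1524288
  sysctl -w net.core.wmem_max=1524288
  sysctl -w net.core.rmem_default=1524288
  sysctl -w net.core.wmem_default=1524288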
(In reply to comment #7)
> It is very interesting: if we set --gst-debug=basertpdepayloader:5 on the
> receiver side, this problem (see the 'bad' and 'very bad' attachments) occurs
> even on an Intel i7 based machine, and it goes away if we set the abnormally
> large network buffer size of 1524288. Without --gst-debug=basertpdepayloader:5,
> the Intel i7 machine works fine with a 131071-byte network buffer.

Enabling debug output requires much more CPU. Your desktop will likely also start dropping packets.

> I think there is some bug in udpsrc or rtpvrawdepay at the rates that 1 Gb
> ethernet allows.

Maybe... to me it looks like packet loss.
Do you know any way to get smoother video? What do you think about a network buffer size of 1524288? Is that normal?
(In reply to comment #9)
> Do you know any way to get smoother video?

What do you mean? Here are some hints: increase the kernel buffers, add a jitterbuffer on the receiver side, make the receiver thread a real-time thread, and try to send data that doesn't need colorspace conversion. Sending raw video is going to be heavy; it requires lots of memory bandwidth. You might have better results using some sort of compression. Also try to measure what the performance problems are with tools like callgrind.

> What do you think about a network buffer size of 1524288? Is that normal?

The bigger it is, the less chance you have of losing a packet.
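A sketch of the receiver-side hints from this comment, combining a larger kernel buffer with a jitterbuffer (the latency value is an arbitrary example, and the caps string is the one printed by the sender's -v output earlier in this bug):

  # 0.10-style receiver: bigger socket buffer plus a jitterbuffer to absorb bursts
  gst-launch-0.10 -v udpsrc uri=udp://192.168.192.2:5000 buffer-size=524288 \
      caps="application/x-rtp, ..." \
      ! gstrtpjitterbuffer latency=200 \
      ! rtpvrawdepay ! xvimagesink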
Can this be closed?
So this is NOTABUG, right? Please reopen if this is not the case.
I seem to remember an IRC discussion about this. Basically, the problem was that the max. udp buffer size could not be made big enough, so if you split a large raw video frame into multiple RTP/UDP packets to be sent at the same time, you'd basically be guaranteed to drop half of the data, because the buffer will fill up and we try to send the following packets immediately. Letting identity timestamp the packets before sending them out helped. I believe the conclusion was that it would make sense to implement some kind of throttling/smoothing in udpsink. Re-opening this for now. Wim: feel free to close again.
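For context, the identity workaround referred to above presumably looks something like this: identity's datarate property re-timestamps buffers at a given bytes-per-second rate, and a synchronising udpsink then paces the sends against the clock. The rate shown is an arbitrary example, not a value from the original discussion.

  # Hypothetical 0.10-style sender that spreads the packets of each frame out in
  # time instead of handing the whole frame to the kernel at once
  gst-launch-0.10 -v videotestsrc ! video/x-raw-yuv,format=\(fourcc\)I420,width=1024,height=768 \
      ! rtpvrawpay ! identity datarate=30000000 ! udpsink host=192.168.192.2 port=5000 sync=true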
We have this now, but I'm not sure how to make it work (i.e. how to transfer the bitrate info from the payloader to the sink - put it in a tag event?):

commit d8413cd0a2485af215c729f8109b1382f1ea6573
Author: Wim Taymans <wim.taymans@collabora.co.uk>
Date:   Fri Nov 9 16:50:50 2012 +0100

    basesink: add simple rate control

    Add a max-bitrate property that will slightly delay rendering of buffers
    if it would exceed the maximum defined bitrate.

    This can be used to do rate control on network sinks, for example.

    API: GstBaseSink::max-bitrate
    API: gst_base_sink_set_max_bitrate()
    API: gst_base_sink_get_max_bitrate()
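Since udpsink derives from GstBaseSink, usage would presumably look something like this on a GStreamer 1.x build containing that commit (max-bitrate is in bits per second; the value below is an arbitrary example, not a recommendation):

  # Cap the sender's output rate so a whole raw frame is not blasted out in one burst
  gst-launch-1.0 -v videotestsrc ! video/x-raw,format=I420,width=1024,height=768 \
      ! rtpvrawpay ! udpsink host=192.168.192.2 port=5000 sync=true max-bitrate=300000000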
(In reply to comment #14)
> We have this now, but I'm not sure how to make it work (i.e. how to transfer
> the bitrate info from the payloader to the sink - put it in a tag event?):

You don't need to transfer the bitrate, do you? You would configure the speed of your network as the max bitrate in the sink and that's it.
> You don't need to transfer the bitrate, do you? You would configure the speed
> of your network as the max bitrate in the sink and that's it.

I was hoping we could make this Just Work (tm) without the user or application having to set a property first.
(In reply to comment #16)
> > You don't need to transfer the bitrate, do you? You would configure the
> > speed of your network as the max bitrate in the sink and that's it.
>
> I was hoping we could make this Just Work (tm) without the user or application
> having to set a property first.

Are you thinking of using the bitrate of the stream?
> Are you thinking of using the bitrate of the stream?

That's what I was thinking, yes. I guess it might affect latency marginally though? Alternatively, would setting it to some non-zero value by default (e.g. 10 Mbps or 100 Mbps) be better than 0?
Why would you set the nominal bitrate of the stream? That's completely unrelated. I don't think you can have a generically useful value. Actually, outside of simple scenarios, I see this being mostly useful where you have some algorithm/mechanism to determine a network bitrate to enforce.
As I see it:

1. If we don't set a rate in this case, we are guaranteed to almost always drop data, and a lot of it, because the kernel buffers are limited in size and raw video frames are much larger than that size. That seems undesirable to me.
2. Ergo: we need to set some rate to make it work automagically and not always drop data.
3. Ideally, we'd set the max rate of the connection, but we can't really do or know that. We could default to *some* rate, but if our default is too high, we will send packets too fast and they get dropped even if the connection could theoretically handle it.
4. The nominal rate of the stream is the lowest rate at which no packet loss is bound to occur. If we use that rate and packets are still dropped, then it's simply because the connection can't handle it, and there is no scenario in which fewer packets are dropped. Hence it seems the optimal rate to avoid packet loss (a rough calculation of such a nominal rate is sketched below). However, there are latency considerations as well, of course.
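For a sense of scale, here is a rough calculation of the nominal bitrate of raw I420 video at the resolutions discussed in this bug (the formula is the standard one for I420; the resolutions and frame rates are simply the ones mentioned above):

  # I420 uses 1.5 bytes per pixel, so nominal bitrate = width * height * 1.5 * fps * 8 bits
  echo $(( 320 * 240 * 3 / 2 * 30 * 8 ))     # 27648000  -> ~28 Mbit/s, fits on fast ethernet
  echo $(( 1024 * 768 * 3 / 2 * 25 * 8 ))    # 235929600 -> ~236 Mbit/s, needs gigabit

A single 1024x768 I420 frame is about 1.2 MB, already larger than typical default kernel socket buffers, which is why an unpaced sender is bound to lose data (point 1 above).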
Hi,

Any updates on this thread?

regards
JasonP
(In reply to comment #21)
> Hi,
>
> Any updates on this thread?

No updates. It's fixed IMO; there is some discussion about configuring a better default value, but you should just configure a max bitrate on udpsink to get the best performance.

> regards
> JasonP
Wim, what do you think about what I wrote in comment #20?
I will investigate the max-bitrate potential solution, as I am bitten by this problem as well. I haven't seen a similar story here, so I'll share my setup to bring a new perspective. I am streaming mp4 video with RTP over multicast IP. The RTP sender has a 100 Mbps interface, but not every part of the LAN is 100 Mbps; some segments are only 10 Mbps. I think we see bursts that coincide with keyframe transmission. One would hope that with good Ethernet switches the bitrate smoothing would happen in hardware on the switches, but I guess we have not-so-good switches, and we experience heavy packet drops. I should add that the network is already well used; it is not dedicated to this particular stream. IMO it is not overused, but it doesn't take a very big spike to cause packet drops.
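A rough sketch of how the max-bitrate idea could apply to a setup like this, capping the sender at the slowest segment's rate; the container/codec, multicast address and file name are illustrative assumptions, not details taken from this report:

  # Hypothetical multicast sender capped at 10 Mbit/s so keyframe bursts are
  # smoothed at the source instead of being dropped on the 10 Mbps segments
  # (assumes H.264 video in an MP4 file)
  gst-launch-1.0 -v filesrc location=video.mp4 ! qtdemux ! h264parse ! rtph264pay \
      ! udpsink host=239.1.2.3 port=5000 auto-multicast=true sync=true max-bitrate=10000000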
*** Bug 749058 has been marked as a duplicate of this bug. ***
-- GitLab Migration Automatic Message -- This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/60.