GNOME Bugzilla – Bug 770150
multiudpsink: Add round-robin property
Last modified: 2018-11-03 15:10:55 UTC
Created attachment 333643 [details] [review]
multiudpsink: Add round-robin property

This new property allows changing the way lists of buffers are sent out. If enabled, instead of sending n buffers to each client at a time, it sends the first buffer to all clients before moving on to the next buffer. This avoids bursts of packets to each endpoint, which can otherwise get overwhelmed reading a large number of packets at once, eventually leading to dropped packets once its socket receive buffer fills up.
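For context, a minimal sketch of how the proposed property would be used from application code. The "round-robin" property name comes from the attached patch and is not part of a released multiudpsink, so treat it as an assumption; the client addresses are placeholders.

/* Minimal sketch, assuming the "round-robin" boolean property from the
 * attached patch is available on multiudpsink. */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *sink;

  gst_init (&argc, &argv);

  sink = gst_element_factory_make ("multiudpsink", NULL);
  if (sink == NULL)
    return 1;

  /* Interleave packets across clients instead of sending each client
   * its whole share of the buffer list in one burst. */
  g_object_set (sink, "round-robin", TRUE, NULL);

  /* Placeholder client list (host:port pairs). */
  g_object_set (sink, "clients",
      "192.168.1.10:5004,192.168.1.11:5004,192.168.1.12:5004", NULL);

  gst_object_unref (sink);
  return 0;
}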
I wonder if we shouldn't make this the only behaviour, it feels better in all cases. Tim? Slomo?
I think this makes sense in general. There may be a performance impact though: if we send packets to the same client in one go, we can just re-use the address from the previous packet. If we round-robin to all clients, we end up going through g_socket_address_to_native() for every single packet. I wonder if perhaps we should default to different behaviour in the payloader (whatever created these buffer lists) to send packets in multiple lists, like rtpvrawpay does. What was the format where you observed this?
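To illustrate the concern, here is a sketch contrasting the two iteration orders. It is not the actual gstmultiudpsink.c code; the UdpClient type and send_one() helper are hypothetical stand-ins. With per-client bursts the native address conversion happens once per client and list, while a naive round-robin loop repeats it for every packet unless the converted address is cached on the client.

/* Illustrative sketch only, not the actual gstmultiudpsink.c code. */
#include <gst/gst.h>
#include <gio/gio.h>

typedef struct {
  GSocketAddress *addr;   /* the client's GSocketAddress                */
  guint8 native[128];     /* native form, large enough for sockaddr_in6 */
} UdpClient;

static void
send_one (GSocket *sock, UdpClient *client, GstBuffer *buf)
{
  /* Sketch: the actual sendmsg()/g_socket_send_message() call with the
   * cached native address would go here. */
  (void) sock; (void) client; (void) buf;
}

/* Per-client order: convert each address once, then burst the whole
 * buffer list to that client before moving to the next one. */
static void
send_per_client (GSocket *sock, GList *clients, GstBuffer **bufs, guint n)
{
  GList *c;
  guint i;

  for (c = clients; c != NULL; c = c->next) {
    UdpClient *client = c->data;

    g_socket_address_to_native (client->addr, client->native,
        sizeof (client->native), NULL);
    for (i = 0; i < n; i++)
      send_one (sock, client, bufs[i]);
  }
}

/* Round-robin order: the first buffer goes to every client before the
 * second one is touched, so the conversion runs once per packet unless
 * the native address is cached per client. */
static void
send_round_robin (GSocket *sock, GList *clients, GstBuffer **bufs, guint n)
{
  GList *c;
  guint i;

  for (i = 0; i < n; i++) {
    for (c = clients; c != NULL; c = c->next) {
      UdpClient *client = c->data;

      g_socket_address_to_native (client->addr, client->native,
          sizeof (client->native), NULL);
      send_one (sock, client, bufs[i]);
    }
  }
}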
(In reply to Tim-Philipp Müller from comment #2)
> There may be a performance impact though: if we send packets to the same
> client in one go, we can just re-use the address from the previous packet.
> If we round-robin to all clients, we end up going through
> g_socket_address_to_native() for every single packet.

Maybe, but is it such a large impact? I haven't really observed any difference, and the boxes this runs on don't have much horsepower (more than a Raspberry Pi, but much less than any desktop).

> I wonder if perhaps we should default to different behaviour in the
> payloader (whatever created these buffer lists) to send packets in multiple
> lists, like rtpvrawpay does. What was the format where you observed this?

What we were seeing is that when streaming FLAC audio over RTP at 24-bit/96 kHz or higher, we had significant problems with 5 or more endpoints, using rtpgstpay. The endpoints towards the end of the list were still receiving packets, but because they came in so quickly (usually around 20 packets at a time) I believe the udpsrc just didn't keep up reading from the socket and pushing them through. The socket buffer size in our case was around 200k, so enough for maybe 130+ packets.

Placing a queue between the udpsrc and rtpbin also did not help as one would expect, because the burst of packets still arrived too late, which triggered rtpbin to request an rtx, which then just made the situation even worse. Effectively those endpoints would fade in and out, not playing most of the time. And this is at data rates of only about 1 Mbps per endpoint, so really not a large volume of data.

This patch was the only thing that made it all work, and it works surprisingly well. We still place a queue between the udpsrc and rtpbin to make sure we can absorb temporary bursts. I hope this makes sense.
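For reference, a minimal receiver-side sketch of the mitigations described above: a larger kernel receive buffer on udpsrc plus a queue in front of rtpbin to absorb bursts. The port, caps and buffer-size values are illustrative, not taken from the actual setup.

/* Receiver-side sketch: bump udpsrc's kernel receive buffer and add a
 * queue before rtpbin. Values are placeholders. */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline;
  GError *error = NULL;

  gst_init (&argc, &argv);

  pipeline = gst_parse_launch (
      "rtpbin name=rtp "
      "udpsrc port=5004 buffer-size=400000 caps=application/x-rtp "
      "! queue ! rtp.recv_rtp_sink_0", &error);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    g_clear_error (&error);
    return 1;
  }

  /* ... connect rtpbin's pad-added signal to a depayloader/decoder and
   * set the pipeline to PLAYING ... */

  gst_object_unref (pipeline);
  return 0;
}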
(In reply to Tim-Philipp Müller from comment #2)
> I wonder if perhaps we should default to different behaviour in the
> payloader (whatever created these buffer lists) to send packets in multiple
> lists, like rtpvrawpay does. What was the format where you observed this?

I don't understand how this would work. Wouldn't that imply that the payloader would have to know what endpoints to send to, and effectively multiplex the buffers into one list per endpoint? And then use a dynudpsink rather than a multiudpsink?
-- GitLab Migration Automatic Message -- This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further via the new bug on our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/291.