GNOME Bugzilla – Bug 757565
rtpmp2tpay: Add property to aggregate multiple TS packets or not
Last modified: 2018-11-03 15:05:26 UTC
Created attachment 314786 [details] patch file for adding "aggregate" property to rtpmp2tpay Sometimes a single video frame produces only one TS packet. In that case rtpmp2tpay waits for the next TS packets to fill the payload. Most other RTP payloaders do not mix separate video frames into one RTP packet; that is more resistant to packet loss and can reduce latency. That is why I am adding the "aggregate" property. If the aggregate property is set to false, rtpmp2tpay fills the RTP payload without aggregating the following TS packets. If it is true, rtpmp2tpay keeps filling the payload until it would exceed the MTU size, as it does now.
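As a quick way to exercise both modes once such a patch is applied, a small test program along these lines could be used. Only the "aggregate" property name comes from the attachment; the pipeline, element names and port are illustrative assumptions:

/* Minimal test program (illustrative only): builds a TS-over-RTP
 * pipeline and turns aggregation off on the payloader. Without the
 * patch applied, g_object_set() merely warns about an unknown property.
 * Build: gcc test-mp2tpay.c $(pkg-config --cflags --libs gstreamer-1.0)
 */
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstElement *pipeline, *pay;
  GError *err = NULL;

  gst_init (&argc, &argv);

  pipeline = gst_parse_launch (
      "videotestsrc is-live=true ! x264enc tune=zerolatency "
      "! mpegtsmux name=mux ! rtpmp2tpay name=pay "
      "! udpsink host=127.0.0.1 port=5004", &err);
  if (pipeline == NULL) {
    g_printerr ("Failed to create pipeline: %s\n", err->message);
    g_clear_error (&err);
    return 1;
  }

  /* With the proposed patch: push one RTP packet per incoming TS
   * buffer instead of filling the payload up to the MTU. */
  pay = gst_bin_get_by_name (GST_BIN (pipeline), "pay");
  g_object_set (pay, "aggregate", FALSE, NULL);
  gst_object_unref (pay);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}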
Created attachment 314787 [details] [review] rtpmp2tpay: Add property to aggregate multiple TS packets or not I added the bugzilla URL to this patch.
Related: see https://bugzilla.gnome.org/show_bug.cgi?id=757567 Also, you probably want to make the aggregate property an integer for the number of packets, and probably also have a way to define a time limit (i.e. never aggregate more than 200ms of packets) so as not to unnecessarily increase latency (note: this requires using the clock for a correct implementation).
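To illustrate that suggestion (the names, defaults and ranges below are made up for the sketch, they are not part of the attached patch), the payloader could expose both limits as properties:

#include <gst/gst.h>

/* Hypothetical property definitions for a packet-count limit and a
 * time limit on aggregation; purely a sketch of the suggestion above. */
#define DEFAULT_MAX_AGGREGATE_PACKETS 7                 /* roughly one MTU worth */
#define DEFAULT_MAX_AGGREGATE_TIME (200 * GST_MSECOND)  /* never buffer more than 200ms */

enum
{
  PROP_0,
  PROP_MAX_AGGREGATE_PACKETS,
  PROP_MAX_AGGREGATE_TIME,
};

static void
install_aggregation_properties (GObjectClass * gobject_class)
{
  g_object_class_install_property (gobject_class, PROP_MAX_AGGREGATE_PACKETS,
      g_param_spec_uint ("max-aggregate-packets", "Max aggregate packets",
          "Maximum number of TS packets per RTP payload (1 = never aggregate)",
          1, G_MAXUINT, DEFAULT_MAX_AGGREGATE_PACKETS,
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));

  g_object_class_install_property (gobject_class, PROP_MAX_AGGREGATE_TIME,
      g_param_spec_uint64 ("max-aggregate-time", "Max aggregate time",
          "Never aggregate more than this much data, in nanoseconds "
          "(a correct implementation needs the clock to flush on timeout)",
          0, G_MAXUINT64, DEFAULT_MAX_AGGREGATE_TIME,
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
}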
What kind of silly MTU would allow aggregation of 200ms of packets? At most you're talking about aggregating 7-8 MPEG-TS packets, right?
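(For reference, assuming a standard 1500-byte Ethernet MTU: 1500 - 20 bytes IP - 8 bytes UDP - 12 bytes RTP leaves 1460 bytes of payload, and 1460 / 188 ≈ 7.7, so at most 7 whole 188-byte TS packets fit per RTP packet.)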
I think we should just not aggregate, to minimise latency, which is usually the desired behaviour for RTP. Anyone who wants to make sure output buffers are always full can just chunk the input accordingly. The aggregation may have been required previously because mpegtsmux at some point used to push every single MPEG-TS packet as an individual buffer downstream (now it at least pushes a buffer list of one-buffer-per-packet buffers ;-)). Anyway: we should check that this approach works with what tsmux outputs, I guess.
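For the "chunk the input accordingly" route, and assuming the pipeline from the test program above plus an "alignment" property on mpegtsmux in your GStreamer version (verify with gst-inspect-1.0 mpegtsmux), asking the muxer for fixed 7-packet buffers would look roughly like this:

  /* Ask the muxer for 7 TS packets (7 * 188 = 1316 bytes) per output
   * buffer, so the payloader's RTP packets come out full without any
   * aggregation in the payloader itself. */
  GstElement *mux = gst_bin_get_by_name (GST_BIN (pipeline), "mux");
  g_object_set (mux, "alignment", 7, NULL);
  gst_object_unref (mux);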
and tsparse.
As it is now, mpegtsmux output can consist of multiple TS packets; it depends on the input to mpegtsmux. For example, a video I-frame is muxed into more than 7 TS packets. RTP is designed for low-latency streaming over UDP, so aggregation is not good for latency. But some applications may consider the RTP header to be overhead, and those applications may want to aggregate TS packets to reduce the RTP header overhead. I think the aggregate property is good for supporting both cases. What's your opinion on this issue?
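(To put that overhead in numbers: with a 12-byte RTP header and 188-byte TS packets, one TS packet per RTP packet adds about 12/188 ≈ 6.4% overhead, while seven TS packets per RTP packet add about 12/1316 ≈ 0.9%.)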
Is there anyone who wants to share their thoughts?
Tim, another way is to port this muxer to the aggregator base class. With the internal queues and the configurable latency, it would be easy to give back control over the aggregation, and in general we'd be aggregating the entire queue, so having a certain queue size could help performance without adding any latency. It's also a good candidate as it resides in -bad.
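For context, a bare-bones skeleton of what a port to the aggregator base class looks like; the type and function names below are purely illustrative, not an actual mpegtsmux port:

#include <gst/gst.h>
#include <gst/base/gstaggregator.h>

/* Illustrative skeleton only, not an actual port of mpegtsmux. */
typedef struct _MyTsMux
{
  GstAggregator parent;
} MyTsMux;

typedef struct _MyTsMuxClass
{
  GstAggregatorClass parent_class;
} MyTsMuxClass;

G_DEFINE_TYPE (MyTsMux, my_ts_mux, GST_TYPE_AGGREGATOR);

/* Called by the base class whenever data is queued on the sink pads,
 * or when the configured latency deadline expires in a live pipeline.
 * A real muxer would pop buffers from its pads here, mux them into TS
 * packets and push the result with gst_aggregator_finish_buffer(). */
static GstFlowReturn
my_ts_mux_aggregate (GstAggregator * agg, gboolean timeout)
{
  return GST_FLOW_OK;
}

static void
my_ts_mux_class_init (MyTsMuxClass * klass)
{
  GstAggregatorClass *agg_class = GST_AGGREGATOR_CLASS (klass);

  /* Sink pad templates (GstAggregatorPad based) would be added here. */
  agg_class->aggregate = my_ts_mux_aggregate;
}

static void
my_ts_mux_init (MyTsMux * mux)
{
  /* The base class already exposes a "latency" property that bounds
   * how long aggregate() waits for data on all pads when live. */
}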
Nicolas, I think perhaps you misunderstood the purpose of my comment. I don't disagree with anything you say, and I do think we should port all muxers over at some point, but I don't think it's related to this issue really (the performance is bad because of the way the code is structured, that's all). I was arguing that instead of making this a property in the payloader we should just never aggregate in the payloader. This raises the question why it was done like this in the payloader in the first place. My comment about how mpegtsmux used to work was supposed to provide a possible explanation why this was added in the first place. So in short: I think we should just never aggregate in the payloader and double-check that mpegtsmux and tsparse do the right thing these days.
Ah, it aggregates in the payloader, sorry, indeed I missed it.
Do we need to modify rtpmp2tpay so that it does not aggregate the TS packets?
-- GitLab Migration Automatic Message -- This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/233.