GNOME Bugzilla – Bug 723850
Elements in rtpbin don't set PROXY_ALLOCATION flag
Last modified: 2018-11-03 14:51:33 UTC
The PROXY_ALLOCATION flag is not currently set on the pads of any of the elements in rtpbin (e.g. rtpsession, the jitterbuffer, etc.). The effect is that allocation queries sent by the source during 'reconfigure' (i.e. after linking) never make it to the downstream sink.
One possible fix is to set this flag on all of the sink pads of the elements in the bin. An alternative would be to modify pad templates so that they can optionally specify pad flags, or to create a new template macro with this flag set.
commit bbe6d9a25876b6869fbd4811b38f9ad65b664a1b
Author: Wim Taymans <wtaymans@redhat.com>
Date:   Fri Feb 14 12:01:00 2014 +0100

    rtpsession: proxy caps and allocation on RTP pads

    recv_rtp_sink: allow proxying of the allocation query.
    send_rtp_sink: allow proxying of caps and allocation. This allows us
    to query caps downstream as well as get an allocator from downstream.
    send_rtp_src: allow proxying of caps; this makes the caps query go
    upstream.

    See https://bugzilla.gnome.org/show_bug.cgi?id=723850
There is no need to set the PROXY_ALLOCATION flag on any of the ghostpads; that's already done. For the session manager, I proxied some caps and allocation queries with the previous patch. This works fine for a sender pipeline. The problem is that in a receiver pipeline, ssrcdemux does not know which branch to proxy to (if there is only one branch it could work, but how should it combine the results from multiple branches?). The same goes for the ptdemux element. They currently don't proxy, so in a receiver pipeline the udpsrc elements would not be able to get a pool from a depayloader (if that is even useful). What kind of information would you like to negotiate with downstream in a receiving pipeline?
commit 353e510f94b30549eeca3c8ff2e573289c4560cd
Author: Wim Taymans <wtaymans@redhat.com>
Date:   Fri Feb 14 15:59:46 2014 +0100

    rtpjitterbuffer: add support for serialized queries

    See https://bugzilla.gnome.org/show_bug.cgi?id=723850
In my case, I was trying to negotiate a bufferpool for the entire receive pipeline. I only had a single source, so setting the flag on ssrcdemux and ptdemux worked; however, as you pointed out, that doesn't help much in the case where multiple synchronization sources exist.

I was trying to follow existing patterns for bufferpool implementations, so the sink (in my case alsasink) was providing the pool. This required adding the proxy flag to the depayloader and forwarding the query in the jitterbuffer as well (unlike the change above, which queues the query). My goal was to negotiate a bufferpool within an entire RTP receive pipeline, preferably without modifying any elements. My original thought was to send the allocation queries myself, until I read through the bufferpool notes in the design docs. After that, I overrode the sink's propose_allocation method in my application, which led me to discover that the query never made it to the udpsrc, which led to this bug.

As a note, once all the PROXY_ALLOCATION changes I mentioned above were made, it "worked". However, it was apparent that this wasn't a great overall solution, but rather one very specific to the single-SSRC case. One idea is for rtpbin to emit a signal when it receives an allocation query and rely on the application to forward it to the correct jitterbuffer.
The main problem is that the pipeline goes through a couple of format transformations that likely require different pools. For example, udpsrc needs buffers to store RTP packets, while the buffers allocated by alsasink should contain raw samples; you are not supposed to place the RTP header in there. Even if you placed the header in those alsasink-allocated buffers, you would need to strip it off again in the depayloader, and alsasink would have to copy anyway. You could do something clever where you reuse alsasink buffer memory only for the payload of the RTP packet, but this is not easy, because udpsrc would then need to know the RTP header size before reading the packet (it could be done by peeking the packet first, but that's not implemented). The real question is: why do you want to make alsasink propose a bufferpool? It only makes sense if you want to avoid a memcpy and can write directly into alsa mmapped memory (which alsasink doesn't implement).
The goal of using a bufferpool in the first place was to hopefully reduce the number of malloc/free operations and measure the performance improvement, if any. The code is being used on a smaller CPU, and profiling the pipeline (callgrind) identified mallocs and mutexing as two of the most common operations. As to why I wanted alsasink to propose the pool: I thought having the sink propose the pool was the preferred implementation approach. Most of the examples I've seen, and the code I've read, show the sink proposing the pool, so I tried to follow that pattern. I don't really care if there is only one pool, or whether it is proposed by alsasink, so long as I can reuse buffers. Are multiple bufferpools commonly used within pipelines? Is there an implementation example? It appears this bug may not be valid in light of the discussion above; if it needs to be closed and there's something I can do, I'm happy to do so.
I think your use case of re-using buffers with a pool totally makes sense. The problem with ptdemux/ssrcdemux is the same as for tee; whatever we do for bug #730758 we can probably re-use here, possibly by putting the allocation-query merging code in the default handler and enabling it when PROXY_ALLOCATION is set.
Actually, in your use case you don't know the size of incoming buffers, so by default udpsrc won't create a buffer pool. But if you don't mind wasting a little memory, you could create a buffer pool in udpsrc with a buffer size of 65536 (enough for any UDP packet), then receive into those buffers, trading some wasted memory for no allocations.
-- GitLab Migration Automatic Message -- This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/107.