GNOME Bugzilla – Bug 731404
hlsdemux memory leak
Last modified: 2015-01-16 16:21:59 UTC
Created attachment 278136 [details] [review]
randomly change quality of adaptive stream after every fragment
I made a patch that tests an adaptive scenario that heavily changes quality after every fragment in hlsdemux.
There is huge memory leak somewhere in hlsdemux.
I use this HLS stream:
gst-launch-1.0 playbin uri="http://stream.gravlab.net/003119/sparse/v1d30/posts/2013/merry-christmas/at3am/takemehometon1ght.m3u8"
Did you track the leak? Why do you think it is in hlsdemux?
It might be related to the new elements being created after each fragment. With valgrind I couldn't trace it as it fails after creating 500 threads. Each fragment switch forces decodebin to create new elements and new threads, might be related to this but I didn't look very deep.
It happens only if hlsdemux changes quality.
Memory usage goes from 7% to 25%.
Sorry I did not track the leak.
But it is hlsdemux related.
Can you provide a valgrind log? For memory leaks memcheck, for just allocating a lot of memory massif can be used.
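For reference, a minimal sketch of the two invocations suggested above, using the pipeline from this report (the exact flags shown here are common choices, not taken from this thread):

```shell
# memcheck: reports actual leaks, i.e. memory that became unreachable
valgrind --tool=memcheck --leak-check=full --num-callers=32 \
    gst-launch-1.0 playbin uri="http://stream.gravlab.net/003119/sparse/v1d30/posts/2013/merry-christmas/at3am/takemehometon1ght.m3u8" \
    2> memcheck.log

# massif: profiles heap growth over time, which catches the "not a leak"
# case where memory is still referenced but never released
valgrind --tool=massif \
    gst-launch-1.0 playbin uri="http://stream.gravlab.net/003119/sparse/v1d30/posts/2013/merry-christmas/at3am/takemehometon1ght.m3u8"
ms_print massif.out.* > massif.log
```

massif is the better fit for this bug, since (as later comments confirm) the memory is still referenced at pipeline teardown, so memcheck's leak report stays mostly clean.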
Sorry, I don't have any experience with valgrind or GStreamer memory management.
Try my "random" patch and my test HLS stream.
Still present in 1.4.0.
Still present in git master (after adaptivedemux base class rewrite)?
I can see it happen, I will debug.
I pushed a few leak fixes to hlsdemux and related code, but the big one seems to be in playbin/decodebin rather than hlsdemux. There is no actual leak, so valgrind output is (mostly) clean. Decoder elements that get created at each change are kept alive till the end of the pipeline, along with the buffer pool and buffers they are using. At the end of the pipeline, all of this gets unreffed. I'm not sure what ref is keeping all of this alive, however.
That's intentional currently. There is nothing that removes the old groups from decodebin.
You would need some kind of cleanup thread for that, as at the moment when you switch groups you would be called from the streaming thread and could potentially deadlock when shutting down the elements.
Thanks for the leak fixes Vincent.
I had a similar issue in adaptivedemux and I store the old stuff in a list instead of cleaning up and the next thread I start for downloading buffers is the one that checks this list and unrefs everything. If no new thread is created it will also be cleaned up when going to NULL. Not really beautiful but better than keeping the unused memory around. Maybe something similar can be done in decodebin?
Yes, you could in theory clean up the old elements (A) when the next elements (B) are finished. This might only block for a while if A are still doing processing, i.e. if B was very short.
Victor, can you please backport some leak fixes to the 1.4 branch?
Created attachment 294695 [details] [review]
free old groups when switching groups
Something like this ?
This seems to work fine, and memory usage stays roughly stable after increasing for a while. Yell if I misunderstood the code :)
Can we track that in another bug against -base? And I think there is already one :)
Bug #678306 is relevant
Review of attachment 294695 [details] [review]:
@@ +3443,3 @@
+gst_decode_chain_start_free_hidden_groups_thead (GstDecodeChain * chain)
@@ +3465,3 @@
+ GST_DEBUG_OBJECT (chain->dbin, "Started free-hidden-groups thread");
+ /* We do not need to wait for it or get any results from it */
+ g_thread_unref (thread);
Is the unreffing really necessary here? I remember the refcounting with GThread being a bit weird, or maybe not?
Typo fixed, I posted the updated patch on that other thread.
My understanding is that you need to unref the thread if you're not interested in joining to it later. When the thread ends, it will drop its ref to the GThread object, which will be reaped at that time.
The hlsdemux/adaptive/fragment bits are now in 1.4 too. The main not-a-leak fix will follow when/if it passes testing on master.