Bug 731404 - hlsdemux memory leak
Status: RESOLVED FIXED
Product: GStreamer
Classification: Platform
Component: gst-plugins-bad
Version: 1.4.0
OS/Platform: Other Linux
Importance: Normal major
Target Milestone: 1.5.1
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2014-06-09 13:54 UTC by m][sko
Modified: 2015-01-16 16:21 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
randomly change quality of adaptive stream after every fragment (1.77 KB, patch), 2014-06-09 13:54 UTC, m][sko, status: none
free old groups when switching groups (2.73 KB, patch), 2015-01-16 15:24 UTC, Vincent Penquerc'h, status: committed

Description m][sko 2014-06-09 13:54:35 UTC
Created attachment 278136 [details] [review]
randomly change quality of adaptive stream after every fragment

I made a patch that tests an adaptive scenario that heavily changes quality after every fragment in hlsdemux.
There is a huge memory leak somewhere in hlsdemux.
Comment 1 m][sko 2014-06-09 14:04:22 UTC
I use this HLS stream:

gst-launch-1.0 playbin uri="http://stream.gravlab.net/003119/sparse/v1d30/posts/2013/merry-christmas/at3am/takemehometon1ght.m3u8"
Comment 2 Thiago Sousa Santos 2014-06-09 21:51:36 UTC
Did you track the leak? Why do you think it is in hlsdemux?

It might be related to the new elements being created after each fragment. With valgrind I couldn't trace it, as it fails after creating 500 threads. Each fragment switch forces decodebin to create new elements and new threads; it might be related to this, but I didn't look very deep.
Comment 3 m][sko 2014-06-10 06:20:43 UTC
It happens only if hlsdemux changes quality.
Memory usage goes from 7% to 25%.
Sorry, I did not track the leak, but it is hlsdemux related.
Comment 4 Sebastian Dröge (slomo) 2014-06-23 18:30:34 UTC
Can you provide a valgrind log? For memory leaks memcheck, for just allocating a lot of memory massif can be used.
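For reference, a typical way to produce the logs Sebastian is asking for (a sketch; the stream URI is the one from comment 1, and the GLib environment variables just make allocations easier for valgrind to attribute — `--max-threads` works around the 500-thread failure Thiago mentioned, on valgrind versions that support it):

```shell
# memcheck: find actual leaks
G_SLICE=always-malloc G_DEBUG=gc-friendly \
  valgrind --leak-check=full --num-callers=30 --max-threads=2000 \
  --log-file=hls-memcheck.log \
  gst-launch-1.0 playbin uri="http://stream.gravlab.net/003119/sparse/v1d30/posts/2013/merry-christmas/at3am/takemehometon1ght.m3u8"

# massif: profile heap growth over time (better for "not a leak, just growth")
valgrind --tool=massif gst-launch-1.0 playbin uri="http://stream.gravlab.net/003119/sparse/v1d30/posts/2013/merry-christmas/at3am/takemehometon1ght.m3u8"
ms_print massif.out.*
```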
Comment 5 m][sko 2014-07-10 07:34:41 UTC
Sorry, I don't have any experience with valgrind and GStreamer memory management.
Try my "random" patch and my test HLS stream.
Comment 6 m][sko 2014-08-06 21:15:44 UTC
Still present in 1.4.0.
Comment 7 Tim-Philipp Müller 2015-01-09 19:39:39 UTC
Still present in git master (after adaptivedemux base class rewrite)?
Comment 8 Vincent Penquerc'h 2015-01-14 12:59:49 UTC
I can see it happen, I will debug.
Comment 9 Vincent Penquerc'h 2015-01-16 11:49:09 UTC
I pushed a few leak fixes to hlsdemux and related code, but the big one seems to be in playbin/decodebin rather than hlsdemux. There is no actual leak, so valgrind output is (mostly) clean. Decoder elements that get created at each change are kept alive till the end of the pipeline, along with the buffer pool and buffers they are using. At the end of the pipeline, all of this gets unreffed. I'm not sure what ref is keeping all of this alive, however.
Comment 10 Sebastian Dröge (slomo) 2015-01-16 12:09:09 UTC
That's intentional currently. There is nothing that removes the old groups from decodebin.

You would need some kind of cleanup thread for that, as at the moment when you switch groups you would be called from the streaming thread and could potentially deadlock when shutting down the elements.
Comment 11 Thiago Sousa Santos 2015-01-16 12:29:51 UTC
Thanks for the leak fixes Vincent.

I had a similar issue in adaptivedemux: I store the old stuff in a list instead of cleaning up immediately, and the next thread I start for downloading buffers is the one that checks this list and unrefs everything. If no new thread is created, it will also be cleaned up when going to NULL. Not really beautiful, but better than keeping the unused memory around. Maybe something similar can be done in decodebin?
Comment 12 Sebastian Dröge (slomo) 2015-01-16 13:22:51 UTC
Yes, in theory you could clean up the old elements (A) when the next elements (B) are finished. This might only block for a while if A is still processing, i.e. if B was very short.
Comment 13 m][sko 2015-01-16 14:53:47 UTC
Vincent, can you please backport some of the leak fixes to the 1.4 branch?
Thanks.
Comment 14 Vincent Penquerc'h 2015-01-16 15:24:34 UTC
Created attachment 294695 [details] [review]
free old groups when switching groups

Something like this ?
This seems to work fine, and memory usage stays roughly stable after increasing for a while. Yell if I misunderstood the code :)
Comment 15 Sebastian Dröge (slomo) 2015-01-16 15:37:57 UTC
Can we track that in another bug against -base? And I think there is already one :)
Comment 16 Sebastian Dröge (slomo) 2015-01-16 15:39:28 UTC
Bug #678306 is relevant
Comment 17 Sebastian Dröge (slomo) 2015-01-16 15:43:10 UTC
Review of attachment 294695 [details] [review]:

::: gst/playback/gstdecodebin2.c
@@ +3443,3 @@
+
+static void
+gst_decode_chain_start_free_hidden_groups_thead (GstDecodeChain * chain)

Typo: thead

@@ +3465,3 @@
+  GST_DEBUG_OBJECT (chain->dbin, "Started free-hidden-groups thread");
+  /* We do not need to wait for it or get any results from it */
+  g_thread_unref (thread);

Is the unreffing really necessary here? I remember that the refcounting with GThread was a bit weird, or maybe not?
Comment 18 Vincent Penquerc'h 2015-01-16 15:51:21 UTC
Typo fixed, I posted the updated patch on that other thread.

My understanding is that you need to unref the thread if you're not interested in joining to it later. When the thread ends, it will drop its ref to the GThread object, which will be reaped at that time.
Comment 19 Vincent Penquerc'h 2015-01-16 16:21:16 UTC
The hlsdemux/adaptive/fragment bits are now in 1.4 too. The main not-a-leak fix will follow when/if it passes testing on master.