Bug 725630 - gst-rtsp-server: memory leak after connecting/disconnecting repeatedly
Status: RESOLVED INVALID
Product: GStreamer
Classification: Platform
Component: gst-rtsp-server
Version: git master
OS: Other Linux
Importance: Normal normal
Target Milestone: git master
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2014-03-04 00:32 UTC by Aleix Conchillo Flaqué
Modified: 2014-03-04 15:12 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
client for test-launch sample (663 bytes, text/x-python), attached 2014-03-04 00:32 UTC by Aleix Conchillo Flaqué

Description Aleix Conchillo Flaqué 2014-03-04 00:32:25 UTC
Created attachment 270859
client for test-launch sample

I'm investigating what seems to be a gst-rtsp-server memory leak.

It can be reproduced with the test-launch example. This is what I run:

./test-launch "( videotestsrc ! capsfilter caps=\"video/x-raw,width=1920,height=1080\" ! x264enc tune=zerolatency speed-preset=ultrafast ! rtph264pay name=pay0 pt=96 )"

Then I have a really simple Python script (attached) that creates as many clients as desired, sequentially.

./test-launch-client.py 100
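The attached script isn't reproduced in this page, but a minimal sketch of such a client could look like the following. This is a hypothetical reconstruction, not the reporter's code: it opens a raw TCP connection to the default test-launch address (127.0.0.1:8554, stream path /test) and issues plain RTSP/1.0 requests before disconnecting.

```python
import socket

# Default address and mount point used by the test-launch example
# (assumptions taken from the commands above, not from the attachment).
SERVER = ("127.0.0.1", 8554)
URL = "rtsp://127.0.0.1:8554/test"

def build_request(method, url, cseq):
    """Build a minimal RTSP/1.0 request with the mandatory CSeq header."""
    return (
        f"{method} {url} RTSP/1.0\r\n"
        f"CSeq: {cseq}\r\n"
        "\r\n"
    )

def run_client():
    """Connect, issue OPTIONS and DESCRIBE, then disconnect."""
    with socket.create_connection(SERVER, timeout=5) as sock:
        for cseq, method in enumerate(("OPTIONS", "DESCRIBE"), start=1):
            sock.sendall(build_request(method, URL, cseq).encode("ascii"))
            sock.recv(4096)  # read (and discard) the server's response

if __name__ == "__main__":
    import sys
    # Number of sequential clients, e.g. ./test-launch-client.py 100
    for _ in range(int(sys.argv[1]) if len(sys.argv) > 1 else 0):
        run_client()
```

Each iteration is a full connect/disconnect cycle, which is what exercises the media setup and teardown paths on the server side.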

Memory keeps slowly increasing indefinitely (just looking at htop).

I have run valgrind multiple times, but right now the cause isn't obvious to me.

I have also tried a different encoder and payloader (theora) with the same effect.

./test-launch "( videotestsrc ! capsfilter caps=\"video/x-raw,width=1920,height=1080\" ! theoraenc ! rtptheorapay name=pay0 pt=96 )"

With jpegenc and rtpjpegpay, it looks much better.

Maybe I don't correctly understand something about what's going on (e.g. regarding buffer pools).

I'll update the bug if I find something else.
Comment 1 Aleix Conchillo Flaqué 2014-03-04 00:35:47 UTC
I am also using the getrusage() syscall to get more feedback at runtime.
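A comparable check can be done from Python, whose resource module wraps getrusage(). This is a hypothetical sketch, not the reporter's instrumentation; note that ru_maxrss is reported in kilobytes on Linux but in bytes on macOS.

```python
import resource
import sys

def maxrss_mb():
    """Peak resident set size (maxrss) of this process, in MB."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        return rss / (1024 * 1024)  # macOS reports bytes
    return rss / 1024               # Linux reports kilobytes

print(f"maxrss: {maxrss_mb():.6f} MB")
```

Printing this at each lifecycle step (media init, preroll, finalize) gives output like the log below.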

stream ready at rtsp://127.0.0.1:8554/test
client connected
   media init - maxrss: 10.324219 MB
   preparing media - maxrss: 10.324219 MB
   media waiting preroll - maxrss: 10.324219 MB
   session bin joined - maxrss: 10.324219 MB
   media prerolled - maxrss: 31.195312 MB
   media prepared - maxrss: 31.195312 MB
   media waiting preroll - maxrss: 31.195312 MB
   media prerolled - maxrss: 37.066406 MB
   session media finalize
   stream finalize
   media finalize - maxrss: 47.117188 MB
client finalize
...
...
...
client connected
   media init - maxrss: 83.570312 MB
   preparing media - maxrss: 83.570312 MB
   media waiting preroll - maxrss: 83.570312 MB
   session bin joined - maxrss: 83.570312 MB
   media prerolled - maxrss: 86.894531 MB
   media prepared - maxrss: 86.894531 MB
   media waiting preroll - maxrss: 87.378906 MB
   media prerolled - maxrss: 90.523438 MB
   session media finalize
   stream finalize
   media finalize - maxrss: 101.171875 MB
client finalize
Comment 2 Aleix Conchillo Flaqué 2014-03-04 15:12:07 UTC
I'm afraid I have to apologize here. I tried this again overnight and no leak occurred: test-launch used only 224 MB of memory after 8000 client connections/disconnections. Memory does seem to increase at first, but that's not the reason I reported this bug.

Yesterday, I tried test-launch with a slightly modified client script that unfortunately didn't kill all of its clients, so it seemed a memory leak was happening.

The reason I looked into this is that I have a gst-rtsp-server application that is leaking memory. I have found where the leak occurs, now I have to find why. :-)

Anyway, resolving the bug as INVALID. If my bug turns out to be related to gst-rtsp-server (which I doubt right now), I will open a new bug.

Sorry for the extra noise.