GNOME Bugzilla – Bug 373003
debug traces prevent segmentation fault when playing RTSP/RTP
Last modified: 2007-05-11 09:33:09 UTC
Steps to reproduce:

1. Run vlc as an RTSP server for an MP3 file:
   1.1 cp /path/to/xxx.mp3 .
   1.2 vlc --ttl 12 -vvv --color -I telnet --telnet-password toto --rtsp-host 0.0.0.0:5555
   1.3 telnet localhost 4212
       > new tst vod enabled
       > setup tst input xxx.mp3

2. Run gstreamer with debug traces:
   $ gst-launch --gst-debug=3 rtspsrc location=rtsp://myserver:5555/tst ! rtpmpadepay ! fakesink
   => it does not crash.

3. Run gstreamer without debug traces:
   $ gst-launch rtspsrc location=rtsp://myserver:5555/tst ! rtpmpadepay ! fakesink
   => it crashes (segmentation fault), probably on reception of the first RTP packet.

4. Run gstreamer without debug traces and without an RTP depayloader:
   $ gst-launch rtspsrc location=rtsp://myserver:5555/tst ! fakesink
   => it does not crash!

Conclusion: the problem is probably in the RTP depayloader.

Stack trace: N/A (enabling debug traces prevents the crash from occurring).

Other information:
The MP3 file is 256 kbps, 32 kHz, stereo.
The machine running gstreamer is a quad-proc AMD Opteron 270, 2 GHz, 4 GB RAM (this last bit is just pedantic!).

Personally, I think this kind of problem is symptomatic of a shared resource accessed by two threads without proper mutexing. When debug output is off, one thread is pre-empted by the other in the middle of writing something to this shared memory (variable/buffer/etc.).

IMPORTANT: when using RTP interleaved in RTSP (using the fenice server, as vlc does not support interleaved mode), the crash does not occur!
I forgot to mention that if I run gstreamer on the same host as vlc, it does not crash (even without debug statements...). That machine is much less powerful, though: an Intel Pentium 4 at 2.8 GHz.
Could you run whatever you do (your program, gst-launch-0.10) in valgrind to see whether that shows anything?
Can you paste the output of the first packet with fakesink dump=1, to see whether the data is valid enough?
Tried with vlc myself and made the following patch to fix a crasher due to invalid memory access:

        * gst/rtsp/rtspconnection.c: (read_body):
        Don't set a data pointer to NULL and a size > 0 when we deal
        with empty packets.

        * gst/rtsp/rtspmessage.c: (rtsp_message_new_response),
        (rtsp_message_init_response), (rtsp_message_init_data),
        (rtsp_message_unset), (rtsp_message_free),
        (rtsp_message_take_body):
        Check that we can't create invalid empty packets.

Does that also fix it for you?
(In reply to comment #2)
> Could you run whatever you do (your program, gst-launch-0.10) in valgrind to
> see whether that shows anything?

I don't know much about valgrind, I'm afraid... Simply prepending "valgrind" on the command line does not give any useful information (only copyright statements)... Could you suggest some options to pass to valgrind?
> I don't know much about valgrind, I'm afraid... Simply prepending "valgrind" on
> the command line does not give any useful information (only copyright
> statements)... Could you suggest some options to pass to valgrind?

Simply prepending it as in

$ valgrind gst-launch-0.10 ....

should be fine. valgrind makes everything run _much_ slower though, so you should give it 30-60 seconds or so to start doing anything. However, if there is any way you could try Wim's fixes in CVS, that would probably be even more useful than any valgrind log at the moment :)
(In reply to comment #3)
> can you paste the output of the first packet with fakesink dump=1 to see if the
> data is valid enough.

If I do that with the depayloader, nothing is received by fakesink. So I did it without the depayloader (as in point 4 in my bug report), and the output is:

00000000 (0x60c970): 80 8e a2 69 48 2f 53 fa 0f 8e 55 15 00 00 00 00  ...iH/S...U.....
00000010 (0x60c980): ff fb d8 00 00 00 00 00 00 4b 00 00 00 00 00 00  .........K......
00000020 (0x60c990): 09 60 00 00 00 00 00 01 2c 00 00 00 00 00 00 25  .`......,......%
00000030 (0x60c9a0): 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00000040 (0x60c9b0): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[---snip---] then only zeros until:
00000480 (0x60cdf0): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00000000 (0x60c970): 80 8e a2 6a 48 2f 60 a2 0f 8e 55 15 00 00 00 00  ...jH/`...U.....
00000010 (0x60c980): ff fb d8 00 00 00 00 00 00 4b 00 00 00 00 00 00  .........K......
00000020 (0x60c990): 09 60 00 00 00 00 00 01 2c 00 00 00 00 00 00 25  .`......,......%
00000030 (0x60c9a0): 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00000040 (0x60c9b0): 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[---snip---] then only zeros until the next packet, etc.

In each case, the first 12 bytes look like a valid RTP header, with payload type = 14 (MPA). I don't know enough about MP3 to say whether the payload is valid or not, though...
(In reply to comment #4)
> Tried with vlc myself and made the following patch to fix a crasher due to
> invalid memory access:
>
> * gst/rtsp/rtspconnection.c: (read_body):
> Don't set a data pointer to NULL and a size > 0 when we deal
> with empty packets.
>
> * gst/rtsp/rtspmessage.c: (rtsp_message_new_response),
> (rtsp_message_init_response), (rtsp_message_init_data),
> (rtsp_message_unset), (rtsp_message_free),
> (rtsp_message_take_body):
> Check that we can't create invalid empty packets.
>
> Does that also fix it for you?

I tried it, but I still have the same problem...
(In reply to comment #6)
> Simply prepending it as in
>
> $ valgrind gst-launch-0.10 ....
>
> should be fine. valgrind makes everything run _much_ slower though, so you
> should give it 30-60 seconds or so to start doing anything.

That's what I did. And it takes at most a couple of seconds on my machine!
There's been quite a bit of RTSP/RTP work in gst-plugins-base CVS and gst-plugins-bad CVS - is this still a problem with a current GStreamer CVS snapshot?
Hi Tim,

Unfortunately, I am no longer working on that, and things have moved on where I work... I don't think I would even be able to set up the same environment I used at that time! So I won't be able to re-test it, I am afraid... Sorry about that!

Fabrice
Ok, thanks, will close this then. If the bug is still there, someone will run into it again sooner or later.