GNOME Bugzilla – Bug 692323
rtsp-client: "rollback" does not destroy the rtsp watch
Last modified: 2013-06-06 08:11:55 UTC
Created attachment 234135 [details] — rtsp client connection segfault

The patch from Bug 685220 (commit 13e1b15da1fc0c04cf1816863febaeeb11ad5ea1) seems to do the right thing for destroying the client's RTSP connection watch. However, under stress tests of the server (multiple clients connecting and disconnecting simultaneously) we have run into segfaults several times (see attachment).

Previously, after closing the connection the watch was no longer available. With the new code, the watch keeps running after we close the connection, so we might end up using the connection when we should not. I can think of two situations:

- A badly implemented client that sends data immediately after TEARDOWN. In handle_teardown_request we close the connection, but the watch still runs.
- A client session expires, so we close the connection for that session, but the watch still runs. If the client then decides to send data, we hit the same problem.

So I think the previous implementation is safer. The new one only works if the client closes the connection properly, and does not take these corner cases into account.
Do you use RTP/TCP transport in your clients? I'm seeing crashes in the "connection watch" area, but triggered from the streaming thread, reported in bug 692433. It's not exactly the same, although the solution (which I don't know yet) may, or really should, fix both of them.
No, we use RTP/UDP. But the connection I'm referring to is the RTSP socket.
I think the fix for #692433 might also fix this. Can you retest?
Aleix, ping
Hey! I just forgot about this one. We are in the process of switching to GStreamer 1.0, almost there. So I am not sure I will be able to retest under 0.10, but I will for 1.0. Maybe you should just close this, considering 0.10 is not maintained anymore. I will reopen a bug if we can still reproduce this in 1.0.
Alright, thanks. Let's close this for now then and you'll re-open or file a new bug if it's still an issue with 1.x.