Bug 795396 - rtsp-client: tunneled streams can stop streaming if client reads slowly and uses RTSP as keep-alive
Status: RESOLVED FIXED
Product: GStreamer
Classification: Platform
Component: gst-rtsp-server
Version: git master
Hardware/OS: Other Linux
Priority/Severity: Normal normal
Target Milestone: 1.15.1
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2018-04-20 09:09 UTC by Branko Subasic
Modified: 2018-07-23 15:05 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
Always unlimit the backlog in rtsp watch when replying to rtsp commands (5.52 KB, patch)
2018-04-23 06:01 UTC, Branko Subasic
Status: rejected

Description Branko Subasic 2018-04-20 09:09:37 UTC
When tunneling RTP over RTSP, all data (RTSP, RTP, and RTCP) is written on the same connection. The connection has a backlog where data is placed if it cannot be written to the socket immediately. The backlog is limited, and when it is full the thread trying to write will block on a GCond until there is room in the backlog again. So if the RTSP client tries to send a response to an RTSP command while the backlog is full, the client thread will block until it is signaled that it can continue. Unfortunately, the callback function that does the signaling is executed in the same context.
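
Very roughly, the mechanism described above can be sketched as follows. This is not the actual GstRTSPConnection code; Backlog, backlog_push() and backlog_pop() are made-up names used only to illustrate the problem: the only thing that signals the GCond is the dispatch that drains the backlog, and if that dispatch runs in the same context as the blocked writer it never gets a chance to run.

#include <glib.h>

/* Hypothetical bounded backlog, loosely modelled on the behaviour
 * described above. */
typedef struct {
  GMutex lock;
  GCond  cond;
  GQueue messages;   /* queued outgoing messages */
  guint  limit;      /* 0 means unlimited */
} Backlog;

static void
backlog_init (Backlog *b, guint limit)
{
  g_mutex_init (&b->lock);
  g_cond_init (&b->cond);
  g_queue_init (&b->messages);
  b->limit = limit;
}

/* Called by a thread that wants to queue a message for sending. */
static void
backlog_push (Backlog *b, gpointer msg)
{
  g_mutex_lock (&b->lock);
  while (b->limit != 0 && g_queue_get_length (&b->messages) >= b->limit) {
    /* Blocks until backlog_pop() signals that there is room again.
     * If backlog_pop() can only run in the very thread that is blocked
     * here, this wait never returns: the deadlock from the report. */
    g_cond_wait (&b->cond, &b->lock);
  }
  g_queue_push_tail (&b->messages, msg);
  g_mutex_unlock (&b->lock);
}

/* Called from the connection's write dispatch when the socket becomes
 * writable, i.e. in the watch's GMainContext. */
static gpointer
backlog_pop (Backlog *b)
{
  gpointer msg;

  g_mutex_lock (&b->lock);
  msg = g_queue_pop_head (&b->messages);
  g_cond_signal (&b->cond);   /* wake a writer waiting for room */
  g_mutex_unlock (&b->lock);
  return msg;
}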

Some years ago a patch was made that circumvents the issue by opening up the backlog (i.e. removing the limit) while serving an RTSP request, and limiting it again when we're done. That was commit 404a80e38a508c13a9e9b7f9b1da70f3ccb2a77b.

However, that patch missed a few error cases. If authentication fails or the RTSP version supplied by the client is wrong, we send a response before making the backlog unlimited, which results in the same deadlock if the backlog is full.
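
For reference, the workaround from that commit amounts to something like the sketch below. It is not the actual rtsp-client.c code: handle_request_sketch() and WATCH_BACKLOG_SIZE are made-up names, while gst_rtsp_watch_set_send_backlog() is real GstRTSPWatch API, where 0/0 means no limit. The missed error paths are exactly the ones that queue a reply before the first call.

#include <gst/rtsp/gstrtspconnection.h>

#define WATCH_BACKLOG_SIZE (100 * 1024)   /* hypothetical byte limit */

static void
handle_request_sketch (GstRTSPWatch *watch)
{
  /* Lift the limit so queueing the reply can never block this thread. */
  gst_rtsp_watch_set_send_backlog (watch, 0, 0);

  /* ... parse and dispatch the request, queue the response ... */

  /* Restore the limit so a slow client cannot make us queue unbounded
   * amounts of tunneled RTP/RTCP data (the exact values are not the
   * point of this sketch). */
  gst_rtsp_watch_set_send_backlog (watch, WATCH_BACKLOG_SIZE, 0);
}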
Comment 1 Branko Subasic 2018-04-23 06:01:06 UTC
Created attachment 371254 [details] [review]
Always unlimit the backlog in rtsp watch when replying to rtsp commands
Comment 2 Sebastian Dröge (slomo) 2018-07-23 15:05:21 UTC
This is obsolete now with

commit 12169f1e845a45db41cc7d416ca0f101fa607176
Author: David Svensson Fors <davidsf@axis.com>
Date:   Thu Jun 28 11:22:21 2018 +0200

    Limit queued TCP data messages to one per stream
    
    Before, the watch backlog size in GstRTSPClient was changed
    dynamically between unlimited and a fixed size, trying to avoid both
    unlimited memory usage and deadlocks while waiting for place in the
    queue. (Some of the deadlocks were described in a long comment in
    handle_request().)
    
    In the previous commit, we changed to a fixed backlog size of 100.
    This is possible, because we now handle RTP/RTCP data messages differently
    from RTSP request/response messages.
    
    The data messages are messages tunneled over TCP. We allow at most one
    queued data message per stream in GstRTSPClient at a time, and
    successfully sent data messages are acked by sending a "message-sent"
    callback from the GstStreamTransport. Until that ack comes, the
    GstRTSPStream does not call pull_sample() on its appsink, and
    therefore the streaming thread in the pipeline will not be blocked
    inside GstRTSPClient, waiting for a place in the queue.
    
    pull_sample() is called when we have both an ack and a "new-sample"
    signal from the appsink. Then, we know there is a buffer to write.
    
    RTSP request/response messages are not acked in the same way as data
    messages. The rest of the 100 places in the queue are used for
    them. If the queue becomes full of request/response messages, we
    return an error and close the connection to the client.
    
    Change-Id: I275310bc90a219ceb2473c098261acc78be84c97
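
The gist of that scheme can be sketched roughly as follows. This is not the actual gst-rtsp-server code: StreamState, send_data_message() and the callback names are hypothetical, and only gst_app_sink_pull_sample() and the appsink "new-sample" callback are real GStreamer API. A sample is pulled and a new data message queued only when the previous message has been acked and a sample is known to be waiting, so at most one data message per stream is queued at a time and the streaming thread never blocks on queue space.

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

/* Hypothetical per-stream bookkeeping, not the real GstRTSPStream fields. */
typedef struct {
  GMutex      lock;
  gboolean    message_acked;    /* TRUE initially and after each "message-sent" ack */
  gboolean    sample_pending;   /* appsink reported "new-sample" */
  GstAppSink *appsink;
} StreamState;

/* Stand-in for queueing the data message on the client's watch; in the
 * real code, sending it would eventually trigger on_message_sent(). */
static void
send_data_message (StreamState *s, GstSample *sample)
{
  gst_sample_unref (sample);
}

/* Pull and send only when we have both the ack and a pending sample. */
static void
maybe_send (StreamState *s)
{
  if (s->message_acked && s->sample_pending) {
    GstSample *sample;

    s->message_acked = FALSE;
    s->sample_pending = FALSE;
    sample = gst_app_sink_pull_sample (s->appsink);
    if (sample != NULL)
      send_data_message (s, sample);
  }
}

/* "new-sample" callback from the appsink, running in the streaming
 * thread.  Note that it never blocks waiting for queue space. */
static GstFlowReturn
on_new_sample (GstAppSink *appsink, gpointer user_data)
{
  StreamState *s = user_data;

  g_mutex_lock (&s->lock);
  s->sample_pending = TRUE;
  maybe_send (s);
  g_mutex_unlock (&s->lock);
  return GST_FLOW_OK;
}

/* Ack that the previously queued data message was actually sent,
 * corresponding to the "message-sent" callback mentioned above. */
static void
on_message_sent (gpointer user_data)
{
  StreamState *s = user_data;

  g_mutex_lock (&s->lock);
  s->message_acked = TRUE;
  maybe_send (s);
  g_mutex_unlock (&s->lock);
}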