Bug 776720 - souphttpsrc: no request retry on early server termination
Status: RESOLVED FIXED
Product: GStreamer
Classification: Platform
Component: gst-plugins-good
Version: git master
OS: Other Linux
Importance: Normal blocker
Target Milestone: 1.10.3
Assigned To: GStreamer Maintainers
QA Contact: GStreamer Maintainers
Depends on:
Blocks:
Reported: 2017-01-02 16:30 UTC by Arnaud Vrac
Modified: 2017-11-08 15:37 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
souphttpsrc: retry request on early termination from the server (1.69 KB, patch), 2017-01-02 17:51 UTC, Arnaud Vrac, status: none
souphttpsrc: retry request on early termination from the server (1.72 KB, patch), 2017-01-19 11:37 UTC, Arnaud Vrac, status: committed

Description Arnaud Vrac 2017-01-02 16:30:55 UTC
There's a regression in 1.10 and git master in the souphttpsrc element compared to 1.8. Previously, when a request terminated early because of a server-side timeout, souphttpsrc would retry the request and continue if possible. Now the pipeline simply returns EOS.

The result is that if you pause a stream on an HTTP source for too long, playback stops shortly after resuming, with no error reported.
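
[Editor's note] For reference, such a retry amounts to re-issuing the HTTP request with a Range header that starts at the current read position, so playback can continue where the closed connection left off. A minimal sketch using the libsoup 2 API that was current at the time; the function name and structure are illustrative, not the element's actual code:

  #include <libsoup/soup.h>

  /* Illustrative only: build a follow-up request that resumes at
   * read_position after the server closed the connection early. */
  static SoupMessage *
  build_resume_request (const char *uri, goffset read_position)
  {
    SoupMessage *msg = soup_message_new ("GET", uri);

    if (msg != NULL && read_position > 0) {
      /* Ask only for the bytes we have not consumed yet ("bytes=N-"). */
      soup_message_headers_set_range (msg->request_headers, read_position, -1);
    }
    return msg;
  }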
Comment 1 Arnaud Vrac 2017-01-02 17:51:23 UTC
Created attachment 342728 [details] [review]
souphttpsrc: retry request on early termination from the server

Works for me and passes the autotests.
Comment 2 Arnaud Vrac 2017-01-02 17:52:47 UTC
I wonder if I should check the seekable flag when retrying with a range. The retry is done even if the server sends the Accept-Ranges: none header. In that case, should an error be returned, or EOS?
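
[Editor's note] A sketch of what such a check could look like (hypothetical helper, libsoup 2 API); whether a failed check should map to an error or to EOS is exactly the open question above:

  #include <libsoup/soup.h>

  /* Illustrative check only: treat the response as range-capable unless
   * the server explicitly advertises "Accept-Ranges: none". */
  static gboolean
  server_accepts_ranges (SoupMessage * msg)
  {
    const char *accept_ranges =
        soup_message_headers_get_one (msg->response_headers, "Accept-Ranges");

    return accept_ranges == NULL ||
        g_ascii_strcasecmp (accept_ranges, "none") != 0;
  }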
Comment 3 Thiago Sousa Santos 2017-01-02 19:18:54 UTC
Do you have a setup to reproduce this issue?
Comment 4 Arnaud Vrac 2017-01-02 21:07:38 UTC
I've set the IO timeout to 5 seconds on my server to reproduce this easily. You can simply run gst-play-1.0 http://absolut.zogzog.org/share/samples/mkv/casino.royale.2006.1080p.bluray.x264.sample-hv.mkv

Press Space to pause and hit Space again after 5-10 seconds to resume. Playback should stop early without the patch.
Comment 5 Arnaud Vrac 2017-01-16 11:19:54 UTC
Shouldn't this be a blocker for the next 1.10 release?
Comment 6 Thiago Sousa Santos 2017-01-16 20:24:45 UTC
Review of attachment 342728 [details] [review]:

::: ext/soup/gstsouphttpsrc.c
@@ -1667,3 @@
   } else {
     gst_buffer_unref (*outbuf);
-    if (read_bytes < 0) {

We should preserve this condition, because it is an explicit error. It can also happen when we don't have the size, and then we have to retry anyway.
Comment 7 Sebastian Dröge (slomo) 2017-01-18 11:06:53 UTC
Arnaud, can you update the patch?
Comment 8 Arnaud Vrac 2017-01-19 11:37:35 UTC
Created attachment 343791 [details] [review]
souphttpsrc: retry request on early termination from the server

You're right, here's the updated patch.
Comment 9 Sebastian Dröge (slomo) 2017-01-23 10:52:23 UTC
Review of attachment 343791 [details] [review]:

::: ext/soup/gstsouphttpsrc.c
@@ +1632,3 @@
     gst_buffer_unref (*outbuf);
+    if (read_bytes < 0 ||
+        (src->have_size && src->read_position < src->content_size)) {

This seems problematic, as content_size is the "encoded" content size while read_position counts "decoded" bytes. Think of gzip/deflate compression: libsoup transparently decompresses for us, but the content size will then be a different value.

I'm worried that we might run into an infinite loop here, retrying over and over again just because we can read fewer bytes than the content size says there are (for whatever reason).

Is something preventing that?
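
[Editor's note] The usual safeguard against such a loop is a retry counter that gives up after a configured limit, roughly like this (a sketch only, not the element's actual bookkeeping):

  #include <glib.h>

  /* Illustrative guard: bound the number of consecutive retries so a
   * persistent short read cannot loop forever. A negative limit is
   * taken to mean "retry without limit". */
  static gboolean
  may_retry_again (gint retry_limit, guint retries_done)
  {
    return retry_limit < 0 || retries_done < (guint) retry_limit;
  }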
Comment 10 Sebastian Dröge (slomo) 2017-01-26 11:51:34 UTC
Arnaud, how should we move forward with this? I'd like to include this patch in 1.10.3.
Comment 11 Arnaud Vrac 2017-01-26 13:10:48 UTC
Hi Sebastian, after a quick look I haven't found a way to retrieve the read position at the socket level in order to compare it with the content size. I also don't think we can differentiate between a normal end of stream and a disconnection caused by a server-side timeout.

So while I do agree there's an issue with retrying "encoded" content, here are some mitigating points:
 - this is the code that was in 1.8, with the same caveat
 - the "compress" property is false by default
 - the encoded input is usually smaller than the decoded size
 - there's a maximum number of retries, although it can be overridden to -1 for infinite retries (see the usage sketch below).

I'm not sure if that's enough to warrant inclusion in 1.10...
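
[Editor's note] On the last point, the limit is exposed to applications through the element's "retries" property. A minimal usage sketch (the URL is a placeholder and the value 3 is just an example; it assumes gst_init() has already been called):

  #include <gst/gst.h>

  /* Illustrative: cap souphttpsrc at a small, finite number of retries
   * instead of relying on the default or on -1 (infinite). */
  static GstElement *
  make_capped_http_source (void)
  {
    GstElement *src = gst_element_factory_make ("souphttpsrc", NULL);

    if (src != NULL) {
      g_object_set (src,
          "location", "http://example.com/stream.mkv",
          "retries", 3,
          NULL);
    }
    return src;
  }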
Comment 12 Sebastian Dröge (slomo) 2017-01-26 13:59:57 UTC
Attachment 343791 [details] pushed as 03db374 - souphttpsrc: retry request on early termination from the server
Comment 13 Sebastian Dröge (slomo) 2017-01-26 14:00:38 UTC
(In reply to Arnaud Vrac from comment #11)
>  - there's a max number of retries, although it can be overriden to -1 for
> infinite retries.

This part is good enough already, thanks for making that clear :)
Comment 14 Edward Hervey 2017-11-08 15:37:37 UTC
FWIW, I just pushed a commit that fixes a regression introduced by the change above:

commit 770bb07f3059a529a54a3fa3521da641afe25c5e (HEAD -> master, origin/master, origin/HEAD)
Author: Edward Hervey <edward@centricular.com>
Date:   Wed Nov 8 16:34:01 2017 +0100

    souphttpsrc: Fix seeking back to 0
    
    This is a regression introduced by "03db374 - souphttpsrc: retry
    request on early termination from the server"
    
    The problem was that when seeking back to 0, we would not end up calling
    add_range_header() which in addition to adding range headers *ALSO* sets
    the read_position to the requested one.
    
    This would result in a wide variety of later failures, like reading
    again and again instead of stopping properly.
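
[Editor's note] In other words, the per-request position bookkeeping must run even when the requested offset is 0, where no Range header would otherwise be added. A simplified sketch of that idea (hypothetical names, not the literal code from the fix):

  #include <glib.h>

  typedef struct {
    guint64 request_position;  /* offset the seek asked for */
    guint64 read_position;     /* offset the element believes it has read to */
  } SrcPositions;

  /* Illustrative only: bring read_position in line with the requested
   * offset for every request, even one starting at 0. If read_position
   * is left stale, a later "did we get everything?" comparison keeps
   * triggering retries instead of stopping properly. */
  static void
  prepare_request_positions (SrcPositions * pos)
  {
    pos->read_position = pos->request_position;
  }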