GNOME Bugzilla – Bug 776720
souphttpsrc: no request retry on early server termination
Last modified: 2017-11-08 15:37:37 UTC
There's a regression in 1.10 and git master in the souphttpsrc element compared to 1.8. Previously, when a request terminated early because of a server-side timeout, souphttpsrc would retry the request and continue if possible. Now the pipeline just returns EOS. The result is that when you pause a stream backed by an HTTP source for too long, playback stops shortly after resuming, with no error reported.
Created attachment 342728 [details] [review]
souphttpsrc: retry request on early termination from the server

Works for me and passes the autotests.
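For reference, the core idea is roughly this (a minimal sketch, not the attachment itself; the variable names follow gstsouphttpsrc.c but the exact context is an assumption):

  /* Sketch of the retry idea: if libsoup reports end-of-data before we
   * have read content_size bytes, assume the server dropped the
   * connection and retry with a new ranged request from read_position,
   * instead of pushing EOS downstream. */
  if (src->have_size && src->read_position < src->content_size) {
    GST_DEBUG_OBJECT (src, "connection closed at %" G_GUINT64_FORMAT
        " of %" G_GUINT64_FORMAT " bytes, retrying",
        src->read_position, src->content_size);
    ret = GST_FLOW_CUSTOM_ERROR;        /* re-issue the request */
  } else {
    ret = GST_FLOW_EOS;
  }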
I wonder if I should check the seekable flag when retrying with a range: the retry currently happens even if the server sent an "Accept-Ranges: none" header. In that case, should an error be returned, or EOS?
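Something like this is what I mean (hedged sketch only, assuming the element keeps an Accept-Ranges-derived "seekable" flag):

  /* Assumed sketch, not part of the patch: give up instead of retrying
   * when the server sent "Accept-Ranges: none", since a ranged retry
   * would either fail or restart from byte 0 and duplicate data. */
  if (src->read_position > 0 && !src->seekable) {
    GST_WARNING_OBJECT (src, "server does not accept ranges, not retrying");
    return GST_FLOW_EOS;        /* or GST_FLOW_ERROR, hence the question */
  }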
Do you have a setup to reproduce this issue?
I've set the I/O timeout to 5 seconds on my server to make this easy to reproduce. You can simply run:

gst-play-1.0 http://absolut.zogzog.org/share/samples/mkv/casino.royale.2006.1080p.bluray.x264.sample-hv.mkv

Press Space to pause, then hit Space again after 5-10 seconds to resume. Without the patch, playback should stop early.
Shouldn't this be a blocker for the next 1.10 release?
Review of attachment 342728 [details] [review]:

::: ext/soup/gstsouphttpsrc.c
@@ -1667,3 @@
     } else {
       gst_buffer_unref (*outbuf);
-      if (read_bytes < 0) {

We should preserve this condition, because it is an explicit error. It can also happen when you don't have the size, and then we have to retry anyway.
Arnaud, can you update the patch?
Created attachment 343791 [details] [review]
souphttpsrc: retry request on early termination from the server

You're right, here's the updated patch.
Review of attachment 343791 [details] [review]:

::: ext/soup/gstsouphttpsrc.c
@@ +1632,3 @@
       gst_buffer_unref (*outbuf);
+      if (read_bytes < 0 ||
+          (src->have_size && src->read_position < src->content_size)) {

This seems problematic, as content_size is the "encoded" size while the read position counts "decoded" bytes. Think of the case of gzip/deflate compression: libsoup decompresses things for us transparently, but content_size will then be a different value.

I'm worried that we might run into an infinite loop here, retrying over and over again just because we can read fewer bytes than content_size says there are (for whatever reason). Is something preventing that?
Arnaud, how should we move forward with this? I'd like to include this patch in 1.10.3
Hi Sebastian, after a quick look I haven't found a way to retrieve the read position at the socket level, to be able to compare it with the content size. I also don't think we can differentiate between a normal end of stream and a disconnection caused by a server-side timeout. So while I do agree there's an issue with retrying "encoded" content, here are some mitigating points:

- this is the code that was in 1.8, with the same caveat
- the "compress" property is false by default
- the input (encoded) size is usually smaller than the decoded size
- there's a maximum number of retries, although it can be overridden to -1 for infinite retries (sketched below)

I'm not sure if that's enough to warrant inclusion in 1.10...
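The retry bound looks roughly like this (a sketch under assumptions: the property is called "retries" on souphttpsrc, but the field names here are guesses):

  /* Sketch of the bound that keeps the worst case finite: each
   * reconnect bumps a counter, and unless the "retries" property was
   * set to -1 (retry forever), we error out once it is exceeded. */
  src->retry_count++;
  if (src->max_retries != -1 && src->retry_count > src->max_retries) {
    GST_ELEMENT_ERROR (src, RESOURCE, READ, (NULL),
        ("Could not read from %s, giving up after %d retries",
            src->location, src->retry_count - 1));
    return GST_FLOW_ERROR;
  }
  /* otherwise fall through and issue the request again */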
Attachment 343791 [details] pushed as 03db374 - souphttpsrc: retry request on early termination from the server
(In reply to Arnaud Vrac from comment #11)
> - there's a maximum number of retries, although it can be overridden to -1
> for infinite retries

This part is good enough already, thanks for making that clear :)
FWIW, I just pushed a commit that fixes a regression introduced by the commit from this bug:

commit 770bb07f3059a529a54a3fa3521da641afe25c5e (HEAD -> master, origin/master, origin/HEAD)
Author: Edward Hervey <edward@centricular.com>
Date:   Wed Nov 8 16:34:01 2017 +0100

    souphttpsrc: Fix seeking back to 0

    This is a regression introduced by "03db374 - souphttpsrc: retry
    request on early termination from the server".

    The problem was that when seeking back to 0, we would not end up
    calling add_range_header(), which in addition to adding range headers
    *ALSO* sets the read_position to the requested one. This would result
    in a wide variety of later failures, like reading again and again
    instead of stopping properly.
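The double duty of add_range_header() that the commit message refers to looks roughly like this (simplified sketch; the names follow gstsouphttpsrc.c but the body is assumed):

  /* Sketch: besides appending the Range header, this also records the
   * new read position. Skipping the call on a seek back to 0 therefore
   * left read_position stale, which is what the fix addresses. */
  static gboolean
  gst_soup_http_src_add_range_header (GstSoupHTTPSrc * src, guint64 offset,
      guint64 stop_offset)
  {
    gchar buf[64];

    if (offset || stop_offset != (guint64) -1) {
      if (stop_offset != (guint64) -1)
        g_snprintf (buf, sizeof (buf), "bytes=%" G_GUINT64_FORMAT "-%"
            G_GUINT64_FORMAT, offset, stop_offset);
      else
        g_snprintf (buf, sizeof (buf), "bytes=%" G_GUINT64_FORMAT "-", offset);
      soup_message_headers_append (src->msg->request_headers, "Range", buf);
    }
    src->read_position = offset;        /* the side effect in question */
    return TRUE;
  }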