Bug 570772 - WebDAV upload progress indicator completely inaccurate
Status: RESOLVED FIXED
Product: gvfs
Classification: Core
Component: webdav backend
Version: 1.1.x
OS: Other Linux
Priority: Normal
Severity: normal
Target Milestone: ---
Assigned To: Christian Kellner
QA Contact: gvfs-maint
Duplicates: 639411
Depends on:
Blocks:
Reported: 2009-02-06 10:25 UTC by Pedro Villavicencio
Modified: 2014-04-30 19:10 UTC
See Also:
GNOME target: ---
GNOME version: 2.25/2.26


Attachments
Modify SoupOutputStream to use chunked requests (13.42 KB, patch) - 2010-10-09 20:24 UTC, Ryan Brown (no flag)
dav: Implement push support (13.57 KB, patch) - 2014-04-09 21:45 UTC, Ross Lagerwall (committed)

Description Pedro Villavicencio 2009-02-06 10:25:22 UTC
This report was filed here:

https://bugs.edge.launchpad.net/ubuntu/+source/gvfs/+bug/314588

"I'm using Ubuntu 8.10, Nautilus 2.24.1 and I'm trying to copy a large file (700MB) to a WebDAV server (specifically, a file management program on my iPod Touch, over WiFi) which is mounted by Nautilus.

What I expected:
The progress meter would fill slowly, as in System Monitor I saw the network upload speed was around 100kB/s, and only 200MB was uploaded.

What happened:
The progress meter filled rapidly, and just froze at 100% until the copy was done (which took ages). The label also reported that "700MB of 700MB" was copied."

there's also a similar bug here:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=352018
http://bugzilla.gnome.org/show_bug.cgi?id=341837

Thanks,
Comment 1 Christian Kellner 2009-05-26 10:36:41 UTC
Yep. That is due to the way we are doing uploads at the moment. We should fix that.
Comment 2 Christian Kellner 2010-08-22 12:19:25 UTC
Currently we gather everything in memory and upload (PUT) it on close(). That obviously messes up the progress bar. I did some research into why it is implemented this way, and I am noting it here so it is not forgotten yet again:

Most servers I tested respond with an error code (411 Length Required) when we attempt a chunked upload; they want to know the length (Content-Length) up front, before the PUT. We cannot provide this information, since for the "normal" write operation (open, write, write, close) we do not have it.
The situation is different when uploading local files (the push method).
Comment 3 Dan Winship 2010-08-22 14:38:46 UTC
We should still be able to get progress info by listening to the wrote-body-data signal on the message...
Comment 4 Christian Kellner 2010-08-22 17:41:04 UTC
The problem is that in the open/write/write/close case we don't have (manual) control over the progress bar. The progress is updated "automatically" in gio (copy_stream_with_progress, called from copy_fallback) after all bytes have been written in one iteration of the copy loop. But we do the real write on close. So the progress bar is totally out of sync with the real write. Not sure how the wrote-body-data signal could help here ;-(
Comment 5 Ryan Brown 2010-10-09 20:23:05 UTC
Here's a patch that modifies SoupOutputStream to use chunked requests. This avoids having to copy the entire file into memory and allows nautilus to provide a real progress bar. However, it also breaks compatibility with WebDAV servers that don't work with chunked requests.

Looking at the spec, WebDAV extends HTTP 1.1, which in turn says that chunked encoding support is mandatory. So technically, servers that don't support chunked encoding can be considered broken. I don't know if it's acceptable to break support for them, though. The built-in WebDAV client for Mac OS X has also used chunked requests since 10.5.3 (May 2008), so to me it seems safe to make this change in gvfs.
Comment 6 Ryan Brown 2010-10-09 20:24:26 UTC
Created attachment 172027 [details] [review]
Modify SoupOutputStream to use chunked requests
Comment 7 Dan Winship 2010-10-09 23:35:54 UTC
(In reply to comment #5)
> Looking at the spec, WebDAV extends HTTP 1.1, which in turn says that chunked
> encoding support is mandatory.

Support for understanding the *syntax* of chunked encoding is mandatory, but implementations are not required to accept chunked encoding for any *particular* request/response. E.g., the server may insist on knowing the length of the resource before it decides whether or not it's going to accept it; it doesn't want to wait until it has received and stored 2G of chunks first.

> The built-in WebDAV client for Mac OS X has also used
> chunked requests since 10.5.3 (May 2008) so to me it seems safe to make this
> change in gvfs.

Hm... does it send both Content-Length and Content-Encoding:chunked maybe?
Comment 8 Ryan Brown 2010-10-10 21:27:15 UTC
Point taken about the HTTP spec. I checked out how the Mac client sends PUT requests and I couldn't figure out a way to get it to send content-length encoding. Unless I'm mistaken, I think it's chunked only. It does have an odd header called "X-Expected-Entity-Length" that has the file size, though.
Comment 9 Thomas Hecker 2011-01-13 11:25:45 UTC
*** Bug 639411 has been marked as a duplicate of this bug. ***
Comment 10 Ross Lagerwall 2014-04-09 21:45:30 UTC
Created attachment 273941 [details] [review]
dav: Implement push support

Implement push support for the webdav backend.  This allows large files
to be uploaded properly without consuming large amounts of memory and
also makes the progress bar work when uploading.

Data is provided to libsoup in chunks, activated via the wrote_chunk
signal.  Note that this uses content-length encoding rather than chunked
encoding, since chunked encoding is not supported by many servers.  The
CAN_REBUILD flag is set on the SoupMessage and accumulate is set to FALSE
so that libsoup does not buffer all the data in memory at once.  This
does mean that the restarted signal needs to be handled correctly, by
seeking to the beginning of the file.

The code is written in an asynchronous fashion so that other operations
are not blocked since the webdav backend is single-threaded.
Unfortunately, this does complicate the code, especially with regards to
having reads in flight and handling the restarted signal from libsoup.

A quick benchmark writing to a tmpfs via Apache's mod_dav achieved
just over 1GB/s.
Comment 11 Ondrej Holy 2014-04-30 12:49:51 UTC
Review of attachment 273941 [details] [review]:

It has to be rebased; otherwise it looks good to me. Thanks!
Comment 12 Ross Lagerwall 2014-04-30 19:09:50 UTC
Thanks for the review, pushed to master as eb11ec725f5d2850597fc88635609474c857f6aa.

While this doesn't fix the case of copying from a network share to a WebDAV share, I think this is probably good enough for now to close.