Bug 523015 - Implement sliding window based upload operation
Implement sliding window based upload operation
Status: RESOLVED FIXED
Product: gvfs
Classification: Core
Component: sftp backend
Version: unspecified
Hardware/OS: Other / All
Priority: Normal  Severity: normal
Target Milestone: ---
Assigned To: gvfs-maint
QA Contact: gvfs-maint
Depends on: 711247
Blocks:
 
 
Reported: 2008-03-17 18:10 UTC by Benjamin Otte (Company)
Modified: 2013-11-13 16:52 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
Test details (1007 bytes, text/plain)
2009-10-26 17:49 UTC, Jakob Unterwurzacher
sftp: Implement push support (23.05 KB, patch)
2013-11-05 15:28 UTC, Ross Lagerwall
none
sftp: Implement push support (23.20 KB, patch)
2013-11-08 07:15 UTC, Ross Lagerwall
accepted-commit_now

Description Benjamin Otte (Company) 2008-03-17 18:10:38 UTC
Upload in SFTP is slow and limited by the ping to the server. I tested this using annarchy.freedesktop.org with a ~200ms ping. With scp I got ~110kB/s upload; with Nautilus and gvfs-copy, ~12kB/s.

This is due to Nautilus waiting for the reply to every write, and writes being done in 8kB chunks.
Comment 1 Andrea Cimitan 2008-03-17 18:21:34 UTC
Confirm the slowness...
Comment 2 Alexander Larsson 2008-03-19 11:29:16 UTC
Hmm, the fallback copy operation uses a while (not done) { read(); write(); } loop, but the ideal implementation for copying is to use multiple outstanding write operations without blocking on the write results. However, that doesn't generally match the read()/write() API that GIO uses, and it is in many cases a bad idea, as you'd like errors reported at the right spot when streaming the output.

For downloads we automatically chunk reads into large blocks and auto-preload data. However, this is not possible for writes. What is possible is to implement the upload() backend method for sftp and other protocols that are able to handle this, with a proper sliding window transfer approach.

That's a lot of work though. A far easier workaround would be to bump up the block size in the copy fallback a bit. I'll look into that.
Comment 3 Alexander Larsson 2008-03-19 12:16:55 UTC
2008-03-19  Alexander Larsson  <alexl@redhat.com>

        * gfile.c:
        (copy_stream_with_progress):
	Bump block size for copy to 64k to minimize
	overhead for low latency links. (#523015)

This helps a great deal, but ideally we should implement sliding window transfers for upload for optimal performance.
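The arithmetic behind these two comments can be sketched with a quick back-of-the-envelope model (an illustration only, not gvfs code): with one blocking write per round trip, throughput is capped at the chunk size divided by the RTT. Real figures fall below these ceilings because of per-request overhead.

```python
def stop_and_wait_throughput(chunk_bytes, rtt_s):
    """Throughput ceiling when each write blocks on its reply:
    at most one chunk crosses the link per round trip."""
    return chunk_bytes / rtt_s  # bytes per second

# Original 8 kB chunks over a ~200 ms link, versus the 64 kB bump:
print(stop_and_wait_throughput(8 * 1024, 0.2) / 1024)   # 40.0 (kB/s ceiling)
print(stop_and_wait_throughput(64 * 1024, 0.2) / 1024)  # 320.0 (kB/s ceiling)
```

This matches the observed behavior: the 64k bump "helps a great deal" (an 8x higher ceiling) but is still bounded by latency rather than link capacity.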
Comment 4 Jakob Unterwurzacher 2009-10-26 17:49:11 UTC
Created attachment 146282 [details]
Test details

Some numbers:
In short: gvfs is still about twice as slow as the command-line sftp client against an SSH server on the LAN, for both upload and download. Interestingly, for a local SSH server gvfs performs just as well as command-line sftp. I guess this is because the latency is much lower.


The numbers (for more details, see the attachment gvfs-copy-vs-sftp.txt):

Transferring 100MB of random data
GNOME 2.26.1 to an OpenSSH server


Local SSH server (dual 1.6GHz Core 2 Duo)

           down  up
sftp       14.5s 14.7s
gvfs-copy  16.4s 16.6s


SSH server on 100Mb LAN (Atom 1.6GHz Eeepc):

           down  up
sftp       26.0s 12.6s
gvfs-copy  45.7s 30.6s
Comment 5 Benjamin Otte (Company) 2009-10-26 18:22:36 UTC
Interesting numbers, thanks.

What I'd be interested in is the results for FTP, as I recently implemented the new pull method in the FTP backend for GNOME 2.28. So it'd be nice to know if:
1) FTP shows the same slowdown in 2.26
2) FTP in 2.28 is faster
I guess I'll have to reproduce these numbers.

Also, did you have a look at CPU utilization? Maybe gvfs is just maxing out your CPU?
Comment 6 Jamin W. Collins 2010-01-23 17:33:28 UTC
I believe the detail below will clearly show that gvfs is not maxing out the CPU. Note that a simultaneous transfer of two 1 GB files to the same destination takes roughly the same time as transferring a single file, which in turn is roughly the same amount of time it takes to transfer both files sequentially using sftp.

$ apt-cache policy openssh-client
openssh-client:
  Installed: 1:5.1p1-6ubuntu2
  Candidate: 1:5.1p1-6ubuntu2
  Version table:
 *** 1:5.1p1-6ubuntu2 0
        500 http://us.archive.ubuntu.com karmic/main Packages
        100 /var/lib/dpkg/status

$ apt-cache policy gvfs-bin
gvfs-bin:
  Installed: 1.4.1-0ubuntu1
  Candidate: 1.4.1-0ubuntu1
  Version table:
 *** 1.4.1-0ubuntu1 0
        500 http://us.archive.ubuntu.com karmic/main Packages
        100 /var/lib/dpkg/status

$ dd if=/dev/urandom bs=1M count=1000 of=test1.img
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 149.168 s, 7.0 MB/s

$ dd if=/dev/urandom bs=1M count=1000 of=test2.img
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 149.596 s, 7.0 MB/s

$ time sftp -b batch 192.168.10.21:
Changing to: /home/jcollins/
sftp> put test1.img
Uploading test1.img to /home/jcollins/test1.img

real	1m31.333s
user	1m9.490s
sys	0m5.320s

$ time gvfs-copy test1.img .gvfs/sftp\ on\ 192.168.10.21/home/jcollins/

real	3m2.778s
user	0m0.120s
sys	0m1.530s

$ time gvfs-copy test1.img .gvfs/sftp\ on\ 192.168.10.21/home/jcollins/ & time gvfs-copy test2.img .gvfs/sftp\ on\ 192.168.10.21/home/jcollins/
[1] 9095

real	3m18.182s
user	0m0.150s
sys	0m1.160s
[1]+  Done                    time gvfs-copy test1.img .gvfs/sftp\ on\ 192.168.10.21/home/jcollins/

real	3m18.187s
user	0m0.250s
sys	0m2.410s

$ time sftp -b batch-combined 192.168.10.21:
Changing to: /home/jcollins/
sftp> put test1.img
Uploading test1.img to /home/jcollins/test1.img
sftp> put test2.img
Uploading test2.img to /home/jcollins/test2.img

real	2m59.424s
user	2m17.820s
sys	0m10.260s
Comment 7 Jamin W. Collins 2010-11-25 18:27:04 UTC
No update on this in nine months?  I believe the data I previously posted shows that this is not a CPU limitation, does it not?  Is there any further information I can provide to help?
Comment 8 Benjamin Otte (Company) 2010-11-25 18:52:54 UTC
I think it's just that nobody has written the code yet, not that anyone disagrees about its usefulness. Every open source project lacks the manpower to implement all the ideas.
Comment 9 Yongzhi Pan 2011-11-27 09:13:56 UTC
Using 11.10 on a 100Mbps home LAN: with rsync the speed is ~11MiB/s, with Nautilus 6-7MiB/s. A Google search leads me here. Any improvement since this was reported?
Comment 10 vitalyhome 2012-02-17 13:32:01 UTC
Same issue.
Upload with gvfs/Nautilus: 2-4kB/s.
With FileZilla it is much faster.
No changes since the report =(
Comment 11 Ross Lagerwall 2013-11-05 15:28:34 UTC
Created attachment 259020 [details] [review]
sftp: Implement push support

Implement sftp push support with a sliding window to improve the speed
of sftp uploads.

The implementation is based on the one from the OpenSSH sftp client.  It
uses up to 64 outstanding write requests where each request is 32KiB in
size which gives a maximum theoretical bandwidth of 2MiB per RTT.
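The loop described above can be sketched roughly as follows (hypothetical helper names, purely an illustration; the real implementation is C code in daemon/gvfsbackendsftp.c): a sliding window keeps a bounded number of write requests in flight instead of blocking on each reply.

```python
MAX_OUTSTANDING = 64    # request limit, as in the patch description
CHUNK_SIZE = 32 * 1024  # 32 KiB per write request, as in the patch description

def upload(read_chunk, send_request, wait_any_reply):
    """Drive an upload through caller-supplied I/O callbacks.

    read_chunk() -> bytes (empty at EOF)
    send_request(offset, data) issues an asynchronous write
    wait_any_reply() blocks until one outstanding write completes
    """
    offset = 0
    outstanding = 0
    while True:
        data = read_chunk()
        if not data:
            break
        if outstanding == MAX_OUTSTANDING:
            wait_any_reply()        # window full: wait for one free slot
            outstanding -= 1
        send_request(offset, data)  # fire and forget; reply handled later
        offset += len(data)
        outstanding += 1
    while outstanding:              # drain the remaining replies
        wait_any_reply()
        outstanding -= 1
    return offset
```

With a full window, throughput scales to roughly MAX_OUTSTANDING * CHUNK_SIZE per round trip, which is the 2MiB per RTT quoted above.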

This patch results in substantial performance improvements, especially
for high-latency links.

Some benchmark figures:
Old behavior:
Copying from local server = 6.1MB/s
Copying from local server with 250ms of RTT latency = 0.249MB/s
Copying many small files with 250ms of RTT latency = 0.93 files per
second

New behavior:
Copying from local server = 12.2MB/s
Copying from local server with 250ms of RTT latency = 6.2MB/s
Copying many small files with 250ms of RTT latency = 1.24 files per
second

OpenSSH sftp client:
Copying from local server = 12.8MB/s
Copying from local server with 250ms of RTT latency = 6.7MB/s
Copying many small files with 250ms of RTT latency = 1.33 files per
second
Comment 12 Jamin W. Collins 2013-11-06 19:52:23 UTC
@rosslagerwall,
Thank you for the patch to this long-standing bug! This does appear to improve the throughput for pushes (putting files on the remote server). However, it does not improve the throughput for pulls (downloading files from the remote server). Additionally, the patch appears to break the progress notification.

This is definitely a positive step toward resolving this report.
Comment 13 Ross Lagerwall 2013-11-06 20:02:57 UTC
Thanks for the feedback. This bug is specifically about the upload operation (aka push). Please see https://bugzilla.gnome.org/show_bug.cgi?id=532951 for a patch for the download (or pull) operation.

Also, see https://bugzilla.gnome.org/show_bug.cgi?id=711247 for the bug about the progress notification and a corresponding patch.
Comment 14 Jamin W. Collins 2013-11-06 22:52:37 UTC
@rosslagerwall,

That is awesome news!  I've pulled all three patches, updated them for minor conflicts and built updated Ubuntu packages for 13.10.  So far they are working perfectly.  

Builds for 13.10 can be found in my PPA:
https://launchpad.net/~jcollins/+archive/gvfs
Comment 15 Alexander Larsson 2013-11-07 15:56:58 UTC
Review of attachment 259020 [details] [review]:

a few initial comments, will look more later.

::: daemon/gvfsbackendsftp.c
@@ +5277,3 @@
+    }
+
+  fin = g_file_read (source, NULL, &error);

This is a sync open, technically you should not do that in an async (try_*) operation.

@@ +5287,3 @@
+                                             G_FILE_ATTRIBUTE_STANDARD_SIZE ","
+                                             G_FILE_ATTRIBUTE_UNIX_MODE,
+                                             NULL, &error);

sync here too

@@ +5314,3 @@
+      command = new_command_stream (op_backend, SSH_FXP_LSTAT);
+      put_string (command, destination);
+      queue_command_stream_and_free (op_backend, command, push_improve_error_reply, job, NULL);

Do you really have to do this? Won't a NOT_SUPPORTED fall back to the fallback code that checks the same thing?

@@ +5320,3 @@
+
+out:
+  g_object_unref (source);

This is an implicit sync close operation. :)
Comment 16 Ross Lagerwall 2013-11-07 17:09:47 UTC
Thanks for the feedback.

(In reply to comment #15)
> Review of attachment 259020 [details] [review]:
> 
> a few initial comments, will look more later.
> 
> ::: daemon/gvfsbackendsftp.c
> @@ +5277,3 @@
> +    }
> +
> +  fin = g_file_read (source, NULL, &error);
> 
> This is a sync open, technically you should not do that in an async (try_*)
> operation.

OK, I will fix. I wondered about that and looked at the trash backend, which is full of sync operations in try_ methods :-)

Are sync operations in callbacks OK? E.g "g_clear_object (&handle->in);" in push_read_cb()?

> 
> @@ +5287,3 @@
> +                                             G_FILE_ATTRIBUTE_STANDARD_SIZE
> ","
> +                                             G_FILE_ATTRIBUTE_UNIX_MODE,
> +                                             NULL, &error);
> 
> sync here too

Will fix...

> 
> @@ +5314,3 @@
> +      command = new_command_stream (op_backend, SSH_FXP_LSTAT);
> +      put_string (command, destination);
> +      queue_command_stream_and_free (op_backend, command,
> push_improve_error_reply, job, NULL);
> 
> Do you really have to do this? Won't a NOT_SUPPORTED fall back to the fallback
> code that checks the same thing?

OK.

> 
> @@ +5320,3 @@
> +
> +out:
> +  g_object_unref (source);
> 
> This is an implicit sync close operation. :)

No it's not. But the line "g_object_unref (fin);", a few lines up is...
Comment 17 Alexander Larsson 2013-11-07 22:16:29 UTC
Well, local files are more likely to be fast, so lots of code cheats. But there is no guarantee of that really working; they may block due to some fsync, or be on an NFS mount with the network down, or whatever.

The unref is technically a sync close unless you ensured an async close before. But really, closing a read-only local file is probably safe enough that you can always get away with it.

As for NOT_SUPPORTED: if falling back to that works, that is a better option in this case. If you're able to actually get atomic errors (i.e. without further I/O operations after the actual open), that may be worth doing. But if you're using a racy stat-based error check anyway, you may as well use the well-tested standard one.
Comment 18 Ross Lagerwall 2013-11-08 07:15:31 UTC
Created attachment 259240 [details] [review]
sftp: Implement push support

Implement sftp push support with a sliding window to improve the speed
of sftp uploads.

The implementation is based on the one from the OpenSSH sftp client.  It
uses up to 64 outstanding write requests where each request is 32KiB in
size which gives a maximum theoretical bandwidth of 2MiB per RTT.

This patch results in substantial performance improvements, especially
for high-latency links.

Some benchmark figures:
Old behavior:
Copying from local server = 6.1MB/s
Copying from local server with 250ms of RTT latency = 0.249MB/s
Copying many small files with 250ms of RTT latency = 0.93 files per
second

New behavior:
Copying from local server = 12.2MB/s
Copying from local server with 250ms of RTT latency = 6.2MB/s
Copying many small files with 250ms of RTT latency = 1.24 files per
second

OpenSSH sftp client:
Copying from local server = 12.8MB/s
Copying from local server with 250ms of RTT latency = 6.7MB/s
Copying many small files with 250ms of RTT latency = 1.33 files per
second
Comment 19 Ross Lagerwall 2013-11-08 07:17:36 UTC
Changes in version 2:
* Use only async ops for the local file.

* Fallback to default implementation to get better error messages.
Comment 20 Alexander Larsson 2013-11-13 14:20:42 UTC
Review of attachment 259240 [details] [review]:

This looks good to me. We should get it in to get it tested.

::: daemon/gvfsbackendsftp.c
@@ +4867,3 @@
+          else
+            {
+              /* Restore the source file's permissions. */

Do we really want to do this? I mean, it may make sense for the user permissions. But the other will affect e.g. a different gid.
Comment 21 Ross Lagerwall 2013-11-13 16:49:02 UTC
(In reply to comment #20)
> Review of attachment 259240 [details] [review]:
> 
> This looks good to me. We should get it in to get it tested.

Thanks for the review. Pushed to master as 7890d2801a7f3998b336452fadc8ad0d01d06e60.

> 
> ::: daemon/gvfsbackendsftp.c
> @@ +4867,3 @@
> +          else
> +            {
> +              /* Restore the source file's permissions. */
> 
> Do we really want to do this? I mean, it may make sense for the user
> permissions. But the other will affect e.g. a different gid.

I would say yes, because it would be surprising behavior for users if some permissions are retained while others are not. Secondly, if I back up some files to a server via sftp and then copy them back again, I am not interested in the server's interpretation of the group and other bits, but I'd like them to be preserved unless I specify G_FILE_COPY_TARGET_DEFAULT_PERMS. As another data point, scp and sftp appear to preserve all the permission bits rather than a subset of them (when run with the preserve flag).
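The "preserve every mode bit" behavior argued for here can be illustrated with Python's shutil.copymode (an analogy only, not gvfs code), which likewise replicates group and other bits verbatim on a POSIX system:

```python
import os
import shutil
import stat
import tempfile

# Illustration: copying *all* permission bits, including group/other,
# the way scp/sftp do when run with their preserve flag.
src = tempfile.NamedTemporaryFile(delete=False).name
dst = tempfile.NamedTemporaryFile(delete=False).name
os.chmod(src, 0o640)       # rw-r----- on the source
shutil.copymode(src, dst)  # replicate every mode bit onto the copy
assert stat.S_IMODE(os.stat(dst).st_mode) == 0o640
```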
Comment 22 Ross Lagerwall 2013-11-13 16:52:20 UTC
Could you maybe review bug #711247? Without the patch there, this change breaks progress updates. Thanks!