GNOME Bugzilla – Bug 155872
Slow transfers with sftp
Last modified: 2006-07-13 16:10:40 UTC
1. Open two nautilus windows.
2. Open a remote location with sftp:// in one
3. Drag and drop a file from local to remote.
I tested this between my laptop and desktop box, both on the same 100 Mbit switch. I also tested
it between my desktop and a remote server reachable over a 10 Mbit network.
Between laptop and desktop, it took 6 minutes for a 350 MiB file. Doing the same transfer on the
command line took 45 seconds, with speeds varying between 7 MiB/s and 11 MiB/s.
Between the remote server and the desktop, a 15 MiB file had an estimated transfer time of 30
minutes. I didn't let it finish, but instead used scp from the command line and completed the
transfer in 20 seconds.
The same results were achieved running the test between two computers using Ubuntu Linux.
*** Bug 159420 has been marked as a duplicate of this bug. ***
"1) Open Nautilus
2) File > Connect to Server
3) Enter the connection details:
- Service type: SSH
- Host: 10.0.0.1
- Port: 22
- Name used for connection: my username
Then the connection succeeded, but when I drag and drop files:
scp: 9.8M/s (01:24 mins).
nautilus: 1.3M/s (08:18 mins)
scp: 10.2M/s (00:03 seconds)
nautilus: 1.3M/s (00:40 seconds)
I did not notice any problem for the FTP."
I am seeing similar behavior using nautilus 2.8.2 and libgnomevfs 2.8.3-11 in
debian unstable. Using either smb or sftp server connections, transfers seem to
be going roughly 10 times slower than scp from the command line. It's slow
enough that streaming video over a 55 Mb/s wireless connection in totem is unusable.
Just some ideas:
My guess would be that inside the sftp module, buffer_check_alloc is
reallocating too often. atomic_io looks pretty straightforward; I can't point at
any other culprit that is used at buffer_write/buffer_send time.
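To illustrate the guess above: a buffer helper that reallocates to the exact size on every append causes one realloc per write, while geometric growth keeps reallocations to O(log n). This is a minimal sketch, not the actual gnome-vfs code; the names `Buffer`, `buffer_check_alloc`, and `buffer_append` merely mirror the identifiers under discussion.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of an sftp-style grow-on-demand buffer.  Growing
 * geometrically (doubling) means appending N bytes triggers only
 * O(log N) reallocations instead of one realloc per append. */
typedef struct {
    unsigned char *data;
    size_t         used;
    size_t         alloc;
} Buffer;

static int
buffer_check_alloc (Buffer *buf, size_t needed)
{
    size_t want = buf->used + needed;
    size_t new_alloc;
    unsigned char *p;

    if (want <= buf->alloc)
        return 0;                        /* enough room already */

    new_alloc = buf->alloc ? buf->alloc : 4096;
    while (new_alloc < want)
        new_alloc *= 2;                  /* geometric growth */

    p = realloc (buf->data, new_alloc);
    if (p == NULL)
        return -1;
    buf->data = p;
    buf->alloc = new_alloc;
    return 0;
}

static int
buffer_append (Buffer *buf, const void *src, size_t len)
{
    if (buffer_check_alloc (buf, len) != 0)
        return -1;
    memcpy (buf->data + buf->used, src, len);
    buf->used += len;
    return 0;
}
```

If the real code instead reallocated to the exact requested size each time, every buffer_write/buffer_send would hit realloc, which is one plausible source of the slowdown being guessed at.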
*** Bug 142022 has been marked as a duplicate of this bug. ***
This is present in FC4 too
Did anybody try gnomevfs-copy to see whether it is also that slow? If not,
could you please try it and report back? Thanks!
I've just tried copying a single 133 MiB file between two hosts:
- gnomevfs-copy: 115 seconds
- scp: 21 seconds
So it looks like it has nothing to do with Nautilus; something is very slow at the gnome-vfs level.
(using gnome-vfs2 2.10.1-5 from Debian unstable)
Any operation I perform with Nautilus SFTP seems to be at least 10x slower than
in command-line sftp. Using Ubuntu Linux 5.04. It's better to use shfs instead
of Gnome-vfs until it gets usable. I just can't wait 30 seconds for the File dialog
to open.
I cleaned invalid connections from .gtk-bookmarks and the slowness
disappeared. I tried to reproduce the phenomenon to file a separate bug report, but couldn't.
I am experiencing the same problem. I get ca. 350KB/s over a line that is
capable of 10Mbit (ca. 1.2MB/s). I get full speed when using scp or sftp directly.
Happens with Gnome 2.12.2!
I can still confirm that. Is this something just some people experience, or is it a general problem? It is really a pity, because it's nice to mount remote shares with ssh/sftp within Nautilus, but it's just too slow to be fun.
Is somebody working on this? I would really appreciate it.
http://www.xml-dev.com:8000/message/20060224.020226.e3bcf11b.en.html has some discussion on this. It's the sftp client's fault. Marking as NOTGNOME.
But the whole point of this bug is that Nautilus is slower than using the command line, which presumably also uses the "sftp client". Maybe I'm not understanding something.
Your link discusses the speed difference between scp/sftp and ftp/rcp, which is a different issue from this bug.
This bug is about the speed difference between the command-line scp/sftp clients and the GNOME/Nautilus sftp client, which is huge, as seen in the description of this bug.
So please reopen the bug: it is not yet resolved, and it is not the fault of OpenSSH, because command-line scp/sftp works just fine at a much higher speed than Nautilus.
Sorry for the bugspam, I'll look into it.
Created attachment 61131 [details] [review]
GnomeVFSXFer, sftp performance patch
I have played with that idea of adjusting the blocksize in xfer quite some time ago (patch http://www.gnome.org/~gicmo/random/speed.patch from June 2005), but that didn't give me any noticeable performance gain (seb128 also helped with the testing). Maybe it's the sftp blocksize tweaking... Are you sure you are seeing "literally by orders of magnitude"? The sftp command-line program doesn't do much better than we do, and if you are right, that would be awesome. I am going to have a quick look at the patch now.
The patch looks fine (just a quick look though) but I can't make any sense of that line:
info->io_block_size = max_req * default_req_len;
I mean, default_req_len was an arbitrary value, not chosen based on any RFC, I guess. At least I couldn't find that value in it. What I did find in it is this:
"All servers SHOULD support
packets of at least 34000 bytes (where the packet size refers to
the full length, including the header above). This should allow
for reads and writes of at most 32768 bytes."
(That's why I am using this odd value in my patch from June 2005).
So I guess I didn't correctly implement the xfer part; your patch looks way better than mine there. But I am not sure about the io_block_size for sftp yet. Can you explain that a bit more? Your explanation didn't really make clear why it should be exactly 8K * 16.
Thanks for the great work though, rocking.
Ubuntu has https://launchpad.net/distros/ubuntu/+source/gnome-vfs2/+bug/23683 about that which just got that comment:
"There is a patch to this bug upstream. I recompiled libgnomevfs2 with the provided patch and the transfer speed rose to the same level as the scp command line tool, and was actually faster than the LUFS sftp filesystem.
This is an annoying bug with a quite simple patch. It would be nice if it would be applied for dapper.
Note: scp uses the sftp protocol, so it is the same thing."
Re comment 20, comment 21:
> I have played with that idea of adjusting the blocksize in xfer quite some time
> ago (patch http://www.gnome.org/~gicmo/random/speed.patch from June 2005) but
> that didn't give me any noticeable performance
> Are you sure you are seeing "literally by orders of magnitude"
Yes, I am seeing that, at least on a loopback connection. Just try it out :). The sftp authors were quite aware of the performance issues, so they allowed for parallelized transfers, i.e. you request some non-overlapping (in this case contiguous) chunks and wait for all of them to finish, requesting all of them "in parallel" (the requests are fired off at once). You pretend to the outside world (i.e. to the VFS) to have a big block size, because the GnomeVFS code will force you to process one I/O block at a time, which in fact consists of multiple parallel transfers, where for each one you request a typical size.
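The chunking described above can be sketched like this. It is a toy model, not the actual sftp-method.c code: `REQ_LEN` stands in for default_req_len, and issuing/awaiting the requests over the wire is elided; the sketch only shows how one big I/O block is split into contiguous, non-overlapping per-request chunks.

```c
#include <assert.h>
#include <stddef.h>

#define REQ_LEN 32768   /* per-request length, stand-in for default_req_len */

/* Fill offsets[]/lengths[] with up to max_req contiguous, non-overlapping
 * chunk descriptors covering [start, start + total).  In the real code,
 * one sftp read/write request would be fired for each chunk at once, and
 * the caller would then wait for all of the replies.  Returns the number
 * of chunks produced. */
static int
split_into_requests (size_t start, size_t total, int max_req,
                     size_t *offsets, size_t *lengths)
{
    int    n   = 0;
    size_t pos = start;
    size_t end = start + total;

    while (pos < end && n < max_req) {
        size_t len = (end - pos < REQ_LEN) ? end - pos : REQ_LEN;
        offsets[n] = pos;   /* chunks are contiguous: each starts where */
        lengths[n] = len;   /* the previous one ended                   */
        pos += len;
        n++;
    }
    return n;
}
```

With a 512 KiB I/O block and max_req = 16, this yields exactly 16 full-size requests in flight, which is the parallelism being described.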
> I can't make any sense of that line:
> info->io_block_size = max_req * default_req_len;
> (...) I am using (32768 bytes buffer) in my patch from June 2005 (...).
You are right that it is a bit arbitrary. It's important to realize that because the GnomeVFS API works block-wise and uses a pull mechanism, i.e. requests the data separately for each of the blocks, the parallelization within this single GnomeVFS request only works when you request more than the length you pick for one sftp request. If the length of the GnomeVFS IO block is smaller or equal to the sftp request (which is what you did) it will only be handled by one of them, i.e. the inner while loop in sftp-method.c:do_read will be left after the first request.
I've picked max_req * default_req_len because my naive guess is that when using too many parallel requests there is some sort of saturation, i.e. you can't gain arbitrarily much performance, and when the max. possible number of requests is pending, you won't gain much, because you have to wait until any of the other requests is finished. Whether max_req, which is the number of max. parallel requests, should be raised, I don't know, but limiting the number of requests is recommended by the client, and transferring chunks of 0.5 MB at once sounds like a good measure for all kinds of pipes.
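As a worked version of that arithmetic: assuming default_req_len = 32768 (the draft's per-request bound quoted earlier) and max_req = 16, values inferred from the "0.5 MB chunks" figure rather than read from the patch itself, the advertised I/O block comes out to 512 KiB.

```c
#include <assert.h>
#include <stddef.h>

/* The block size advertised to GnomeVFS: one I/O block holds exactly
 * max_req parallel sftp requests of default_req_len bytes each. */
static size_t
sftp_io_block_size (size_t max_req, size_t default_req_len)
{
    return max_req * default_req_len;
}
```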
Sidenote: default_req_len was probably picked like this because the sftp draft you quoted suggests that it is a sane lower boundary for any read/write requests, and the fact that openssh (some sort of reference implementation) uses it qualifies it for being right by experience ;). OpenSSH seems to seek and write to the output fd as the request responses arrive, but we can't really do that because GnomeVFS doesn't have a push Xfer API (which can't be properly mapped on non-seekable FSes), i.e. both read and write are granular.
Do you have any questions on the Xfer optimization? In short, it was necessary to handle big differences between the block I/O sizes of the I/O source and destination better: for instance, writes to sftp locations are also parallelized (cf. do_write), which wouldn't work if we just considered the source I/O size, which is typically 8192 bytes for local file systems.
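A sketch of the mismatched-block-size problem, assuming an 8192-byte source block and a 512 KiB destination block (this is illustrative code, not the GnomeVFS Xfer implementation): if the copy loop wrote one source block at a time, each sftp write would be smaller than one request and nothing would run in parallel, so several source reads are staged into one destination-sized write.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define SRC_BLOCK   8192           /* typical local-fs I/O block       */
#define DST_BLOCK  (16 * 32768)    /* sftp I/O block, 16 parallel reqs */

/* Copy `total` bytes from src to dst, "reading" SRC_BLOCK at a time but
 * "writing" only when a full DST_BLOCK is staged (or at end of input),
 * so each write is big enough to be split into parallel sftp requests.
 * Returns the number of writes issued. */
static int
copy_with_mismatched_blocks (const unsigned char *src, unsigned char *dst,
                             size_t total)
{
    unsigned char block[DST_BLOCK];
    size_t filled = 0, read_pos = 0, write_pos = 0;
    int    writes = 0;

    while (read_pos < total || filled > 0) {
        /* stage one source-sized read */
        size_t n = total - read_pos;
        if (n > SRC_BLOCK)
            n = SRC_BLOCK;
        memcpy (block + filled, src + read_pos, n);
        read_pos += n;
        filled   += n;

        /* flush a destination-sized write when full, or at end of input */
        if (filled == DST_BLOCK || (read_pos == total && filled > 0)) {
            memcpy (dst + write_pos, block, filled);
            write_pos += filled;
            filled = 0;
            writes++;
        }
    }
    return writes;
}
```

Considering only the source's 8192-byte block size would issue 64 times as many writes here, each too small to exercise the parallel do_write path.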
Re comment 22:
The patch looks trivial, but the involved machinery is a bit tricky. I'd not apply it in the distro unless it is blessed by at least one GnomeVFS maintainer, because it could break and damage all your data, and eat your children.
No worries, I've planned to wait until the patch is applied upstream. Maybe for 2.14.1, or for the next cycle...
A slightly modified patch was committed by Alex in March. Closing.