Bug 532951 - slow download using sftp://
Status: RESOLVED FIXED
Product: gvfs
Classification: Core
Component: sftp backend
Version: 0.2.x
Hardware/OS: Other All
Priority/Severity: Normal minor
Target Milestone: ---
Assigned To: gvfs-maint
QA Contact: gvfs-maint
Depends on:
Blocks:
Reported: 2008-05-13 12:53 UTC by Michael Kaiser
Modified: 2013-11-18 17:48 UTC
See Also:
GNOME target: ---
GNOME version: 2.21/2.22


Attachments

sftp: Implement pull support (18.17 KB, patch) - 2013-11-02 14:37 UTC, Ross Lagerwall
hung copy with progress output (80.23 KB, text/plain) - 2013-11-07 00:19 UTC, Jamin W. Collins
hung copy without progress output (4.26 KB, application/gzip) - 2013-11-07 00:19 UTC, Jamin W. Collins
sftp: Implement pull support (18.14 KB, patch) - 2013-11-07 10:14 UTC, Ross Lagerwall
sftp: Fix handling of multiple reads of the packet length (1.11 KB, patch) - 2013-11-07 10:14 UTC, Ross Lagerwall (committed)
sftp: Implement pull support (20.32 KB, patch) - 2013-11-16 15:17 UTC, Ross Lagerwall

Description Michael Kaiser 2008-05-13 12:53:12 UTC
Please describe the problem:
Downloading files via sftp using Nautilus has become really slow after I upgraded from Ubuntu Gutsy (Nautilus 2.20.0) to Hardy (Nautilus 2.22.2).

Steps to reproduce:
1. open an sftp:// site in nautilus
2. copy some files to a local folder


Actual results:
The files are transferred to the local computer at a much lower speed than with the sftp or scp command-line tools.

Expected results:
Transfer speed should be comparable to the command-line tools sftp and scp.

Does this happen every time?
yes

Other information:
For a benchmark I copied one large file (1.6 GB) from another computer on my local network. Transfer speeds were:

sftp and scp: ~11 MB/s
nautilus: ~4 MB/s
Comment 1 Cosimo Cecchi 2008-05-13 16:07:02 UTC
-> gvfs

This seems to be more a gvfs issue than a Nautilus one.
Could you try copying from the command line with the gvfs-cp tool and report the result? Thanks.
Comment 2 Michael Kaiser 2008-05-13 17:09:47 UTC
I tried copying a file using gvfs-copy and the speed was approx. the same as using nautilus, around 2.6 MB/s. Using scp for the same file I reached 11 MB/s again, scp -C however gave me only 4.2 MB/s (it was a MPEG-movie file I was copying, so I expected something like that). Does the sftp-backend use compression by default and if yes, is there a way to disable it?
Comment 3 Sebastien Bacher 2008-05-13 17:40:35 UTC
what speed do you have using sftp instead of scp to do the copy?
Comment 4 Michael Kaiser 2008-05-13 17:45:32 UTC
sftp   : 10.7 MB/s
sftp -C:  3.9 MB/s
Comment 5 Josselin Mouette 2009-02-23 22:53:10 UTC
I have a similar issue here. My upload bandwidth is 1024 kbits/s, and using scp or gnome-vfs I can copy at the expected 100+ kB/s speed.

However when using gvfs, the upload is limited to 50 kB/s. Interestingly enough, the bandwidth used is between 100 and 120 kB/s, so it appears it uses twice the bandwidth it actually needs to copy the file.
Comment 6 Josselin Mouette 2009-04-29 20:52:06 UTC
Looks like it is fixed in 1.2.2.
Comment 7 Daniel 2010-03-20 02:52:32 UTC
(In reply to comment #6)
> Looks like it is fixed in 1.2.2.

It appears to have resurfaced in gvfs 1.4.3 (I'm using Fedora 12).  I'm having the exact same problem, where accessing an sftp share through Nautilus transfers at about one third the speed of scp.
Comment 8 James 2011-12-11 23:08:33 UTC
This still seems to be around, in my case in Fedora 16. gvfs is 2-3 times slower than either sshfs or the usual scp/sftp tools.

When copying a large file over FUSE sshfs, or using sftp or scp, I see transfer rates of around 30 MB/s.

When copying via a browsed SSH share using Nautilus (gvfs-fuse), transfer rates typically drop to around 10 MB/s. Using cp on the remote file (i.e., the ~/.gvfs path obtained by dragging and dropping to the terminal) is only marginally faster (around 14 MB/s).

Version-Release number of selected component (if applicable):
nautilus-3.0.2-1.fc15.x86_64
gvfs-1.8.2-1.fc15.x86_64
gvfs-fuse-1.8.2-1.fc15.x86_64
openssh-5.6p1-34.fc15.1.x86_64
fuse-sshfs-2.3-1.fc15.x86_64
Comment 9 Josselin Mouette 2011-12-14 18:34:26 UTC
I have found this is highly dependent on the round-trip times between hosts, which suggests a too small buffer. By tuning my DSL settings to reduce the ping, I can vastly increase the bandwidth for gvfs.

My measurements put the affected buffer size on the order of 20 KiB, so perhaps 16 or 32 KiB. Any idea what this buffer could be?
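The dependence on round-trip time is what a fixed-size in-flight window predicts: throughput is capped by window size divided by RTT (the bandwidth-delay product), regardless of the link's raw speed. A minimal illustration with hypothetical numbers (not measured from gvfs):

```python
# Throughput cap for a protocol that keeps at most `window_bytes` in flight
# and waits a full round trip before sending more (bandwidth-delay product).
def max_throughput_bytes_per_s(window_bytes: float, rtt_s: float) -> float:
    return window_bytes / rtt_s

# A ~20 KiB window over a 4 ms LAN round trip caps throughput near 5 MB/s,
# no matter how fast the link is:
print(max_throughput_bytes_per_s(20 * 1024, 0.004))  # 5120000.0 bytes/s
```

This is consistent with the observation above that tuning the DSL settings to reduce the ping raises the achievable gvfs bandwidth.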
Comment 10 James 2011-12-14 21:14:11 UTC
(In reply to comment #9)
> My measurements find the affected buffer size to be on the order of 20 KiB,
> maybe 16 or 32 then. Any idea what this buffer could be?

Well I had a poke in the source around modify_read_size() in gvfsreadchannel.c and put in a real_size *= 3 to see what happens. (Judging by the comments this function controls the size of the chunks gvfs requests.) Didn't make any noticeable difference over gigabit Ethernet for me, though.
Comment 11 Joshua Steiner 2012-09-17 03:46:51 UTC
On GNOME 3 in Linux Mint 12, this bug is still present; I've seen transfer rates drop to as little as one fifth of those of scp (or rather, sshfs).
Comment 12 feenstra 2013-06-11 08:56:48 UTC
In https://bugs.launchpad.net/ubuntu/+source/gvfs/+bug/250517 this performance problem has been attributed to network latency and GVFS's implementation of synchronous transfer (i.e., the sending end waits for acknowledgement from the receiving end before sending the next packet).

Is this the right place to further pursue this? If so, could someone confirm that the acknowledgement indeed is the problem, and that it is therefore related to network latency?

I've confirmed this myself at home between my laptop and media center. Throughput depended on whether the two machines were both on wifi, both on ethernet, or mixed. I could never get a throughput even close to saturating the bandwidth, so the obvious variable is the latency, which is higher on wifi.

I can provide a list of timings if that helps.
Comment 13 feenstra 2013-06-11 09:00:33 UTC
FYI, I think https://bugzilla.gnome.org/show_bug.cgi?id=688079 may be a duplicate of this one, but I don't have privileges to mark duplicates.
Comment 14 feenstra 2013-06-11 09:05:51 UTC
This problem still exists more than 5 years after it was first reported, both here and in Ubuntu (against gvfs and nautilus). How come no progress *at all* has been made here? Is there any (other) channel that we could use to draw attention to this serious performance issue?

Or, alternatively, any pointers to how to hot-wire some workaround in the gvfs code that may fix this?
Comment 15 Tomas Bzatek 2013-06-11 11:40:24 UTC
(In reply to comment #14)
> This problem still exists more than 5 years after it was first reported both
> here  but also  in ubuntu (gvfs, and nautilus). How come no progress *at all*
> has been made here? Is there any (other) channel that we could use to draw
> attention to this serious performance issue?

No need to draw more attention to this; developers are watching all active issues. If you want to help get this fixed, we need to debug this issue in depth, i.e., find out what causes the slowdown, add some probes, look at the network traffic, etc. The developers are simply busy with other things, but anyone can provide a patch or proof-of-concept solution that will lead to a proper fix.
Comment 16 Benjamin Kingston 2013-08-01 07:08:00 UTC
I would like to offer my time to assist in debugging. I don't have specific experience with debugging issues, but I have a bit of systems and networking experience. Below are my current findings with both gvfs smb and sftp.

I'll try to come back with packet analysis soon.


Steps to Reproduce:
1. root@host# pv /mnt/files/largefile | nc 10.0.2.254 38902
   user@client# nc -p 38902 -l | pv > /dev/null
   > record stable transfer rate

2. user@client# mount -t cifs -o user=user //fileserver.domain.com/share /mnt/cifs
   user@client# pv /mnt/cifs/largefile > /dev/null
   > record stable transfer rate

3. user@client# sftp root@fileserver.domain.com
   get /mnt/files/largefile /dev/null
   > record stable transfer rate

4. Open nautilus or gvfs utility of choice
   connect to server at smb://fileserver.domain.com/share
   user@client# pv /run/user/1000/gvfs/smb-share\:server\=fileserver.domain.com\,share\=share/largefile > /dev/null
   > record stable transfer rate

5. Open nautilus or gvfs utility of choice
   connect to server at sftp://root@fileserver.domain.com/mnt/files
   user@client# pv /run/user/1000/gvfs/sftp\:host\=file0-san0\,user\=root/mnt/storage/content/seed.random > /dev/null
   > record stable transfer rate

Actual results:
nc result: [91.9MiB/s]
mount.cifs result: [72.8MiB/s]
sftp result: 63.9MB/s
gvfs-smb result: [31.2MiB/s]
gvfs-sftp result: [30.8MiB/s]

Expected results:
gvfs-smb and gvfs-sftp performance should be on par with sftp and mount.cifs

Additional info:
If the issue is deeper than gvfs please inform me so that I may report this to
the correct project.

I do not expect any results to be the same as nc, since there is only TCP overhead in that example; I provided it only as a baseline for the capacity of my network link, as well as to demonstrate that the host storage performance is not a limiting factor. I also directed pv to /dev/null to eliminate any variable of the client's storage. Transfers from host disk to client disk show similar rates.

#INVOLVED SYSTEMS
- Client is a Westmere i5 3.3GHz with ext4/lvm/intel RAID10, 8GB of DDR3 RAM, 16GB of swap, 1G ethernet with 9000 MTU, running the latest Fedora 19 packages at time of posting
- Host is a KVM guest: 2 vCPUs, 512MB of RAM (balloon can increase), 1024MB swap, iSCSI storage, virtio ethernet with 9000 MTU over macvtap/bridge mode, running the latest RHEL 6.4 compatible packages (with extra repository packages)
- iSCSI host is also a KVM guest: 1 vCPU, 256MB of RAM (balloon can increase), 1024MB of swap, aesni_intel module loaded and FIPS verified; storage is gvfs2/luks/linear md/RAID10 md/USB 3.0 disks physically connected to a VT-d exported USB 3.0 PCI-e card; virtio ethernet with 9000 MTU over macvtap/bridge mode, running the latest RHEL 6.4 compatible packages (with extra repository packages)
- KVM hypervisor is a Westmere Xeon 2.4GHz which stores VM disk images on lvm/luks/intel RAID10; aesni_intel is loaded and FIPS verified; 24GB DDR3 ECC RAM, 20GB swap, 2x 1G ethernet LACP-bonded interface with 9000 MTU
- Connecting switch is a Cisco 24-port 2960-S 1G with 9000 MTU, 2 of whose ports are LACP-bonded to the hypervisor system; ports are a mix of access and trunked VLANs

This is all personal use, which means two things in this situation: there is no traffic on the network other than minor internet data, which allows for 100% of real 1G bandwidth (~730Mbps per nc) for each data test. It also means that I can test anything that may be useful for this bug without concern for uptime.

#BUG COMMENTS
This has been a long-reported issue with GVFS that I'm sure many people would like to see resolved. There are bug reports going back to EOL or nearing-EOL versions of Fedora that have not been thoroughly addressed, either due to lack of sufficient communication from the reporting user to the bugzilla team, or possibly other factors.

I believe very strongly in the GNOME environment and I am very satisfied with the progress it has made over recent years; however, this has been a major shortcoming for me that I hope is addressed. I am prepared to provide any requested information (however, names will be changed to protect the innocent files), on either the client or server.

I am not at developer level in debugging these issues technically, but I am well versed in Linux systems, so instructions on how to retrieve proper debug information will be understood. I hope that my offer to put myself and my lab systems at your disposal is not interpreted as egotistic; I merely want to offer anything I can to solve a bug that affects my needs.
Comment 17 Tomas Bzatek 2013-08-01 07:57:40 UTC
(In reply to comment #16)
>   user@client# pv
> /run/user/1000/gvfs/smb-share\:server\=fileserver.domain.com\,share\=share/largefile

>   user@client# pv
> /run/user/1000/gvfs/sftp\:host\=file0-san0\,user\=root/mnt/storage/content/seed.random

FYI, you're going through our FUSE daemon, which is a fallback layer not designed with maximum throughput in mind. Please use the gvfs command-line tools or another native GIO client.
Comment 18 Benjamin Kingston 2013-08-02 03:49:42 UTC
I'm using that just to allow for pv compatibility, however transfers from Nautilus result in the exact same transfer speed.
Comment 19 Benjamin Kingston 2013-08-02 03:50:26 UTC
I will come back with proper GIO client results.
Comment 20 Benjamin Kingston 2013-08-02 07:29:30 UTC
user@client$ gvfs-copy -p smb://file.nexusnebula.net/share/seed.random ~/Downloads/
...
progress 356652412/10000000000 (29.7 MB/s)
...
progress 362287058/10000000000 (30.2 MB/s)
...

The speed is identical between GIO and FUSE, if not a bit slower.
Comment 21 Ross Lagerwall 2013-11-02 14:37:06 UTC
Created attachment 258803 [details] [review]
sftp: Implement pull support

Implement pull support with a sliding window to improve the speed of
sftp downloads.

The implementation is based on the one from the OpenSSH sftp client.  It
uses up to 64 outstanding read requests.  The limit of 64 is incremented
gradually to prevent overwhelming the server.  The file is fstat()ed to
determine the size.  When the size is reached, the maximum number of
outstanding requests is reduced to 1.

The implementation is complicated by the fact that reads can return
short and they can also be serviced out of order.

This patch results in substantial performance improvements, especially
for high-latency links. Compared to the fallback copy implementation,
other performance improvements are achieved by performing the initial
lstat() and open() in parallel, as well as performing the fstat() and
initial read requests in parallel.

Some benchmark figures:
Old behavior:
Copying from local server = 6.1MB/s
Copying from local server with 250ms of RTT latency = 0.251MB/s
Copying many small files with 250ms of RTT latency = 0.64 files per second

New behavior:
Copying from local server = 13MB/s
Copying from local server with 250ms of RTT latency = 6.6MB/s
Copying many small files with 250ms of RTT latency = 1.24 files per second

OpenSSH sftp client:
Copying from local server = 14.2MB/s
Copying from local server with 250ms of RTT latency = 6.4MB/s
Copying many small files with 250ms of RTT latency = 1.34 files per second
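The sliding-window scheme the patch describes can be sketched as follows. This is an illustration only, not the actual C implementation in gvfsbackendsftp.c; it omits the fstat() step and the window shrink near end-of-file, and simulates out-of-order replies by servicing pending requests in random order:

```python
import random

MAX_OUTSTANDING = 64   # cap on in-flight read requests, per the patch
CHUNK = 32 * 1024      # illustrative request size

def pull(read_chunk, file_size):
    """read_chunk(offset, length) -> bytes, possibly fewer than requested."""
    window = 1       # grown gradually toward MAX_OUTSTANDING
    pending = []     # (offset, length) requests currently "in flight"
    results = {}     # offset -> data; replies may be serviced in any order
    next_offset = 0
    while next_offset < file_size or pending:
        # Issue new requests while the window allows and data remains.
        while len(pending) < window and next_offset < file_size:
            n = min(CHUNK, file_size - next_offset)
            pending.append((next_offset, n))
            next_offset += n
        window = min(window + 1, MAX_OUTSTANDING)   # grow the window gradually
        # Service one reply; real code receives these asynchronously.
        off, n = pending.pop(random.randrange(len(pending)))
        data = read_chunk(off, n)
        if len(data) < n:   # short read: re-request the missing tail
            pending.append((off + len(data), n - len(data)))
        results[off] = data
    return b"".join(results[o] for o in sorted(results))
```

The point of the window is exactly the bandwidth-delay math discussed earlier in the bug: with 64 requests outstanding, a 250 ms round trip no longer serializes every 32 KiB chunk behind a full RTT.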
Comment 22 Ross Lagerwall 2013-11-02 14:43:16 UTC
So I have tested the above patch and afaict it doesn't leak any memory or deviate from the fallback behavior.

It provides a really good performance improvement, especially for large-RTT links (e.g., in the above figures it gets a 25-times improvement).

I'd also like to point out that gvfs currently "cheats" by sending larger than legal sftp read requests (https://bugzilla.gnome.org/show_bug.cgi?id=711287). If this were fixed, the 0.251MB/s would drop to about 0.125MB/s.
Comment 23 Josselin Mouette 2013-11-02 17:51:25 UTC
(In reply to comment #21)
> Implement pull support with a sliding window to improve the speed of
> sftp downloads.

Can I hug you?
Comment 24 Ross Lagerwall 2013-11-02 18:24:13 UTC
(In reply to comment #23)
> (In reply to comment #21)
> > Implement pull support with a sliding window to improve the speed of
> > sftp downloads.
> 
> Can I hug you?

:-)

Unfortunately it doesn't help for raw reads/writes but I guess that's probably not the main use case of gio/gvfs.

Next up: https://bugzilla.gnome.org/show_bug.cgi?id=523015
Comment 25 feenstra 2013-11-04 08:25:41 UTC
Brilliant!

Pardon my newbie question, but how would I go about testing your patch?
Comment 26 Ross Lagerwall 2013-11-04 08:36:02 UTC
(In reply to comment #25)
> Brilliant!
> 
> Pardon my newbie question, but how would I go about testing your patch?

If you're on a recent distro, then it may be possible to patch the package like so:
http://pascal.nextrem.ch/2010/05/06/build-ubuntudebian-packages-from-source-and-apply-a-patch/

Otherwise, you would have to go about building gvfs from scratch...

One way to do this is to use jhbuild:
https://wiki.gnome.org/GnomeLove/JhbuildIntroduction
Comment 27 Jamin W. Collins 2013-11-06 23:48:36 UTC
I've rebuilt the gvfs packages for Ubuntu 13.10 with this patch (along with the pull and progress patches).  While transfers are faster, there also appears to be some issue with large file transfers when using this patch as transfers of large files can (and so far do) hang the gvfs subsystem.  When this happens all further gvfs requests hang.
Comment 28 Jamin W. Collins 2013-11-06 23:49:31 UTC
That should read "along with the push and progress patches".
Comment 29 Jamin W. Collins 2013-11-07 00:19:06 UTC
Created attachment 259148 [details]
hung copy with progress output

I've captured straces of gvfs-copy downloads (with and without progress output) of a random 750M file hanging.

The file was created using dd:

dd if=/dev/urandom of=750M bs=1M count=750
Comment 30 Jamin W. Collins 2013-11-07 00:19:45 UTC
Created attachment 259149 [details]
hung copy without progress output
Comment 31 Jamin W. Collins 2013-11-07 03:44:10 UTC
I've completed 120+ loops of pushing the same random 750M file to the remote host without issue.  So, it would appear the problem is only with the download logic.
Comment 32 Ross Lagerwall 2013-11-07 06:14:27 UTC
Thanks for the feedback. Since I can't reproduce the hang locally, when a hang occurs please attach to the gvfsd-sftp process (with gdb -p <pid>) and then attach the output of the following command:
thread apply all bt

And then repeat that, separately attaching with gdb to gvfsd and gvfs-copy.



Does the hang happen every time?

Does it only happen on large files?

Is the ssh server local or remote?

Is it OpenSSH or some other server?

Thanks
Comment 33 Ross Lagerwall 2013-11-07 10:14:37 UTC
Created attachment 259167 [details] [review]
sftp: Implement pull support

Implement pull support with a sliding window to improve the speed of
sftp downloads.

The implementation is based on the one from the OpenSSH sftp client.  It
uses up to 64 outstanding read requests.  The limit of 64 is incremented
gradually to prevent overwhelming the server.  The file is fstat()ed to
determine the size.  When the size is reached, the maximum number of
outstanding requests is reduced to 1.

The implementation is complicated by the fact that reads can return
short and they can also be serviced out of order.

This patch results in substantial performance improvements, especially
for high-latency links. Compared to the fallback copy implementation,
other performance improvements are achieved by performing the initial
lstat() and open() in parallel, as well as performing the fstat() and
initial read requests in parallel.

Some benchmark figures:
Old behavior:
Copying from local server = 6.1MB/s
Copying from local server with 250ms of RTT latency = 0.251MB/s
Copying many small files with 250ms of RTT latency = 0.64 files per second

New behavior:
Copying from local server = 13MB/s
Copying from local server with 250ms of RTT latency = 6.6MB/s
Copying many small files with 250ms of RTT latency = 1.24 files per second

OpenSSH sftp client:
Copying from local server = 14.2MB/s
Copying from local server with 250ms of RTT latency = 6.4MB/s
Copying many small files with 250ms of RTT latency = 1.34 files per second
Comment 34 Ross Lagerwall 2013-11-07 10:14:48 UTC
Created attachment 259168 [details] [review]
sftp: Fix handling of multiple reads of the packet length

In certain cases, reading the packet length may take more than one call.
Make this work by calculating the offset into the reply_size correctly.
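The class of bug this patch fixes is common to any length-prefixed protocol: a single read may deliver only part of the 4-byte length field, so the parser must track how much of it has arrived and keep reading. A generic sketch (not the gvfs C code itself):

```python
import io
import struct

def read_packet(read):
    """Read one length-prefixed packet. `read(n)` may return fewer than n
    bytes, so both the 4-byte length and the body are accumulated in loops
    that track the offset reached so far."""
    header = b""
    while len(header) < 4:                 # the length field itself may
        chunk = read(4 - len(header))      # take several reads to arrive
        if not chunk:
            raise EOFError("connection closed mid-header")
        header += chunk
    (length,) = struct.unpack(">I", header)
    body = b""
    while len(body) < length:              # same accumulation for the payload
        chunk = read(length - len(body))
        if not chunk:
            raise EOFError("connection closed mid-packet")
        body += chunk
    return body

# Even when the transport dribbles out 2 bytes at a time, the packet is
# reassembled correctly:
src = io.BytesIO(struct.pack(">I", 5) + b"hello")
print(read_packet(lambda n: src.read(min(n, 2))))  # b'hello'
```

A parser that assumes the length arrives in one read works on fast local links and then hangs or misparses exactly in the high-latency or fragmented cases this bug is about.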
Comment 35 Ross Lagerwall 2013-11-07 10:17:20 UTC
OK, I managed to reproduce the hang. It seems it had nothing to do with the actual pull implementation but rather was exposed by it.

Please test with the two updated patches and see if they fix the hangs. Thanks!
Comment 36 Jamin W. Collins 2013-11-07 15:34:49 UTC
The new patches are looking promising; I've completed 15 loops of pulling the same random 750M file that was previously causing hangs.

I've updated my PPA builds with the new patches.
Comment 37 Jamin W. Collins 2013-11-07 16:29:35 UTC
40+ loops and still looking good.
Comment 38 Ross Lagerwall 2013-11-13 16:49:44 UTC
I will update this soon to use asynchronous I/O for local files as per the discussion in #523015.
Comment 39 Alexander Larsson 2013-11-14 08:43:32 UTC
Review of attachment 259168 [details] [review]:

oooh, nice one!
Comment 41 Alexander Larsson 2013-11-14 09:12:20 UTC
Review of attachment 259167 [details] [review]:

All the local I/O is blocking.
Also, if it were not, it would issue multiple async I/O operations in parallel, which GIO doesn't support at the moment.
Any parallel local write scheme has to be careful not to request more data before it's actually on disk, as the network may be faster than local disk writes (this is currently handled by local writes being blocking).
The easiest approach would be to serialize the async writes (maybe combining blocks when possible); parallel writes to the same disk are unlikely to be faster anyway.

::: daemon/gvfsbackendsftp.c
@@ +4973,3 @@
+                            _("Not supported"));
+        }
+      else if (type == G_FILE_TYPE_DIRECTORY)

I think falling back to the default implementation here is a better choice in this case too, rather than reimplementing it. It's not a performance-critical case anyway.

In fact, this should probably do the reverse check, i.e. verify that the remote is a regular file (or symlink if !NOFOLLOW), so that it will also fall back to the default for things like device nodes and fifos too (and maybe the same for the push code?).
Comment 42 Ross Lagerwall 2013-11-15 15:13:50 UTC
Pushed the multiple reading fix to master as 936708b2d947917be5aed3de0b0de85ded2d3283 and stable as 15601b4f10f30eaeab2a33006ef00e9f0888da7c.
Comment 43 Ross Lagerwall 2013-11-16 15:17:13 UTC
Created attachment 260005 [details] [review]
sftp: Implement pull support

Implement pull support with a sliding window to improve the speed of
sftp downloads.

The implementation is based on the one from the OpenSSH sftp client.  It
uses up to 64 outstanding read requests.  The limit of 64 is incremented
gradually to prevent overwhelming the server.  The file is fstat()ed to
determine the size.  When the expected size is reached, the maximum
number of outstanding requests is reduced to 1.

The implementation is complicated by the fact that reads can return
short and they can also be serviced out of order.

This patch results in substantial performance improvements, especially
for high-latency links.  Compared to the fallback copy implementation,
other performance improvements are achieved by performing the initial
lstat()/stat() and open() in parallel, as well as performing the
fstat() and initial read requests in parallel.

Some benchmark figures:
Old behavior:
Copying from local server = 6.1MB/s
Copying from local server with 250ms of RTT latency = 0.251MB/s
Copying many small files with 250ms of RTT latency = 0.64 files per second

New behavior:
Copying from local server = 13MB/s
Copying from local server with 250ms of RTT latency = 6.6MB/s
Copying many small files with 250ms of RTT latency = 1.24 files per second

OpenSSH sftp client:
Copying from local server = 14.2MB/s
Copying from local server with 250ms of RTT latency = 6.4MB/s
Copying many small files with 250ms of RTT latency = 1.34 files per second
Comment 44 Ross Lagerwall 2013-11-16 15:22:15 UTC
The updated patch:
* Uses async ops for everything. It appends the received data to a linked list of requests which are written out sequentially. More data is requested only when a write finishes.

* Falls back to the default implementation unless we are copying a regular file (possibly via a symlink if NOFOLLOW is not specified).
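The flow-control idea in the first bullet (queue incoming replies and issue the next network read only when a local write completes, so a fast network cannot outrun the disk) can be sketched as follows. The callback names here are hypothetical; this is not the GIO async API:

```python
from collections import deque

class SerialWriter:
    """Keep at most one local write in flight; request more network data
    only when a write completes, so writes never pile up in memory faster
    than the disk can absorb them."""

    def __init__(self, do_write, request_more):
        self.queue = deque()              # received (offset, data) pairs
        self.do_write = do_write          # do_write(offset, data, done_cb)
        self.request_more = request_more  # issue the next network read
        self.writing = False

    def on_reply(self, offset, data):
        """Called when a network read reply arrives (possibly out of order)."""
        self.queue.append((offset, data))
        self._maybe_write()

    def _maybe_write(self):
        if self.writing or not self.queue:
            return
        self.writing = True
        off, data = self.queue.popleft()
        self.do_write(off, data, self._write_done)

    def _write_done(self):
        self.writing = False
        self.request_more()   # flow control: ask for more only after the write
        self._maybe_write()
```

Because each reply carries its file offset (and the real code seeks before writing), the writes need not arrive in file order; they only need to be serialized so one write is outstanding at a time.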
Comment 45 Alexander Larsson 2013-11-18 08:49:43 UTC
Review of attachment 260005 [details] [review]:

Looks good to me.

::: daemon/gvfsbackendsftp.c
@@ +5535,3 @@
+                                       G_PRIORITY_DEFAULT,
+                                       NULL,
+                                       pull_set_perms_cb, handle);

This sets the permissions but not the uid/gid.
Honestly I'm not sure if this is right or wrong, as the remote uid/gid mapping is often completely different from the local one. However, it is kind of unfortunate that this code differs from what we do in push, where we set the remote uid/gid.

@@ +5655,3 @@
+  if (!g_seekable_seek (G_SEEKABLE (handle->output),
+                        request->request_offset, G_SEEK_SET,
+                        NULL, &error))

Ugh, it's kinda lame that we don't have an async seek operation...
Comment 46 Ross Lagerwall 2013-11-18 10:06:29 UTC
(In reply to comment #45)
> Review of attachment 260005 [details] [review]:
> 
> Looks good to me.

OK, will commit.

> 
> ::: daemon/gvfsbackendsftp.c
> @@ +5535,3 @@
> +                                       G_PRIORITY_DEFAULT,
> +                                       NULL,
> +                                       pull_set_perms_cb, handle);
> 
> This sets the permissions but not the uid/gid.
> Honestly I'm not sure if this is right or wrong as the remote uid/gid mapping
> is often completely different than the local one. However, it is kind of
> unfortunate that this code differs from what we do in push where we set the
> remote uid/gid.

The ownership is set for the replace() code but not in the recent push() that I implemented.
Perhaps I'll open a separate bug about that (and take a look at #629135).

> 
> @@ +5655,3 @@
> +  if (!g_seekable_seek (G_SEEKABLE (handle->output),
> +                        request->request_offset, G_SEEK_SET,
> +                        NULL, &error))
> 
> Ugh, its kinda lame that we don't have an async seek operation...

Yeah, I noticed that.  At least seeking on a local file shouldn't be a slow operation.
Comment 47 Ross Lagerwall 2013-11-18 17:48:38 UTC
Pushed to master as ed826fdf386cd0891cbda5c9fc3904d2a5aba03f.  Thanks!

Some of the comments further up are about smb performance.  Please open a separate bug for this.