Bug 704970 - gvfs smb and sftp transfer speed 2-3 times slower than their cli counterparts
Status: RESOLVED DUPLICATE of bug 688079
Product: gvfs
Classification: Core
Component: smb backend
Version: 1.16.x
OS: Other Linux
Importance: Normal normal
Target Milestone: ---
Assigned To: gvfs-maint
QA Contact: gvfs-maint
Depends on:
Blocks:
 
 
Reported: 2013-07-27 07:15 UTC by Benjamin Kingston
Modified: 2013-07-31 15:12 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description Benjamin Kingston 2013-07-27 07:15:20 UTC
Description of problem:
CLI transfer performance is between 60-90 MB/s, depending on the protocol used.
gvfs performance will not exceed 30 MB/s, even with extensive kernel and service tuning on the file host.

Version-Release number of selected component (if applicable):
1.16.3

How reproducible:
Always

Steps to Reproduce:
1.root@host# pv /mnt/files/largefile | nc 10.0.2.254 38902
  user@client# nc -l -p 38902 | pv > /dev/null
  >record stable transfer rate

2.user@client#mount -t cifs -o user=user //fileserver.domain.com/share /mnt/cifs
  user@client#pv /mnt/cifs/largefile > /dev/null
  >record stable transfer rate

3.user@client#sftp root@fileserver.domain.com
  get /mnt/files/largefile /dev/null
  >record stable transfer rate

4.Open nautilus or gvfs utility of choice
  connect to server at smb://fileserver.domain.com/share
  user@client# pv /run/user/1000/gvfs/smb-share\:server\=fileserver.domain.com\,share\=share/largefile > /dev/null
  >record stable transfer rate

5.Open nautilus or gvfs utility of choice
  connect to server at sftp://root@fileserver.domain.com/mnt/files
  user@client# pv /run/user/1000/gvfs/sftp\:host\=fileserver.domain.com\,user\=root/mnt/files/largefile > /dev/null
  >record stable transfer rate
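If pv is not available, the same stable-rate measurement from the steps above can be approximated with plain coreutils. A minimal local sketch (the temp file is a hypothetical stand-in for "largefile"; not part of the original report):

```shell
# Hypothetical sketch: measure sequential read throughput of a file,
# as pv does in the steps above, using only dd and date.
f=$(mktemp)                                  # stand-in for "largefile"
dd if=/dev/zero of="$f" bs=1M count=32 2>/dev/null
t0=$(date +%s%N)
dd if="$f" of=/dev/null bs=1M 2>/dev/null    # read side, discarded like pv > /dev/null
t1=$(date +%s%N)
ms=$(( (t1 - t0) / 1000000 ))
echo "read 32 MiB in ${ms} ms"
rm -f "$f"
```

The same read loop works unchanged against a gvfs fuse path under /run/user/1000/gvfs/, which is what steps 4 and 5 exercise.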
  
Actual results:
nc result: [91.9MiB/s]
mount.cifs result: [72.8MiB/s]
sftp result: [63.9MiB/s]
gvfs-smb result: [31.2MiB/s]
gvfs-sftp result: [30.8MiB/s]
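The "2-3 times slower" claim in the title follows directly from these figures; a quick check of the ratios (numbers taken from the results above, roughly 2.3x and 2.1x):

```shell
# Slowdown of the gvfs backends relative to their CLI counterparts,
# computed from the measured rates above (MiB/s).
awk 'BEGIN {
  printf "gvfs-smb  vs mount.cifs: %.1fx slower\n", 72.8 / 31.2
  printf "gvfs-sftp vs sftp:       %.1fx slower\n", 63.9 / 30.8
}'
```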

Expected results:
gvfs-smb and gvfs-sftp performance should be on par with sftp and mount.cifs

Additional info:
If the issue lies deeper than gvfs, please let me know so that I can report it to the correct project.

I do not expect any of the results to match nc, since that example carries only TCP overhead; I provided it as a baseline for the capacity of my network link and to demonstrate that host storage performance is not a limiting factor. I also directed pv to /dev/null to eliminate the client's storage as a variable. Transfers from host disk to client disk show similar rates.

#INVOLVED SYSTEMS
-Client is a Westmere i5 3.3 GHz with ext4/lvm/Intel RAID10, 8 GB of DDR3 RAM, 16 GB of swap, 1G Ethernet with 9000 MTU, running the latest Fedora 19 packages at time of posting
-Host is a KVM guest: 2 vCPUs, 512 MB of RAM (balloon can increase), 1024 MB swap, iSCSI storage, virtio Ethernet with 9000 MTU over macvtap/bridge mode, running the latest RHEL 6.4-compatible packages (with extra repository packages)
-iSCSI host is also a KVM guest: 1 vCPU, 256 MB of RAM (balloon can increase), 1024 MB of swap, aesni_intel module loaded and FIPS verified; storage is gfs2/luks/linear md/RAID10 md on USB 3.0 disks physically connected to a VT-d exported USB 3.0 PCI-e card; virtio Ethernet with 9000 MTU over macvtap/bridge mode, running the latest RHEL 6.4-compatible packages (with extra repository packages)
-KVM hypervisor is a Westmere Xeon 2.4 GHz which stores VM disk images on lvm/luks/Intel RAID10; aesni_intel is loaded and FIPS verified; 24 GB DDR3 ECC RAM, 20 GB swap, 2x 1G Ethernet LACP-bonded interface with 9000 MTU
-Connecting switch is a Cisco 24-port 2960-S 1G with 9000 MTU, 2 of whose ports are LACP bonded to the hypervisor system; ports are a mix of access and trunked VLANs

This is all personal use, which means two things in this situation: there is no traffic on the network other than minor internet data, so each data test gets effectively 100% of real 1G bandwidth (~730 Mbps per nc); and I can test anything that may be useful for this bug without concern for uptime.

#BUG COMMENTS
This has been a long-reported issue with GVFS that I'm sure many people would like to see resolved. There are bug reports going back to EOL or nearly-EOL versions of Fedora that have not been thoroughly addressed, whether due to insufficient communication from the reporting user to the bugzilla team or to other factors.

I believe very strongly in the GNOME environment and am very satisfied with the progress it has made in recent years; however, this has been a major shortcoming for me that I hope is addressed. I am prepared to provide any requested information (though names will be changed to protect the innocent files), on either the client or the server.

I am not at developer level in debugging these issues technically, but I am well versed in Linux systems, so instructions on how to retrieve proper debug information will be understood. I hope my offer to put myself and my lab systems at your disposal is not taken as egotistical; I merely want to offer anything I can to solve a bug that affects my needs.
Comment 1 Ondrej Holy 2013-07-31 15:12:03 UTC
Thank you for your bug report; however, these performance bugs have already been reported, so there is no need to create new ones.

smb backend: https://bugzilla.gnome.org/show_bug.cgi?id=688079
sftp backend: https://bugzilla.gnome.org/show_bug.cgi?id=532951

Please continue in the existing bugs.

Just a note: the network backend is something that shows you places on your network, so it is distinct from the sftp/smb backends.

*** This bug has been marked as a duplicate of bug 688079 ***