GNOME Bugzilla – Bug 773826
daemon: Bump maximum read channel buffer size
Last modified: 2017-01-03 12:33:08 UTC
Created attachment 338949
daemon: Bump maximum read channel buffer size

256k isn't a "stupid" buffer size for network reads, so bump the maximum size of the read buffer.

See https://bugzilla.gnome.org/show_bug.cgi?id=773632
See https://bugzilla.gnome.org/show_bug.cgi?id=773823
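For context, the "stupid" size the commit message refers to is the hard ceiling the daemon clamps read requests to. A minimal sketch of that clamp, assuming the shape of the code in daemon/gvfsreadchannel.c (the constant and comment here are paraphrased from the commit message, not copied from the tree):

  /* Sketch only: the real clamp lives in daemon/gvfsreadchannel.c.
   * Don't do ridiculously large requests, as that is just wasteful
   * on the network -- but per this patch, 256k is still reasonable. */
  if (real_size > 256 * 1024)
    real_size = 256 * 1024;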
From https://bugzilla.gnome.org/show_bug.cgi?id=773632:

> Indeed, and that does boost the speed further, 29.65 MB/s over 10 separate runs
> of 1GB downloads with a maximum of 31.6 MB/s.
Review of attachment 338949:

Makes sense to me; however, I truly have no idea whether it is reasonable or not. It helps smb, but it may cause trouble somewhere else. It is probably worth asking Alex, who added the comment about the "stupid" sizes...

I would also change the following:

@@ -120,4 +120,6 @@ modify_read_size (GVfsReadChannel *channel,
     real_size = 32*1024;
-  else
+  else if (channel->read_count <= 5)
     real_size = 64*1024;
+  else
+    real_size = 128*1024;
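Put together, the suggested ramp would look roughly like this (a sketch: pick_read_size is a hypothetical stand-in for the size logic in modify_read_size, and the early steps are inferred from the diff context, not copied from the tree):

  /* Hypothetical helper sketching modify_read_size's ramp: grow the
   * buffer as the client keeps reading sequentially; read_count is
   * how many reads this channel has served so far. */
  static gsize
  pick_read_size (GVfsReadChannel *channel)
  {
    gsize real_size;

    if (channel->read_count <= 1)
      real_size = 16*1024;
    else if (channel->read_count <= 2)
      real_size = 32*1024;
    else if (channel->read_count <= 5)   /* suggested extra step */
      real_size = 64*1024;
    else
      real_size = 128*1024;

    return real_size;
  }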
Well, it's complicated, but for a sufficiently smart lower level, using a large buffer (as libsmb apparently does) would let it do multiple read requests in parallel, thus avoiding multiple roundtrips.

However, at some point you lose out to pipelining on the write side. As an extreme example, in a file copy, say you read 10 megs before you start writing; then you'll block on the write side because the write cache is full, so you won't start the next read operation until that is written.

An *ideal* pipelined copy would have outstanding reads and writes in parallel, but the gio APIs make this a bit hard; you typically end up doing "do { read; write } until done;".
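To make that last point concrete, here is a minimal sketch of the serial copy loop using the public GIO calls (g_input_stream_read and g_output_stream_write_all are real GIO API; the function itself is illustrative):

  #include <gio/gio.h>

  /* Serial "do { read; write } until done" copy: each read can only
   * start after the previous write has fully completed, so reads and
   * writes are never outstanding at the same time. */
  static gboolean
  copy_serial (GInputStream *in, GOutputStream *out, GError **error)
  {
    char buffer[256 * 1024];
    gssize n_read;

    while ((n_read = g_input_stream_read (in, buffer, sizeof buffer,
                                          NULL, error)) > 0)
      {
        if (!g_output_stream_write_all (out, buffer, (gsize) n_read,
                                        NULL, NULL, error))
          return FALSE;
      }

    return n_read == 0;  /* 0 is EOF; -1 means the read failed */
  }

A pipelined copy would instead keep a write pending while issuing the next read, e.g. with the async variants of these calls, which is exactly what the plain loop above cannot do.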
(In reply to Alexander Larsson from comment #4)
> Well, it's complicated, but for a sufficiently smart lower level, using a
> large buffer (as libsmb apparently does) would let it do multiple read
> requests in parallel, thus avoiding multiple roundtrips.
>
> However, at some point you lose out to pipelining on the write side. As an
> extreme example, in a file copy, say you read 10 megs before you start
> writing; then you'll block on the write side because the write cache is
> full, so you won't start the next read operation until that is written.
>
> An *ideal* pipelined copy would have outstanding reads and writes in
> parallel, but the gio APIs make this a bit hard; you typically end up doing
> "do { read; write } until done;".

That sounds like a separate enhancement. I'll try to find a bug for it, or file a new one.
Created attachment 341541
daemon: Bump maximum read channel buffer size

256k isn't a "stupid" buffer size for network reads, so bump the maximum size of the read buffer.

See https://bugzilla.gnome.org/show_bug.cgi?id=773632
See https://bugzilla.gnome.org/show_bug.cgi?id=773823
Attachment 341541 pushed as 59f9c6a - daemon: Bump maximum read channel buffer size