GNOME Bugzilla – Bug 776824
underlying SSH process exited on g_file_replace_contents_async() with larger file sizes
Last modified: 2017-06-07 11:51:31 UTC
Bluefish uses g_file_replace_contents_async() to save files. In many different setups of clients and servers (Ubuntu, Arch, Fedora), the callback is called with error 0 and the error message "The connection is closed (the underlying SSH process exited)". After this error the remote server is no longer mounted. This only happens with larger files (say 300k or more). It does not happen in other Bluefish code that uses g_file_copy_async(), nor when copying the same file with gvfs-copy or with scp or sftp on the command line. It can be reproduced in muCommander. Trying to strace the ssh process gives me no useful information (but I don't know what to look for). Bluefish calls g_file_replace_contents_async() with an etag, backup set to TRUE, G_FILE_CREATE_NONE, and a cancellable. How can we provide more debugging information about what causes this problem?
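For reference, a minimal sketch of such a call with the parameters described above (the callback, the helper name, and the buffer handling are illustrative, not Bluefish's actual code):

  #include <gio/gio.h>

  static void
  save_done_cb (GObject *source, GAsyncResult *res, gpointer user_data)
  {
    GFile *file = G_FILE (source);
    char *new_etag = NULL;
    GError *error = NULL;

    if (!g_file_replace_contents_finish (file, res, &new_etag, &error))
      {
        /* On sftp:// with larger files this reports:
         * "The connection is closed (the underlying SSH process exited)" */
        g_printerr ("save failed: %s\n", error->message);
        g_clear_error (&error);
        return;
      }
    g_free (new_etag);
  }

  static void
  save_file (GFile *file, const char *buffer, gsize len,
             const char *etag, GCancellable *cancellable)
  {
    g_file_replace_contents_async (file, buffer, len,
                                   etag,               /* detect on-disk changes */
                                   TRUE,               /* make_backup */
                                   G_FILE_CREATE_NONE,
                                   cancellable,
                                   save_done_cb, NULL);
  }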
Forgot to add: this happens when working on a remote sftp:// URL. The original Bluefish bug report is here: https://bugzilla.gnome.org/show_bug.cgi?id=767539
Thanks for your bug report. I can reproduce it by saving a file in Bluefish on sftp://localhost/ on Fedora 25. I will debug it further...
Hmm, it fails because the maximum message length is exceeded.

From the ssh debug output:

  debug1: Exit status 11

From the journal:

  Jan 04 12:01:46 t450s sftp-server[7212]: error: bad message from 127.0.0.1 local user oholy

See the relevant server source code: https://github.com/openssh/openssh-portable/blob/00df97ff68a49a756d4b977cd02283690f5dfa34/sftp-server.c#L1414

The maximum allowed message size is (256 * 1024): https://github.com/openssh/openssh-portable/blob/00df97ff68a49a756d4b977cd02283690f5dfa34/sftp-common.h#L28

The draft says that "All servers SHOULD support packets of at least 34000 bytes": https://tools.ietf.org/html/draft-ietf-secsh-filexfer-02#section-3

It works nicely if I limit the maximum size to 32768 bytes as per the draft; however, this will probably cause some slowdown :-( The sftp command-line client also uses this limit: https://github.com/openssh/openssh-portable/blob/dda78a03af32e7994f132d923c2046e98b7c56c8/sftp.c#L74

I wonder whether this worked earlier; the relevant gvfs code is from 2007...
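To put the numbers side by side: an SSH_FXP_WRITE packet per the draft carries a small fixed header plus the handle and the data, so a 32768-byte chunk stays within the 34000-byte minimum. A compilable sketch of the arithmetic (the handle-length bound is an assumption; OpenSSH handles are short):

  /* SSH_FXP_WRITE wire layout per draft-ietf-secsh-filexfer-02:
   *   uint32 length | byte type | uint32 id |
   *   string handle | uint64 offset | string data
   */
  #define FIXED_OVERHEAD  (4 + 1 + 4 + 4 + 8 + 4)   /* 25 bytes */
  #define HANDLE_LEN_MAX  256                       /* assumed upper bound */
  #define MAX_DATA        32768

  /* 32768 + 25 + 256 = 33049 <= 34000, the packet size every server
   * SHOULD accept, and far below OpenSSH's 256 * 1024 hard cap. */
  _Static_assert (MAX_DATA + FIXED_OVERHEAD + HANDLE_LEN_MAX <= 34000,
                  "a 32768-byte write fits the draft's minimum packet size");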
I thought this was in my standard test-set, but it turns out that I only test loading of large files over sftp in Bluefish, not saving those same files. So I don't know; perhaps the bug has already been around for a long while.
Created attachment 342874
sftp: Do not call force unmount twice

g_vfs_backend_force_unmount() might be called twice if command sending fails. Use the fail_jobs_and_unmount() wrapper instead of calling _force_unmount() directly.
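The idea, as a hedged sketch (the wrapper name comes from the commit message; the guard flag and the fail_jobs() helper are illustrative assumptions, not the actual gvfs code):

  /* Route every fatal-error path through one wrapper so that
   * g_vfs_backend_force_unmount() runs at most once. */
  static void
  fail_jobs_and_unmount (GVfsBackendSftp *backend)
  {
    if (backend->force_unmounted)    /* assumed guard flag */
      return;
    backend->force_unmounted = TRUE;

    fail_jobs (backend);             /* assumed helper: abort queued jobs */
    g_vfs_backend_force_unmount (G_VFS_BACKEND (backend));
  }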
Created attachment 342875
sftp: Report written size only on success

The written size is currently reported regardless of the write job's result. Report it only on success.
Created attachment 342876
sftp: Mark error string as translatable

The error string, which might be shown in the UI, is not marked as translatable; fix it.
Created attachment 342880
sftp: Limit writes to 32768 bytes

The write buffer is not limited in the backend, so it may try to send data that is too long. Unfortunately, in that case the underlying SSH process may exit with the "The connection is closed (the underlying SSH process exited)" error, and the backend is consequently force unmounted. There doesn't seem to be any way to determine the maximum allowed buffer size for the server. draft-ietf-secsh-filexfer-02.txt just says:

  All servers SHOULD support packets of at least 34000 bytes (where
  the packet size refers to the full length, including the header
  above). This should allow for reads and writes of at most 32768
  bytes.

Thus the maximum buffer size has to be limited to 32768. It will probably cause some slowdown, but that is better than a force unmount.
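The gist of the fix, sketched (MAX_BUFFER_SIZE and the 32768 limit come from the patch; the queuing helper and its signature are hypothetical):

  #include <glib.h>

  #define MAX_BUFFER_SIZE 32768   /* largest write every server must accept */

  /* Split a large buffer into SSH_FXP_WRITE requests of at most
   * MAX_BUFFER_SIZE bytes instead of sending one oversized message. */
  static void
  queue_write_chunks (goffset file_offset, const char *data, gsize len)
  {
    gsize done = 0;

    while (done < len)
      {
        gsize chunk = MIN (len - done, (gsize) MAX_BUFFER_SIZE);

        send_sftp_write (file_offset + done, data + done, chunk);  /* hypothetical */
        done += chunk;
      }
  }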
I am just a little confused about the meaning of the following comment:

  /* Ideally we shouldn't do this copy, but doing the writes as
     multiple writes caused problems on the read side in openssh */

https://git.gnome.org/browse/gvfs/tree/daemon/gvfsbackendsftp.c#n3955

Alex?
Comment on attachment 342876 (sftp: Mark error string as translatable):

Attachment 342876 pushed as d0bae4a - sftp: Mark error string as translatable
Review of attachment 342875:

This is useless, because the written size is not sent to the client anyway...
Created attachment 343030
sftp: Limit writes to 32768 bytes

The write buffer is not limited in the backend, so it may try to send data that is too long. Unfortunately, in that case the underlying SSH process may exit with the "The connection is closed (the underlying SSH process exited)" error, and the backend is consequently force unmounted. There doesn't seem to be any way to determine the maximum allowed buffer size for the server. draft-ietf-secsh-filexfer-02.txt just says:

  All servers SHOULD support packets of at least 34000 bytes (where
  the packet size refers to the full length, including the header
  above). This should allow for reads and writes of at most 32768
  bytes.

Thus the maximum buffer size has to be limited to 32768. It will probably cause some slowdown, but that is better than a force unmount.
Created attachment 343031
sftp: Merge 3 constants into one

MAX_BUFFER_SIZE, PULL_BLOCKSIZE, and PUSH_BLOCKSIZE define the same value. Let's merge them into MAX_BUFFER_SIZE.
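For clarity, the consolidation amounts to something like this (the exact original spellings are paraphrased; the value follows the 32768-byte limit from the previous patch):

  /* Before: three names for the same limit */
  #define MAX_BUFFER_SIZE (32 * 1024)
  #define PULL_BLOCKSIZE  (32 * 1024)
  #define PUSH_BLOCKSIZE  (32 * 1024)

  /* After: a single constant used for reads, writes, pulls, and pushes */
  #define MAX_BUFFER_SIZE (32 * 1024)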
Comment on attachment 342874 (sftp: Do not call force unmount twice):

Attachment 342874 pushed as df8a5df - sftp: Do not call force unmount twice
Attachment 343030 pushed as 1482097 - sftp: Limit writes to 32768 bytes
Attachment 343031 pushed as b0e627b - sftp: Merge 3 constants into one

The former was also pushed to the gnome-3-22 branch.
*** Bug 668477 has been marked as a duplicate of this bug. ***