GNOME Bugzilla – Bug 534080
CamelStreamFilter fails to write if target stream is slow
Last modified: 2013-09-14 16:52:47 UTC
Please describe the problem: When a stream is written through CamelStreamFilter and the target stream is slow (for example, an OBEX/Bluetooth stream), the write method (and therefore users of decode_to_stream) fails, returning -1. In particular, this happens when the write on the target stream writes only part of the buffer rather than all of it.

Steps to reproduce:
1. Use a CamelStreamFilter on a slow target stream.
2. Sometimes this target stream, when asked to write a buffer, will write only part of it (that is, the write returns fewer bytes than were requested).

Actual results: Whenever a write attempt writes only part of the buffer, the operation fails.

Expected results: It should simply retry with the remaining part of the buffer, unless the target stream's write reported a real error.

Does this happen every time? Yes (every time you get a partial buffer write).

Other information:
Created attachment 111239 [details] [review] Patch: Camel stream filter waits properly for slow streams

This patch solves the issue by looping over camel_stream_write inside do_write, so that partial writes to the destination stream are taken into account. The ChangeLog entry would be:

* evolution-data-server/camel/camel-stream-filter.c: (do_write): if the camel_stream_write call does not write the full buffer, but not due to an error, then loop to continue writing the stream.
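The looping approach the patch describes can be sketched as follows. This is a minimal, self-contained illustration, not the actual Camel code: `write_some()` is a hypothetical stand-in for a stream write that may accept only part of the buffer (as camel_stream_write on a slow target was reported to do), and `write_full()` shows the retry-with-remainder loop.

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>

/* Hypothetical stand-in for a slow stream's write: accepts at most
 * 3 bytes per call, appending them to an in-memory sink. */
static ssize_t write_some(char *sink, size_t *sink_len, const char *buf, size_t n)
{
    size_t chunk = n > 3 ? 3 : n;   /* simulate a partial write */
    memcpy(sink + *sink_len, buf, chunk);
    *sink_len += chunk;
    return (ssize_t) chunk;
}

/* Loop until the whole buffer is written, or until a real error occurs. */
static ssize_t write_full(char *sink, size_t *sink_len, const char *buf, size_t n)
{
    size_t written = 0;

    while (written < n) {
        ssize_t w = write_some(sink, sink_len, buf + written, n - written);
        if (w == -1)
            return -1;  /* real error in the target stream: give up */
        written += (size_t) w;
    }
    return (ssize_t) written;
}
```

The key point is that a short return value is treated as "try again with the rest", while -1 alone is treated as failure.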
You can't rely on the return value of camel_stream_write() for anything other than pass/fail. The correct fix for this is simply:

if (camel_stream_write (filter->source, buffer, len) == -1)
        return -1;

camel_stream_write() will loop until all data is written (it checks for EINTR and EAGAIN itself). Other errors are irrecoverable (EBADF, etc.).
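The retry behavior described here (looping on EINTR/EAGAIN inside the write, so callers only see pass/fail) can be sketched like this. This is an illustration of the claimed semantics, not the real Camel implementation: `raw_write()` is a hypothetical underlying write that fails transiently twice before succeeding.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>

/* Hypothetical underlying write: fails twice with EINTR, then writes
 * everything (stands in for a POSIX write on the stream's fd). */
static int fail_count = 2;

static ssize_t raw_write(char *sink, size_t *sink_len, const char *buf, size_t n)
{
    if (fail_count > 0) {
        fail_count--;
        errno = EINTR;
        return -1;
    }
    memcpy(sink + *sink_len, buf, n);
    *sink_len += n;
    return (ssize_t) n;
}

/* Retry on EINTR/EAGAIN, as comment #2 says camel_stream_write() does;
 * any other errno is treated as irrecoverable and reported as -1. */
static ssize_t stream_write(char *sink, size_t *sink_len, const char *buf, size_t n)
{
    ssize_t w;

    do {
        w = raw_write(sink, sink_len, buf, n);
    } while (w == -1 && (errno == EINTR || errno == EAGAIN));
    return w;
}
```

With these semantics, a caller checking only for -1 is sufficient, which is the whole point of the objection to the attached patch.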
Yeah, I would hope every call to camel_stream_write() wouldn't have to deal with short writes. Might be a minor API break, but should the write() method be modified to return a boolean instead of a ssize_t to help clarify its behavior? Rejecting the patch per comment #2.
(In reply to comment #2)
> you can't rely on the return value of camel_stream_write() for anything other
> than pass/fail
>
> the correct fix for this is simply:
>
> if (camel_stream_write(filter->source, buffer, len) == -1)
>     return -1;
>
> camel_stream_write() will loop until all data is written (it checks for EINTR
> and EAGAIN itself)

But that breaks the standard behavior of almost any POSIX write() method. If you get a partial write (because of buffering, slow media, etc.), handling it this way causes loss of data (the buffer write appears successful, but in fact not all of the data has been written).

Another way of handling it would be to return an error for a partial write. But then we wouldn't know how much we did manage to write, and could end up writing the same data more than once.

Or we assume that camel_stream_write() always writes either the full buffer or nothing. Then we would have to change almost every stream write implementation (at least the fs and vfs streams assume the POSIX behavior).

So the current handling is of course broken, but the proposal to simply check the return value would also fail for partial writes.
I think Jeff was saying that the various CamelStream.write() implementations already deal with recoverable errors, if that's a possibility for that type of stream, so the caller of camel_stream_write() never has to worry about it. I haven't verified that for all cases, but the subclasses I looked at do appear to handle partial writes.

If these are indeed the semantics of camel_stream_write(), then its documentation is inaccurate and should be updated. Currently it states:

    Attempts to write up to @n bytes of @buffer into @stream. Returns the number
    of bytes written to the stream, or %-1 on error along with setting errno.
The problem is that in this case the caller of the write doesn't know that the stream implementation has cut the data into pieces and is transmitting it piece by piece. This makes the stream implementation that does the cutting responsible for finishing each slice (don't just look at the patch, look at the surrounding code).
If this is the bug Philip was talking to me about the other day, I think the correct fix is to fix CamelStreamGnomeVFS.write() to work the way the other streams work.
The problem with that, although I agree it would fix it, is that GIO (the new streaming library in GLib) makes the same assumption (that a write is not a write_all). Personally, I would prefer it if Camel's usage of camel_stream_write() were reviewed to behave the way GIO's write() behaves (for consistency, which might make a future migration of CamelStream to GIO easier someday). But that's just an opinion, of course.
I agree with Philip that it would be better to use ..._write_all in camel-stream-vfs.c:stream_write; I do not know why I used the ..._write function in the GIO rewrite.
Created attachment 111964 [details] [review] proposed eds patch

For evolution-data-server; I also found a g_output_stream_write used in a calendar, so I'm using g_output_stream_write_all in both places. I believe it's better to call the ..._all function, because it's intended for exactly these cases.
Fejj, can you look at Milan's patch?
looks good
Committed to trunk. Committed revision 8980.
(In reply to comment #10)
> Created an attachment (id=111964) [edit]
> proposed eds patch
>
> for evolution-data-server;

The patch broke attachment saving for me. Patch upcoming.
Created attachment 113005 [details] [review] fixes-attachment-saving.patch
Yes, good catch. Please commit.
Committed to trunk as revision 9005.
Oops, thanks for fixing this.
*** Bug 539100 has been marked as a duplicate of this bug. ***
*** Bug 540515 has been marked as a duplicate of this bug. ***
*** Bug 541286 has been marked as a duplicate of this bug. ***