Bug 534080 - CamelStreamFilter fails to write if target stream is slow
Status: RESOLVED FIXED
Product: evolution-data-server
Classification: Platform
Component: Mailer
Version: 2.24.x (obsolete)
Hardware: Other / OS: All
Priority: Normal / Severity: normal
Target Milestone: ---
Assigned To: Priit Laes (IRC: plaes)
QA Contact: Evolution QA team
Duplicates: 539100 540515 541286 (view as bug list)
Depends on:
Blocks:
 
 
Reported: 2008-05-20 17:30 UTC by Jose Dapena Paz
Modified: 2013-09-14 16:52 UTC
See Also:
GNOME target: ---
GNOME version: 2.21/2.22


Attachments
Patch: Camel stream filter waits properly for slow streams (766 bytes, patch)
2008-05-20 17:33 UTC, Jose Dapena Paz
rejected
proposed eds patch (2.06 KB, patch)
2008-06-02 16:03 UTC, Milan Crha
committed
fixes-attachment-saving.patch (928 bytes, patch)
2008-06-18 19:17 UTC, Priit Laes (IRC: plaes)
committed

Description Jose Dapena Paz 2008-05-20 17:30:31 UTC
Please describe the problem:
When a stream is written through CamelStreamFilter and the target stream is slow (for example, an obex/bluetooth stream), the write method (and also usages of decode_to_stream) fails, returning -1. In particular, this happens when the write call on the target stream does not write the whole buffer but only part of it.

Steps to reproduce:
1. Use a camel stream filter on a slow target stream.
2. On some write attempts, the target stream will only write part of the buffer (that is, the write returns fewer bytes than the number of bytes requested).

Actual results:
When a write attempt does not write the whole buffer, the operation fails.

Expected results:
It should simply try again with the remaining part of the buffer, unless there was a real error in the target stream write.

Does this happen every time?
Yes (every time you get a partial buffer write).

Other information:
Comment 1 Jose Dapena Paz 2008-05-20 17:33:11 UTC
Created attachment 111239 [details] [review]
Patch: Camel stream filter waits properly for slow streams

This patch solves the issue by looping camel_stream_write inside do_write, so that partial writes to the destination stream are taken into account.

Changelog would be:
* evolution-data-server/camel/camel-stream-filter.c:
  (do_write): if the camel_stream_write call does not write the
  full buffer but it's not due to an error, then loop to go on 
  writing the stream.
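
In other words, the approach amounts to something like this (a minimal sketch, not the actual attachment; the filter, buffer and len names are assumed from the surrounding do_write() code):

#include <sys/types.h>
#include <camel/camel-stream-filter.h>

/* Keep calling camel_stream_write() with the still-unwritten part of the
 * buffer until everything is out or a real error occurs. */
static ssize_t
do_write_looping (CamelStreamFilter *filter, const char *buffer, size_t len)
{
        size_t written = 0;

        while (written < len) {
                ssize_t n = camel_stream_write (filter->source,
                                                buffer + written,
                                                len - written);

                if (n == -1)
                        return -1;   /* real error on the target stream */

                written += n;        /* partial write: retry with the rest */
        }

        return (ssize_t) written;
}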
Comment 2 Jeffrey Stedfast 2008-05-20 18:12:45 UTC
you can't rely on the return value of camel_stream_write() for anything other than pass/fail

the correct fix for this is simply:

if (camel_stream_write(filter->source, buffer, len) == -1)
	return -1;

camel_stream_write() will loop until all data is written (it checks for EINTR and EAGAIN itself)

Other errors are irrecoverable (EBADF, etc).
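
In other words, each CamelStream write implementation is expected to contain an internal retry loop roughly like the following (an illustrative sketch, not the actual camel-stream-fs.c code; a real implementation would typically poll before retrying on EAGAIN):

#include <errno.h>
#include <unistd.h>

static ssize_t
write_all (int fd, const char *buffer, size_t n)
{
        size_t written = 0;

        while (written < n) {
                ssize_t w = write (fd, buffer + written, n - written);

                if (w == -1) {
                        if (errno == EINTR || errno == EAGAIN)
                                continue;   /* recoverable, retry */
                        return -1;          /* EBADF etc., give up */
                }

                written += w;
        }

        return (ssize_t) written;
}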
Comment 3 Matthew Barnes 2008-05-20 18:35:37 UTC
Yeah, I would hope every call to camel_stream_write() wouldn't have to deal with short writes.  Might be a minor API break, but should the write() method be modified to return a boolean instead of a ssize_t to help clarify its behavior?

Rejecting the patch per comment #2.
Comment 4 Jose Dapena Paz 2008-05-21 07:35:57 UTC
(In reply to comment #2)
> you can't rely on the return value of camel_stream_write() for anything other
> than pass/fail
> 
> the correct fix for this is simply:
> 
> if (camel_stream_write(filter->source, buffer, len) == -1)
>         return -1;
> 
> camel_stream_write() will loop until all data is written (it checks for EINTR
> and EAGAIN itself)

But this way you break the standard behavior of almost any POSIX write() method. If you get a partial write (because of buffering, slow media, etc.), this way of handling it will cause loss of data (the buffer write will appear successful, but in fact not all the data will have been written).

Another way of handling it would be to return an error on a partial write. But then we wouldn't know how much we actually managed to write, which could lead to writing the same data more than once.

Or we assume that camel_stream_write always writes the full buffer, never just a part of it. Then we have to change almost every stream write implementation (at least the fs and vfs streams assume the POSIX behavior).

Then, the current handling is of course broken, but the proposal to simply check the return value would also fail for partial writes.
Comment 5 Matthew Barnes 2008-05-21 11:17:42 UTC
I think Jeff was saying that the various CamelStream.write() implementations already deal with recoverable errors if it's a possibility for that type of stream, so the caller of camel_stream_write() never has to worry about it.

I haven't verified that for all cases, but the subclasses I looked at do appear to handle partial writes.

If indeed these are the semantics of camel_stream_write() then its documentation is inaccurate and should be updated.  Currently it states:

    Attempts to write up to @n bytes of @buffer into @stream.

    Returns the number of bytes written to the stream, or %-1 on error
    along with setting errno.
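
If the write-everything-or-fail semantics are indeed guaranteed, the wording could be changed to something along these lines (a suggested sketch only, not committed text):

/**
 * camel_stream_write:
 * @stream: a CamelStream
 * @buffer: buffer to write
 * @n: number of bytes to write
 *
 * Attempts to write @n bytes of @buffer into @stream, retrying
 * internally on recoverable errors (EINTR, EAGAIN) until the whole
 * buffer has been written.
 *
 * Returns the number of bytes written (always @n on success), or %-1
 * on an irrecoverable error along with setting errno.
 **/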
Comment 6 Philip Van Hoof 2008-05-21 13:22:28 UTC
The problem is that in this case the caller of the write doesn't know that the implementation of the stream has cut the data into pieces and is transmitting the data piece by piece.

This makes the implementation of the stream that does the cutting responsible for finishing each slice (don't just look at the patch, look at the surrounding code).
Comment 7 Jeffrey Stedfast 2008-05-21 14:07:21 UTC
If this is the bug Philip was talking to me about the other day, I think the correct fix is to fix CamelStreamGnomeVFS.write() to work the way the other streams work.
Comment 8 Philip Van Hoof 2008-05-21 14:46:19 UTC
The problem with that, although I agree it would fix it, is that GIO (a new streaming library in GLib) is making the same assumption (that a write is not a write_all).

Personally, I would prefer if Camel's usage of camel_stream_write were reviewed to behave the way GIO's write() behaves (for consistency, which might make a future migration of CamelStream to GIO easier someday).

But it's just an opinion, of course.
Comment 9 Milan Crha 2008-05-26 09:41:55 UTC
I agree with Philip; it would be better to use ..._write_all in camel-stream-vfs.c:stream_write. I do not know why I used the ..._write function in the GIO rewrite.
Comment 10 Milan Crha 2008-06-02 16:03:49 UTC
Created attachment 111964 [details] [review]
proposed eds patch

for evolution-data-server;

I also found a g_output_stream_write call in the calendar code, so the patch uses g_output_stream_write_all in both places. I believe it's better to call the ..._all function, because it's intended for exactly these cases.
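
For reference, the difference amounts to roughly this (an illustrative sketch, not the committed patch; the write_buffer() wrapper is invented here):

#include <gio/gio.h>

static gboolean
write_buffer (GOutputStream *stream, const char *buffer, gsize len, GError **error)
{
        gsize bytes_written = 0;

        /* g_output_stream_write() may do a short write on a slow target;
         * g_output_stream_write_all() loops internally until the whole
         * buffer is written or an error occurs. */
        return g_output_stream_write_all (stream, buffer, len,
                                          &bytes_written, NULL, error);
}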
Comment 11 Srinivasa Ragavan 2008-06-13 03:46:17 UTC
Fejj, can you look at Milan's patch?
Comment 12 Jeffrey Stedfast 2008-06-13 11:29:55 UTC
looks good
Comment 13 Milan Crha 2008-06-13 11:57:47 UTC
Committed to trunk. Committed revision 8980.
Comment 14 Priit Laes (IRC: plaes) 2008-06-18 19:14:47 UTC
(In reply to comment #10)
> Created an attachment (id=111964) [edit]
> proposed eds patch
> 
> for evolution-data-server;

The patch broke attachment saving for me.

Patch upcoming.
Comment 15 Priit Laes (IRC: plaes) 2008-06-18 19:17:40 UTC
Created attachment 113005 [details] [review]
fixes-attachment-saving.patch
Comment 16 Matthew Barnes 2008-06-18 19:28:55 UTC
Yes, good catch.  Please commit.
Comment 17 Priit Laes (IRC: plaes) 2008-06-18 19:47:50 UTC
Committed to trunk as revision 9005.
Comment 18 Milan Crha 2008-06-19 09:08:30 UTC
Oops, thanks for fixing this.
Comment 19 Matthew Barnes 2008-06-19 11:15:56 UTC
*** Bug 539100 has been marked as a duplicate of this bug. ***
Comment 20 David Ronis 2008-06-27 17:14:41 UTC
*** Bug 540515 has been marked as a duplicate of this bug. ***
Comment 21 Matthew Barnes 2008-07-03 02:06:20 UTC
*** Bug 541286 has been marked as a duplicate of this bug. ***