GNOME Bugzilla – Bug 655272
IMAPX: Leaking file descriptors from open pipes (probably CamelMsgPorts)
Last modified: 2015-08-26 08:07:19 UTC
Hi, since upgrading evolution to 3.0.2 I have the problem that evolution keeps crashing with "too many files" when it has been running 3+ days non-stop. I have set up an imapx account which connects to an Exchange IMAP server. Additionally I have set up around 10 filters for incoming mails. Now evolution crashes with "too many files". I cannot remember evolution 2.32 crashing - maybe after a month running non-stop ;) Most open files are "pipes".
----
shaller@shaller ~ $ ls -l1 /proc/4230/fd | grep pipe\: | wc -l
986
----
The last error messages on standard output are:
----
[...]
(evolution:4230): camel-WARNING **: CamelIMAPXStore::noop_sync() reported failure without setting its GError
[New Thread 0x7fffcbd05700 (LWP 8936)]
[New Thread 0x7fffcffa6700 (LWP 8938)]
[Thread 0x7fffcffa6700 (LWP 8938) exited]
[New Thread 0x7fffcffa6700 (LWP 8939)]
[Thread 0x7fffcbd05700 (LWP 8936) exited]
[Thread 0x7fffc8ff3700 (LWP 8929) exited]
[Thread 0x7fffcffa6700 (LWP 8939) exited]
[New Thread 0x7fffcffa6700 (LWP 9355)]
[Thread 0x7fffcffa6700 (LWP 9355) exited]
[New Thread 0x7fffcffa6700 (LWP 9440)]
(evolution:4230): camel-WARNING **: CamelIMAPXStore::noop_sync() reported failure without setting its GError
[New Thread 0x7fffc8ff3700 (LWP 9441)]
[New Thread 0x7fffcbd05700 (LWP 9442)]
[Thread 0x7fffcbd05700 (LWP 9442) exited]
[New Thread 0x7fffcbd05700 (LWP 9443)]
[Thread 0x7fffc8ff3700 (LWP 9441) exited]
[Thread 0x7fffcffa6700 (LWP 9440) exited]
[Thread 0x7fffcbd05700 (LWP 9443) exited]
[New Thread 0x7fffcbd05700 (LWP 9490)]
[Thread 0x7fffcbd05700 (LWP 9490) exited]
[New Thread 0x7fffcbd05700 (LWP 9534)]
(evolution:4230): camel-WARNING **: CamelIMAPXStore::noop_sync() reported failure without setting its GError
[New Thread 0x7fffcffa6700 (LWP 9535)]
[New Thread 0x7fffc8ff3700 (LWP 9536)]
[Thread 0x7fffc8ff3700 (LWP 9536) exited]
[New Thread 0x7fffc8ff3700 (LWP 9537)]
[Thread 0x7fffcffa6700 (LWP 9535) exited]
[Thread 0x7fffcbd05700 (LWP 9534) exited]
[Thread 0x7fffc8ff3700 (LWP 9537) exited]
[New Thread 0x7fffc8ff3700 (LWP 9575)]
[Thread 0x7fffc8ff3700 (LWP 9575) exited]
[New Thread 0x7fffc8ff3700 (LWP 9667)]
(evolution:4230): camel-WARNING **: CamelIMAPXStore::noop_sync() reported failure without setting its GError
[New Thread 0x7fffcbd05700 (LWP 9668)]
[New Thread 0x7fffcffa6700 (LWP 9669)]
[Thread 0x7fffcffa6700 (LWP 9669) exited]
[New Thread 0x7fffcffa6700 (LWP 9670)]
[New Thread 0x7fffc3fff700 (LWP 9671)]
[New Thread 0x7fffd19d7700 (LWP 9672)]
[New Thread 0x7fffd0fa8700 (LWP 9673)]
[Thread 0x7fffd0fa8700 (LWP 9673) exited]
[New Thread 0x7fffd0fa8700 (LWP 9674)]
[Thread 0x7fffd0fa8700 (LWP 9674) exited]
[New Thread 0x7fffd0fa8700 (LWP 9675)]
(evolution:4230): camel-CRITICAL **: camel_stream_write_to_stream: assertion `CAMEL_IS_STREAM (output_stream)' failed
(evolution:4230): camel-WARNING **: CamelIMAPXFolder::get_message_sync() reported failure without setting its GError
(evolution:4230): e-data-server-WARNING **: Error in execution: Nachricht konnte nicht abgerufen werden ("The message could not be retrieved")
I/O error : Too many open files
I/O error : Too many open files
I/O warning : failed to load external entity "/home/shaller/.config/evolution/mail/views/current_view-imapx:__Stephan_20Haller@172.16.11.44_INBOX.xml"
GLib-ERROR **: Cannot create pipe main loop wake-up: Zu viele offene Dateien ("Too many open files")
aborting...
Program received signal SIGABRT, Aborted.
----
This last time I started evolution in gdb to get a backtrace and an overview of all running threads. I hope it is useful:
----
(gdb) bt
+ Trace 227879
2634 Thread 0x7fffd0fa8700 (LWP 9675) 0x00007ffff5e94cb3 in poll () from /lib64/libc.so.6
2631 Thread 0x7fffd19d7700 (LWP 9672) 0x00007ffff6451b4d in nanosleep () from /lib64/libpthread.so.0
2630 Thread 0x7fffc3fff700 (LWP 9671) 0x00007ffff644e81b in pthread_cond_timedwait () from /lib64/libpthread.so.0
2629 Thread 0x7fffcffa6700 (LWP 9670) 0x00007ffff5e94cb3 in poll () from /lib64/libc.so.6
2627 Thread 0x7fffcbd05700 (LWP 9668) 0x00007ffff644e81b in pthread_cond_timedwait () from /lib64/libpthread.so.0
2626 Thread 0x7fffc8ff3700 (LWP 9667) 0x00007ffff644e49c in pthread_cond_wait () from /lib64/libpthread.so.0
2125 Thread 0x7fffd07a7700 (LWP 13476) 0x00007ffff5e94cb3 in poll () from /lib64/libc.so.6
2 Thread 0x7fffe3234700 (LWP 4233) 0x00007ffff5e94cb3 in poll () from /lib64/libc.so.6
* 1 Thread 0x7ffff7f99900 (LWP 4230) 0x00007ffff5dfffe5 in raise () from /lib64/libc.so.6
----
If you need more information, I will try to provide it. But in some cases it takes around three days of waiting for evolution to crash :(
Regards, Stephan
Confirming, I've seen this myself.
Can be related to bug 630124.
Similar downstream bug report from 3.0.2 exposing a crash on quit: https://bugzilla.redhat.com/show_bug.cgi?id=724947
+ Trace 227901
Thread 1 (Thread 0xb7744890 (LWP 14348))
Created attachment 192979 [details] [review] proposed eds patch for evolution-data-server; This is the most common place where CamelMsgPort-s are leaked in IMAPX, plus one in camel_folder_get_message_sync(), because each camel_operation_push_message() also references itself, which is a kind of circular dependency on the object. That is never good, but I do not want to judge it here, because I do not understand why Matthew did it this way. I also do not understand the IMAPX code well enough, thus I want a review from Chen here, as he thinks IMAPX is finished, but from my point of view some parts are not complete. For example, jobs which are in a queue - are they ever freed? I do not see the place, but because I do not understand the code enough, I do not want to break things I do not understand. Not this time :)
I also made commit 8c351c1 in evolution master (3.1.5+) for a leak I found while investigating this bug.
*** Bug 656319 has been marked as a duplicate of this bug. ***
Created commit c597c9e in eds master (3.1.90+)
*** Bug 657300 has been marked as a duplicate of this bug. ***
*** Bug 655303 has been marked as a duplicate of this bug. ***
*** Bug 659996 has been marked as a duplicate of this bug. ***
*** Bug 630124 has been marked as a duplicate of this bug. ***