GNOME Bugzilla – Bug 644695
Unrefs CamelStore when it should not in 3.2.x+
Last modified: 2012-02-17 17:36:40 UTC
Moving this from a downstream bug report: https://bugzilla.redhat.com/show_bug.cgi?id=684327

abrt version: 1.1.17
architecture: x86_64
Attached file: backtrace, 45216 bytes
cmdline: evolution
component: evolution
Attached file: coredump, 96567296 bytes
crash_function: camel_folder_summary_count
executable: /usr/bin/evolution
kernel: 2.6.38-0.rc8.git0.1.fc15.x86_64
package: evolution-2.91.91-1.fc15
rating: 4
reason: Process /usr/bin/evolution was killed by signal 11 (SIGSEGV)
release: Fedora release 15 (Lovelock)
time: 1299871576
uid: 500

How to reproduce
-----
1. update to evolution*-2.91.91-1.fc15.x86_64
2. reboot
3. start evolution

Core was generated by `evolution'. Program terminated with signal 11, Segmentation fault.
+ Trace 226304
Thread 6 (Thread 0x7fb5bf0e19e0 (LWP 2447))
Thread 5 (Thread 0x7fb5aa32d700 (LWP 2475))
Thread 1 (Thread 0x7fb5a9900700 (LWP 2457))
I tried to reproduce this with no luck. From the backtrace I see that the IMAP account has a real Junk folder set, and during the initial message fetching some junk messages were found, which were about to be moved to this real Junk folder, but the folder does not have a summary set, which should not be possible in regular cases, as far as I saw in the code. I'm wondering about the setup and whether this is reproducible at all. I asked the downstream reporter to join this bug, so I'm setting this to need-info until he comes (or anyone else is able to reproduce this) and provides the missing information.
I don't see the downstream reporter in the cc list. Can you please ping him once on the downstream bug, as a last try before closing the bug as incomplete :-)
Sure, done.
My setup is three IMAP accounts on two servers. One of the servers works fine; the other one (run by a friend of mine, who has had less and less time to do "support") is, I guess, the one having the problem. I pointed my "trash" to the corresponding folder (or at least the one used by nutsmail as trash), and created my own folder to be used as the junk folder on the server (for the working server this was done automatically); however, this created all kinds of strangeness, like filters copying messages instead of moving them and so on. Please advise me on how I can help find out where the problem lies, and, if it turns out to be the server, give me pointers on how I can advise my friend to get things working.
Shall we move the bug status out of needinfo?
Similar crash from 3.0.2: https://bugzilla.redhat.com/show_bug.cgi?id=714034 The common part is that it's IMAP and the folder's summary is NULL.
+ Trace 227524
Thread 1 (Thread 0x7fb4092e2700 (LWP 2293))
I see this bug, but only if I have a custom junk folder set, it seems. You can avoid the crash on startup if you very quickly toggle the work-offline button (or I guess you could just disable your network connection).
Thanks for the update. I tried with a real junk folder set on my Google account, but it still doesn't want to crash. When you run evolution from a console, are there any runtime warnings before evolution crashes for you?
Closing this bug report as no further information has been provided. Please feel free to reopen this bug if you can provide the information asked for. Thanks!
I just hit this crash, with Evolution 3.2.2 on a Fedora 16 machine (evolution-3.2.2-1.fc16.x86_64). I have two IMAP accounts, one on my employer's server and one on the server of a company we are working with. Both servers supply Junk and Trash folders, which I have my accounts set to use. What other information do you need?
Hmmm, restarting evolution leads to a repeat crash within seconds. How do I get a usable evolution back? I don't even have time to push any buttons before it crashes. I've had this setup for months, although I upgraded to version 3.2.2 just this morning. I'll have to see if downgrading is possible, I guess.
Could you install the debug info packages for evolution-data-server and evolution and update the backtrace, please? Alternatively, try to run evolution under valgrind, to see whether it shows anything useful. Though it's possible some folders.db file is broken for some reason, which leads to this crash. Hard to tell.

You can run evolution under valgrind like this:
   $ G_SLICE=always-malloc valgrind --num-callers=50 evolution &>log.txt

To get a backtrace, run evolution under gdb, say like this:
   $ gdb evolution --ex r --ex "t a a bt" --ex q

and then, when it crashes and prints the backtrace, answer 'y' to the quit prompt and copy here everything on the console from the gdb command invocation to the end. Note that the gdb output can expose private information like passwords, email addresses or server names, so please make sure you do not include them here.
Created attachment 202377 [details]
Evolution backtrace

The crash triggered abrt, which saved a core file and other information. Here's the backtrace from the core file. I'll run under valgrind next and see what that shows. By the way, I'll keep the core file for a while, in case it can be used to gather any other information you need.
Created attachment 202381 [details]
Valgrind output

It took a while to crash while running under valgrind, but it finally did. Here's the output from valgrind.
Nice trace, Jerry.
Thanks Jerry, very nice backtrace and valgrind log. The trace below is the first hit there. I'll try to investigate what failed here (either a missing g_object_ref, or a g_object_unref where there shouldn't be one). From a few downstream bugs I see a common element: users have the tracker-miner-evolution plugin installed, but it has been confirmed that removing it doesn't help.

==25011== Thread 7:
==25011== Invalid read of size 8
==25011==    at 0x38CD246B1E: camel_offline_journal_write (camel-offline-journal.c:157)
==25011==    by 0x1C871E21: replay_offline_journal (camel-imap-folder.c:345)
==25011==    by 0x1C878CB0: imap_synchronize_sync (camel-imap-folder.c:1654)
==25011==    by 0x38CD23E732: camel_folder_synchronize_sync (camel-folder.c:3622)
==25011==    by 0xFFFD33E: refresh_folders_exec (mail-send-recv.c:988)
==25011==    by 0xFFF920F: mail_msg_proxy (mail-mt.c:416)
==25011==    by 0x38BA66C6F7: g_thread_pool_thread_proxy (gthreadpool.c:319)
==25011==    by 0x38BA66A1D5: g_thread_create_proxy (gthread.c:1962)
==25011==    by 0x38B9207D8F: start_thread (pthread_create.c:309)
==25011==    by 0x38B86EED0C: clone (clone.S:115)
==25011== Address 0x13aa5ec0 is 48 bytes inside a block of size 104 free'd
==25011==    at 0x4A0662E: free (vg_replace_malloc.c:366)
==25011==    by 0x38BA64B792: g_free (gmem.c:263)
==25011==    by 0x38BA66066E: g_slice_free1 (gslice.c:907)
==25011==    by 0x38BBE318B3: g_type_free_instance (gtype.c:1930)
==25011==    by 0x1C87411E: imap_folder_dispose (camel-imap-folder.c:210)
==25011==    by 0x38BBE113DA: g_object_unref (gobject.c:2709)
==25011==    by 0x38CD25D11E: signal_data_free (camel-store.c:120)
==25011==    by 0x38BA6415FD: g_source_callback_unref (gmain.c:1324)
==25011==    by 0x38BA640F55: g_source_destroy_internal (gmain.c:993)
==25011==    by 0x38BA644AF2: g_main_context_dispatch (gmain.c:2450)
==25011==    by 0x38BA645277: g_main_context_iterate (gmain.c:3073)
==25011==    by 0x38BA6457C4: g_main_loop_run (gmain.c:3281)
==25011==    by 0x38C8F516AC: gtk_main (gtkmain.c:1362)
==25011==    by 0x4022E2: main (main.c:696)
Created attachment 202636 [details] [review]
proposed trk patch

For tracker: I cannot reproduce this without tracker, and with it I see somewhat different crashes, but close enough to investigate. This is what I found in tracker. The main issue is with camel_session_get_service(), which returns a CamelStore, but since 3.2 it doesn't return a new reference to it, thus tracker should not unref the returned pointer. This may cause CamelFolder-s to be freed too early.

The topmost change, with PoolItem, is that thread_pool_exec() doesn't get a PoolItem as its parameter; it gets the WorkerThreadinfo as the parameter. I didn't find that one myself, valgrind did:

==19411== Thread 5:
==19411== Invalid read of size 4
==19411==    at 0x287C7287: thread_pool_exec (tracker-evolution-plugin.c:308)
==19411==    by 0x79386F7: g_thread_pool_thread_proxy (gthreadpool.c:319)
==19411==    by 0x79361D5: g_thread_create_proxy (gthread.c:1962)
==19411==    by 0x3E68E07D8F: start_thread (in /lib64/libpthread-2.14.90.so)
==19411==    by 0x3E68AEEDDC: clone (in /lib64/libc-2.14.90.so)
==19411== Address 0x10178658 is not stack'd, malloc'd or (recently) free'd

The rest of the changes are there to avoid critical warnings when the plugin is too quick (they might not be right; I do not know tracker's evo-plugin at all, this only helped to avoid runtime warnings being shown on the console). With the patch the valgrind log is pretty clean; the one remaining valgrind hit is shown below, but that might not be related to tracker at all, rather to GRegex.
==19411== Invalid read of size 1
==19411==    at 0x294CB8EC: tracker_string_to_date (tracker-date-time.c:184)
==19411==    by 0x287CA6A1: on_register_client_qry (tracker-evolution-plugin.c:1867)
==19411==    by 0x3E6DA67D26: g_simple_async_result_complete (gsimpleasyncresult.c:749)
==19411==    by 0x28C0A02C: tracker_sparql_backend_real_query_async_co (tracker-backend.vala:88)
==19411==    by 0x28C09AA1: tracker_sparql_backend_query_async_ready (tracker-backend.vala:86)
==19411==    by 0x3E6DA67D26: g_simple_async_result_complete (gsimpleasyncresult.c:749)
==19411==    by 0x28C1A3CD: tracker_bus_connection_real_query_async_co (tracker-bus.vala:107)
==19411==    by 0x28C185C6: __lambda0_ (tracker-bus.vala:86)
==19411==    by 0x28C185F3: ___lambda0__gasync_ready_callback (tracker-bus.vala:83)
==19411==    by 0x3E6DA67D26: g_simple_async_result_complete (gsimpleasyncresult.c:749)
==19411==    by 0x3E6DA67E38: complete_in_idle_cb (gsimpleasyncresult.c:761)
==19411==    by 0x7910A7C: g_main_context_dispatch (gmain.c:2425)
==19411== Address 0x2eddf891 is 0 bytes after a block of size 1 alloc'd
==19411==    at 0x4A074CD: malloc (vg_replace_malloc.c:236)
==19411==    by 0x7917650: g_malloc (gmem.c:164)
==19411==    by 0x792DCCD: g_strdup (gstrfuncs.c:100)
==19411==    by 0x79223DB: g_match_info_fetch (gregex.c:887)
==19411==    by 0x294CB877: tracker_string_to_date (tracker-date-time.c:179)
==19411==    by 0x287CA6A1: on_register_client_qry (tracker-evolution-plugin.c:1867)
==19411==    by 0x3E6DA67D26: g_simple_async_result_complete (gsimpleasyncresult.c:749)
==19411==    by 0x28C0A02C: tracker_sparql_backend_real_query_async_co (tracker-backend.vala:88)
==19411==    by 0x28C09AA1: tracker_sparql_backend_query_async_ready (tracker-backend.vala:86)
==19411==    by 0x3E6DA67D26: g_simple_async_result_complete (gsimpleasyncresult.c:749)
==19411==    by 0x28C1A3CD: tracker_bus_connection_real_query_async_co (tracker-bus.vala:107)
==19411==    by 0x28C185C6: __lambda0_ (tracker-bus.vala:86)
Ok, will work on reviewing and integrating this patch asap
Patch is committed in Tracker's master branch
(In reply to comment #19)
> Patch is committed in Tracker's master branch

What version will this be included in, please? Best of all, which stable version? Just so I know what version to tell other affected people to look for.
Should be in 0.12.9. The *when* I do that release is currently unknown but it's likely to be in the next 3 weeks or so.
Pushed also to the tracker-0.12 branch: http://git.gnome.org/browse/tracker/commit/?h=tracker-0.12&id=a1e60fec270add34509e71d7b1aba660510127ab