GNOME Bugzilla – Bug 324168
Evolution crashed when disabling a configured IMAP account
Last modified: 2008-09-15 09:49:24 UTC
An IMAP account was already configured. Disabling that IMAP account crashed Evolution. Backtrace was generated from '/opt/gnome/bin/evolution'.
Using host libthread_db library "/lib/tls/libthread_db.so.1".
[Thread debugging using libthread_db enabled]
[New Thread 1097752544 (LWP 9424)]
[New Thread 1114184624 (LWP 9526)]
[New Thread 1133849520 (LWP 9517)]
[New Thread 1112083376 (LWP 9457)]
[New Thread 1122982832 (LWP 9450)]
[New Thread 1120881584 (LWP 9447)]
[New Thread 1109982128 (LWP 9427)]
[New Thread 1107127216 (LWP 9426)]
0xffffe410 in ?? ()
+ Trace 64626
Thread 1 (Thread 1097752544 (LWP 9424))
I cannot reproduce it with my IMAP account and 2.5.3.
*** Bug 326804 has been marked as a duplicate of this bug. ***
Happens in 2.5.91 also.
Happens very frequently.
+ Trace 66475
(gdb) p *mi->summary
$2 = {parent = {klass = 0x81724d8, magic = 0, hooks = 0x415caa40, ref_count = 0, flags = 0, next = 0x0, prev = 0x415c6f20}, priv = 0x405b31a9, version = 1096577232, flags = 0, nextuid = 0, time = 0, saved_count = 0, unread_count = 0, deleted_count = 0, junk_count = 0, message_info_size = 0, content_info_size = 0, message_info_chunks = 0x0, content_info_chunks = 0x0, summary_path = 0x0, build_content = 0, messages = 0x41, messages_uid = 0x3, folder = 0x0}
(gdb) p *mi->summary->priv
$3 = {filter_charset = 0x53e58955, filter_index = 0xe814ec83, filter_64 = 0x0, filter_qp = 0x97c3815b, filter_uu = 0xe8000d56, filter_save = 0xffffe5a7, filter_html = 0x8308558b, filter_stream = 0x525008ec, index = 0xffe8bae8, summary_lock = 0x10c483ff, io_lock = 0x8bf84589, filter_lock = 0x408bf845, alloc_lock = 0xcec830c, ref_lock = 0xead5e850}
(gdb) p (((CamelFolderSummary *)mi)->priv->ref_lock)
$4 = (GMutex *) 0x31
Fixed on head. Closing.
*** Bug 333316 has been marked as a duplicate of this bug. ***
According to bug 333316, this is not yet fixed in .92, which was released after this bug was claimed to be fixed. Reopening.
Hmm, skadz, can you confirm you got this crash with Evo 2.5.92? What I am asking about is the exact Evo version, rather than the GNOME version 2.13.92.
[skadz@codewarrior ~]$ rpm -q evolution
evolution-2.5.92-1
If you have a vfolder with some messages in it, you can reproduce this every time.
The vfolder being freed twice causes the crash. When an account is disabled, the vfolder is freed once in remove_store; the second free happens when the old message view is freed before the new one is created.
When an account is disabled, all message infos in the real folder are freed, but the messages in the vfolder are not. The link to the real folder still exists, and this leads to a crash while refreshing the vfolder.
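A minimal self-contained sketch of that double free, with made-up stand-in types (RealInfo/VeeInfo are illustrative, not Camel structures):

#include <stdio.h>
#include <stdlib.h>

typedef struct { int placeholder; } RealInfo;
typedef struct { RealInfo *real; } VeeInfo;

int
main (void)
{
	RealInfo *real = calloc (1, sizeof *real);
	VeeInfo vee = { real };

	/* remove_store(): the account is disabled and the real folder's
	 * data is freed ... */
	free (real);

	/* ... but vee.real was never cleared.  When the old message view
	 * is freed before the new one is created, the same memory would
	 * be freed again -- the double free described above: */
	/* free (vee.real); */

	printf ("vee.real is now dangling: %p\n", (void *) vee.real);
	return 0;
}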
Is the problem in camel_folder_summary_remove_range? camel_folder_summary_remove_range(CamelFolderSummary *s, int start, int end) first removes the MessageInfos from the summary, then calls camel_message_info_free for each MessageInfo that was removed. This leads to an inconsistency: a MessageInfo whose refcount is still not less than 1 can be left with an invalid summary field. Can anyone find a good way to solve this bug? It has been too tough a task for me to overcome for a long time. Please help.
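A simplified sketch of the sequence just described, assuming the summary keeps its infos in the messages GPtrArray visible in the gdb dump above; this is not the real implementation:

#include <camel/camel-folder-summary.h>

/* Simplified sketch, not the real camel_folder_summary_remove_range().
 * Step 1 unlinks each info from the summary; step 2 drops the
 * summary's reference.  Any other holder whose refcount keeps the
 * info alive (a vfolder, say) is left with info->summary pointing at
 * a summary that no longer tracks the info. */
static void
summary_remove_range_sketch (CamelFolderSummary *s, int start, int end)
{
	int i;

	for (i = start; i <= end; i++) {
		/* step 1: unlink from the summary's message array */
		CamelMessageInfo *info =
			g_ptr_array_remove_index (s->messages, start);

		/* step 2: drop the summary's reference; if the refcount
		 * is still >= 1 elsewhere, the inconsistency begins here */
		camel_message_info_free (info);
	}
}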
*** Bug 306001 has been marked as a duplicate of this bug. ***
*** Bug 308603 has been marked as a duplicate of this bug. ***
Retargeting.
*** Bug 323470 has been marked as a duplicate of this bug. ***
Also see bug 310866, which has been closed as fixed but I think has the same issue.
*** Bug 346293 has been marked as a duplicate of this bug. ***
Still present in Evolution 2.7.90. Backtrace was generated from '/opt/gnome/libexec/evolution-2.8'.
Using host libthread_db library "/lib/libthread_db.so.1".
[Thread debugging using libthread_db enabled]
[New Thread -1237038752 (LWP 24263)]
[New Thread -1611822176 (LWP 24357)]
[New Thread -1678963808 (LWP 24355)]
[New Thread -1653785696 (LWP 24351)]
[New Thread -1645392992 (LWP 24350)]
[New Thread -1637000288 (LWP 24349)]
[New Thread -1628607584 (LWP 24348)]
[New Thread -1599505504 (LWP 24340)]
[New Thread -1425163360 (LWP 24291)]
[New Thread -1416770656 (LWP 24290)]
[New Thread -1408377952 (LWP 24289)]
0xffffe410 in __kernel_vsyscall ()
+ Trace 69763
*** Bug 348315 has been marked as a duplicate of this bug. ***
*** Bug 350752 has been marked as a duplicate of this bug. ***
*** Bug 355340 has been marked as a duplicate of this bug. ***
*** Bug 363741 has been marked as a duplicate of this bug. ***
*** Bug 363415 has been marked as a duplicate of this bug. ***
*** Bug 367021 has been marked as a duplicate of this bug. ***
*** Bug 373356 has been marked as a duplicate of this bug. ***
*** Bug 375830 has been marked as a duplicate of this bug. ***
*** Bug 415099 has been marked as a duplicate of this bug. ***
*** Bug 422005 has been marked as a duplicate of this bug. ***
*** Bug 422456 has been marked as a duplicate of this bug. ***
*** Bug 422459 has been marked as a duplicate of this bug. ***
No Evo 2.10/GNOME 2.18 reports yet; it would be interesting to know whether this has been fixed in the meantime.
Not yet. I can still reproduce it on the latest Evolution.
*** Bug 360533 has been marked as a duplicate of this bug. ***
*** Bug 436203 has been marked as a duplicate of this bug. ***
*** Bug 378098 has been marked as a duplicate of this bug. ***
Similar bug: 463896.
This is clearly confirmed in Evo 2.12.0 and should really be fixed! Just ask if you need info/testing; many of us are experiencing this bug...
Haven't seen Evo 2.12 yet, but bug 455802 is on Evo 2.10.
*** Bug 503555 has been marked as a duplicate of this bug. ***
For the developer: I think you can fix this by making the lock in camel-folder-summary.c for CamelMessageInfo instances a recursive one.
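A minimal sketch of that suggestion using the GLib primitive of that era; the lock and function names are illustrative, not the actual camel-folder-summary.c symbols:

#include <glib.h>

/* Sketch: a recursive lock can be taken again by the thread that
 * already holds it, so a re-entrant camel_message_info_free() path
 * would no longer deadlock on the summary's ref_lock. */
static GStaticRecMutex ref_lock = G_STATIC_REC_MUTEX_INIT;

static void
message_info_unref_sketch (void)
{
	g_static_rec_mutex_lock (&ref_lock);
	/* ... decrement the refcount, free the info if it reached 0 ... */
	g_static_rec_mutex_unlock (&ref_lock);
}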
Just my observations on the initial stack trace: a vee folder holds only a link to the real message info structure, which belongs to a different summary than the vee one. camel_message_info_free uses the lock from the summary the message info belongs to (or some global lock if it belongs to no summary), and (probably) the summary itself got freed before this real message info had a chance to decrease its ref counter, so it crashed when trying to obtain a lock from already-freed memory. (Notice there is only one "probably".)

The main problem is that there is no easy way to unset the summary member of a CamelMessageInfo after a call to camel_message_info_free, because there is no guarantee the structure is still alive after the call (it is a free; the variable should not be accessed afterwards). Even worse, when I try to do this (by a more or less ugly approach), it crashes a bit later, in another function; a sketch of why follows below. So this looks to me like a "not the best" design, which is hard to fix "easily". Of course, if I comment out the ref increase and the camel_message_info_free call in camel-vee-summary.c, it just works, but it leaks, more or less. I will try to investigate further. These are just my thoughts.

My reproducer steps, because it doesn't crash consistently for me:
a) Make sure there is at least one message from the IMAP account in the search folder.
b) Select that search folder.
c) Disable the account.
d) Close Evolution and run it again.
e) Enable the IMAP account and disable it a few seconds later. It will crash now.
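To make the "crashes a bit later" point concrete, a sketch of why the obvious workaround fails (illustrative only; assumes the Camel headers):

#include <camel/camel-folder-summary.h>

/* Why "unset info->summary after camel_message_info_free()" cannot
 * work: the free may drop the last reference and destroy the
 * structure, so any access afterwards is itself a use-after-free --
 * matching the "crashes a bit later, in another function"
 * observation above. */
static void
broken_workaround_sketch (CamelMessageInfo *info)
{
	camel_message_info_free (info);

	/* 'info' may already be freed memory here; the commented-out
	 * line below is exactly the access that moves the crash into
	 * another function: */
	/* info->summary = NULL; */
}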
Created attachment 102587 [details] [review]
proposed eds patch

For evolution-data-server; as simple as possible, it just took me too long to get the idea :) See my comments in the patch.
Milan, I think this will have a negative effect:
(*) My accounts are removed, but the summary object and the summary's message infos are still in memory.
(*) What would happen if I change flags/tags? Would they be synced with the server? With no account there, how can you sync?
(*) The CamelFolder is gone but the summary is still there. A lot of code assumes that summary->folder is valid; this may bring more crashers/issues.

I feel that the folder-deleted event doesn't reach the vfolders when the account is disabled, so it may make sense to look at how the account is removed and whether folder-deleted is emitted or not. Otherwise, this scenario works fine with local folders (you can't disable an account there, but deleting a folder works). If you think that isn't clean, you could have a store deleted/removed signal that vfolders listen to, deleting the folders that belong to that store along with their summary/message infos when the account is taken off (a sketch follows below). Hope you got what I said.
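A rough sketch of that store-removed idea; the "store_removed" event name and the hookup point are hypothetical, while camel_object_hook_event and camel_vee_folder_remove_folder are the existing Camel calls assumed here:

#include <camel/camel-object.h>
#include <camel/camel-vee-folder.h>

/* Hypothetical wiring: when a store is removed/disabled, each vfolder
 * drops the affected subfolder itself, freeing its vee message infos
 * while the real summary is still alive. */
static void
store_removed_cb (CamelObject *store, gpointer event_data, gpointer user_data)
{
	CamelVeeFolder *vf = user_data;
	CamelFolder *sub = event_data;   /* the folder being dropped */

	camel_vee_folder_remove_folder (vf, sub);
}

/* hooked up when the vfolder starts tracking the store, e.g.: */
/* camel_object_hook_event (store, "store_removed", store_removed_cb, vf); */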
*** Bug 334053 has been marked as a duplicate of this bug. ***
*** Bug 417778 has been marked as a duplicate of this bug. ***
*** Bug 446022 has been marked as a duplicate of this bug. ***
*** Bug 378451 has been marked as a duplicate of this bug. ***
Here (bug 348315) we have another possible duplicate, with a couple of its own duplicates. I guess we will mark them all (I believe there are more than this) at once, right?
We should have at least 20+ dupes on the stacktrace list. I'm taking your patch for 2.21.90. I added another hunk, which otherwise can cause some issues during the freeing of a vee message info if it is cloned and used:

 static CamelMessageInfo *
@@ -54,8 +60,10 @@ vee_message_info_clone(CamelFolderSummar
 	to = (CamelVeeMessageInfo *)camel_message_info_new(s);
 	to->real = camel_message_info_clone(from->real);
+	/* FIXME: We may not need this during CamelDBSummary */
+	camel_object_ref (to->real->summary);
 	to->info.summary = s;
-
+
 	return (CamelMessageInfo *)to;
 }

Also, I added the FIXME for now. I noticed that my previous concerns may not hold, as I saw that the infos were removed from the summary. I have committed to trunk, rev 8429.
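For symmetry, the ref taken in the hunk above presumably needs a matching unref when the vee info is freed; a hypothetical sketch of that counterpart (not the committed code):

#include <camel/camel-vee-summary.h>

/* Hypothetical counterpart: when a cloned CamelVeeMessageInfo is
 * freed, drop the reference that vee_message_info_clone() took on the
 * real info's summary, so the real summary stays alive exactly as
 * long as something still links to it. */
static void
vee_message_info_free_sketch (CamelMessageInfo *info)
{
	CamelVeeMessageInfo *mi = (CamelVeeMessageInfo *) info;
	CamelFolderSummary *real_summary = mi->real->summary;

	camel_message_info_free (mi->real);        /* drop the real info */
	if (real_summary)
		camel_object_unref (real_summary); /* balance camel_object_ref() */
}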
We can close the bug once Milan looks at my extra hunk.
Looks good except for one little thing: your editor added one \t at the beginning of a line which should be empty (Matt cleaned up such things in the recent past). Anyway, closing as fixed then.
*** Bug 512900 has been marked as a duplicate of this bug. ***
*** Bug 512910 has been marked as a duplicate of this bug. ***
*** Bug 512903 has been marked as a duplicate of this bug. ***
*** Bug 514206 has been marked as a duplicate of this bug. ***
*** Bug 508036 has been marked as a duplicate of this bug. ***
*** Bug 515104 has been marked as a duplicate of this bug. ***
*** Bug 514603 has been marked as a duplicate of this bug. ***
*** Bug 514721 has been marked as a duplicate of this bug. ***
*** Bug 504558 has been marked as a duplicate of this bug. ***
*** Bug 517078 has been marked as a duplicate of this bug. ***
*** Bug 518205 has been marked as a duplicate of this bug. ***
*** Bug 464852 has been marked as a duplicate of this bug. ***
*** Bug 463896 has been marked as a duplicate of this bug. ***
*** Bug 519016 has been marked as a duplicate of this bug. ***
*** Bug 518949 has been marked as a duplicate of this bug. ***
*** Bug 352396 has been marked as a duplicate of this bug. ***
*** Bug 509654 has been marked as a duplicate of this bug. ***
*** Bug 504236 has been marked as a duplicate of this bug. ***
*** Bug 524439 has been marked as a duplicate of this bug. ***
*** Bug 521108 has been marked as a duplicate of this bug. ***
*** Bug 419556 has been marked as a duplicate of this bug. ***
*** Bug 528771 has been marked as a duplicate of this bug. ***
*** Bug 521793 has been marked as a duplicate of this bug. ***
*** Bug 485073 has been marked as a duplicate of this bug. ***
*** Bug 499748 has been marked as a duplicate of this bug. ***
*** Bug 511175 has been marked as a duplicate of this bug. ***