GNOME Bugzilla – Bug 544110
Exchange storage crashed - just enabled an exchange account
Last modified: 2010-06-10 06:34:39 UTC
Evolution 2.23.5 (after the db summary commit):

backend_last_client_gone_cb() called!
backend_last_client_gone_cb() called!

GThread-ERROR **: file gthread-posix.c: line 171 (g_mutex_free_posix_impl): error 'Device or resource busy' during 'pthread_mutex_destroy ((pthread_mutex_t *) mutex)'
aborting...

Program received signal SIGTRAP, Trace/breakpoint trap.
[Switching to Thread 0xb62406e0 (LWP 18956)]
IA__g_logv (log_domain=0xb67de7c8 "GThread", log_level=G_LOG_LEVEL_ERROR,
    format=0xb67deb0c "file %s: line %d (%s): error '%s' during '%s'",
    args1=0xbffc4adc "��}��") at gmessages.c:503
503       g_private_set (g_log_depth, GUINT_TO_POINTER (depth));
(gdb) thread a a bt
+ Trace 203463
Thread 1 (Thread 0xb62406e0 (LWP 18956))
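[Editor's note: for context on the abort itself, here is a minimal standalone sketch, not taken from Evolution's source, that triggers the same GThread error. It assumes the pre-2.32 GLib threading API that Evolution 2.23 was built against: g_mutex_free() on a mutex another thread still holds makes pthread_mutex_destroy() fail with EBUSY, which GThread escalates to the fatal "Device or resource busy" abort.]

#include <glib.h>

static gpointer
worker (gpointer data)
{
	GMutex *mutex = data;

	/* Simulates a backend thread still busy inside a sync call. */
	g_mutex_lock (mutex);
	g_usleep (G_USEC_PER_SEC);
	g_mutex_unlock (mutex);

	return NULL;
}

int
main (void)
{
	GMutex *mutex;
	GThread *thread;

	g_thread_init (NULL);

	mutex = g_mutex_new ();
	thread = g_thread_create (worker, mutex, TRUE, NULL);

	/* Give the worker time to take the lock. */
	g_usleep (G_USEC_PER_SEC / 10);

	/* The dispose path runs while the worker still holds the mutex:
	 * this is where GThread aborts, as in the backtrace above. */
	g_mutex_free (mutex);

	g_thread_join (thread);

	return 0;
}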
also see bug 556669
*** Bug 558047 has been marked as a duplicate of this bug. ***
*** Bug 556669 has been marked as a duplicate of this bug. ***
*** Bug 477968 has been marked as a duplicate of this bug. ***
*** Bug 561169 has been marked as a duplicate of this bug. ***
*** Bug 525715 has been marked as a duplicate of this bug. ***
Created attachment 125134 [details] [review] proposed eds patch. For evolution-data-server; not so smart, but I cannot think of anything better or more general. Trying to cancel an ongoing operation is not always possible; for some backends, like contacts, it definitely is not. Thus at least wait-and-free code is added. It is not Exchange-specific; I saw this with the contacts backend too.
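[Editor's note: roughly, the wait-and-free idea is this. An illustrative sketch only; the real change is in attachment 125134, and the helper name here is invented.]

/* Instead of freeing the backend's mutex outright in dispose,
 * first wait until nobody holds it. */
static void
wait_and_free_mutex (GMutex *mutex)
{
	/* Blocks until the ongoing sync operation releases the mutex,
	 * then destroys it, avoiding the EBUSY abort. Note it does not
	 * stop a client from touching the backend after it is unreffed,
	 * which is the objection raised in the following comments. */
	g_mutex_lock (mutex);
	g_mutex_unlock (mutex);
	g_mutex_free (mutex);
}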
Changes from bug #501298 should be applied before this; even with them, it will not help every time.
*** Bug 570767 has been marked as a duplicate of this bug. ***
Hmm, I need Chen's opinion.
This is not the right fix. This is a synchronization issue in the backend. Before removing the reference to the backend, the factory needs to ensure that no other client tries to use the backend. Even if this fix is applied, the crash would still happen at Thread 4, which will try to use the unreffed backend. In this case the backend is being destroyed while some client is trying to open it. Milan, are you able to reproduce it, or name some scenario when this can happen? I just checked; data-cal-factory does have some backend_mutex locks. Maybe we should check whether any clients are present even inside backend_last_client_gone_cb, after grabbing the lock?
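[Editor's note: in sketch form, the suggested check would look something like this. The callback name backend_last_client_gone_cb is from the report; the types and fields are simplified stand-ins, not the actual data-cal-factory code.]

#include <glib.h>

typedef struct {
	GHashTable *backends;      /* uri -> Backend, guarded by backend_mutex */
	GMutex     *backend_mutex;
} Factory;

typedef struct {
	gchar *uri;
	gint   n_clients;          /* also guarded by backend_mutex */
} Backend;

/* Called when a backend reports its last client gone. Re-checking the
 * client count under backend_mutex means a concurrent open, which would
 * also take backend_mutex, cannot race with the removal. */
static void
backend_last_client_gone_cb (Backend *backend, Factory *factory)
{
	g_mutex_lock (factory->backend_mutex);

	if (backend->n_clients == 0)
		g_hash_table_remove (factory->backends, backend->uri);

	g_mutex_unlock (factory->backend_mutex);
}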
That's still an issue in 2.25.91 and a frequent crasher (Ubuntu has 11 duplicates of this one now).
(In reply to comment #11)
> Even if this fix is applied, the crash would still happen at Thread 4,
> which will try to use the unreffed backend. In this case the backend is
> being destroyed while some client is trying to open it.

No, it cannot. The backend holds the lock while in e_cal_backend_sync_open, thus dispose will wait until the lock is unlocked. Though this works only for backends that call e_cal_backend_sync_set_lock with TRUE, which the file and caldav backends do not, so at least for these two it will not work. (Which means it will not crash in this way, but in a use of already-freed memory.)

> Milan, are you able to reproduce it, or name some scenario when this can
> happen? I just checked; data-cal-factory does have some backend_mutex
> locks. Maybe we should check whether any clients are present even inside
> backend_last_client_gone_cb, after grabbing the lock?

I cannot recall any exact steps at the moment; I think it was crashing for me once per five starts or so, but I really do not know now. It would be easy to reproduce with a contacts backend and a big EBook (slow open). I agree that it will need some backend support for detecting "still working/waiting for cancel", other than the sync_mutex, but considering the time, what about pushing a workaround before the code freeze and making it better in the next version?
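[Editor's note: to illustrate the point about the sync lock, a simplification of the ECalBackendSync pattern described above, not the actual e-cal-backend-sync.c source. With the lock enabled, sync calls and dispose contend on one mutex, so dispose blocks until an in-flight open returns.]

#include <glib.h>

typedef struct {
	GMutex  *sync_mutex;
	gboolean use_lock;         /* e_cal_backend_sync_set_lock (TRUE) */
} BackendSync;

static void
backend_sync_open (BackendSync *backend)
{
	if (backend->use_lock)
		g_mutex_lock (backend->sync_mutex);

	/* potentially slow open against the server */

	if (backend->use_lock)
		g_mutex_unlock (backend->sync_mutex);
}

static void
backend_sync_dispose (BackendSync *backend)
{
	if (backend->use_lock) {
		/* Waits for any sync call still holding the mutex. The
		 * file and caldav backends do not set the lock, so they
		 * have no such barrier against dispose. */
		g_mutex_lock (backend->sync_mutex);
		g_mutex_unlock (backend->sync_mutex);
	}

	g_mutex_free (backend->sync_mutex);
}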
IMHO it can still crash by accessing invalid memory. I cannot accept this as a workaround either. Srag, what do you think?
*** Bug 574431 has been marked as a duplicate of this bug. ***
*** Bug 582107 has been marked as a duplicate of this bug. ***
Downstream bug report for the same: https://bugzilla.redhat.com/show_bug.cgi?id=541322
I didn't see this crash for the whole 2.29 development cycle. Could any of you retest with the upcoming 2.30.0+, please? Thanks in advance.
*** Bug 615335 has been marked as a duplicate of this bug. ***
Please feel free to reopen the bug if you see it any time in Evolution 2.30.x or later.