Bug 544110 - Exchange storage crashed - just enabled an exchange account
Status: RESOLVED OBSOLETE
Product: Evolution Exchange
Classification: Deprecated
Component: Connector
Version: 2.28.x
Hardware/OS: Other Linux
Importance: High critical
Target Milestone: ---
Assigned To: Connector Maintainer
QA Contact: Ximian Connector QA
Duplicates: 477968 525715 556669 558047 561169 570767 574431 582107 615335
Depends on:
Blocks:
 
 
Reported: 2008-07-22 06:33 UTC by Akhil Laddha
Modified: 2010-06-10 06:34 UTC
See Also:
GNOME target: ---
GNOME version: 2.27/2.28


Attachments
proposed eds patch (1.09 KB, patch)
2008-12-22 13:45 UTC, Milan Crha
needs-work

Description Akhil Laddha 2008-07-22 06:33:29 UTC
Evolution 2.23.5 (after the db summary commit went in)

backend_last_client_gone_cb() called!
backend_last_client_gone_cb() called!

GThread-ERROR **: file gthread-posix.c: line 171 (g_mutex_free_posix_impl): error 'Device or resource busy' during 'pthread_mutex_destroy ((pthread_mutex_t *) mutex)'
aborting...

Program received signal SIGTRAP, Trace/breakpoint trap.
[Switching to Thread 0xb62406e0 (LWP 18956)]
IA__g_logv (log_domain=0xb67de7c8 "GThread", log_level=G_LOG_LEVEL_ERROR, format=0xb67deb0c "file %s: line %d (%s): error '%s' during '%s'", 
    args1=0xbffc4adc "��}��") at gmessages.c:503
503		  g_private_set (g_log_depth, GUINT_TO_POINTER (depth));
(gdb) thread a a bt

Thread 1 (Thread 0xb62406e0 (LWP 18956))

#0  IA__g_logv
#1  IA__g_log
#2  g_mutex_free_posix_impl at gthread-posix.c line 171
#3  e_cal_backend_sync_dispose at e-cal-backend-sync.c line 1031
#4  dispose at e-cal-backend-exchange.c line 2078
#5  dispose at e-cal-backend-exchange-calendar.c line 2368
#6  IA__g_object_unref at gobject.c line 2383
#7  IA__g_value_unset at gvalue.c line 216
#8  IA__g_signal_emit_valist at gsignal.c line 3007
#9  IA__g_signal_emit at gsignal.c line 3034
#10 e_cal_backend_remove_client at e-cal-backend.c line 393
#11 listener_died_cb at e-cal-backend.c line 387
#12 link_connection_emit_broken at linc-connection.c line 146
#13 link_connection_broken_idle at linc-connection.c line 183
#14 g_idle_dispatch at gmain.c line 4173
#15 IA__g_main_context_dispatch at gmain.c line 2068
#16 g_main_context_iterate at gmain.c line 2701
#17 IA__g_main_loop_run at gmain.c line 2924
#18 bonobo_main at bonobo-main.c line 311
#19 main at main.c line 278
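
For context, the abort comes from old GLib's gthread-posix backend: g_mutex_free() calls pthread_mutex_destroy() and turns a non-zero return into a fatal g_error(), which is the "Device or resource busy" (EBUSY) message above. The following is a minimal standalone C program, not Evolution code, that reproduces just that failure mode (POSIX leaves destroying a locked mutex undefined, but glibc reports EBUSY, matching the log):

/* Standalone illustration, not Evolution code: destroying a pthread mutex
 * that is still locked.  glibc returns EBUSY ("Device or resource busy"),
 * which old GLib's g_mutex_free() turns into the fatal g_error() above. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

int
main (void)
{
	pthread_mutex_t mutex;
	int rv;

	pthread_mutex_init (&mutex, NULL);
	pthread_mutex_lock (&mutex);		/* an operation is still "running" */

	rv = pthread_mutex_destroy (&mutex);	/* expected: EBUSY */
	printf ("destroy while locked: %s\n", rv ? strerror (rv) : "ok");

	pthread_mutex_unlock (&mutex);		/* the operation finishes... */
	rv = pthread_mutex_destroy (&mutex);	/* ...and now destroy succeeds */
	printf ("destroy after unlock: %s\n", rv ? strerror (rv) : "ok");

	return 0;
}

The trace above suggests the mutex freed in e_cal_backend_sync_dispose() is still held by an in-flight synchronous backend call, which is what the later comments discuss.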

Comment 1 André Klapper 2008-10-17 12:07:52 UTC
also see bug 556669
Comment 2 Akhil Laddha 2008-10-28 06:15:31 UTC
*** Bug 558047 has been marked as a duplicate of this bug. ***
Comment 3 Kandepu Prasad 2008-11-17 08:16:10 UTC
*** Bug 556669 has been marked as a duplicate of this bug. ***
Comment 4 Kandepu Prasad 2008-11-17 08:16:39 UTC
*** Bug 477968 has been marked as a duplicate of this bug. ***
Comment 5 Kandepu Prasad 2008-11-17 08:16:52 UTC
*** Bug 561169 has been marked as a duplicate of this bug. ***
Comment 6 Milan Crha 2008-12-22 13:37:07 UTC
*** Bug 525715 has been marked as a duplicate of this bug. ***
Comment 7 Milan Crha 2008-12-22 13:45:45 UTC
Created attachment 125134
proposed eds patch

for evolution-data-server;

Not so smart, but I cannot think of anything better/more general. Trying to cancel an ongoing operation is not always possible, certainly for some backends like contacts. Thus at least wait-and-free code is added. It is not Exchange specific; I saw it with the contacts backend too.
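
For illustration only, here is a standalone toy model (plain pthreads; this is not the attached patch and not evolution-data-server code) of the wait-and-free idea described above: teardown takes and releases the same mutex the synchronous operation holds, so it blocks until that operation finishes before destroying the mutex.

/* Toy model of "wait-and-free", NOT the attached patch. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t sync_mutex = PTHREAD_MUTEX_INITIALIZER;
static sem_t started;			/* posted once the worker holds the lock */

/* Stands in for a slow synchronous backend call (e.g. open). */
static void *
slow_operation (void *data)
{
	pthread_mutex_lock (&sync_mutex);
	sem_post (&started);
	sleep (1);			/* pretend to talk to the server */
	pthread_mutex_unlock (&sync_mutex);
	return NULL;
}

int
main (void)
{
	pthread_t worker;

	sem_init (&started, 0, 0);
	pthread_create (&worker, NULL, slow_operation, NULL);
	sem_wait (&started);		/* the operation is now holding the lock */

	/* "dispose": wait for the operation to release the lock... */
	pthread_mutex_lock (&sync_mutex);
	pthread_mutex_unlock (&sync_mutex);

	/* ...then destroying succeeds (0) instead of failing with EBUSY. */
	printf ("destroy: %d\n", pthread_mutex_destroy (&sync_mutex));

	pthread_join (worker, NULL);
	return 0;
}

As the following comments point out, this only narrows the window: nothing stops a new operation from starting between the unlock and the destroy.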
Comment 8 Milan Crha 2008-12-22 15:45:24 UTC
Changes from bug #501298 should be applied before this; even with them it will not help every time.
Comment 9 Akhil Laddha 2009-02-07 12:54:37 UTC
*** Bug 570767 has been marked as a duplicate of this bug. ***
Comment 10 Srinivasa Ragavan 2009-02-25 07:46:41 UTC
Hmm, I need Chen's opinion.

Comment 11 Chenthill P 2009-02-25 11:39:21 UTC
This is not the right fix. This is a synchronization issue in the backend. Before removing the reference to the backend, the factory needs to ensure that no other client tries to use the backend.

Even if this fix is applied, the crash would still happen in Thread 4, which will try to use the unreffed backend. In this case the backend is being destroyed while some client is trying to open it.

Milan, are you able to reproduce it, or is there some scenario in which this can happen?
I just checked; data-cal-factory does have some backend_mutex locks. Maybe we should check whether there are any clients present even inside backend_last_client_gone_cb after grabbing the lock?
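
A standalone sketch of the re-check suggested above, with hypothetical names and plain pthreads (this is not the actual data-cal-factory code): the decision that the last client is gone and the freeing of the backend are made under the same mutex that guards handing the backend out.

/* Toy model: check the client count under the factory lock before
 * dropping the backend.  All names are hypothetical. */
#include <pthread.h>
#include <stdlib.h>

typedef struct {
	pthread_mutex_t lock;		/* models the factory's backend_mutex */
	int             n_clients;	/* models the backend's client list */
	void           *backend;	/* models the cached backend object */
} ToyFactory;

/* A client asks for the backend: create on demand, count the client. */
static void *
get_backend (ToyFactory *factory)
{
	void *backend;

	pthread_mutex_lock (&factory->lock);
	if (factory->backend == NULL)
		factory->backend = malloc (1);	/* stand-in for a real backend */
	factory->n_clients++;
	backend = factory->backend;
	pthread_mutex_unlock (&factory->lock);

	return backend;
}

/* A client goes away: the decrement and the "was that the last one?"
 * decision happen under the same lock, so a client attaching concurrently
 * via get_backend() cannot be left with a freed backend. */
static void
remove_client (ToyFactory *factory)
{
	pthread_mutex_lock (&factory->lock);
	if (--factory->n_clients == 0 && factory->backend != NULL) {
		free (factory->backend);
		factory->backend = NULL;
	}
	pthread_mutex_unlock (&factory->lock);
}

int
main (void)
{
	ToyFactory factory = { PTHREAD_MUTEX_INITIALIZER, 0, NULL };

	get_backend (&factory);		/* first client */
	get_backend (&factory);		/* second client */
	remove_client (&factory);	/* backend must survive this */
	remove_client (&factory);	/* last client: backend is freed */

	return 0;
}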
Comment 12 Sebastien Bacher 2009-03-02 11:32:41 UTC
That's still an issue in 2.25.91 and a frequent crasher (Ubuntu has 11 duplicates of this one now).
Comment 13 Milan Crha 2009-03-02 13:49:43 UTC
(In reply to comment #11)
> Even if this fix is applied, the crash would still happen in Thread 4, which
> will try to use the unreffed backend. In this case the backend is being
> destroyed while some client is trying to open it.

No, it cannot. The backend holds the lock when in e_cal_backend_sync_open, thus dispose will wait until the lock is unlocked. Though this works only for those backends which have e_cal_backend_sync_set_lock called with TRUE, which is not the case for the file and caldav backends, thus at least for those two it will not work. (Which means it will not crash in this way, but through use of already freed memory.)

> Milan, are you able to reproduce it, or is there some scenario in which this
> can happen?
> I just checked; data-cal-factory does have some backend_mutex locks. Maybe
> we should check whether there are any clients present even inside
> backend_last_client_gone_cb after grabbing the lock?

I cannot recall any exact steps at the moment; I think it was crashing for me once per five starts or so, but I really do not know now. It would be easy to reproduce with a contacts backend with a big EBook (slow open).

I agree that it will need some backend support for detecting "still working/waiting for cancel", other than the sync_mutex, but considering the time, what about pushing a workaround before the code freeze and making it better in the next version?
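
A toy fragment (hypothetical names, not the EDS API) of the conditional locking described in this comment: the synchronous call takes the lock only when the backend enabled it, mirroring e_cal_backend_sync_set_lock (TRUE/FALSE), so for backends that run without the lock the wait in dispose has nothing to wait on and the race surfaces as a use-after-free rather than the EBUSY abort.

/* Toy fragment, not the EDS API; mutex initialization omitted. */
#include <pthread.h>
#include <stdbool.h>

typedef struct {
	bool            lock_enabled;	/* models e_cal_backend_sync_set_lock () */
	pthread_mutex_t sync_mutex;
} ToyBackendSync;

void
toy_open_sync (ToyBackendSync *backend)
{
	if (backend->lock_enabled)
		pthread_mutex_lock (&backend->sync_mutex);

	/* ... the (possibly slow) open work runs here ... */

	if (backend->lock_enabled)
		pthread_mutex_unlock (&backend->sync_mutex);
}

void
toy_dispose (ToyBackendSync *backend)
{
	/* Wait-and-free only serializes with toy_open_sync() when the lock
	 * is actually taken.  With lock_enabled == false this waits on
	 * nothing, so freeing backend state here can race with an open
	 * still in progress (use-after-free instead of EBUSY). */
	pthread_mutex_lock (&backend->sync_mutex);
	pthread_mutex_unlock (&backend->sync_mutex);
	pthread_mutex_destroy (&backend->sync_mutex);
}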
Comment 14 Chenthill P 2009-03-03 10:02:43 UTC
IMHO it can still crash by accessing invalid memory. I cannot accept this as a workaround either. Srag, what do you think?
Comment 15 Akhil Laddha 2009-03-07 18:21:10 UTC
*** Bug 574431 has been marked as a duplicate of this bug. ***
Comment 16 Fabio Durán Verdugo 2009-05-11 03:40:32 UTC
*** Bug 582107 has been marked as a duplicate of this bug. ***
Comment 17 Milan Crha 2009-11-30 11:25:50 UTC
Downstream bug report for the same:
https://bugzilla.redhat.com/show_bug.cgi?id=541322
Comment 18 Milan Crha 2010-03-25 17:58:02 UTC
I didn't see this crash for the whole 2.29 development cycle. Could any of you retest with the upcoming 2.30.0+, please? Thanks in advance.
Comment 19 Akhil Laddha 2010-05-03 03:54:40 UTC
*** Bug 615335 has been marked as a duplicate of this bug. ***
Comment 20 Akhil Laddha 2010-06-10 06:34:39 UTC
Please feel free to reopen the bug if you see it any time in Evolution 2.30.x or later.