GNOME Bugzilla – Bug 619682
crash on libgda using GDA_CONNECTION_OPTIONS_THREAD_SAFE
Last modified: 2010-06-07 19:37:36 UTC
After a change to GDA_CONNECTION_OPTIONS_THREAD_SAFE my libgda git master has a crash. I haven't changed the mutex policies in the core yet, but when libgda uses two different databases at the same time (no problem found with just one [but that one is thread safe at the symbol-db level]) I get the following stack trace.
+ Trace 222096
Thread 140736628230416 (LWP 24122)
It would be useful to have the backtrace of all the threads you have, or even better a standalone test case in which the core dump occurs. Just as a reminder, the GDA_CONNECTION_OPTIONS_THREAD_SAFE option does not mean that the GdaDataModel resulting from a SELECT's execution can be used by several threads at the same time.
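As an illustration of that constraint, here is a minimal sketch (connection string, table name and threading layout are assumptions, not the reporter's code) that opens a connection with GDA_CONNECTION_OPTIONS_THREAD_SAFE and consumes the resulting GdaDataModel only in the thread that ran the SELECT:

#include <libgda/libgda.h>
#include <sql-parser/gda-sql-parser.h>

/* Run one SELECT and consume the resulting GdaDataModel entirely inside
 * this thread: the connection (opened with THREAD_SAFE) may be shared,
 * but the data model itself is never handed to another thread. */
static gpointer
select_worker (gpointer data)
{
	GdaConnection *cnc = GDA_CONNECTION (data);
	GdaSqlParser *parser = gda_sql_parser_new ();
	GError *error = NULL;

	GdaStatement *stmt = gda_sql_parser_parse_string (parser,
	        "SELECT * FROM some_table", NULL, &error);
	if (stmt) {
		GdaDataModel *model = gda_connection_statement_execute_select
		        (cnc, stmt, NULL, &error);
		if (model) {
			g_print ("rows: %d\n", gda_data_model_get_n_rows (model));
			g_object_unref (model); /* released before the thread exits */
		}
		g_object_unref (stmt);
	}
	if (error)
		g_error_free (error);
	g_object_unref (parser);
	return NULL;
}

int
main (void)
{
	GError *error = NULL;

	gda_init ();

	/* Assumed SQLite DSN, purely for illustration */
	GdaConnection *cnc = gda_connection_open_from_string ("SQLite",
	        "DB_DIR=.;DB_NAME=testdb", NULL,
	        GDA_CONNECTION_OPTIONS_THREAD_SAFE, &error);
	if (!cnc)
		g_error ("open failed: %s", error->message);

	GThread *thread = g_thread_new ("select-worker", select_worker, cnc);
	g_thread_join (thread);

	g_object_unref (cnc);
	return 0;
}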
(In reply to comment #1)
> It would be useful to have the backtrace of all the threads you have, or even
> better a standalone test case in which the core dump occurs.

Actually I received that crash only once. I'll report as much info as possible if it happens again.

> Just as a reminder, the GDA_CONNECTION_OPTIONS_THREAD_SAFE option does not mean
> that the GdaDataModel resulting from a SELECT's execution can be used by
> several threads at the same time.

Well, I'm using each GdaDataModel in one thread at a time; they're not shared between threads.
Got it another time:

(gdb) bt
+ Trace 222165
(gdb) info threads
* 23 Thread 0x7fffd19c2910 (LWP 26432)  0x00007ffff3f02f05 in *__GI_raise (sig=<value optimized out>) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
  11 Thread 0x7fffd31c5910 (LWP 26367)  __lll_lock_wait_private () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:97
  10 Thread 0x7fffd3fff910 (LWP 26352)  0x00007ffff42329eb in read () from /lib/libpthread.so.0
   6 Thread 0x7fffdd2bf910 (LWP 26346)  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:261
   5 Thread 0x7fffe9745910 (LWP 26345)  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:261
   3 Thread 0x7fffe9f46910 (LWP 26343)  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:261
   2 Thread 0x7fffeb1a3910 (LWP 26329)  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:136
   1 Thread 0x7ffff7fc8790 (LWP 26326)  0x00007ffff3f91633 in *__GI___poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=99) at ../sysdeps/unix/sysv/linux/poll.c:87

This time I'm populating just one db, so just one thread accesses one db. The other threads you see here are only waiting on a queue.
Here are some more cases; I don't know if they're related. I'm running a test with a transaction and isolation level GDA_TRANSACTION_ISOLATION_SERIALIZABLE. When the transaction starts there are some concurrent SELECTs, probably on a second db. On disposal the thing crashes. First stack trace:

Program received signal SIGSEGV, Segmentation fault.
+ Trace 222166
Thread 140736770201872 (LWP 28675)
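For reference, a minimal sketch of beginning a transaction at that isolation level (the transaction name and the surrounding population code are assumptions, not taken from the reporter's test):

#include <libgda/libgda.h>

/* 'cnc' is assumed to be a GdaConnection already opened with
 * GDA_CONNECTION_OPTIONS_THREAD_SAFE, as in the earlier sketch. */
static gboolean
populate_in_transaction (GdaConnection *cnc, GError **error)
{
	/* Begin a named transaction with SERIALIZABLE isolation */
	if (!gda_connection_begin_transaction (cnc, "populate",
	        GDA_TRANSACTION_ISOLATION_SERIALIZABLE, error))
		return FALSE;

	/* ... INSERT/UPDATE statements would be executed here ... */

	if (!gda_connection_commit_transaction (cnc, "populate", error)) {
		/* roll back on failure; ignore a secondary rollback error */
		gda_connection_rollback_transaction (cnc, "populate", NULL);
		return FALSE;
	}
	return TRUE;
}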
Actually Bugzilla wrapped the two stack traces into one. Click on "trace 222166" to see both. I've got some more info: the crash doesn't happen if I specify GDA_CONNECTION_OPTIONS_NONE when connecting to the db. So the problem really seems related to the GDA_CONNECTION_OPTIONS_THREAD_SAFE parameter.
Using GDA_CONNECTION_OPTIONS_THREAD_SAFE makes Libgda use some internal threads (not visible to the user), which is why it behaves differently compared with GDA_CONNECTION_OPTIONS_NONE. However, that alone does not give enough indications. From the information you've provided I've created some test cases and still can't figure out what the problem is. Could you try to provide a standalone test case in which the problem appears (even if only from time to time; in that case, if you can, also give me a core dump and/or backtraces)?
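Pending the reporter's own test case, a standalone skeleton along those lines might look roughly like this (provider, connection strings, table name and iteration count are all assumptions):

#include <libgda/libgda.h>
#include <sql-parser/gda-sql-parser.h>

/* Each thread opens its own database with GDA_CONNECTION_OPTIONS_THREAD_SAFE
 * and runs a batch of SELECTs, mimicking "two different databases used at
 * the same time". */
static gpointer
hammer_db (gpointer data)
{
	const gchar *cnc_string = data; /* e.g. "DB_DIR=.;DB_NAME=db1" */
	GError *error = NULL;
	GdaConnection *cnc = gda_connection_open_from_string ("SQLite",
	        cnc_string, NULL, GDA_CONNECTION_OPTIONS_THREAD_SAFE, &error);
	if (!cnc) {
		g_print ("open failed: %s\n", error->message);
		g_clear_error (&error);
		return NULL;
	}

	GdaSqlParser *parser = gda_sql_parser_new ();
	guint i;
	for (i = 0; i < 1000; i++) {
		GdaStatement *stmt = gda_sql_parser_parse_string (parser,
		        "SELECT * FROM some_table", NULL, NULL);
		if (stmt) {
			GdaDataModel *model = gda_connection_statement_execute_select
			        (cnc, stmt, NULL, NULL);
			if (model)
				g_object_unref (model);
			g_object_unref (stmt);
		}
	}
	g_object_unref (parser);
	g_object_unref (cnc);
	return NULL;
}

int
main (void)
{
	gda_init ();
	GThread *t1 = g_thread_new ("db1", hammer_db, "DB_DIR=.;DB_NAME=db1");
	GThread *t2 = g_thread_new ("db2", hammer_db, "DB_DIR=.;DB_NAME=db2");
	g_thread_join (t1);
	g_thread_join (t2);
	return 0;
}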
yes, I'll try to do it asap.
Strangely enough, I wrote a test example but I cannot make it crash. I'll keep trying. The population seems OK now, no crash there either. It's a weird bug.
Well, I haven't encountered this bug anymore. I'll close the bug as incomplete. If it happens again I'll reopen it.