GNOME Bugzilla – Bug 98782
Rhythmbox crash in gst code
Last modified: 2009-08-15 18:40:50 UTC
Package: GStreamer
Severity: major
Version: 0.4.2.1
Synopsis: Rhythmbox crash in gst code
Bugzilla-Product: GStreamer
Bugzilla-Component: gstreamer (core)
BugBuddy-GnomeVersion: 2.0 (2.1.1)

Description of Problem:
Rhythmbox sometimes crashes when clicking the next button. The arguments passed to gst_scheduler_interrupt are both NULL. The crash happens quite randomly, so I don't think it's a bad file, but rather a timing issue that causes the bug.

Debugging Information:

Backtrace was generated from '/home/jens/garnome/bin/rhythmbox'

[New Thread 1024 (LWP 1081)]
[New Thread 2049 (LWP 1082)]
[New Thread 1026 (LWP 1083)]
[New Thread 2051 (LWP 1084)]
[New Thread 3076 (LWP 1085)]
[New Thread 4101 (LWP 1086)]
0x40b5ba59 in wait4 () from /lib/libc.so.6
+ Trace 30554 (full backtrace attached; Thread 6 (LWP 1086) and Thread 5 (LWP 1085) headers only, frames not reproduced here)
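For illustration only, here is a minimal, hypothetical C sketch (not taken from GStreamer's source) of the kind of GLib precondition guard that would turn NULL arguments into a logged critical warning instead of a segfault deeper in the scheduler; the stand-in types and the wrapper name are assumptions, and the real gst_scheduler_interrupt may behave differently.

#include <glib.h>

/* Stand-in opaque types; the real GstScheduler/GstElement come from GStreamer. */
typedef struct _GstScheduler GstScheduler;
typedef struct _GstElement   GstElement;

/* Hypothetical guarded wrapper: bail out with a critical warning rather than
 * letting a NULL pointer be dereferenced later, which is what a random,
 * timing-dependent crash tends to look like. */
static gboolean
scheduler_interrupt_guarded (GstScheduler *sched, GstElement *element)
{
  g_return_val_if_fail (sched != NULL, FALSE);
  g_return_val_if_fail (element != NULL, FALSE);

  /* ... dispatch to the real gst_scheduler_interrupt () here ... */
  return TRUE;
}

int
main (void)
{
  /* Passing NULLs here only produces GLib criticals on stderr, no crash. */
  scheduler_interrupt_guarded (NULL, NULL);
  return 0;
}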
------- Bug moved to this database by unknown@bugzilla.gnome.org 2002-11-17 09:33 -------

Unknown version unspecified in product GStreamer. Setting version to "0.3.3".
The original reporter (jensus@linux.nu) of this bug does not have an account here.
Reassigning to the exporter, unknown@bugzilla.gnome.org.
Reassigning to the default owner of the component, gstreamer-maint@bugzilla.gnome.org.
Hmmm, I can't see that they are null?
+ Trace 32236
I am assuming that you are using a distribution with an i686-optimized glibc, probably Red Hat. After a lot of testing and debugging I think this bug is solved. It turns out that with Red Hat 8.1 plus the glibc from Rawhide things start working, so there seems to be a bug in the i686 threading code shipped with earlier distros. Downgrading to the i386-optimized glibc is a workaround. If you happen to be using something else, let me know. I'll leave this bug open for a little while longer so you can confirm or deny my assumptions.
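If it helps to confirm which C library and threading implementation a given machine is actually running, here is a small diagnostic sketch (not part of the original report); _CS_GNU_LIBC_VERSION and _CS_GNU_LIBPTHREAD_VERSION are glibc extensions to confstr(), and their availability on 2002-era systems is an assumption.

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>

int
main (void)
{
  char buf[128];

  /* glibc reports its own version and the libpthread flavour
   * (e.g. LinuxThreads vs. NPTL) through confstr(). */
  if (confstr (_CS_GNU_LIBC_VERSION, buf, sizeof buf) > 0)
    printf ("libc:       %s\n", buf);
  if (confstr (_CS_GNU_LIBPTHREAD_VERSION, buf, sizeof buf) > 0)
    printf ("libpthread: %s\n", buf);

  return 0;
}

The libpthread line is the interesting one here: it distinguishes a LinuxThreads build from a newer threading implementation, which is relevant when deciding whether the i686 threading workaround applies.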
The code was running on Debian 3.0 with libc 2.2.5.
Talked with our thread-master David Schleef. The threading code has been heavily bug-fixed since 0.4.2.1, and this bug has probably been fixed along the way. I am closing this bug for that reason, but if you encounter it with a newer release, please let us know. 0.6.0 is planned for release in about 9 days.