GNOME Bugzilla – Bug 610678
starts eating cpu when trying to unlock screen
Last modified: 2010-03-09 04:48:09 UTC
The bug has been described on https://bugs.launchpad.net/ubuntu/+source/gnome-keyring/+bug/524860: "Every time I lock the screen, I cannot return to the session. When I enter the password, gdm stays at "checking..." and just remains there. I have to use SysRq+ALT+K to exit from the frozen state. :( It's likely from today's update of gnome-keyring (2.28.2-0ubuntu1) to 2.29.90git20100218-0ubuntu1. I also notice gnome-keyring-daemon using high CPU when I enter the session." The libgnome-keyring and gnome-keyring versions in lucid are 2010-02-18 git snapshots.
Could you get a backtrace of gnome-keyring-daemon (all threads) when it's in this high CPU usage state? FWIW, I can't duplicate this problem, but that does not indicate the lack of a bug :)
I will ask whether the submitter can reproduce the bug. I hit the issue once myself, one of the first times I unlocked the screen after the upgrade, but not since.
One user replied with a strace log indicating that the daemon keeps reading in a loop: http://launchpadlibrarian.net/39585617/strace-gnome-keyring-daemon.log. A gdb stack trace has been requested too.
Thanks. The strace is interesting, but I'm still looking forward to the stack trace.
debug stacktrace:
+ Trace 220694
Thread 1 (Thread 1284)
Oh, I forgot to mention: since the update from gnome-keyring (2.29.90git20100218-0ubuntu1) to 2.29.90git20100218-0ubuntu2, the freeze happens only once per system start. The first time we get to a lock screen, the keyring hangs. If I do a SysRq+ALT+K and log back into the session, I can see that there are two gnome-keyring-daemons [probably one from the older session?] and that the [old??] one is still using high CPU. The stack trace is from the instance using high CPU. After that first time, it is able to release the lock with no problems. [However, others mention it happens every time when returning from a guest session.]
Unfortunately, that stack trace is missing some elements that would help a lot in solving the problem. In particular, I need a stack trace of all threads, not just one (in this case the signal handling thread).
Created attachment 154883 [details]
partial gdb

(In reply to comment #7)
> Unfortunately, that stack trace is missing some elements that will help a lot
> to solve the problem. In particular, I need a stack trace of all threads, and
> not just one (in this case the signal handling thread).

Well, this bug is frustrating. I have narrowed it down to:
- It only happens for auto-login users [when the user enters the password in the PolicyKit dialog to unlock the keyring].
- It does not happen when the user enters the password anywhere else [on the VT or during session login].

Since this bug is tricky, I have not been able to get a full gdb trace:
- If I start gdb from a terminal in the session, I can't get back to the session unless I kill the daemon [which does not give the full gdb output required].
- If I start gdb on the keyring daemon from a VT beforehand [the user has to enter a password to log into the VT session], then the bug won't happen, since I have already entered the password before gnome-keyring-daemon starts.

It seems a user needs ssh access to the system to get a full gdb trace. [I'm not sure I'll be able to get this done; I hope someone else with ssh access can do it.]
There is a new stack trace at http://launchpadlibrarian.net/39910296/gdb-keyring.txt
(In reply to comment #7)
> Unfortunately, that stack trace is missing some elements that will help a lot
> to solve the problem. In particular, I need a stack trace of all threads, and
> not just one (in this case the signal handling thread).

Just noticed that there are 4 threads in the earlier stack trace as well: https://bugzilla.gnome.org/page.cgi?id=trace.html&trace_id=220694

Also, comments on Launchpad mention that users are seeing the problem even without auto-login [contrary to my earlier comment 8].
Is there high hard disk activity from the gnome-keyring-daemon during this period as well?
(In reply to comment #11)
> Is there high hard disk activity from the gnome-keyring-daemon during this
> period as well?

Nope, only high CPU usage.
I believe this should fix it. Thanks for all the insight and stack traces.

commit 6c25bf1ba2772ac56e9f0430661290a40cbefed1
Author: Stef Walter <stef@memberwebs.com>
Date:   Mon Mar 8 04:39:18 2010 +0000

    Fix endless loop in reading data.

    Fix problems with EINTR and EAGAIN handling, and some possible
    endless loops in various places.

    Fixes bug #610678
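For anyone investigating a similar symptom: the failure mode the commit message describes is typically a read loop that treats EINTR or EAGAIN as a reason to retry immediately on a non-blocking descriptor, which spins at full CPU exactly as the strace log showed. Below is a minimal illustrative sketch in C of the corrected pattern; it is not the actual gnome-keyring code, and read_all is a hypothetical helper shown only to make the EINTR/EAGAIN handling concrete.

    #include <errno.h>
    #include <poll.h>
    #include <unistd.h>

    /* Read up to len bytes, retrying on EINTR and waiting (instead of
     * busy-looping) when the descriptor would block. Returns the number
     * of bytes read, 0 on immediate EOF, or -1 on error. */
    static ssize_t
    read_all (int fd, void *buf, size_t len)
    {
            size_t done = 0;
            while (done < len) {
                    ssize_t res = read (fd, (char *) buf + done, len - done);
                    if (res > 0) {
                            done += res;            /* got some data, keep going */
                    } else if (res == 0) {
                            break;                  /* EOF: return what we have */
                    } else if (errno == EINTR) {
                            continue;               /* interrupted by a signal: retry */
                    } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
                            struct pollfd pfd = { fd, POLLIN, 0 };
                            poll (&pfd, 1, -1);     /* wait for data instead of spinning */
                    } else {
                            return -1;              /* real error */
                    }
            }
            return done;
    }

The key point is that EAGAIN must lead to waiting on the descriptor (or returning to a main loop), not to an unconditional retry, otherwise the loop consumes a full core until data arrives.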
The fix has been backported to lucid and several users confirmed it's working now, thank you!
(In reply to comment #13)
> I believe this should fix it. Thanks for all the insight and stack traces.

Fixed, thanks.