GNOME Bugzilla – Bug 564388
UI blocks while reading a large news group
Last modified: 2011-08-23 12:59:15 UTC
Please describe the problem:
If I add a large newsgroup on an NNTP account, Evolution reads the summaries for the entire newsgroup and, in the process of doing so, blocks all other UI operations; e.g. minimizing and restoring a window does not cause a repaint.

Steps to reproduce:
1. Configure the gmane NNTP server.
2. Subscribe to the gmane.linux.kernel folder.

Actual results:
Summaries download, UI hangs.

Expected results:
Summaries download, and Evolution otherwise goes about its business while downloading, letting me read mail, etc.

Does this happen every time? Yes.

Other information:
Oh great. Now I'm stuck in a cycle of:
1. start evolution
2. evolution [re]starts the NNTP download of lkml and hangs the UI
3. kill evolution
4. goto 1

"evolution --offline" doesn't help either, because with it offline I cannot see the list of newsgroups in Folder Subscriptions to disable the subscription to this folder. I guess it needs to be online to fetch and display the list of newsgroups. Increasing Severity, and I would like to see somebody increase Priority to reflect that Evolution is now completely useless to me until I can break this vicious cycle.
(In reply to comment #1)
> 1. start evolution
> 2. [re]starts NNTP download of lkml and hangs UI
> 3. kill evolution
> 4. goto 1

How long have you (generally) waited between steps 2 and 3? Can you disable the account (Edit > Preferences > Accounts) while in offline mode?
(In reply to comment #2)
> How long have you (generally) waited between step 2 and 3?

Until it crashed with an ENOMEM.

> Can you disable the account (Edit > Preferences > Accounts) while in offline
> mode?

Sure, but then I can't read any of the newsgroups I'm subscribed to, making the feature completely useless. There is no way to unsubscribe from a group while an NNTP account is offline. As soon as you put it online, it starts trying to download hundreds of thousands of headers again.
Hmm, it's possible that it's deadlocked. When you encounter it in gdb, 'thread apply all bt' would get a good trace to see which lock the main thread is waiting on, and can help to see who else is holding the lock.
(In reply to comment #4)
> Hmm, possible that its locked. when you encounter it in gdb, 'thread apply all
> bt' would get a good trace to see on which lock main thread is waiting and can
> help to see who else is holding the lock.

I don't really have a "sandbox" I can mess around in, and given what a pain in the ass it is to recover from this situation once in it, I am reluctant to purposely get into it. Surely you have a sandbox that you can screw around with and scrub if you need to, yes? If so, simply use gmane and subscribe to lkml with it. You will very quickly see what I mean.
Created attachment 137160 [details] [review]
Test patch with db improvements

I need someone to test this patch. I reviewed the entire NNTP code and saw what could be the cause of the long delays/hangs. The attached patch should improve things a lot.
(In reply to comment #6)
> Created an attachment (id=137160) [edit]
> Test patch with db improvements
>
> I need someone to test this patch. I reviewed the entire nntp code and saw that
> these could be the cause for the long delays/hangs. Attached patch should help
> improve a lot.

I'd love to test it, but this doesn't appear to apply too cleanly to 2.26.1. In fact, I can't even find a camel_nntp_get_headers() function in 2.26.1's camel/nntp provider.
Oh, it doesn't apply on 2.26.x? Let me post a backported patch. Ideally it should go there as well.
Created attachment 137348 [details] [review]
Test patch backported for the 2.26.x branch
(In reply to comment #9)
> Created an attachment (id=137348) [edit]
> test patched backported for 2.26.x branch

This is an e-d-s patch, right? The patch patches camel/providers/nntp/camel-nntp-utils.c, but I don't have that file in my 2.26.1 source tree:

$ ls camel/providers/nntp/
camel-nntp-folder.c      camel-nntp-store.lo          camel-nntp-summary.lo
camel-nntp-folder.h      camel-nntp-store-summary.c   ChangeLog
camel-nntp-folder.lo     camel-nntp-store-summary.h   libcamelnntp.la
camel-nntp-private.h     camel-nntp-store-summary.lo  libcamelnntp.urls
camel-nntp-provider.c    camel-nntp-stream.c          Makefile
camel-nntp-provider.lo   camel-nntp-stream.h          Makefile.am
camel-nntp-resp-codes.h  camel-nntp-stream.lo         Makefile.in
camel-nntp-store.c       camel-nntp-summary.c
camel-nntp-store.h       camel-nntp-summary.h
This patch is against the e-d-s stable branch for gnome-2-26. Maybe 2.26.2 should do.
(In reply to comment #11)
> This patch is against e-d-s stable branch for gnome-2-26. May be 2.26.2 should
> do.

Hrm. This is not making any sense to me. Looking at
http://git.gnome.org./cgit/evolution-data-server/commit/?h=gnome-2-26&id=8c649f6db04ff488d66db5a365dc24099cd4bdda
I see that camel/providers/nntp/camel-nntp-utils.c was imported (i.e. created) back on 2000-04-14 19:05:33 (GMT). Why does my 2.26.1 tarball not have this file?
Hrm. Further investigation yields that
http://ftp.gnome.org/pub/GNOME/sources/evolution-data-server/2.26/evolution-data-server-2.26.2.tar.bz2
does not have the camel/providers/nntp/camel-nntp-utils.c file either:

$ tar tjvf evolution-data-server-2.26.2.tar.bz2 | grep camel-nntp-utils.c
$

Ideas why?
OK. After much mucking about with git and make distdir and whatnot, a simple grep of the source reveals that camel-nntp-utils.c is not even used to build e-d-s anymore. I think. So that makes the camel-nntp-utils.c portion of your patch moot/invalid. Rebuilding e-d-s with your patch minus the camel-nntp-utils.c hunk.
Argh. I'm sorry. I just changed code all around and fixed it everywhere, and then just did a make install.
I tried the patch on master and I see great improvement, nice work Srini. The patch has reduced downloading time by almost 60%; I tried a folder subscription which has approx 200K mails.

But there is considerable delay between completion of downloading of the messages and showing them in the message list. Gdb traces of evolution:
+ Trace 216196
(In reply to comment #16)
> But there is considerable delay between completion of downloading of the
> messages and showing them in message list.

I wonder if this delay you describe is the same thing I am reporting in bug 586882.
At one point evolution's CPU usage shot up to 100%.
+ Trace 216197
(In reply to comment #17)
> I wonder if this delay you describe is the same thing I am reporting in bug
> 586882.

Not sure, as I don't have any search folder here. I was just trying with NNTP.
Akhil, a first-time setup of a folder of size 800K will surely take some time. It's going to commit 800K records at once.
Got a crash here with evolution running under valgrind:

==5308== Thread 13:
==5308== Invalid read of size 4
==5308==    at 0x42C211A: camel_folder_summary_save_to_db (camel-folder-summary.c:1570)
==5308==    by 0xFBE2519: nntp_folder_sync_online (camel-nntp-folder.c:108)
==5308==    by 0x42B3826: disco_sync (camel-disco-folder.c:300)
==5308==    by 0x42C9FF5: camel_folder_sync (camel-folder.c:324)
==5308==    by 0x42C3320: remove_cache (camel-folder-summary.c:835)
==5308==    by 0x42DE981: session_thread_proxy (camel-session.c:597)
==5308==    by 0x57CFDC5: g_thread_pool_thread_proxy (gthreadpool.c:265)
==5308==    by 0x57CE75E: g_thread_create_proxy (gthread.c:635)
==5308==    by 0x499B1B4: start_thread (in /lib/libpthread-2.9.so)
==5308==    by 0x59013BD: clone (in /lib/libc-2.9.so)
==5308== Address 0x28 is not stack'd, malloc'd or (recently) free'd
Possible. Can you ping me when you get a trace? I think it's a left-out folder, or a closed folder still trying to sync.
Patch committed to trunk.
*** Bug 562979 has been marked as a duplicate of this bug. ***
(In reply to comment #14)
> OK. After much mucking about with git and make distdir and whatnot, a simple
> grep of the source reveals that camel-nntp-utils.c is not even used to build
> e-d-s anymore. I think.
>
> So that makes the camel-nntp-utils.c portion of your patch moot/invalid.
>
> Rebuilding e-d-s with your patch minus the camel-nntp-utils.c hunk.

Did my patch improve the situation? In our local test, this was a lot better.
I'm closing this, assuming it is improved on your end as described in the previous comment, but feel free to reopen if you find any issue with 3.0.2 or any later version. Thanks in advance.