GNOME Bugzilla – Bug 320282
Nautilus profiling with sysprof
Last modified: 2007-04-12 21:49:59 UTC
Two profile runs done with Sysprof, by Luis Menina (who rocks). The first is a profile of nautilus opening ten folders, each containing one thousand non-empty files. The second is a profile of nautilus opening and closing the same directory ten times, with five thousand non-empty files.
Created attachment 54108 [details] First profile run: open, ten directories, 1000 files each
Created attachment 54109 [details] Second profile run: open/close, single folder, 5000 files, ten times
Here is the test case:
======================
<testcase>
mkdir test-folder
cd test-folder
for n in $(seq 5000); do echo "This is file number $n" > file-$n; done
# start sysprof
for n in $(seq 10); do nautilus .; done
# stop sysprof
</testcase>
Just close the window each time it is needed.
Created attachment 58967 [details] Updated open/close profiling info from CVS HEAD: open/close, single folder, 5000 files, ten times

While in the first test from 2005 the g_utf8_to_utf16 function seemed heavily overused (from what I understand; I'm really unsure, I'm no optimization guru :-) ), this time you may want to give some attention to the is_needy() function. It's called many times in many places, and seems to take a good amount of time. If some optimization guru (Federico? Mathias?) could confirm this, just to make sure it's a good starting point ;-) ?
Created attachment 60027 [details] [review] reduce number of is_needy() calls

Correct me if I'm wrong: is_needy() checks whether there is some job to be done for a file. There are 3 groups of is_needy() calls: for the high-priority queue, the low-priority queue and the extension queue.

nautilus_directory_add_file_to_work_queue() calls is_needy() with all possible arguments, and if something is found, the file is added to the high-priority queue. For each file in the high-priority queue, all the is_needy() checks with the high-priority arguments are made. If anything is found, the high-priority jobs are started (xxxx_start()), but each of those calls its own is_needy() again. If nothing is found, the file is moved to the next queue (low-priority or extension), but before that the is_needy() checks with the low-priority arguments are made. The file is then processed in the low-priority queue and the extension queue in the same way.

So is_needy() is called far too often for each file. I removed all calls except those in xxxx_start(). For the testcase with 5000 files, the number of is_needy() calls dropped from 105569 to 56216.
I don't think this patch is 100% correct. For instance, take this part:

@@ -3196,45 +3199,45 @@ start_or_stop_io (NautilusDirectory *dir
 	while (!nautilus_file_queue_is_empty (directory->details->high_priority_queue)) {
 		file = nautilus_file_queue_head (directory->details->high_priority_queue);
 
-		if (file_needs_high_priority_work_done (directory, file)) {
-			/* Start getting attributes if possible */
-			file_info_start (directory, file);
-			link_info_start (directory, file);
+		/* Start getting attributes if possible */
+		file_info_start (directory, file, &needy);
+		link_info_start (directory, file, &needy);
+
+		if (needy) {
 			return;
-		} else {
-			move_file_to_low_priority_queue (directory, file);
 		}
+
+		move_file_to_low_priority_queue (directory, file);
 	}

If we're already loading the file info for this file (directory->details->get_info_in_progress != NULL), then needy isn't correctly set, which means we move this file into the next queue.

Also, what's up with the profile? I see sysprof reporting 0.01% "Self" in the list on the left, but 19.28% in the list on the right.

Anyway, I don't think we want to change this core code this late in the cycle. This has to wait until 2.16.
Created attachment 60291 [details] [review] reduce number of is_needy() calls

Fixed the patch according to comment #6. nautilus-open-close-20060209.xml looks OK on my box: 19.28/20.71 both in the list on the right and in the list on the left (functions). The list on the left (callers) should sum up to 19.28/20.71.
So, 2.14 is released. Does anybody have time to review this patch?
Can you send the patch to nautilus-list? Nautilus developers tend to miss patches in Bugzilla due to the high bugzilla traffic, so they prefer that patches are also sent to the mailing list.
What's the progress on this? Has the patch been applied? There are still people complaining about this, especially with 5400 RPM hard disks...
Luis: nautilus patches are reviewed only if sent to the mailing list. You can send a mail with a pointer to this patch.
I can't reproduce the testcase on my GNOME 2.18 jhbuild system. Does anyone know how to launch the session bus so that nautilus is happy? BTW, the application should complain if it can't find the session bus, not crash as it did.

Here's the output:
==================
[liberforce@donald test-folder]$ LC_ALL=C nautilus .
process 5384: D-Bus library appears to be incorrectly set up; failed to read machine uuid: Failed to open "/opt/gnome2/var/lib/dbus/machine-id": No such file or directory
See the manual page for dbus-uuidgen to correct this issue.
D-Bus not built with -rdynamic so unable to print a backtrace
Variable "format" is not available.
Variable "address" is not available.
Variable "var_args" is not available.
Variable "var_args" is not available.
Backtrace limit of 200 exceeded
Backtrace limit of 200 exceeded
Backtrace limit of 200 exceeded

And here's the backtrace:
==========================
Distribution: Mandriva Linux release 2007.0 (Official) for i586
Gnome Release: 2.18.0 2007-03-18 (JHBuild)
BugBuddy Version: 2.18.0
System: Linux 2.6.17-11mdv #1 SMP Tue Feb 13 11:59:10 MST 2007 i686
X Vendor: The X.Org Foundation
X Vendor Release: 70101000
Selinux: No
Accessibility: Disabled
GTK+ Theme: Clearlooks
Icon Theme: gnome

Memory status: size: 20660224 vsize: 20660224 resident: 6840320 share: 5775360 rss: 6840320 rss_rlim: 4294967295
CPU usage: start_time: 1175040316 rtime: 6 utime: 6 stime: 0 cutime: 0 cstime: 0 timeout: 0 it_real_value: 0 frequency: 100

Backtrace was generated from '/opt/gnome2/bin/nautilus'

Using host libthread_db library "/lib/i686/libthread_db.so.1".
`shared object read from target memory' has disappeared; keeping its symbols.
[Thread debugging using libthread_db enabled]
[New Thread -1227069760 (LWP 5409)]
0xbfffe410 in __kernel_vsyscall ()
+ Trace 122759
Thread 1 (Thread -1227069760 (LWP 5409)):
#-1 0xbfffe410 in __kernel_vsyscall ()
No symbol table info available.
Sent a mail to the mailing list, still no reply... http://mail.gnome.org/archives/nautilus-list/2007-March/msg00096.html
I committed a slightly changed patch.
Thanks a lot Alexander :-)