GNOME Bugzilla – Bug 310749
Memory usage
Last modified: 2007-02-23 13:49:41 UTC
Version details: 2.6.5
Distribution/Version: FC4

Gthumb uses a *lot* of memory; below are three common memory-usage profiles. All memory numbers are reported as "Total":"Resident":"Shared" in MB.

1) Start gthumb on the command line in a directory with no photos ("gthumb .") and it uses 128:20:12.
2) Browse to a folder with 20 images (52MB total size); viewing just the thumbnails of these photos causes gThumb to use 207:111:12 !!
3) Double-click on one image (2.7MB) to view this single photo and gThumb's memory usage jumps to 259:164:12 !!!!

GThumb displaying this single image is now the single largest memory-using app on my computer, almost double the size of Firefox and Thunderbird. The computer is noticeably slower, as many interactive desktop features need to be swapped out in order to make room for gThumb. 259MB to view a single 2.7MB photo!!!!
Thanks for the bug report. This particular bug has already been reported in our bug tracking system, but please feel free to report any further bugs you find. *** This bug has been marked as a duplicate of 166275 ***
Teppo- Yes, this bug is similar to 166275, but there is an important difference: this bug also addresses gThumb's memory usage EVEN WHEN THERE ARE NO PHOTOS. Using 128MB of RAM just to view an empty directory is crazy. I think bug 166275 deals more with the additional memory used when you have a folder with many pictures. Besides, isn't this bug report more informative? Whatever - if you close it as a dup again, I'll leave it closed. I just wanted to point out the differences between what I was reporting and bug 166275. :)
I've tested v2.7.7 to see if there were any improvements since v2.6.5. All memory numbers are reported as "Virtual":"Writable":"Resident":"Shared" in MB, as reported by Gnome System Monitor (GSM).

1) Start gthumb on the command line in a directory with no photos ("gthumb .") and it uses 270:75:16:12. Looking at GSM's Memory Map, you can see that the icon-theme.cache for "crystalsvg" is loaded 3 times, at 25MB a pop. The icon-theme.caches for "Bluecurve" and "hicolor" are also each loaded three times - a grand total of ~150MB for these 3 icon-theme.caches.
2) Browse to a folder with 20 images (52MB total size); viewing just the thumbnails of these photos causes gThumb to use 290:95:25:12.
3) Double-click on one image (2.7MB) to view this single photo and gThumb's memory usage jumps to 353:158:89:12.

So, it looks like things have gotten worse (?) than before. The virtual memory usage for viewing a single 2.7MB image file is now 353MB with v2.7.7, whereas it was *just* 259MB with v2.6.5.
Jon, any chance you could identify the memory-hogging code bits and propose patches? (Current CVS is at 2.9.0; please use that code if you do.) - Mike
Jon, I committed a patch (from bug 171593) that affects the icon cache. Viewing a directory with no images:

                  Virt    Wri    Res    Shar
  before patch:  226.5 :  4.9 : 17.3 : 12.4
  after patch:   138.4 :  4.3 : 16.6 : 12.3

Try the latest version from SVN (rev 1263), if you are interested. - Mike
Maybe pmap -d is more interesting:

  before patch - mapped: 231704K  writeable/private: 87180K  shared: 516K
  after patch  - mapped: 141352K  writeable/private: 76352K  shared: 516K

pmap -d reports a bunch of 10 MB [anon] blocks that suck up most of the writeable/private memory (~80 MB). I wonder what they are? - Mike
Well, to answer my own question: most of the 10 MB blocks seem to be associated with image-loader threads launched from image_loader_init. One is for thumb_loader_new, one is for image_viewer_init, and three (N_LOADERS) are created by preloader_new. Do they have to be 10 MB? - Mike
Out of curiosity, I tried this:

  - priv->thread = g_thread_create (load_image_thread, il, TRUE, NULL);
  + priv->thread = g_thread_create_full (load_image_thread, il, 65536, TRUE,
                                         FALSE, G_THREAD_PRIORITY_NORMAL, NULL);

which reduces the stack size for new threads to 64K (down from 10M!). gthumb still works, and pmap -d on an empty directory now gives:

  mapped: 91284K  writeable/private: 25872K  shared: 516K

which is a huge drop! Paolo, are you following this? Is there any reason the above change is a bad idea? Can we push the stack size lower? (I'm just experimenting here - I'm no expert on memory allocation.) - Mike
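[For readers unfamiliar with the GLib call in the diff above, here is a minimal, self-contained sketch of creating a thread with an explicit stack size. This is not gthumb's code; the worker function and compile command are illustrative assumptions.]

  /* Illustrative sketch, not gthumb code: spawn a worker thread with a
   * 64K stack instead of the platform default (often 8-10 MB).
   * Assumed compile line: gcc demo.c `pkg-config --cflags --libs gthread-2.0` */
  #include <glib.h>

  static gpointer
  worker (gpointer data)   /* hypothetical stand-in for load_image_thread */
  {
          return NULL;
  }

  int
  main (void)
  {
          GThread *thread;

          g_thread_init (NULL);   /* required before using GLib threads in 2.x */

          thread = g_thread_create_full (worker, NULL,
                                         65536,   /* stack size in bytes */
                                         TRUE,    /* joinable */
                                         FALSE,   /* not bound to a kernel thread */
                                         G_THREAD_PRIORITY_NORMAL,
                                         NULL);   /* GError ** (ignored here) */
          if (thread == NULL)
                  return 1;
          g_thread_join (thread);
          return 0;
  }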
If I set the stack size to 16k or smaller, pmap reports that 16k is used (4k+12k) regardless (e.g., even if I request 2k):

  b55de000       4 -----  00000000b55de000 000:00000   [ anon ]
  b55df000      12 rw---  00000000b55df000 000:00000   [ anon ]

If I set it to 32k or higher, the second segment simply expands to use the extra allocated space:

  b5636000       4 -----  00000000b5636000 000:00000   [ anon ]
  b5637000      28 rw---  00000000b5637000 000:00000   [ anon ]

...so I guess 16k is the "ideal" minimum for the stack size. - Mike
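[A plausible explanation for the 16k floor - this is an assumption on my part, not something verified in the thread: glibc enforces PTHREAD_STACK_MIN, which is 16384 bytes on typical Linux/x86 systems, so smaller requests get rounded up. A quick probe:]

  /* Quick probe (assumed mechanism): print the platform's minimum thread
   * stack size.  On Linux/x86 with glibc this is typically 16384 bytes,
   * which would match the 16k (4k guard + 12k rw) floor observed above. */
  #include <limits.h>
  #include <stdio.h>

  int
  main (void)
  {
          printf ("PTHREAD_STACK_MIN = %ld bytes\n", (long) PTHREAD_STACK_MIN);
          return 0;
  }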
Created attachment 80342 [details] [review] Patch to reduce thread stack size to 16k
I don't know whether a stack size of 16kb can cause problems; I'd apply the patch to svn and test it for a while.
I've committed a patch to reduce the stack size to 32k (to be conservative), svn rev 1272. - Mike
Well, this seems to be working, so I'll close the bug as fixed. The reported memory usage for Nautilus and gThumb is nearly identical now when viewing a directory with no images. - Mike
Michael and/or Paolo- Can you guys release a new development version tarball so I can test this? Or, maybe someone could point me to a web page that explains how to grab the latest version from svn? (I've never used svn before). Thanks, these recent memory changes look really exciting!
Jon, I'd like to hear your feedback. svn is easy to use:

  cd ~
  svn checkout http://svn.gnome.org/svn/gthumb/trunk
  cd trunk
  vim README        (to check library requirements)
  ./autogen.sh
  make
  su
  make install

- Mike
Okay, thanks for the instructions, Michael; I have the latest svn running on my machine now. One thing I notice is that even with the trunk version, there are still three 10MB blocks being allocated when viewing an empty directory. This is down from nine 10MB blocks in v2.7.8. Is it possible to track down these remaining three blocks and do something similar to what you did in comment #8?

  lapham@bilbo > pmap -d 11993 | grep 10240
  b05ec000   10240 rw---  00000000b05ec000 000:00000   [ anon ]
  b2df0000   10240 rw---  00000000b2df0000 000:00000   [ anon ]
  b37f1000   10240 rw---  00000000b37f1000 000:00000   [ anon ]

These three blocks remain unchanged even after viewing many files, even though new large memory allocations have been made. For example, below are the memory allocations >10MB after viewing a bunch of images:

  a81e3000   46664 rw---  00000000a81e3000 000:00000   [ anon ]
  aaf76000   46664 rw---  00000000aaf76000 000:00000   [ anon ]
  aef22000   23332 rw---  00000000aef22000 000:00000   [ anon ]
  b05ec000   10240 rw---  00000000b05ec000 000:00000   [ anon ]
  b1510000   24660 rw---  00000000b1510000 000:00000   [ anon ]
  b2df0000   10240 rw---  00000000b2df0000 000:00000   [ anon ]
  b37f1000   10240 rw---  00000000b37f1000 000:00000   [ anon ]

...note that those three 10MB allocations remain unchanged.
My best guess is that those three threads are the main program thread and two support threads launched by gdk_threads_init, gnome_program_init, and/or gnome_authentication_manager_init. gThumb doesn't have control over the stack size used by those functions, and Nautilus has the same three 10 MB blocks too. So... I don't think anything can be done about them - but I'm "learning by doing", so I could be wrong... - Mike
You can play with ulimit -s and see what happens... - Mike
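[A hedged aside on why ulimit -s matters here - an assumption about the mechanism, not something verified in this thread: on glibc/NPTL systems, threads created without an explicit stack size get a stack sized from RLIMIT_STACK, which is exactly what `ulimit -s` controls, and a 10240K limit would match the 10 MB blocks seen in pmap. A small sketch to read that limit programmatically:]

  /* Sketch (assumed mechanism): read the soft stack limit that `ulimit -s`
   * controls.  On glibc/NPTL, threads created without an explicit stack
   * size typically inherit a stack sized from this limit. */
  #include <stdio.h>
  #include <sys/resource.h>

  int
  main (void)
  {
          struct rlimit rl;

          if (getrlimit (RLIMIT_STACK, &rl) != 0) {
                  perror ("getrlimit");
                  return 1;
          }
          if (rl.rlim_cur == RLIM_INFINITY)
                  printf ("soft stack limit: unlimited\n");
          else
                  printf ("soft stack limit: %ld KB\n",
                          (long) (rl.rlim_cur / 1024));
          return 0;
  }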
Okay, here is a comparison of v2.9.1_svn_trunk versus v2.7.8. I used a directory containing 39 photos: the first 32 are 3456x2304 pixels (~3MB each) and the last 7 are 2048x1536 (~1MB each). The first column is the photo number: "00" means no photo was loaded, "14" means viewing the 14th photo after having viewed the first 13. The idea was to check for a memory leak from loading many sequential images. The second and third columns are the mapped and writeable/private values from pmap for v2.9.1_svn_trunk; the fourth and fifth columns are the same values for v2.7.8 (from Fedora Core 6).

As you can see, v2.9.1 always wins for mapped memory, and wins fairly consistently for writeable/private - but not always, as can be seen at photos 18, 19, 20, and 28. Also, v2.9.1 is less consistent in its memory allocations, as can be seen starting at photo #17, where it starts jumping all over the map.

          2.9.1         2.7.8
  #     map   wri     map   wri
  00    127    41     303    97
  01    171    85     353   147
  02    194   109     377   170
  03    194   108     377   170
  04    194   108     377   170
  05    194   108     377   170
  06    194   108     377   170
  07    194   108     377   170
  08    194   108     377   170
  09    194   108     377   170
  10    194   108     377   170
  11    194   108     377   170
  12    194   108     377   170
  13    194   108     377   170
  14    196   109     377   170
  15    196   109     377   170
  16    196   109     377   170
  17    242   155     377   170
  18    265   179     377   170
  19    265   179     377   170
  20    265   179     377   170
  21    242   155     377   170
  22    242   155     377   170
  23    219   132     377   170
  24    219   132     377   170
  25    195   109     377   170
  26    219   132     377   170
  27    242   155     377   170
  28    265   179     377   170
  29    242   155     377   170
  30    219   132     377   170
  31    195   109     377   170
  32    181    95     363   156
  33    176    90     349   142
  34    171    85     334   128
  35    181    94     334   128
  36    181    94     334   128
  37    171    85     334   128
  38    162    75     334   128
  39    162    75     334   128
Michael- Note: I was never able to reproduce your reported "mapped: 91284K writeable/private: 25872K" numbers. The best I can do on an empty directory is "mapped: 121032K writeable/private: 35976K". PS: Just so there is no confusion, the "127 41" reported in the table above for image "00" was for the directory with the 39 images, not an empty directory; that is why it is a tad higher than what I see on an empty directory.
> Note: I was never able to reproduce your reported "mapped: 91284K
> writeable/private: 25872K" numbers. The best I can do on an empty
> directory is "mapped: 121032K writeable/private: 35976K".

Quick tests suggest that my numbers were for foreground processes (gthumb), while yours are for background processes (gthumb &). I don't know why the memory varies after photo 17... - Mike
Hmmm, odd. I was indeed running in the background; not sure why that would matter, though. Anyway, I retried running in the foreground and I get the same numbers (mapped: 120960K writeable/private: 35904K). I wonder if you are using a different max stack size?

  lapham@bilbo > ulimit -s
  10240
Hmm, my quick test was misleading. Sometimes there are two 10M blocks, sometimes there are three (testing with foreground processes). After testing some more, I'm not sure what the pattern is... - Mike
The stack size has been increased to 512k to avoid crashes with SVG images - see bug 410827. - Mike