Bug 523103 - Evolution Pango related memory leaks
Status: RESOLVED OBSOLETE
Product: evolution
Classification: Applications
Component: BugBuddyBugs
Version: 2.24.x (obsolete)
OS: Other Linux
Importance: High critical
Target Milestone: ---
Assigned To: Evolution Triage Team
QA Contact: Evolution QA team
Depends on:
Blocks:
Reported: 2008-03-18 04:48 UTC by Akhil Laddha
Modified: 2010-03-26 05:08 UTC
See Also:
GNOME target: ---
GNOME version: 2.19/2.20


Attachments
Valgrind traces of evolution process (279.50 KB, text/plain)
2008-03-18 04:48 UTC, Akhil Laddha
Details
Valgrind traces of evolution process (346.32 KB, text/plain)
2008-04-21 05:10 UTC, Akhil Laddha
Details
Valgrind traces of evolution process (294.51 KB, text/plain)
2008-05-20 09:25 UTC, Akhil Laddha
Details
Valgrind traces of evolution process (464.94 KB, text/plain)
2008-05-22 04:16 UTC, Akhil Laddha
Details
evolution-leaks.txt (290.01 KB, text/plain)
2008-11-20 11:21 UTC, Peter
Details
evolution-leaks.txt (728.95 KB, text/plain)
2008-11-20 13:49 UTC, Peter
Details

Description Akhil Laddha 2008-03-18 04:48:21 UTC
Evolution 2.23.x 

I have Evolution configured with a GroupWise backend and left it running overnight. In the morning, Evolution's resident memory consumption had gone up to 287 MB. A few leaks could be due to dependent packages (glib, pango, gtk), but I can still see leaks in GroupWise. I closed Evolution while collecting the valgrind traces (there was no crash).
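For reference, traces like the attached ones are typically collected by running the application under valgrind's memcheck tool. The exact flags the reporter used aren't recorded in this bug; the invocation below is a plausible sketch, and the log file name is illustrative:

```shell
# Plausible valgrind invocation for collecting leak traces (illustrative,
# not necessarily the exact command used for the attachments in this bug).
VG_OPTS="--leak-check=full --show-reachable=yes --num-callers=20"
LOGFILE="evolution-leaks.txt"

# Build the full command; running it requires valgrind and evolution installed.
CMD="valgrind $VG_OPTS --log-file=$LOGFILE evolution"
echo "$CMD"
```

`--show-reachable=yes` matters here: it is what makes valgrind report the "still reachable" blocks discussed later in this bug, which are often deliberate caches rather than true leaks.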
Comment 1 Akhil Laddha 2008-03-18 04:48:54 UTC
Created attachment 107507 [details]
Valgrind traces of evolution process
Comment 2 Akhil Laddha 2008-04-21 05:10:11 UTC
Created attachment 109609 [details]
Valgrind traces of evolution process 

I left Evolution 2.23.1 running for 2 days/nights without performing any operations. Before closing Evolution, I did auto-completion and switched among components, then forcefully closed Evolution (no crash).
Comment 3 Akhil Laddha 2008-05-05 11:03:45 UTC
Bug 531509 may be helpful
Comment 4 Sankar P 2008-05-18 17:28:06 UTC
Akhil,

Thanks for the traces. I will look into this.
Comment 5 Akhil Laddha 2008-05-20 09:25:16 UTC
Created attachment 111215 [details]
Valgrind traces of evolution process

I collected these valgrind traces in 2.23.3 while performing two operations, offline sync and message movement; this could be an extension of bug 530543.
Comment 6 Akhil Laddha 2008-05-22 04:16:20 UTC
Created attachment 111318 [details]
Valgrind traces of evolution process

After Sankar fixed a big memory leak, I thought I should try valgrind again. I left Evolution running overnight without doing any operations; attached are the valgrind traces of Evolution 2.23.3. The traces look similar to the previous ones.
Comment 7 Sankar P 2008-08-08 06:58:09 UTC
The hunk corresponding to GroupWise is fixed. The rest of the leaks show up somewhere in Pango etc.; need to dig more. Removing the GW tag.
Comment 9 Milan Crha 2008-09-18 15:34:42 UTC
Akhil, it may be worth trying the current 2.23.92 to see how it is going. Hopefully there have been some changes in the leaks.
Comment 10 Akhil Laddha 2008-10-14 04:37:51 UTC
see bug 556217
Comment 11 Peter 2008-11-20 10:10:43 UTC
Heh, and this is still a problem in evolution-2.24.1.1. I'll try to gather valgrind output too, since this is an obvious regression since 2.22...
Comment 12 Peter 2008-11-20 11:21:24 UTC
Created attachment 123099 [details]
evolution-leaks.txt

Well, yes. It seems the most important leak is Pango related.
Comment 13 Peter 2008-11-20 13:49:44 UTC
Created attachment 123109 [details]
evolution-leaks.txt

This one is better, as I've installed more libraries with debugging symbols and worked longer in Evolution. Also, I've checked the following leak:

==3182== 11,779,610 bytes in 6,963 blocks are still reachable in loss record 475 of 475
[snip]
==3182==    by 0x4BBA204: pango_ot_ruleset_new_from_description (pango-ot-ruleset.c:299)
==3182==    by 0x4BBA419: pango_ot_ruleset_get_for_description (pango-ot-ruleset.c:149)

Actually, this is not a leak; it works as it should. But there are still big problems somewhere...
Comment 14 Akhil Laddha 2010-03-26 05:08:16 UTC
These traces are obsolete now. I will file a new bug against 2.30.0 if I see memory leaks.