Bug 702023 - caldav/gdbus memory leak
Status: RESOLVED NOTABUG
Product: evolution-data-server
Classification: Platform
Component: Calendar
Version: 3.8.x (obsolete)
OS: Other Linux
Priority: Normal
Severity: normal
Target Milestone: ---
Assigned To: evolution-calendar-maintainers
QA Contact: Evolution QA team
Depends on:
Blocks: 627707
Reported: 2013-06-11 15:51 UTC by David Woodhouse
Modified: 2013-06-12 09:25 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description David Woodhouse 2013-06-11 15:51:49 UTC
==21714== 24 bytes in 1 blocks are definitely lost in loss record 2,292 of 7,013
==21714==    at 0x4A06409: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==21714==    by 0x376804D93E: g_malloc (gmem.c:159)
==21714==    by 0x37680634ED: g_slice_alloc (gslice.c:1003)
==21714==    by 0x3768063A2D: g_slice_alloc0 (gslice.c:1029)
==21714==    by 0x37680473EC: g_main_context_push_thread_default (gmain.c:728)
==21714==    by 0x3CFD0B94DF: g_dbus_connection_send_message_with_reply_sync (gdbusconnection.c:2230)
==21714==    by 0x3CFD0B9946: g_dbus_connection_call_sync_internal (gdbusconnection.c:5564)
==21714==    by 0x3CFD0C5322: g_dbus_proxy_call_sync_internal (gdbusproxy.c:2910)
==21714==    by 0x3CFD0C6742: g_dbus_proxy_call_sync (gdbusproxy.c:3102)
==21714==    by 0x3DE023B7C6: e_dbus_source_manager_call_authenticate_sync (e-dbus-source-manager.c:679)
==21714==    by 0x3DDFE4B205: e_source_registry_authenticate_sync (e-source-registry.c:1950)
==21714==    by 0xF95224D: caldav_do_open (e-cal-backend-caldav.c:2904)
==21714==
Comment 1 David Woodhouse 2013-06-11 15:52:30 UTC
This one appeared immediately after the previous one in the valgrind log. It's quite possibly related, and I'm unlikely to get anywhere with it unless it can be tracked down along with the leak above...

==21714== 28 bytes in 1 blocks are definitely lost in loss record 2,333 of 7,013
==21714==    at 0x4A06409: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==21714==    by 0x390C20BBAE: __libc_res_nsend (res_send.c:441)
==21714==    by 0x390C208C56: __libc_res_nquery (res_query.c:226)
==21714==    by 0x390C209872: __libc_res_nsearch (res_query.c:582)
==21714==    by 0x198C5C2C: ???
==21714==    by 0x39092DB115: gaih_inet (getaddrinfo.c:849)
==21714==    by 0x39092DE6FC: getaddrinfo (getaddrinfo.c:2396)
==21714==    by 0x3CFD07A6E2: do_lookup_by_name (gthreadedresolver.c:81)
==21714==    by 0x3CFD077F14: g_task_thread_pool_thread (gtask.c:1242)
==21714==    by 0x376806CBE5: g_thread_pool_thread_proxy (gthreadpool.c:309)
==21714==    by 0x376806C224: g_thread_proxy (gthread.c:798)
==21714==    by 0x3909607C52: start_thread (pthread_create.c:308)
==21714==
Comment 2 David Woodhouse 2013-06-11 15:58:42 UTC
Actually, ignoring comment #1, there's a pattern of these leaks inside g_main_context_push_thread_default().

==21714== 48 (24 direct, 24 indirect) bytes in 1 blocks are definitely lost in loss record 3,304 of 7,013
==21714==    at 0x4A06409: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==21714==    by 0x376804D93E: g_malloc (gmem.c:159)
==21714==    by 0x37680634ED: g_slice_alloc (gslice.c:1003)
==21714==    by 0x3768063A2D: g_slice_alloc0 (gslice.c:1029)
==21714==    by 0x37680473EC: g_main_context_push_thread_default (gmain.c:728)
==21714==    by 0x1035F998: ??? (in /usr/lib64/gio/modules/libdconfsettings.so)
==21714==    by 0x376806C224: g_thread_proxy (gthread.c:798)
==21714==    by 0x3909607C52: start_thread (pthread_create.c:308)
==21714==    by 0x39092F4ECC: clone (clone.S:113)
Comment 3 David Woodhouse 2013-06-11 15:58:52 UTC
==21714== 48 (24 direct, 24 indirect) bytes in 1 blocks are definitely lost in loss record 3,303 of 7,013
==21714==    at 0x4A06409: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==21714==    by 0x376804D93E: g_malloc (gmem.c:159)
==21714==    by 0x37680634ED: g_slice_alloc (gslice.c:1003)
==21714==    by 0x3768063A2D: g_slice_alloc0 (gslice.c:1029)
==21714==    by 0x37680473EC: g_main_context_push_thread_default (gmain.c:728)
==21714==    by 0x3DDFE49659: source_registry_object_manager_thread (e-source-registry.c:1027)
==21714==    by 0x376806C224: g_thread_proxy (gthread.c:798)
==21714==    by 0x3909607C52: start_thread (pthread_create.c:308)
==21714==    by 0x39092F4ECC: clone (clone.S:113)
Comment 4 David Woodhouse 2013-06-11 15:58:58 UTC
==21714== 48 (24 direct, 24 indirect) bytes in 1 blocks are definitely lost in loss record 3,302 of 7,013
==21714==    at 0x4A06409: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==21714==    by 0x376804D93E: g_malloc (gmem.c:159)
==21714==    by 0x37680634ED: g_slice_alloc (gslice.c:1003)
==21714==    by 0x3768063A2D: g_slice_alloc0 (gslice.c:1029)
==21714==    by 0x37680473EC: g_main_context_push_thread_default (gmain.c:728)
==21714==    by 0x3CFD0C6B8C: gdbus_shared_thread_func (gdbusprivate.c:277)
==21714==    by 0x376806C224: g_thread_proxy (gthread.c:798)
==21714==    by 0x3909607C52: start_thread (pthread_create.c:308)
==21714==    by 0x39092F4ECC: clone (clone.S:113)
Comment 5 Matthew Barnes 2013-06-11 16:12:41 UTC
I'm not sure those are really leaks.  It looks like GLib might just be creating a permanent thread-private main loop context on a worker thread.
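
For illustration, here is a minimal sketch of the pattern those stack traces suggest. The GLib calls are the real API, but the worker function itself is hypothetical, not the actual GDBus/EDS code. g_main_context_push_thread_default() allocates a small per-thread stack entry (the gmain.c:728 frame in the traces above); if the thread keeps its context for the life of the process, that entry is never freed, and valgrind reports it as definitely lost.

#include <glib.h>

/* Hypothetical worker thread, sketching the pattern in the traces
 * above; not code taken from GDBus or evolution-data-server. */
static gpointer
worker_thread (gpointer user_data)
{
  GMainContext *context = g_main_context_new ();
  GMainLoop *loop;

  /* Allocates the per-thread stack entry seen at gmain.c:728. */
  g_main_context_push_thread_default (context);

  loop = g_main_loop_new (context, FALSE);
  g_main_loop_run (loop);  /* typically runs until process exit */

  /* These cleanups only run if the loop ever quits; if the thread
   * lives until exit, the push above is reported as a leak. */
  g_main_loop_unref (loop);
  g_main_context_pop_thread_default (context);
  g_main_context_unref (context);

  return NULL;
}

Whether that counts as a real leak or just a permanent per-thread allocation is exactly the question here.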
Comment 6 David Woodhouse 2013-06-11 16:18:46 UTC
Some larger ones in addressbook:

==31378== 4,064 bytes in 2 blocks are definitely lost in loss record 6,227 of 6,316
==31378==    at 0x4A08121: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==31378==    by 0x376804D996: g_malloc0 (gmem.c:189)
==31378==    by 0x376801A918: thread_memory_from_self.part.12 (gslice.c:512)
==31378==    by 0x3768063634: g_slice_alloc (gslice.c:1561)
==31378==    by 0x3768063A2D: g_slice_alloc0 (gslice.c:1029)
==31378==    by 0x37680473EC: g_main_context_push_thread_default (gmain.c:728)
==31378==    by 0xD1A6B10: e_ews_soup_thread (e-ews-connection.c:1453)
==31378==    by 0x376806C224: g_thread_proxy (gthread.c:798)
==31378==    by 0x3909607C52: start_thread (pthread_create.c:308)
==31378==    by 0x39092F4ECC: clone (clone.S:113)

Valgrind is fairly good about distinguishing 'definitely lost' from 'possibly lost'. I'm *completely* ignoring all its 'possibly lost' warnings, and I've very rarely seen false positives this way.

Surely the main loop context would be referenced *somewhere*, if it's permanent?
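
As an aside, newer valgrind releases (3.9 and later, so possibly not the version used here) can filter by leak kind directly, which matches the 'definitely lost only' workflow. The binary name below is illustrative:

valgrind --leak-check=full --show-leak-kinds=definite \
         --errors-for-leak-kinds=definite /usr/libexec/evolution-calendar-factory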
Comment 7 David Woodhouse 2013-06-11 16:19:57 UTC
And even if it's a false positive... can we do something to shut it up? The signal-to-noise ratio in valgrind output is bad enough already. I'd *love* to get to the point where valgrind is silent and anything it says is considered a bug, although I appreciate that's unrealistic!
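
If these do turn out to be valgrind false positives, the conventional way to shut them up is a suppression matched on the common tail of the stacks above. A sketch (the suppression name is arbitrary, and this is not something GLib ships):

{
   glib-push-thread-default-context
   Memcheck:Leak
   fun:malloc
   fun:g_malloc
   fun:g_slice_alloc
   fun:g_slice_alloc0
   fun:g_main_context_push_thread_default
}

Saved to a file and passed with --suppressions=glib-context.supp, this would hide every leak report whose allocation stack ends in g_main_context_push_thread_default(), while leaving everything else visible.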
Comment 8 David Woodhouse 2013-06-12 08:24:39 UTC
This is probably the same valgrind bug as bug 702021.
Comment 9 David Woodhouse 2013-06-12 09:25:17 UTC
Comment #1 is still unaccounted for, but I don't think we're going to find that one easily anyway.