GNOME Bugzilla – Bug 671636
Individuals trickle in slowly (probably an EDS issue)
Last modified: 2013-06-11 23:03:00 UTC
When my app starts, I create a FolksIndividualAggregator, connect to its "individuals-changed-detailed" signal, and then call folks_individual_aggregator_prepare(). I added some debug output to see which signals are emitted on first run (all Telepathy accounts are already online when the app starts):

ind changed: 14 added, 1 removed. total: 13
one removed is empty... new total: 14
ind changed: 32 added, 1 removed. total: 45
one removed is empty... new total: 46
ind changed: 32 added, 3 removed. total: 75
one removed is empty... new total: 76
ind changed: 32 added, 6 removed. total: 102
one removed is empty... new total: 103
ind changed: 32 added, 4 removed. total: 131
one removed is empty... new total: 132
ind changed: 32 added, 3 removed. total: 161
one removed is empty... new total: 162
ind changed: 32 added, 3 removed. total: 191
one removed is empty... new total: 192
ind changed: 32 added, 3 removed. total: 221
one removed is empty... new total: 222
ind changed: 31 added, 6 removed. total: 247
one removed is empty... new total: 248
ind changed: 32 added, 5 removed. total: 275
one removed is empty... new total: 276
ind changed: 32 added, 7 removed. total: 301
one removed is empty... new total: 302
ind changed: 31 added, 7 removed. total: 326
one removed is empty... new total: 327
ind changed: 31 added, 5 removed. total: 353
one removed is empty... new total: 354
ind changed: 32 added, 2 removed. total: 384
one removed is empty... new total: 385
ind changed: 32 added, 7 removed. total: 410
one removed is empty... new total: 411
ind changed: 31 added, 4 removed. total: 438
one removed is empty... new total: 439
ind changed: 18 added, 4 removed. total: 453
one removed is empty... new total: 454
ind changed: 32 added, 15 removed. total: 471
one removed is empty... new total: 472
ind changed: 32 added, 8 removed. total: 496
one removed is empty... new total: 497
ind changed: 32 added, 11 removed. total: 518
one removed is empty... new total: 519
ind changed: 24 added, 4 removed. total: 539
one removed is empty... new total: 540
eds-WARNING **: edsf-persona-store.vala:2046: Error in address book view query: GDBus.Error:org.gnome.evolution.dataserver.AddressBook.OtherError: LDAP error 0x20 (No such object)
eds-WARNING **: edsf-persona-store.vala:2056: Error is considered unrecoverable. Removing persona store.
ind changed: 1 added, 1 removed. total: 540
ind changed: 1 added, 1 removed. total: 540
ind changed: 1 added, 1 removed. total: 540
ind changed: 1 added, 1 removed. total: 540
ind changed: 1 added, 1 removed. total: 540
ind changed: 1 added, 1 removed. total: 540
one removed is empty... new total: 541
ind changed: 1 added, 1 removed. total: 541
ind changed: 1 added, 1 removed. total: 541
ind changed: 106 added, 8 removed. total: 639
one removed is empty... new total: 640
ind changed: 3 added, 2 removed. total: 641
one removed is empty... new total: 642
ind changed: 1 added, 1 removed. total: 642
ind changed: 1 added, 1 removed. total: 642
ind changed: 181 added, 14 removed. total: 809
one removed is empty... new total: 810

All those signals are emitted without ever returning to the main loop, so the app freezes for a while. I think folks should coalesce the change signals and emit the whole diff only once.
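As an illustration of the setup and the suggestion above, here is a minimal C sketch, assuming the folks C API as generated from Vala (folks_individual_aggregator_new(), the "individuals-changed-detailed" signal carrying a GeeMultiMap of old-to-new individuals, and the asynchronous prepare call). The coalescing part (pending_batches / flush_pending_changes()) is purely hypothetical: it shows the kind of batching I'd like folks to do internally, i.e. deferring processing to a single idle callback so a burst of emissions is handled once instead of once per signal.

#include <gee.h>
#include <folks/folks.h>

/* Hypothetical coalescing state: count the change bursts and process them
 * in a single idle callback instead of once per signal emission. */
static guint pending_batches = 0;
static guint flush_source_id = 0;

static gboolean
flush_pending_changes (gpointer user_data)
{
  g_print ("processing %u queued change batches in one go\n", pending_batches);
  pending_batches = 0;
  flush_source_id = 0;
  return FALSE;  /* one-shot idle source */
}

/* Handler for "individuals-changed-detailed"; the GeeMultiMap maps old
 * individuals to new ones. Here we only queue the batch for later. */
static void
on_individuals_changed (FolksIndividualAggregator *aggregator,
                        GeeMultiMap *changes,
                        gpointer user_data)
{
  pending_batches++;
  if (flush_source_id == 0)
    flush_source_id = g_idle_add (flush_pending_changes, NULL);
}

int
main (void)
{
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);
  FolksIndividualAggregator *aggregator = folks_individual_aggregator_new ();

  g_signal_connect (aggregator, "individuals-changed-detailed",
                    G_CALLBACK (on_individuals_changed), NULL);
  /* prepare() is asynchronous; the callback is omitted since the result
   * isn't needed for this sketch. */
  folks_individual_aggregator_prepare (aggregator, NULL, NULL);

  g_main_loop_run (loop);
  return 0;
}

(Something like gcc test.c $(pkg-config --cflags --libs folks) should build it, assuming the "folks" pkg-config name.)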
Created attachment 209316 [details]
test program

Here is a simple test program to measure the time taken by the folks aggregator.
Here is the test program output; I've replaced long runs of 'idle' messages with '...':

0.012351: idle, quiescent: No
...
0.028397: idle, quiescent: No
0.101145: aggregator prepared
0.152011: Added: 14, Removed: 1, quiescent: No
0.381642: Added: 32, Removed: 1, quiescent: No
0.451805: Added: 32, Removed: 3, quiescent: No
0.523084: Added: 32, Removed: 6, quiescent: No
0.596283: Added: 32, Removed: 4, quiescent: No
0.670509: Added: 32, Removed: 3, quiescent: No
0.743866: Added: 32, Removed: 3, quiescent: No
0.818778: Added: 32, Removed: 3, quiescent: No
0.920736: Added: 31, Removed: 6, quiescent: No
1.002463: Added: 32, Removed: 5, quiescent: No
1.093719: Added: 32, Removed: 6, quiescent: No
1.198596: Added: 31, Removed: 7, quiescent: No
1.316865: Added: 31, Removed: 6, quiescent: No
1.418450: Added: 32, Removed: 2, quiescent: No
1.529910: Added: 32, Removed: 6, quiescent: No
1.647462: Added: 31, Removed: 5, quiescent: No
1.738592: Added: 19, Removed: 4, quiescent: No
1.898713: Added: 32, Removed: 15, quiescent: No
1.981316: Added: 32, Removed: 8, quiescent: No
2.058910: Added: 32, Removed: 11, quiescent: No
2.118286: Added: 24, Removed: 4, quiescent: No
2.126979: idle, quiescent: No
2.243040: idle, quiescent: No
2.263326: Added: 1, Removed: 1, quiescent: No
2.276730: Added: 1, Removed: 1, quiescent: No
2.284859: Added: 1, Removed: 1, quiescent: No
2.289598: Added: 1, Removed: 1, quiescent: No
2.296223: Added: 1, Removed: 1, quiescent: No
2.311139: Added: 2, Removed: 2, quiescent: No
2.324966: Added: 1, Removed: 1, quiescent: No
2.333540: Added: 1, Removed: 1, quiescent: No
2.394970: Added: 19, Removed: 3, quiescent: No
2.687040: Added: 106, Removed: 8, quiescent: No
2.764258: Added: 32, Removed: 1, quiescent: No
2.813536: Added: 3, Removed: 2, quiescent: No
3.093937: Added: 181, Removed: 14, quiescent: No
3.194002: Added: 1, Removed: 1, quiescent: No
3.202286: Added: 1, Removed: 1, quiescent: No
3.216056: Added: 1, Removed: 1, quiescent: No
3.229774: Added: 1, Removed: 1, quiescent: No
3.389467: Added: 85, Removed: 29, quiescent: No
3.473024: Added: 32, Removed: 1, quiescent: No
3.516388: Added: 4, Removed: 3, quiescent: No
3.554265: idle, quiescent: No
...
5.295162: idle, quiescent: No
5.370788: Added: 32, Removed: 1, quiescent: No
5.374577: Added: 1, Removed: 1, quiescent: No
5.375605: quiescent changed: Yes

So the first block of signals freezes the main loop for 2 s, followed by a 1.3 s freeze. The overall time until the aggregator becomes quiescent is 5.3 s.
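The attachment itself isn't reproduced here, but the measurement it performs can be sketched roughly as follows (a sketch only, assuming the folks C API, the "is-quiescent" property and its generated getter; the exact output format of the attached program may differ): a GTimer is started before prepare(), and the elapsed time is printed on every change signal, on a periodic probe, and when quiescence is reached.

#include <gee.h>
#include <folks/folks.h>

static GTimer *timer = NULL;

static void
on_individuals_changed (FolksIndividualAggregator *aggregator,
                        GeeMultiMap *changes,
                        gpointer user_data)
{
  /* Print the time of each change burst; "quiescent" mirrors the
   * aggregator's is-quiescent property (assumed getter name). */
  g_print ("%f: individuals-changed-detailed, quiescent: %s\n",
           g_timer_elapsed (timer, NULL),
           folks_individual_aggregator_get_is_quiescent (aggregator)
               ? "Yes" : "No");
}

static void
on_quiescent_notify (GObject *object, GParamSpec *pspec, gpointer user_data)
{
  g_print ("%f: quiescent changed\n", g_timer_elapsed (timer, NULL));
}

static void
on_prepared (GObject *source, GAsyncResult *result, gpointer user_data)
{
  GError *error = NULL;

  folks_individual_aggregator_prepare_finish (
      FOLKS_INDIVIDUAL_AGGREGATOR (source), result, &error);
  if (error != NULL)
    {
      g_warning ("prepare failed: %s", error->message);
      g_error_free (error);
      return;
    }
  g_print ("%f: aggregator prepared\n", g_timer_elapsed (timer, NULL));
}

static gboolean
on_tick (gpointer user_data)
{
  /* A periodic probe: long gaps between these lines show how long the
   * main loop was blocked while change signals were being processed. */
  g_print ("%f: idle\n", g_timer_elapsed (timer, NULL));
  return TRUE;  /* keep the timeout source */
}

int
main (void)
{
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);
  FolksIndividualAggregator *aggregator = folks_individual_aggregator_new ();

  timer = g_timer_new ();
  g_signal_connect (aggregator, "individuals-changed-detailed",
                    G_CALLBACK (on_individuals_changed), NULL);
  g_signal_connect (aggregator, "notify::is-quiescent",
                    G_CALLBACK (on_quiescent_notify), NULL);
  g_timeout_add (10, on_tick, NULL);
  folks_individual_aggregator_prepare (aggregator, on_prepared, NULL);

  g_main_loop_run (loop);
  return 0;
}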
Created attachment 209467 [details]
Test case

Here is an updated test case that shows another issue: it accumulates additions and removals during preparation. In theory the removals MUST be empty at the end, but here I get "accumulated changes: 889 added, 154 removed", which should not be possible, since it would mean contacts were removed that were never added...
(In reply to comment #3)
> Created an attachment (id=209467) [details]
> Test case
> 
> Here is an updated test case that shows another issue:
> 
> It accumulate additions and removals while the preparation. In theory removals
> MUST be empty in the end, but here I get: "accumulated changes: 889 added, 154
> removed". Which is not possible since it remove contacts that where never
> added...

I don't understand your reasoning for removals having to be empty in the end. In this test case, removals will only be empty if all the individuals which are removed are later re-added. There's no need for the aggregator to do this.

I've just run the test case, and removals weren't empty for me in the end. Looking at the IDs of the individuals being added and removed each time, it all looks fine to me.

Didn't you say on IRC that you think it's the EDS backend which is causing the slowdowns? Looking at your log from comment #0, it looks like the persona store which is emitting personas in groups of 32 could definitely be improved.
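To make the "later re-added" point concrete, here is a rough C sketch of the accumulation such a test case performs, using plain GLib hash tables as sets. The record_added()/record_removed() helpers are hypothetical and would be called once per added or removed individual ID from the "individuals-changed-detailed" handler (extracting the IDs from the signal's GeeMultiMap is omitted); this is not the attached test case itself.

#include <glib.h>

/* Hypothetical accumulation state: IDs of individuals ever reported as
 * added and ever reported as removed since prepare() was called. */
static GHashTable *added_ids = NULL;
static GHashTable *removed_ids = NULL;

static void
init_accumulators (void)
{
  added_ids = g_hash_table_new_full (g_str_hash, g_str_equal, g_free, NULL);
  removed_ids = g_hash_table_new_full (g_str_hash, g_str_equal, g_free, NULL);
}

/* Called (e.g. from the "individuals-changed-detailed" handler) for every
 * individual reported as added. */
static void
record_added (const gchar *id)
{
  g_hash_table_insert (added_ids, g_strdup (id), GINT_TO_POINTER (1));
}

/* Called for every individual reported as removed. The same ID may already
 * be in added_ids: an individual added by an earlier signal can later be
 * removed when it is replaced during aggregation, so a non-empty removed
 * set at quiescence is not a contradiction. */
static void
record_removed (const gchar *id)
{
  g_hash_table_insert (removed_ids, g_strdup (id), GINT_TO_POINTER (1));
}

static void
print_summary (void)
{
  g_print ("accumulated changes: %u added, %u removed\n",
           g_hash_table_size (added_ids), g_hash_table_size (removed_ids));
}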
(In reply to comment #4)
> (In reply to comment #3)
> > Created an attachment (id=209467) [details] [details]
> > Test case
> > 
> > Here is an updated test case that shows another issue:
> > 
> > It accumulate additions and removals while the preparation. In theory removals
> > MUST be empty in the end, but here I get: "accumulated changes: 889 added, 154
> > removed". Which is not possible since it remove contacts that where never
> > added...
> 
> I don't understand your reasoning for removals having to be empty in the end.
> In this test case, removals will only be empty if all the individuals which are
> removed are later re-added. There's no need for the aggregator to do this.
> 
> I've just run the test case, and removals wasn't empty for me in the end.
> Looking at the IDs of the individuals being added and removed each time, it all
> looks fine to me.
> 
> Didn't you say on IRC that you think it's the EDS backend which is causing the
> slowdowns? Looking at your log from comment #0, it looks like the persona store
> which is emitting personas in groups of 32 could definitely be improved.

I haven't tried out the test case yet, but I suspect it will run fairly quickly for me. I'm hoping that this highlights the difference between those of us who have hit these performance problems and those of us who haven't (unfortunately, the developers).

The clustering of personas is particularly peculiar. We don't have specific logic that slowly trickles out Individuals like this, so it's certainly possible this goes down to the PersonaStore or EDS itself. I'm escalating this bug.
Philip thinks the groupings-of-32 problem may have already been fixed by now (we simply wait until EDS has issued its "complete" signal). I'll investigate further.
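For context on the EDS side, here is a rough C sketch of the "wait for the view's 'complete' signal" pattern referred to above, assuming EBookClientView's "objects-added" and "complete" signals; this only illustrates the idea and is not the actual edsf-persona-store.vala code, and flush_contacts() is a hypothetical hand-off to the persona store.

#include <libebook/libebook.h>  /* older EDS releases split this into per-class headers */

/* Hypothetical buffer of contacts reported so far by one book view. */
static GPtrArray *pending_contacts = NULL;

static void
on_objects_added (EBookClientView *view,
                  const GSList *objects,
                  gpointer user_data)
{
  const GSList *l;

  if (pending_contacts == NULL)
    pending_contacts = g_ptr_array_new_with_free_func (g_object_unref);

  /* EDS delivers contacts in batches (e.g. 32 at a time); just buffer them
   * instead of turning every batch into a persona-store change. */
  for (l = objects; l != NULL; l = l->next)
    g_ptr_array_add (pending_contacts, g_object_ref (l->data));
}

static void
on_complete (EBookClientView *view,
             const GError *error,
             gpointer user_data)
{
  if (error != NULL)
    {
      g_warning ("address book view query failed: %s", error->message);
      return;
    }

  /* Only now expose the whole contact set in one go, so consumers see a
   * single large change instead of many 32-contact bursts. */
  g_print ("view complete: %u contacts buffered\n", pending_contacts->len);
  /* flush_contacts (pending_contacts);  -- hypothetical hand-off */
}

The handlers would be connected with g_signal_connect() on the view before starting it; the exact signal signatures above are my reading of the EDS API and may need adjusting.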
(In reply to comment #6)
> Philip thinks the groupings-of-32 problem may have already been fixed by now
> (we simply wait until EDS has issued its "complete" signal). I'll investigate
> further.

If so, it seems like it was fixed sometime after 0.6.9-2 (the version in Fedora 17): I added my Google contacts to EDS (1,200 total) and now it takes a couple of seconds for GNOME Contacts to start.
Xavier, can you still reproduce this problem with newer releases of Folks?
Indeed, 0.9.2 seems to load much faster.
Fantastic. I'll close this.