Bug 353334 - reduce xover memory usage
reduce xover memory usage
Status: RESOLVED FIXED
Product: Pan
Classification: Other
Component: general
Version: pre-1.0 betas
Hardware/OS: Other All
Importance: High normal
Target Milestone: 1.0
Assigned To: Charles Kerr
QA Contact: Pan QA Team
Depends on:
Blocks:
 
 
Reported: 2006-08-29 02:31 UTC by Charles Kerr
Modified: 2006-09-03 15:46 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
0.110 patch (20.10 KB, patch) -- 2006-08-29 02:32 UTC, Charles Kerr

Description Charles Kerr 2006-08-29 02:31:57 UTC
This was reported by csv4me@xs4all.nl in pan-users:

> When loading the complete startrek group pan claims about 1.8 G of
> memory (oscillating a bit and eventually claiming the full 2G of
> swap space) on my 1.5 G RAM box. The swap drive is exerting
> itself nicely averaging at 10 MB/sec or so. Definitely a memory
> starvation issue AND a working set problem. The last CVS version
> of pan loads this group just fine, takes ages though because old
> pan uses a single connection for the header download.

alt.binaries.multimedia.startrek is indeed a torture test
of a group.  There are over 6 million headers there.
Under that load, even trivial issues are multiplied into
large problems.  These were the suspects that showed
up in my testing:

* We only clear out xover's "incomplete multipart Quarks"
  lookup table when the xover finishes, not as parts
  become complete.  This list becomes enormous,
  which has its own obvious costs and also the secondary
  cost of bloating the Quark hashtable.

* We create a lot of Message-Id Quarks that we don't keep:
  multiparts only use the first part's Message-Id as a key.
  This eats up CPU and also bloats the Quark hashtable.

* Reparenting articles in the header pane becomes insanely
  expensive -- say you're reparenting 500 out of 200,000
  siblings.  Each sibling is revalidated each time a node
  is reparented, or about 100,000,000 revalidation steps.
  This pegs the CPU for 15-45 seconds at a time.
  Adding a batch() reparent call will drop this to a single
  revalidation pass.

* Pan running an xover task spends 30% of its time
  converting the Date header into a time_t even if
  we don't keep it: multiparts keep the first part's date.
  This doesn't affect the memory much, but gmime's
  date decoder uses far too many malloc/free calls,
  which showed up disproportionately in testing.

* xover_add() also creates unneeded temporary std::strings
  by calling `normalize subject' for single-part articles
  and by using string = quark rather than
  string.assign (quark.str, quark.len);

Effects of these changes:

CPU: Pan running an xover task goes from about 20% CPU to 3%
on my AMD budget box w/512MB.

Memory Reduction: for multi-million header groups, the memory
usage is cut by more than half, as this table shows:

          Number of parts fetched
          from startrek newsgroup    Top shows         Top shows
Version   times three newsservers    virt memory       res memory

0.110     1,811,000                  360M              328M
0.110     1,914,000                  404M              392M
0.110     2,012,500                  507M              438M
0.110     2,087,000                  544M              429M

0.111     1,814,500                  177M              147M
0.111     2,001,500                  196M              165M
0.111     2,188,000                  213M              182M
0.111     3,056,000                  299M              267M

Attached is a 0.110 patch including these changes.
Comment 1 Charles Kerr 2006-08-29 02:32:28 UTC
Created attachment 71813 [details] [review]
0.110 patch
Comment 2 Darren Albers 2006-08-29 12:31:46 UTC
Very impressive!  Pan keeps getting better and better!
Comment 3 Charles Kerr 2006-08-29 17:40:01 UTC
Apples-to-apples comparison between CVS 0.14.9x and 0.111, both
downloading the startrek newsgroup from only giganews.

Version    Parts     Virt   Res

0.111      200,000   79M    46M
0.14.9x    200,000   89M    55M

0.111      400,000   99M    65M
0.14.9x    400,000   118M   87M

0.111      550,000   110M   79M
0.14.9x    550,000   143M   111M

Strangely, 0.14.9x stopped right after 550,000,
which was only about four days' worth of articles.
I don't know if it's a bug in 0.14.9x, and am not
looking backwards to find out.
Comment 4 Jeff Berman 2006-08-29 18:48:01 UTC
Charles, that is awesome!  I noticed the same memory issues with some of the divx binary groups, but figured that was just how it had to be, so I didn't report it.

Thank you for all your hard work in making pan so great.

Jeff