Bug 115305 - Binary attachments remain in memory after being written to disk
Status: VERIFIED INCOMPLETE
Product: Pan
Classification: Other
Component: general
Version: 0.14.0
OS: Other Linux
Priority: Normal  Severity: normal
Target Milestone: ---
Assigned To: Charles Kerr
QA Contact: Pan QA Team
Depends on:
Blocks:
 
 
Reported: 2003-06-16 17:25 UTC by nicolas.girard
Modified: 2009-08-15 18:40 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
Valgrind logfile #1 (24.61 KB, text/plain)
2003-06-30 05:54 UTC, nicolas.girard
Details

Description nicolas.girard 2003-06-16 17:25:45 UTC
As I was monitoring my processes with top, I discovered that, when
downloading a message with a 15 MB attachment, pan requires an
additional ~15 MB from the beginning to the end of the download.

What is surprising is that, once the attachment is decoded and written to
disk, the 15 MB are still held by pan. Deleting the message in the header
pane releases these 15 MB from memory.

This is quite puzzling to me; anyway, there are a few possibilities:
  - either it is a feature, because the messages are expected to remain in pan's
cache, and pan's cache is expected to be in memory, not on disk; but this
would be very unusual, at least to me

  - or it is the expected behaviour, but who would expect 150 MB of RAM to
be necessary to download 10 x 15 MB attachments?

  - or it is a defect
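The jump described above can also be measured from a script rather than by watching top; a minimal sketch, assuming a Linux-style `ps` and that the process being watched is pan (the `rss_kb` helper is hypothetical, not part of Pan):

```shell
# Print a process's resident set size (RSS) in KB, the same figure top shows.
# rss_kb is a hypothetical helper used only to illustrate the measurement.
rss_kb() {
  ps -o rss= -p "$1" | tr -d ' '
}

# Usage sketch:
#   pid=$(pgrep -x pan)
#   before=$(rss_kb "$pid")   # RSS before downloading the attachment
#   # ...download and save the 15 MB attachment...
#   after=$(rss_kb "$pid")    # if (after - before) stays around 15000 KB,
#                             # the decoded data is still held in memory
```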
Comment 1 Charles Kerr 2003-06-27 01:30:29 UTC
Could you install valgrind, and run it on such a
Pan session to look for leaks?  I've been over the
code again and again and can't find the problem.
Comment 2 nicolas.girard 2003-06-29 22:59:03 UTC
OK, I followed your advice and ran:
valgrind --leak-check=yes --leak-resolution=high --num-callers=12 
--logfile=pan /usr/bin/pan 
 
Here are the actions I performed:
- select another news server in the server menu 
- load a binaries newsgroup 
- filter all messages whose subject matches a certain string 
- read article A 
- download & save binary attachments for articles A & B 
- read article C 
- save binary attachment for article C 
 
Additional info:
- article A: 38 lines 
- article B: 208338 lines 
- article C: 292 lines 
 
I'm about to attach valgrind's log file; I hope this is enough for you to
track this defect; otherwise I'll do some more tests.
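Once such a log exists, the leak summary can be pulled out of it directly; a minimal sketch, assuming the log was written to a file named `pan.log` (note that current valgrind releases spell the option `--log-file` rather than `--logfile`, and `--leak-check=full` replaces `--leak-check=yes`; `leak_summary` is a hypothetical helper):

```shell
# Extract only the leak-summary lines valgrind prints at exit.
# leak_summary is a hypothetical helper; "pan.log" is an assumed log name.
leak_summary() {
  grep -E "definitely lost|indirectly lost|possibly lost|still reachable" "$1"
}

# Usage sketch (assumption: valgrind >= 3.x option spellings):
#   valgrind --leak-check=full --num-callers=12 --log-file=pan.log /usr/bin/pan
#   leak_summary pan.log
```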
 
Comment 3 nicolas.girard 2003-06-30 05:54:18 UTC
Created attachment 17912 [details]
Valgrind logfile #1
Comment 4 Charles Kerr 2004-11-09 20:10:27 UTC
Using top and 0.14.2.91, I can't reproduce this.  I followed your steps:

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
Started pan:
  342 charles   16   0  7776 7776  6032 S     0.0  0.1   0:00   0 pan
Loaded a group with 35,000 articles:
  342 charles   15   0 26736  26M  6308 S     0.1  0.6   0:00   1 pan
Filter on subject:
  342 charles   15   0 26808  26M  6340 S     0.8  0.6   0:00   1 pan
Read a small article "a":
  342 charles   15   0 31660  30M  6772 S     1.4  0.7   0:01   0 pan
Saved articles "a" (small) and "b" (multipart mp3), note no jump:
  342 charles   15   0 31952  31M  6788 S     0.1  0.7   0:02   1 pan
read article "c" (small):
  342 charles   15   0 31952  31M  6788 S     0.0  0.7   0:02   1 pan
Deleted articles "a", "b", and "c":
  342 charles   15   0 31952  31M  6788 S     0.1  0.7   0:02   1 pan

The only jump I see is on reading article "a" -- there's no 10M jump during the
decode of the multipart mp3...

I guess the next step, assuming you're still at this email address after a year
of my ignoring bug reports, is for you to try to reproduce this with the latest
beta version of Pan.
Comment 5 nicolas.girard 2004-11-09 22:11:11 UTC
Hey, Charles, it's a pleasure to hear from you again!

You're absolutely right, 0.14.2.91 behaves very well, and you can for sure
close this defect!

Cheers,
Nicolas