Bug 353317 - v0.109 File xfer stops while saving attachment
Status: RESOLVED DUPLICATE of bug 420618
Product: Pan
Classification: Other
Component: general
Version: pre-1.0 betas
OS: Other Linux
Importance: Normal enhancement
Target Milestone: 1.1
Assigned To: Charles Kerr
QA Contact: Pan QA Team
Duplicates: 363058 (view as bug list)
Depends on:
Blocks:
 
 
Reported: 2006-08-29 00:32 UTC by buckyball
Modified: 2007-03-21 20:25 UTC
See Also:
GNOME target: ---
GNOME version: ---



Description buckyball 2006-08-29 00:32:49 UTC
Not sure if this is a "bug" or a "feature"...

I have noticed that when doing downloads on a really fast connection (>2500 KB/s), the transfer process stops completely each time a file is being decoded/saved. This results in a significant drop in overall throughput when doing downloads of large, multi-part binaries.

Is there any way that the decode/save-to-disk of the "current" file can run in a separate thread from the download of the "next" item in the queue?
Comment 1 Darren Albers 2006-08-29 12:33:05 UTC
I see this as well. Last night I was testing Pan in this manner, and I can see where this would cause a problem.  I suspect this is also the same issue that Duncan is seeing that slows down his downloads.
Comment 2 Jeff Berman 2006-08-30 18:52:21 UTC
If we're talking about the same issue, then it's not just the transfer process that pauses during the decoding of binaries, it's the whole UI.  I agree, it would be terrific to decode binaries in a different thread, but wasn't there a reason why Pan moved away from threads?
Comment 3 Robey Holderith 2006-08-31 15:42:01 UTC
I have been noticing this for a while now.  It's very easy to reproduce if you are saving to a network share over a slow link (wireless).  The downloads go to the cache, then the decode/transfer to the networked drive pauses the UI/downloads until the attachment is in place.  It's more of an annoyance than anything else... but waiting ~30 seconds for the UI to respond is quite an annoyance.
Comment 4 Charles Kerr 2006-09-07 00:01:57 UTC
I agree with comment #2 that the decoding could be handed off to another thread.

uulib isn't threadsafe, so we could still only have one decode at a time.
(This is probably a good idea anyway, to avoid disk thrashing.)
We also want to skip the overhead of creating a new thread each time we decode,
so a GLib thread pool containing a single "decode" worker thread would be
the safest bet.

The worker thread's footprint should be as minimal as possible --
all the Log messages would need to be removed from that code,
and it wouldn't work with the article cache directly.
We'd invoke the worker thread with a struct holding everything
it needed (filenames, etc.) and, when it was done, it could send
notification back to the main thread via g_idle_add.

TaskArticle would have to leave its state as `running' while this
was going on.  The idle func would have to get word back to the
TaskArticle somehow; we'd need a way to do that without a Task pointer,
because the task could be destroyed in the interim, leaving
a dangling pointer.
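
For concreteness, here is a minimal, hypothetical sketch of that design -- a GThreadPool with a single worker that posts its result back to the main thread via g_idle_add.  The names (DecodeJob, decode_worker, on_decode_done) and the file path are made up for illustration; this is not Pan's actual code.

// decode_pool_sketch.cc -- hypothetical sketch, not Pan source
#include <glib.h>
#include <string>
#include <vector>

struct DecodeJob  // everything the worker needs, owned by the job
{
  std::vector<std::string> part_filenames; // cached article parts for uulib
  std::string save_path;                   // where the decoded file should land
  bool ok = false;
};

// Fired on the main thread via g_idle_add; the safe place to touch the UI,
// write Log messages, and look the owning TaskArticle up by id (not pointer).
static gboolean on_decode_done (gpointer data)
{
  DecodeJob * job = static_cast<DecodeJob*>(data);
  // ... find the TaskArticle by id and update its state ...
  delete job;
  return G_SOURCE_REMOVE;
}

// Runs in the pool's single worker thread: no UI, no Log, no article cache.
static void decode_worker (gpointer data, gpointer /*user_data*/)
{
  DecodeJob * job = static_cast<DecodeJob*>(data);
  // ... feed job->part_filenames to uulib and write to job->save_path ...
  job->ok = true;
  g_idle_add (on_decode_done, job); // hand the result back to the main thread
}

int main ()
{
  // One exclusive thread keeps the non-threadsafe uulib calls serialized
  // and avoids the overhead of spawning a new thread per decode.
  GThreadPool * pool = g_thread_pool_new (decode_worker, NULL, 1, TRUE, NULL);

  DecodeJob * job = new DecodeJob ();
  job->save_path = "/tmp/example.bin";
  g_thread_pool_push (pool, job, NULL);

  g_thread_pool_free (pool, FALSE, TRUE); // wait for queued decodes to finish

  // In Pan the GTK main loop is already running; here we just drain the
  // default context so the idle callback gets dispatched.
  while (g_main_context_pending (NULL))
    g_main_context_iteration (NULL, FALSE);
  return 0;
}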
Comment 5 Roger C. Pao 2006-09-15 11:19:52 UTC
I believe the frozen UI also occurs when a really large group is being filtered.  Check or uncheck the Match Only Unread Articles button on a really large group like alt.binaries.movies.divx, then minimize and maximize Pan.  While it is processing the filter, Pan's GUI will not update.  There is not even a progress bar or hourglass mouse cursor to indicate it's not locked up.
Comment 6 Charles Kerr 2006-10-18 18:19:47 UTC
*** Bug 363058 has been marked as a duplicate of this bug. ***
Comment 7 Charles Kerr 2006-10-21 23:09:33 UTC
*** Bug 364002 has been marked as a duplicate of this bug. ***
Comment 8 Charles Kerr 2007-03-21 20:25:15 UTC

*** This bug has been marked as a duplicate of 420618 ***