GNOME Bugzilla – Bug 120554
Pan going offline automatically
Last modified: 2006-06-27 22:33:09 UTC
(Actually observed when using 0.14.0.96, which isn't listed.) When articles from one server are unavailable, Pan goes offline, preventing access to other, fully available servers until the user brings Pan back online manually. The messages in the status log may look roughly like this:

01 GLib - file giochannel.c: line 980 (g_io_channel_set_buffered): assertion `!channel->read_buf || channel->read_buf->len == 0' failed
02 GLib - file giochannel.c: line 980 (g_io_channel_set_buffered): assertion `!channel->read_buf || channel->read_buf->len == 0' failed
03 Getting article "XYZ" body failed: 400 news.astraweb.com: Session byte limit reached.
04 News server connection count: 0
05 New connection 0x1234567 for news.astraweb.com, port 119
06 Handshake: 200 news.astraweb.com NNRP Service Ready (posting ok).
07 Astraweb handshake failed: 502 news.astraweb.com: Access Denied - Too much downloaded, wrong password or inactive account
08 pan - Astraweb handshake failed: 502 news.astraweb.com: Access Denied - Too much downloaded, wrong password or inactive account
09 Astraweb handshake failed: 502 news.astraweb.com: Access Denied - Too much downloaded, wrong password or inactive account
10 Loaded 34 articles for group "a.b.c" in 0.0 seconds (1380 art/sec)
11 Scored 34 entries in 0.0 seconds (34000 articles/sec)
12 Saved 10901 articles in "x.y.z" in 0.3 seconds (32412 art/sec)
13 Saved 9 groups in "Astraweb" in 0.0 seconds (271 groups/sec)

I've replaced the dates with line numbers. "XYZ"/"x.y.z" denote articles/groups on the first server (Astraweb), while "a.b.c" denotes a group on the second server (Tiscali). Lines 01-03 and 07-09 are flagged as errors (stop signs). Pan goes offline during or after lines 07-09, which occur virtually simultaneously (identical time stamps). Lines 10-11 indicate the first attempt to access a group on the second server.
In the task manager, I get a corresponding entry:

Offline | | Tiscali | | Getting new headers for "a.b.c"

All of the other lines refer to the first server, which is *supposed* to deny access at this point. (I was using Astraweb's free service, with a daily limit of 50 MB or so.) Neither the status log nor the task manager shows any of the failed attempts at reading *articles* from the second server. Attempting to read more groups from the second server results in more lines like 10-11 in the status log, and more "Offline ... Getting new headers for ..." entries in the task manager. The option "Automatically remove failed tasks..." was unchecked, and the queued tasks for the second server completed when Pan was brought back online. This may be related to bug 117496, but it's probably not the same issue, as this one is about one server affecting other servers.
Offline/online is a global state, i.e. it's shared across all servers. Although it would be possible to make it a per-server state, I don't know whether that's the desired behavior: e.g. if a user chooses to go offline but has active downloads for a non-selected server, does it make sense not to halt those downloads?
Very old versions of Pan had a dialog with separate on/off switches for each server, accessed by clicking on the "number of connections" box at the bottom of the window. I don't know why this went away, but I thought it was very useful. Maybe it should be brought back, and the error handling adjusted to stop only the affected server?
*** Bug 123981 has been marked as a duplicate of this bug. ***
*** Bug 128778 has been marked as a duplicate of this bug. ***
Bug #128778 contains at least one valid reason for making online/offline a per-server state: when I am downloading article attachments from two Usenet servers at the same time and one hits its download limit, Pan goes offline, even though the expected behavior would be to continue the queued downloads from the other server. Those downloads cannot continue, because an error with one news server takes all of Pan offline.
*** Bug 133169 has been marked as a duplicate of this bug. ***
*** Bug 145161 has been marked as a duplicate of this bug. ***
Appears to work correctly in the rewrite: traffic is just routed to the next server.