Bug 563382 - Online radio connection remains open after leaving rhythmbox
Status: RESOLVED FIXED
Product: gvfs
Classification: Core
Component: http backend
Version: unspecified
Hardware: Other
OS: All
Priority: Normal
Severity: normal
Target Milestone: ---
Assigned To: gvfs-maint
QA Contact: gvfs-maint
Duplicates: 564842
Depends on:
Blocks:
Reported: 2008-12-05 20:49 UTC by Ilafaf
Modified: 2011-05-22 13:17 UTC
See Also:
GNOME target: ---
GNOME version: 2.23/2.24


Attachments
Attachment 128681: Bug 563382 - Define PATH_MAX if not available (804 bytes, patch)
2009-02-13 20:11 UTC, Colin Walters

Description Ilafaf 2008-12-05 20:49:41 UTC
Please describe the problem:
While playing online radio streams, I noticed that gvfs-http had opened some sockets
to the radio websites, just as rhythmbox had. When I stopped playback and closed rhythmbox,
rhythmbox's sockets were closed but gvfs-http's were not: they stayed in CLOSE_WAIT until I killed the process three hours later.
Is there a way to close all used sockets on exit (including the gvfs-http ones)?
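
For context, a TCP socket sits in CLOSE_WAIT when the remote end has sent a FIN (closed its half of the connection) but the local process has not yet called close() on the descriptor; the kernel then keeps the socket around for as long as the process lives. The following is a minimal, self-contained C sketch of how that state arises (illustrative only, not gvfs code; a loopback listener stands in for the radio server, and error handling is omitted):

/* Demonstrates a socket ending up in CLOSE_WAIT: the peer closes its end,
 * recv() returns 0, and the local process never calls close(). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    socklen_t len = sizeof(addr);
    char buf[64];

    /* A loopback listener stands in for the radio server. */
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                     /* pick any free port */
    bind(srv, (struct sockaddr *) &addr, sizeof(addr));
    listen(srv, 1);
    getsockname(srv, (struct sockaddr *) &addr, &len);

    /* The client (think gvfsd-http) connects to it. */
    int cli = socket(AF_INET, SOCK_STREAM, 0);
    connect(cli, (struct sockaddr *) &addr, sizeof(addr));
    int conn = accept(srv, NULL, NULL);

    close(conn);                           /* server sends a FIN */

    /* recv() returning 0 signals EOF.  Because we never close(cli),
     * the kernel parks the socket in CLOSE_WAIT for as long as this
     * process lives. */
    ssize_t n = recv(cli, buf, sizeof(buf), 0);
    printf("recv returned %zd; the socket is now in CLOSE_WAIT\n", n);

    pause();   /* keep the process alive so netstat can show the state */
    return 0;
}

While the program sits in pause(), running netstat -tn | grep CLOSE_WAIT in another terminal shows the stuck socket, which is exactly the symptom described above.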

Steps to reproduce:
1. Launch rhythmbox
2. Start playing an online radio stream
3. Stop playback and quit rhythmbox

Actual results:
After checking the sockets with netstat:
rhythmbox's sockets were closed
gvfs-http's sockets remained in CLOSE_WAIT until shutdown

Expected results:
Both rhythmbox's and gvfs-http's sockets should be closed.

Does this happen every time?
Yes, every time I play an online radio stream.

Other information:
Running Ubuntu 8.10
kernel 2.6.27-9-generic
GNOME 2.24.1
Rhythmbox 0.11.6
=====================================
Dependencies http://launchpadlibrarian.net/19940103/Dependencies.txt
ProcMaps.txt http://launchpadlibrarian.net/19940104/ProcMaps.txt
ProcStatus.txt  http://launchpadlibrarian.net/19940105/ProcStatus.txt
Comment 1 Ilafaf 2008-12-05 20:57:45 UTC
Architecture: i386
DistroRelease: Ubuntu 8.10
ExecutablePath: /usr/bin/rhythmbox
Package: rhythmbox 0.11.6svn20081008-0ubuntu4.2
ProcEnviron:
 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
 LANG=en_CA.UTF-8
 SHELL=/bin/bash
SourcePackage: rhythmbox
Uname: Linux 2.6.27-7-generic i686
Comment 2 Dan Winship 2008-12-10 17:56:46 UTC
There's a libsoup bug here (the fact that it leaves the connections in CLOSE_WAIT), but is there also a gvfs-http bug (the fact that it just stays running forever)?
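
One plausible shape of a libsoup-side fix, as a hedged sketch rather than libsoup's actual code: before reusing an idle keep-alive connection from its pool, a client can check whether the server already closed its end, and release the descriptor instead of letting it linger in CLOSE_WAIT.

/* Hedged sketch of idle-connection reaping (idle_conn_still_usable is a
 * hypothetical helper, not a libsoup function). */
#include <poll.h>
#include <stdbool.h>
#include <unistd.h>

static bool idle_conn_still_usable(int fd)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };

    /* Zero timeout: only ask whether something is readable right now. */
    if (poll(&pfd, 1, 0) <= 0)
        return true;               /* quiet socket, safe to reuse */

    /* Readable while idle means either EOF (the server's FIN, which is
     * what leaves a socket in CLOSE_WAIT) or stray bytes; both make the
     * connection unusable, so close it and free the descriptor. */
    close(fd);
    return false;
}

A connection pool would run this check on each cached connection before handing it out, which frees dead descriptors instead of letting them accumulate toward the per-process limit.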
Comment 3 Tom Hughes 2009-01-26 11:03:01 UTC
The same thing happens when rhythmbox is downloading podcasts: eventually the gvfs-http process hits the descriptor limit and everything stops working.
Comment 4 A. Walton 2009-01-26 11:05:08 UTC
*** Bug 564842 has been marked as a duplicate of this bug. ***
Comment 5 Colin Walters 2009-02-13 20:11:08 UTC
Created attachment 128681 [details] [review]
Bug 563382 - Define PATH_MAX if not available

This fixes the build on Hurd.  If anyone ever actually uses Hurd with
filenames longer than 4096, they can open a new bug.
Comment 6 Colin Walters 2009-02-18 19:08:15 UTC
Comment on attachment 128681 [details] [review]
Bug 563382 - Define PATH_MAX if not available

Wrong bug...
Comment 7 Christian Kellner 2009-10-08 22:38:47 UTC
Comment on attachment 128681 [details] [review]
Bug 563382 - Define PATH_MAX if not available

Changing status to remove it from the unreviewed-patches list.
Comment 8 Matt Rose 2009-11-01 16:35:53 UTC
I'm hitting the same problem on Fedora 11.

I'm using rhythmbox as well, and the only thing I think it's doing is calling Amazon to get the album art.

gvfsd-htt  2508 mattrose   17u  IPv4 280999      0t0  TCP 192.168.1.196:43327->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   18u  IPv4 281001      0t0  TCP 192.168.1.196:43328->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   19u  IPv4  18910      0t0  TCP 192.168.1.196:47652->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   20u  IPv4  69911      0t0  TCP 192.168.1.196:44821->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   21u  IPv4  69909      0t0  TCP 192.168.1.196:44820->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   22u  IPv4  70067      0t0  TCP 192.168.1.196:36167->192.221.96.126:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   23u  IPv4 169020      0t0  TCP 192.168.1.196:46583->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   24u  IPv4 171107      0t0  TCP 192.168.1.196:44107->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   25u  IPv4 171105      0t0  TCP 192.168.1.196:44106->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   26u  IPv4 170719      0t0  TCP 192.168.1.196:52677->192.221.96.126:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   27u  IPv4 511136      0t0  TCP 192.168.1.196:35617->ecs.amazonaws.com:http (ESTABLISHED)
gvfsd-htt  2508 mattrose   28u  IPv4 511138      0t0  TCP 192.168.1.196:35618->ecs.amazonaws.com:http (ESTABLISHED)
gvfsd-htt  2508 mattrose   29u  IPv4 460618      0t0  TCP 192.168.1.196:37597->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   30u  IPv4 460620      0t0  TCP 192.168.1.196:37598->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   31u  IPv4 239247      0t0  TCP 192.168.1.196:60838->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   32u  IPv4 239249      0t0  TCP 192.168.1.196:60839->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   33u  IPv4 239284      0t0  TCP 192.168.1.196:33831->192.221.96.126:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   34u  IPv4 452199      0t0  TCP 192.168.1.196:41881->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   35u  IPv4 452201      0t0  TCP 192.168.1.196:41882->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   36u  IPv4 376888      0t0  TCP 192.168.1.196:41664->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   37u  IPv4 314330      0t0  TCP 192.168.1.196:50792->ecs.amazonaws.com:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   38u  IPv4 314369      0t0  TCP 192.168.1.196:34641->192.221.96.126:http (CLOSE_WAIT)
gvfsd-htt  2508 mattrose   39u  IPv4 316247      0t0  TCP 192.168.1.196:42581->ecs.amazonaws.com:http (CLOSE_WAIT)
Comment 9 Matt Rose 2009-11-01 16:52:05 UTC
Sorry, a few more details.

[root@redman ~]# rpm -q libsoup rhythmbox gvfs
libsoup-2.26.3-1.fc11.x86_64
rhythmbox-0.12.3-1.fc11.x86_64
gvfs-1.2.3-12.fc11.x86_64

Sorry, I'm not familiar with the internals of these programs (I'm more on the server end of things myself), so I'm not sure where to start digging, but I would bet that gvfs is not shutting down connections when it gets a FIN packet from the other end. I'm not sure why this isn't more widely reported, as it seems like a huge bug to me.
Comment 10 Matt Rose 2009-11-01 17:02:34 UTC
A few more details.

I just looked over the close code in gvfs and it seems fine. Could the problem be that rhythmbox doesn't call it correctly?
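
To illustrate what calling it correctly would look like on the client side, here is a hedged GIO sketch (read_and_close is a hypothetical helper, not rhythmbox code): whoever opens a gvfs-backed URI needs to close the stream again, which is what lets the backend daemon release its end of the HTTP connection.

#include <gio/gio.h>

static void read_and_close(const char *uri)
{
    GFile *file = g_file_new_for_uri(uri);
    GError *error = NULL;
    GFileInputStream *in = g_file_read(file, NULL, &error);

    if (in != NULL) {
        /* ... read the stream contents here ... */

        /* Without this close, the backend keeps the connection open. */
        g_input_stream_close(G_INPUT_STREAM(in), NULL, NULL);
        g_object_unref(in);
    } else {
        g_warning("open failed: %s", error->message);
        g_error_free(error);
    }
    g_object_unref(file);
}

int main(void)
{
    read_and_close("http://example.com/");   /* placeholder URI */
    return 0;
}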
Comment 11 Tom Hughes 2009-11-02 08:36:25 UTC
It's not just cover art; it's podcasts as well. I have to restart gvfs once or twice a week when it hits the descriptor limit because of all the channels rhythmbox still has open.

If it isn't a bug in gvfs then we should presumably transfer this to rhythmbox and ask them to look at it...
Comment 12 Dan Winship 2009-11-02 14:24:44 UTC
The fd leak is fixed in libsoup 2.28. It will still sometimes leave connections in CLOSE_WAIT for a while, but only until the next time it opens a new connection (to anywhere). So if the fact that gvfsd-http never exits is not considered a bug, then this is FIXED.
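
The question of whether gvfsd-http should eventually exit is picked up in the next comment as bug 509609. As a hedged sketch of the usual idle-exit pattern (illustrative only, not gvfs's actual code; IDLE_TIMEOUT_SECS and active_requests are hypothetical names): the daemon arms a recurring timer and quits its main loop once no requests have been active for a while, at which point process exit closes every remaining descriptor, CLOSE_WAIT or not.

#include <glib.h>

#define IDLE_TIMEOUT_SECS 30    /* hypothetical value */

static guint active_requests;   /* bumped up/down around each request */

static gboolean exit_if_idle(gpointer data)
{
    if (active_requests == 0)
        g_main_loop_quit((GMainLoop *) data);
    return G_SOURCE_CONTINUE;   /* keep checking otherwise */
}

int main(void)
{
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);

    g_timeout_add_seconds(IDLE_TIMEOUT_SECS, exit_if_idle, loop);
    g_main_loop_run(loop);      /* returns once exit_if_idle fires */

    g_main_loop_unref(loop);
    return 0;                   /* process exit closes all descriptors */
}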
Comment 13 Christian Kellner 2011-05-22 13:17:04 UTC
(In reply to comment #12)
> the fd leak is fixed in libsoup 2.28. it will still sometimes leave connections
> in CLOSE_WAIT for a while, but only until the next time it opens a new
> connection (to anywhere). So if the fact that gvfsd-http never exits is not
> considered a bug, then this is FIXED.
It is a bug, but a different one from this, namely bug 509609. I will therefore close this bug. Thanks, everybody.