Bug 574444 - receiving fd with wrong id on sync open
Status: RESOLVED FIXED
Product: gvfs
Classification: Core
Component: client module
Version: unspecified
OS/Hardware: All / Other
Priority/Severity: High / critical
Target Milestone: ---
Assigned To: gvfs-maint
QA Contact: gvfs-maint
Depends on:
Blocks:
 
 
Reported: 2009-03-07 04:40 UTC by mattmcadoo
Modified: 2013-01-02 16:12 UTC
See Also:
GNOME target: ---
GNOME version: 2.23/2.24



Description mattmcadoo 2009-03-07 04:40:04 UTC
Version: 0.11.6

What were you doing when the application crashed?



Distribution: Gentoo Base System release 2.0.0
Gnome Release: 2.24.3 2009-01-26 (Gentoo)
BugBuddy Version: 2.24.2

System: Linux 2.6.26-hardened-r1 #2 SMP PREEMPT Thu Sep 4 11:18:04 CDT 2008 i686
X Vendor: The X.Org Foundation
X Vendor Release: 10503000
Selinux: Enforcing
Accessibility: Disabled
GTK+ Theme: Unity
Icon Theme: gnome-alternative

Memory status: size: 196124672 vsize: 196124672 resident: 87867392 share: 43151360 rss: 87867392 rss_rlim: 18446744073709551615
CPU usage: start_time: 1235891696 rtime: 5720 utime: 4812 stime: 908 cutime:0 cstime: 3 timeout: 0 it_real_value: 0 frequency: 100

Backtrace was generated from '/usr/bin/rhythmbox'

[Thread debugging using libthread_db enabled]
[New Thread 0xb65df720 (LWP 10386)]
[New Thread 0xaca1bb90 (LWP 18064)]
[New Thread 0xb27feb90 (LWP 18063)]
[New Thread 0xb2fffb90 (LWP 18062)]
[New Thread 0xb17fcb90 (LWP 18061)]
[New Thread 0xafff9b90 (LWP 18060)]
[New Thread 0xb07fab90 (LWP 18059)]
[New Thread 0xb0ffbb90 (LWP 18058)]
[New Thread 0xb1ffdb90 (LWP 15976)]
[New Thread 0xb40d0b90 (LWP 10391)]
0xb7fba422 in __kernel_vsyscall ()

Thread 9 (Thread 0xb1ffdb90 (LWP 15976))

  • #0 __kernel_vsyscall
  • #1 waitpid
    from /lib/libpthread.so.0
  • #2 IA__g_spawn_sync
    at gspawn.c line 382
  • #3 IA__g_spawn_command_line_sync
    at gspawn.c line 694
  • #4 run_bug_buddy
    at gnome-breakpad.cc line 223
  • #5 check_if_gdb
    at gnome-breakpad.cc line 292
  • #6 bugbuddy_segv_handle
    at gnome-breakpad.cc line 84
  • #7 <signal handler called>
  • #8 __kernel_vsyscall
  • #9 *__GI_raise
    at ../nptl/sysdeps/unix/sysv/linux/raise.c line 64
  • #10 *__GI_abort
    at abort.c line 88
  • #11 IA__g_assertion_message
  • #12 IA__g_assertion_message_expr
    at gtestutils.c line 1312
  • #13 _g_dbus_connection_get_fd_sync
    at gvfsdaemondbus.c line 324
  • #14 g_daemon_file_read
    at gdaemonfile.c line 1002
  • #15 IA__g_file_read
    at gfile.c line 1430
  • #16 IA__g_file_load_contents
    at gfile.c line 5142
  • #17 totem_pl_parser_add_rss
    at totem-pl-parser-podcast.c line 242
  • #18 totem_pl_parser_parse_internal
    at totem-pl-parser.c line 1534
  • #19 totem_pl_parser_parse_with_base
    at totem-pl-parser.c line 1633
  • #20 totem_pl_parser_parse
    at totem-pl-parser.c line 1657
  • #21 rb_podcast_parse_load_feed
    at rb-podcast-parse.c line 196
  • #22 rb_podcast_manager_thread_parse_feed
    at rb-podcast-manager.c line 972
  • #23 g_thread_create_proxy
    at gthread.c line 635
  • #24 start_thread
    at pthread_create.c line 297
  • #25 clone
    at ../sysdeps/unix/sysv/linux/i386/clone.S line 130

Comment 1 Jonathan Matthew 2009-03-07 05:23:21 UTC
The crash occurred inside gvfs, during a straightforward g_file_load_contents call on http://leoville.tv/podcasts/floss.xml (frame #16 in the backtrace).
Comment 2 Alexander Larsson 2009-03-09 09:49:08 UTC
There are lots of parallel sync gvfs actions, but each thread is using its own private thread-local connection. Yet when we open a file, which is a three-step operation:

1) dbus send open message
2) receive dbus reply, which on success carries a counter giving the number of the fd to receive
3) on success, retrieve the new fd and verify that it's the right one

i.e. the third successful open will say fd_id == 3, and we verify that it's the third call to receive_fd on that connection.

However, this assert fails, so we either missed receiving an fd or received one too many.

I don't see how this could happen, though. The connection is per-thread, so there are no races, and there are no exits between the successful dbus reply and receiving the fd (steps 2-3 above).

Can you reproduce this? It would be interesting to see data->extra_fd_count
at:

  • #13 _g_dbus_connection_get_fd_sync
    at gvfsdaemondbus.c line 324

Comment 3 Marius Vollmer 2009-09-17 14:38:26 UTC
(In reply to comment #2)
> I don't see how this could happen though. 

It has happened to us in Maemo 5, too, now.

My theory is that reply reordering can actually happen when _g_vfs_daemon_call_sync uses
dbus_connection_send_with_reply_and_block.

The other branch in _g_vfs_daemon_call_sync uses 
dbus_connection_dispatch which will make sure that any outstanding fds are read from the socket and that extra_fd_count is up-to-date when _g_vfs_daemon_call_sync returns.

I'll try to concoct a fix for this.
Comment 4 Milan Crha 2011-06-22 06:15:41 UTC
Similar downstream bug report from evolution 2.32.2:
https://bugzilla.redhat.com/show_bug.cgi?id=714967
Comment 5 Pedro Silva 2011-06-28 09:53:19 UTC
This issue seems to happen when I mount a remote ssh folder on my local LAN and then connect to a VPN.

This bug is happening as I write this, I cannot umount the ssh folder and every click on Evolution's "New Message", "Reply" or "Reply to All" makes it hang for 1 or 2 minutes before the new window appears. 

If I turn off the VPN, I still can't umount the ssh folder.

Thank you for your time for reviewing this ticket.
Comment 6 Pedro Silva 2011-06-28 10:03:10 UTC
I think the bug report https://bugzilla.redhat.com/show_bug.cgi?id=716946 is also related to this issue, like the one in Milan Crha's comment above.
Comment 7 Tomas Bzatek 2012-08-31 13:23:36 UTC
Can somebody please retest this with a recent gvfs release (post-GDBus port, i.e. 1.13.4 and later)? A lot has changed with respect to opening files and transferring FDs, and GDBus is generally more precise about message targeting.
Comment 8 Felix Möller 2012-12-22 08:40:59 UTC
As there are no new comments, this seems to be fixed. Could it be closed?
Comment 9 Tomas Bzatek 2013-01-02 16:12:04 UTC
Closing as per comment 8; considering it fixed by the GDBus port. Feel free to reopen (and add proper reproduction steps) if you see the issue again.