GNOME Bugzilla – Bug 793993
Eliminate gvfs-udisks2-volume-monitor and move GVfsUDisks2VolumeMonitor in-process
Last modified: 2018-09-21 18:18:28 UTC
(There might be a good reason that the gvfs-udisks2-volume-monitor process exists that I don’t know of. I’d appreciate an opinion from someone who knows the history of this service before anyone does anything with this bug.)

Currently, the gvfs-udisks2-volume-monitor process exists (on the session bus) as a proxy between the udisks2 daemon (on the system bus) and user processes. As far as I can tell, it just instantiates a GVfsUDisks2VolumeMonitor and exposes it on the bus in a fairly straightforward way. It doesn’t seem to do anything clever.

This introduces an extra D-Bus hop for everything to do with udisks volume monitoring:

  my process → GVolumeMonitor → session bus → gvfs-udisks2-volume-monitor → system bus → udisksd

rather than:

  my process → GVolumeMonitor → system bus → udisksd

It also prevents processes which use GIO, but which don’t have access to the session bus (i.e. system services), from using udisks for their volume monitoring. They get stuck with GUnixVolumeMonitor, which has less functionality. Given that a system service should normally be able to access udisks (over the system bus), this is a bit awkward.

Could we drop the gvfs-udisks2-volume-monitor daemon and GProxyVolumeMonitorUDisks2 class, and make the GVfsUDisks2VolumeMonitor class available in the client library just like GProxyVolumeMonitorUDisks2 currently is, registered to the same GIO extension point?
It was written by David Zeuthen, same as udisks, but I am afraid he is unreachable. However, please ping Alex, he can answer some overall architecture questions...

It is not only a proxy for the udisks2 daemon; it also merges its results with GUnixMounts and GUnixMountPoints (because, unfortunately, not all interesting mounts are propagated by UDisks). It is handy that only one process handles all the events! Be aware that e.g. usage of autofs in corporate systems can cause hundreds/thousands of mount-changed signals in a short time frame, which causes high CPU load. Those autofs mounts are usually uninteresting, but their changes still have to be processed. Currently, only a few applications are affected by this, namely those which use GUnixMounts directly. However, if the monitor is in-process, many more applications will be affected by this issue... so this seems like a step back.

Also, the daemon-based architecture was probably chosen for robustness: a potential crash of the volume monitor will not cause a crash of the client app.

Just a note that GVfs as a whole is here to provide user-space features, and almost everything requires the session bus...
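[Editorial note: the event-storm concern above can be sketched with a small coalescer. A single daemon process can absorb a burst of raw mount-changed events and emit one consolidated signal, whereas with an in-process monitor every client would have to process the whole burst itself. This is an illustrative sketch only, assuming a simple time-window strategy; it is not the actual gvfs implementation, and all names are hypothetical.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical illustration (not gvfs code): a coalescer that absorbs
 * bursts of mount-changed events.  Events arriving within `window`
 * time units of the last flush are merged into one pending
 * notification, so a storm of N raw events results in a single
 * client-visible signal instead of N. */
typedef struct {
    double window;        /* coalescing window, in seconds */
    double last_flush;    /* time of the last emitted signal */
    int    pending;       /* raw events absorbed since the last flush */
    int    emitted;       /* consolidated signals actually sent */
} Coalescer;

/* Feed one raw mount-changed event observed at time `now`; returns
 * true if a consolidated signal should be emitted to clients. */
static bool coalescer_feed (Coalescer *c, double now)
{
    c->pending++;
    if (now - c->last_flush >= c->window) {
        c->last_flush = now;
        c->pending = 0;
        c->emitted++;
        return true;
    }
    return false;
}
```

Under this scheme a storm of 1000 raw events within 0.1 s produces a single client-visible signal; moving the same work into every client would multiply the wakeup cost by the number of clients.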
(In reply to Ondrej Holy from comment #1)
> It was written by David Zeuthen, same as udisks, but I am afraid of he is
> unreachable. However, please ping Alex, he can answer some overall
> architecture questions...

Alex, any thoughts?

> It is not only the proxy between udisks2 daemon, but it also merges results
> with GUnixMounts and GUnixMountPoints (because not all interesting mounts
> are propagated by UDisks unfortunately). It is handy that only one process
> handles all the events! Be aware that e.g. usage of autofs in corporate
> systems can cause hundreds/thousands of mount changed signals in short time
> frame, which causes high CPU load. Those autofs mounts are usually
> uninteresting, but its changes have to be processed. Currently, by this is
> affected only a few applications, which uses GUnixMounts directly. However,
> if the monitor will be in-process, much more applications will be affected
> by this issue... so this seems like a step back.

Does gvfs-udisks2-volume-monitor currently squash those autofs events so they don’t get propagated to its clients?

> Also, the daemon-based architecture was probably chosen for robustness. A
> potential crash of volume monitor will not cause a crash of client app.

You could make that argument about any bit of code, though, and then you end up with a million processes.

> Just a note that GVfs at all is here to provide user-space features and
> almost everything requires session bus...

The current architecture here explicitly prevents system services from using the udisks2 monitor, even though they should be able to access udisks2.
(In reply to Philip Withnall from comment #2)
> (In reply to Ondrej Holy from comment #1)
> > ...
> > It is not only the proxy between udisks2 daemon, but it also merges results
> > with GUnixMounts and GUnixMountPoints (because not all interesting mounts
> > are propagated by UDisks unfortunately). It is handy that only one process
> > handles all the events! Be aware that e.g. usage of autofs in corporate
> > systems can cause hundreds/thousands of mount changed signals in short time
> > frame, which causes high CPU load. Those autofs mounts are usually
> > uninteresting, but its changes have to be processed. Currently, by this is
> > affected only a few applications, which uses GUnixMounts directly. However,
> > if the monitor will be in-process, much more applications will be affected
> > by this issue... so this seems like a step back.
>
> Does gvfs-udisks2-volume-monitor currently squash those autofs events so
> they don’t get propagated to its clients?

autofs mounts are usually ignored by our heuristics (https://git.gnome.org/browse/gvfs/tree/monitor/udisks2/gvfsudisks2volumemonitor.c#n607) and thus their events are not propagated to the clients...
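[Editorial note: for context on the heuristic mentioned above, here is a loose, self-contained sketch of the kind of filtering involved. The real logic lives in gvfsudisks2volumemonitor.c and is considerably more involved; the function name and the filesystem list here are illustrative assumptions, not the actual gvfs code.]

```c
#include <stdbool.h>
#include <string.h>

/* Illustrative sketch (not the actual gvfs heuristic): mounts backed
 * by pseudo- or kernel-internal filesystems, such as autofs, are
 * filtered out so that no change signal for them reaches clients. */
static bool mount_is_interesting (const char *fs_type)
{
    /* Filesystem types that clients should not see; illustrative list. */
    static const char *ignored[] = { "autofs", "proc", "sysfs", "cgroup" };

    for (size_t i = 0; i < sizeof (ignored) / sizeof (ignored[0]); i++)
        if (strcmp (fs_type, ignored[i]) == 0)
            return false;
    return true;
}
```

With the daemon in place this filtering happens once, in one process; with an in-process monitor, each client would run it for every raw event in a storm.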
-- GitLab Migration Automatic Message -- This bug has been migrated to GNOME's GitLab instance and has been closed from further activity. You can subscribe and participate further via the new bug on our GitLab instance: https://gitlab.gnome.org/GNOME/gvfs/issues/329.