Bug 330471 - files in bind mounts are counted
Status: RESOLVED OBSOLETE
Product: baobab
Classification: Core
Component: general
Version: git master
Hardware/OS: Other / All
Importance: Normal normal
Target Milestone: ---
Assigned To: Baobab Maintainers
QA Contact: Baobab Maintainers
Duplicates: 548642
Depends on:
Blocks:
Reported: 2006-02-08 21:41 UTC by Michael Hofmann
Modified: 2021-05-26 09:26 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments:
Fix multiple counting of bind mounts (1.25 KB, patch), 2008-04-19 02:40 UTC, Bill Nottingham, status: reviewed

Description Michael Hofmann 2006-02-08 21:41:55 UTC
Please describe the problem:
I have a couple of bind mounts set up for an i386 chroot. Baobab counts files
twice while analysing these directories. This gives

Total filesystem capacity: 391 GB (used 306.1 GB, available 84.9 GB)
/         618.6 GB   100%
/i386     306.1 GB  49.5%
/bulk     304.2 GB  49.2%
...


Steps to reproduce:
Set up a bind mount with something like
  mount --[r]bind olddir newdir


Actual results:
a lot of virtual space on my hard disks

Expected results:
a realistic amount of free space on my hard disks

Does this happen every time?


Other information:
Comment 1 Paolo Borelli 2006-04-27 13:04:09 UTC
I don't see an easy way to fix it... the use of bind mounts is transparent to the application. As far as I can see `du` behaves in the same way.

Leaving the bug open for now in case someone has other opinions or suggestions...
Comment 2 Michael Hofmann 2007-03-13 15:59:52 UTC
I'm not totally sure that wontfix is the answer to a problem where a solution is just hard, not impossible. Just for the record, http://lkml.org/lkml/2006/11/2/248 may give an idea on how to tackle bind mounts. Will patches still be accepted that correct this problem or does wontfix mean that this is the intended behaviour?
Comment 3 Paolo Borelli 2007-03-13 16:10:59 UTC
Patches would certainly be taken into consideration... let's keep this one open, it does not hurt.
Comment 4 Bill Nottingham 2008-04-17 21:29:07 UTC
Note that this now affects anyone using gvfs, as the FUSE mount is treated the same way.
Comment 5 Bill Nottingham 2008-04-19 02:40:30 UTC
Created attachment 109516 [details] [review]
Fix multiple counting of bind mounts

Actually, I think now that the issue with gvfs is more of a gvfs issue. Even so...

The attached correctly handles bind mounts by simply stat()ing each mount point as we calculate its size & free space - if we've already seen that device, we ignore it.
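The device-deduplication idea the patch describes can be sketched as follows. This is an illustrative sketch only, not the attached patch (baobab itself is not written in Python): stat() each mount point, remember its device ID, and skip any mount point whose device has already been counted.

```python
import os

def total_fs_usage(mount_points):
    """Sum capacity and usage across mount points, counting each
    underlying device only once so bind mounts are not double-counted."""
    seen_devices = set()
    capacity = used = 0
    for mp in mount_points:
        try:
            dev = os.stat(mp).st_dev  # device ID of the filesystem at mp
        except OSError:
            continue                  # unreadable mount point: skip it
        if dev in seen_devices:       # same device already counted,
            continue                  # e.g. a bind mount of it
        seen_devices.add(dev)
        vfs = os.statvfs(mp)
        capacity += vfs.f_blocks * vfs.f_frsize
        used += (vfs.f_blocks - vfs.f_bfree) * vfs.f_frsize
    return capacity, used
```

With this check, listing the same filesystem twice (as a bind mount would) contributes to the total only once.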
Comment 6 Paolo Borelli 2008-04-19 13:02:01 UTC
The patch, however, only fixes the "total" fs usage, not the actual scan, which would still recurse into bind mounts mounted in a subdirectory of the one being scanned.
Comment 7 Fabio Marzocca 2008-08-24 13:18:35 UTC
*** Bug 548642 has been marked as a duplicate of this bug. ***
Comment 8 David Ayers 2011-09-07 06:51:06 UTC
This issue also affects users of encfs. (I assume this is the same issue as .gvfs, so I'm not sure this is the correct bug report if it is bind-mount specific.)

Scanning an NFS-mounted NAS is also not very helpful. I suppose that is a different issue, too.

Should I open two bug reports, or could all these issues be resolved simply by remaining on a single file system?
Comment 9 Pierre Ossman 2016-12-19 08:11:49 UTC
Any progress on this? I notice it doesn't even ignore bind mounts that come from a different device than the one it is currently scanning. At least that case should be trivial to handle.
Comment 10 Emmanuele Bassi (:ebassi) 2016-12-19 13:29:33 UTC
(In reply to Pierre Ossman from comment #9)

> Any progress on this? I notice it doesn't even ignore bind mounts that come
> from a different device than the one it is currently scanning. At least that
> case should be trivial to handle.

It's not really "trivial", unless you want to start dealing with ad hoc solutions for each possible "bind-like" mount point, e.g. btrfs sub-volumes.

GVFS started depending on libmount, which means it should be able to deal with bind mounts — see bug 771438.
Comment 11 Pierre Ossman 2016-12-19 13:47:23 UTC
(In reply to Emmanuele Bassi (:ebassi) from comment #10)
> 
> It's not really "trivial", unless you want to start dealing with ad hoc
> solutions for each possible "bind-like" mount point, e.g. btrfs sub-volumes.
> 

Can't you just ignore every directory where the device number changes?
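The device-number check suggested here can be sketched as below (an illustrative Python sketch, roughly what `du -x` does, not baobab's code). Note the caveat from comment 10: bind mounts on the *same* device share a device number, so this only catches cross-device mounts.

```python
import os

def scan_size(root):
    """Sum apparent file sizes under root, never descending into a
    directory that sits on a different device (like `du -x`)."""
    root_dev = os.lstat(root).st_dev
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune subdirectories on another device: those are mount points.
        dirnames[:] = [d for d in dirnames
                       if os.lstat(os.path.join(dirpath, d)).st_dev == root_dev]
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass  # file vanished mid-scan
    return total
```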
Comment 12 Matt Sturgeon 2017-02-26 07:33:51 UTC
I have a number of bind mounts linking from a small SSD partition to a large HDD one.

baobab shows the combined 350GB of the SSD and HDD together... yet 85% (300GB) of that is actually bind mounted from the HDD. This makes it really hard to analyse the small SSD's usage.

--------

As for how to actually fix it, this probably isn't ideal, but it should be possible to get a list of all active mount points - since programs like findmnt manage it - and then simply exclude them.
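The mount-point enumeration suggested above can be sketched like this (a Linux-only illustration; the file path and field layout come from the kernel's mountinfo format, which is what findmnt reads, and a scanner would then simply skip every listed path other than its scan root):

```python
def list_mount_points(mountinfo="/proc/self/mountinfo"):
    """Return the set of mount points visible to this process, parsed
    from the same kernel file that findmnt reads (Linux-only)."""
    points = set()
    with open(mountinfo) as f:
        for line in f:
            fields = line.split()
            # Field 5 is the mount point; spaces in the path are
            # encoded as octal escapes such as \040.
            points.add(fields[4].encode().decode("unicode_escape"))
    return points
```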

--------

As a side note, from what I could gather the GVFS change referenced earlier (bug 771438) has been disabled in the jhbuild script, which I assume means it can't be used by gnome projects like baobab?
Comment 13 André Klapper 2021-05-26 09:26:28 UTC
GNOME is going to shut down bugzilla.gnome.org in favor of gitlab.gnome.org.
As part of that, we are mass-closing older open tickets in bugzilla.gnome.org
which have not seen updates for a longer time (resources are unfortunately
quite limited so not every ticket can get handled).

If you can still reproduce the situation described in this ticket in a recent
and supported software version of Baobab, then please follow
  https://wiki.gnome.org/GettingInTouch/BugReportingGuidelines
and create a new enhancement request ticket at
  https://gitlab.gnome.org/GNOME/baobab/-/issues/

Thank you for your understanding and your help.