GNOME Bugzilla – Bug 608204
add Btrfs support to gdu/palimpsest
Last modified: 2018-05-24 10:24:29 UTC
It's not clear how this should work, since Btrfs is more like an alternate lvm2/RAID than a standard filesystem.
Yup - for the record, we already support LVM2 and Linux MD RAID (there are a few obvious things missing, but by and large the support is pretty complete). It might be useful to play around with this code and look at the screenshots here:

http://people.freedesktop.org/~david/?C=M;O=D

Also, as proposed in http://bugs.freedesktop.org/show_bug.cgi?id=26258#c1, I think we should try to spell out how we want this to work. One thing I'm unsure about right now is whether we want single-disk and multi-disk btrfs to work the same way. One could argue that it makes sense to treat the single-disk case just like vanilla ext3 or xfs, and only complicate things for the multi-device case. I don't know.

Does btrfs support multiple separate roots, or only multiple separate snapshots? I guess I need to sit down and study current + planned btrfs features.
(In reply to comment #1)
> Yup - for the record, we already support LVM2 and Linux MD RAID (there's a few
> obvious things missing - but by and large the support is pretty complete). It
> might be useful to play around with this code - and look at screenshots here
>
> http://people.freedesktop.org/~david/?C=M;O=D
>
> Also, as proposed in http://bugs.freedesktop.org/show_bug.cgi?id=26258#c1 I
> think we should try and detail how we want this to work. One thing I'm unsure
> about right now is whether we want the single-disk and multi-disk btrfs to work
> the same way. One could argue that it makes sense to treat the single-disk case
> just like vanilla ext3 or xfs. And only complicate things for the multiple
> devices case. I don't know.

Yeah, that's an interesting argument. My laptop just has an ext2 /boot and a btrfs /, and I suspect many Fedora users will do something similar over the next few Fedora cycles.

> Does btrfs support multiple separate roots? Or only multiple separate
> snapshots?
>
> I guess I need to sit down and study current + planned btrfs features.

Yes, multiple separate roots, as I understand it -- a subvolume is to btrfs as a filesystem is to ext4. You can change which root/subvol to mount at mount time (-o subvol=foo), or by setting the default subvol for the volume beforehand. (Which is how we implement the snapshot-choice feature: by setting the default subvol to correspond to the snapshot that we want for next boot, snapshots just being a special case of subvolumes.) You can apply different quotas etc. to different subvols, so the idea is that you might have /, /var and /home on separate subvols, and snapshotting could then act on each individually.
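To make that concrete, here's a rough sketch of that model driven through the btrfs(8) command line from Python. The mount point and subvolume names are made up for illustration, and it assumes btrfs-progs is installed and the volume is already mounted:

  import subprocess

  MOUNTPOINT = "/mnt/btrfs"   # made-up mount point of an already-mounted btrfs volume

  def btrfs(*args):
      """Run a btrfs(8) subcommand and return its output."""
      return subprocess.check_output(["btrfs"] + list(args), universal_newlines=True)

  # Separate roots, e.g. one each for /, /var and /home:
  for name in ("root", "var", "home"):
      btrfs("subvolume", "create", MOUNTPOINT + "/" + name)

  # A snapshot is just another subvolume that starts life as a clone:
  btrfs("subvolume", "snapshot", MOUNTPOINT + "/root",
        MOUNTPOINT + "/root-before-update")

  # Each subvolume shows up with its own ID:
  print(btrfs("subvolume", "list", MOUNTPOINT))

  # Choosing which root gets mounted next time without -o subvol=...
  # (256 is a placeholder ID taken from the list output):
  # btrfs("subvolume", "set-default", "256", MOUNTPOINT)

Mounting a particular root is then either "mount -o subvol=root ..." or, after set-default, just a plain mount of the device.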
Oh, interesting -- Mike Snitzer has posted patches to the yum-devel list that implement snapshot creation inside yum, intended for use with LVM/DM "snapshot-merge", which implements rollback for any filesystem built on LVM LVs. So our snapshot UI will have more than just a btrfs user, and we'll have more than one "filesystem" that supports_snapshots.
Just to note... this still doesn't seem to have been done. I just installed a Fedora 18 Beta(ish) system with / as a BTRFS 'RAID0' of /dev/vda2 and /dev/vdb1; gnome-disks (and mount, for that matter) claims that /dev/vda2 is mounted as / and that /dev/vdb1 is not mounted.

Chris Murphy has explained to me why it's technically 'true' that only one of the devices is 'mounted' (/proc only claims that one of them is mounted, and so on), but let's face it: as far as the sysadmin is concerned, this 'technical truth' is misleading :) To the sysadmin, vda2 and vdb1 are one 'volume' that is mounted as /, and gnome-disks should convey this 'wider truth' somehow, as it does for LVM and 'traditional' RAID.
Actually it's totally obscured with LVM and md RAID too. Neither /proc/self/mounts nor the mount command shows the actual physical block device for a mounted file system, but rather the virtual device (md0, md1, /dev/mapper/vg1-lv2, etc.). I have no way of correlating md0 or LVs to physical disks, or vice versa, unless I use mdadm or pvscan. So it's consistent that I have to use a btrfs command to ultimately figure this out as well.

As you pointed out, though, maybe System Storage Manager can help with standardizing some of the semantics, and maybe even make some things easier for gdu to implement. https://fedoraproject.org/wiki/Features/SystemStorageManager

I suggest that gdu do for btrfs what it does now for LVM. When the user clicks on a physical disk containing LVM, the "In Use" attribute doesn't even show up; the user sees mount information when they click on the LV. For btrfs, that would be when clicking on either the volume or, better, a subvolume, since that's really what's being mounted.

Representing a subvolume, however, is a bit interesting depending on whether quotas are in place or not. By default they aren't, so the "size" of a subvolume is by default the same as the size of the volume it's on, just like a folder. So it behaves like a folder, and an LV, and a partition, and a filesystem.

Another one is snapshots, which are subvolumes, but they aren't like LVM snapshots. They're basically a clone of a subvolume, so they aren't empty, yet once created they're an independent file system: writes to one don't affect the other, write performance is the same for either, and deleting one doesn't affect the other. Deletion is also really fast, faster than mkfs.ext4 on a disk of the same size.
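For what it's worth, here's a rough sketch of how one could pull the md/DM membership out of sysfs without mdadm or pvscan. It relies on /sys/block/<dev>/slaves being populated, as it is for md and device-mapper devices; the device names are just examples:

  import os

  def member_devices(dev):
      """Return the underlying block devices of a stacked device, e.g. 'md0'."""
      slaves_dir = "/sys/block/%s/slaves" % dev
      if not os.path.isdir(slaves_dir):
          return []   # not a stacked device (or no sysfs entry)
      return sorted(os.listdir(slaves_dir))

  # e.g. member_devices("md0")  -> ['vda2', 'vdb1']
  #      member_devices("dm-0") -> ['sda3']

This doesn't help for btrfs, of course, which is exactly the gap being discussed here.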
(In reply to comment #4)
> Just to note...this still doesn't seem to have been done.

That's why the bug is still open.
The main problem with multi-disk btrfs is that the kernel simply does not convey enough information to reliably infer what block devices are part of it (similar to LVM PVs and RAID "members"). I raised that here

http://comments.gmane.org/gmane.comp.file-systems.btrfs/2851

more than three years ago, and last time I checked it still wasn't solved. This is also why we have this workaround so it at least "works" for single-disk btrfs

http://cgit.freedesktop.org/udisks/tree/src/udisksmountmonitor.c?id=2.0.0#n440

and why this bug is still open. So nothing new to see here, move along.

FWIW, I haven't pursued this very aggressively except for chatting up the btrfs guys when I see them at conferences every 6-12 months - mostly because btrfs still doesn't seem to be used in production by any of the large distributors (especially not multi-disk setups - heck, RAID 5/6 only just landed in Linux 3.6).

Anyway, once the kernel exports this information it's not going to be too hard to make Disks (and its underlying daemon, udisks) do the right thing. I expect that single-disk btrfs will be like any other filesystem and multi-disk btrfs will be like the MD-RAID stuff [1] targeted for Disks 3.8 / Fedora 19.

[1]: See
https://plus.google.com/110773474140772402317/posts/bHiBA1SJvz7
https://plus.google.com/110773474140772402317/posts/2fiLhoRMmJm
https://plus.google.com/110773474140772402317/posts/TPHVtY7myks
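Once the kernel does export it, the lookup itself should be trivial. As a rough sketch, assuming a kernel that populates /sys/fs/btrfs/<UUID>/devices/ with the member block devices (newer kernels than the ones discussed here do; the UUID below is a placeholder):

  import os

  def btrfs_member_devices(fs_uuid):
      """Return the block devices backing the btrfs volume with this UUID."""
      devices_dir = "/sys/fs/btrfs/%s/devices" % fs_uuid
      if not os.path.isdir(devices_dir):
          return []   # kernel doesn't expose it, or the volume isn't mounted
      return sorted(os.listdir(devices_dir))

  # The UUID would come from blkid or the superblock; this one is made up:
  # btrfs_member_devices("01234567-89ab-cdef-0123-456789abcdef") -> ['vda2', 'vdb1']

udisks would naturally do the equivalent in C against the same sysfs paths, but the shape of the correlation is the same.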
davidz: "FWIW, I haven't pursued this very aggressively except for chatting up the btrfs guys when I see them at conferences every 6-12 months - mostly because btrfs still doesn't seem to be used in production by any of the large distributors (especially not multi-disk setups - heck, RAID 5/6 only just landed in Linux 3.6)."

btrfs-by-default is perennially getting pushed as a Fedora feature, and *sometime* it's going to happen, so I figured it'd be a good idea to check in on this stuff ahead of time. You can bet your bottom dollar all of this is going to turn up in a fedora-devel thread as soon as that feature actually looks like happening =)

In general it seems like everyone's singing the same tune, though - the kernel doesn't expose enough info about btrfs devices. The util-linux guy told me the same.
-- GitLab Migration Automatic Message --

This bug has been migrated to GNOME's GitLab instance and has been closed from further activity. You can subscribe and participate further in the new bug via this link to our GitLab instance: https://gitlab.gnome.org/GNOME/gnome-disk-utility/issues/2.