Bug 756829 - SWRaid member detection enhancements

Status: RESOLVED FIXED
Product: gparted
Classification: Other
Component: application
Version: GIT HEAD
Hardware / OS: Other / Linux
Importance: Normal normal
Target Milestone: ---
Assigned To: Mike Fleetwood
QA Contact: gparted maintainers alias
Depends on:
Blocks:

Reported: 2015-10-19 20:37 UTC by Mike Fleetwood
Modified: 2016-01-18 17:41 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments

- Improved SWRaid member detection (v1) (51.13 KB, patch) - 2015-10-25 15:48 UTC, Mike Fleetwood
- GParted 0.24.0 code failing to recognize linux-raid member /dev/sde2 (36.32 KB, image/png) - 2015-10-27 19:34 UTC, Curtis Gedak
- TESTING: Time clearing 1 MiB (v1) (5.83 KB, patch) - 2015-10-29 15:15 UTC, Mike Fleetwood
- Improved SWRaid member detection (v2) (65.31 KB, patch) - 2015-11-01 09:09 UTC, Mike Fleetwood
- Update README file for SWRaid / mdadm (v1) (943 bytes, patch) - 2016-01-02 10:39 UTC, Mike Fleetwood

Description Mike Fleetwood 2015-10-19 20:37:36 UTC
Issues:
1) GParted recognises partitions containing Linux Software RAID members
   as the file system the array contains, when metadata type 0.90 or
   1.0, which is stored at the end of the partition, is used.
   --
   This is because libparted finds the file system signature and doesn't
   know about mdadm created SWRaid.

2) Blkid sometimes also reports the file system instead of the SWRaid
   member.
   --
   Only seen this happen for the first member of a SWRaid mirror
   containing the /boot file system.  Only have one example of this
   configuration.

3) Old versions of blkid don't recognise SWRaid members at all so always
   report file system content when found.
   --
   On old CentOS 5.

4) Would like to add the active array device name as the mount point for
   SWRaid members.  We already put the Volume Group name in the mount
   point (active access reference) for LVM Physical Volumes.

At the moment the only way I can see to resolve (1) and (2) in
particular is to query the configured SWRaid arrays and members using
"mdadm -E -s -v", then use this information when detecting partition
contents before the existing libparted, blkid and internal methods.
This means the extra command is also run on every refresh.  Still
use /proc/mdstat for busy detection.
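
As a rough sketch (illustrative names only, not the final code), the
detection order would be:

// Illustrative sketch only: consult an mdadm derived cache before the
// existing libparted / blkid / internal detection methods.
#include <set>
#include <string>

enum FSType { FS_UNKNOWN, FS_LINUX_SWRAID, FS_OTHER_DETECTION };

// Pretend cache, filled once per refresh from "mdadm -E -s -v" output.
static std::set<std::string> swraid_members;

static FSType detect_content( const std::string & partition_path )
{
    // SWRaid members first, so metadata 0.90/1.0 members are not
    // misreported as the file system the array contains.
    if ( swraid_members.count( partition_path ) )
        return FS_LINUX_SWRAID;
    // Otherwise fall back to libparted, blkid and internal detection.
    return FS_OTHER_DETECTION;
}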

Will not be providing any capability to start or stop arrays like
DMRaid.cc does.  The GParted shell wrapper already creates udev rules to
prevent scanning from starting any Linux Software RAID arrays.

Curtis,

Let me know if you have any concerns about this, otherwise I will
proceed with this.

Thanks,
Mike
Comment 1 Mike Fleetwood 2015-10-19 20:50:57 UTC
Further details for above issues and proposed solution.


1) GParted recognises file system inside SWRaid member when metadata
   0.90 or 1.0 is used
   --
   Metadata 0.90 and 1.0 store the raid member superblock at the end of
   the partition.  This is recommended so that the boot loader sees just
   a file system at the start of the boot partition and doesn't need to
   understand Linux Software RAID.

e.g. on CentOS 7.  Setup.
    # mdadm --create --level=linear --metadata=1.0 --raid-devices=1 \
    > --force /dev/md1 /dev/sdb1
    # mkfs.ext4 /dev/md1

Parted recognises sdb1 as containing an ext4 file system.
    # parted /dev/sdb print
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdb: 8590MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags: 

    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  1075MB  1074MB  primary  ext4

Blkid reports this correctly and even knows there are two signatures.
    # blkid -V
    blkid from util-linux 2.23.2  (libblkid 2.23.0, 25-Apr-2013)
    # blkid | grep sdb
    /dev/sdb1: UUID="9dd6c1cc-ec58-15b9-cdd9-360ab4bdad2e"
               UUID_SUB="215f09fd-772b-804a-fdc4-f8adb08d14ec"
               LABEL="localhost.localdomain:1" TYPE="linux_raid_member"
    # wipefs /dev/sdb1
    offset               type
    ----------------------------------------------------------------
    0x438                ext4   [filesystem]
                         UUID:  0df50a62-6fe1-4b13-958a-479129eabb29

    0x3fffe000           linux_raid_member   [raid]
                         LABEL: localhost.localdomain:1
                         UUID:  9dd6c1cc-ec58-15b9-cdd9-360ab4bdad2e


2) Blkid sometimes also reports file system instead of SWRaid member

e.g. on CentOS 6 configured with mirrored drives like this:
    sda                               sdb
    ----                              ----
    sda1  [md1 (raid1 metadata=1.0)]  sdb1  primary
    sda2  [md2 (raid1 metadata=1.1)]  sdb2  primary
    sda3  [md3 (raid1 metadata=1.1)]  sdb3  primary

    md1  ext4  /boot
    md2  swap
    md3  ext4  /

Blkid reports sda1 as containing ext4 rather than a raid member, yet
reports sdb1 (the other member in the mirror pair) as containing a raid
member!
    # blkid | egrep 'sd[ab]1'
    /dev/sda1: UUID="10ab5f7d-7d8a-4171-8b6a-5e973b402501"
               TYPE="ext4" LABEL="chimney-boot"
    /dev/sdb1: UUID="15224a42-c25b-bcd9-15db-60004e5fe53a"
               UUID_SUB="0a095e45-9360-1b17-0ad1-1fe369e22b98"
               LABEL="chimney:1" TYPE="linux_raid_member"

When probing with the blkid cache bypassed, sda1 is reported as a raid member!
    # blkid /dev/sda1
    /dev/sda1: UUID="10ab5f7d-7d8a-4171-8b6a-5e973b402501"
               TYPE="ext4" LABEL="chimney-boot"
    # blkid -c /dev/null /dev/sda1
    /dev/sda1: UUID="15224a42-c25b-bcd9-15db-60004e5fe53a"
               UUID_SUB="d0460f90-d11a-e80a-ee1c-3d104dae7e5d"
               LABEL="chimney:1" TYPE="linux_raid_member"

But we can't bypass the cache because that can make GParted wait for
minutes if a user has a floppy device configured in the BIOS when it is
not connected.  See commit:
    Fix long scan problem when BIOS floppy setting incorrect
https://git.gnome.org/browse/gparted/commit/?id=18f863151c82934fe0a980853cc3deb1e439bec2


3) Old versions of blkid don't recognise SWRaid members at all.

e.g. on CentOS 5, setup.
    # mdadm --create /dev/md1 --level=raid1 --metadata=1.0 \
    > --raid-devices=2 /dev/sdb1 /dev/sdc1
    # mkfs.ext2 -L test1-ext2 /dev/md1
    # mdadm --create /dev/md2 --level=raid1 --metadata=1.2 \
    > --raid-devices=2 /dev/sdb2 /dev/sdc2
    # mkfs.ext3 -L test2-ext3 /dev/md2

Blkid detects ext2 in the metadata=1.0 members and nothing in the
metadata=1.2 members.
    # blkid -v
    blkid 1.0.0 (12-Feb-2003)
    # blkid -c /dev/null | egrep 'sdb|sdc|md'
    /dev/sdb1: LABEL="test1-ext2"
               UUID="6e8ea3c5-d26d-472b-b2d9-701271155679" TYPE="ext2" 
    /dev/sdc1: LABEL="test1-ext2"
               UUID="6e8ea3c5-d26d-472b-b2d9-701271155679" TYPE="ext2" 
    /dev/md1: LABEL="test1-ext2"
               UUID="6e8ea3c5-d26d-472b-b2d9-701271155679" TYPE="ext2" 
    /dev/md2: LABEL="test2-ext3"
               UUID="35f007a5-9d25-4087-9af8-8d129218af12"
               SEC_TYPE="ext2" TYPE="ext3" 


Plan) Use mdadm to report SWRaid members

e.g. On GParted Live CD, with previously created mdadm SWRaid arrays and
dmraid created ISW (Intel Software RAID) mirror drives.  Configuration
like this:
    sda                               sdb
    ----                              ----
    sda1  [md1 (raid1 metadata=1.0)]  sdb1  primary
    sda2  [md2 (raid1 metadata=1.1)]  sdb2  primary
    sda3  [md3 (raid1 metadata=1.1)]  sdb3  primary

    sdc   [ISW DMRaid array]          sdd

Mdadm reports the configured arrays.  This newer version of mdadm also
recognises the dmraid created ISW array, a.k.a. IMSM (Intel Matrix
Storage Manager).  The information includes the names of the member
devices.
    # mdadm -E -s -v
    ARRAY metadata=imsm UUID=9a5e...
       devices=/dev/sdc,/dev/sdd
    ARRAY /dev/md/MyRaid container=9a5e... member=0 UUID=4751...

    ARRAY /dev/md/1  level=raid1 metadata=1.0 num-devices=2 UUID=1522...
       devices=/dev/sda1,/dev/sdb1
    ARRAY /dev/md/2  level=raid1 metadata=1.1 num-devices=2 UUID=6719...
       devices=/dev/sda2,/dev/sdb2
    ARRAY /dev/md/3  level=raid1 metadata=1.1 num-devices=2 UUID=dc9f...
       devices=/dev/sda3,/dev/sdb3
    ARRAY /dev/md/4  level=raid1 metadata=1.2 num-devices=2 UUID=c855...
       devices=/dev/sda5,/dev/sdb5
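
A minimal sketch (not the actual GParted code) of pulling the member
device names out of that output, keying off the "devices=" continuation
lines:

// Sketch only: collect member devices from "mdadm -E -s -v" output by
// splitting the comma separated list on each "devices=" line.
#include <sstream>
#include <string>
#include <vector>

static std::vector<std::string> parse_member_devices( const std::string & mdadm_output )
{
    std::vector<std::string> members;
    std::istringstream lines( mdadm_output );
    std::string line;
    while ( std::getline( lines, line ) )
    {
        std::string::size_type pos = line.find( "devices=" );
        if ( pos == std::string::npos )
            continue;
        std::istringstream devs( line.substr( pos + 8 ) );   // skip "devices="
        std::string dev;
        while ( std::getline( devs, dev, ',' ) )
            members.push_back( dev );
    }
    return members;
}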

Still use /proc/mdstat to report members of active SWRaid arrays.
    # cat /proc/mdstat
    Personalities : [raid1]
    md124 : active raid1 sda1[2] sdb1[3]
          524224 blocks super 1.0 [2/2] [UU]

    md125 : active raid1 sda2[3] sdb2[2]
          5238720 blocks super 1.1 [2/2] [UU]

    md126 : active raid1 sda3[2] sdb3[1]
          10477440 blocks super 1.1 [2/2] [UU]
          bitmap: 0/1 pages [0KB], 65536KB chunk

    md127 : active raid1 sda5[0] sdb5[0]
          523712 blocks super 1.2 [2/2] [UU]

    unused devices: <none>
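
And a similar illustrative sketch of listing the busy members from
/proc/mdstat, picking the "sda1[2]" style tokens off the
"mdNNN : active ..." lines:

// Sketch only: list members of active arrays from /proc/mdstat.
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

static std::vector<std::string> busy_swraid_members()
{
    std::vector<std::string> members;
    std::ifstream mdstat( "/proc/mdstat" );
    std::string line;
    while ( std::getline( mdstat, line ) )
    {
        if ( line.compare( 0, 2, "md" ) != 0 )
            continue;                          // only the "mdNNN : ..." lines
        std::istringstream tokens( line );
        std::string tok;
        while ( tokens >> tok )
        {
            std::string::size_type open = tok.find( '[' );
            if ( open != std::string::npos )   // member tokens look like "sda1[2]"
                members.push_back( "/dev/" + tok.substr( 0, open ) );
        }
    }
    return members;
}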

The ISW array is running under DM (Device Mapper), not mdadm/md (Meta
Devices).
    # dmsetup ls
    isw_dahbabafdg_MyRaid1  (254:1)
    isw_dahbabafdg_MyRaid   (254:0)
    # dmraid -s
    *** Group superset isw_dahbabafdg
    --> Active Subset
    name   : isw_dahbabafdg_MyRaid
    size   : 16768000
    stride : 128
    type   : mirror
    status : ok
    subsets: 0
    devs   : 0
    spares : 0
Comment 2 Curtis Gedak 2015-10-20 15:02:52 UTC
Hi Mike,

Your approach to this problem sounds good to me.

We should try to minimize the calls to mdadm as much as possible.  For example if a computer does not have mdadm installed then we should only look for the command once.
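
For example, a minimal sketch of looking for the command only once,
assuming the glibmm helper Glib::find_program_in_path() (the wrapper
function here is just illustrative):

#include <glibmm/miscutils.h>

static bool mdadm_found()
{
    // Function-local static: the PATH search runs only on the first call.
    static const bool found = ! Glib::find_program_in_path( "mdadm" ).empty();
    return found;
}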

Curtis
Comment 3 Mike Fleetwood 2015-10-25 15:48:23 UTC
Created attachment 314075 [details] [review]
Improved SWRaid member detection (v1)

Hi Curtis,

Here's the patchset for this.

Tested with Fake/BIOS/dmraid mirrored array and SWRaid/mdadm arrays
(metadata versions 0.90, 1.0, 1.1 & 1.2).  Mostly on CentOS 6 and
Debian 8.

* CentOS 6 because it has switched to using mdadm to start Fake/BIOS/
  dmraid arrays so they appear in "mdadm -Esv" output and /proc/mdstat
  alongside SWRaid arrays.
* Debian 8 because it is still using dmraid to start and manage
  Fake/BIOS/dmraid arrays.  Those arrays do appear in "mdadm -Esv"
  output but do not appear in /proc/mdstat.
* The variety of SWRaid metadata versions to test:
** Variability in the formatting of the metadata/superblock version in
   "mdadm -Esv" output and /proc/mdstat content.  0.90 versions don't
   print the metadata/superblock version, so it has to be assumed; other
   versions do print it.  (See the sketch after this list.)
** Metadata/superblock stored at the beginning and at the end, allowing
   blkid and libparted to detect file system signatures in the array and
   also in the partitions.
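
As a rough illustration of the 0.90 case mentioned above (not the patch
code), the version token can be pulled from a /proc/mdstat
"... blocks super X.Y ..." line, defaulting to 0.90 when it is absent:

#include <string>

static std::string mdstat_metadata_version( const std::string & blocks_line )
{
    const std::string key = "super ";
    std::string::size_type pos = blocks_line.find( key );
    if ( pos == std::string::npos )
        return "0.90";                         // no "super" token -> assume 0.90
    pos += key.size();
    return blocks_line.substr( pos, blocks_line.find( ' ', pos ) - pos );
}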

Thanks,
Mike
Comment 4 Curtis Gedak 2015-10-26 17:28:05 UTC
Thanks Mike for this patch set to enhance RAID handling.  I will test and review patch set v1 after the GParted 0.24.0 release on Oct 27th.
Comment 5 Curtis Gedak 2015-10-26 18:41:26 UTC
Hi Mike,

I changed my mind and did a quick review of the patch code.  ;-)

Patch set (v1) is looking good to me.  I have one suggestion that might improve the maintainability of the code.


Possible improvement to amend 5/7 patch:

Rather than adding "initialise_if_required();" to each method of the class, how about adding the initialization to the class creation method at the beginning of the SWRaid_Info class?

For example (similar to DMRaid.cc):

SWRaid_Info::SWRaid_Info()
{
	if ( ! cache_initialised )
	{
		set_command_found();
		load_swraid_info_cache();
		cache_initialised = true;
	}
}

The class initialization code would be called the very first time that the class is invoked -- no specific method need be called.  This would simplify the code in each of the subsequent class methods.

If I've missed something in the way this functions then please feel free to indicate where I might have misunderstood.

Thanks,
Curtis
Comment 6 Mike Fleetwood 2015-10-26 20:53:00 UTC
Hi Curtis,

Thanks for having a quick look at the code.


What you suggest wouldn't work without further changes.  Constructors
are only called when an object is defined in the code and constructed.
I.e.  In the GParted_Core methods where the SWRaid_Info class is used
there would have to be a variable like this:

FILESYSTEM GParted_Core::detect_filesystem( ... )
{
    SWRaid_Info swraid_info;  // Object swraid_info constructed here
    ...
}

This is how DMRaid and FS_Info work with objects constructed in every
GParted_Core method where it is used, every time the method is called.


A second method to avoid constructing in every function where the cache
is used would be to have a swraid_info member in the GParted_Core class.
In such a case it would get constructed as soon as the program started.
Definitely before set_devices_thread() is run.  So it couldn't load the
cache from the constructor.  Therefore there would have to be an
explicit load_cache() call and every other SWRaid method would have to
check if the cache had been loaded yet.  Just like the code does now.


The third method is the current SWRaid_Info way: all static methods and
variables and explicit initialise_if_required() calls.
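
In outline (simplified names, not the actual code) that pattern is:

// Sketch of the all-static pattern: every accessor ensures the cache
// has been loaded before using it.
#include <string>

class SWRaid_Info
{
public:
    static bool is_member( const std::string & path )
    {
        initialise_if_required();
        // ... look path up in the cache ...
        return false;
    }
private:
    static void initialise_if_required()
    {
        if ( ! cache_initialised )
        {
            load_swraid_info_cache();   // run "mdadm -E -s -v" and parse it
            cache_initialised = true;
        }
    }
    static void load_swraid_info_cache() { /* omitted */ }
    static bool cache_initialised;
};

bool SWRaid_Info::cache_initialised = false;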


It boils down to:
Method 1   - Explicit objects in various functions with the constructor
             loading the cache if required.  E.g.  DMRaid, FS_Info
Method 2/3 - Explicit check in the interface methods to load the cache.
             E.g.  LVM2_PV_Info, SWRaid.


I find it all too easy to look past the fact that C++ is constructing and
destructing objects when they come into and go out of scope.  The code
doesn't shout this at you.  I prefer explicit calls.  That's probably my
C heritage showing through.

Anyway I would prefer leaving the SWRaid_Info module coded as it is.


Thanks,
Mike
Comment 7 Mike Fleetwood 2015-10-26 20:58:12 UTC
Method 4 - Use assert() to crash the program when the API is used wrong
           and ensure the accessors aren't called before the cache is
           loaded.

So far I have only been adding assert()s to check pointers which, if set
incorrectly and then used, would lead to undefined behaviour and likely
a crash anyway.
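
A tiny illustration of method 4 (names here are illustrative, not the
actual accessors):

#include <cassert>
#include <string>

static bool cache_initialised = false;

static std::string member_label( const std::string & /* path */ )
{
    assert( cache_initialised );   // fail fast if called before the cache is loaded
    // ... normal cache lookup ...
    return "";
}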
Comment 8 Curtis Gedak 2015-10-27 19:34:13 UTC
Created attachment 314254 [details]
GParted 0.24.0 code failing to recognize linux-raid member /dev/sde2

Hi Mike,

Thank you for your detailed explanation of the possible options.

To better understand how this works I temporarily added a debug statement to print when the cache was loaded.  Then I tried combinations of commenting out the "initialise_if_required()" calls, commenting out the "load_cache()" method, and adding a default constructor to load the SWRaid_Info cache.  It works exactly as you described in the various methods.

With this improved understanding I am on board with your choice of implementation for SWRaid_Info().

I have performed initial testing of patch set (v1) with Kubuntu 12.04 and so far all looks good.  It is a definite improvement over the previous GParted code which only recognized busy linux-raid when there were no other recognized file system signatures (shown in attachment - both sde2 and sde3 are actually linux-raid).

With patch set (v1), linux-raid is properly recognized on both sde2 and sde3.

I plan to test some more before committing the code.

Curtis
Comment 9 Mike Fleetwood 2015-10-28 12:17:10 UTC
Hi Curtis,

I've looked at your screenshot of how the current code is just using
whatever blkid identifies first as the content for /dev/sde2.  So I
assume you were previously testing ZFS detection using that partition.

I would be interested to see what the following commands report:

mdadm -Esv
cat /proc/mdstat
wipefs /dev/sde2

(The wipefs command only reports the signatures.  It will NOT remove
any; it needs the -a flag for that.)

I previously looked at the erasure code in mdadm.  From memory it
erased only a handful of file system signatures that it recognised.
That's dangerous as this case shows.  So creating an mdadm SWRaid member
over the top of ZFS will leave the ZFS signature behind and blkid is
choosing to report ZFS rather than SWRaid member.  It is likely that a
more recent distro/blkid version would report this case as SWRaid
member.

Mike
Comment 10 Curtis Gedak 2015-10-28 15:25:55 UTC
Hi Mike,

You are correct.  Previously I had tested with a ZFS file system image in the partition.  It does appear that mdadm creation of the RAID *did not* wipe out the ZFS file system signatures.  Do we wipe these out in GParted?

Following is the output of the requested commands from my kubuntu 12.04 test.

root@octo:~# mdadm -Esv
ARRAY metadata=imsm UUID=06b7191e:a7535fc1:f851e4a7:7747fa07
   devices=/dev/sdd,/dev/sdc
ARRAY /dev/md/Vol0 container=06b7191e:a7535fc1:f851e4a7:7747fa07 member=0 UUID=d24a487a:d7e2ede0:ace438e2:42a1f7c7

ARRAY /dev/md/1 level=raid1 metadata=1.2 num-devices=2 UUID=c6af6280:f07f8339:a6a93e68:9046e978 name=octo:1
   devices=/dev/sde3,/dev/sde2
root@octo:~# cat /proc/mdstat 
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md127 : active (auto-read-only) raid1 sde3[1] sde2[0]
      10477440 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>
root@octo:~# wipefs /dev/sde2
offset               type
----------------------------------------------------------------
0x25000              zfs_member   [raid]

0x1000               linux_raid_member   [raid]
                     LABEL: octo:1
                     UUID:  c6af6280-f07f-8339-a6a9-3e689046e978

root@octo:~# 

Regards,
Curtis
Comment 11 Mike Fleetwood 2015-10-28 19:11:08 UTC
Hi Curtis,

Unfortunately GParted doesn't currently wipe ZFS signatures and possibly
not all the different SWRaid metadata versions either.

GParted_Core::erase_filesystem_signatures() zeros the first 68K, the
last whole 4K and a few others.  This will definitely erase SWRaid
metadata 1.1 and 1.2 super blocks at the start of the partition.  It may
not erase metadata 0.90 and 1.0 super blocks at the end of the
partition.  I will check and add another commit if required.

My slight concern is what to choose to erase for ZFS.  It has 4 super
blocks, each 256K, 2 at the start and 2 at the end.  So we potentially
have to write 1M.  Remember that since threading was reduced, every call
into libparted, apart from the copy and now FS resize, is run in the
main UI thread.  Writing 1M to a block device may take a while and it
blocks the UI while it does it.

ZFS On-Disk Specification
www.giis.co.in/Zfs_ondiskformat.pdf

Thanks,
Mike
Comment 12 Mike Fleetwood 2015-10-29 15:15:26 UTC
Created attachment 314409 [details] [review]
TESTING: Time clearing 1 MiB (v1)

Hi Curtis,

Here's a quick TESTING ONLY patch(set) to test the performance of
writing 1 MiB of zeros when erasing signatures.  The first 2 patches add
the timing code.  The 3rd patch writes 1 MiB of zeros to clear the 4 ZFS
labels (superblocks).

Sample output looks like this:
 23.997922 +23.920738 STOPWATCH_RESET     
  0.000000 +0.000000 erase_filesystem_signatures() call
  0.060649 +0.060649 erase_filesystem_signatures() writing zeros ...
  0.061181 +0.000532 erase_filesystem_signatures() flushing OS cache ...
  0.077221 +0.016040 erase_filesystem_signatures() return 1

N.B. Use the first column time (cumulative since STOPWATCH_RESET)
for the "erase_filesystem_signatures() return" line.  The second column
is the incremental time from the previous line.

 23.997922 +23.920738 STOPWATCH_RESET     
  0.000000 +0.000000 erase_filesystem_signatures() call
  0.060649 +0.060649 erase_filesystem_signatures() writing zeros ...
           ^^^^^^^^^ - Time to open the device
  0.061181 +0.000532 erase_filesystem_signatures() flushing OS cache ...
           ^^^^^^^^^ - Time to write zeros from GParted memory into
                       kernel buffer cache
  0.077221 +0.016040 erase_filesystem_signatures() return 1
  ^^^^^^^^ ^^^^^^^^^ - Time to sync the kernel buffer cache to the
         |             drive and close the file handle
         +------------ Overall time when GParted is calling
                       erase_filesystem_signatures() and not responding
                       to the UI.

When testing on a flash USB key the whole erase_filesystem_signatures()
call normally takes 0.08s with the current amount of zeros written and
0.11s with 1 MiB of zeros written.  Have seen either amount take up to
2.8 seconds though.  Guess that the flash key occasionally has extra
overhead.  Looks like it will probably be OK to write 1 MiB to clear the
ZFS labels.  The only problem is that if the device hangs then GParted
will too.

Thanks,
Mike
Comment 13 Curtis Gedak 2015-10-29 17:06:10 UTC
Hi Mike,

Following are timing tests from hard drives on my development computer
using the testing patch (v1) from comment #12:

/dev/mapper/isw_efjbbijhh_Vol0 - Fake BIOS RAID (pair of 160 GB SATA HDDs)
  0.000000 +0.000000 STOPWATCH_RESET
  0.000000 +0.000000 erase_filesystem_signatures() call
  0.021218 +0.021218 erase_filesystem_signatures() writing zeros ...
  0.021774 +0.000556 erase_filesystem_signatures() flushing OS cache ...
  0.067714 +0.045940 erase_filesystem_signatures() return 1

/dev/md127 - Linux SW RAID (two partitions on single 160 GB IDE HDD)
 87.279743 +87.212029 STOPWATCH_RESET
  0.000000 +0.000000 erase_filesystem_signatures() call
  0.074549 +0.074549 erase_filesystem_signatures() writing zeros ...
  0.075092 +0.000543 erase_filesystem_signatures() flushing OS cache ...
  0.196849 +0.121757 erase_filesystem_signatures() return 1

/dev/sde - 160 GB IDE HDD (this is a slow failing drive)
 65.187984 +64.991135 STOPWATCH_RESET
  0.000000 +0.000000 erase_filesystem_signatures() call
  0.104957 +0.104957 erase_filesystem_signatures() writing zeros ...
  0.105882 +0.000925 erase_filesystem_signatures() flushing OS cache ...
  0.140279 +0.034397 erase_filesystem_signatures() return 1

/dev/sda - 2 TB SATA HDD
158.671511 +158.531232 STOPWATCH_RESET
  0.000000 +0.000000 erase_filesystem_signatures() call
  0.209161 +0.209161 erase_filesystem_signatures() writing zeros ...
  0.209684 +0.000524 erase_filesystem_signatures() flushing OS cache ...
  0.250184 +0.040500 erase_filesystem_signatures() return 1

/dev/sdb - 120 GB SSD
120.815938 +120.565754 STOPWATCH_RESET
  0.000000 +0.000000 erase_filesystem_signatures() call
  0.009964 +0.009964 erase_filesystem_signatures() writing zeros ...
  0.010419 +0.000454 erase_filesystem_signatures() flushing OS cache ...
  0.015817 +0.005398 erase_filesystem_signatures() return 1

From the above tests it appears that the worst case scenario was on
my 2 TB drive and even then the delay was only 0.25s.  I think that
this is more than acceptable to ensure that the ZFS file system
signatures are wiped out.

Curtis
Comment 14 Mike Fleetwood 2015-11-01 09:09:54 UTC
Created attachment 314574 [details] [review]
Improved SWRaid member detection (v2)

Hi Curtis,

Here's patchset v2.  Compared to patchset v1 from comment 3 above it
adds these commits:

    Add clearing of SWRaid metadata 0.90 and 1.0 super blocks (#756829)
    Stop over rounding up of signature zeroing lengths (#756829)
    Add clearing of ZFS labels

To make sure the rounding / alignment works the same as mdadm implements
for metadata 0.90 and 1.0 super blocks stored at the end of the block
device, I created an odd sized loop device, created an SWRaid array on
it and got GParted to clear the signature.

    truncate -s $((10*1024*1024-512)) /tmp/block0.img
    losetup /dev/loop0 /tmp/block0.img
    mdadm --create /dev/md1 --level=linear --raid-devices=1 --force \
      --metadata=1.0 /dev/loop0
    mdadm --stop /dev/md1
    [GParted format /dev/loop0 to cleared]

Various checks to confirm the SWRaid super block has gone:
    hexdump -C /dev/loop0
    wipefs /dev/loop0
    blkid /dev/loop0
    mdadm -Esv

Clean up.
    losetup -d /dev/loop0
    rm /tmp/block0.img

Also looked at the ZFS source code to find out that the labels are
written at 256 KiB boundaries, as well as each being 256 KiB in size.
(I considered it too much detail to put in the code comment.  Even this
is only hinting at everything you need to understand to confirm what the
code is doing).

https://github.com/zfsonlinux/zfs
zfs/module/zfs/vdev.c
	vdev_open()   osize = P2ALIGN(osize, sizeof(vdev_label_t))
zfs/module/zfs/vdev_label.c
	vdev_label_read()
	vdev_label_offset()
lib/libspl/include/sys/sysmacros.h
	P2ALIGN
zfs/include/sys/vdev_impl.h
	struct vdev_label {...} vdev_label_t /* 256K total */
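
Putting those references together, the label locations work out like
this (sketch of the offsets only, not the GParted erase code):

// ZFS vdev labels: 256 KiB each, L0 and L1 at the start, L2 and L3 at
// the end, with the device size first aligned down to a 256 KiB
// multiple (the P2ALIGN above).
#include <stdint.h>
#include <vector>

static std::vector<uint64_t> zfs_label_offsets( uint64_t device_size_bytes )
{
    const uint64_t label_size = 256 * 1024;                      // sizeof(vdev_label_t)
    uint64_t aligned = device_size_bytes & ~( label_size - 1 );  // P2ALIGN equivalent
    std::vector<uint64_t> offsets;
    offsets.push_back( 0 );                         // L0
    offsets.push_back( label_size );                // L1
    offsets.push_back( aligned - 2 * label_size );  // L2
    offsets.push_back( aligned - label_size );      // L3
    return offsets;
}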

Thanks,
Mike
Comment 15 Curtis Gedak 2015-11-02 17:35:21 UTC
Thanks Mike for the updated patch set v2 in comment #14.

The code looks good to me and from testing I confirmed that SWRAID and ZFS file system signatures are cleared.

Do you have any additional patches you wish to add to this report?

If not then I will commit patch set v2.

Curtis
Comment 16 Mike Fleetwood 2015-11-02 19:05:27 UTC
Hi Curtis,

No further patches for this set.  Push away upstream.

Mike
Comment 17 Curtis Gedak 2015-11-02 19:25:01 UTC
Patch set v2 from comment #14 has been committed to the git repository for inclusion in the next release of GParted.

The relevant git commits can be viewed at the following links:

Detect Linux SWRaid members by querying mdadm (#756829)
https://git.gnome.org/browse/gparted/commit/?id=5f02bcf463ed61989e7f44926e7258d0ffcb2461

Move busy detection of SWRaid members into the new module (#756829)
https://git.gnome.org/browse/gparted/commit/?id=0ce985738038b171c7a96594fa99f7b69a4bf6f6

Use UUID and label of SWRaid arrays too (#756829)
https://git.gnome.org/browse/gparted/commit/?id=7255c8af403cb1121f81d7caa5cedcd754a8f45e

Populate member mount point with SWRaid array device (#756829)
https://git.gnome.org/browse/gparted/commit/?id=f6c2f00df7858a7f4b97e699b9bcf1023bba850a

Ensure SWRaid_Info cache is loaded at least once (#756829)
https://git.gnome.org/browse/gparted/commit/?id=bab1109d3df55b733cd1245f8661a60f21fa98c3

Handle unusual case of missing mdadm but SWRaid arrays active (#756829)
https://git.gnome.org/browse/gparted/commit/?id=a86f28bc3253506a3c6bba870390ac130c919c7a

Add clearing of SWRaid metadata 0.90 and 1.0 super blocks (#756829)
https://git.gnome.org/browse/gparted/commit/?id=743968ef68085f6043e8fd569384c0747c8fc9e2

Stop over rounding up of signature zeroing lengths (#756829)
https://git.gnome.org/browse/gparted/commit/?id=eec78cd2b203d9e895c730a547345e4acafc40cf

Add clearing of ZFS labels
https://git.gnome.org/browse/gparted/commit/?id=32b5106aa12c51d63273ab8e43f27ff6ef3b4826

Correct inclusion of Operation* headers in GParted_Core.cc
https://git.gnome.org/browse/gparted/commit/?id=52183058ef34dfa82dc1054190e64e72fd9ef076
Comment 18 Mike Fleetwood 2016-01-02 10:39:02 UTC
Created attachment 318155 [details] [review]
Update README file for SWRaid / mdadm (v1)

Hi Curtis,

Yesterday when reviewing where I was with LUKS read-only support I
realised that I had missed adding information into the README file about
mdadm for improved SWRaid support.  Here is the patch for this.

Thanks,
Mike
Comment 19 Curtis Gedak 2016-01-03 16:51:43 UTC
Thanks Mike for a simple patch to get things rolling in the new year.  :-)

Patch v1 from comment 18, which contains an update to the README file, has been committed to the git repository.

The relevant git commit can be viewed at the following link:

Add Linux SWRaid / mdadm note to the README file (#756829)
https://git.gnome.org/browse/gparted/commit/?id=28ad527874e73875b64cdfea5806e1d7c9e0f29d
Comment 20 Curtis Gedak 2016-01-18 17:41:22 UTC
This enhancement was included in the GParted 0.25.0 release on January 18, 2016.