Bug 678379 - Could not stat device /dev/md/0 - No such file or directory
Status: RESOLVED FIXED
Product: gparted
Classification: Other
Component: application
Version: 0.12.0
OS: Other Linux
Priority: Normal
Severity: normal
Target Milestone: ---
Assigned To: gparted maintainers alias
Duplicates: 689160
Depends on:
Blocks:
Reported: 2012-06-19 09:49 UTC by martin.suc
Modified: 2012-12-12 18:12 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
Patch 1 to read Linux SW RAID device names from /proc/mdstat (2.11 KB, patch), 2012-11-13 18:44 UTC, Curtis Gedak
Patch 2 to improve Linux software RAID device detection (11.83 KB, patch), 2012-11-14 20:45 UTC, Curtis Gedak
Patch 3 to improve Linux software RAID device detection (12.99 KB, patch), 2012-12-04 17:35 UTC, Curtis Gedak

Description martin.suc 2012-06-19 09:49:35 UTC
2. Could not stat device /dev/md/0 - No such file or directory.

---
Previously happening:
GParted tries to identify /dev/md/0, /dev/md/1 and /dev/md/2 even though no such devices exist. Everything has been removed and erased; they are not in fstab, not in mdadm.conf, and mdstat is empty. There is no reason for it.

---
Another occurrence today, with version 0.12.1:
GParted tries to identify /dev/md/something, which does not exist.
It exists under the name /dev/md/hostname:something.

regards,
M.
Comment 1 Curtis Gedak 2012-06-19 15:06:58 UTC
Do you know if the disk drives previously had a Linux Software RAID setup?

What is the output from the following command?

     mdadm --examine --scan

This is the command that scans for Linux Software RAID signatures.
Comment 2 Curtis Gedak 2012-11-05 17:05:48 UTC
Setting the status of this report to NEEDINFO.

Without the additional information it is very difficult to proceed with troubleshooting this bug report.
Comment 3 martin.suc 2012-11-06 13:34:34 UTC
Unfortunately, it is too late for additional info.

I have redesigned all the RAID arrays many times since the moment I reported this, and changed the config files.

Nowadays I am using GParted version 0.14, which does not have the mentioned error.

BUT
Yesterday I got the same error when I booted the GParted Live ISO (version 0.14): it is still trying to mount /dev/md/0 and /dev/md/1, and failing.

The current output is:
mdadm --examine --scan

ARRAY /dev/md/raid0ext41 metadata=1.2 UUID=8248abdc:c6cb1226:1f60ded4:3b5dbffa name=PCEUBU1:raid0ext41
ARRAY /dev/md/raid0btrfs1 metadata=1.2 UUID=11d759ba:1e0a545d:f591ceda:3203976a name=PCEUBU1:raid0btrfs1
ARRAY /dev/md/raid0jfs1 metadata=1.2 UUID=60f0aa2f:f2047d01:9a03ea4b:41ca8193 name=PCEUBU1:raid0jfs1
ARRAY /dev/md/raid0reiserfs1 metadata=1.2 UUID=a95d4031:52c0bfcc:584587d6:85fa6eec name=PCEUBU1:raid0reiserfs1
ARRAY /dev/md/raid0xfs1 metadata=1.2 UUID=853cf572:b972d3b3:c8fab9ec:11d4d102 name=PCEUBU1:raid0xfs1
ARRAY /dev/md/raid0oczclone metadata=1.2 UUID=89db6336:c55a88ee:acdabbf4:94424fff name=PCEUBU1:raid0oczclone
ARRAY /dev/md/raid0ocz metadata=1.2 UUID=4e3ed53d:6b0062c5:dd8285a4:41940138 name=UBU2TBEXT42:raid0ocz

(but it could change drastically any day)

(By the way, the GParted Live ISO does not work at all: it tries to detect partitions, the RAID partition detection fails with the same messages I reported, and detection of the remaining partitions took over 10 minutes without finishing. So I cancelled it and rebooted into the system.)

The version I am using now in the Ubuntu instance, taken from
http://www.ubuntuupdates.org/gparted
does not have this particular problem.

(
BUT I have got other errors:
My experience is that all the bugs I have reported so far correspond to GParted not being able to deal with a RAID partition when only ONE is defined on the whole raid0 array (no matter what filesystem is on it): the information shown is wrong and the flags are empty.
If I split the raid0 into more partitions (no matter which flags I use on them), it works more or less properly.
This does not apply to the GParted Live ISO (which surprises me, since I was told it uses a native library for dealing with partitions instead of libparted).
)


So, I am sorry, but no more info will be available.
Comment 4 Curtis Gedak 2012-11-06 16:36:50 UTC
Thank you Martin for the extra information.

I wonder if this problem is due to Linux software RAID (mdadm) now providing support for some motherboard BIOS RAIDs (fake RAIDs), which previously were only handled by the device mapper RAID (dmraid)?

Perhaps the RAID device is being recognized by both mdadm and dmraid?

To check if this might be the case, would you be able to provide the output from the following two commands (using your current RAID configuration)?

     sudo dmraid -sa -c

     sudo mdadm --examine --scan
Comment 5 Phillip Susi 2012-11-06 19:11:48 UTC
I still do not understand what the exact error is, or what you were doing to prompt it.  Could you try to describe it again?

It does not look like this has anything to do with dmraid, Curtis, but there does appear to be a goofy new mdadm feature where you can give arrays names instead of just being numbered.  Hence names like /dev/md/raid0ext41 rather than /dev/md0.
Comment 6 martin.suc 2012-11-06 21:46:13 UTC
sudo dmraid -sa -c
no raid disks

sudo mdadm --examine --scan
is the same as in comment #3

((I have not tried BIOS RAID (even though the motherboard supports it natively), nor special hardware PCI-E RAID cards. I am using only software RAID with mdadm, and only raid0.))

To give an array a name I am using this:
mdadm --verbose --create /dev/md123 --chunk=64 --level=0 --name=raid0oczclone --raid-devices=3 /dev/sdc7 /dev/sde7 /dev/sdf7

Why:
I know that about a year and a half ago mdadm had a bug in the /dev/md/0 versus /dev/md/127 numbering: first it numbered from 0 up to 127, now it numbers down from 127. How to deal with it is described on many forums.
And another bug with md/0; to solve it:
remove name=PCEUBU1:0 in /etc/mdadm/mdadm.conf
change from md/0 to md0

I mention this because it could potentially be related to this problem; it looks similar.
Since I started using "--name=something" when creating the raid0 arrays I got rid of those errors in mdadm, but not in GParted. The thing is that this error appears repeatedly only in GParted, no matter which release I use and no matter which repository it comes from. Even a version I compiled myself has the same problem, from version 0.11 onwards as far as I remember.

---
I do not know how to describe it better.
It seems to me that GParted tries by default to find mapped drives like /dev/md/0, /dev/md/1, ... and ignores that they are now named /dev/md127, /dev/md126 and so on, counting down. I really do not know. But you can see it in GParted whenever there is only one partition on the whole raid0 array.

So, both in busybox and in a normal Ubuntu instance I always have this:
 ls -lA /dev/md/
total 0
lrwxrwxrwx 1 root root  8 Nov  6 15:18 raid0btrfs1 -> ../md123
lrwxrwxrwx 1 root root 10 Nov  6 15:18 raid0btrfs1p1 -> ../md123p1
lrwxrwxrwx 1 root root 10 Nov  6 15:18 raid0btrfs1p2 -> ../md123p2
lrwxrwxrwx 1 root root  8 Nov  6 11:12 raid0ext41 -> ../md122
lrwxrwxrwx 1 root root 10 Nov  6 14:52 raid0ext41p1 -> ../md122p1
lrwxrwxrwx 1 root root 10 Nov  6 11:12 raid0ext41p2 -> ../md122p2
lrwxrwxrwx 1 root root  8 Nov  6 11:12 raid0jfs1 -> ../md125
lrwxrwxrwx 1 root root 10 Nov  6 15:02 raid0jfs1p1 -> ../md125p1
lrwxrwxrwx 1 root root 10 Nov  6 15:03 raid0jfs1p2 -> ../md125p2
lrwxrwxrwx 1 root root  8 Nov  6 15:20 raid0ocz -> ../md121
lrwxrwxrwx 1 root root  8 Nov  6 15:20 raid0oczclone -> ../md124
lrwxrwxrwx 1 root root 10 Nov  6 15:20 raid0oczclone1 -> ../md124p1
lrwxrwxrwx 1 root root 10 Nov  6 11:12 raid0oczclone2 -> ../md124p2
lrwxrwxrwx 1 root root  8 Nov  6 14:56 raid0reiserfs1 -> ../md126
lrwxrwxrwx 1 root root 10 Nov  6 14:55 raid0reiserfs1p1 -> ../md126p1
lrwxrwxrwx 1 root root 10 Nov  6 14:55 raid0reiserfs1p2 -> ../md126p2
lrwxrwxrwx 1 root root  8 Nov  6 15:19 raid0xfs1 -> ../md127
lrwxrwxrwx 1 root root 10 Nov  6 14:50 raid0xfs1p1 -> ../md127p1
lrwxrwxrwx 1 root root 10 Nov  6 14:50 raid0xfs1p2 -> ../md127p2

(So the numbers are like /dev/md127, /dev/md126, ...)

So, if GParted is guessing something like /dev/md/0 and so on, it will always show error messages, because I simply do not have that numbering in the /dev/md/ subdirectory; there are only names there. The numbers are in the /dev/ directory, starting from 127 and counting down.


I am sorry if I have caused confusion, but these reasons are only my guesses, based on my experience.
Comment 7 Phillip Susi 2012-11-06 22:04:33 UTC
Describe what you did, what happened, and what you expected to happen.  So far I can only guess that at some point you did something and parted printed "Could not stat device /dev/md/0 - No such file or directory"
Comment 8 martin.suc 2012-11-06 23:09:23 UTC
I described it in comment #3 and again in comment #6
Comment 9 Phillip Susi 2012-11-07 01:44:45 UTC
You have described what you think the cause of the problem is.  What we need are step-by-step instructions for how to reproduce the problem.  You open gparted and click on what?  Then what?  Then what response did you get?
Comment 10 Curtis Gedak 2012-11-07 01:59:43 UTC
Thank you Martin for your perseverance.  We are trying to understand where the error is coming from and the steps you use to reproduce it.


     Quote from comment #6:  "Even compiled by myself has the same problem
                              from version 11 as far as I remember."

Based on the above quote, I assume that you mean that version 0.11.0 did not have this problem, but all subsequent versions of GParted do have the problem.

One key change made since GParted 0.11.0 was to have GParted 0.12.0 and higher report libparted messages that prompted with more than one possible response.  See:
Bug #566935 - Unable to expand GPT partition when growing RAID

Prior to this, these messages were not displayed with the GUI.

Where does the error message appear?
Is the error in its own dialog box, or is it displayed somewhere else?
A screen shot would help so that we can see what you are seeing.


Does the problem occur when you start GParted?
If so, is GParted scanning drives at this point?
Or does the problem occur when you perform certain actions?
If so, which actions?
Comment 11 Phillip Susi 2012-11-07 02:08:32 UTC
Ahh, I have reproduced it and see what the problem is now.  On startup, gparted
pops up a few parted bug exception dialog boxes complaining it can't stat
/dev/md/0 or /dev/md/Volume0.  It appears that gparted is asking mdadm for the
list of raid devices by running mdadm --examine --scan, and new versions of
mdadm support "container" arrays that have no corresponding device.  For
example:

psusi@faldara:~/gparted/src$ sudo mdadm --examine --scan
ARRAY metadata=imsm UUID=b202bce6:915c0b5b:1a37ec29:e160be47
ARRAY /dev/md/Volume0 container=b202bce6:915c0b5b:1a37ec29:e160be47 member=0
UUID=5d87da9a:70e0d4e2:bf867aa5:5d60067b
ARRAY /dev/md/0 metadata=1.0 UUID=4bc5a826:ea60ac19:b35ee42c:18258a02
name=ubuntu:0


That /dev/md/Volume0 line with the container attribute seems to be causing
gparted to try to treat /dev/md/Volume0 as a disk, when no such thing exists.

In addition, mdadm reports the actual array as /dev/md/0, when it has actually
been activated as /dev/md127, probably because I do not have the array listed
in my mdadm.conf.

I think using mdadm --examine --scan is flawed.  gparted should either let
libparted supply the list of disks, or maybe should cat /proc/mdstat for the
list of active arrays.
Comment 12 Curtis Gedak 2012-11-07 02:18:24 UTC
Phillip, does /proc/mdstat only list valid RAID devices, or does it also list these "container" arrays?

Could you post your example /proc/mdstat file?
Comment 13 Phillip Susi 2012-11-07 04:29:58 UTC
It only lists active /dev/md devices within the kernel, as opposed to mdadm --scan --examine, which looks for on disk metadata and reports what it says.
Comment 14 Curtis Gedak 2012-11-07 17:48:08 UTC
Phillip, what are the steps you used to create the Linux SW RAID with these "container" arrays?

In terms of a path forward, I would like to fix the problem in GParted, rather than rely on libparted for device detection.  The reason is that the inclusion of code in parted, and the timing of parted releases, are outside of the control of the GParted project.  In addition, the choice of which version of parted is used in each distribution is outside of GParted project control.  As such I prefer to make GParted work with multiple versions of libparted, on many different distributions.

Your suggestion of reading the devices from /proc/mdstat is one approach to resolving this problem.

Another approach used in the past is to only add the device if we can read the first sector.  This check currently exists in the GParted_Core::set_devices method.  A similar check could be used with Linux SW RAID to ensure that the device path exists, and that we can read the first sector.

Yet another approach would be to refactor the method in GParted_Core::set_devices that checks for "real" devices, so that the check is performed for all devices, regardless of the type of device.
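
For illustration, a minimal standalone sketch of the "read the first sector" check mentioned above (my own simplification of the idea, not the actual GParted_Core::set_devices code):

    // Sketch only (not GParted code): accept a candidate device path only if
    // it exists and its first 512-byte sector can be read.  A path such as
    // /dev/md/0 that does not exist fails the open() and is skipped.
    #include <fcntl.h>
    #include <unistd.h>
    #include <string>

    static bool first_sector_readable( const std::string & path )
    {
        int fd = open( path.c_str(), O_RDONLY );
        if ( fd == -1 )
            return false;                        // e.g. ENOENT for /dev/md/0
        char sector[512];
        ssize_t bytes = read( fd, sector, sizeof( sector ) );
        close( fd );
        return bytes == (ssize_t)sizeof( sector );
    }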
Comment 15 Phillip Susi 2012-11-07 18:35:38 UTC
I don't recall, I just happened to still have a pair of logical volumes around with a raid array in them I had used at some point in the past to test under a VM.  Probably just the regular d-i installer setup.

Something else that struck me as odd is that gparted DOES still show /dev/md127 so it seems almost like the code in SWRaid.cc that calls mdadm --examine --scan is totally redundant, and gparted is detecting the *actual* raid arrays otherwise.
Comment 16 Curtis Gedak 2012-11-13 17:02:58 UTC
In the interest of reducing the scan time, Phillip's suggestion to read /proc/mdstat might be the best approach to this problem.

I did the following time tests on my system:

$ sudo time -o timestats.txt mdadm --examine --scan
ARRAY metadata=imsm UUID=06b7191e:a7535fc1:f851e4a7:7747fa07
ARRAY /dev/md/Vol0 container=06b7191e:a7535fc1:f851e4a7:7747fa07 member=0 UUID=d24a487a:d7e2ede0:ace438e2:42a1f7c7

$ cat timestats.txt 
0.00user 0.04system 0:00.54elapsed 8%CPU (0avgtext+0avgdata 4848maxresident)k
750inputs+0outputs (0major+380minor)pagefaults 0swaps

$ sudo time -o timestats.txt cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
unused devices: <none>

$ cat timestats.txt 
0.00user 0.00system 0:00.00elapsed ?%CPU (0avgtext+0avgdata 2560maxresident)k
0inputs+0outputs (0major+209minor)pagefaults 0swaps


Based on this testing, reading /proc/mdstat takes almost no time at all, whereas running the mdadm command takes about half a second (if I am reading the output correctly).

> Something else that struck me as odd is that gparted DOES still show
> /dev/md127 so it seems almost like the code in SWRaid.cc that calls
> mdadm --examine --scan is totally redundant, and gparted is detecting
> the *actual* raid arrays otherwise.

This is odd behavior indeed, especially since there is no "/dev/md127" in the output in comment #11.
Comment 17 Curtis Gedak 2012-11-13 17:20:42 UTC
In the output from comment #16 for mdadm --examine --scan, the name "Vol0" is from a motherboard BIOS RAID that I created.  This "Vol0" RAID is currently activated using dmraid "fake RAID".

Since "Vol0" also shows up in the output from the Linux software RAID comment mdadm, it seems that there is now some overlap between "fake RAID", and "Linux software RAID".


Phillip,

Is this the same situation with the "Volume0" RAID listed in comment #11?


Martin,

Would you be able to provide the output on your system from the following command?

   sudo cat /proc/mdstat
Comment 18 martin.suc 2012-11-13 17:38:14 UTC
sudo cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10] 
md121 : active raid0 sdk3[0] sdl1[1] sdm1[2] sdn1[3]
      232406720 blocks super 1.2 64k chunks
      
md122 : active raid0 sdc7[0] sdf7[2] sde7[1]
      230106880 blocks super 1.2 64k chunks
      
md123 : active raid0 sdc3[2] sde3[1] sdf3[0]
      76795584 blocks super 1.2 64k chunks
      
md124 : active raid0 sdc2[2] sde2[1] sdf2[0]
      76795584 blocks super 1.2 64k chunks
      
md125 : active raid0 sdc4[2] sde4[1] sdf4[0]
      76795584 blocks super 1.2 64k chunks
      
md126 : active raid0 sdc6[2] sde6[1] sdf6[0]
      76795584 blocks super 1.2 64k chunks
      
md127 : active raid0 sdc5[2] sde5[1] sdf5[0]
      76795584 blocks super 1.2 64k chunks
      
unused devices: <none>
Comment 19 Curtis Gedak 2012-11-13 18:44:07 UTC
Created attachment 228914 [details] [review]
Patch 1 to read Linux SW RAID device names from /proc/mdstat

Attached is a patch that changes GParted behaviour from using "mdadm --examine --scan" to reading active Linux SW RAID device names from /proc/mdstat.

Phillip and Martin,

Would you be able to compile and test this patch?


Instructions for getting the latest code from the git repository can be found at:
   http://gparted.org/git.php

The patch can be applied with the following command:
   git am name-of-patch.txt
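
As a rough illustration of the approach (a simplified standalone sketch, not the attached patch itself), reading the active array names out of /proc/mdstat amounts to something like:

    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Sketch only: collect active Linux SW RAID device names from
    // /proc/mdstat.  Lines of interest look like:
    //   md127 : active raid0 sdc5[2] sde5[1] sdf5[0]
    // and become "/dev/md127".
    std::vector<std::string> load_active_md_devices()
    {
        std::vector<std::string> devices;
        std::ifstream mdstat( "/proc/mdstat" );
        std::string line;
        while ( std::getline( mdstat, line ) )
        {
            std::istringstream iss( line );
            std::string name, colon;
            if ( ( iss >> name >> colon ) && colon == ":"
                 && name.compare( 0, 2, "md" ) == 0 )
                devices.push_back( "/dev/" + name );
        }
        return devices;
    }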
Comment 20 Phillip Susi 2012-11-14 03:25:58 UTC
I went ahead and stuck a return as the first statement in SWRaid::load_swraid_cache() and it still finds /dev/md127 correctly, without finding the bogus drives.  I think it is getting it from /proc/partitions, and the whole SWRaid.cc module is pointless and should be removed.
Comment 21 martin.suc 2012-11-14 10:50:06 UTC
Hi, I compiled it ("version 0.12.1-git") and did not get this error again.
I have created another raid0 for test purposes, and there were no errors.

(
Another error appeared:
gksu /usr/local/sbin/gparted from a terminal gave me:
** (gpartedbin:26431): CRITICAL **: murrine_style_draw_box: assertion `height >= -1' failed
The error appeared ONLY once.  The next runs went without that error message.  But that is probably for another investigation.
)
Comment 22 Curtis Gedak 2012-11-14 17:53:46 UTC
Thank you Phillip and Martin for testing.

Martin, with regards to the error you mentioned, this appears to be a graphics library error.  As you mentioned, it is not the focus of this bug report.

Phillip, great catch on the fact that /dev/md127 is found in /proc/partitions.  That way we can do away with SWRaid.h and SWRaid.cc entirely.  It turns out that it is a fluke that /dev/md127 gets picked up by the current code.  One of the regular expressions happens to match with md## and md###, but not md#.

I will work on a new patch set.
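
To make the regexp fluke concrete: the whole-device names that need to match are "md" followed by any number of digits (md0, md9, md127), while partition entries such as md127p1 must not match.  A hypothetical standalone check (an illustration only, not the exact expression used in the patch) would be:

    #include <regex>
    #include <string>

    // Sketch only: match whole-disk Linux SW RAID names from /proc/partitions
    // ("md0", "md9", "md127"), but not partitions ("md127p1").
    bool is_swraid_device_name( const std::string & name )
    {
        static const std::regex re( "md[0-9]+" );
        return std::regex_match( name, re );     // full-string match only
    }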
Comment 23 Curtis Gedak 2012-11-14 20:45:37 UTC
Created attachment 229007 [details] [review]
Patch 2 to improve Linux software RAID device detection

Phillip and Martin,

Would you be able to compile and test this patch?


This new patch set removes the SWRaid method entirely and relies upon detecting active Linux software RAID devices in /proc/partitions.


I have successfully tested device detection on ubuntu 8.04, ubuntu 12.04, fedora 10, and fedora 17.

I also tried partitioning the Linux software RAID devices, but partitioning of these devices seems to only be supported on recent GNU/Linux distributions, such as ubuntu 12.04, and fedora 17.

For my testing, I created two partitions of equal size (e.g., 512 MiB /dev/sdb5 and /dev/sdb6).

Next I created the RAID with the following command:
   mdadm --create /dev/md127 --level raid1 --auto=p \
         --raid-devices=2 /dev/sdb5 /dev/sdb6

I used an MSDOS partition table, so I also toggled on the RAID flag on each of these partitions.

I am interested to learn how this new patch set works on your RAID setups.
Comment 24 Phillip Susi 2012-11-15 03:33:15 UTC
Working well here.  I have applied the patch to the Ubuntu package and am preparing to upload it.
Comment 25 Curtis Gedak 2012-11-22 17:48:21 UTC
Martin,

Based on Phillip's and my testing, I am reasonably confident that the patch does address the SWRaid bug in this report.

Does it look like you will be able to test the patch in comment #23 in the near future?
Comment 26 martin.suc 2012-11-24 17:25:32 UTC
Hi,
I will test it later next week but I believe it works :-)
Comment 27 Curtis Gedak 2012-11-24 17:28:23 UTC
Thanks for the update.  I will hold off on committing this change to the code repository until you get a chance to test it next week and report back.
Comment 28 Curtis Gedak 2012-11-27 17:34:15 UTC
*** Bug 689160 has been marked as a duplicate of this bug. ***
Comment 29 Leo 2012-11-28 04:27:32 UTC
I can confirm that the patch fixes the problem on my Ubuntu 12.10 64bit version.

Thanks
Comment 30 Leo 2012-11-28 07:59:36 UTC
Hmmm, spoke too soon.

I deleted the (test) array and rebooted; on one of the disks, whenever I get to device number 5, it fails with a "device is busy" error even though the entire disk was reformatted.
Comment 31 Leo 2012-11-28 08:19:43 UTC
I have done a bit more testing.  It makes no difference what I do (i.e. reformat the disk, reboot); at some stage when I create a new partition, I get a device busy error.
Comment 32 Leo 2012-11-28 08:39:48 UTC
ok. uninstalled the patched version and reinstalled support.

First thing it did was to complain about not being able to stat /dev/md/2. Not sure why since this array doesn't exist and I've formatted the disks at least twice.

There's definitely an issue with /dev/sdd5 as this is always busy no matter what I do.

Any suggestions?
Comment 33 martin.suc 2012-11-28 11:00:19 UTC
With patch 2 I got an error:

Applying: Tighten up regexp for HP Smart Array Devices
Applying: Add regexp for Linux SW RAID devices in /proc/partitions (#678379)
Applying: Remove SWRaid method as it is no longer needed (#678379)
error: patch failed: src/SWRaid.cc:1
error: src/SWRaid.cc: patch does not apply
Patch failed at 0003 Remove SWRaid method as it is no longer needed (#678379)
Comment 34 Curtis Gedak 2012-11-28 16:33:57 UTC
@Leo, did the patch set apply cleanly?

Also, would you be able to provide the output from the following two commands?

    sudo mdadm --examine --scan

and

    cat /proc/partitions
Comment 35 Curtis Gedak 2012-11-28 16:39:16 UTC
@martin.suc, it appears that the patch set will not cleanly apply to your copy of the source code.

To test, I performed the following steps:

user@pc:~/tmp$ git clone git://git.gnome.org/gparted
Cloning into 'gparted'...
remote: Counting objects: 12219, done.
remote: Compressing objects: 100% (5707/5707), done.
remote: Total 12219 (delta 9829), reused 7713 (delta 6461)
Receiving objects: 100% (12219/12219), 4.86 MiB | 634 KiB/s, done.
Resolving deltas: 100% (9829/9829), done.
user@pc:~/tmp$ cd gparted/
user@pc:~/tmp/gparted$ git am ../bug678379-SWRaid.patch
Applying: Tighten up regexp for HP Smart Array Devices
Applying: Add regexp for Linux SW RAID devices in /proc/partitions (#678379)
Applying: Remove SWRaid method as it is no longer needed (#678379)
user@pc:~/tmp/gparted$ ./autogen.sh && make
<snip>
user@pc:~/tmp/gparted$ sudo make install

These above steps worked for me.  Would you be able to try these steps on your system?

Note that the "sudo make install" command should by default install gparted into the /usr/local directory tree.  You can remove this copy of gparted later with "sudo make uninstall".
Comment 36 Leo 2012-11-28 20:37:04 UTC
(In reply to comment #34)
> @Leo, did the patch set apply cleanly?
> 
> Also, would you be able to provide the output from the following two commands?
> 
>     sudo mdadm --examine --scan
> 
> and
> 
>     cat /proc/partitions

Yes. I think everything re. the patch was and still is ok.

I think the problem I have now is not related to the patch at all so I think the patch fixes the issue
Comment 37 Leo 2012-11-28 21:15:56 UTC
(In reply to comment #34)
> @Leo, did the patch set apply cleanly?
> 
> Also, would you be able to provide the output from the following two commands?
> 
>     sudo mdadm --examine --scan
> 
> and
> 
>     cat /proc/partitions

Yes. I think everything re. the patch was and still is ok.

I think the problem I have now is not related to the patch at all so I think the patch fixes the issue
Comment 38 Leo 2012-11-28 21:17:55 UTC
Sorry, but I don't have the array any longer due to the other issue I mentioned above; from memory, both sudo mdadm --examine --scan and cat /proc/partitions reported the new array correctly.
Comment 39 Curtis Gedak 2012-11-28 21:39:52 UTC
@Leo, do you recall what sdd5 was used for previously?

I was wondering if the operating system still considered it part of the RAID.  Alternatively, if the space was linux-swap, then maybe it was swapped on (i.e., active).

The output from sudo mdadm --examine --scan can still be useful, especially if it says anything about sdd5.  It might provide a clue as to why sdd5 is listed as busy.
Comment 40 martin.suc 2012-11-29 11:17:40 UTC
Hi, I did a clean git clone and patch 2 applied successfully.
(Sorry, I do not know what went wrong with the previous git-updated and cleaned gparted source tree, where patch 2 would not apply.)

I made a test with the freshly compiled gparted 0.14.0-git version,
compared it with gparted version 0.14.0 installed from the repository,
and did not get this error.

(By the way, I have also noticed that even version 0.14.0, which I have been using lightly over the last couple of days and which is without patch 2, has not produced this error either.)
(Generally, testing will be more comprehensive when I start using gparted heavily again.)
Comment 41 Curtis Gedak 2012-11-29 16:54:09 UTC
Thank you martin.suc for your testing.  It looks like the patch set from comment #23 addresses the problem so we will see about getting this improvement into the next release of GParted.
Comment 42 Mike Fleetwood 2012-12-04 13:54:45 UTC
Hi Curtis,

I just visually reviewed patch number 1/3 of "Patch 2 to improve Linux
software RAID device detection" from comment #23.

I think you have tightened the regexp for matching the path of
partitions on hardware RAID controllers too much.  You look for devices
like /dev/cciss/c0d0.  This is for Compaq/HP Smartarray RAID controllers
only.  I just happen to have access to an old machine with a Compaq
SMART2 controller.  Can't try GParted on it, but here's the partition
table for reference:
    # cat /proc/partitions
    major minor  #blocks  name

      72     0   35561280 ida/c0d0
      72     1    1036161 ida/c0d0p1
      72     2   34523685 ida/c0d0p2

The standard Linux kernel appears to support 3 different hardware RAID
controller families, each with different device naming:

1) Compaq/HP Smartarray RAID controller
   /dev/cciss/c0d0
   (linux-x.y.z/Documentation/blockdev/cciss.txt)

2) Compaq SMART2 Intelligent Disk Array controller
   /dev/ida/c0d0
   (linux-x.y.z/Documentation/blockdev/cpqarray.txt)

3) Mylex DAC960/AcceleRAID/eXtremeRAID PCI RAID Controllers
   /dev/rd/c0d0
   (linux-x.y.z/Documentation/blockdev/README.DAC960)

Suggest either 3 separate regexps to match these:
    cciss/c0d0
    ida/c0d0
    rd/c0d0
or a more general regexp like "... [a-z]+\/c[0-9]+d[0-9]+ ...".

Will look at the other patches over the next few days.

Thanks,
Mike
Comment 43 Curtis Gedak 2012-12-04 17:35:38 UTC
Created attachment 230679 [details] [review]
Patch 3 to improve Linux software RAID device detection

Thank you Mike for code review.

Good catch on the regular expression being too tight!

I used your suggestion of a more general regular expression to match the three hardware RAID controllers.

The difference between this 3rd patch and the earlier one is the change in regular expression from:

   "... (cciss/c[0-9]+d[0-9]+)$"

to:

   "... ([a-z]+/c[0-9]+d[0-9]+)$"

And of course updates to the comments and commit message.  :-)

I have tested this patch with a hand-built "/proc/partitions" file: the three hardware RAID controller devices are recognized, and the /dev/md/0 software RAID devices are correctly excluded by the above listed regular expression.
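
For clarity, the effect of the generalized expression can be illustrated with a small standalone check (an illustration of the idea only, not the patch code):

    #include <regex>
    #include <string>

    // Sketch only: match hardware RAID controller whole-device names in
    // /proc/partitions: "cciss/c0d0", "ida/c0d0", "rd/c0d0".  Partition
    // names such as "cciss/c0d0p1" and SW RAID names such as "md/0" or
    // "md127" do not match.
    bool is_hw_raid_device_name( const std::string & name )
    {
        static const std::regex re( "[a-z]+/c[0-9]+d[0-9]+" );
        return std::regex_match( name, re );
    }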
Comment 44 Mike Fleetwood 2012-12-05 21:56:31 UTC
Hi Curtis,

Observation.  GParted only detects SW RAID arrays which are active, both
before and after this change.  However, running GParted actually ends up
starting stopped arrays, though only after GParted has already scanned for
devices, so they are not shown until a refresh.  (It might be device
probing via libparted and udev which is doing it.  This is on Fedora 14.)

My setup:
2 hard drives sda & sdb.
2 partitions on sda, sda14 & sda15 combined into a SW array.
    # cat /proc/mdstat
    Personalities : [linear]
    md0 : active linear sda15[1] sda14[0]
          2095104 blocks super 1.2 0k rounding
          
    unused devices: <none>

Stop array.
    # mdadm --stop --scan  
    mdadm: stopped /dev/md0
    # cat /proc/mdstat 
    Personalities : [linear] 
    unused devices: <none>

Run GParted.  Doesn't detect disk /dev/md0, but array has been started.
    # cat /proc/mdstat
    Personalities : [linear]
    md0 : active linear sda14[0] sda15[1]
          2095104 blocks super 1.2 0k rounding

    unused devices: <none>

Refresh devices, or re-run GParted and it displays /dev/md0 as a disk.


Just an observation.  Review passed.  Ready to commit.

Thanks,
Mike
Comment 45 Curtis Gedak 2012-12-06 01:39:22 UTC
Thank you Mike for the detailed review.

That is interesting about the SW RAID being started on Fedora 14 after running GParted.  Since GParted does not contain any SW RAID commands, such as mdadm to start or stop SW RAID, there must be some other subsystem that is invoking this behaviour.

I performed the same steps on Ubuntu 12.04, but once the SW RAID is stopped, the SW RAID stays stopped even after running GParted.


This patch set has been committed to the git repository for inclusion in the next release of GParted (0.14.1) on Dec. 12, 2012.

The relevant git commits can be viewed at the following links:

Tighten up regexp for HP Smart Array Devices
http://git.gnome.org/browse/gparted/commit/?id=f003b3c4f2613a70011c5fe4ba850f90c24d7e2c

Add regexp for Linux SW RAID devices in /proc/partitions (#678379)
http://git.gnome.org/browse/gparted/commit/?id=c600095912f5c38e01d71d461765be665e2bdebe

Remove SWRaid method as it is no longer needed (#678379)
http://git.gnome.org/browse/gparted/commit/?id=de99c530d45877f7bce1b0c0f616c03e8c1ebb89
Comment 46 Curtis Gedak 2012-12-12 18:12:38 UTC
The enhancements to address this bug report have been included in GParted 0.14.1 released on December 12, 2012.