Bug 608712 - GParted fails to edit/delete partitions if active LVM on same disk device
Status: RESOLVED FIXED
Product: gparted
Classification: Other
Component: livecd
Version: 0.5.1
OS: Other Linux
Priority: Normal
Severity: major
Target Milestone: ---
Assigned To: gparted maintainers alias
QA Contact: gparted maintainers alias
Depends on:
Blocks:
 
 
Reported: 2010-02-01 19:13 UTC by bitnomad
Modified: 2010-05-03 23:35 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
saved page from gparted error window (1.91 KB, text/html)
    2010-02-01 19:13 UTC, bitnomad
saved details of 5.1-1 failures and success (3.10 KB, application/zip)
    2010-02-03 01:27 UTC, bitnomad
Files from gparted research on error (77.00 KB, application/zip)
    2010-02-04 21:41 UTC, bitnomad
Saves of new code failure (1.66 KB, application/zip)
    2010-02-04 23:52 UTC, bitnomad
Latest test. Multiple delays did not help.... (11.12 KB, text/html)
    2010-02-05 22:36 UTC, bitnomad
Volume Manager data (63.50 KB, application/zip)
    2010-02-06 00:13 UTC, bitnomad

Description bitnomad 2010-02-01 19:13:44 UTC
Created attachment 152755 [details]
saved page from gparted error window

When trying to change partitions or perform any other action on a two SCSI drive configuration (sda1, sda2, sda3 and sdb1, sdb2, sdb3), changing sda2 or sda3 makes an error dialog pop up after pushing the Apply button.

This does not happen with version 0.4.6-1. This is a hard failure. Attached is one of three saved error message files.

How can I help?
Comment 1 Curtis Gedak 2010-02-02 01:34:22 UTC
What type of hard drive(s) do you have?
How are these connected to your computer (IDE, SATA, USB)?

From reviewing the log file, the partition was deleted but GParted failed to inform the kernel of the partition changes.  Hence on a subsequent reboot you should find that the partition was deleted.

GParted 0.4.6 had a bug in that it would not report an error when this condition occurred.  Instead it would blindly continue on assuming that the kernel's image of the partition table had been updated.  This would lead to problems later on, such as when trying to format a newly created partition that the kernel was unaware of.

GParted 0.4.6-1 also used an older version of the libparted library from the parted project (1.8.8).

GParted 0.5.1-1 uses parted 1.9.0.
Comment 2 bitnomad 2010-02-02 02:05:34 UTC
The two hard drives are:
Seagate ST336706LW (SCSI ID 0) as sda1, sda2, sda3
--sda1 is Windows Server 2003; sda2 and sda3 are for Linux

Seagate ST3146707LW (SCSI ID 1) as sdb1, sdb2, sdb3
--sdb1 is Windows drive E and sdb2 is Windows drive F. sdb3 is unallocated.

Connected with an Adaptec 29160 Ultra160 controller.

Yes, on reboot the changes are there. Is there any way to eliminate the pop-up error?
Comment 3 Curtis Gedak 2010-02-02 16:41:47 UTC
(In reply to comment #2)
> Is there any way to eliminate the pop-up error?

If the error has occurred, then it is important for GParted to stop operations and present the error to the user.

The challenge in this situation is to figure out why the error occurred.  There has been much work on this problem, as can be seen in bugs #601574 and #604298.

Would you be able to try the same operation using GParted Live 0.4.6-1, and save the gparted_details.htm log file to post here?

To see if this is a random occurrence would you be able to try the operation again with GParted Live 0.5.1-1?
Comment 4 bitnomad 2010-02-03 00:17:58 UTC
I will do these two actions and be very careful to document the sequence. Before I reported this bug I had 4 actions to execute on sda2 and sda3: remove both partitions and then add two partitions. All failed with 0.5.1-1 and all worked with 0.4.6-1. I will collect the logs and post them here.
Comment 5 bitnomad 2010-02-03 01:27:41 UTC
Created attachment 152899 [details]
saved details of 5.1-1 failures and success
Comment 6 bitnomad 2010-02-03 01:43:21 UTC
On the first try booting the 0.5.1-1 livecd everything worked. I decided to re-create the configuration that failed. The action list:
1. Used 0.4.6-1 to remove all partitions except sda1 (Windows) on SCSI ID 0
2. Loaded Fedora 12. Took all of the standard partition sizes (200 MB boot, 2000 MB swap, 7709 MB system).
3. Booted to Fedora 12 (dual boot system)
4. Rebooted to Server 2003
************** Now configured back to the beginning of yesterday ***********
5. Rebooted to systemrescuecd 1.3.5 (has gparted 0.5.1-1). This is where I started yesterday.
6. In systemrescuecd, with gparted, tried to remove sda2 and sda3. Both failed. Saved details.
7. Tried to add a partition. It failed. Saved details.
8. Booted to the 0.5.1-1 livecd.
9. Tried to add a partition. It worked. Saved details.
10. Rebooted to Server 2003
11. Rebooted to rescuecd.
12. Ran gparted 0.5.1. It worked as in step 9.

I know this isn't exactly what you asked for, but at first everything worked. Now what?
Comment 7 Curtis Gedak 2010-02-03 16:38:46 UTC
From the log files you have provided, I can see that the workaround I implemented in GParted 0.5.1 is not working in your situation.

Are you in a position to compile code to test out possible solutions?
Comment 8 bitnomad 2010-02-03 19:27:41 UTC
I am still a little new to Linux, so you may need to be very specific in your requests. I worked with the fsarchiver developer to debug it. He would send a .gz and then I would run ./configure && make and then run the code. He chose the OS and I acted as the tester.

I could do the same for this bug. I can set up another system to run whatever OS you choose, compile there and do as you need. For fsarchiver we used Fedora 12. That is fine on my other system. It doesn't work on the system with this bug. Too many kernel crashes and system hangs leading to a Linux BSOD (black screen of death) on reboot.
Comment 9 Curtis Gedak 2010-02-04 18:36:02 UTC
Since the problem occurs when you are running from the
systemrescuecd-1.3.5, we will try to use a running copy of that CD for
performing our tests.

I have compiled a static binary "gpartedbin" for you to try.

Following are the steps to install and use this pre-compiled static
binary "gpartedbin".


INSTALL STATIC BINARY "gpartedbin"
==================================

1)  Reboot your computer using systemrescuecd-1.3.5.

2)  Start networking (my network card is known as eth0).

    $ net-setup eth0

    In answer to the prompts I used a wired network and chose DHCP to
    receive an IP address.

3)  Start up an X session to have a graphical desktop

    $ startx


4)  Start up the Firefox web browser and download the already compiled
    GParted binary.  I simply accepted the default "Downloads"
    directory to store the file.

    http://gparted.sourceforge.net/curtis/gparted-0.5.1-20100203-bin.tar.bz2

5)  Close down Firefox and the "Downloads" window.


6)  Using the terminal window, extract the already compiled GParted
    binary into the /usr/sbin directory.

    $ tar -C /usr/sbin -xvjf ~/Downloads/gparted-0.5.1-20100203-bin.tar.bz2

7)  Start GParted and check the menu "Help --> About" to confirm you
    are using the test version.  It should be version "0.5.1-20100203".


    You are now ready to perform your testing to see if the bug
    re-appears.



--------------------------------------------------
If you are curious, I used the following steps to build the executable
on my Ubuntu 8.04 LTS GNU/Linux distribution.


BUILD STATIC BINARY "gpartedbin"
================================

NOTE:  You do not need to perform these steps. :-)

1)  Download the gparted test code.

    http://gparted.sourceforge.net/curtis/gparted-0.5.1-20100203.tar.bz2

2)  Extract the GParted source code.

    $ tar -xvjf ~/Downloads/gparted-0.5.1-20100203.tar.bz2

3)  Compile and install the GParted source code.

    $ cd gparted-0.5.1-20100203
    $ ./configure --disable-scrollkeeper --disable-doc --enable-static \
                  --prefix=/usr
    $ make
    $ make install

4)  Create a tarball of the gpartedbin executable.

    $ tar -C src/ -cvjf gparted-0.5.1-20100203-bin.tar.bz2 gpartedbin
Comment 10 bitnomad 2010-02-04 21:41:02 UTC
Created attachment 153040 [details]
Files from gparted research on error

Please look at the ..._4.jpeg and ..._5.jpeg screenshots. I may have led you off to solve a problem on something that isn't supported. The web docs said the triangle with an exclamation point means a missing label, but gparted 0.4.6-1 and 0.5.1-1 say in _5.jpeg that LVM is not supported.
Comment 11 Curtis Gedak 2010-02-04 22:16:41 UTC
The screen shots look fine to me.  And you are correct that Logical Volume Management is not yet supported.

In your first few posts, the problem that occurred was that when deleting or creating a partition you would receive an error dialog.  After saving the gparted_details.htm log file, it would contain a libparted message such as:

     The kernel was unable to re-read the partition table on
     /dev/sda (Device or resource busy). This means Linux won't
     know anything about the modifications you made until you
     reboot. You should reboot your computer before doing
     anything with /dev/sda.

When using the pre-compiled static binary "gpartedbin", do you still receive the error dialog?

If so, does it contain the same libparted message?

Please save and post the gparted_details.htm log file if it does.

Also, which web docs said that a triangle with an exclamation point is a missing label?
Comment 12 bitnomad 2010-02-04 23:52:50 UTC
Created attachment 153052 [details]
Saves of new code failure

Here are two sets of details from trying to delete sda2 and sda3.

The web page where I misunderstood the triangle graphic is here, about 8 screenshots down:
http://gparted.sourceforge.net/screenshots.php
Comment 13 Curtis Gedak 2010-02-05 20:09:46 UTC
The 8th screen shot on the web page is referring to a disk label, also known as a partition table.  This is an old screen shot that I need to update because GParted now uses the term "partition table" instead of "disk label".  Thank you for bringing this to my attention.  :)


The pre-compiled static binary "gpartedbin" (gparted-0.5.1-20100203-bin.tar.bz2) contained code that changed a pause of 1 second to a call to the udev settle command.

From looking at the log files, I can see that this change did not fix the problem.


This next test involves using a pause of 1 second and a call to udev settle, both up to 30 times if needed.
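
In shell terms, the retry logic being tested is roughly the following. This is only a sketch of the approach, under the assumption that blockdev and udevadm are available in the environment; GParted implements the equivalent internally, and blockdev stands in here for the kernel re-read request.

    # Hedged sketch: retry the partition table re-read up to 30 times,
    # pausing 1 second and waiting for the udev event queue to empty
    # between attempts.
    $ tries=0
    $ until blockdev --rereadpt /dev/sda; do
          tries=$((tries + 1))
          [ "$tries" -ge 30 ] && break
          sleep 1
          udevadm settle --timeout=10
      done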

Would you be able to test this new pre-compiled static binary?

Please post the log file of either success or failure when you complete the test.


Following are the steps to install and use this pre-compiled static
binary "gpartedbin".


INSTALL STATIC BINARY "gpartedbin"
==================================

1)  Reboot your computer using systemrescuecd-1.3.5.

2)  Start networking (my network card is known as eth0).

    $ net-setup eth0

    In answer to the prompts I used a wired network and chose DHCP to
    receive an IP address.

3)  Start up an X session to have a graphical desktop

    $ startx


4)  Start up the Firefox web browser and download the already compiled
    GParted binary.  I simply accepted the default "Downloads"
    directory to store the file.

    http://gparted.sourceforge.net/curtis/gparted-0.5.1-20100205-bin.tar.bz2

5)  Close down Firefox and the "Downloads" window.


6)  Using the terminal window, extract the already compiled GParted
    binary into the /usr/sbin directory.

    $ tar -C /usr/sbin -xvjf ~/Downloads/gparted-0.5.1-20100205-bin.tar.bz2

7)  Start GParted and check the menu "Help --> About" to confirm you
    are using the test version.  It should be version "0.5.1-20100205".


    You are now ready to perform your testing to see if the bug
    re-appears.
Comment 14 bitnomad 2010-02-05 22:36:39 UTC
Created attachment 153111 [details]
Latest test. Multiple delays did not help....

Here is the file from the test. I am not sure what idea you are using to troubleshoot this one, and I can't really say, but after many years of debugging hardware and software, it just feels like it's broken before we start any action. Some form of illegal state... Not sure, just a guess.

After the error message, the self-refresh works fine. Is the graphic displayed from this refresh a direct read from the partition table, or does it go through the kernel, the same one that fails the read-after-update verification?
Comment 15 bitnomad 2010-02-06 00:13:59 UTC
Created attachment 153119 [details]
Volume Manager data

While booting up and shutting down systemrescuecd 1.3.5 I noticed that the volume manager was being used to set up the vg_reservoir volume and was being shut down using the vg_volume even after I had deleted /dev/sda3, the only volume in the volume manager list.

I took a snapshot of the Fedora partition manager before it created the partitions, and the text file is the volume manager list before and after using gparted 0.5.1 (not any test version) to delete (with error) /dev/sda3.

Not sure if this helps. It looks like the volume manager doesn't know what happened.
Comment 16 Curtis Gedak 2010-02-06 15:28:19 UTC
(In reply to comment #14)
> After the error message, the self-refresh works fine. Is the graphic displayed
> from this refresh a direct read from the partition table, or does it go through
> the kernel, the same one that fails the read-after-update verification?

GParted uses the libparted library from the Parted project to detect and manipulate partition tables.  This library reads and writes information directly to the disk.  Hence this library always has an accurate view of the current partition table.

The GNU/Linux kernel reads the partition table on boot up.  When libparted is used to change the partition table, the kernel will not be aware of any changes until libparted issues an ioctl() call to instruct the kernel to re-read the partition table.  If this ioctl() call fails, then the kernel will still have an old (now incorrect) view of the partition table.
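
For reference, the same re-read request can be issued from a shell with the blockdev utility from util-linux; when any partition on the disk is busy, it fails with the same "Device or resource busy" error quoted in comment 11.

    # Ask the kernel to re-read the partition table on /dev/sda
    $ blockdev --rereadpt /dev/sda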


From the information you provided, I think we might be dealing with two separate problems here:

1)  Failure to inform the kernel when an unmounted, non-lvm2 partition is deleted, such as an ext4 partition.

2)  Failure to delete a Logical Volume Management 2 Physical Volume (lvm2-pv) while the volume is active.  GParted should probably indicate that this space is active and cannot be deleted.


With problem 1, the library libparted (and GParted) see the partition as being deleted.  However, the kernel is not aware that the partition table was changed, and hence will continue to still see the deleted partition.

This problem will disappear when the system is rebooted because the kernel will then read the actual partition table and hence have the correct view.


With problem 2, it would appear that the volume manager has activated the space used in the lvm2-pv.  However, GParted is not aware that this space is in use and hence GParted should not permit deletion of active disk space.  This problem should have its own bug report.


Going back to problem 1, would you be able to try deleting an unmounted, non-lvm2 partition using the pre-compiled static binary "gpartedbin" from 20100205?
Comment 17 bitnomad 2010-02-08 22:21:58 UTC
Tested several different disk configurations:
sda1 as NTFS, and then adding ext4, then NTFS, using gpartedbin. Added and deleted just fine. No problems.

Went back to Fedora 12 to install and set up the LVM configuration, with an ext4 (about 7.5 GiB) and an NTFS (2 GiB) partition. This was to force the installer to put Fedora between two NTFS partitions. I was testing to see whether, once LVM is called, so to say, it is possible to delete partitions below or above the LVM group.

gparted-0.5.1 and gpartedbin both fail with the "kernel cannot..." error when trying to delete a partition below (sda2) the LVM group, and above the LVM group (the small NTFS partition). Summarized: once LVM is called on boot up, gparted does not work... maybe it shouldn't, as you said earlier.

In one test I had sda1 as NTFS, sda2 as Linux boot, then the LVM group, and finally the NTFS partition. I ran vgremove and removed the vg_reservoir group before running gparted and gpartedbin. Both worked.
Comment 18 Curtis Gedak 2010-02-17 17:39:08 UTC
Thank you for the additional testing.

Based on your results, I would have to agree that once LVM is activated then it appears that other non-LVM partitions on the same physical disk cannot be deleted.

Support for Logical Volume Management has been requested in bug #160787.

At the moment the GParted Live CD does not activate the LVM volumes.  As such GParted Live should be able to properly delete non-LVM partitions.

Would you be able to test this with GParted Live 0.5.1-1?
Comment 19 bitnomad 2010-02-19 18:12:22 UTC
Here are the findings from your request.
- Had SUSE Linux, which does not use LVM, loaded on the test computer.
- Booted gparted 0.5.1-1 and watched the boot messages. LVM is called but it found nothing.
- Completed the boot into gparted 0.5.1-1 and opened a terminal. Typed vgdisplay -A and it found nothing (that is correct, no LVs).
- Used gparted 0.5.1-1 to clean SUSE off physical disks 0 and 1.
- Installed Fedora 12, which uses an LV and LVM.
- Booted up gparted 0.5.1-1 and watched the boot messages. LVM is called and it did find the two lv_... logical volumes created by Fedora 12.
- Stayed in gparted 0.5.1-1 and typed vgdisplay -A. It did find and report the logical-to-physical map.
- Used gparted 0.5.1-1 to work with physical disk 1 (the second disk). It works just fine. Did not try physical disk 0 this time; I have tried that many times with gparted 0.5.1-1 and it always fails with a kernel update error. This is what caused my initial bug report. I did not understand that gparted could not delete any partitions on disk 0 because of an active LVM.
Comment 20 Curtis Gedak 2010-02-19 18:24:58 UTC
(In reply to comment #19)
> I did not understand that gparted could not delete any
> partitions on disk 0 because of an active LVM.

This has been a learning experience for me too.  I was not aware that this was the case.

Thank you for the additional testing.


Based on what we have discovered, I would like to change the title of this bug to something more representative like:

     GParted fails to edit/delete partitions if active LVM on same disk device

Does this title look better to you?  If not then what title would you suggest?


To resolve this problem will require more investigation into how LVM works.  There are several other important bugs that I plan to work on first.

On the plus side, you have already discovered a workaround that involves de-activating LVM prior to working on non-LVM partitions.  :)
Comment 21 bitnomad 2010-02-19 21:28:33 UTC
The new title you proposed is fine. I know there are more important bugs, but if you have time maybe you could have gparted check if LVM is active and not allow any actions on the disk. Don't repartition; just stop it for now.

For your consideration...
I have run into logical disks many times, mostly on Windows, just not in Linux. They are very tricky. It's not the logical versus the physical that is the real problem; it is the backup that causes the problem. Backup software is generally pointed at physical devices / partitions. So someone changing a physical partition, even though they thought they had backed up the data for the partition / system, may have just corrupted the OS, not knowing of the logical partition that spans multiple physical partitions / devices.

If you need anything else let me know.
Comment 22 Curtis Gedak 2010-02-19 21:44:42 UTC
(In reply to comment #21)
> The new title you proposed is fine. I know there are more important bugs, but
> if you have time maybe you could have gparted check if LVM is active and not
> allow any actions on the disk. Don't repartition; just stop it for now.

This sounds like an excellent suggestion.  Especially since the behaviour is similar to when a partition is mounted.  In such a situation with newer GNU/Linux kernels and parted, editing of the partition table is prohibited.

Is there a command that would indicate if a partition or disk is part of an active LVM volume?

E.g.,
   Command-Is-Device-Path-Active-LVM /dev/sda

If so, I might be able to implement your suggestion more quickly.
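
One possible approximation of such a check, not mentioned in this thread but noted here for illustration, is to look at a partition's sysfs "holders" directory, which is non-empty while a device-mapper (LVM) volume is built on top of it. The device names are taken from the examples in this report.

    # Non-empty output means dm/LVM devices currently hold /dev/sda3
    $ ls /sys/block/sda/sda3/holders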
Comment 23 bitnomad 2010-02-22 22:26:06 UTC
Tried to find the most effective command to determine if the LVM is active. Found many, but none that could be called with a specific device as you ask for in your example.

Did find that pvscan works the best and returns the most useful data with the fewest characters. Examples below:
With no logical volumes
# pvscan
no matching volumes

With one active volume on one disk
# pvscan
  PV /dev/sda3   VG vg_reservoir   lvm2 [9.57 GB / 0    free]
  Total: 1 [9.57 GB] / in use: 1 [9.57 GB] / in no VG: 0 [0   ]

With two physical partitions tied to one volume
# pvscan
  PV /dev/sda3   VG vg_reservoir   lvm2 [9.57 GB / 0    free]
  PV /dev/sdb3   VG vg_reservoir   lvm2 [16.96 GB / 16.96 GB free]

My view is that it is easy to test first for no return or, if there is one, look for "PV /dev/sd..." and keep track of which physical disks are being used. In the case above it is sda and sdb, so gparted would not act on any partitions on these two physical disks. A sketch of such a test follows.
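
A minimal sketch of that test, assuming the pvscan output format shown above (the parsing is illustrative only and would need hardening for other device naming schemes):

    # List the physical disks that hold LVM physical volumes by
    # stripping the partition number from each PV device path.
    # For the two-disk example above this prints /dev/sda and /dev/sdb.
    $ pvscan | awk '/PV \/dev\// { print $2 }' | sed 's/[0-9]*$//' | sort -u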
Comment 24 Curtis Gedak 2010-02-23 20:31:58 UTC
If the volume group is not active, does pvscan return the same information?

Ideally I am looking for a command that would only list the LVM volumes when these are active.  If the volumes are inactive, then the problem in this report should not occur.
Comment 25 bitnomad 2010-02-24 18:24:42 UTC
Some of the information in this post I just learned over the past few days, so it may not be very clear.

I am not certain what you mean by "...LVM volumes ...are active". With testing I found that Debian used with gparted 5.1-1, Gentoo used with systemrescuecd, Fedora 12, and Suse 11.1 all load and use the Logical Volume Manager during boot up.

So we can say that a physical volume that is part of a logical volume group becomes active during boot up. But I think there might be a simple way for gparted to decide what to do.

Once the installation disk or the OS administrator creates the physical volumes with the pvcreate command and then assigns each physical volume to a volume group, two things are done in the OS:
1. The partition that became the physical volume is reformatted to a file system of type LVM or LVM2. In testing, the partition /dev/sdb3 of file type ext4 was reformatted to LVM2.
2. The names of the logical volumes are placed in the /dev/mapper directory.

Whether the OS is actually using the logical volume at the time may not be a concern. Through actions by the installer or the administrator, logical volumes were created and their file systems changed.

Since gparted already detects file system types, maybe all it needs to do is decide whether a file system of type LVM or LVM2 is found on any partition (see the sketch after this comment). If so, that entire hard drive should not be touched; if not, proceed with the user-requested actions.

There are commands to extend physical volumes in an LVM file system, but to me that is something the system admin should do, not gparted.
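
For illustration, the signature test suggested above can be approximated with blkid, assuming it is available in the environment; GParted itself uses its own file system detection code.

    # List partitions carrying an LVM2 physical volume signature;
    # any disk on which one appears would be left untouched.
    $ blkid -t TYPE=LVM2_member -o device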
Comment 26 Curtis Gedak 2010-02-24 19:43:06 UTC
(In reply to comment #25)
> I am not certain what you mean by "...LVM volumes ...are active". With testing
> I found that Debian used with gparted 5.1-1, Gentoo used with systemrescuecd,
> Fedora 12, and Suse 11.1 all load and use the Logical Volume Manager during
> boot up.

In the last paragraph of comment #17, you mentioned using some commands to "deactivate" (my word) the volume group.  After doing this you were able to successfully edit the partition table.

I was looking for a way to determine if the LVM-PV's were active (similar to a mounted partition), versus inactive (similar to an unmounted partition).

With newer GNU/Linux distributions it appears that the partition table can only be edited when there are no mounted partitions and, as we discovered, when there are no active LVM-PV's.  This restriction to partition table editing when partitions are mounted was not present in earlier GNU/Linux distributions, such as Ubuntu 8.04 LTS.

My hope was that there was a command that would indicate if a disk device contained LVM volumes that were either activated or deactivated.  That way I would be able to easily distinguish if partition editing should be disabled or enabled.
Comment 27 bitnomad 2010-02-24 19:52:55 UTC
Thanks for the clarification. Yes, I was able to de-activate the LV-PV configuration, but the word de-activate is not correct. To be more accurate, I used LVM commands to take apart the LV-to-PV structure, and then gparted worked. In other words, I disassembled the LVM disk structure. The pvremove etc. commands made the OS not work, but made gparted work.
Comment 28 Curtis Gedak 2010-02-28 16:43:05 UTC
Do you know if there are command(s) to turn off the LVM volumes without taking them apart or destroying these volumes?
Comment 29 bitnomad 2010-03-02 21:45:49 UTC
Looked into the LVM commands. There isn't one that really stops LVM. To make the volume group unavailable (offline, so to say), first the logical volumes have to be made unavailable and then the volume group, with vgchange -an.

Even after setting the LVs and VG offline, the files in the /dev/mapper directory still contain the map (start and end of the physical volumes), so changing anything about the sizes of the physical volumes without changing the maps could crash the system.
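
For what it is worth, the sequence described would look roughly like the following; the volume group name is taken from the earlier examples and lv_root is a hypothetical logical volume name.

    # Deactivate the logical volume(s), then the volume group
    $ lvchange -an /dev/vg_reservoir/lv_root
    $ vgchange -an vg_reservoir

    # ... edit partitions with gparted ...

    # Re-activate the volume group and its logical volumes
    $ vgchange -ay vg_reservoir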
Comment 30 Curtis Gedak 2010-03-05 15:44:26 UTC
Thank you for looking further into LVM.  From your research it does not seem that there is an easy way to enable and disable LVM volumes.

It looks like this is a problem that requires a good understanding of LVM to address.  As such it would appear that this problem will have to wait until I have time to learn more about LVM.
Comment 31 Markus Elfring 2010-03-07 15:32:34 UTC
(In reply to comment #30)

I am also interested in the involved technical details of "logical" volume management.
Comment 32 Curtis Gedak 2010-05-01 17:05:09 UTC
leonbene1, would you be able to test again with GParted Live 0.5.2-0?

As I write this comment this version is in the testing branch, though we might move it to the stable branch soon.

The difference with this version is that three additional patches have been applied to parted-2.2.  These patches are:

libparted: don't canonicalize /dev/mapper paths
http://git.debian.org/?p=parted/parted.git;a=commit;h=c1eb485b9fd8919e18f192d678bc52b0488e6ee0


libparted: reenable use of BLKPG ioctls
http://git.debian.org/?p=parted/parted.git;a=commit;h=0e04d17386274fc218a9e6f9ae17d75510e632a3


libparted: improve BLKPG error checking
http://git.debian.org/?p=parted/parted.git;a=commit;h=7165951dfb584aae2901ac3f1a28fe3624667f19


The above three patches should improve support for DMRAID, and permit
editing unmounted partitions on devices that have at least one mounted
partition and/or an active LVM partition.
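
The practical difference is that the BLKPG ioctls let the kernel add or remove a single partition entry instead of re-reading the whole table, so busy partitions elsewhere on the disk no longer block the update. The util-linux addpart/delpart tools expose the same ioctls; for example:

    # Remove only the kernel's entry for partition 3 of /dev/sda;
    # entries for sda1/sda2 (possibly mounted or held by LVM) are untouched.
    $ delpart /dev/sda 3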
Comment 33 Curtis Gedak 2010-05-01 17:07:07 UTC
Oops, my typo.  The version to test again should read GParted Live 0.5.2-9.
Comment 34 bitnomad 2010-05-01 18:24:37 UTC
I can test 0.5.2-9 in a few days. Will download and test on Monday, May 3.
Comment 35 bitnomad 2010-05-03 23:26:29 UTC
Tested on the same configuration that failed: Fedora 12 with 2 logical volumes. GParted 0.5.2-9 deleted them without error. Works just fine. My only comment is that it allows the deletion of an LV without a warning message. This is okay, but someone could really make a mess of a PV / LV / volume group configuration.
Comment 36 Curtis Gedak 2010-05-03 23:35:08 UTC
Thanks for reporting back with your test results.  I am glad to learn you could delete other partitions without error on a device with an active LVM partition.

I will close this bug report because the problem reported in the title
  "GParted fails to edit/delete partitions if active LVM on same disk device"
is now resolved.

If you would like, please feel free to create a new application bug report for "GParted allows the deletion of an active LVM".  Currently I do not know how to easily detect if an LVM partition is active, and hence disallow deletion of an active LVM.