Bug 748160 - disk space too large on inactive storage pool
Status: RESOLVED FIXED
Product: gnome-boxes
Classification: Applications
Component: installer
Version: 3.14.x
OS: Other Linux
Importance: Normal normal
Target Milestone: --
Assigned To: GNOME Boxes maintainer(s)
QA Contact: GNOME Boxes maintainer(s)
Depends on:
Blocks:
Reported: 2015-04-20 03:09 UTC by Maciej (Matthew) Piechotka
Modified: 2016-09-20 08:15 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
build: Require libvirt-glib >= 0.2.1 (858 bytes, patch)
2015-05-18 14:34 UTC, Zeeshan Ali
committed
vm-creator: Set autostart=true on storage pool (1.75 KB, patch)
2015-05-18 14:34 UTC, Zeeshan Ali
committed

Description Maciej (Matthew) Piechotka 2015-04-20 03:09:26 UTC
When I tried to install CoreOS, gnome-boxes allowed me to select a disk size anywhere from 21.5 GB to 18.4 EB. This means that any increase results in a multi-PB disk, thousands of times larger than the total disk space (not to mention the space on the partition).
Comment 1 Zeeshan Ali 2015-04-22 13:36:54 UTC
(In reply to Maciej Piechotka from comment #0)
> When I tried to install CoreOS gnome-boxes allows to select a disk size from
> 21.5 GB to 18.4 EB. This means that any increase results in multi PB disk -
> thousands times larger then total disk space (not mentioning space on
> partition).

I have seen this happen but haven't been able to reproduce it recently, after some fixes for this (that came before 3.14). Are you sure you are using 3.14?
Comment 2 Maciej (Matthew) Piechotka 2015-04-23 03:11:06 UTC
3.14.3.1 if the about box is to be believed.
Comment 3 Zeeshan Ali 2015-04-30 14:14:14 UTC
I'm having trouble reproducing this against Boxes 3.14.3.1. I used the coreos_production_iso_image.iso that I got from https://coreos.com/docs/running-coreos/platforms/iso/. Is this the right one?
Comment 4 Maciej (Matthew) Piechotka 2015-05-05 03:10:10 UTC
Yes. I've just checked.
Comment 5 Zeeshan Ali 2015-05-13 16:13:13 UTC
Maciej, Kalev failed to reproduce this against the same ISO as well, so I wonder if this could be specific to your filesystem. Can you give us the following info about your host:

1. Which filesystem is your home directory on?
2. What does the following command say: virsh pool-info gnome-boxes
Comment 6 Maciej (Matthew) Piechotka 2015-05-14 03:37:06 UTC
1. btrfs, but I keep big files on ext4 via a symlink (i.e. .local/share/gnome-boxes/images is a symlink)
2. It says:
Name:           gnome-boxes
UUID:           XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXXX
State:          inactive
Persistent:     yes
Autostart:      no
Comment 7 Zeeshan Ali 2015-05-14 10:21:12 UTC
(In reply to Maciej Piechotka from comment #6)
> 1. btrfs but I keep big files on ext4 via symlink (i.e.
> .local/share/gnome-boxes/images is symlink)
> 2. It says:
> Name:           gnome-boxes
> UUID:           XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXXX
> State:          inactive
> Persistent:     yes
> Autostart:      no

I don't see how/why it would be inactive unless you deactivated it yourself. That *might* explain what's going on on your system. Another possibility is that libvirt is not able to deal with btrfs. If the pool weren't inactive, it would also say:

Capacity:       XXXX.XX GiB
Allocation:     XXXX.XX GiB
Available:      XXXX.XX GiB

Could you please try `virsh pool-start gnome-boxes` and see if that succeeds and helps with the issue?
Comment 8 Maciej (Matthew) Piechotka 2015-05-15 04:28:28 UTC
(In reply to Zeeshan Ali (Khattak) from comment #7)
> (In reply to Maciej Piechotka from comment #6)
> > 1. btrfs but I keep big files on ext4 via symlink (i.e.
> > .local/share/gnome-boxes/images is symlink)
> > 2. It says:
> > Name:           gnome-boxes
> > UUID:           XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXXX
> > State:          inactive
> > Persistent:     yes
> > Autostart:      no
> 
> I don't see how/why it would be inactive unless you inactivated it? That
> *might* explain what's going on your system. Another possibility is that
> libvirt not be able to deal with btfs. If pool wasn't inactive, it would
> also say:
> 
> Capacity:       XXXX.XX GiB
> Allocation:     XXXX,XX GiB
> Available:      XXXX.XX GiB
> 
> Could you please try `virsh pool-start gnome-boxes` and see if that succeeds
> and helps with the issue?

Capacity:       124.88 GiB
Allocation:     69.30 GiB
Available:      55.58 GiB
Comment 9 Zeeshan Ali 2015-05-15 22:27:37 UTC
(In reply to Maciej Piechotka from comment #8)
> (In reply to Zeeshan Ali (Khattak) from comment #7)
> > (In reply to Maciej Piechotka from comment #6)
> > > 1. btrfs but I keep big files on ext4 via symlink (i.e.
> > > .local/share/gnome-boxes/images is symlink)
> > > 2. It says:
> > > Name:           gnome-boxes
> > > UUID:           XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXXX
> > > State:          inactive
> > > Persistent:     yes
> > > Autostart:      no
> > 
> > I don't see how/why it would be inactive unless you inactivated it? That
> > *might* explain what's going on your system. Another possibility is that
> > libvirt not be able to deal with btfs. If pool wasn't inactive, it would
> > also say:
> > 
> > Capacity:       XXXX.XX GiB
> > Allocation:     XXXX,XX GiB
> > Available:      XXXX.XX GiB
> > 
> > Could you please try `virsh pool-start gnome-boxes` and see if that succeeds
> > and helps with the issue?
> 
> Capacity:       124.88 GiB
> Allocation:     69.30 GiB
> Available:      55.58 GiB

Thanks, but I'm a bit confused. Did you get this output after trying the above command, or was it there before and you just didn't paste it? If it's the former: I meant that that command might help with the original bug, so could you try reproducing it now?
Comment 10 Maciej (Matthew) Piechotka 2015-05-17 01:03:31 UTC
(In reply to Zeeshan Ali (Khattak) from comment #9)
> (In reply to Maciej Piechotka from comment #8)
> > (In reply to Zeeshan Ali (Khattak) from comment #7)
> > > (In reply to Maciej Piechotka from comment #6)
> > > > 1. btrfs but I keep big files on ext4 via symlink (i.e.
> > > > .local/share/gnome-boxes/images is symlink)
> > > > 2. It says:
> > > > Name:           gnome-boxes
> > > > UUID:           XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXXX
> > > > State:          inactive
> > > > Persistent:     yes
> > > > Autostart:      no
> > > 
> > > I don't see how/why it would be inactive unless you inactivated it? That
> > > *might* explain what's going on your system. Another possibility is that
> > > libvirt not be able to deal with btfs. If pool wasn't inactive, it would
> > > also say:
> > > 
> > > Capacity:       XXXX.XX GiB
> > > Allocation:     XXXX,XX GiB
> > > Available:      XXXX.XX GiB
> > > 
> > > Could you please try `virsh pool-start gnome-boxes` and see if that succeeds
> > > and helps with the issue?
> > 
> > Capacity:       124.88 GiB
> > Allocation:     69.30 GiB
> > Available:      55.58 GiB
> 
> Thanks but I'm a bit confused. You got this after trying the above command
> or it was there but you didn't paste before? If its former, I meant that
> that command might help with the original bug so could you try to
> reproducing now?

Oops, sorry, I misunderstood you. It seems to work correctly now (the maximum disk size is sane now).
Comment 11 Zeeshan Ali 2015-05-18 10:51:36 UTC
Any idea how the pool became de-activated? Did you change your home directory somehow, or change the filesystem?

Thanks for sticking around to provide all the info. I have heard the same complaint from some other folks, so it seems this can happen. Boxes should start the pool before using it.
Comment 12 Zeeshan Ali 2015-05-18 13:34:18 UTC
Interesting. I just looked into the code and Boxes does start the storage pool if it's inactive. I also tested that this works on both the 3.14 and master branches. So I don't have any clue what's going on. My guess is that somehow Boxes is failing to see that the pool is inactive on your machine. If that is the case, you should have seen at least a few warnings on the console.

I guess setting autostart to 'true' should help avoid such a situation, so I'll see if I should enable that.
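
For reference, a minimal C sketch of the kind of check described above, written against the libvirt-glib GObject API (the actual Boxes code is Vala and may differ; the helper name ensure_pool_running is invented here for illustration):

#include <libvirt-gobject/libvirt-gobject.h>

/* Illustrative helper (not from Boxes): make sure a storage pool is
 * running before it is used; the programmatic equivalent of
 * `virsh pool-start gnome-boxes`. */
static gboolean
ensure_pool_running (GVirStoragePool *pool, GError **error)
{
    GVirStoragePoolInfo *info;
    gboolean running;

    /* Query the pool state (same data virsh pool-info shows). */
    info = gvir_storage_pool_get_info (pool, error);
    if (info == NULL)
        return FALSE;

    running = (info->state == GVIR_STORAGE_POOL_STATE_RUNNING);
    g_boxed_free (GVIR_TYPE_STORAGE_POOL_INFO, info);

    if (running)
        return TRUE;

    /* Pool is inactive: start it before any volume is created. */
    return gvir_storage_pool_start (pool, 0, error);
}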
Comment 13 Zeeshan Ali 2015-05-18 14:34:21 UTC
Created attachment 303527 [details] [review]
build: Require libvirt-glib >= 0.2.1

We'll need the next version of libvirt-glib to be able to set the autostart flag
on the storage pool in the following patch.
Comment 14 Zeeshan Ali 2015-05-18 14:34:53 UTC
Created attachment 303528 [details] [review]
vm-creator: Set autostart=true on storage pool

Even though we ensure that the storage pool is activated if it has been
de-activated for some reason, and even create the backing directory, it
seems that on at least some machines we somehow fail to do that.

In the absence of any data on why and when that happens, let's make the
storage pool autostart in the hope that it helps the situation.
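
In terms of the underlying libvirt-glib C API, the patch boils down to something like the sketch below (the error handling shown is illustrative, not the actual Boxes code; gvir_storage_pool_set_autostart() is the call that requires libvirt-glib >= 0.2.1):

GError *error = NULL;

/* Ask libvirt to bring the gnome-boxes pool up automatically at daemon
 * start, so Boxes never finds it inactive even if something deactivated it. */
if (!gvir_storage_pool_set_autostart (pool, TRUE, &error)) {
    g_warning ("Failed to set autostart on storage pool: %s", error->message);
    g_clear_error (&error);
}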
Comment 15 Christophe Fergeau 2015-05-21 14:03:48 UTC
18.4 EB looks like (uint64_t)-1; I guess GVirStoragePoolInfo::capacity or GVirStoragePoolInfo::available gets set to that when gvir_storage_pool_get_info() is called on an invalid storage pool, and then some error handling/notification is missing somewhere in the stack.
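
A quick sanity check of that arithmetic (nothing project-specific):

#include <stdio.h>
#include <stdint.h>

int main (void)
{
    /* (uint64_t)-1 wraps around to 2^64 - 1 = 18446744073709551615. */
    uint64_t bytes = (uint64_t) -1;

    /* Read as a size in bytes and divided by 10^18 (decimal exabytes),
     * that is ~18.4 EB, exactly the bogus maximum from comment 0. */
    printf ("%llu bytes ~= %.1f EB\n",
            (unsigned long long) bytes, bytes / 1e18);
    return 0;
}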
Comment 16 Zeeshan Ali 2015-05-22 10:51:49 UTC
(In reply to Christophe Fergeau from comment #15)
> 18.4 EB looks like (uint64_t)-1, I guess GVirStoragePoolInfo::capacity or
> GVirStoragePoolInfo::available gets set when gvir_storage_pool_get_info() is
> called on an invalid storage pool, and then some error handling/notifying is
> missing somewhere in the stack.

True! However, I failed to reproduce this, and looking at the code I don't see where/how this can happen. So I need to know: can we just apply the attached patches and for now assume that they will help?
Comment 17 Zeeshan Ali 2015-06-05 16:33:35 UTC
I'm assuming these patches help.

Attachment 303527 [details] pushed as e3287b8 - build: Require libvirt-glib >= 0.2.1
Attachment 303528 [details] pushed as 193dc1d - vm-creator: Set autostart=true on storage pool