Bug 676029 - Make use of multicore
Status: RESOLVED FIXED
Product: gnome-boxes
Classification: Applications
Component: general
Version: 3.5.x (unsupported)
OS: Other Linux
Importance: Normal normal
Target Milestone: --
Assigned To: GNOME Boxes maintainer(s)
QA Contact: GNOME Boxes maintainer(s)
Depends on:
Blocks:
 
 
Reported: 2012-05-14 13:47 UTC by Zeeshan Ali
Modified: 2016-03-31 14:00 UTC
See Also:
GNOME target: ---
GNOME version: ---


Attachments
Assign all available cores to new VMs (3.21 KB, patch)
    2012-06-01 16:01 UTC, Zeeshan Ali (status: none)
Assign all available cores & threads to new VMs (1.43 KB, patch)
    2012-07-02 20:15 UTC, Zeeshan Ali (status: needs-work)
Let libvirt choose best CPU for us on domain launch (2.58 KB, patch)
    2012-07-02 20:15 UTC, Zeeshan Ali (status: accepted-commit_now)
Assign best possible CPU configuration to VMs (1.75 KB, patch)
    2012-07-09 18:46 UTC, Zeeshan Ali (status: none)
Assign best possible guest CPU configuration to VMs (1.77 KB, patch)
    2012-07-09 18:53 UTC, Zeeshan Ali (status: committed)
Raise min libvirt-gconfig version (796 bytes, patch)
    2012-07-11 15:32 UTC, Christophe Fergeau (status: committed)

Description Zeeshan Ali 2012-05-14 13:47:49 UTC
Marc-Andre reported that VM boxes (especially win7) perform much better with more than one core assigned to them, so we should really assign them a good number of CPUs by default.

Once we have the capabilities API in place in libvirt-glib, this shouldn't be very hard to fix.
Comment 1 Zeeshan Ali 2012-06-01 16:01:17 UTC
Created attachment 215439 [details] [review]
Assign all available cores to new VMs

Perhaps we should assign all threads as well?
Comment 2 Christophe Fergeau 2012-06-01 17:57:26 UTC
What are the performance impacts? Does it measurably improve win7 performance? Does it have an impact on the host performance?
Comment 3 Zeeshan Ali 2012-06-01 18:05:27 UTC
(In reply to comment #2)
> What are the performance impacts? Does it measurably improve win7 performance?
> Does it have an impact on the host performance?

Forgot to mention that I haven't tested these yet; I'm having some weird libvirt issues here.
Comment 4 Zeeshan Ali 2012-06-01 20:22:45 UTC
I just did two installs of win7 w/ and w/o this patch and I didn't see any significant improvement.
Comment 5 Marc-Andre Lureau 2012-06-01 20:39:27 UTC
(In reply to comment #4)
> I just did two installs of win7 w/ and w/o this patch and I didn't see any
> significant improvement.

Installation is I/O bound, not really CPU bound. The most important factor is the cache. Kevin Wolf suggested Boxes use cache=unsafe at installation time.
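For context, the cache mode is a per-disk setting in the libvirt domain XML's driver element; a minimal sketch of what cache=unsafe would look like (the image path is a placeholder):

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='unsafe'/>
      <source file='/path/to/install.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>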
Comment 6 Zeeshan Ali 2012-06-01 20:43:12 UTC
(In reply to comment #5)
> (In reply to comment #4)
> > I just did two installs of win7 w/ and w/o this patch and I didn't see any
> > significant improvement.
> 
> installation is io bound, not really cpu bound.

I thought so, but since you claimed this helps, I thought we should look into this.

> The most important factor is
> cache. Kevin Wolf suggested Boxes use cache=unsafe for installation-time.

With installation taking 15-20 mins now, I'm not sure how much you are hoping to reduce it.
Comment 7 Marc-Andre Lureau 2012-06-01 20:45:02 UTC
I don't know what VCPU maps to. I haven't looked closely, have you?

Fyi, my Win7 uses:

    <topology sockets='1' cores='4' threads='1'/>

And XP has other limits (http://en.wikipedia.org/wiki/Windows_XP#Processor_limits), so I used

    <topology sockets='2' cores='2' threads='1'/>
Comment 8 Marc-Andre Lureau 2012-06-01 20:53:48 UTC
(In reply to comment #6)
> I thought so but since you claimed this helps, I thought we should looking into
> this.

A multitasking/multithreaded OS will benefit from many CPUs. But basing the performance comparison only on installation time is not a good measurement.

It seems reasonable to me that a guest VM should have access to the same CPU capabilities as the host. The host will manage task scheduling like for regular processes. There is no point in splitting host resources by limiting the guest, imho.
Comment 9 Zeeshan Ali 2012-06-02 02:09:53 UTC
(In reply to comment #7)
> I don't know what VCPU maps to. I haven't look closely, have you?
> 
> Fyi, my Win7 uses:
> 
>     <topology sockets='1' cores='4' threads='1'/>
> 
> And XP has other limits
> (http://en.wikipedia.org/wiki/Windows_XP#Processor_limits), so I used
> 
>     <topology sockets='2' cores='2' threads='1'/>

I don't get it; the 'topology' element you are referring to seems to be from the capabilities XML. Is there a 'topology' element in the domain XML as well?

(In reply to comment #8)
> (In reply to comment #6)
> > I thought so but since you claimed this helps, I thought we should looking into
> > this.
> 
> A multitask/multithread OS will benefit from many CPU. But basing performance
> change only on installation time is not a good measurement.

There was a miscommunication then, as my understanding was that installation time is what you measured against a) assignment of all cores and b) pre-allocation.

> It seems reasonable to me that a guest VM should have access to the same CPU
> capability as the host. The host will manage task/scheduling like regular
> processes. There is no point in splitting host resources by limiting guest
> imho.

I agree but I think a better solution would be more like:

vcpus = int64.min (os.recommended_vcpus, available_cores);

Anyway, blaming you for the patch wasn't my intention; I just wanted to point to the person who seems to know the rationale for this better than me.
Comment 10 Zeeshan Ali 2012-06-05 02:26:58 UTC
(In reply to comment #9)
> (In reply to comment #7)
> > I don't know what VCPU maps to. I haven't look closely, have you?
> > 
> > Fyi, my Win7 uses:
> > 
> >     <topology sockets='1' cores='4' threads='1'/>
> > 
> > And XP has other limits
> > (http://en.wikipedia.org/wiki/Windows_XP#Processor_limits), so I used
> > 
> >     <topology sockets='2' cores='2' threads='1'/>
> 
> I don't get it, the 'topology' element you are refering to seem to be from
> capabilities. Is there a 'topology' element in domain xml as well?

So it seems that we need to specify both 'vcpu' and 'cpu/topology' to get this working. From what I can tell from the libvirt XML docs and some experimentation with these values, 'vcpu' is just the maximum value for 'sockets * cores * threads'. As for setting cpu/topology in the domain XML, we need more libvirt-gconfig APIs.
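For reference, the two elements discussed here end up side by side in the domain XML; a minimal sketch for a 1-socket, 2-core, 2-thread host (values are illustrative):

    <vcpu>4</vcpu>
    <cpu>
      <topology sockets='1' cores='2' threads='2'/>
    </cpu>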
Comment 11 Zeeshan Ali 2012-07-02 20:15:41 UTC
Created attachment 217865 [details] [review]
Assign all available cores & threads to new VMs
Comment 12 Zeeshan Ali 2012-07-02 20:15:47 UTC
Created attachment 217866 [details] [review]
Let libvirt choose best CPU for us on domain launch
Comment 13 Christophe Fergeau 2012-07-03 10:01:48 UTC
Comment on attachment 217865 [details] [review]
Assign all available cores & threads to new VMs

>From 6685fbfe3379b987d90a4fec8edb4a19b03df9ad Mon Sep 17 00:00:00 2001
>From: "Zeeshan Ali (Khattak)" <zeeshanak@gnome.org>
>Date: Fri, 1 Jun 2012 07:02:32 +0300
>Subject: [PATCH] Assign all available cores & threads to new VMs
>
>https://bugzilla.gnome.org/show_bug.cgi?id=676029
>---
> src/vm-configurator.vala |    8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
>diff --git a/src/vm-configurator.vala b/src/vm-configurator.vala
>index bffda50..d8cb87c 100644
>--- a/src/vm-configurator.vala
>+++ b/src/vm-configurator.vala
>@@ -29,7 +29,7 @@ private class Boxes.VMConfigurator {
> 
>         var best_caps = get_best_guest_caps (caps, install_media);
>         domain.memory = install_media.resources.ram / KIBIBYTES;
>-        domain.vcpu = install_media.resources.n_cpus;
>+        set_vcpus (domain, caps);
> 
>         var virt_type = guest_kvm_enabled (best_caps) ? DomainVirtType.KVM : DomainVirtType.QEMU;
>         domain.set_virt_type (virt_type);
>@@ -167,6 +167,12 @@ private class Boxes.VMConfigurator {
>         return pool;
>     }
> 
>+    private void set_vcpus (Domain domain, Capabilities caps) {
>+        var topology = caps.get_host ().get_cpu ().get_topology ();
>+
>+        domain.vcpu = topology.get_cores () * topology.get_threads ();

topology.get_sockets () is missing. I wouldn't take the threads into account in the total number of vcpus, since they are far less efficient than actual cores, and I'm still worried that this could have an impact on host performance.

/domain/cpu/topology also needs to be set to more accurately reflect the host CPUs inside the guest.
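For illustration, a minimal sketch of the helper with that review comment applied, using only the libvirt-gconfig accessors already referenced in this bug (not necessarily what the final committed patch does); setting /domain/cpu/topology itself would still need the extra libvirt-gconfig APIs mentioned in comment 10:

    private void set_vcpus (Domain domain, Capabilities caps) {
        var topology = caps.get_host ().get_cpu ().get_topology ();

        // Count sockets * cores only; hyperthreads are a poor relation
        // to a proper core, so leave them out of the vcpu count.
        domain.vcpu = topology.get_sockets () * topology.get_cores ();
    }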
Comment 14 Christophe Fergeau 2012-07-03 10:02:36 UTC
Comment on attachment 217866 [details] [review]
Let libvirt choose best CPU for us on domain launch

Yup, looks good
Comment 15 Christophe Fergeau 2012-07-03 10:04:25 UTC
Any performance impact on the host when the guest is loaded?
Here is an IRC log on this topic in case that's useful:

11:49 <@teuf> danpb: hey, is /domain/cpu/topology something more fine-grained than /domain/vcpu? or are they different
              things?
11:51 < danpb> the latter  (/domain/vcpu) tells you the total number of vcpus
11:51 < danpb> by default all CPUs appear in guest as sockets
11:51 < danpb> the topology setting lets you choose a custom mix of  sockets+cores+threads
11:52 <@teuf> and what happens if you set 1 vcpu and 1 socket/4 cores ?
11:52 < danpb> bear in mind that the total number of vcpus implied by the topology  (sockets * cores * threads) might
               be less than the number of vcpus (eg you have vcpus spread across numa nodes), or greater  ( you have
               not fully populated the topology)
11:53 < danpb> teuf: that means you have a topology that can hold upto 4vcpus, but you have only "plugged in"  one vcpu
11:54 <@teuf> ok, so if you want to mirror the host CPU as accurately as possible, you need to set /domain/vcpu to
              sockets*cores*threads and set /domain/cpu/topology to mirror the host cpu topology ?
11:54 < danpb> well setting the topology to match is the important thing i'd say
11:55 < danpb> whether you match the total vcpus == total pcpus  is less important
11:55 < danpb> the topology is what i'd class a correctness item
11:55 < danpb> whereas the vcpu count is what i'd class a performance item
11:55 <@teuf> yes, agreed
11:56 <@teuf> vcpu count seems to help a lot with win7
11:56 < danpb> you mean to make it not suck at performacne ?
11:56 <@teuf> asking these questions because of bug https://bugzilla.gnome.org/show_bug.cgi?id=676029 if you want some
              context
11:56 <@bebot> Bug 676029: normal, Normal, ---, gnome-boxes-maint, NEW, Make use of multicore
11:56 <@teuf> danpb: yeah
11:57 < danpb> for boxes, i'd say it was entirely reasonable that you match topology and assign  total vcpus == total
               pcpus  as your default behaviour
11:58 < danpb> you're not trying to build a system where you want to ensure isolation & fairness between VMs with
               different owners
11:58 <@teuf> yep
11:58 <@teuf> only bit I'm not sure is whether to count threads as pcpus, or to stop at the core count
11:58 < danpb> so limiting cpu usage is not really required by default - just let the kernel schedule them as best it
               can according to their transient workloads
11:58  * teuf would favour the latter
11:58 < danpb> i think i'd probably agree
11:59 < danpb>  hyperthreads are still a poor relation to a proper core
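To illustrate danpb's point, the topology only describes the slots available, while the vcpu count says how many are actually plugged in. A sketch of the "1 vcpu in a 4-core topology" case mentioned above:

    <vcpu>1</vcpu>
    <cpu>
      <topology sockets='1' cores='4' threads='1'/>
    </cpu>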
Comment 16 Zeeshan Ali 2012-07-03 15:18:22 UTC
(In reply to comment #15)
> Any performance impact on the host when the guest is loaded?

In my tests so far, I didn't see any performance impact on host. I don't see why there would be since this is not very different from any user process wanting to use a lot of threads (unless qemu/kvm does something very special?).

> Here is an IRC log on this topic in case that's useful:
> 
> 11:49 <@teuf> danpb: hey, is /domain/cpu/topology something more fine-grained
> than /domain/vcpu? or are they different
>               things?
> 11:51 < danpb> the latter  (/domain/vcpu) tells you the total number of vcpus
> 11:51 < danpb> by default all CPUs appear in guest as sockets
> 11:51 < danpb> the topology setting lets you choose a custom mix of 
> sockets+cores+threads
> 11:52 <@teuf> and what happens if you set 1 vcpu and 1 socket/4 cores ?
> 11:52 < danpb> bear in mind that the total number of vcpus implied by the
> topology  (sockets * cores * threads) might
>                be less than the number of vcpus (eg you have vcpus spread
> across numa nodes), or greater  ( you have
>                not fully populated the topology)
> 11:53 < danpb> teuf: that means you have a topology that can hold upto 4vcpus,
> but you have only "plugged in"  one vcpu
> 11:54 <@teuf> ok, so if you want to mirror the host CPU as accurately as
> possible, you need to set /domain/vcpu to
>               sockets*cores*threads and set /domain/cpu/topology to mirror the
> host cpu topology ?
> 11:54 < danpb> well setting the topology to match is the important thing i'd
> say
> 11:55 < danpb> whether you match the total vcpus == total pcpus  is less
> important
> 11:55 < danpb> the topology is what i'd class a correctness item
> 11:55 < danpb> whereas the vcpu count is what i'd class a performance item
> 11:55 <@teuf> yes, agreed
SKIP

Right, I'll modify the patch to also set the topology.
Comment 17 Christophe Fergeau 2012-07-03 15:29:47 UTC
(In reply to comment #16)
> (In reply to comment #15)
> > Any performance impact on the host when the guest is loaded?
> 
> In my tests so far, I didn't see any performance impact on host. I don't see
> why there would be since this is not very different from any user process
> wanting to use a lot of threads (unless qemu/kvm does something very special?).

Nope, nothing special. I'm just not sure I'd be very pleased if a virtual machine started hogging all my host CPUs because some application inside the VM started using all the CPU power available.
Comment 18 Zeeshan Ali 2012-07-03 17:09:35 UTC
Turns out that if I specify the topology, I get/see only one core in the guest. Here is the domain config: http://fpaste.org/UQeH/
Comment 19 Zeeshan Ali 2012-07-04 17:19:29 UTC
A few more observations and bits of info:

First of all, it turns out that qemu seems to ignore the 'cores' and 'threads' parts of the '-smp n[,cores=cores][,threads=threads][,sockets=sockets]' option. The only way to get it to enable multicore in the guest is to set 'n' and 'sockets' to >1. The 'n' here comes from libvirt's 'domain/vcpu'. If you don't set the topology, libvirt also sets 'sockets' to the same value as 'domain/vcpu'.

I'm still not sure we need/want to set the topology rather than let qemu expose all available virtual cpus as sockets. In an F17 guest running under a domain with 4 sockets (the host has only 1 socket, 2 cores and 2 threads), I tried to bring down the host by running lots of glxgears, but I only managed to slow down the guest; the host continued working as normal.
Comment 20 Zeeshan Ali 2012-07-05 23:19:35 UTC
(In reply to comment #17)
> (In reply to comment #16)
> > (In reply to comment #15)
> > > Any performance impact on the host when the guest is loaded?
> > 
> > In my tests so far, I didn't see any performance impact on host. I don't see
> > why there would be since this is not very different from any user process
> > wanting to use a lot of threads (unless qemu/kvm does something very special?).
> 
> Nope nothing special, I'm just not sure I'd be very pleased if a virtual
> machine started hogging all my host CPUs because some application inside the VM
> started using all CPU power available.

If that's your main concern, didn't I already address it in comment #19?
Comment 21 Marc-Andre Lureau 2012-07-06 21:25:02 UTC
(In reply to comment #19)
> So a bit more observations and info:
> 
> First of all it turns out that qemu seems to ignore 'cores' 'threads' part of
> the ' -smp n[,cores=cores][,threads=threads][,sockets=sockets]' option. The
> only way to get it to enable multicore in the guest is to set the 'n' and
> 'sockets' to >1. The 'n' here comes from libvirt's 'domain/vcpu'. If you don't
> set the topology, libvirt also sets the 'sockets' to the same value as
> 'domain/vcpu'.

That would be a bit surprising; I suppose you checked with Linux and not with a limited OS like XP. It may also depend on the CPU type, etc.
Comment 22 Marc-Andre Lureau 2012-07-06 21:27:27 UTC
The patches don't apply on master, but they look good. ACK.
Comment 23 Christophe Fergeau 2012-07-07 08:58:14 UTC
So, I just ran a quick test with qemu-kvm and a Fedora LiveCD, and

qemu-kvm -m 1G -smp 4,cores=2,threads=2,sockets=1 -cdrom /home/teuf/isos/fedora/Fedora-17-i686-Live-Desktop.iso

gives me what I expect in /proc/cpuinfo (2 cores, 2 threads per-core).
Comment 24 Zeeshan Ali 2012-07-07 14:30:08 UTC
(In reply to comment #21)
> (In reply to comment #19)
> > So a bit more observations and info:
> > 
> > First of all it turns out that qemu seems to ignore 'cores' 'threads' part of
> > the ' -smp n[,cores=cores][,threads=threads][,sockets=sockets]' option. The
> > only way to get it to enable multicore in the guest is to set the 'n' and
> > 'sockets' to >1. The 'n' here comes from libvirt's 'domain/vcpu'. If you don't
> > set the topology, libvirt also sets the 'sockets' to the same value as
> > 'domain/vcpu'.
> 
> that would be a bit surprising, I supposed you checked with Linux and not with
> a limited OS like XP. It may also depend on the cpu type etc.

Yes.

(In reply to comment #23)
> So, I just ran a quick test with qemu-kvm and a Fedora LiveCD, and
> 
> qemu-kvm -m 1G -smp 4,cores=2,threads=2,sockets=1 -cdrom
> /home/teuf/isos/fedora/Fedora-17-i686-Live-Desktop.iso
> 
> gives me what I expect in /proc/cpuinfo (2 cores, 2 threads per-core).

Hmm... I assume you are using qemu-kvm from the Fedora 17 packages? I'm always using it from git master, so there could be some regression.
Comment 25 Zeeshan Ali 2012-07-07 14:54:59 UTC
(In reply to comment #24)
> (In reply to comment #23)
> > So, I just ran a quick test with qemu-kvm and a Fedora LiveCD, and
> > 
> > qemu-kvm -m 1G -smp 4,cores=2,threads=2,sockets=1 -cdrom
> > /home/teuf/isos/fedora/Fedora-17-i686-Live-Desktop.iso
> > 
> > gives me what I expect in /proc/cpuinfo (2 cores, 2 threads per-core).
> 
> Hmm.. I assume you are using qemu-kvm from fedora 17 packages? I'm always using
> from git master so there could be some regression.

I just tried against qemu-kvm from the F17 packages and I still see only 1 CPU in the F17 Live guest under /proc/cpuinfo. Here is the qemu command line used by libvirtd:

/usr/bin/qemu-kvm -name fedora17-2 -S -M pc-0.15 -cpu core2duo,+lahf_lm,+rdtscp,+aes,+popcnt,+sse4.2,+sse4.1,+pdcm,+xtpr,+cx16,+tm2,+est,+smx,+vmx,+ds_cpl,+dtes64,+pclmuldq,+pbe,+tm,+ht,+ss,+acpi,+ds -enable-kvm -m 1152 -smp 4,sockets=1,cores=2,threads=2 -uuid c573342b-2876-05b8-098e-6d5314cab062 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/home/zeenix/.config/libvirt/qemu/lib/fedora17-2.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -no-acpi -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/home/zeenix/.local/share/gnome-boxes/images/fedora17-2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/extra-data/ISOs/Fedora-17-i686-Live-Desktop.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev user,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:21:6f:6e,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -device usb-tablet,id=input0 -spice port=5900,addr=127.0.0.1,disable-ticketing -vga qxl -global qxl-vga.vram_size=67108864 -device AC97,id=sound0,bus=pci.0,addr=0x4 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
Comment 26 Zeeshan Ali 2012-07-07 14:58:45 UTC
(In reply to comment #23)
> So, I just ran a quick test with qemu-kvm and a Fedora LiveCD, and
> 
> qemu-kvm -m 1G -smp 4,cores=2,threads=2,sockets=1 -cdrom
> /home/teuf/isos/fedora/Fedora-17-i686-Live-Desktop.iso
> 
> gives me what I expect in /proc/cpuinfo (2 cores, 2 threads per-core).

However, if I use the exact command line above, I also get the same results that you did, i.e. 2 cores and 2 threads in the guest.
Comment 27 Christophe Fergeau 2012-07-09 18:00:31 UTC
The guest is not seeing the topology because of --no-acpi on the qemu command line. Libvirt adds this because there is no /domain/features/acpi in the domain XML.
Features confuse me as always, but as I understand it, /domain/features is about enabling/disabling hypervisor features, while the topology changes you're doing are about the CPU exposed to the guest. This means the setting of /domain/features shouldn't be removed in the 2nd patch ("Let libvirt choose best CPU for us on domain launch"). Can you try whether this helps?
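In domain XML terms, that means keeping an acpi element under /domain/features so libvirt does not add --no-acpi; a minimal sketch:

    <features>
      <acpi/>
    </features>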
Comment 28 Zeeshan Ali 2012-07-09 18:42:16 UTC
(In reply to comment #27)
> The guest is not seeing the topology because of --no-acpi on qemu command line.

Ouch! I didn't know ACPI has anything to do with CPU topology. Clearly, I need to learn more about ACPI. :(

> Libvirt adds this because it doesn't have /domain/features/acpi in the domain
> XML. 
> Features are confusing me as always, but as I understand it /domain/features is
> about enabling/disabling hypervisor features, and the topology changes you're
> doing are about the CPU exposed to the guest. This means the setting of
> /domain/features shouldn't be removed from the 2nd patch ("Let libvirt choose
> best CPU for us on domain launch"). Can you try if this helps?

I checked and yes, you are correct. I'll submit a new patch.
Comment 29 Zeeshan Ali 2012-07-09 18:46:21 UTC
Created attachment 218360 [details] [review]
Assign best possible CPU configuration to VMs

- VM CPU topology should match that of host.
- Let libvirt choose best possible CPU for us on domain launch.
Comment 30 Christophe Fergeau 2012-07-09 18:46:52 UTC
(In reply to comment #28)
> (In reply to comment #27)
> > The guest is not seeing the topology because of --no-acpi on qemu command line.
> 
> Ouch! I didn't know ACPI has anything to do with CPU topology. Clearly, I need
> to learn more about ACPI. :(

qemu command line bisection ftw!! 


> I checked and yes, you are correct. I'll submit a new patch.

Ah, great to know! I'll submit a patch doing s/guest_features/hypervisor_features/ on top of these unless you do it first.
Comment 31 Christophe Fergeau 2012-07-09 18:50:25 UTC
(In reply to comment #29)
> Created an attachment (id=218360) [details] [review]
> Assign best possible CPU configuration to VMs
> 
> - VM CPU topology should match that of host.
> - Let libvirt choose best possible CPU for us on domain launch.

I give up on setting the patch status, but ACK from me
Comment 32 Christophe Fergeau 2012-07-09 18:51:10 UTC
(In reply to comment #31)
> (In reply to comment #29)
> > Created an attachment (id=218360) [details] [review] [details] [review]
> > Assign best possible CPU configuration to VMs
> > 
> > - VM CPU topology should match that of host.
> > - Let libvirt choose best possible CPU for us on domain launch.
> 
> I give up on setting the patch status, but ACK from me

Oh, it would be even better if you could use "guest CPU" in the commit log, since it's easy to get confused about which CPU we are talking about.
Comment 33 Zeeshan Ali 2012-07-09 18:52:01 UTC
(In reply to comment #30)
> 
> > I checked and yes, you are correct. I'll submit a new patch.
> 
> Ah, great to know! I'll submit a patch doing
> s/guest_features/hypervisor_features/ on top of these unless you do it first.

We still get those in guest features, no? Or is that the host cpu features we should check?
Comment 34 Zeeshan Ali 2012-07-09 18:53:40 UTC
Created attachment 218362 [details] [review]
Assign best possible guest CPU configuration to VMs

- VM's guest CPU topology should match that of host.
- Let libvirt choose best possible guest CPU for us on domain launch.
Comment 35 Zeeshan Ali 2012-07-09 19:09:43 UTC
Attachment 218362 [details] pushed as f1e67ec - Assign best possible guest CPU configuration to VMs
Comment 36 Christophe Fergeau 2012-07-09 19:10:40 UTC
(In reply to comment #33)
> (In reply to comment #30)
> > 
> > > I checked and yes, you are correct. I'll submit a new patch.
> > 
> > Ah, great to know! I'll submit a patch doing
> > s/guest_features/hypervisor_features/ on top of these unless you do it first.
> 
> We still get those in guest features, no? Or is that the host cpu features we
> should check?

I'm basing this name on http://libvirt.org/formatdomain.html#elementsFeatures "Hypervisor features"
Reading http://libvirt.org/formatcaps.html, this corresponds to what is under <guest> I think, and the way things are worded there is consistent with the "Hypervisor features" naming:
"The second block (in blue) indicates the paravirtualization support of the Xen support, you will see the os_type of xen to indicate a paravirtual kernel, then architecture information and potential features.", ie it describes the features that are available in the hypervisor to improve guest virtualization.

/capabilities/host/cpu/features describes the CPU features of the host CPU, i.e. unrelated to these hypervisor features (which are not necessarily CPU features).
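For reference, an abridged capabilities sketch showing where the two kinds of features live (element layout per the libvirt docs linked above; the feature names are just examples):

    <capabilities>
      <host>
        <cpu>
          <topology sockets='1' cores='2' threads='2'/>
          <feature name='lahf_lm'/>             <!-- a host CPU feature -->
        </cpu>
      </host>
      <guest>
        <os_type>hvm</os_type>
        <features>
          <acpi default='on' toggle='yes'/>     <!-- a hypervisor feature -->
        </features>
      </guest>
    </capabilities>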
Comment 37 Zeeshan Ali 2012-07-09 19:18:07 UTC
(In reply to comment #36)
> (In reply to comment #33)
> > (In reply to comment #30)
> > > 
> > > > I checked and yes, you are correct. I'll submit a new patch.
> > > 
> > > Ah, great to know! I'll submit a patch doing
> > > s/guest_features/hypervisor_features/ on top of these unless you do it first.
> > 
> > We still get those in guest features, no? Or is that the host cpu features we
> > should check?
> 
> I'm basing this name on http://libvirt.org/formatdomain.html#elementsFeatures
> "Hypervisor features"
> Reading http://libvirt.org/formatcaps.html , this corresponds to what is under
> <guest> I think, and the way things are worded there is consistent with the
> "Hypervisor features" naming:
> "The second block (in blue) indicates the paravirtualization support of the Xen
> support, you will see the os_type of xen to indicate a paravirtual kernel, then
> architecture information and potential features.", ie it describes the features
> that are available in the hypervisor to improve guest virtualization.
> 
> /capabilities/host/cpu/features describes the CPU features of the host CPU, ie
> unrelated to these hypervisor features (which are not necessarily CPU
> features).

So if I follow this right, guest and hypervisor features are synonyms in this context?
Comment 38 Christophe Fergeau 2012-07-09 19:25:19 UTC
More explanations about this in an email thread:
http://www.redhat.com/archives/libvir-list/2012-May/msg00231.html
http://www.redhat.com/archives/libvir-list/2012-May/msg00523.html
(which says among other things that "there [is] also a /domain/features  which
corresponds to /capabilities/guest/features")

(In reply to comment #37)

> So I follow this right, guest and hypervisor features are synonyms in this
> context?

I'd argue that calling /capabilities/guest/features "guest features" is inaccurate and can be misleading (though the actual issue in this bug was more a confusion between guest CPU features and guest/hypervisor features).
I think 'hypervisor features' is a more descriptive name of what these features are, and this has the advantage of being a distinct name from guest/host CPU features. That's why I'm suggesting this change.
Comment 39 Zeeshan Ali 2012-07-09 19:30:47 UTC
(In reply to comment #38)
> More explanations about this in an email thread:
> http://www.redhat.com/archives/libvir-list/2012-May/msg00231.html
> http://www.redhat.com/archives/libvir-list/2012-May/msg00523.html
> (which says among other things that "there [is] also a /domain/features  which
> corresponds to /capabilities/guest/features")
> 
> (In reply to comment #37)
> 
> > So I follow this right, guest and hypervisor features are synonyms in this
> > context?
> 
> I'd argue that calling /capabilities/guest/features "guest features" is
> inaccurate and can be misleading (though the actual issue in this bug was more
> a confusion between guest CPU features and guest/hypervisor features).
> I think 'hypervisor features' is a more descriptive name of what these features
> are, and this has the advantage of being a distinct name from guest/host CPU
> features. That's why I'm suggesting this change.

Ah ok, but there is nothing called 'guest_features' in vm-configurator.vala afaict. We have: bool guest_supports_feature (CapabilitiesGuest guest_caps, string feature_name). Since it takes guest caps, calling it 'hypervisor_supports_feature' would only confuse the reader more IMHO.

Still, no strong preference, so feel free to change it.
Comment 40 Christophe Fergeau 2012-07-09 19:51:11 UTC
(In reply to comment #39)

> ah ok but there is nothing called 'guest_features' in vm-configurator.vala
> afaict. We have: bool guest_supports_feature (CapabilitiesGuest guest_caps,
> string feature_name). Since it taks guest caps, calling it
> 'hypervisor_supports_feature' would only confuse reader more IMHO.

I'd rename 'guest caps' to 'hypervisor caps' as well in Boxes, unless there are too many places where it does not make sense. If (fingers crossed) the only place where there's a clash is in the boxes/libvirt-gconfig interface, a comment should handle any confusion.
Comment 41 Zeeshan Ali 2012-07-09 19:58:00 UTC
(In reply to comment #40)
> (In reply to comment #39)
> 
> > ah ok but there is nothing called 'guest_features' in vm-configurator.vala
> > afaict. We have: bool guest_supports_feature (CapabilitiesGuest guest_caps,
> > string feature_name). Since it taks guest caps, calling it
> > 'hypervisor_supports_feature' would only confuse reader more IMHO.
> 
> I'd rename 'guest caps' to 'hypervisor caps' as well in boxes, unless there are
> too many places where it does not make sense. If (fingers crossed) the only
> place where there's a clash in in the boxes/libvirt-gconfig interface, a
> comment should handle any confusion.

Since we can't change it everywhere (as you noticed), this rename doesn't really serve the purpose you want it to (otherwise you wouldn't need an additional comment). But as I said, no strong preference.
Comment 42 Christophe Fergeau 2012-07-11 15:32:19 UTC
Created attachment 218564 [details] [review]
Raise min libvirt-gconfig version

The CPU topology work needs libvirt-gconfig >= 0.1.0
Comment 43 Zeeshan Ali 2012-07-11 16:54:44 UTC
Review of attachment 218564 [details] [review]:

yup
Comment 44 Christophe Fergeau 2012-07-11 21:21:25 UTC
Comment on attachment 218564 [details] [review]
Raise min libvirt-gconfig version

Attachment 218564 [details] pushed as 782de30 - Raise min libvirt-gconfig version