One of the benefits of the qemu/kvm model is that, because qemu runs in userspace as a normal linux process, you can manage it with standard linux userspace utilities. One of these useful utilities is taskset, which lets you set the cpu affinity of a process. That is just a fancy way of saying binding a process to a specific set of cpu(s). Why would you want to do this? The short answer is load balancing. With multicore cpus becoming the de facto standard, there are some very practical uses for this in virtualization. On a typical multicore cpu today, each core has enough power to easily run what used to be a physical machine, so you can see the practical application: pin a virtual machine to a cpu core. This works for most applications, and if your application needs more cpu cycles you can scale up by pinning your virtual machine to multiple cores.

The taskset utility gives you two ways to apply this to your virtual machine process: you can set the cpu affinity of an already running process, or you can start the process with a specific cpu affinity. The fact that you can change the cpu affinity of an already running process really adds to the flexibility and usefulness of this utility. Let's look at both options for manipulating your kvm virtual machine processes.
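In general terms the two invocation forms look something like this (just a sketch; <mask>, <pid> and the qemu options are placeholders, not values from a real machine):

# start a new process with a given cpu affinity
taskset <mask> qemu-system-x86_64 <qemu options>

# change the cpu affinity of a process that is already running
taskset -p <mask> <pid>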
To start your qemu/kvm virtual machine on logical cpus 0 and 1, use the following command.
taskset 0x00000003 qemu-system-x86_64 -hda windows.img -m 512
When starting a new process with taskset, you specify a mask argument that represents a bitmask of the cpus you want to bind the process to. As you can see, this mask argument is in hexadecimal. The mask value 0x00000003 represents logical cpus 0 and 1, since cpu numbering starts from 0. To verify that your VM is running on logical cpus 0 and 1, run taskset against the process id of your virtual machine. To get the process id of your virtual machine process, use the following command.
[root@localhost ~]# ps -eo pid,comm | grep qemu
 7532 qemu-system-x86
From the output above, the process id of the qemu/kvm process is 7532. Run the following command to verify the cpu affinity of the virtual machine process.
[root@localhost ~]# taskset -p 7532
pid 7532's current affinity mask: 3
This says that the bitmask representing the cpu affinity of the process is 3, which is the machine friendly representation. For a more human friendly verification, run the following command instead.
[root@localhost ~]# taskset -c -p 7532
pid 7532's current affinity list: 0,1
This is much more human friendly and verifies that your virtual machine is running on logical cpus 0 and 1.
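If the hexadecimal mask still feels opaque, it helps to remember that bit n of the mask corresponds to logical cpu n. A few illustrative values (not tied to any particular machine):

0x00000001 -> logical cpu 0
0x00000002 -> logical cpu 1
0x00000003 -> logical cpus 0 and 1
0x00000004 -> logical cpu 2
0x00000005 -> logical cpus 0 and 2
0x000000FF -> logical cpus 0 through 7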
As mentioned earlier, taskset also lets you change the cpu affinity of an already running virtual machine process. Let's change it so that it only runs on logical cpu 0. Using the same process id from the section above, run the following command.
[root@localhost ~]# taskset -p 0x00000001 7532
pid 7532's current affinity mask: 3
pid 7532's new affinity mask: 1
Taskset sets the affinity and shows the old and new bitmasks, but you will also want to verify it the human friendly way with the following command.
[root@localhost ~]# taskset -c -p 7532
pid 7532's current affinity list: 0
You can see that your virtual machine process is running on logical cpu 0 only.
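If you later want to move the same virtual machine onto different cores, taskset also accepts a plain cpu list together with -c when setting affinity. A quick sketch, reusing the example process id from above and assuming the host has at least four logical cpus:

[root@localhost ~]# taskset -c -p 2,3 7532
pid 7532's current affinity list: 0
pid 7532's new affinity list: 2,3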
As multicore technology continues to increase the number of cores, binding virtual machine processes to specific cpus will become more commonplace. This is one clear example of how the qemu/kvm model leverages linux to its advantage. No development at all was needed from the KVM maintainers for users to have access to this practical facility.
Hello Haydn,
I have been looking at the KVM docs to figure out how to control CPU time allocation for KVM, but could not find anything. Is there a KVM-defined tunable configuration setting like reservation and/or CPU Shares for VMware virtual machines? (The only thing I can think of using is the "nice" command.)
Hi Yuksel,
I'm not aware of any features like that; it would have to be done by the Linux kernel itself, since KVM relies on Linux for resource management. It's an interesting question though, and I'll raise it on the IRC channel/mailing list.
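In the meantime, a rough stopgap along the lines of your "nice" idea would be something like the following (only a sketch, reusing the example process id from the article; it nudges scheduler priority rather than guaranteeing any CPU share):

renice -n 10 -p 7532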