When a host is running multiple guest vms, it is vital to ensure they don't interfere with one another. Most operators will want to use hardware restrictions for this purpose, allocating each vm only a specific subset of the host's resources.
This article focuses on safe operations only when setting or changing resource limits. Where changing limits at runtime is dangerous, only the safe method is shown (with a hint on where to find information on the alternative).
Limiting CPU cores and speed
For CPU limits, there are two options to choose from or combine. The easiest (and most talked about) is setting a fixed number of virtual CPU cores for the vm:
virt-install --vcpus 2 # ...
The guest os will see exactly 2 cores, and the host will provide 2 cores at full speed to the vm. The linux kernel will intelligently pick the cpu core used for the vm based on current load, potentially switching the vm to a different core at runtime to prevent underusing existing hardware.
The vcpu count can also be changed for existing vms:
virsh setvcpus my_vm 4 --config
Note that we omitted --live, applying the change only after a reboot. This is the only universally safe method to change the vcpu count; there are options to hotplug cores at runtime, but safety and availability depend on the guest os. See `virsh help setvcpus` if you need to change vcpus on a running guest.
If you want to absolutely guarantee a vm exclusive access to a specific core, specify cpuset=:
virt-install --vcpus 2,cpuset=0-1 # ...
This is called "core pinning" and is usually a bad idea, since the cores will be allocated to the vm and unavailable for other vms or host tasks - even if the owning vm is currently idle. It is only useful for tasks that cannot tolerate any speed degradation, like real-time or low-latency applications.
The second parameter to restrict CPU usage is setting a quota. Quotas work by picking a period in μs and a quota for how many μs worth of compute the vm may use during that period. The default period is 100000 (=100ms), allowing a reasonable burst window while preventing side-effects for other vms. You can decrease the period for faster reaction times at the cost of some overhead and potentially jittering guest vm tasks, or increase it to allow even longer burst duration that may negatively affect other guests.
Quotas can only be set on existing vms, either when shutdown or at runtime:
virsh schedinfo my_vm \
--set global_period=100000 \
--set global_quota=50000 \
--live --config
The quota represents the fraction of a single host cpu core available to the vm: allowing a quota of 50000 within a 100000 period effectively grants half a cpu core. Setting the quota to twice the period instead allows up to 2 full-speed cores.
When combining vcpus with quota, the quota defines the amount of physical core compute available to the vm in total. The quota is then evenly divided between the vcpu cores on the vm.
If your quota allows half a core as in the example above and you set 2 vcpus, the vm will effectively see 2 cpu cores, but each runs at 25% the speed of a single host core. Defining more than 100% quota per core wastes the remainder, as a single vm core can never utilize more than a single host core worth of quota.
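The arithmetic above can be sketched as a quick calculation (the values mirror the earlier example; the variable names are just for illustration):

```shell
period=100000   # 100 ms, the default period
quota=50000     # half a host core in total
vcpus=2

# Percentage of a single host core available to the whole vm...
total_pct=$(( quota * 100 / period ))
# ...and per vcpu, since the quota is divided evenly between vcpus.
per_vcpu_pct=$(( total_pct / vcpus ))

echo "total: ${total_pct}%, per vcpu: ${per_vcpu_pct}%"   # prints "total: 50%, per vcpu: 25%"
```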
Global quotas work for most users, but if you need more fine-grained control, you can instead set separate quotas for vcpus only (vcpu_quota), emulated graphics/usb devices (emulator_quota), and disk/network i/o (iothread_quota).
Limiting memory
The only memory limit that works across all guests is a simple hard limit:
virt-install --memory 4096 # ...
The --memory flag takes a single integer, the amount of memory in MiB.
For existing vms, you can change the memory size:
virsh setmem my_vm 4096M --config
Note we omit --live so changes only take effect after the vm reboots, and note the explicit M unit suffix - unlike virt-install, virsh setmem defaults to KiB. Changing memory at runtime can be destructive and requires specific guest os support; see virsh help setmem.
There are other options like ballooning and dynamic soft/hard limits, but those require specific guest os support.
Controlling disk size
For disks, there are three limits to keep in mind: storage size, read/write bandwidth and operations per second.
A disk size limit is implicitly enforced during vm creation, although that may not be obvious from all commands. Some install commands may look like:
virt-install --install fedora29 --unattended
virt-install --cdrom ubuntu.iso --os-variant ubuntu24.04
Both of these commands will create virtual disks, with sizes inferred from libosinfo data via the --install or --os-variant flags. While this keeps commands short, you should always set disk sizes manually so you don't get surprised by unexpectedly large guest disks eating up the host disk space.
For the sample commands above, a manual disk size is set by simply providing a minimal --disk flag with a size parameter:
virt-install --install fedora29 --unattended --disk size=20
virt-install --cdrom ubuntu.iso --os-variant ubuntu24.04 --disk size=20
The value of the size= parameter is the maximum disk size in GiB - this can be larger than the actual host disk. Make sure to add all maximum vm disk sizes together and ensure they do not exceed the maximum host disk capacity. Running out of disk space on the host may deadlock or crash vms or the entire host system!
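A quick way to sanity-check that budget is to add up the planned maximum disk sizes and compare the total to the host capacity. A minimal sketch (the sizes and capacity below are made-up examples):

```shell
# Planned maximum disk sizes in GiB, one entry per vm disk.
vm_disks="20 20 40"
host_capacity=100   # usable GiB on the host volume

total=0
for size in $vm_disks; do
    total=$(( total + size ))
done

if [ "$total" -gt "$host_capacity" ]; then
    echo "overcommitted: ${total} GiB planned vs ${host_capacity} GiB available"
else
    echo "ok: ${total} of ${host_capacity} GiB allocated"   # prints "ok: 80 of 100 GiB allocated"
fi
```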
Throttling disk speeds
Restricting maximum disk speeds is often overlooked by inexperienced administrators, but leaving them unrestricted can be dangerous. There are two different throttling limits you should set in production: read/write bandwidth and number of operations per second (iops).
Both are throttled through iotune parameters - bandwidth in bytes per second, operations as iops:
virt-install \
--disk size=20,bus=virtio,iotune.read_bytes_sec=10485760,iotune.write_bytes_sec=10485760 # ...
virt-install \
--disk size=20,bus=virtio,iotune.read_iops_sec=1000,iotune.write_iops_sec=1000 # ...
These can also be safely changed for existing/running vms:
virsh blkiotune my_vm \
--device-read-bytes-sec /dev/sda,10485760 \
--device-write-bytes-sec /dev/sda,10485760 \
--live --config
virsh blkiotune my_vm \
--device-read-iops-sec /dev/sda,1000 \
--device-write-iops-sec /dev/sda,1000 \
--live --config
Adjust the device path (/dev/sda here) to the host block device backing your vm storage, and adjust the values for your disk, keeping in mind that all vms will share the total limits imposed by the physical disk. An HDD will typically provide 80-200 iops total due to the overhead of seeking (spin-up times, moving the read head etc), while SSDs range anywhere from 80,000 iops to over a million. Because of this, iops limits are much more important for HDD storage than for modern SSD or NVMe backed disks.
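To put these numbers in perspective, the effective bandwidth implied by an iops limit depends on the request size. A rough sketch (the 4 KiB request size is an assumption for small random i/o, not a libvirt default):

```shell
iops_limit=1000
request_size=4096          # bytes per request, assuming small 4 KiB i/o
bytes_sec_limit=10485760   # the 10 MiB/s bandwidth limit from the example above

# Throughput ceiling imposed by the iops limit alone, in bytes/s.
iops_bound=$(( iops_limit * request_size ))

# Whichever limit is lower wins for this request size.
if [ "$iops_bound" -lt "$bytes_sec_limit" ]; then
    echo "iops limit dominates: $(( iops_bound / 1048576 )) MiB/s"
else
    echo "bandwidth limit dominates"
fi
```

For large sequential requests the bandwidth limit dominates instead, which is why setting both is worthwhile.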
The blkiotune command applies limits at the level of host block devices, affecting all guest disks backed by that device. If you need different limits per guest disk, you can set those too. Start by listing available disks for the vm:
virsh domblklist my_vm
Output will list all configured disks, one per line:
vda /var/lib/libvirt/images/my_vm.qcow2
vdb /var/lib/libvirt/images/my_vm-1.qcow2
vdc /var/lib/libvirt/images/my_vm-2.qcow2
Pick the disk you want to limit, then reference it by name when setting limits:
virsh blkdeviotune my_vm vda \
--read-bytes-sec 10485760 \
--write-bytes-sec 10485760 \
--read-iops-sec 1000 \
--write-iops-sec 1000 \
--live --config
There are more fine-grained limits available, see virsh help blkdeviotune for a complete list.
Limiting network speed
First, find the name of the network interface you want to limit:
virsh domiflist my_vm
The output may look like this:
Interface Type Source Model MAC
------------------------------------------------------------
vnet35 network default e1000 52:54:00:f9:b3:a7
In this example, we will limit the network interface named vnet35; adjust this to fit your setup.
Every attached network device can have limits for upload and download bandwidth, most often set through the average parameter:
virsh domiftune my_vm vnet35 \
--inbound 600 --outbound 600
The example limits network bandwidth to a constant 600KB/s in each direction.
Setting constant speeds can be problematic for some workloads or unnecessarily slow down short-lived operations like package upgrades, cache refreshing etc. As a countermeasure, you can allow a higher bandwidth for a limited amount of data, before falling back to the average limit.
This works by specifying a peak bandwidth in KB/s and a burst limit in KB after which to return to the average speed. The values for --inbound/--outbound are given in the order average,peak,burst, separated by commas:
virsh domiftune my_vm vnet35 \
--inbound 600,1000,5000 \
--outbound 600,1000,5000
The example still enforces a 600KB/s average limit for up/downloads, but allows up to 1MB/s for the first 5MB (5000KB). Burst credit is replenished whenever the vm uses less bandwidth than the average allows, again up to the maximum burst limit.
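The burst numbers translate into a time window like this - a rough sketch of the inbound example above, ignoring credit replenishment during the transfer:

```shell
average=600   # KB/s sustained limit
peak=1000     # KB/s allowed while burst credit remains
burst=5000    # KB of data that may be transferred at peak speed

# How long a fresh download can run at peak speed before dropping to average.
burst_seconds=$(( burst / peak ))
echo "peak speed lasts about ${burst_seconds}s"   # prints "peak speed lasts about 5s"
```

In other words, only transfers shorter than roughly 5 seconds benefit from the full peak speed; anything longer averages out toward the 600KB/s limit.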