Getting started with KVM and virsh

Managing virtual machines is a core requirement for many system administrators, and on Linux the default option is KVM with virt-tools. Tools like virsh and virt-install are powerful, but can be confusing when getting started. This guide is not a complete cheatsheet, but rather a shortcut to get up and running with basic VMs quickly, pointing out common pitfalls along the way so you know what to look out for. We will focus on building a single-machine KVM host, with some guidance for highly-available production setups towards the end.

Installation & setup

We need core packages installed and configured to work with KVM.

On Debian-like distros:

sudo apt update
sudo apt install -y \
  qemu-kvm \
  libvirt-daemon-system \
  libvirt-clients \
  virtinst \
  virt-viewer \
  bridge-utils \
  guestfs-tools

There is an issue on Debian 13 resulting in broken network connections when running virt-customize. If you run into this issue, also install these packages:

sudo apt install \
  systemd-resolved \
  dhcpcd-base

On RHEL family distros:

sudo dnf install -y \
  qemu-kvm \
  libvirt \
  libvirt-client \
  virt-install \
  virt-viewer \
  bridge-utils \
  guestfs-tools

Now verify your machine supports virtualization:

virt-host-validate

This checks many of the requirements for KVM to work properly. If any of the first three lines isn't marked as PASS, your system isn't configured correctly.

Troubleshooting is beyond the scope of this guide, but remember that the KVM kernel module needs to be loaded, that not all processors support virtualization, and that even those which do may need it enabled in the BIOS - look for options named "Intel VT-x" or "AMD-V".
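
As a quick sanity check, you can also confirm the CPU flags and kernel modules yourself:

# a non-zero count means the CPU advertises virtualization extensions (vmx = Intel, svm = AMD)
grep -Ec '(vmx|svm)' /proc/cpuinfo
# the kvm module (plus kvm_intel or kvm_amd) should be loaded and /dev/kvm should exist
lsmod | grep kvm
ls -l /dev/kvm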

Using the libvirt system daemon

libvirt provides two different daemons to run VMs on: the "system" daemon running with root privileges and full support for all features, and the "session" daemon for your local user account, with limited feature support.

The "session" daemon is configured for your user by default, causing headaches to users expecting an experience similar to virtualbox or vmware player. It is also not supported by higher-level tools like vagrant or terraform.


In order to allow your user account to interact with the system daemon, you need to be part of the libvirt group:

sudo usermod -aG libvirt $USER
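
Group membership only takes effect for new logins. You can confirm the group is recorded and, if you don't want to log out, start a shell that already has it:

id "$USER" | grep libvirt
newgrp libvirt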

You can tell libvirt to use the system socket by default by setting an environment variable:

export LIBVIRT_DEFAULT_URI=qemu:///system

Add this to the end of your ~/.bashrc to make it permanent for your user, or put it in a script /etc/profile.d/libvirt.sh to apply it to all users' shells. Even if you do that, remember to also set it in your current shell or start a new one.
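
For example:

# for your user only:
echo 'export LIBVIRT_DEFAULT_URI=qemu:///system' >> ~/.bashrc
# or system-wide for all users' shells:
echo 'export LIBVIRT_DEFAULT_URI=qemu:///system' | sudo tee /etc/profile.d/libvirt.sh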


You should now be able to interact with KVM through virsh:

virsh list

Optional:

You will likely get prompted for your password when running virsh. If this is inconvenient, you can give users in the libvirt group access without password confirmation through polkit.

Create a file /etc/polkit-1/rules.d/50-libvirt.rules with contents:

polkit.addRule(function(action, subject) {  
  if ((action.id == "org.libvirt.unix.manage" ||  
    action.id == "org.libvirt.unix.read" ||  
    action.id == "org.libvirt.unix.control") &&  
    subject.isInGroup("libvirt")) {  
    return polkit.Result.YES;  
  }
});

Verifying default resources

There is a default network and a default storage pool that virsh falls back to when you omit specific options, and you should rely on this behavior. Unfortunately, they are not always configured properly after installing the packages, so we will verify and potentially fix them now.


Starting with the network, you should see a network named "default" when running

virsh net-list --all

It should look like this:

Name      State    Autostart    Persistent  
--------------------------------------------  
default   active   yes          yes

Make sure it is active and set to autostart. If it is not started, run

virsh net-start default

If it is not marked for autostart, enable it:

virsh net-autostart default

If you don't have a default network at all, you will need to create one manually:

default.xml

<network>  
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>  
  <ip address='192.168.122.1' netmask='255.255.255.0'>  
    <dhcp>  
      <range start='192.168.122.2' end='192.168.122.254'/>  
    </dhcp>  
  </ip>  
</network>

Define the network and start it:

virsh net-define default.xml
virsh net-start default
virsh net-autostart default

For the storage pool, things will look similar. Start by verifying you have a storage pool named "default":

virsh pool-list --all

The output should look like:

Name       State     Autostart  
--------------------------------------  
default    active    yes  

It's fine if you have other pools, just make sure "default" is there.
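
If you want to double-check where the default pool stores its volumes (usually /var/lib/libvirt/images), dump its definition and look at the target path:

virsh pool-dumpxml default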


If the pool is not active, start it:

virsh pool-start default

If it is not marked for autostart, enable it:

virsh pool-autostart default

If you don't have a default storage pool at all, create one:

virsh pool-define-as default dir --target /var/lib/libvirt/images
virsh pool-build default
virsh pool-start default
virsh pool-autostart default

You should now have a functional KVM / virsh setup that works as expected in most cases. For highly-available production deployments, you can instead back the default storage pool with network storage like GlusterFS or Ceph, keeping your data alive on KVM host failure and enabling fast host migrations.

You can also use raw ZFS or LVM pools for slightly better performance compared to local directory pools, but this is rarely necessary.

Creating a libvirt working directory

Even though the libvirt system daemon runs with root privileges, it will drop to an unprivileged system user when possible, for example when reading CD-ROM or disk images. This means these files need to be accessible by that user, leading to "permission denied" errors if not handled properly.

The best way to deal with this issue is to create a working directory to store the files you want to use with vms. In this guide we will use /opt/kvm, but you can move it to /srv if you prefer (just make sure not to put it in a non-readable directory like paths under /root or /home).

Create the directory and make it owned by your user and the libvirt group, with no access for others:

getent group qemu && LIBVIRT_SYS_USER="qemu" || LIBVIRT_SYS_USER="libvirt-qemu"
sudo mkdir /opt/kvm
sudo chown $USER:$LIBVIRT_SYS_USER /opt/kvm
sudo chmod g+s /opt/kvm
sudo chmod 750 /opt/kvm

This script uses getent to dynamically pick the correct libvirt system user's group name (either qemu on RHEL or libvirt-qemu on Debian), then uses it as the group owner of the working directory. By setting the SGID bit, files and directories created inside inherit this group ownership, allowing libvirt access to the files in most cases. The permissions of 750 are chosen intentionally: your local user gets full access inside, the libvirt system user can read contents, and others cannot even list the directory, since it may contain temporary admin password files for unattended installs, generated SSH keys etc. The libvirt user should not have write permission in the directory, to discourage storing writable disks there, which often leads to problems with SELinux or AppArmor.


If you have any files you want to use with KVM vms, like .iso or .qcow2 images, move them to /opt/kvm and you should avoid permission-related errors. If you still get permission errors, force the directory contents to be group-readable:

sudo chmod -R 750 /opt/kvm

Creating a virtual machine from an ISO

Before you can install a vm, you need to find the identifier for the operating system in the osinfo database. If you can't find an exact match, try an earlier version or a similar distribution (e.g. Ubuntu 22.04 if 24.04 isn't supported, or a Debian/Ubuntu version for Linux Mint).

If nothing matches, pick "generic".
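
To search the database, one of the following usually works, depending on your virt-install version (the osinfo-query tool may need an extra package such as libosinfo-bin on Debian):

virt-install --osinfo list | grep -i debian
osinfo-query os | grep -i debian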

The os identifier will help in picking guest-compatible default values for boot mode (legacy/uefi, secure boot on/off), network device types, graphics settings etc.


With the os identifier and the installer iso file, you can create a new vm:

virt-install \
  --name debian12 \
  --vcpus 2 \
  --memory 4096 \
  --cdrom /opt/kvm/debian12.iso \
  --disk size=20 \
  --osinfo debian12

These are the minimum recommended arguments for creating a new vm. You could let osinfo also pick defaults for CPU count, memory and disk size, but it's better to be explicit here to maintain control over hardware resources.

The --disk parameter takes a comma-separated list of values, but we let virt-install fill most fields for us, like the pool to put the vm disk into ("default"), the name of the vm disk (derived from --name), the bus etc. Only the size=20 parameter is set to allocate a new 20 GiB virtual disk for the vm. You can also reference a specific existing volume manually with vol=pool_name/volume_name.

The disk parameters also support local files with path=/myvm.qcow, but using it can create problems with SELinux contexts and file permissions, so try to avoid it.
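
For illustration, the same vm with the --disk fields spelled out explicitly might look like this (a sketch; the exact defaults virt-install picks for you can differ):

# illustrative only - virt-install would choose similar values on its own
virt-install \
  --name debian12 \
  --vcpus 2 \
  --memory 4096 \
  --cdrom /opt/kvm/debian12.iso \
  --disk pool=default,size=20,format=qcow2,bus=virtio \
  --osinfo debian12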

Creating a vm from a preinstalled disk image

If you instead want to create a vm from a preinstalled cloud/disk image, you can pass it as a backing_store to the --disk parameter. A backing store acts as a readonly layer underneath a writable overlay (the vm disk), so all vms can share the same image without modifying it.
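
This is the same copy-on-write mechanism qemu-img exposes directly; a minimal sketch with hypothetical file names, purely to illustrate the concept (virt-install will set this up for you below):

# create a 20G writable overlay on top of a read-only base image (hypothetical paths)
qemu-img create -f qcow2 -b /opt/kvm/base.qcow2 -F qcow2 /opt/kvm/overlay.qcow2 20G
# inspect the resulting backing chain
qemu-img info --backing-chain /opt/kvm/overlay.qcow2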


Let's go through the process of adding a new vm template, debian 13 in this example.

Start by downloading the image:

cd /opt/kvm
wget 'https://cloud.debian.org/images/cloud/trixie/latest/debian-13-nocloud-amd64.qcow2'

If you want to customize the image, e.g. to set a root password, do that now. If you chose an image with cloud-init support, you can skip this step and use cloud-init instead.

virt-customize -a /opt/kvm/debian-13-nocloud-amd64.qcow2 \
  --root-password password:YOUR_ROOT_PASS_HERE
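
Alternatively, if you chose a cloud-enabled image (e.g. the "genericcloud" variant instead of "nocloud"), newer virt-install versions can seed cloud-init at creation time instead of using virt-customize. A sketch, assuming --cloud-init is available in your version and using a hypothetical image name (the full virt-install invocation is explained in the next step):

# image file name is an example - adjust it to the file you downloaded
virt-install \
  --name debian13-cloud \
  --vcpus 2 \
  --memory 4096 \
  --disk size=20,backing_store=/opt/kvm/debian-13-genericcloud-amd64.qcow2,backing_format=qcow2 \
  --import \
  --osinfo debian13 \
  --graphics none \
  --cloud-init root-password-generate=on,disable=on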

Now you can use the debian 13 image as a template and have a writable overlay disk allocated for you:

virt-install \
  --name debian13 \
  --vcpus 2 \
  --memory 4096 \
  --disk size=20,backing_store=/opt/kvm/debian-13-nocloud-amd64.qcow2,backing_format=qcow2 \
  --import \
  --osinfo debian13 \
  --graphics none \
  --boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no

This is mostly similar to an ISO install, with some notable differences:

  • we use --import instead of --cdrom

  • --disk now also specifies backing_store pointing at the read-only disk image, and backing_format to tell libvirt which file format the backing image uses

  • --graphics none disables allocating VGA/frame buffers, which are usually not necessary for cloud images. If you get no console output, try removing this option

  • --boot is configured to use uefi with secure boot disabled. You likely need this, since many cloud images don't have signed bootloaders.

Since the disk image is already preinstalled, the vm will be available within seconds and doesn't require any installation process - it simply boots. New vms can reuse the same backing image, so there is no need to download it again.

A little warning is in order here: do not remove or alter the backing image file in /opt/kvm while a vm is still bound to it - doing so can cause anything from disk corruption to complete data loss!
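
To check whether a vm still depends on a backing image before touching it, list its disks and inspect the backing chain (the paths below assume the default pool and a disk named after the vm):

virsh domblklist debian13
# adjust the path to whatever domblklist reported
qemu-img info --backing-chain /var/lib/libvirt/images/debian13.qcow2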

Basic VM management

Now that you have a basic VM started, it is time to interact with it. Let's go through the basic commands.


List all running vms:

virsh list

List all vms, including stopped ones:

virsh list --all

Connect to serial console of a vm (primarily for servers / cloud images):

virsh console my_vm

Connect to graphical console of a vm (for desktop vms and those you installed from an ISO):

virt-viewer my_vm

Gracefully shutdown a vm:

virsh shutdown my_vm

Force-stop vm:

virsh destroy my_vm

Destroy in libvirt means "destroy the runtime state", i.e. forcefully power off the machine. It does not remove the vm after stopping it.


Start a stopped vm:

virsh start my_vm

Remove a stopped vm entirely:

virsh undefine --nvram --remove-all-storage my_vm

If you want to keep the virtual disks, omit the --remove-all-storage flag. The --nvram flag removes UEFI-associated runtime data, which isn't necessary for legacy boot, but also doesn't hurt - it's best to pass it every time.

Editing a vm

You can edit a vm after creation when it is turned off. There are options to edit it at runtime, but that's dangerous and limited, thus beyond the scope of this guide.


Make sure the vm is turned off:

virsh destroy my_vm

Then edit the xml configuration:

virsh edit my_vm

Editing the XML configuration is the main way to alter existing vms. Check the libvirt domain XML reference for an explanation of all keys and options.
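
For example, to change the allocated memory and CPU count you would adjust these elements in the XML that opens (values here are illustrative):

<!-- example: 8 GiB of RAM and 4 vCPUs -->
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>8388608</currentMemory>
<vcpu placement='static'>4</vcpu>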

Checking resource usage

The easiest way to check resource usage of all vms is virt-top. Simply run the command to get an interactive viewer:

virt-top

The default view is an overview of basic resource usage per vm. You can check host CPU usage by pressing 1, network stats by pressing 2 or disk I/O stats by pressing 3. Pressing 0 will return you to the overview.

Snapshots and rollbacks

Since vm disks use a copy-on-write format (qcow2 by default), you can easily create near zero-cost snapshots of disks or the entire vm state within seconds.


You can either make a snapshot of only disk contents:

virsh snapshot-create my_vm --disk-only

or the entire vm, including runtime state and memory contents:

virsh snapshot-create my_vm --live

A full snapshot is the default when --disk-only is not passed.


You can list all created snapshots:

virsh snapshot-list my_vm

The output will show both disk-only and full snapshots, as well as runtime state:

Name         Creation Time               State  
---------------------------------------------------------  
1765364110   2025-12-10 11:55:10 +0100   running  
1765370288   2025-12-10 13:38:08 +0100   disk-snapshot  
1765370618   2025-12-10 13:43:38 +0100   shutoff  

Any snapshot can be reverted (even at runtime!) by name:

virsh snapshot-revert my_vm 1234567

Assuming 1234567 is the name of your target snapshot.

Snapshots are stored inside the vm's disk, so deleting the disk also implicitly removes all snapshots.

High-availability production setups

Many production environments need some form of high-availability. This can take many forms, from fast migrations on host failure to hot standby replica vms, but some core rules apply to all of them.


Primarily, disks need to survive a KVM host failure. The only way to achieve this is to use a highly-available storage system, most commonly GlusterFS or Ceph. The "default" storage pool is then backed by this network storage instead of a local directory.


Operating this highly-available setup becomes more cumbersome: adding a template image, for example, now involves first allocating a new volume of sufficient size in the storage pool, then uploading the disk data into it.


It would look like this (assuming the "debian13-nocloud.qcow2" file is < 400M in size):

virsh vol-create-as ha-pool debian13_template 400M --format qcow2
virsh vol-upload --pool ha-pool debian13_template debian13-nocloud.qcow2

Creating a vm backed by this image also requires first creating an empty disk backed by the image, then passing it to the vm during creation:

virsh vol-create-as ha-pool my_vm_disk 20G \
  --format qcow2 --backing-vol debian13_template \
  --backing-vol-format qcow2
virt-install --name sample --vcpus 1 --memory 1024 \
  --disk vol=ha-pool/my_vm_disk --import \
  --osinfo debian13

Note that backing image and vm disk must be stored in the same pool.


Lastly, ISO install files need to be available across hosts, but they don't work well from within storage pools (--cdrom only supports local paths, while --disk with device=cdrom prevents features like auto-ejecting the media after install). A common solution is to set up shared storage like NFS or SMB and auto-mount it on all KVM hosts, so they can use local paths for installations.
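
A sketch of what that could look like with NFS, as an /etc/fstab entry on every KVM host (server name and export path are placeholders):

# hypothetical NFS server and export path
nfs-server.example.com:/export/isos  /opt/kvm/isos  nfs  defaults,_netdev  0  0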
