Just because libvirt supports a large array of technologies doesn’t mean you have to be familiar with all of them. You can focus on a single technology, like KVM, and build your libvirt experience around that. This article tries to give a comprehensive overview of the technology, based on the author’s personal experience with it.
To get the hang of what libvirt is capable of and how you can use it on your own system, you can follow these guides:
If you are already familiar with tools like virsh, virt-install, virt-manager, oVirt, etc., then you are already using libvirt without even knowing it. The aforementioned tools use libvirt in the backend and provide a user-friendly interface on top of it, be it command line or GUI.
Libvirt is designed to work with any hypervisor and has grown over the years to work with a wide array of hypervisors. The libvirt daemon exposes an API that can be used by apps like virt-manager or virsh (and even your custom Python scripts). User requests are received by this API. These requests could be anything from “create a KVM guest” to “show me the memory used by a given LXC container.”
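For instance, both kinds of request can be issued through virsh; the guest names below (VM1.xml, ct1) are just placeholders:

```
$ virsh --connect qemu:///system create VM1.xml   # boot a transient KVM guest from its XML definition
$ virsh --connect lxc:/// dominfo ct1             # show memory and CPU usage of an LXC container
```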
The libvirt daemon then delegates the request to the appropriate libvirt hypervisor driver. This driver understands and implements all the specifics of a given virtualization technology and carries out the instructions accordingly.
There are also separate classes of drivers for handling the storage and networking of VMs.
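Which hypervisor driver handles your requests is decided by the connection URI you use. A few illustrative examples (the URIs are standard, but whether each driver is available depends on how your libvirt was built):

```
$ virsh --connect qemu:///system list --all   # QEMU/KVM driver, system-wide daemon
$ virsh --connect qemu:///session list --all  # QEMU/KVM driver, per-user session
$ virsh --connect lxc:/// list --all          # LXC container driver
```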
Pools and Volumes
VMs need a lot of storage, and the storage technology itself varies widely from hypervisor to hypervisor. VMware uses its own vmdk format, QEMU likes to use qcow2, there are also raw disk images, and LXC images are a different story altogether. Moreover, you may want to group all the VM disk images together and keep them on a dedicated storage medium, like an NFS server, a ZFS dataset or just a directory. This allows you to use libvirt across a variety of different use cases, from a single home server to an enterprise-grade scalable virtualization solution.
In libvirt vernacular, a single virtual storage device associated with a VM (the qcow2, raw or vmdk image file of a VM, or a mountable ISO) is known as a volume. The storage media used on the host to store a group of associated volumes is known as a pool. You can use an NFS server as a pool, or a ZFS dataset, as previously mentioned. If you don’t have a fancy storage solution, you can simply use a directory.
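For example, you can enumerate the pools on a host and the volumes inside one of them with virsh; the pool name default and the volume name VM1.qcow2 below are only illustrative:

```
$ virsh pool-list --all                     # all storage pools, active or not
$ virsh vol-list default                    # volumes in the pool named "default"
$ virsh vol-info VM1.qcow2 --pool default   # capacity and allocation of one volume
```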
By default, libvirt uses two different pools: /var/lib/libvirt/images and /var/lib/libvirt/boot. Volumes for a single VM can be split across multiple pools. For example, I store all the clean cloud images and OS installer ISOs in the /var/lib/libvirt/boot pool, and each VM’s root filesystem is installed in an image file stored in /var/lib/libvirt/images.
You can even have a single pool for a single VM, or you can split pools further for VM snapshots, backups, etc. It’s all very flexible and allows you to organize your data at your convenience.
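Creating such a pool is straightforward. As a sketch, a directory-backed pool for snapshots (the name and path here are hypothetical) could be set up like this:

```
$ virsh pool-define-as snapshots dir --target /var/lib/libvirt/snapshots
$ virsh pool-build snapshots       # create the backing directory on disk
$ virsh pool-start snapshots       # activate the pool
$ virsh pool-autostart snapshots   # activate it automatically on host boot
```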
Virsh is a popular tool to configure everything from your VMs to virtual networking and even storage. The configuration files themselves live in XML format. You will find yourself issuing commands like:
$ virsh edit VM1
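The file this command opens for editing is plain XML, which also means you can post-process dumped definitions with ordinary tooling. Here is a minimal sketch in Python using only the standard library; the domain XML below is a made-up example of the kind of output virsh dumpxml produces:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical domain definition (real ones carry many more elements).
DOMAIN_XML = """
<domain type='kvm'>
  <name>VM1</name>
  <memory unit='KiB'>2097152</memory>
  <vcpu>2</vcpu>
</domain>
"""

root = ET.fromstring(DOMAIN_XML)
name = root.findtext('name')               # guest (domain) name
memory_kib = int(root.findtext('memory'))  # memory in KiB, per the unit attribute
vcpus = int(root.findtext('vcpu'))         # number of virtual CPUs

print(f"{name}: {memory_kib // 1024} MiB, {vcpus} vCPUs")
```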
And similarly, there are subcommands like net-dumpxml and pool-edit to view or modify the configuration of pools, networks, etc. If you are curious about where these configuration files live, you can go to /etc/libvirt/ and find the directory for your hypervisor. The parent directory /etc/libvirt/ itself contains a lot of global configuration, like per-driver settings (e.g., qemu.conf and lxc.conf) and the default behaviour of libvirt.
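For instance, dumping the definition of the default network typically yields XML along these lines (the exact addresses, plus extra elements like uuid and mac, will vary from host to host):

```
$ virsh net-dumpxml default
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
```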
To look at the configuration of individual components like VMs, pools and volumes, you have to go to the corresponding directories. For QEMU guests this is /etc/libvirt/qemu:
drwxr-xr-x 4 root root 4096 Apr 21 10:39 .
drwxr-xr-x 6 root root 4096 Apr 28 17:19 ..
drwxr-xr-x 2 root root 4096 Apr 21 10:39 autostart
drwxr-xr-x 3 root root 4096 Apr 14 13:49 networks
-rw------- 1 root root 3527 Apr 20 19:10 VM1.xml
-rw------- 1 root root 3527 Apr 20 19:09 VM2.xml
The autostart directory will contain symlinks to VM1.xml and VM2.xml if you have configured the VMs to autostart when the host system boots ($ virsh autostart VM1).
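So after marking a guest for autostart, you should see something like this (the timestamp and sizes are illustrative):

```
$ virsh autostart VM1
$ ls -l /etc/libvirt/qemu/autostart/
lrwxrwxrwx 1 root root 25 Apr 21 10:39 VM1.xml -> /etc/libvirt/qemu/VM1.xml
```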
Similarly, /etc/libvirt/qemu/networks contains the configuration for the default network of QEMU guests, and /etc/libvirt/storage contains the XMLs defining the storage pools.
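As an illustration, a simple directory-backed pool definition in /etc/libvirt/storage looks roughly like this (the uuid element that libvirt adds is omitted here):

```
<pool type='dir'>
  <name>default</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>
```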
If you are interested in setting up your own virtualization host, a good place to start would be this article, where I show how to install QEMU-KVM guests on a Debian host using libvirt and related tools.
After that you can start playing with the virsh CLI to see and manage entities like domains (libvirt calls a guest VM a domain), networks, storage pools and volumes. This will make you comfortable enough with the technology to move on to other concepts like snapshots and network filters. I hope this article proves to be a good starting point for you.