KVM/QEMU Virtual Machine Management
Complete CLI reference for creating and managing virtual machines without GUI tools.
Quick Start Checklist
For experienced users - minimal viable setup:
sudo pacman -S libvirt qemu-full virt-manager dnsmasq edk2-ovmf swtpm
sudo systemctl enable --now libvirtd virtlogd
sudo usermod -aG libvirt $(whoami)
sudo virsh net-define /usr/share/libvirt/networks/default.xml
sudo virsh net-start default
sudo virsh net-autostart default
sudo nvim /etc/nftables.conf
Add virbr0 INPUT/FORWARD rules + NAT masquerade, then reload:
sudo nft -f /etc/nftables.conf
sudo systemctl enable --now nftables
bash ~/bin/kvm-preflight.sh
First VM (Windows 11):
sudo cp ~/Downloads/Win11*.iso /var/lib/libvirt/images/
sudo virt-install \
--name=win11 \
--vcpus=4 --ram=8192 \
--os-variant=win11 \
--boot uefi \
--cdrom=/var/lib/libvirt/images/Win11*.iso \
--disk path=/var/lib/libvirt/images/win11.qcow2,size=64,bus=sata \
--network network=default,model=e1000e \
--graphics spice \
--noautoconsole
sudo virt-viewer win11
Prerequisites & Setup
System Requirements
lscpu | grep -i virtualization
Must show: VT-x (Intel) or AMD-V (AMD)
lsmod | grep kvm
Should show: kvm_intel or kvm_amd
df -h /var/lib/libvirt/images
Recommended: 100GB+ free
Install Required Packages (Arch)
sudo pacman -S \
libvirt \
qemu-full \
virt-manager \
dnsmasq \
edk2-ovmf \
swtpm
sudo pacman -S \
virt-viewer \
libguestfs \
guestfs-tools
Enable Services
sudo systemctl enable --now libvirtd virtlogd
systemctl status libvirtd virtlogd
sudo usermod -aG libvirt $(whoami)
NOTE: Log out and back in for group membership to take effect.
groups | grep libvirt || echo "ERROR: Re-login required"
Setup Default Network
sudo virsh net-list --all
sudo virsh net-define /usr/share/libvirt/networks/default.xml
sudo virsh net-start default
sudo virsh net-autostart default
sudo virsh net-list --all
Should show: default active yes
sudo virsh net-dumpxml default
Default subnet: 192.168.122.0/24
Default gateway: 192.168.122.1
DHCP range: 192.168.122.2-192.168.122.254
Firewall Configuration (CRITICAL)
Without these rules, VMs will have NO network access on Arch Linux.
The default nftables.conf has policy drop on INPUT and FORWARD chains, blocking:
- DHCP requests from VMs → dnsmasq (UDP 67)
- DNS queries from VMs → dnsmasq (UDP/TCP 53)
- All traffic forwarding through the host (VM → internet)
Edit /etc/nftables.conf:
#!/usr/bin/nft -f
flush ruleset
table inet filter {
chain input {
type filter hook input priority filter
policy drop
# Allow established/related connections
ct state invalid drop comment "drop invalid"
ct state {established, related} accept comment "allow tracked"
# Allow loopback
iif lo accept comment "allow loopback"
# Allow ICMP
ip protocol icmp accept comment "allow icmp"
meta l4proto ipv6-icmp accept comment "allow icmp v6"
# Allow SSH
tcp dport 22 accept comment "allow sshd"
# CRITICAL: Allow DHCP and DNS for libvirt VMs
iif "virbr0" udp dport 67 accept comment "DHCP for VMs"
iif "virbr0" udp dport 53 accept comment "DNS for VMs"
iif "virbr0" tcp dport 53 accept comment "DNS TCP for VMs"
# Reject everything else
pkttype host limit rate 5/second counter reject with icmpx type admin-prohibited
counter
}
chain forward {
type filter hook forward priority filter
policy drop
# CRITICAL: Allow VM traffic forwarding
iif "virbr0" accept comment "allow VM outbound"
oif "virbr0" ct state {established,related} accept comment "allow VM inbound"
}
chain output {
type filter hook output priority filter
policy accept
}
}
NAT Masquerade (Required for VM Internet Access):
Add this NAT table to /etc/nftables.conf - without it, VMs can communicate with the host but NOT reach the internet:
table inet nat {
chain postrouting {
type nat hook postrouting priority srcnat
policy accept
# Masquerade VM traffic going to internet (not staying on virbr0)
ip saddr 192.168.122.0/24 oifname != "virbr0" masquerade comment "NAT for libvirt VMs"
}
}
Reload firewall:
sudo nft -f /etc/nftables.conf
sudo nft list ruleset | grep virbr0
sudo nft list table inet nat
NOTE: Configure firewall AFTER starting libvirtd and default network (so virbr0 exists).
Pre-Flight Check Script
Save as ~/bin/kvm-preflight.sh and run before creating VMs:
#!/bin/bash
# KVM/QEMU Pre-Flight Check
# Usage: bash ~/bin/kvm-preflight.sh
set -euo pipefail
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
echo "========================================"
echo " KVM/QEMU Pre-Flight Check"
echo "========================================"
echo ""
check_pass() { echo -e "${GREEN}[OK]${NC} $1"; }
check_fail() { echo -e "${RED}[FAIL]${NC} $1"; }
check_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
# 1. CPU Virtualization
echo "=== CPU Virtualization ==="
if VIRT=$(lscpu | grep -i virtualization | awk '{print $2}'); then
check_pass "CPU virtualization: $VIRT"
else
check_fail "No virtualization support - enable VT-x/AMD-V in BIOS"
fi
echo ""
# 2. KVM Module
echo "=== KVM Kernel Module ==="
if lsmod | grep -q kvm; then
check_pass "KVM module loaded"
lsmod | grep kvm | sed 's/^/ /'
else
check_fail "KVM module not loaded"
echo " Run: sudo modprobe kvm_intel (or kvm_amd)"
fi
echo ""
# 3. Required Packages
echo "=== Required Packages ==="
for pkg in libvirt qemu-full virt-manager dnsmasq edk2-ovmf swtpm; do
if pacman -Q "$pkg" &>/dev/null; then
VERSION=$(pacman -Q "$pkg" | awk '{print $2}')
check_pass "$pkg ($VERSION)"
else
check_fail "$pkg not installed"
echo " Install: sudo pacman -S $pkg"
fi
done
echo ""
# 4. Services
echo "=== Services ==="
for svc in libvirtd virtlogd; do
if systemctl is-active "$svc" &>/dev/null; then
check_pass "$svc is running"
else
check_fail "$svc is stopped"
echo " Start: sudo systemctl enable --now $svc"
fi
done
echo ""
# 5. Group Membership
echo "=== Group Membership ==="
if groups | grep -q libvirt; then
check_pass "User $(whoami) in libvirt group"
else
check_fail "User not in libvirt group"
echo " Run: sudo usermod -aG libvirt $(whoami)"
echo " Then: logout/login"
fi
echo ""
# 6. Default Network
echo "=== Default Network ==="
if sudo virsh net-list --all 2>/dev/null | grep -q default; then
NET_STATE=$(sudo virsh net-list --all | grep default | awk '{print $2}')
NET_AUTO=$(sudo virsh net-list --all | grep default | awk '{print $3}')
if [[ "$NET_STATE" == "active" && "$NET_AUTO" == "yes" ]]; then
check_pass "default network (State: $NET_STATE, Autostart: $NET_AUTO)"
else
check_warn "default network exists but not fully configured"
[[ "$NET_STATE" != "active" ]] && echo " Start: sudo virsh net-start default"
[[ "$NET_AUTO" != "yes" ]] && echo " Autostart: sudo virsh net-autostart default"
fi
else
check_fail "default network not found"
echo " Define: sudo virsh net-define /usr/share/libvirt/networks/default.xml"
echo " Start: sudo virsh net-start default"
echo " Autostart: sudo virsh net-autostart default"
fi
echo ""
# 7. Firewall - virbr0 rules
echo "=== Firewall (nftables) ==="
if sudo nft list ruleset 2>/dev/null | grep -q "virbr0"; then
RULE_COUNT=$(sudo nft list ruleset 2>/dev/null | grep -c "virbr0")
check_pass "virbr0 rules found ($RULE_COUNT rules)"
else
check_fail "No virbr0 rules in nftables"
echo " VMs will NOT get DHCP/DNS or internet"
echo " See: Firewall Configuration section"
fi
echo ""
# 8. UEFI Firmware
echo "=== UEFI Firmware ==="
OVMF_PATH="/usr/share/edk2/x64/OVMF_CODE.secboot.4m.fd"
if [[ -f "$OVMF_PATH" ]]; then
check_pass "OVMF Secure Boot firmware found"
else
check_fail "OVMF firmware missing (needed for Windows 11)"
echo " Install: sudo pacman -S edk2-ovmf"
fi
echo ""
# 9. TPM Emulator
echo "=== TPM Emulator ==="
if command -v swtpm &>/dev/null; then
check_pass "swtpm installed"
else
check_fail "swtpm missing (needed for Windows 11)"
echo " Install: sudo pacman -S swtpm"
fi
echo ""
# 10. Storage Space
echo "=== Storage Directory ==="
IMG_DIR="/var/lib/libvirt/images"
if [[ -d "$IMG_DIR" ]]; then
SPACE=$(df -h "$IMG_DIR" | tail -1 | awk '{print $4}')
PERCENT=$(df -h "$IMG_DIR" | tail -1 | awk '{print $5}')
check_pass "$IMG_DIR exists (Available: $SPACE, Used: $PERCENT)"
else
check_fail "$IMG_DIR does not exist"
fi
echo ""
echo "========================================"
echo " Pre-Flight Check Complete"
echo "========================================"
Make executable and run:
chmod +x ~/bin/kvm-preflight.sh
~/bin/kvm-preflight.sh
Decision Guide
When to use UEFI vs BIOS
| Boot Mode | Use For | Notes |
|---|---|---|
| UEFI | Windows 11, modern Linux | Required for Win11, supports Secure Boot/TPM |
| BIOS | Older OSes, legacy software | Simpler, fewer requirements |
UEFI command: --boot uefi
BIOS command: (omit --boot flag or use --boot hd)
Disk Bus Types
| Bus Type | Performance | Compatibility | When to Use |
|---|---|---|---|
| virtio | Best | Needs drivers | Linux (built-in), Windows after driver install |
| SATA | Good | Universal | Windows install, no driver loading needed |
| IDE | Slowest | Universal | Legacy systems only |
| SCSI | Good | Needs drivers | Enterprise appliances |
Network Models
| Model | Performance | Compatibility | When to Use |
|---|---|---|---|
| virtio | Best | Needs drivers | Linux (built-in), Windows after driver install |
| e1000e | Good | Universal | Windows install, Intel emulation |
| e1000 | Good | Universal | Older systems |
| rtl8139 | Slower | Universal | Very old systems |
Disk Formats
| Format | Features | Use Case |
|---|---|---|
| qcow2 | Thin provisioning, snapshots, compression | General use, best for most VMs |
| raw | No overhead, better performance | Production, databases, pass-through disks |
| vmdk | VMware compatibility | Import from VMware |
Create qcow2:
sudo qemu-img create -f qcow2 disk.qcow2 50G
Create raw:
sudo qemu-img create -f raw disk.raw 50G
Convert:
sudo qemu-img convert -f qcow2 -O raw disk.qcow2 disk.raw
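qcow2's copy-on-write design also supports backing files, handy for spinning up disposable VMs from a golden image. A minimal sketch (base.qcow2 is a hypothetical pre-installed image):
# Create a thin overlay; writes go to the overlay, the base stays untouched
sudo qemu-img create -f qcow2 \
  -b /var/lib/libvirt/images/base.qcow2 -F qcow2 \
  /var/lib/libvirt/images/overlay.qcow2
# Inspect the resulting chain
sudo qemu-img info --backing-chain /var/lib/libvirt/images/overlay.qcow2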
VM Creation Templates
Windows 11 (Production-Ready Script)
Save as ~/bin/create-win11.sh:
#!/bin/bash
# Windows 11 VM Creator
# Usage: ./create-win11.sh [vm-name] [disk-size-gb] [vcpus] [ram-mb]
set -euo pipefail
VM_NAME="${1:-win11}"
DISK_SIZE="${2:-64}"
VCPUS="${3:-4}"
RAM="${4:-8192}"
DISK_PATH="/var/lib/libvirt/images/${VM_NAME}.qcow2"
ISO_DIR="/var/lib/libvirt/images"
# Find Windows 11 ISO
WIN_ISO=$(find "$ISO_DIR" -name "Win11*.iso" -o -name "win11*.iso" | head -1)
if [[ -z "$WIN_ISO" ]]; then
echo "ERROR: No Windows 11 ISO found in $ISO_DIR"
echo "Copy ISO with: sudo cp ~/Downloads/Win11*.iso $ISO_DIR/"
exit 1
fi
# Check for existing VM
if sudo virsh list --all | grep -qw "$VM_NAME"; then
echo "ERROR: VM '$VM_NAME' already exists"
echo "Delete with: sudo virsh undefine $VM_NAME --nvram --remove-all-storage"
exit 1
fi
echo "Creating Windows 11 VM: $VM_NAME"
echo " vCPUs: $VCPUS"
echo " RAM: $RAM MB"
echo " Disk: $DISK_SIZE GB"
echo " ISO: $WIN_ISO"
echo ""
# Choose driver configuration
read -p "Use VirtIO drivers (better performance, harder setup) [y/N]? " -n 1 -r
echo ""
if [[ $REPLY =~ ^[Yy]$ ]]; then
# VirtIO - best performance, requires driver loading
VIRTIO_ISO=$(find "$ISO_DIR" -name "virtio-win*.iso" | head -1)
if [[ -z "$VIRTIO_ISO" ]]; then
echo "ERROR: VirtIO ISO not found. Download with:"
echo " curl -LO https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso"
echo " sudo mv virtio-win.iso $ISO_DIR/"
exit 1
fi
sudo virt-install \
--name="$VM_NAME" \
--arch=x86_64 \
--cpu=host-model \
--vcpus="$VCPUS" \
--ram="$RAM" \
--os-variant=win11 \
--boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=yes \
--tpm backend.type=emulated,backend.version=2.0,model=tpm-tis \
--cdrom="$WIN_ISO" \
--disk path="$DISK_PATH",size="$DISK_SIZE",bus=virtio \
--disk path="$VIRTIO_ISO",device=cdrom \
--network network=default,model=virtio \
--graphics spice,listen=0.0.0.0 \
--video qxl \
--noautoconsole
echo ""
echo "IMPORTANT: During Windows install:"
echo "1. Click 'Load driver' at disk selection"
echo "2. Browse D:\vioscsi\w11\amd64 - load storage driver"
echo "3. Browse D:\NetKVM\w11\amd64 - load network driver"
else
# SATA/e1000e - easy install, no drivers needed
sudo virt-install \
--name="$VM_NAME" \
--arch=x86_64 \
--cpu=host-model \
--vcpus="$VCPUS" \
--ram="$RAM" \
--os-variant=win11 \
--boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=yes \
--tpm backend.type=emulated,backend.version=2.0,model=tpm-tis \
--cdrom="$WIN_ISO" \
--disk path="$DISK_PATH",size="$DISK_SIZE",bus=sata \
--network network=default,model=e1000e \
--graphics spice,listen=0.0.0.0 \
--video qxl \
--noautoconsole
fi
echo ""
echo "VM created successfully!"
echo "Connect with: sudo virt-viewer $VM_NAME"
Make executable:
chmod +x ~/bin/create-win11.sh
~/bin/create-win11.sh
Linux VMs (Quick Templates)
Arch Linux:
sudo virt-install \
--name=arch \
--vcpus=2 --ram=4096 \
--os-variant=archlinux \
--boot uefi \
--cdrom=/var/lib/libvirt/images/archlinux-*.iso \
--disk path=/var/lib/libvirt/images/arch.qcow2,size=30,bus=virtio \
--network network=default,model=virtio \
--graphics spice \
--noautoconsole
Ubuntu Server (headless with serial console; note --extra-args only works with --location, not --cdrom):
sudo virt-install \
--name=ubuntu-srv \
--vcpus=2 --ram=4096 \
--os-variant=ubuntu24.04 \
--boot uefi \
--location=/var/lib/libvirt/images/ubuntu-24.04-live-server-amd64.iso,kernel=casper/vmlinuz,initrd=casper/initrd \
--disk path=/var/lib/libvirt/images/ubuntu-srv.qcow2,size=25,bus=virtio \
--network network=default,model=virtio \
--graphics none \
--console pty,target_type=serial \
--extra-args 'console=ttyS0,115200n8' \
--noautoconsole
Fedora:
sudo virt-install \
--name=fedora \
--vcpus=2 --ram=4096 \
--os-variant=fedora39 \
--boot uefi \
--cdrom=/var/lib/libvirt/images/Fedora-*.iso \
--disk path=/var/lib/libvirt/images/fedora.qcow2,size=30,bus=virtio \
--network network=default,model=virtio \
--graphics spice \
--noautoconsole
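To stamp out several lab VMs from one template, wrap virt-install in a loop; a sketch using the Arch template above with hypothetical names lab1-lab3:
for NAME in lab1 lab2 lab3; do
  sudo virt-install \
    --name="$NAME" \
    --vcpus=2 --ram=4096 \
    --os-variant=archlinux \
    --boot uefi \
    --cdrom=/var/lib/libvirt/images/archlinux-*.iso \
    --disk path=/var/lib/libvirt/images/${NAME}.qcow2,size=30,bus=virtio \
    --network network=default,model=virtio \
    --graphics spice \
    --noautoconsole
done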
Appliance VMs (Cisco ISE, etc.)
Cisco ISE (RHEL-based):
sudo virt-install \
--name=ise-3.2 \
--vcpus=6 --ram=16384 \
--os-variant=rhel8.0 \
--hvm --virt-type=kvm \
--cdrom=/var/lib/libvirt/images/ise-3.2.0.542a.SPA.x86_64.iso \
--disk path=/var/lib/libvirt/images/ise-3.2.qcow2,size=600,bus=virtio,cache=none,io=native \
--network network=default,model=virtio \
--rng /dev/urandom \
--graphics spice,listen=0.0.0.0 \
--noautoconsole
VM Management (virsh)
Basic Operations
List VMs:
sudo virsh list
sudo virsh list --all
sudo virsh list --autostart
sudo virsh list --inactive
Start/Stop/Restart:
sudo virsh start $VM
sudo virsh shutdown $VM
sudo virsh reboot $VM
sudo virsh destroy $VM
sudo virsh reset $VM
Suspend/Resume:
sudo virsh suspend $VM
sudo virsh resume $VM
Autostart:
sudo virsh autostart $VM
sudo virsh autostart $VM --disable
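Bulk operations compose naturally from virsh list --name; a sketch that asks every running VM to shut down, then forces off any stragglers after a grace period:
for VM in $(sudo virsh list --name); do
  sudo virsh shutdown "$VM"
done
sleep 60
# Anything still running after the grace period gets pulled
for VM in $(sudo virsh list --name); do
  sudo virsh destroy "$VM"
done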
Console Access
Graphical console (SPICE/VNC viewer):
sudo virt-viewer $VM
Serial console (headless VMs):
sudo virsh console $VM
Exit with: Ctrl+]
Get SPICE/VNC connection URI:
sudo virsh domdisplay $VM
Example output: spice://127.0.0.1:5900
Remote VNC access:
sudo virsh domdisplay $VM --type vnc
VM Information
Basic info:
sudo virsh dominfo $VM
sudo virsh domstate $VM
Full XML configuration:
sudo virsh dumpxml $VM
sudo virsh dumpxml $VM > ${VM}.xml
Get IP address:
sudo virsh domifaddr $VM
sudo virsh domifaddr $VM --source agent
Network interfaces:
sudo virsh domiflist $VM
Attached disks:
sudo virsh domblklist $VM
sudo virsh domblkinfo $VM vda
Disk I/O stats:
sudo virsh domblkstat $VM vda
Memory stats:
sudo virsh dommemstat $VM
CPU stats:
sudo virsh cpu-stats $VM
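These commands combine into a quick health summary across all VMs; a sketch:
for VM in $(sudo virsh list --all --name); do
  echo "=== $VM ==="
  sudo virsh domstate "$VM"
  # domifaddr only returns data for running VMs with a DHCP lease
  sudo virsh domifaddr "$VM" 2>/dev/null || true
done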
Modify VM Configuration
Edit VM XML directly:
sudo virsh edit $VM
Change memory (VM must be off):
sudo virsh setmaxmem $VM 16G --config
sudo virsh setmem $VM 16G --config
Change vCPUs:
sudo virsh setvcpus $VM 8 --config --maximum
sudo virsh setvcpus $VM 8 --config
Hot-add vCPU (running VM, if supported):
sudo virsh setvcpus $VM 8 --live --config
Attach/detach CD-ROM:
sudo virsh attach-disk $VM /path/to/iso.iso hdc --type cdrom --mode readonly --live
sudo virsh change-media $VM hdc --eject
Attach disk (persistent):
sudo virsh attach-disk $VM \
/var/lib/libvirt/images/extra-disk.qcow2 \
vdb --persistent --subdriver qcow2
Detach disk:
sudo virsh detach-disk $VM vdb --persistent
Add network interface:
sudo virsh attach-interface $VM network default \
--model virtio --mac 52:54:00:aa:bb:cc --persistent
Detach network interface:
sudo virsh detach-interface $VM network --mac 52:54:00:aa:bb:cc --persistent
Snapshots
Create snapshot:
sudo virsh snapshot-create-as $VM \
"snapshot-name" \
"Description of snapshot"
Create snapshot with automatic naming:
sudo virsh snapshot-create-as $VM \
--name "before-update-$(date +%Y%m%d-%H%M)" \
--description "Before system update"
List snapshots:
sudo virsh snapshot-list $VM
sudo virsh snapshot-list $VM --tree
Show snapshot details:
sudo virsh snapshot-info $VM snapshot-name
Revert to snapshot:
sudo virsh snapshot-revert $VM snapshot-name
Delete snapshot:
sudo virsh snapshot-delete $VM snapshot-name
Delete all snapshots:
sudo virsh snapshot-list $VM --name | while read snap; do
sudo virsh snapshot-delete $VM "$snap"
done
Create external snapshot:
sudo virsh snapshot-create-as $VM \
--name external-snap \
--disk-only \
--diskspec vda,snapshot=external
Merge external snapshot back:
sudo virsh blockcommit $VM vda --active --pivot
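External snapshots plus blockcommit enable backing up a running VM: redirect writes into a temporary overlay, copy the now-stable base image, then merge the overlay back. A sketch, assuming a single vda disk at the default path and libvirt's default overlay naming:
VM=myvm
# Writes now go to /var/lib/libvirt/images/myvm.backup-snap
sudo virsh snapshot-create-as $VM --name backup-snap \
  --disk-only --atomic --no-metadata
# Base image is no longer being written; safe to copy live
sudo cp /var/lib/libvirt/images/${VM}.qcow2 /backup/vms/
# Merge the overlay back into the base, then remove the leftover overlay file
sudo virsh blockcommit $VM vda --active --pivot
sudo rm /var/lib/libvirt/images/${VM}.backup-snap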
Delete VM
Undefine (removes VM definition, keeps disk):
sudo virsh undefine $VM
Undefine with UEFI NVRAM:
sudo virsh undefine $VM --nvram
Undefine and remove ALL storage:
sudo virsh undefine $VM --nvram --remove-all-storage
Manual cleanup:
sudo virsh destroy $VM
sudo virsh undefine $VM --nvram
sudo rm -f /var/lib/libvirt/images/${VM}*.qcow2
Storage Management
Disk Images
Create new disk:
sudo qemu-img create -f qcow2 /var/lib/libvirt/images/disk.qcow2 50G
sudo qemu-img create -f raw /var/lib/libvirt/images/disk.raw 50G
Show image info:
sudo qemu-img info /var/lib/libvirt/images/disk.qcow2
Resize disk (GROW only, VM must be off):
sudo qemu-img resize /var/lib/libvirt/images/disk.qcow2 +20G
Convert between formats:
sudo qemu-img convert -f qcow2 -O raw disk.qcow2 disk.raw
sudo qemu-img convert -f vmdk -O qcow2 disk.vmdk disk.qcow2
Compact qcow2:
sudo qemu-img convert -O qcow2 -c disk.qcow2 disk-compressed.qcow2
Check image for corruption:
sudo qemu-img check /var/lib/libvirt/images/disk.qcow2
Benchmark disk performance:
sudo qemu-img bench -c 100 -s 4k /var/lib/libvirt/images/disk.qcow2
Reads are the default; add -w to benchmark writes
Storage Pools
List pools:
sudo virsh pool-list --all
Create directory pool:
sudo virsh pool-define-as \
--name mypool \
--type dir \
--target /path/to/pool
Start and autostart pool:
sudo virsh pool-start mypool
sudo virsh pool-autostart mypool
Refresh pool:
sudo virsh pool-refresh mypool
List volumes in pool:
sudo virsh vol-list mypool
Create volume in pool:
sudo virsh vol-create-as mypool disk1.qcow2 50G --format qcow2
Delete volume:
sudo virsh vol-delete --pool mypool disk1.qcow2
Clone volume:
sudo virsh vol-clone --pool mypool disk1.qcow2 disk2.qcow2
Pool info:
sudo virsh pool-info mypool
sudo virsh pool-dumpxml mypool
Disk I/O Tuning
Set I/O limits (IOPS):
sudo virsh blkdeviotune $VM vda \
--total-iops-sec 1000 \
--write-iops-sec 500
Set throughput limits:
sudo virsh blkdeviotune $VM vda \
--total-bytes-sec 100000000
Show current limits:
sudo virsh blkdeviotune $VM vda
Set disk cache mode (edit XML):
sudo virsh edit $VM
Change: <driver name='qemu' type='qcow2' cache='none'/>
Options: none, writethrough, writeback, directsync, unsafe
Network Management
Default Network
List networks:
sudo virsh net-list --all
Start/stop network:
sudo virsh net-start default
sudo virsh net-destroy default
Autostart:
sudo virsh net-autostart default
sudo virsh net-autostart default --disable
Show network info:
sudo virsh net-info default
sudo virsh net-dumpxml default
Get DHCP leases:
sudo virsh net-dhcp-leases default
Edit network config:
sudo virsh net-edit default
Custom NAT Network
Create custom network:
cat > /tmp/labnet.xml << 'EOF'
<network>
<name>labnet</name>
<forward mode="nat"/>
<bridge name="virbr1"/>
<ip address="10.10.10.1" netmask="255.255.255.0">
<dhcp>
<range start="10.10.10.100" end="10.10.10.200"/>
<host mac="52:54:00:11:22:33" name="server1" ip="10.10.10.50"/>
</dhcp>
</ip>
</network>
EOF
sudo virsh net-define /tmp/labnet.xml
sudo virsh net-start labnet
sudo virsh net-autostart labnet
Bridge Network (Direct Host Access)
For VMs on same network as host:
cat > /tmp/br0.xml << 'EOF'
<network>
<name>host-bridge</name>
<forward mode="bridge"/>
<bridge name="br0"/>
</network>
EOF
sudo virsh net-define /tmp/br0.xml
sudo virsh net-start host-bridge
sudo virsh net-autostart host-bridge
Note: br0 must already exist on host
Port Forwarding (NAT Network)
Forward host port 2222 to VM port 22:
sudo virsh net-update default add-last ip-dhcp-host \
"<host mac='52:54:00:11:22:33' name='vm1' ip='192.168.122.10'/>" \
--live --config
sudo iptables -t nat -I PREROUTING -p tcp --dport 2222 -j DNAT --to 192.168.122.10:22
sudo iptables -I FORWARD -p tcp -d 192.168.122.10 --dport 22 -j ACCEPT
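These iptables rules vanish on reboot. A common way to make them persistent is a libvirt qemu hook that re-applies them whenever the VM starts; a sketch saved as /etc/libvirt/hooks/qemu (vm1, 192.168.122.10, and the ports are the example values above):
#!/bin/bash
# libvirt calls this hook as: qemu <vm-name> <operation> <sub-operation>
GUEST_IP=192.168.122.10
if [ "$1" = "vm1" ] && { [ "$2" = "start" ] || [ "$2" = "reconnect" ]; }; then
  iptables -t nat -I PREROUTING -p tcp --dport 2222 -j DNAT --to ${GUEST_IP}:22
  iptables -I FORWARD -p tcp -d $GUEST_IP --dport 22 -j ACCEPT
fi
Then make it executable:
sudo chmod +x /etc/libvirt/hooks/qemu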
Advanced Features
Shared Folders (9p/VirtFS)
Add to VM XML (virsh edit $VM):
<filesystem type='mount' accessmode='mapped'>
<source dir='/home/user/shared'/>
<target dir='hostshare'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</filesystem>
Mount in Linux guest:
sudo mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/shared
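To make the mount persistent, an fstab entry along these lines should work:
# /etc/fstab
hostshare  /mnt/shared  9p  trans=virtio,version=9p2000.L,_netdev  0  0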
qemu-guest-agent
Install in guest (Arch/Fedora):
sudo pacman -S qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent
Install in guest (Ubuntu/Debian):
sudo apt install qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent
Add channel to VM XML:
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/channel/target/${VM}.org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>
Use on host:
sudo virsh domifaddr $VM --source agent
sudo virsh qemu-agent-command $VM '{"execute":"guest-ping"}'
sudo virsh shutdown $VM --mode agent
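The agent also allows quiescing guest filesystems, useful before snapshotting a running VM:
sudo virsh domfsfreeze $VM
# ... take the snapshot or copy the disk here ...
sudo virsh domfsthaw $VM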
Cloud-Init (Automated Linux Provisioning)
Create meta-data:
cat > meta-data << EOF
instance-id: vm1
local-hostname: ubuntu-vm
EOF
Create user-data:
cat > user-data << EOF
#cloud-config
users:
- name: admin
sudo: ALL=(ALL) NOPASSWD:ALL
ssh_authorized_keys:
- $(cat ~/.ssh/id_ed25519.pub)
packages:
- vim
- htop
runcmd:
- echo "Setup complete" > /tmp/cloud-init-done
EOF
Create ISO:
genisoimage -output cloud-init.iso -volid cidata -joliet -rock user-data meta-data
sudo mv cloud-init.iso /var/lib/libvirt/images/
Attach to VM:
sudo virsh attach-disk $VM /var/lib/libvirt/images/cloud-init.iso hdc --type cdrom --mode readonly --persistent
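Cloud-init pairs best with distro cloud images, which skip the installer entirely. A sketch, assuming an Ubuntu cloud image (noble-server-cloudimg-amd64.img is a hypothetical local copy) and the cloud-init.iso built above:
# Thin overlay keeps the pristine cloud image reusable
sudo qemu-img create -f qcow2 \
  -b /var/lib/libvirt/images/noble-server-cloudimg-amd64.img -F qcow2 \
  /var/lib/libvirt/images/cloud-vm.qcow2 20G
sudo virt-install \
  --name cloud-vm \
  --vcpus 2 --ram 2048 \
  --os-variant ubuntu24.04 \
  --import \
  --disk path=/var/lib/libvirt/images/cloud-vm.qcow2,bus=virtio \
  --disk path=/var/lib/libvirt/images/cloud-init.iso,device=cdrom \
  --network network=default,model=virtio \
  --graphics none \
  --noautoconsole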
CPU Topology
Set sockets/cores/threads (virsh edit):
<vcpu placement='static'>8</vcpu>
<cpu mode='host-passthrough'>
<topology sockets='2' cores='2' threads='2'/>
</cpu>
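The same topology can be requested at creation time; virt-install accepts it as --vcpus suboptions:
sudo virt-install \
  --vcpus 8,sockets=2,cores=2,threads=2 \
  --cpu host-passthrough \
  ... (rest of the template as above)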
USB Passthrough
Identify USB device:
lsusb
Example: Bus 002 Device 003: ID 090c:1000 Silicon Motion, Inc. Flash Drive
lsusb -v -d 090c:1000 2>/dev/null | head -20
Hot-attach USB device (VM running):
cat > /tmp/usb-device.xml << 'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
<source>
<vendor id='0x090c'/>
<product id='0x1000'/>
</source>
</hostdev>
EOF
sudo virsh attach-device $VM /tmp/usb-device.xml --live
sudo virsh dumpxml $VM | grep -A10 hostdev
Detach USB device:
sudo virsh detach-device $VM /tmp/usb-device.xml --live
Permanent USB passthrough:
sudo virsh attach-device $VM /tmp/usb-device.xml --live --config
Or add directly to VM XML (virsh edit $VM):
<devices>
...
<hostdev mode='subsystem' type='usb' managed='yes'>
<source>
<vendor id='0x090c'/>
<product id='0x1000'/>
</source>
</hostdev>
...
</devices>
Alternative: Attach by bus:device:
cat > /tmp/usb-by-bus.xml << 'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
<source>
<address bus='2' device='3'/>
</source>
</hostdev>
EOF
sudo virsh attach-device $VM /tmp/usb-by-bus.xml --live
Common USB use cases:
| Device Type | Vendor:Product | Notes |
|---|---|---|
| Flash Drive | varies | Use vendor:product for persistence |
| YubiKey | 1050:0407 | Security key passthrough |
| USB NIC | varies | For dedicated VM networking |
| Printer | varies | Direct print from VM |
Troubleshooting USB passthrough:
lsusb | grep 090c
sudo virsh dumpxml $VM | grep -A10 hostdev
sudo virsh destroy $VM
sudo virsh start $VM
Nested Virtualization
Enable on host (Intel):
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-intel.conf
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel
Enable on host (AMD):
echo "options kvm_amd nested=1" | sudo tee /etc/modprobe.d/kvm-amd.conf
sudo modprobe -r kvm_amd
sudo modprobe kvm_amd
Verify:
cat /sys/module/kvm_intel/parameters/nested
Should show: Y
Enable in VM (virsh edit):
<cpu mode='host-passthrough' check='none'>
<feature policy='require' name='vmx'/> <!-- Intel -->
<!-- OR -->
<feature policy='require' name='svm'/> <!-- AMD -->
</cpu>
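Then confirm inside the guest; both checks should succeed if nesting is active:
# Run inside the guest
lscpu | grep -i virtualization
ls /dev/kvm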
Common Workflows
Backup VM
Full backup (VM must be off):
VM=myvm
BACKUP_DIR=/backup/vms
sudo cp /var/lib/libvirt/images/${VM}.qcow2 $BACKUP_DIR/
sudo virsh dumpxml $VM > $BACKUP_DIR/${VM}.xml
cd /var/lib/libvirt/images
sudo tar -czf $BACKUP_DIR/${VM}-$(date +%Y%m%d).tar.gz \
${VM}.qcow2 \
/etc/libvirt/qemu/${VM}.xml
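To back up every defined VM in one pass, loop over virsh list; a sketch (shut VMs down first, as above):
BACKUP_DIR=/backup/vms
for VM in $(sudo virsh list --all --name); do
  echo "Backing up: $VM"
  sudo virsh dumpxml "$VM" > "$BACKUP_DIR/${VM}.xml"
  sudo cp "/var/lib/libvirt/images/${VM}.qcow2" "$BACKUP_DIR/"
done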
Restore VM
VM=myvm
BACKUP_DIR=/backup/vms
sudo cp $BACKUP_DIR/${VM}.qcow2 /var/lib/libvirt/images/
sudo virsh define $BACKUP_DIR/${VM}.xml
sudo virsh start $VM
Clone VM
Clone (VM must be off):
sudo virt-clone \
--original myvm \
--name myvm-clone \
--auto-clone
Clone with custom disk location:
sudo virt-clone \
--original myvm \
--name myvm-clone \
--file /var/lib/libvirt/images/myvm-clone.qcow2
Migrate VM to Another Host
Prerequisites: Shared storage OR manual disk copy
On source host: Export VM:
sudo virsh dumpxml $VM > ${VM}.xml
sudo virsh shutdown $VM
Copy disk to destination:
scp /var/lib/libvirt/images/${VM}.qcow2 dest-host:/var/lib/libvirt/images/
Copy XML to destination:
scp ${VM}.xml dest-host:/tmp/
On destination host: Import:
ssh dest-host "sudo virsh define /tmp/${VM}.xml"
ssh dest-host "sudo virsh start $VM"
On source host: Undefine (optional):
sudo virsh undefine $VM
Increase Disk Size (Live)
For a running VM, blockresize grows the image and notifies the guest in one step (qemu-img refuses to resize an image a running VM holds locked):
sudo virsh blockresize $VM vda 100G
For a stopped VM, resize the qcow2 image directly:
sudo qemu-img resize /var/lib/libvirt/images/$VM.qcow2 +20G
In Linux guest, extend partition/filesystem:
sudo growpart /dev/vda 1
sudo resize2fs /dev/vda1
Or for xfs:
sudo xfs_growfs /
Convert VMware VMDK to KVM
Convert disk:
sudo qemu-img convert -f vmdk -O qcow2 \
vm.vmdk \
/var/lib/libvirt/images/vm.qcow2
Create new VM with converted disk:
sudo virt-install \
--name converted-vm \
--ram 4096 --vcpus 2 \
--os-variant linux2022 \
--import \
--disk path=/var/lib/libvirt/images/vm.qcow2,bus=virtio \
--network network=default,model=virtio \
--graphics spice \
--noautoconsole
Troubleshooting
VM Has No Network / Gets 169.254.x.x IP
This is the #1 issue on Arch with nftables.
Root Cause: Firewall blocks DHCP/DNS and forwarding for VMs.
Quick Fix:
sudo nft list ruleset | grep virbr0
If no output, firewall is blocking VMs
sudo nft add rule inet filter input iif "virbr0" udp dport 67 accept
sudo nft add rule inet filter input iif "virbr0" udp dport 53 accept
sudo nft add rule inet filter input iif "virbr0" tcp dport 53 accept
sudo nft add rule inet filter forward iif "virbr0" accept
sudo nft add rule inet filter forward oif "virbr0" ct state established,related accept
sudo virsh destroy $VM && sudo virsh start $VM
Make Permanent: Add to /etc/nftables.conf (see Firewall section), then:
sudo nft -f /etc/nftables.conf
Diagnostic Commands:
echo "=== virbr0 interface ==="
ip link show virbr0 | grep -E "state|inet"
echo "=== VM interfaces attached to bridge ==="
ip link show | grep "master virbr0"
echo "=== DHCP server listening ==="
ss -ulnp | grep :67
echo "=== Firewall INPUT policy ==="
sudo nft list chain inet filter input | head -3
echo "=== Firewall FORWARD policy ==="
sudo nft list chain inet filter forward | head -3
echo "=== DHCP leases ==="
sudo virsh net-dhcp-leases default
echo "=== NAT chain ==="
sudo nft list chain ip libvirt_network guest_nat
Permission Denied Errors
Fix ownership:
sudo chown -R nobody:kvm /var/lib/libvirt/images/
Set SELinux context (if using SELinux):
sudo chcon -t virt_image_t /var/lib/libvirt/images/*.qcow2
Check libvirt user:
grep ^user /etc/libvirt/qemu.conf
Should be: user = "nobody" (Arch) or "libvirt-qemu" (others)
VM Won’t Start
Check detailed logs:
sudo journalctl -u libvirtd -xe
Check VM-specific log:
sudo tail -50 /var/log/libvirt/qemu/${VM}.log
Remove stale locks:
sudo rm -rf /var/run/libvirt/qemu/${VM}*
sudo virsh start $VM
Can’t Connect to VM Console
Check SPICE/VNC is listening:
sudo ss -tlnp | grep -E "5900|5901|5902"
Get connection URI:
sudo virsh domdisplay $VM
Try VNC instead of SPICE:
sudo virsh edit $VM
Change: <graphics type='vnc' port='5900' listen='127.0.0.1'/>
Connect with VNC client:
vncviewer localhost:5900
Disk Performance Issues
Check I/O scheduler:
cat /sys/block/sda/queue/scheduler
Best for SSDs: [none] or [mq-deadline]
Change cache mode:
sudo virsh edit $VM
Change: <driver name='qemu' type='qcow2' cache='none' io='native'/>
Use raw disk instead of qcow2:
sudo qemu-img convert -f qcow2 -O raw disk.qcow2 disk.raw
Docker + KVM Coexistence (iptables Conflict)
If you have Docker installed: Docker’s iptables FORWARD chain has policy DROP, which blocks VM traffic even if nftables accepts it.
Symptoms:
- VM gets IP via DHCP ✓
- VM can resolve DNS ✓
- VM cannot ping internet ✗
- tcpdump on wlan0 shows zero packets
Diagnosis:
sudo iptables -L FORWARD -n -v | head -3
If "policy DROP" and high packet count = Docker is blocking VMs
Fix: See KVM-Docker Network Conflict troubleshooting for full packet trace analysis.
Quick fix:
sudo iptables -I FORWARD -i virbr0 -j ACCEPT
sudo iptables -I FORWARD -o virbr0 -j ACCEPT
Persistent fix: Create /etc/libvirt/hooks/network:
#!/bin/bash
if [ "$1" = "default" ] && [ "$2" = "started" ]; then
iptables -I FORWARD -i virbr0 -j ACCEPT
iptables -I FORWARD -o virbr0 -j ACCEPT
fi
sudo chmod +x /etc/libvirt/hooks/network
Windows 11 Installation Fails
TPM Error:
which swtpm
sudo virsh dumpxml $VM | grep -A5 tpm
Secure Boot Error:
ls -la /usr/share/edk2/x64/OVMF_CODE.secboot.4m.fd
sudo virsh dumpxml $VM | grep -A3 "os type"
No Disk Visible:
- Using VirtIO: load the storage driver from the VirtIO ISO (D:\viostor\w11\amd64)
- Or use bus=sata for an install that needs no driver loading
Quick Reference
| Task | Command |
|---|---|
| List all VMs | sudo virsh list --all |
| Start VM | sudo virsh start $VM |
| Stop VM (graceful) | sudo virsh shutdown $VM |
| Force stop | sudo virsh destroy $VM |
| Console (GUI) | sudo virt-viewer $VM |
| Console (serial) | sudo virsh console $VM |
| Get IP | sudo virsh domifaddr $VM |
| VM info | sudo virsh dominfo $VM |
| Edit config | sudo virsh edit $VM |
| Snapshot create | sudo virsh snapshot-create-as $VM "snapshot-name" |
| Snapshot restore | sudo virsh snapshot-revert $VM snapshot-name |
| Delete VM + disk | sudo virsh undefine $VM --nvram --remove-all-storage |
| Clone VM | sudo virt-clone --original $VM --name $VM-clone --auto-clone |
| List networks | sudo virsh net-list --all |
| DHCP leases | sudo virsh net-dhcp-leases default |
| Create disk | sudo qemu-img create -f qcow2 disk.qcow2 50G |
| Resize disk | sudo qemu-img resize disk.qcow2 +20G |