Why I chose QEMU directly instead of the virtualisation stack
Today I finally settled the question: QEMU directly vs libvirt/virt-manager, especially for a Windows work VM stored on a USB stick.
Short answer: for this use case, QEMU + one good script beats the whole virtualisation stack.
My context
- The VM disk (LCS.raw) lives on a USB partition (label: LCS_RAW).
- I want a “VM on a stick”: plug USB anywhere → mount → run → done.
- It’s one VM, always the same, for client work (Windows + browser).
- I don’t need snapshots, multi-VM orchestration, XML configs, etc.
Why I didn’t want libvirt / virt-manager here
1. libvirt assumes “local, permanent storage”
Libvirt stores VM definitions in XML pointing to fixed paths like:
- /var/lib/libvirt/images/windows.qcow2

My VM is on a removable USB, which might be:

- /dev/sda2 today
- /dev/sdb2 tomorrow
- /media/ernest/Whatever if Plasma mounts it
- /mnt/LCS when I mount it manually
Libvirt and virt-manager don’t like that kind of instability. They want:
- stable devices
- stable paths
- stable XML definitions
A “VM on a stick” is fundamentally not that.
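For contrast, this is roughly what that fixed-path coupling looks like in a libvirt domain definition. An illustrative fragment, not taken from a real config of mine — the point is that the image path is baked into the XML:

```xml
<!-- Illustrative libvirt disk stanza: the image path is hard-coded. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/windows.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

When the USB stick comes up as a different device or mountpoint, that `<source file=...>` silently points at nothing.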
2. One VM doesn’t need a whole orchestration layer
libvirt + virt-manager are great when:
- I have many VMs.
- I want autostart on boot.
- I need fancy network topologies.
- I want to manage VMs like services.
Here, I have exactly one VM.
I just want a tool that launches it reliably.
3. Fewer layers = fewer things to break
With libvirt:
virt-manager → libvirt → QEMU → KVM → VM
With my approach:
lcs → QEMU → KVM → VM
No background daemons, no XML, no GUI tool that can silently “forget” the VM because the path changed. Just one script that does exactly what I tell it to.
4. Portability
If I move to another machine:
- I install QEMU/KVM.
- I copy ~/.local/bin/lcs.
- I plug the same USB.
- I run lcs.
No need to re-define libvirt domains, re-import images, fix XML paths, etc.
5. Cleaner mental model
- Scripts (update.sh, autorotate-wayland.sh) = automation I edit and run.
- Commands (lcs) = tools that feel native, in ~/.local/bin, no extension.
lcs is a command that knows how to find, mount and launch my VM.
I don’t need a whole “virtualisation stack” to get that.
The design of lcs (my QEMU launcher)
lcs is a Bash script in ~/.local/bin. It is a script disguised as a command.
When I type lcs, it:
- Finds the VM partition by label (LCS_RAW), not by /dev/sdX2.
- Reuses any existing mount (e.g. /media/ernest/...) or mounts it at /mnt/LCS.
- Builds sane defaults for CPU/RAM based on the host.
- Detects KVM and uses hardware acceleration if possible.
- Starts QEMU with:
  - IDE disk (safe default)
  - e1000 network card (works in Windows without drivers)
  - GTK display
  - optional SPICE, virtio disk/net, port forwarding, and virtio ISO
- Logs everything to ~/.local/share/lcs/DATE_lcs.log.
- Cleans up by auto-unmounting only if it mounted the partition.
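The “sane defaults” step is just a percentage plus a clamp. A standalone sketch of the same arithmetic — `default_vm_mb` is a hypothetical helper name for illustration; the script inlines this logic:

```shell
# Sketch of the RAM default used by lcs: ~60% of host RAM in MB,
# clamped to the range [4096, 16384]. Helper name is illustrative.
default_vm_mb() {
  local host_mb=$1
  local mb=$(( host_mb * 60 / 100 ))
  (( mb < 4096 ))  && mb=4096   # never below 4G
  (( mb > 16384 )) && mb=16384  # never above 16G
  echo "$mb"
}

default_vm_mb 8192    # 8 GiB host  → 4915
default_vm_mb 32768   # 32 GiB host → 16384
```

The clamp matters on both ends: tiny laptops still get a usable Windows VM, and big workstations don’t hand half their RAM to a browser VM.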
It also has a CLI and some environment overrides so I can tune things without changing the script.
The lcs script (current version)
Note to future me: this looks long, but it’s basically:
mount → sanity checks → build QEMU args → run → cleanup.
*****************************************
#!/usr/bin/env bash
# LCS Windows VM Launcher (with autodetect + CLI)
# - Reuses any existing mount of LCS_RAW (no duplicate mounts)
# - Only unmounts on exit if this script mounted it
# - Long-option CLI: --help, --spice, --smp, --ram, --virtio-disk, --virtio-net, --e1000, --fwd, --virtio-iso, --device, --vmfile, --no-autounmount
# - Still supports env overrides for power users
set -Eeuo pipefail
usage() {
  cat <<'EOF'
lcs - Launch Windows VM from the LCS_RAW USB partition

Usage: lcs [options]

Display & UI:
  --help              Show this help and exit
  --gtk               Use GTK display (default)
  --spice [PORT]      Enable SPICE (default port 5930 if PORT omitted)

CPU / RAM:
  --smp N             Set vCPUs (default: min(4, nproc))
  --ram SIZE          Set RAM (e.g. 8G, 8192M; default ~60% host, min 4G, max 16G)

Disk / Network:
  --virtio-disk       Use virtio block device (needs Windows virtio storage driver)
  --virtio-net        Use virtio NIC (needs Windows virtio net driver)
  --e1000             Use Intel e1000 NIC (default; no extra drivers)
  --fwd SPEC          Host forwards, e.g. "tcp::13389-:3389"
                      (repeat the key for more: "tcp::13389-:3389,hostfwd=tcp::1222-:22")

Media:
  --virtio-iso PATH   Attach virtio-win drivers ISO (forces attachment)

Device / File:
  --device PATH       Override block device (default /dev/disk/by-label/LCS_RAW)
  --vmfile PATH       Override VM image path (default <mountpoint>/LCS.raw)

Behaviour:
  --no-autounmount    Do NOT unmount on exit (even if we mounted it)

Notes:
  - The launcher auto-detects if the LCS_RAW partition is already mounted anywhere
    (e.g. /media/$USER/...), and reuses that mountpoint. If it's already mounted,
    we will NOT unmount it on exit. If we mount it, we will auto-unmount on exit.
EOF
}
### ---- Defaults (env can override) -----------------------------------------
MOUNTPOINT="${MOUNTPOINT:-/mnt/LCS}"
DEVICE="${DEVICE:-/dev/disk/by-label/LCS_RAW}" # stable by label
LCS_SPICE="${LCS_SPICE:-0}"
SPICE_PORT="${SPICE_PORT:-5930}"
LCS_DISK_IF="${LCS_DISK_IF:-ide}" # 'ide' (safe) or 'virtio'
LCS_VIRTIO_NET="${LCS_VIRTIO_NET:-0}" # 1 => virtio-net, else e1000
LCS_FWD="${LCS_FWD:-}"
LCS_ATTACH_VIRTIO="${LCS_ATTACH_VIRTIO:-0}"
VIRTIO_ISO="${VIRTIO_ISO:-}"
NO_AUTOUNMOUNT="${NO_AUTOUNMOUNT:-0}"
# Logging
LOGDIR="${LOGDIR:-$HOME/.local/share/lcs}"
mkdir -p "$LOGDIR"
LOGFILE="$LOGDIR/$(date +%F_%H-%M-%S)_lcs.log"
exec > >(tee -a "$LOGFILE") 2>&1
echo "[LCS] Log => $LOGFILE"
### ---- CLI parsing ---------------------------------------------------------
while [[ $# -gt 0 ]]; do
  case "$1" in
    -h|--help) usage; exit 0 ;;
    --gtk) LCS_SPICE=0 ;;
    --spice) LCS_SPICE=1; if [[ ${2:-} =~ ^[0-9]+$ ]]; then SPICE_PORT="$2"; shift; fi ;;
    --spice-port) SPICE_PORT="${2:-$SPICE_PORT}"; shift ;;
    --spice-port=*) SPICE_PORT="${1#*=}" ;;
    --smp) LCS_SMP="${2}"; shift ;;
    --smp=*) LCS_SMP="${1#*=}" ;;
    --ram) LCS_RAM="${2}"; shift ;;
    --ram=*) LCS_RAM="${1#*=}" ;;
    --virtio-disk) LCS_DISK_IF="virtio" ;;
    --virtio-net) LCS_VIRTIO_NET=1 ;;
    --e1000) LCS_VIRTIO_NET=0 ;;
    --fwd) LCS_FWD="${2}"; shift ;;
    --fwd=*) LCS_FWD="${1#*=}" ;;
    --virtio-iso) VIRTIO_ISO="${2}"; LCS_ATTACH_VIRTIO=1; shift ;;
    --virtio-iso=*) VIRTIO_ISO="${1#*=}"; LCS_ATTACH_VIRTIO=1 ;;
    --device) DEVICE="${2}"; shift ;;
    --device=*) DEVICE="${1#*=}" ;;
    --vmfile) VMFILE_OVERRIDE="${2}"; shift ;;
    --vmfile=*) VMFILE_OVERRIDE="${1#*=}" ;;
    --no-autounmount) NO_AUTOUNMOUNT=1 ;;
    *) echo "[LCS] Unknown option: $1"; usage; exit 2 ;;
  esac
  shift
done
### ---- Sanity checks -------------------------------------------------------
command -v qemu-system-x86_64 >/dev/null || { echo "[LCS] ERROR: qemu-system-x86_64 not found"; exit 1; }
# Ensure mount directory exists for our preferred mountpoint
[[ -d "$MOUNTPOINT" ]] || sudo mkdir -p "$MOUNTPOINT"
### ---- Auto-detect if device is already mounted anywhere -------------------
ACTIVE_MP=""
if command -v findmnt >/dev/null; then
  ACTIVE_MP="$(findmnt -rn -S "$DEVICE" -o TARGET || true)"
  REALDEV="$(readlink -f "$DEVICE" || true)"
  if [[ -z "$ACTIVE_MP" && -n "$REALDEV" ]]; then
    ACTIVE_MP="$(findmnt -rn -S "$REALDEV" -o TARGET || true)"
  fi
else
  REALDEV="$(readlink -f "$DEVICE" || true)"
  ACTIVE_MP="$(awk -v dev="${REALDEV:-$DEVICE}" '$1==dev {print $2}' /proc/self/mounts | head -n1)"
fi
mounted_here=0
if [[ -n "$ACTIVE_MP" ]]; then
  echo "[LCS] Reusing existing mount: $ACTIVE_MP"
  MOUNTPOINT="$ACTIVE_MP"
else
  echo "[LCS] Mounting $DEVICE at $MOUNTPOINT ..."
  sudo mount "$DEVICE" "$MOUNTPOINT"
  mounted_here=1
fi
# Decide VM file path (allow override)
VMFILE="${VMFILE_OVERRIDE:-$MOUNTPOINT/LCS.raw}"
cleanup() {
  st=$?
  if (( mounted_here == 1 )) && (( NO_AUTOUNMOUNT == 0 )); then
    echo "[LCS] Sync & unmount $MOUNTPOINT ..."
    sync || true
    sudo umount "$MOUNTPOINT" || true
  else
    [[ $mounted_here -eq 1 && $NO_AUTOUNMOUNT -eq 1 ]] && echo "[LCS] Skipping auto-unmount (--no-autounmount)."
  fi
  exit $st
}
trap cleanup EXIT INT TERM
### ---- Validate VM image ---------------------------------------------------
if [[ ! -f "$VMFILE" ]]; then
  echo "[LCS] ERROR: VM file not found: $VMFILE"
  exit 1
fi
### ---- Auto-detect CPU/RAM defaults (overridable) --------------------------
TOTAL_CPUS=$(nproc)
SMP="${LCS_SMP:-$(( TOTAL_CPUS>4 ? 4 : TOTAL_CPUS ))}"
TOTAL_MEM_KB=$(grep -E '^MemTotal:' /proc/meminfo | awk '{print $2}')
HOST_MB=$(( TOTAL_MEM_KB / 1024 ))
DEFAULT_VM_MB=$(( HOST_MB * 60 / 100 )) # 60% host
(( DEFAULT_VM_MB < 4096 )) && DEFAULT_VM_MB=4096
(( DEFAULT_VM_MB > 16384 )) && DEFAULT_VM_MB=16384
RAM="${LCS_RAM:-${DEFAULT_VM_MB}M}"
### ---- KVM detection -------------------------------------------------------
KVM_ARGS=()
if [[ -e /dev/kvm ]] && [[ -r /dev/kvm ]] && id -nG "$USER" | grep -qw kvm; then
  echo "[LCS] KVM available: hardware acceleration ON."
  KVM_ARGS+=( -enable-kvm -cpu host )
else
  echo "[LCS] KVM not available (or user not in 'kvm'); using tcg."
  KVM_ARGS+=( -cpu qemu64 )
fi
### ---- Build QEMU args -----------------------------------------------------
QEMU_ARGS=( "${KVM_ARGS[@]}" -smp "$SMP" -m "$RAM" )
# Display
if [[ "$LCS_SPICE" == "1" ]]; then
  echo "[LCS] SPICE display enabled on port $SPICE_PORT"
  QEMU_ARGS+=( -display gtk )
  QEMU_ARGS+=( -spice "port=${SPICE_PORT},disable-ticketing=on" )
  QEMU_ARGS+=( -device virtio-vga )
  QEMU_ARGS+=(
    -device virtio-serial-pci
    -chardev spicevmc,id=vdagent0,name=vdagent
    -device virtserialport,chardev=vdagent0,name=com.redhat.spice.0
  )
else
  QEMU_ARGS+=( -display gtk )
fi
# Disk
if [[ "$LCS_DISK_IF" == "virtio" ]]; then
  echo "[LCS] Disk bus: virtio"
  QEMU_ARGS+=( -drive id=drv0,file="$VMFILE",format=raw,if=none
               -device virtio-blk-pci,drive=drv0 )
else
  echo "[LCS] Disk bus: IDE (safe)"
  QEMU_ARGS+=( -drive id=drv0,file="$VMFILE",format=raw,if=none
               -device ide-hd,drive=drv0,bus=ide.0 )
fi
# Network (+ optional forwards)
NETDEV_OPTS="user,id=n0"
if [[ -n "$LCS_FWD" ]]; then
  echo "[LCS] Port forwards: $LCS_FWD"
  NETDEV_OPTS="$NETDEV_OPTS,hostfwd=$LCS_FWD"
fi
QEMU_ARGS+=( -netdev "$NETDEV_OPTS" )
if [[ "$LCS_VIRTIO_NET" == "1" ]]; then
  echo "[LCS] NIC: virtio-net"
  QEMU_ARGS+=( -device virtio-net,netdev=n0 )
else
  echo "[LCS] NIC: e1000 (compat)"
  QEMU_ARGS+=( -device e1000,netdev=n0 )
fi
# Virtio ISO (attach if requested or auto-found)
attach_virtio=0
if [[ "$LCS_ATTACH_VIRTIO" == "1" ]]; then
  attach_virtio=1
else
  for iso in "$VIRTIO_ISO" \
             "$HOME/Downloads/virtio-win.iso" \
             "/usr/share/virtio-win/virtio-win.iso" \
             "/usr/share/virtio-win/virtio-win-guest-tools.iso"; do
    [[ -n "$iso" && -f "$iso" ]] && { VIRTIO_ISO="$iso"; attach_virtio=1; break; }
  done
fi
if (( attach_virtio == 1 )); then
  if [[ -n "$VIRTIO_ISO" ]]; then
    echo "[LCS] Attaching virtio ISO: $VIRTIO_ISO"
    QEMU_ARGS+=( -drive "file=$VIRTIO_ISO,media=cdrom,readonly=on" )
  else
    echo "[LCS] Note: --virtio-iso PATH to attach specific ISO."
  fi
fi
### ---- Run ---------------------------------------------------------------
echo "[LCS] Using mountpoint: $MOUNTPOINT"
echo "[LCS] VM image: $VMFILE"
echo "[LCS] QEMU:"
printf ' %q ' qemu-system-x86_64 "${QEMU_ARGS[@]}"; echo
rc=0
qemu-system-x86_64 "${QEMU_ARGS[@]}" || rc=$?  # don't let set -e skip the exit message
echo "[LCS] VM exited with code $rc"
exit "$rc"
*****************************************
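One detail from the script worth remembering: a --fwd spec is simply appended to the user-mode -netdev option string. The same folding in isolation — `build_netdev` is a hypothetical helper name, not part of the script:

```shell
# Sketch: how lcs folds an optional --fwd spec into the -netdev string.
# build_netdev is an illustrative name; the script does this inline.
build_netdev() {
  local opts="user,id=n0" fwd="${1:-}"
  [[ -n "$fwd" ]] && opts="$opts,hostfwd=$fwd"
  echo "$opts"
}

build_netdev ""                    # → user,id=n0
build_netdev "tcp::13389-:3389"    # → user,id=n0,hostfwd=tcp::13389-:3389
```

Since the spec is pasted in verbatim after a single hostfwd= key, several forwards in one run need the key repeated inside the spec itself (e.g. "tcp::13389-:3389,hostfwd=tcp::1222-:22").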
How I actually use it (cheat sheet)
For future me:
- Normal run: lcs
- With SPICE: lcs --spice
- Force RAM/CPU: lcs --ram 8G --smp 4
- Forward RDP: lcs --fwd "tcp::13389-:3389"
- Attach virtio ISO (while installing drivers in Windows): lcs --virtio-iso ~/Downloads/virtio-win.iso
- Toggle fullscreen inside the VM window: Ctrl+Alt+F (GTK display)
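A note on why --spice can take an optional port: the parser peeks at the next argument and only consumes it if it looks numeric. The same pattern in isolation — `spice_port` is a hypothetical helper name for illustration:

```shell
# Sketch of the optional-argument pattern behind "--spice [PORT]":
# consume the next word only if it is all digits, else keep the default.
spice_port() {
  local port=5930 next="${1:-}"
  [[ $next =~ ^[0-9]+$ ]] && port="$next"
  echo "$port"
}

spice_port          # → 5930 (no port given)
spice_port 5999     # → 5999
spice_port --smp    # → 5930 (next arg is another flag, not a port)
```

This is why "lcs --spice --smp 2" does the right thing: --smp is not numeric, so it is left for the next loop iteration instead of being eaten as a port.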
This setup gives me a portable Windows work VM on a USB stick, powered by QEMU directly, without having to drag libvirt/virt-manager into the story.