How To Install KVM on Manjaro

Virtualization has become an essential tool for developers, system administrators, and Linux enthusiasts who need to run multiple operating systems simultaneously. KVM (Kernel-based Virtual Machine) stands out as one of the most powerful and efficient virtualization solutions available for Linux systems. This comprehensive guide walks through the complete process of installing and configuring KVM on Manjaro Linux, enabling the creation and management of high-performance virtual machines.
KVM transforms the Linux kernel into a bare-metal hypervisor, delivering near-native performance for guest operating systems. Unlike proprietary solutions, KVM is completely open-source and cost-effective, eliminating expensive licensing fees while providing enterprise-grade virtualization capabilities. Combined with QEMU for hardware emulation and virt-manager for graphical management, KVM creates a robust virtualization environment perfect for testing software, learning new operating systems, or running production workloads.
Prerequisites and System Requirements
Before diving into the installation process, ensuring the system meets specific hardware and software requirements is crucial for optimal KVM performance.
Hardware Requirements
The CPU must support hardware virtualization extensions—either Intel VT-x for Intel processors or AMD-V for AMD processors. At least 8 GB of RAM is recommended for the host system, though 4 GB can work for light virtualization tasks. Adequate disk space depends on intended usage, but allocating at least 50 GB for virtual machine storage provides flexibility. The processor must be 64-bit (x86_64) to support modern virtualization features.
Software Requirements
A fully updated Manjaro Linux installation forms the foundation for KVM setup. Root or sudo privileges are necessary for installing packages and configuring system services. Basic familiarity with terminal commands streamlines the installation process. An active internet connection enables downloading required packages from Manjaro repositories.
Checking CPU Virtualization Support
Verifying hardware virtualization support prevents installation issues and ensures optimal VM performance.
Verify System Architecture
Open a terminal and execute the following command to confirm the system architecture:
uname -m
The output should display x86_64, indicating a 64-bit system capable of running KVM. Any other output suggests the system may not support KVM virtualization properly.
Check Hardware Virtualization Extensions
The lscpu command reveals whether the CPU supports virtualization extensions:
LC_ALL=C lscpu | grep Virtualization
For Intel processors the output shows VT-x; AMD processors show AMD-V. Either value confirms that the CPU supports hardware-assisted virtualization.
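As a cross-check, the CPU flags in /proc/cpuinfo can be counted directly; a non-zero result means the vmx (Intel) or svm (AMD) flag is present on at least one core:
grep -Ec '(vmx|svm)' /proc/cpuinfo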
Alternatively, check for KVM module support directly:
lsmod | grep kvm
If the command returns kvm_intel or kvm_amd, virtualization modules are already loaded.
Enabling Virtualization in BIOS/UEFI
When virtualization doesn’t appear in the lscpu output, enabling it in the system BIOS or UEFI becomes necessary. Restart the computer and press the appropriate key during startup—commonly F2, F10, F12, Delete, or ESC depending on the manufacturer. Navigate to Advanced Settings, CPU Configuration, or System Configuration menus. Look for options labeled “Intel Virtualization Technology,” “Intel VT-x,” “AMD-V,” or “SVM Mode”. Enable the virtualization option, save changes, and exit the BIOS/UEFI. After rebooting, verify virtualization is enabled by running the lscpu command again.
Installing KVM Packages on Manjaro
Manjaro’s pacman package manager simplifies installing all necessary virtualization components in one command.
Understanding Required Packages
The KVM virtualization stack consists of several interconnected packages. QEMU performs processor and device emulation and, paired with KVM, provides hardware-accelerated virtualization. Virt-manager delivers a user-friendly graphical interface for creating and managing virtual machines. Libvirt offers a powerful API for controlling virtualization engines and managing VM lifecycle operations. Virt-viewer functions as a lightweight client for displaying VM consoles. Dnsmasq handles DNS forwarding and DHCP services for virtual networks. Bridge-utils enables Ethernet bridge configuration for advanced networking. Ebtables and iptables-nft manage firewall rules for virtual networks. Libguestfs provides tools for accessing and modifying virtual machine disk images. Edk2-ovmf supplies UEFI firmware for virtual machines requiring modern boot capabilities.
Installing Packages via Pacman
First, synchronize the package databases and bring the system fully up to date, since Manjaro (like Arch) does not support partial upgrades:
sudo pacman -Syu
Install the complete KVM virtualization stack with a single comprehensive command:
sudo pacman -S --needed virt-manager qemu-desktop libvirt edk2-ovmf dnsmasq iptables-nft bridge-utils virt-viewer
The --needed flag prevents reinstalling packages already present on the system. Pacman downloads and installs all packages along with their dependencies automatically. Confirm the installation when prompted by typing “Y” and pressing Enter.
For TPM (Trusted Platform Module) support, which is required for Windows 11 virtual machines, install the swtpm package:
sudo pacman -S --asdeps swtpm
Package Selection Options
Manjaro offers different QEMU package variants for specific needs. qemu-desktop includes support for common desktop virtualization scenarios and is recommended for most users. qemu-base provides minimal functionality for lightweight installations. qemu-full contains support for all architectures and device emulations, useful for advanced use cases. The desktop variant balances functionality and disk space requirements effectively.
Configuring the Libvirtd Service
The libvirtd daemon manages all virtualization operations and must run continuously for VM functionality.
Starting and Enabling Libvirtd
Enable libvirtd to start automatically at system boot:
sudo systemctl enable libvirtd.service
Start the libvirtd service immediately without rebooting:
sudo systemctl start libvirtd.service
Alternatively, combine both operations using the --now flag:
sudo systemctl enable --now libvirtd.service
This single command enables the service for automatic startup and starts it immediately.
Verifying Service Status
Confirm libvirtd is running properly with the status command:
sudo systemctl status libvirtd.service
The output should display “active (running)” in green text. Press “q” to exit the status view. If the service shows as failed or inactive, check system logs for error messages.
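The relevant log entries from the current boot can be pulled with journalctl:
sudo journalctl -u libvirtd.service -b --no-pager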
Understanding Libvirtd Dependencies
Libvirtd relies on several socket units for proper operation. virtlockd.socket prevents resource conflicts between virtual machines. virtlogd.socket handles logging for VM operations. libvirtd.socket enables socket-based activation of the libvirt daemon. These dependencies start automatically when enabling libvirtd.
Configuring User Permissions
Running virtual machines as a regular user improves security and simplifies management.
Why Non-Root Access Matters
Operating VMs as root poses unnecessary security risks and conflicts with Linux security principles. User-level VM management provides proper isolation and resource tracking. Default libvirt permissions restrict VM operations to the root user. Configuring appropriate group membership grants regular users full VM control.
Editing Libvirtd Configuration
Open the libvirtd configuration file with a text editor:
sudo nano /etc/libvirt/libvirtd.conf
Locate the unix_sock_group parameter around line 85. Remove the comment symbol (#) and set the value to “libvirt”:
unix_sock_group = "libvirt"
Find the unix_sock_rw_perms parameter near line 108. Uncomment the line and ensure the value is “0770”:
unix_sock_rw_perms = "0770"
Save the file by pressing Ctrl+O, then Enter, and exit nano with Ctrl+X.
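To confirm that both settings are now uncommented, a quick grep should print exactly the two lines edited above:
grep -E '^unix_sock_(group|rw_perms)' /etc/libvirt/libvirtd.conf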
Adding User to Libvirt Group
Add the current user to the libvirt group for VM management permissions:
sudo usermod -aG libvirt $USER
Also add the user to the kvm group for direct KVM access:
sudo usermod -aG kvm $USER
Some distributions require the libvirt-qemu group as well:
sudo usermod -aG libvirt-qemu $USER
Restarting Libvirtd Service
Restart libvirtd to apply the configuration changes:
sudo systemctl restart libvirtd.service
Log out and log back in for group membership changes to take effect. Alternatively, on recent systemd versions (254 and later), the soft-reboot command restarts userspace without rebooting the entire machine:
sudo systemctl soft-reboot
Verify group membership after logging back in:
groups $USER
The output should include libvirt, kvm, and libvirt-qemu groups.
Enabling Nested Virtualization (Optional)
Nested virtualization allows running virtual machines inside other virtual machines, useful for testing hypervisors or complex environments.
What Is Nested Virtualization
Nested virtualization enables a guest VM to run its own virtual machines. This capability proves valuable for testing virtualization platforms, developing hypervisor configurations, or running containerized applications inside VMs. Performance overhead increases with each virtualization layer. Most users don’t require nested virtualization for standard VM usage.
Enabling for Intel Processors
Create a modprobe configuration file for persistent settings:
echo "options kvm-intel nested=1" | sudo tee /etc/modprobe.d/kvm-intel.conf
Reload the KVM Intel module to apply changes immediately:
sudo modprobe -r kvm-intel
sudo modprobe kvm-intel
Enabling for AMD Processors
Create the AMD-specific configuration file:
echo "options kvm-amd nested=1" | sudo tee /etc/modprobe.d/kvm-amd.conf
Reload the KVM AMD module:
sudo modprobe -r kvm-amd
sudo modprobe kvm-amd
Verifying Nested Virtualization
Check if nested virtualization is enabled for Intel CPUs:
cat /sys/module/kvm_intel/parameters/nested
For AMD CPUs, use:
cat /sys/module/kvm_amd/parameters/nested
The output should display “1” or “Y” indicating nested virtualization is active.
Setting Up Virt-Manager
Virt-manager provides an intuitive graphical interface for managing KVM virtual machines.
Launching Virt-Manager
Start virt-manager from the terminal:
virt-manager
Alternatively, find Virtual Machine Manager in the application menu under System Tools or Administration. The application connects automatically to the local QEMU/KVM hypervisor.
Verifying Connection Details
Click Edit in the menu bar and select Connection Details. The Overview tab displays connection status and hypervisor information. The connection URI shows “qemu:///system” for system-level virtual machines. This connection type provides full access to system resources and network bridges.
Configuring Virtual Networks
Select the Virtual Networks tab in Connection Details. The default network usually appears as “default” in NAT mode. This network creates a virtual bridge interface named virbr0 on the host system. The default IP range is typically 192.168.122.0/24 with DHCP enabled. Enable Autostart on boot by checking the appropriate box. Virtual machines connected to this network can access the internet through the host’s network connection.
To create additional networks, click the + button at the bottom of the window. Choose a network name and configure the IP addressing scheme. Select NAT, isolated, or routed mode depending on networking requirements.
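The same network operations are available from the terminal with virsh, which is useful on headless hosts; a short sketch using the stock “default” network:
sudo virsh net-list --all          # list defined virtual networks and their state
sudo virsh net-start default       # start the default NAT network if it is inactive
sudo virsh net-autostart default   # start it automatically with libvirtd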
Setting Up Storage Pools
Navigate to the Storage tab in Connection Details. The default storage pool typically resides at /var/lib/libvirt/images. This location stores virtual machine disk images in qcow2 or raw format.
Create a custom ISO storage pool for installation media. Click the + button and select a name like “ISO Images”. Choose “dir: Filesystem Directory” as the type. Set the target path to a directory where ISO files are stored, such as /home/username/ISOs. Enable Autostart on Boot to make the pool available at system startup. Click Finish to create the storage pool.
Multiple storage pools help organize different types of virtual machine resources. Separate pools for ISOs, disk images, and backup snapshots improve management efficiency.
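An equivalent pool can also be defined from the terminal; a minimal sketch assuming a hypothetical /home/username/ISOs directory:
sudo virsh pool-define-as ISOs dir --target /home/username/ISOs   # define a directory-backed pool
sudo virsh pool-build ISOs                                        # create the target directory if it is missing
sudo virsh pool-start ISOs                                        # activate the pool now
sudo virsh pool-autostart ISOs                                    # activate it whenever libvirtd starts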
Creating Your First Virtual Machine
With KVM fully configured, creating a virtual machine becomes straightforward through virt-manager’s guided wizard.
Preparing ISO Images
Download ISO images for desired guest operating systems from official sources. Popular choices include various Linux distributions, Windows, or BSD systems. Place downloaded ISOs in the storage pool directory created earlier. Virt-manager can access ISOs from any location, but organizing them in a dedicated pool simplifies management.
VM Creation Steps
Click the “Create a new virtual machine” button in virt-manager’s main window. Select “Local install media (ISO image or CDROM)” as the installation method. Click Forward to proceed. Browse for the ISO file by clicking Browse and selecting the ISO storage pool. Choose the downloaded installation image from the list.
Virt-manager attempts automatic OS detection based on the ISO metadata. If detection fails, uncheck “Automatically detect from the installation media / source”. Manually type the operating system name in the search field. Select the correct OS type and version from the dropdown list. Accurate OS selection ensures appropriate virtualization optimizations.
Configuring VM Resources
Click Forward to reach the memory and CPU allocation screen. Allocate RAM based on the guest OS requirements—typically 2048 MB (2 GB) for Linux and 4096 MB (4 GB) for Windows. Assign CPU cores according to workload needs, starting with 2 cores for general use. Keep in mind that memory given to a running VM is unavailable to the host, and its vCPUs compete with host processes for physical cores.
Click Forward to configure storage. Select “Enable storage for this virtual machine”. Choose “Select or create custom storage” for more control. Click Manage to access storage pool options. Select the default storage pool and click the + button to create a new volume. Name the virtual disk image and set the capacity—20 GB minimum for Linux, 40 GB or more for Windows. The default qcow2 format provides thin provisioning and snapshot support.
Finalizing VM Setup
Click Forward to review the configuration summary. Assign a descriptive name to the virtual machine. Verify that “default” network is selected for network connectivity. Check “Customize configuration before install” to access advanced settings. This option allows modifying hardware components before starting installation.
In the Overview section, change Firmware to “UEFI x86_64” for modern operating systems requiring UEFI boot. Navigate to the disk settings and change the Disk bus to “VirtIO” for optimal performance. Set Discard mode to “unmap” to enable TRIM support for efficient storage management. Click Apply to save disk changes.
Change the NIC device model to “virtio” for enhanced network performance. For Windows 11 guests, add a TPM device by clicking “Add Hardware,” selecting TPM, choosing the Emulated backend, and setting the version to 2.0 (either the CRB or TIS model works). Consider adding a watchdog device to automatically reboot hung guests. Add a hardware RNG (Random Number Generator) to provide entropy from the host to the guest.
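For those who prefer the command line, virt-install can create a comparable VM in one step; this is a minimal sketch with a hypothetical name, ISO path, and sizes rather than a substitute for the wizard settings described above:
# create a Linux guest with a virtio disk and NIC, UEFI firmware, and the default NAT network
sudo virt-install \
  --name linux-test \
  --memory 2048 --vcpus 2 \
  --disk size=20,format=qcow2,bus=virtio \
  --cdrom /home/username/ISOs/linux.iso \
  --os-variant generic \
  --network network=default,model=virtio \
  --boot uefi
Replace generic with the actual guest OS identifier when known, which lets libvirt apply guest-specific optimizations.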
Installing Guest Operating System
Click “Begin Installation” at the top of the window to start the VM. The virtual machine console appears showing the guest OS installer. Follow the operating system’s installation procedure as if installing on physical hardware. After installation completes, install guest additions or drivers for improved performance.
For Linux guests, install spice-vdagent and xf86-video-qxl packages using the distribution’s package manager. For Windows guests, download and install spice-guest-tools from the SPICE project website. Guest tools enable features like automatic resolution adjustment, clipboard sharing, and enhanced graphics performance.
Networking Configuration Options
Understanding KVM networking modes enables selecting the appropriate configuration for specific use cases.
Understanding Default NAT Network
NAT (Network Address Translation) networking provides internet connectivity while isolating virtual machines from the external network. The default network creates a virtual bridge (virbr0) with a private IP subnet. DHCP automatically assigns IP addresses to virtual machines from the defined range. VMs can initiate outbound connections to the internet but remain inaccessible from external networks by default.
NAT mode works well for development, testing, and scenarios where VMs only need outbound connectivity. Port forwarding can expose specific VM services to the external network when needed.
Bridged Networking Setup
Bridged networking connects virtual machines directly to the physical network, making them appear as independent devices. Each VM receives an IP address from the physical network’s DHCP server or requires static configuration. This mode enables external devices to access VM services directly.
Creating a bridge requires additional configuration on the host system. Install NetworkManager if not already present. Use nmcli or NetworkManager GUI to create a bridge interface. Add the physical network interface to the bridge. Configure the bridge with appropriate IP settings. In virt-manager, select the bridge interface when creating or editing VMs.
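A minimal nmcli sketch, assuming a hypothetical physical interface named enp3s0 and DHCP available on the physical network:
sudo nmcli connection add type bridge ifname br0 con-name br0          # create the bridge interface
sudo nmcli connection add type bridge-slave ifname enp3s0 master br0   # enslave the physical NIC to the bridge
sudo nmcli connection modify br0 ipv4.method auto                      # obtain an address via DHCP
sudo nmcli connection up br0                                           # activate the bridge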
Bridged networking suits production servers, network testing scenarios, and situations requiring direct VM network access.
Host-Only Networking
Host-only networks create isolated environments where VMs communicate with each other and the host but not external networks. This configuration provides security for sensitive workloads or testing environments. Create isolated networks in virt-manager’s Virtual Networks section. Set the network mode to “Isolated” and configure an appropriate IP range.
Advanced Network Configuration
Multiple network interfaces can be added to virtual machines for complex networking scenarios. Each interface connects to a different virtual or physical network. VLAN tagging enables network segmentation within virtual environments. Custom DHCP settings control IP assignment and lease times. Port forwarding in NAT mode redirects specific ports from the host to VMs.
Performance Optimization Tips
Several configuration tweaks significantly improve virtual machine performance.
CPU Configuration
CPU pinning assigns specific physical CPU cores to virtual machine vCPUs, reducing scheduling overhead. In virt-manager, edit the VM XML to specify CPU affinity. Topology configuration optimizes CPU layout by matching the guest to physical architecture. Setting CPU mode to “host-passthrough” exposes all host CPU features to the guest. This mode delivers maximum performance for CPU-intensive workloads.
NUMA (Non-Uniform Memory Access) node configuration improves performance on multi-socket systems. Assign VM memory and CPUs from the same NUMA node to reduce memory latency.
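A sketch of the relevant domain XML for CPU pinning and host-passthrough, edited with virsh edit; the core numbers are examples and should match the host topology:
<vcpu placement='static'>2</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>   <!-- pin guest vCPU 0 to host core 2 -->
  <vcpupin vcpu='1' cpuset='3'/>   <!-- pin guest vCPU 1 to host core 3 -->
</cputune>
<cpu mode='host-passthrough'/>     <!-- expose the full host CPU feature set to the guest -->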
Memory Optimization
Hugepages reduce memory management overhead and improve performance for memory-intensive applications. Enable hugepages on the host and configure VMs to use them. Memory ballooning allows dynamic memory adjustment without restarting VMs. KSM (Kernel Same-page Merging) consolidates identical memory pages across VMs, increasing effective memory capacity. Cache mode settings affect how the guest OS interacts with storage caching.
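A minimal sketch for reserving hugepages on the host, assuming 2 MB pages and roughly 2 GB set aside (size the value to the VMs that will use it; the file name is just an example):
echo "vm.nr_hugepages = 1024" | sudo tee /etc/sysctl.d/40-hugepages.conf   # reserve 1024 x 2 MB pages
sudo sysctl --system                                                       # apply sysctl settings immediately
Afterwards, add a <memoryBacking><hugepages/></memoryBacking> block to the guest's domain XML with virsh edit so the VM actually uses the reserved pages.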
Storage Performance
VirtIO drivers provide optimal I/O performance for disk and network operations. Cache mode selection impacts performance and data safety. “none” mode offers best performance with proper data integrity. “writeback” improves performance but requires host backup power for safety. “writethrough” provides strong data guarantees at the cost of performance.
Raw disk format eliminates format overhead compared to qcow2 but sacrifices snapshot and thin provisioning features. For maximum performance, assign dedicated block devices or LVM volumes directly to VMs.
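A sketch of how these choices appear in the domain XML (viewable with virsh dumpxml); the image path is a placeholder:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' discard='unmap'/>   <!-- no host page cache, TRIM passed to the image -->
  <source file='/var/lib/libvirt/images/myvm.qcow2'/>
  <target dev='vda' bus='virtio'/>   <!-- VirtIO disk bus for best I/O performance -->
</disk>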
Network Performance
VirtIO-net drivers deliver significantly better network performance than emulated network cards. Multi-queue networking distributes packet processing across multiple CPU cores. Enable vhost-net for reduced network latency and increased throughput. These optimizations matter most for network-intensive workloads like web servers or database applications.
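A sketch of a virtio interface definition with the vhost backend and four queues in the domain XML (libvirt generates the MAC address automatically):
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>              <!-- paravirtualized NIC instead of emulated hardware -->
  <driver name='vhost' queues='4'/>   <!-- vhost-net backend with multi-queue enabled -->
</interface>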
Graphics and Display
Choosing the appropriate graphics adapter balances performance and compatibility. QXL provides good compatibility and supports 2D acceleration. VirtIO-GPU offers better performance with 3D acceleration support for Linux guests. SPICE protocol delivers better desktop experience than VNC for local VM access. GPU passthrough enables near-native graphics performance by assigning physical GPUs directly to VMs.
Troubleshooting Common Issues
Understanding common problems and their solutions accelerates KVM adoption.
VM Won’t Start
Check libvirtd service status when VMs fail to start. Restart the service if it’s not running. Verify CPU virtualization is enabled in BIOS settings. Permission errors often indicate improper group membership. Examine libvirt logs at /var/log/libvirt/ for detailed error messages. Ensure the user belongs to libvirt, kvm, and libvirt-qemu groups. Re-login after group changes to apply new permissions.
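A few commands that usually narrow the problem down quickly (myvm is a placeholder for the actual VM name):
systemctl status libvirtd.service          # is the daemon running?
sudo virsh list --all                      # which VMs are defined and in what state?
sudo journalctl -u libvirtd.service -b     # daemon errors from the current boot
sudo less /var/log/libvirt/qemu/myvm.log   # per-VM QEMU log with detailed start-up errors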
Performance Issues
CPU allocation problems occur when overcommitting CPU resources across multiple VMs. Monitor host CPU usage with htop or top while VMs run. Memory constraints cause swapping and severe performance degradation. Ensure adequate free RAM on the host system. Disk I/O bottlenecks appear when multiple VMs access storage simultaneously. Use virtio drivers and optimize cache settings to improve storage performance.
Networking Problems
VMs unable to access the network often result from firewall interference or incorrect network configuration. Verify the default network is active in virt-manager. Check that dnsmasq is running for DHCP functionality. Bridge configuration issues require verifying the bridge interface has proper IP settings. DNS resolution problems may stem from incorrect DNS servers in virtual network settings. Temporarily disable host firewalls to isolate connectivity issues.
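Useful checks for the default NAT network (the virbr0 bridge and its dnsmasq instance are created by libvirt):
sudo virsh net-list --all    # confirm the default network is active
ip addr show virbr0          # the bridge should hold 192.168.122.1/24 by default
ps -C dnsmasq -o pid,args    # libvirt spawns a dnsmasq process for DHCP and DNS on virbr0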
Display and Graphics Issues
Black screens on VM startup sometimes indicate graphics driver problems. Try switching between QXL and VirtIO display adapters. Resolution problems often require installing guest tools or drivers. SPICE and VNC connection failures can result from firewall blocking display server ports. Update graphics drivers inside the guest operating system to resolve display glitches.
Permission Denied Errors
Insufficient permissions prevent VMs from accessing disk images or network interfaces. Verify group membership with the groups command. Check file ownership and permissions on disk image files. SELinux or AppArmor policies sometimes block legitimate VM operations. Review security logs to identify and resolve policy violations. Remember that group changes require logging out and back in to take effect.
Managing Virtual Machines
Effective VM lifecycle management streamlines virtualization workflows.
VM Lifecycle Management
Start virtual machines from virt-manager by double-clicking them or using the play button. Stop VMs gracefully through the operating system’s shutdown procedure. Force shutdown becomes necessary when VMs freeze or become unresponsive. Pause and resume functions temporarily halt VM execution without full shutdown. Auto-start configuration launches specific VMs automatically when the host boots. Enable auto-start by right-clicking a VM and selecting “Autostart”.
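The same lifecycle operations are available from the terminal with virsh (myvm is a placeholder name):
sudo virsh start myvm        # boot the VM
sudo virsh shutdown myvm     # graceful shutdown via ACPI
sudo virsh destroy myvm      # force off a hung VM, equivalent to pulling the power
sudo virsh suspend myvm      # pause execution
sudo virsh resume myvm       # resume a paused VM
sudo virsh autostart myvm    # start the VM automatically when libvirtd starts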
VM Snapshots
Snapshots capture the complete VM state at a specific point in time. Create snapshots before system updates or configuration changes. Revert to snapshots when problems occur or testing requires a clean state. Snapshots consume additional disk space proportional to changed data. Regularly delete unnecessary snapshots to free storage space. Snapshot functionality requires qcow2 disk format rather than raw.
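Snapshot management from the terminal, assuming a qcow2-backed VM named myvm:
sudo virsh snapshot-create-as myvm pre-update "before system update"   # create a named snapshot
sudo virsh snapshot-list myvm                                          # list existing snapshots
sudo virsh snapshot-revert myvm pre-update                             # roll back to the snapshot
sudo virsh snapshot-delete myvm pre-update                             # remove it when no longer needed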
VM Cloning
Cloning creates duplicate virtual machines from existing ones. Full clones copy all VM data to independent disk images. Linked clones share the parent VM’s disk image, saving space but creating dependencies. Customize cloned VMs by changing hostnames, IP addresses, and machine IDs. Virt-manager provides a clone function accessible from the VM context menu.
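From the terminal, virt-clone performs a full clone; the --auto-clone flag derives the new disk path and MAC address automatically (names are placeholders):
sudo virt-clone --original myvm --name myvm-clone --auto-clone
The source VM must be shut off or paused before cloning.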
VM Migration
Export VMs for backup or transfer to other hosts. Use virsh commands or virt-manager to export VM configurations and disk images. Import VMs by copying disk images and creating new VM definitions pointing to them. Convert VMs from other formats like VirtualBox or VMware using qemu-img. Live migration moves running VMs between hosts without downtime in clustered environments.
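A short sketch of exporting a VM definition and converting a foreign disk image (file names are placeholders):
sudo virsh dumpxml myvm > myvm.xml                                  # export the VM definition to XML
qemu-img convert -p -f vmdk -O qcow2 source.vmdk converted.qcow2    # convert a VMware disk to qcow2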
Best Practices and Security
Implementing security best practices protects virtualization environments from threats.
Security Considerations
Keep the host system updated with the latest security patches. Virtualization inherits kernel security features including SELinux support. Apply guest OS security updates regularly to prevent compromise. Network isolation strategies prevent lateral movement between VMs. User permission management limits VM access to authorized personnel. Avoid running VMs as root when possible. Strong VM isolation ensures that compromised guests cannot affect the host or other VMs.
Backup Strategies
VM backup methods include full disk image copies, snapshot-based backups, and incremental backups. Snapshots provide quick rollback capability but shouldn’t replace proper backups. Store backups on separate physical storage to protect against host failures. Disaster recovery planning includes documented procedures for restoring VMs after catastrophic events. Test backup restoration regularly to verify backup integrity and procedures.
Resource Management
Monitor host resources continuously to prevent overcommitment. Avoid allocating more virtual resources than physical resources available. Load balancing distributes VMs across multiple hosts in larger deployments. Scaling considerations include planning for future VM additions and resource growth. Right-sizing VMs ensures efficient resource utilization without waste.
Congratulations! You have successfully installed KVM. Thanks for using this tutorial to install the Kernel-based Virtual Machine (KVM) on your Manjaro Linux system. For additional information, check the official KVM website.