VM Management
This document describes how virtual machines (VMs) are created, configured, and managed within the AxVisor hypervisor system. It explains the VM lifecycle from initialization through execution to shutdown, as well as the management of virtual CPUs (VCPUs) within each VM. For information about the underlying hardware abstraction layer, see Hardware Abstraction Layer.
VM Architecture Overview
AxVisor implements a modular VM management system that separates VM configuration, execution, and resource management concerns. The core components work together to create, execute, and monitor virtual machines.
```mermaid
flowchart TD
    subgraph subGraph1["Task Management"]
        TaskScheduler["ArceOS Task Scheduler"]
        TaskExt["Task Extensions"]
    end
    subgraph subGraph0["VM Management System"]
        VMM["VMM Module"]
        VMConfig["VM Configuration"]
        VMList["VM List"]
        VCPUManager["VCPU Manager"]
        ImageLoader["Image Loader"]
        VMs["VMs"]
        VCPUs["VCPUs"]
    end
    ImageLoader --> VMs
    TaskExt --> VCPUs
    TaskExt --> VMs
    VCPUManager --> TaskScheduler
    VCPUManager --> VCPUs
    VMConfig --> VMs
    VMList --> VMs
    VMM --> VMConfig
    VMM --> VMs
    VMs --> VCPUs
```
Sources: src/vmm/mod.rs src/task.rs
VM Types and References
The VM management system uses specific types to represent VMs and VCPUs:
| Type | Description | Source |
|---|---|---|
| `VM` | Instantiated VM type using `axvm::AxVM<AxVMHalImpl, AxVCpuHalImpl>` | src/vmm/mod.rs(L16) |
| `VMRef` | VM reference type (shared via `Arc`) | src/vmm/mod.rs(L18) |
| `VCpuRef` | VCPU reference type (shared via `Arc`) | src/vmm/mod.rs(L20) |
These types provide the foundation for VM management operations, with reference types allowing shared access to VM and VCPU resources across different components and tasks.
Sources: src/vmm/mod.rs(L16 - L20)
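Expressed as Rust aliases, the table corresponds roughly to the following sketch (the exact definitions in src/vmm/mod.rs may differ, e.g. by reusing axvm's own reference types):

```rust
use alloc::sync::Arc;

// Sketch: the instantiated VM type pairs axvm's generic VM with AxVisor's
// HAL implementations, as described in the table above.
pub type VM = axvm::AxVM<AxVMHalImpl, AxVCpuHalImpl>;

// Sketch: Arc-based handles let tasks and modules share VMs and VCPUs.
pub type VMRef = Arc<VM>;
pub type VCpuRef = Arc<axvcpu::AxVCpu<AxVCpuHalImpl>>; // inner type assumed
```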
VM Lifecycle
The VM lifecycle in AxVisor follows a defined sequence of operations, from initialization through execution to shutdown.
Sources: src/vmm/mod.rs(L28 - L65) src/vmm/vcpus.rs(L275 - L367)
VM Initialization
The VMM initializes VMs through the following process:
- Call `VMM::init()` to initialize guest VMs and set up primary VCPUs
- Load VM configurations from TOML files
- Create VM instances using `VM::new()`
- Load VM images based on configuration
- Set up primary VCPUs for each VM
```mermaid
sequenceDiagram
    participant VMM as VMM
    participant ConfigModule as Config Module
    participant VMInstance as VM Instance
    participant VCPUManager as VCPU Manager
    VMM ->> ConfigModule: init_guest_vms()
    ConfigModule ->> ConfigModule: Parse TOML configs
    ConfigModule ->> VMInstance: Create VM with AxVMConfig
    VMInstance -->> ConfigModule: Return VM instance
    ConfigModule ->> ConfigModule: load_vm_images()
    ConfigModule ->> VMM: Return created VMs
    VMM ->> VCPUManager: setup_vm_primary_vcpu() for each VM
    VCPUManager ->> VCPUManager: Create primary VCPU task
```
Sources: src/vmm/mod.rs(L28 - L39) src/vmm/config.rs(L25 - L43)
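A condensed sketch of this flow, with function names taken from the diagram (the `vm_list` helper and exact signatures are assumptions):

```rust
// Sketch of VMM::init() per the sequence above.
pub fn init() {
    // Parse TOML configs, create VM instances, and load their images.
    config::init_guest_vms();

    // Set up the primary VCPU task for every created VM.
    for vm in vm_list::get_vm_list() { // vm_list helper assumed
        vcpus::setup_vm_primary_vcpu(vm);
    }
}
```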
VM Execution
The VMM starts VM execution:
- Call `VMM::start()` to boot all VMs
- For each VM, call `vm.boot()` to start execution
- Notify the primary VCPU to begin execution
- Increment `RUNNING_VM_COUNT` for each successfully started VM
- Wait until all VMs are stopped (when `RUNNING_VM_COUNT` reaches 0)
VCPUs execute in their own tasks, handling various exit reasons such as hypercalls, interrupts, halts, and system shutdown events.
Sources: src/vmm/mod.rs(L42 - L65) src/vmm/vcpus.rs(L275 - L367)
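The boot-and-wait logic might look like the following sketch (`VMM_WAIT_QUEUE` and `notify_primary_vcpu` are assumed names standing in for the actual wait-queue plumbing):

```rust
use core::sync::atomic::{AtomicUsize, Ordering};

static RUNNING_VM_COUNT: AtomicUsize = AtomicUsize::new(0);

// Sketch of VMM::start() per the steps above.
pub fn start() {
    for vm in vm_list::get_vm_list() {
        match vm.boot() {
            Ok(_) => {
                // Kick the primary VCPU and count this VM as running.
                vcpus::notify_primary_vcpu(vm.id());
                RUNNING_VM_COUNT.fetch_add(1, Ordering::Release);
            }
            Err(err) => warn!("VM[{}] failed to boot: {:?}", vm.id(), err),
        }
    }
    // Block the VMM until every running VM has stopped.
    VMM_WAIT_QUEUE.wait_until(|| RUNNING_VM_COUNT.load(Ordering::Acquire) == 0);
}
```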
VM Configuration
AxVisor uses TOML-based configuration files to define VM properties:
- Configuration files are stored in the `configs/vms/` directory
- Default configurations are provided for different architectures (x86_64, aarch64, riscv64)
- Configurations define VM properties such as ID, name, CPU count, memory regions, etc.
- Configuration processing involves:
  - Parsing TOML into `AxVMCrateConfig`
  - Converting to `AxVMConfig` for VM creation
  - Using the configuration to load the appropriate VM images
Sources: src/vmm/config.rs(L1 - L43)
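A sketch of the processing pipeline described above (`static_vm_configs()` is an assumed helper returning the embedded TOML strings):

```rust
// Sketch of init_guest_vms(): TOML text -> AxVMCrateConfig -> AxVMConfig -> VM.
pub fn init_guest_vms() {
    for raw_toml in static_vm_configs() {
        let crate_cfg = AxVMCrateConfig::from_toml(raw_toml)
            .expect("failed to parse VM config");
        let vm_cfg = AxVMConfig::from(crate_cfg.clone());

        // Create the VM, then load its images as the config dictates.
        let vm = VM::new(vm_cfg).expect("failed to create VM");
        load_vm_images(crate_cfg, vm.clone()).expect("failed to load VM images");
    }
}
```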
VCPU Management
VCPUs are managed for each VM, with special handling for the primary VCPU:
```mermaid
classDiagram
    class VMVCpus {
        -usize _vm_id
        -WaitQueue wait_queue
        -Vec~AxTaskRef~ vcpu_task_list
        -AtomicUsize running_halting_vcpu_count
        +new(VMRef) VMVCpus
        +add_vcpu_task(AxTaskRef)
        +wait()
        +wait_until(F)
        +notify_one()
        +mark_vcpu_running()
        +mark_vcpu_exiting() bool
    }
    class VCPU_Task {
        +stack_size KERNEL_STACK_SIZE
        +TaskExt ext
        +run() vcpu_run
    }
    class TaskExt {
        +VMRef vm
        +VCpuRef vcpu
        +new(VMRef, VCpuRef) TaskExt
    }
    VCPU_Task "1" *-- "1" TaskExt : contains
    VMVCpus "1" *-- "*" VCPU_Task : manages
```
Sources: src/vmm/vcpus.rs(L27 - L40) src/task.rs(L6 - L11)
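In Rust terms, the diagram corresponds to roughly the following definitions (a sketch; field types follow the class diagram, with `WaitQueue` and `AxTaskRef` coming from axtask):

```rust
use alloc::vec::Vec;
use core::sync::atomic::AtomicUsize;

// Per-VM VCPU bookkeeping, mirroring the class diagram above.
struct VMVCpus {
    _vm_id: usize,
    wait_queue: WaitQueue,
    vcpu_task_list: Vec<AxTaskRef>,
    running_halting_vcpu_count: AtomicUsize,
}

// Per-task extension tying a VCPU task back to its VM and VCPU.
pub struct TaskExt {
    pub vm: VMRef,
    pub vcpu: VCpuRef,
}
```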
VCPU Initialization
For each VM, VCPUs are initialized:
- `setup_vm_primary_vcpu()` is called for each VM
- A `VMVCpus` structure is created to manage the VM's VCPUs
- The primary VCPU (ID 0) is set up first
- A task is created for the primary VCPU with the `vcpu_run` entry point
- The VCPU task is added to the VM's VCPU task list
- The VM's VCPU structure is stored in a global mapping
Sources: src/vmm/vcpus.rs(L218 - L231)
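Put together, the setup might look like this sketch (the global map `VM_VCPU_TASK_MAP` is an assumed name):

```rust
// Sketch of setup_vm_primary_vcpu() following the steps above.
pub fn setup_vm_primary_vcpu(vm: VMRef) {
    let vm_id = vm.id();
    let mut vm_vcpus = VMVCpus::new(vm.clone());

    // The primary VCPU (ID 0) is set up first.
    let primary_vcpu = vm.vcpu_list()[0].clone();
    let primary_task = alloc_vcpu_task(vm.clone(), primary_vcpu);
    vm_vcpus.add_vcpu_task(primary_task);

    // Store the VM's VCPU structure in a global mapping.
    VM_VCPU_TASK_MAP.lock().insert(vm_id, vm_vcpus);
}
```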
VCPU Task Creation
Each VCPU runs in its own ArceOS task:
- `alloc_vcpu_task()` creates a task for a VCPU
- Tasks are created with a 256 KiB kernel stack
- The task's entry point is set to `vcpu_run()`
- If configured, the VCPU is assigned to specific physical CPUs
- The task is initialized with a `TaskExt` containing references to the VM and VCPU
- The task is spawned using `axtask::spawn_task()`
Sources: src/vmm/vcpus.rs(L249 - L268) src/task.rs(L1 - L19)
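A sketch of `alloc_vcpu_task()` following these steps (the cpumask calls, `phys_cpu_set()`, and the exact `TaskInner` constructor are assumptions modeled on axtask):

```rust
const KERNEL_STACK_SIZE: usize = 0x40000; // 256 KiB

fn alloc_vcpu_task(vm: VMRef, vcpu: VCpuRef) -> AxTaskRef {
    // Every VCPU task enters through vcpu_run().
    let mut vcpu_task = TaskInner::new(
        vcpu_run,
        format!("VM[{}]-VCpu[{}]", vm.id(), vcpu.id()),
        KERNEL_STACK_SIZE,
    );
    // Optionally pin the VCPU to specific physical CPUs.
    if let Some(mask) = vcpu.phys_cpu_set() {
        vcpu_task.set_cpumask(AxCpuMask::from_raw_bits(mask));
    }
    // Attach the VM/VCPU references that vcpu_run() will read back.
    vcpu_task.init_task_ext(TaskExt::new(vm, vcpu));
    axtask::spawn_task(vcpu_task)
}
```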
VCPU Execution Loop
The main VCPU execution flow is:
- Wait for the VM to be in the running state
- Mark the VCPU as running, incrementing the running count
- Enter the execution loop:
  - Run the VCPU using `vm.run_vcpu(vcpu_id)`
  - Handle various exit reasons (hypercalls, interrupts, halts, etc.)
  - Check whether the VM is shutting down
- When the VM is shutting down:
  - Mark the VCPU as exiting
  - If this was the last VCPU, decrement `RUNNING_VM_COUNT`
  - Wake the VMM wait queue if necessary
  - Exit the execution loop
Sources: src/vmm/vcpus.rs(L275 - L367)
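A condensed sketch of the loop (exit-reason handling abbreviated; the wait/mark helpers are the ones sketched in earlier sections):

```rust
fn vcpu_run() {
    let curr = axtask::current();
    let vm = curr.task_ext().vm.clone();
    let vcpu = curr.task_ext().vcpu.clone();

    wait_for(vm.id(), || vm.running()); // wait for the VM to start running
    mark_vcpu_running(vm.id());         // increment the running count

    loop {
        match vm.run_vcpu(vcpu.id()) {
            Ok(exit_reason) => match exit_reason {
                AxVCpuExitReason::Hypercall { .. } => { /* handle hypercall */ }
                AxVCpuExitReason::Halt => wait(vm.id()), // block on the wait queue
                _ => { /* interrupts, CpuUp, system shutdown, ... */ }
            },
            Err(err) => panic!("run_vcpu failed: {:?}", err),
        }
        if vm.shutting_down() {
            // The last VCPU to exit decrements RUNNING_VM_COUNT and may
            // wake the VMM wait queue.
            if mark_vcpu_exiting(vm.id()) {
                decrement_running_vm_count();
            }
            break;
        }
    }
}
```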
Secondary VCPU Management
AxVisor supports dynamic secondary VCPU initialization:
- When a primary VCPU requests to start a secondary VCPU (via a `CpuUp` exit):
  - `vcpu_on()` is called with the target VCPU ID, entry point, and argument
  - The target VCPU's entry point and registers are set
  - A new task is created for the secondary VCPU
  - The task is added to the VM's VCPU task list
- Secondary VCPUs follow the same execution loop as the primary VCPU
Sources: src/vmm/vcpus.rs(L179 - L208) src/vmm/vcpus.rs(L319 - L330)
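A sketch of the `vcpu_on()` path (the entry/register setters are assumptions standing in for the arch-specific initialization):

```rust
fn vcpu_on(vm: VMRef, vcpu_id: usize, entry_point: usize, arg: usize) {
    let vcpu = vm.vcpu_list()[vcpu_id].clone();

    // Point the target VCPU at the requested entry and pass its argument
    // through a register before it first runs.
    vcpu.set_entry(entry_point).expect("set_entry failed");
    vcpu.set_gpr(0, arg);

    // Spawn a task for the secondary VCPU and track it with the VM.
    let task = alloc_vcpu_task(vm.clone(), vcpu);
    add_vcpu_task(vm.id(), task); // assumed helper updating vcpu_task_list
}
```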
VM Coordination and Shutdown
VMs and VCPUs coordinate through wait queues:
- `VMVCpus` maintains a wait queue for each VM
- VCPUs can wait on the queue (e.g., when halted)
- Other VCPUs can notify waiting VCPUs (e.g., for interrupt handling)
- When a VM is shutting down, all VCPUs detect this condition
- Each VCPU marks itself as exiting
- The last exiting VCPU decrements `RUNNING_VM_COUNT`
- When `RUNNING_VM_COUNT` reaches 0, the VMM itself is notified to exit
This coordination ensures proper cleanup when VMs are shut down.
Sources: src/vmm/vcpus.rs(L42 - L108) src/vmm/mod.rs(L55 - L65)
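The exit bookkeeping can be sketched as an extension of the `VMVCpus` sketch above; the counter hitting zero identifies the last exiting VCPU:

```rust
use core::sync::atomic::Ordering;

impl VMVCpus {
    /// Marks one VCPU as exiting; returns true if it was the last one, in
    /// which case the caller decrements RUNNING_VM_COUNT and wakes the VMM.
    fn mark_vcpu_exiting(&self) -> bool {
        self.running_halting_vcpu_count.fetch_sub(1, Ordering::AcqRel) == 1
    }
}
```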
VM Image Loading
VM images are loaded according to configuration:
- Images can be loaded from a FAT32 filesystem or from memory
- The configuration specifies the kernel path, entry point, and load address
- Image loading occurs after VM creation but before VCPU setup
This completes the VM preparation before execution begins.
Sources: src/vmm/config.rs(L39 - L41)
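A sketch of the dispatch implied by this description (the `image_location` field, its values, and the two loader helpers are assumptions):

```rust
fn load_vm_images(config: AxVMCrateConfig, vm: VMRef) -> Result<(), &'static str> {
    match config.kernel.image_location.as_deref() {
        Some("fs") => load_vm_images_from_filesystem(config, vm), // FAT32 path
        Some("memory") => load_vm_images_from_memory(config, vm), // embedded image
        _ => Err("unknown image_location in VM config"),
    }
}
```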