Virtual Machine Implementation

This document covers the core virtual machine implementation in AxVM, focusing on the AxVM struct and its lifecycle management. This includes VM creation, booting, vCPU execution coordination, memory management, and device emulation integration. For details about the underlying vCPU architecture abstraction, see Virtual CPU Architecture. For hardware abstraction interfaces, see Hardware Abstraction Layer.

AxVM Structure Overview

The AxVM struct serves as the central coordinator for all virtual machine operations. It uses a generic design with two type parameters to support multiple architectures and hardware abstraction layers.

classDiagram
class AxVM {
    +running: AtomicBool
    +shutting_down: AtomicBool
    +inner_const: AxVMInnerConst
    +inner_mut: AxVMInnerMut
    +new(config: AxVMConfig) AxResult~AxVMRef~
    +boot() AxResult
    +shutdown() AxResult
    +run_vcpu(vcpu_id: usize) AxResult~AxVCpuExitReason~
    +vcpu(vcpu_id: usize) Option~AxVCpuRef~
    +get_devices() &AxVmDevices
}

class AxVMInnerConst {
    +id: usize
    +config: AxVMConfig
    +vcpu_list: Box[AxVCpuRef]
    +devices: AxVmDevices
}

class AxVMInnerMut {
    +address_space: Mutex~AddrSpace~
    +_marker: PhantomData
}

AxVM  *--  AxVMInnerConst : "contains"
AxVM  *--  AxVMInnerMut : "contains"

Sources: src/vm.rs(L47 - L53)  src/vm.rs(L31 - L40)  src/vm.rs(L41 - L45) 

The structure separates immutable data (AxVMInnerConst) from mutable data (AxVMInnerMut) to optimize concurrent access patterns. The type aliases provide convenient references for shared ownership.

| Type Alias | Definition | Purpose |
| --- | --- | --- |
| `VCpu` | `AxVCpu<AxArchVCpuImpl>` | Architecture-independent vCPU interface |
| `AxVCpuRef` | `Arc<VCpu>` | Shared reference to a vCPU |
| `AxVMRef<H, U>` | `Arc<AxVM<H, U>>` | Shared reference to a VM |

Sources: src/vm.rs(L21 - L29) 

VM Creation Process

VM creation follows a multi-stage initialization process that sets up all necessary components before the VM can be booted.

sequenceDiagram
    participant Client as Client
    participant AxVMnew as AxVM::new
    participant VCpuCreation as VCpu Creation
    participant AddressSpace as Address Space
    participant DeviceSetup as Device Setup
    participant VCpuSetup as VCpu Setup

    Client ->> AxVMnew: "AxVMConfig"
    Note over AxVMnew: "Parse vcpu affinities"
    AxVMnew ->> VCpuCreation: "for each vcpu_id, phys_cpu_set"
    VCpuCreation ->> VCpuCreation: "AxVCpuCreateConfig (arch-specific)"
    VCpuCreation ->> AxVMnew: "Arc<VCpu>"
    Note over AxVMnew: "Setup memory regions"
    AxVMnew ->> AddressSpace: "AddrSpace::new_empty()"
    AxVMnew ->> AddressSpace: "map_linear() / map_alloc()"
    Note over AxVMnew: "Setup passthrough devices"
    AxVMnew ->> AddressSpace: "map_linear() with DEVICE flags"
    AxVMnew ->> DeviceSetup: "AxVmDevices::new()"
    Note over AxVMnew: "Create AxVM instance"
    AxVMnew ->> VCpuSetup: "vcpu.setup(entry, ept_root, config)"
    VCpuSetup ->> AxVMnew: "Result"
    AxVMnew ->> Client: "Arc<AxVM>"

Sources: src/vm.rs(L59 - L220) 
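The staged construction in the diagram can be condensed into a short sketch. All names below (`VmConfig`, `Vm`, `create_vm`) are simplified stand-ins for `AxVMConfig` and `AxVM::new`, with the architecture-specific and memory-mapping details elided:

```rust
// Simplified stand-ins for AxVMConfig / AxVM; most fields are elided.
struct VmConfig {
    vcpu_count: usize,
    entry: usize,
}

struct Vm {
    vcpu_ids: Vec<usize>,
    entry: usize,
}

fn create_vm(config: &VmConfig) -> Result<Vm, &'static str> {
    // 1. Create one vCPU per configured CPU (arch-specific config elided).
    let vcpu_ids: Vec<usize> = (0..config.vcpu_count).collect();
    // 2. Build the guest address space and map memory regions (elided).
    // 3. Map passthrough devices and create emulated devices (elided).
    // 4. Point every vCPU at the guest entry and EPT root, then return
    //    the fully initialized VM.
    Ok(Vm { vcpu_ids, entry: config.entry })
}

fn main() {
    let vm = create_vm(&VmConfig { vcpu_count: 2, entry: 0x8000_0000 }).unwrap();
    assert_eq!(vm.vcpu_ids.len(), 2);
    assert_eq!(vm.entry, 0x8000_0000);
}
```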

vCPU Creation and Configuration

The VM creates vCPUs based on the configuration's CPU affinity settings. Each architecture requires specific configuration parameters:

| Architecture | Configuration Fields | Source Location |
| --- | --- | --- |
| AArch64 | `mpidr_el1`, `dtb_addr` | src/vm.rs 67-75 |
| RISC-V | `hart_id`, `dtb_addr` | src/vm.rs 76-83 |
| x86_64 | Default (empty) | src/vm.rs 84-85 |

Sources: src/vm.rs(L61 - L93) 
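A hypothetical runtime mirror of this per-architecture selection is sketched below. In the real crate the fields are chosen at compile time with `#[cfg(target_arch = ...)]`, not a runtime string match, and the derivation of `mpidr_el1` from the vCPU id is an assumption made here for illustration:

```rust
// Hypothetical mirror of AxVCpuCreateConfig; the real code selects a
// field set per target architecture at compile time.
enum VCpuCreateConfig {
    Aarch64 { mpidr_el1: u64, dtb_addr: u64 },
    RiscV { hart_id: usize, dtb_addr: usize },
    X86_64, // default: no extra fields
}

fn config_for(vcpu_id: usize, dtb_addr: u64, arch: &str) -> VCpuCreateConfig {
    match arch {
        "aarch64" => VCpuCreateConfig::Aarch64 {
            mpidr_el1: vcpu_id as u64, // illustrative; real value is affinity-derived
            dtb_addr,
        },
        "riscv64" => VCpuCreateConfig::RiscV {
            hart_id: vcpu_id,
            dtb_addr: dtb_addr as usize,
        },
        _ => VCpuCreateConfig::X86_64,
    }
}

fn main() {
    match config_for(1, 0x4000_0000, "riscv64") {
        VCpuCreateConfig::RiscV { hart_id, .. } => assert_eq!(hart_id, 1),
        _ => panic!("expected RISC-V config"),
    }
}
```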

Memory Region Setup

The VM supports two mapping types for memory regions:

  • MapIdentical: Direct mapping where guest physical addresses map to identical host physical addresses
  • MapAlloc: Dynamic allocation where the hypervisor allocates physical memory for the guest

flowchart TD
config["Memory Region Config"]
check_flags["Check MappingFlags validity"]
map_type["Mapping Type?"]
alloc_at["H::alloc_memory_region_at()"]
map_linear1["address_space.map_linear()"]
map_alloc["address_space.map_alloc()"]
complete["Memory region mapped"]

alloc_at --> map_linear1
check_flags --> map_type
config --> check_flags
map_alloc --> complete
map_linear1 --> complete
map_type --> alloc_at
map_type --> map_alloc

Sources: src/vm.rs(L95 - L163) 
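The dispatch in the flowchart can be sketched with mock types; the `AddrSpace` below only records mappings, whereas the real one builds stage-2 page tables:

```rust
// Mock mapping dispatch; the real AddrSpace manipulates page tables.
enum MappingType {
    MapIdentical, // guest PA == host PA
    MapAlloc,     // hypervisor-allocated backing memory
}

#[derive(Default)]
struct AddrSpace {
    linear: Vec<(usize, usize)>, // (gpa, size) linear mappings
    alloc: Vec<(usize, usize)>,  // (gpa, size) allocated mappings
}

impl AddrSpace {
    fn map_region(&mut self, gpa: usize, size: usize, ty: MappingType) {
        match ty {
            // Identical mapping: map_linear() straight onto host memory.
            MappingType::MapIdentical => self.linear.push((gpa, size)),
            // Allocated mapping: map_alloc() backs the region on demand.
            MappingType::MapAlloc => self.alloc.push((gpa, size)),
        }
    }
}

fn main() {
    let mut aspace = AddrSpace::default();
    aspace.map_region(0x0900_0000, 0x1000, MappingType::MapIdentical);
    aspace.map_region(0x8000_0000, 0x10_0000, MappingType::MapAlloc);
    assert_eq!(aspace.linear.len(), 1);
    assert_eq!(aspace.alloc.len(), 1);
}
```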

For passthrough devices, the VM creates device mappings with the DEVICE flag to ensure proper memory attributes for hardware access.

Sources: src/vm.rs(L165 - L180) 

VM Lifecycle Management

The VM follows a simple state machine with atomic boolean flags for coordination between multiple threads.


Sources: src/vm.rs(L49 - L50)  src/vm.rs(L278 - L310) 

Boot Process

The boot() method performs hardware support validation before allowing VM execution:

  1. Check hardware virtualization support via has_hardware_support()
  2. Verify VM is not already running
  3. Set running flag to true atomically

Sources: src/vm.rs(L278 - L288) 

Shutdown Process

The shutdown() method initiates graceful VM termination:

  1. Check VM is not already shutting down
  2. Set shutting_down flag to true
  3. Note: VM re-initialization is not currently supported

Sources: src/vm.rs(L299 - L310) 
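Both lifecycle transitions can be sketched with the two atomic flags from the `AxVM` struct. A compare-exchange makes each check-and-set atomic here; the real implementation may sequence the load and store separately, and the `has_hardware_support()` check is elided:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Sketch of the two lifecycle flags from the AxVM struct.
struct VmState {
    running: AtomicBool,
    shutting_down: AtomicBool,
}

impl VmState {
    // boot(): only valid when the VM is not yet running.
    fn boot(&self) -> Result<(), &'static str> {
        self.running
            .compare_exchange(false, true, Ordering::SeqCst, Ordering::SeqCst)
            .map(|_| ())
            .map_err(|_| "VM already running")
    }

    // shutdown(): one-way transition; re-initialization is unsupported.
    fn shutdown(&self) -> Result<(), &'static str> {
        self.shutting_down
            .compare_exchange(false, true, Ordering::SeqCst, Ordering::SeqCst)
            .map(|_| ())
            .map_err(|_| "VM already shutting down")
    }
}

fn main() {
    let vm = VmState {
        running: AtomicBool::new(false),
        shutting_down: AtomicBool::new(false),
    };
    assert!(vm.boot().is_ok());
    assert!(vm.boot().is_err()); // double boot rejected
    assert!(vm.shutdown().is_ok());
    assert!(vm.shutdown().is_err()); // already shutting down
}
```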

vCPU Execution and Exit Handling

The run_vcpu() method implements the main VM execution loop, handling various exit reasons from the hardware virtualization layer.

flowchart TD
start["run_vcpu(vcpu_id)"]
bind["vcpu.bind()"]
run_loop["vcpu.run()"]
check_exit["Exit Reason?"]
mmio_read["devices.handle_mmio_read()"]
set_reg["vcpu.set_gpr(reg, val)"]
continue_loop["continue_loop"]
mmio_write["devices.handle_mmio_write()"]
io_handled["Handle I/O (placeholder)"]
page_fault["address_space.handle_page_fault()"]
fault_handled["Handled?"]
exit_loop["Break loop"]
unbind["vcpu.unbind()"]
return_exit["Return AxVCpuExitReason"]

bind --> run_loop
check_exit --> exit_loop
check_exit --> io_handled
check_exit --> mmio_read
check_exit --> mmio_write
check_exit --> page_fault
continue_loop --> run_loop
exit_loop --> unbind
fault_handled --> continue_loop
fault_handled --> exit_loop
io_handled --> continue_loop
mmio_read --> set_reg
mmio_write --> continue_loop
page_fault --> fault_handled
run_loop --> check_exit
set_reg --> continue_loop
start --> bind
unbind --> return_exit

Sources: src/vm.rs(L328 - L376) 

Exit Reason Handling

The VM handles several categories of VM exits:

| Exit Type | Handler | Action |
| --- | --- | --- |
| `MmioRead` | `devices.handle_mmio_read()` | Read from emulated device, set guest register |
| `MmioWrite` | `devices.handle_mmio_write()` | Write to emulated device |
| `IoRead`/`IoWrite` | Placeholder | Currently returns true (no-op) |
| `NestedPageFault` | `address_space.handle_page_fault()` | Handle EPT/NPT page fault |

Sources: src/vm.rs(L338 - L368) 
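The dispatch structure of this loop can be sketched with mock exit reasons; handler bodies are reduced to comments, and the scripted `Vec` of exits stands in for repeated calls to `vcpu.run()`:

```rust
// Mock exit reasons; handled exits continue the loop, unhandled ones
// (here, Halt) break out and are returned to the caller, as run_vcpu()
// returns an AxVCpuExitReason.
enum ExitReason {
    MmioRead { addr: usize, reg: usize },
    MmioWrite { addr: usize, val: u64 },
    NestedPageFault { addr: usize },
    Halt,
}

fn run_vcpu_once(exits: Vec<ExitReason>) -> &'static str {
    for exit in exits {
        match exit {
            ExitReason::MmioRead { .. } => {
                // devices.handle_mmio_read() then vcpu.set_gpr(reg, val)
            }
            ExitReason::MmioWrite { .. } => {
                // devices.handle_mmio_write()
            }
            ExitReason::NestedPageFault { .. } => {
                // address_space.handle_page_fault(); break if unhandled
            }
            ExitReason::Halt => return "returned to caller",
        }
    }
    "exit script drained"
}

fn main() {
    let exits = vec![
        ExitReason::MmioRead { addr: 0x0900_0000, reg: 0 },
        ExitReason::MmioWrite { addr: 0x0900_0000, val: 1 },
        ExitReason::NestedPageFault { addr: 0x8000_0000 },
        ExitReason::Halt,
    ];
    assert_eq!(run_vcpu_once(exits), "returned to caller");
}
```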

Address Space and Memory Management

The VM uses a two-stage address translation system with an AddrSpace that manages guest-to-host physical address mappings.

Address Space Configuration

The VM address space uses fixed bounds to limit guest memory access:

| Constant | Value | Purpose |
| --- | --- | --- |
| `VM_ASPACE_BASE` | `0x0` | Base guest physical address |
| `VM_ASPACE_SIZE` | `0x7fff_ffff_f000` | Maximum guest address space size |

Sources: src/vm.rs(L18 - L19) 
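These bounds imply a simple validity check for guest physical ranges; `gpa_range_valid` is an illustrative helper, not part of the crate:

```rust
// Fixed guest-physical window from src/vm.rs.
const VM_ASPACE_BASE: usize = 0x0;
const VM_ASPACE_SIZE: usize = 0x7fff_ffff_f000;

// Illustrative helper: a guest range is usable only if it fits
// entirely inside [VM_ASPACE_BASE, VM_ASPACE_BASE + VM_ASPACE_SIZE).
fn gpa_range_valid(gpa: usize, size: usize) -> bool {
    gpa >= VM_ASPACE_BASE
        && gpa
            .checked_add(size)
            .map(|end| end <= VM_ASPACE_BASE + VM_ASPACE_SIZE)
            .unwrap_or(false)
}

fn main() {
    assert!(gpa_range_valid(0x8000_0000, 0x10_0000));
    assert!(!gpa_range_valid(VM_ASPACE_SIZE, 0x1000)); // past the window
    assert!(!gpa_range_valid(usize::MAX, 1)); // overflow rejected
}
```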

EPT Root Access

The ept_root() method provides access to the page table root for hardware virtualization:

```rust
pub fn ept_root(&self) -> HostPhysAddr {
    self.inner_mut.address_space.lock().page_table_root()
}
```

Sources: src/vm.rs(L248 - L250) 

Image Loading Support

The VM provides functionality to obtain host virtual addresses for loading guest images, handling potentially non-contiguous physical memory:

Sources: src/vm.rs(L260 - L270) 
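The idea can be sketched as a chunked copy: the image is split across whatever host-memory regions back the guest range. `load_image` and the `Vec`-based regions are illustrative only; the real code returns host virtual address ranges for the caller to copy into:

```rust
// Illustrative chunked copy across non-contiguous host regions.
fn load_image(image: &[u8], regions: &mut [Vec<u8>]) -> Result<(), &'static str> {
    let mut offset = 0;
    for region in regions.iter_mut() {
        if offset >= image.len() {
            break;
        }
        // Copy as much of the image as this region can hold.
        let n = region.len().min(image.len() - offset);
        region[..n].copy_from_slice(&image[offset..offset + n]);
        offset += n;
    }
    if offset < image.len() {
        Err("image larger than mapped regions")
    } else {
        Ok(())
    }
}

fn main() {
    let image = vec![0xAAu8; 10];
    // Two non-contiguous 6-byte host regions backing the guest range.
    let mut regions = vec![vec![0u8; 6], vec![0u8; 6]];
    load_image(&image, &mut regions).unwrap();
    assert_eq!(regions[0], vec![0xAA; 6]);
    assert_eq!(&regions[1][..4], &[0xAA; 4]);
    assert_eq!(&regions[1][4..], &[0, 0]);
}
```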

Device Integration

The VM integrates with the AxVM device emulation framework through the AxVmDevices component, which handles MMIO operations for emulated devices.

Device initialization occurs during VM creation, using the configuration's emulated device list:

Sources: src/vm.rs(L182 - L184) 

Device access is coordinated through the exit handling mechanism, where MMIO reads and writes are forwarded to the appropriate device handlers.

Sources: src/vm.rs(L345 - L355) 
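The routing step can be sketched as an address-range lookup; `MmioDevice` and `find_device` are hypothetical stand-ins for the lookup `AxVmDevices` performs over its emulated device list:

```rust
// Hypothetical MMIO routing table standing in for AxVmDevices.
struct MmioDevice {
    base: usize,
    size: usize,
    name: &'static str,
}

fn find_device<'a>(devices: &'a [MmioDevice], addr: usize) -> Option<&'a MmioDevice> {
    // Route the faulting guest address to the device that owns it.
    devices.iter().find(|d| addr >= d.base && addr < d.base + d.size)
}

fn main() {
    let devices = [
        MmioDevice { base: 0x0800_0000, size: 0x2_0000, name: "gic" },
        MmioDevice { base: 0x0900_0000, size: 0x1000, name: "uart" },
    ];
    assert_eq!(find_device(&devices, 0x0900_0004).map(|d| d.name), Some("uart"));
    assert!(find_device(&devices, 0x1234).is_none());
}
```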

Error Handling and Safety

The VM implementation uses Rust's type system and AxResult for comprehensive error handling:

  • Hardware Support: Validates virtualization capabilities before boot
  • State Validation: Prevents invalid state transitions (e.g., double boot)
  • Resource Management: Uses Arc for safe sharing across threads
  • Memory Safety: Leverages Rust's ownership system for memory region management

Sources: src/vm.rs(L7)  src/vm.rs(L279 - L287)  src/vm.rs(L300 - L310)