Local Development Setup

ℹ️ NOTE:
To be able to take an exec console of a machine, follow any one of the approaches below:
- Run the libvirt-provider as the libvirt-qemu user.
- Add the user to the tty group and create an entry devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=0660 0 0 in /etc/fstab (see the sketch after this list).
- Manually ensure that you have 0660 access permissions on the character device files created in /dev/pts.
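
For example, the second and third approaches might look like this (a sketch; the user name and the /dev/pts device number depend on your setup):

    # Approach 2: add your user to the tty group and mount /dev/pts with group rw access
    sudo usermod -aG tty "$USER"   # re-login afterwards for the group change to take effect
    echo 'devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=0660 0 0' | sudo tee -a /etc/fstab
    sudo mount -o remount /dev/pts

    # Approach 3: manually fix the permissions of an already created console device
    sudo chmod 0660 /dev/pts/<N>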

Prerequisites

  • Linux (the code contains OS-specific parts)
  • go >= 1.20
  • git, make and kubectl
  • Access to a Kubernetes cluster (Minikube, kind or a real cluster)
  • libvirt
  • QEMU
  • irictl-machine, available locally or run as a container
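
A quick way to confirm the listed tooling is in place is to check the versions (a sketch; the QEMU binary name varies by distribution and target architecture):

    go version                    # go1.20 or newer
    git --version && make --version
    kubectl version --client
    virsh --version               # libvirt client tools
    qemu-system-x86_64 --version  # QEMU; the binary name may differ on your system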

Preparation

Setup irictl-machine

  1. Clone ironcore repository

    git clone git@github.com:ironcore-dev/ironcore.git
    cd ironcore
    
  2. Build irictl-machine

    go build -o bin/irictl-machine ./irictl-machine/cmd/irictl-machine/main.go
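
    The resulting binary can be sanity-checked by printing its usage (assuming the CLI exposes the usual --help output):

    ./bin/irictl-machine --help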
    

Run libvirt-provider for local development

  1. Clone the Repository

    To bring up and run the libvirt-provider project locally for development purposes, you first need to clone the repository.

    git clone git@github.com:ironcore-dev/libvirt-provider.git
    cd libvirt-provider
    
  2. Build the libvirt-provider

    make build
    
  3. Run the libvirt-provider

    The required libvirt-provider flags need to be defined:

    go run provider/cmd/main.go \
      --libvirt-provider-dir=<path-to-initialize-libvirt-provider> \
      --supported-machine-classes=<path-to-machine-class-json>/machine-classes.json \
      --network-interface-plugin-name=isolated \
      --address=<local-path>/iri-machinebroker.sock
    

    A sample machine-classes.json can be found here.
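
    For example, with illustrative local paths (the provider directory, machine-classes file and socket location are placeholders; adjust them to your environment):

    go run provider/cmd/main.go \
      --libvirt-provider-dir=/tmp/libvirt-provider \
      --supported-machine-classes=/tmp/libvirt-provider/machine-classes.json \
      --network-interface-plugin-name=isolated \
      --address=/tmp/libvirt-provider/iri-machinebroker.sock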

Interact with the libvirt-provider

  1. Creating a machine

    irictl-machine --address=unix:<local-path-to-socket>/iri-machinebroker.sock create machine -f <path-to-machine-yaml>/iri-machine.yaml
    

    Sample iri-machine.yaml:

    metadata:
      id: 91076287116041d00fd421f43c3760389041dac4a8bd9201afba9a5baeb21c7
      labels:
        downward-api.machinepoollet.api.onmetal.de/root-machine-name: machine-hd4
        downward-api.machinepoollet.api.onmetal.de/root-machine-namespace: default
        downward-api.machinepoollet.api.onmetal.de/root-machine-uid: cab82eac-09d8-4428-9e6c-c98b40027b74
        machinepoollet.api.onmetal.de/machine-name: machine-hd4
        machinepoollet.api.onmetal.de/machine-namespace: default
        machinepoollet.api.onmetal.de/machine-uid: cab82eac-09d8-4428-9e6c-c98b40027b74
    spec:
      class: x3-small
      image:
        image: ghcr.io/ironcore-dev/ironcore-image/gardenlinux:rootfs-dev-20231206-v1
      volumes:
      - empty_disk:
          size_bytes: 5368709120
        name: ephe-disk
        device: oda
    
  2. Listing machines

    irictl-machine --address=unix:<local-path-to-socket>/iri-machinebroker.sock get machine
    
  3. Deleting a machine

    irictl-machine --address=unix:<local-path-to-socket>/iri-machinebroker.sock delete machine <machine UUID>
    
  4. Taking a machine console

    irictl-machine --address=unix:<local-path-to-socket>/iri-machinebroker.sock exec <machine UUID>
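
To avoid repeating the socket path, it can be kept in a shell variable (a sketch; the socket path and machine UUID are illustrative and must match your running libvirt-provider instance):

    export IRI_ADDRESS=unix:/tmp/libvirt-provider/iri-machinebroker.sock
    irictl-machine --address=$IRI_ADDRESS create machine -f iri-machine.yaml
    irictl-machine --address=$IRI_ADDRESS get machine
    irictl-machine --address=$IRI_ADDRESS exec <machine UUID>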
    

Deploy libvirt-provider

ℹ️ NOTE:
If the libvirt URI cannot be auto-detected, it can be defined via a flag, e.g. --libvirt-uri=qemu:///session
ℹ️ NOTE:
For trying out the controller, use the isolated network interface plugin: --network-interface-plugin-name=isolated
ℹ️ NOTE:
The libvirt-provider can also run directly as a binary program on a worker node.
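
If you run the libvirt-provider directly on a worker node, the invocation can combine the flags shown earlier with the ones from these notes (a sketch with placeholder paths, using go run for simplicity; a prebuilt binary works the same way):

    go run provider/cmd/main.go \
      --libvirt-provider-dir=/var/lib/libvirt-provider \
      --supported-machine-classes=/etc/libvirt-provider/machine-classes.json \
      --network-interface-plugin-name=isolated \
      --libvirt-uri=qemu:///session \
      --address=/var/run/iri-machinebroker.sock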

  1. Build the docker image

    make docker-build
    
  2. Deploy the libvirt-provider to the Kubernetes cluster

    make deploy
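
    Once deployed, a quick way to verify the workload is running (the namespace and workload name depend on the deployment manifests, so adjust the filter as needed):

    kubectl get pods -A | grep libvirt-provider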