
Working example #811

Open
chris-aeviator opened this issue Jan 25, 2024 · 9 comments
Labels
kind/bug Something isn't working lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments


chris-aeviator commented Jan 25, 2024

Hi. I'm struggling to get up and running with flintlock. The documentation has led me through all the steps, and I can see the stdout of my Firecracker instances via /var/lib/flintlock/vm/ns1/mvm1/01HN0B.... . However, I'm struggling to get a working JSON config. The provided https://github.com/weaveworks-liquidmetal/flintlock/blob/main/hack/scripts/payload/CreateMicroVM.json links a container image that 404s. The images from the docs (capmvm-kubernetes:1.23.5), with the rest of the configuration taken from CreateMicroVM.json, lead to systemd errors (dbus not found) and netplan apply fails, so my VM is offline. I also do not know how to connect to it via the console.

Seeing the same with the hammertime example.json

[ 0.000000] Linux version 5.10.77 (root@cd4feae7407d) (gcc (Ubuntu 7.5.0-6ubuntu2) 7.5.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #1 SMP Tue Jan 17 16:03:24 UTC 2023
[ 0.000000] Command line: i8042.noaux i8042.dumbkbd network-config=dmVyc2lvbjogMgpldGhlcm5ldHM6CiAgZXRoMDoKICAgIG1hdGNoOgogICAgICBtYWNhZGRyZXNzOiBBQTpGRjowMDowMDowMDowMQogICAgYWRkcmVzc2VzOgogICAgLSAxNjkuMjU0LjAuMS8xNgogICAgZGhjcDQ6IGZhbHNlCiAgICBkaGNwNjogZmFsc2UKICAgIGRoY3AtaWRlbnRpZmllcjogbWFjCiAgZXRoMToKICAgIG1hdGNoOgogICAgICBtYWNhZGRyZXNzOiAxQToyRToyMTo3ODpEQjpBMgogICAgZGhjcDQ6IHRydWUKICAgIGRoY3A2OiB0cnVlCiAgICBkaGNwLWlkZW50aWZpZXI6IG1hYwo= ds=nocloud-net;s=http://169.254.169.254/latest/ console=ttyS0 reboot=k panic=1 pci=off i8042.nomux i8042.nopnp root=/dev/vda rw virtio_mmio.device=4K@0xd0001000:5 virtio_mmio.device=4K@0xd0002000:6 virtio_mmio.device=4K@0xd0003000:7 virtio_mmio.device=4K@0xd0004000:8
...
SELinux: Could not open policy file <= /etc/selinux/targeted/policy/policy.33: No such file or directory
[ 0.502267] systemd[1]: Failed to look up module alias 'autofs4': Function not implemented
[ 0.508993] systemd[1]: systemd 245.4-4ubuntu3.19 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
...
[FAILED] Failed to start Load Kernel Modules.
See 'systemctl status systemd-modules-load.service' for details.
...
[ 12.266186] cloud-init[913]: 2024-01-25 13:35:43,876 - activators.py[WARNING]: Running ['netplan', 'apply'] resulted in stderr output: Failed to connect system bus: No such file or directory
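As an aside, the network-config= value in the kernel command line above is just base64-encoded netplan YAML, so it can be decoded to inspect what flintlock injected (the string below is a short sample prefix, not the full value from the log):

```shell
# Decode the network-config kernel argument to reveal the netplan YAML
# that flintlock embeds when add_network_config is enabled.
printf 'dmVyc2lvbjogMgpldGhlcm5ldHM6Cg==' | base64 -d
# version: 2
# ethernets:
```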

@chris-aeviator chris-aeviator added the kind/bug Something isn't working label Jan 25, 2024
This issue is stale because it has been open 60 days with no activity.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 26, 2024

PruserPC commented Mar 4, 2025

Hey Chris! Same here... I'm trying to create a microVM with flintlock following the docs at https://flintlock.liquidmetal.dev/ and https://liquidmetal.dev/docs/intro but there was no way to get the thing to work. Did you manage to solve it in the end? Thanks!

P.S.: I would love to contribute to this project, especially to improving the docs @Callisto13 :)

@Callisto13
Member

Hey @PruserPC! I'm not an active maintainer of the project right now, I'm sure @richardcase can get you started on contributing 😄

Of the two sets of documentation listed, I would prioritise https://liquidmetal.dev/docs/intro. The last thing I was doing before I left was deprecating the separate flintlock ones.

@richardcase
Member

Sure, happy to help get you started if you want 😄

@PruserPC - are you running into the same issue as @chris-aeviator ?

With the project moving from Weaveworks ownership there have been some issues with images etc. I'm more than happy to work together if you want to get it working.


PruserPC commented Mar 5, 2025

Oh @Callisto13, my bad! Thanks for your quick response and for all your work on the docs; they are truly great at explaining why each step is required and how flintlock works, especially for a newbie like me. I will be happy to take over updating them to the latest changes ^^


PruserPC commented Mar 5, 2025

Hi @richardcase!

It's not exactly the same issue, but I didn't want to create a new one as this one is still open.

Environment
I reproduced the same error on two machines:

  • Raspberry Pi 3B (6.6.62+rpt-rpi-v8 aarch64) running Debian GNU/Linux 12 (bookworm)

  • Intel x86_64 with 6.13.0-061300-generic kernel running Ubuntu 24.04.2 LTS

Steps to reproduce
I've followed the steps in the tutorial "DO try this at home" from the docs to:

  1. Set up my network (lmbr0 bridge using the liquid-metal-net.xml from the tutorial + tap0 attached to it):
pruser@raspberrypi:~ $ ip link show tap0
6: tap0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast master lmbr0 state DOWN mode DEFAULT group default qlen 1000
    link/ether ce:49:64:dd:bc:96 brd ff:ff:ff:ff:ff:ff
pruser@raspberrypi:~ $ ip link show lmbr0
5: lmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:54:00:6f:ec:89 brd ff:ff:ff:ff:ff:ff
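The NO-CARRIER / state DOWN above is expected at this stage: a tap device only reports carrier once a process (here, Firecracker) attaches to it, and the bridge stays DOWN until one of its ports has carrier. A quick unprivileged check that the tap really is enslaved to the bridge (interface names taken from the tutorial):

```shell
# Verify tap0 is enslaved to lmbr0 by parsing `ip -o link` output;
# the sed expression extracts the bridge name after "master".
master=$(ip -o link show tap0 | sed -n 's/.*master \([^ ]*\).*/\1/p')
if [ "$master" = "lmbr0" ]; then
  echo "tap0 is attached to lmbr0"
else
  echo "tap0 is NOT attached to lmbr0 (master: ${master:-none})"
fi
```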
  2. Download containerd and install it as a service using provision.sh:
pruser@raspberrypi:~ $ sudo dmsetup ls
flintlock-dev-thinpool	(254:0)
pruser@raspberrypi:~ $ sudo systemctl status containerd-dev.service
● containerd-dev.service - containerd container runtime
     Loaded: loaded (/etc/systemd/system/containerd-dev.service; enabled; preset: enabled)
     Active: active (running) since Wed 2025-03-05 20:00:35 CET; 12min ago
       Docs: https://containerd.io
    Process: 1291 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 1292 (containerd)
      Tasks: 9
        CPU: 358ms
     CGroup: /system.slice/containerd-dev.service
             └─1292 /usr/local/bin/containerd --config /etc/containerd/config-dev.toml

Mar 05 20:00:35 raspberrypi containerd[1292]: time="2025-03-05T20:00:35.783909530+01:00" level=info msg="runtime interface created"
Mar 05 20:00:35 raspberrypi containerd[1292]: time="2025-03-05T20:00:35.783946404+01:00" level=info msg="created NRI interface"
Mar 05 20:00:35 raspberrypi containerd[1292]: time="2025-03-05T20:00:35.784000363+01:00" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 05 20:00:35 raspberrypi containerd[1292]: time="2025-03-05T20:00:35.784112966+01:00" level=warning msg="failed to load plugin" error="unable to load CRI image service plugin depen>
Mar 05 20:00:35 raspberrypi containerd[1292]: time="2025-03-05T20:00:35.784938745+01:00" level=info msg=serving... address="127.0.0.1:1338"
Mar 05 20:00:35 raspberrypi containerd[1292]: time="2025-03-05T20:00:35.785283901+01:00" level=info msg=serving... address=/run/containerd-dev/containerd.sock.ttrpc
Mar 05 20:00:35 raspberrypi containerd[1292]: time="2025-03-05T20:00:35.785625983+01:00" level=info msg=serving... address=/run/containerd-dev/containerd.sock
Mar 05 20:00:35 raspberrypi containerd[1292]: time="2025-03-05T20:00:35.786007701+01:00" level=debug msg="sd notification" notified=true state="READY=1"
Mar 05 20:00:35 raspberrypi containerd[1292]: time="2025-03-05T20:00:35.786133117+01:00" level=info msg="containerd successfully booted in 0.095730s"
Mar 05 20:00:35 raspberrypi systemd[1]: Started containerd-dev.service - containerd container runtime
# Just in case, check the `provision.sh` generated config for containerd-dev. Looks ok to me.
pruser@raspberrypi:~ $ cat /etc/containerd/config-dev.toml 
version = 2

root = "/var/lib/containerd-dev"
state = "/run/containerd-dev"

[grpc]
  address = "/run/containerd-dev/containerd.sock"

[metrics]
  address = "127.0.0.1:1338"

[plugins]
  [plugins."io.containerd.snapshotter.v1.devmapper"]
    pool_name = "flintlock-dev-thinpool"
    root_path = "/var/lib/containerd-dev/snapshotter/devmapper"
    base_image_size = "10GB"
    discard_blocks = true

[debug]
  level = "trace"
  3. Download the latest Firecracker binary (v1.10.1)

Note: provision.sh returned an error when executing sudo ./provision.sh firecracker on my arm64 machine. It's an easy fix and I will publish an issue with my proposed changes as soon as possible :)

  4. Download flintlock v0.7.0 (the compatibility table says it's the one for Firecracker v1.10+) and run it as a systemd service with ./provision.sh:
pruser@raspberrypi:~ $ sudo ./provision.sh flintlock --version v0.7.0 --dev --insecure --bridge lmbr0 --grpc-address 0.0.0.0:9090
[flintlock provision.sh] Creating containerd directory /var/lib/containerd-dev/snapshotter/devmapper
[flintlock provision.sh] Creating containerd directory /run/containerd-dev
[flintlock provision.sh] Creating containerd directory /etc/containerd
[flintlock provision.sh] All containerd directories created
[flintlock provision.sh] Installing flintlockd version v0.7.0 to /usr/local/bin
[flintlock provision.sh] Flintlockd version v0.7.0 successfully installed
[flintlock provision.sh] Writing flintlockd config to /etc/opt/flintlockd/config.yaml.
[flintlock provision.sh] Flintlockd config saved
[flintlock provision.sh] Starting flintlockd service with /etc/systemd/system/flintlockd.service
[flintlock provision.sh] Flintlockd running at 0.0.0.0:9090 via interface eth0
pruser@raspberrypi:~ $ sudo systemctl status flintlockd.service
●  flintlockd.service - flintlock microvm service
     Loaded: loaded (/etc/systemd/system/flintlockd.service; enabled; preset: enabled)
     Active: active (running) since Wed 2025-03-05 20:39:24 CET; 10s ago
       Docs: https://www.liquidmetal.dev
    Process: 1450 ExecStartPre=which firecracker (code=exited, status=0/SUCCESS)
    Process: 1451 ExecStartPre=which flintlockd (code=exited, status=0/SUCCESS)
   Main PID: 1452 (flintlockd)
      Tasks: 9 (limit: 763)
        CPU: 125ms
     CGroup: /system.slice/flintlockd.service
             └─1452 /usr/local/bin/flintlockd run

Mar 05 20:39:24 raspberrypi flintlockd[1452]: time="2025-03-05T20:39:24+01:00" level=info msg="flintlockd grpc api server starting"
Mar 05 20:39:24 raspberrypi flintlockd[1452]: time="2025-03-05T20:39:24+01:00" level=info msg="starting microvm controller"
Mar 05 20:39:24 raspberrypi flintlockd[1452]: time="2025-03-05T20:39:24+01:00" level=info msg="starting microvm controller with 1 workers" controller=microvm
Mar 05 20:39:24 raspberrypi flintlockd[1452]: time="2025-03-05T20:39:24+01:00" level=info msg="resyncing microvm specs" controller=microvm
Mar 05 20:39:24 raspberrypi flintlockd[1452]: time="2025-03-05T20:39:24+01:00" level=trace msg="querying all microvms: map[Namespace:]" component=app controller=microvm
Mar 05 20:39:24 raspberrypi flintlockd[1452]: time="2025-03-05T20:39:24+01:00" level=warning msg="basic authentication is DISABLED"
Mar 05 20:39:24 raspberrypi flintlockd[1452]: time="2025-03-05T20:39:24+01:00" level=warning msg="TLS is DISABLED"
Mar 05 20:39:24 raspberrypi flintlockd[1452]: time="2025-03-05T20:39:24+01:00" level=info msg="starting event listener" controller=microvm
Mar 05 20:39:24 raspberrypi flintlockd[1452]: time="2025-03-05T20:39:24+01:00" level=info msg="Starting workersnum_workers1" controller=microvm
Mar 05 20:39:24 raspberrypi flintlockd[1452]: time="2025-03-05T20:39:24+01:00" level=debug msg="starting grpc server listening on endpoint 0.0.0.0:9090"
# flintlock receives queries successfully 
pruser@raspberrypi:~ $ fl microvm get --host 0.0.0.0:9090
debug	getting all microvms	{"action": "get", "host": "0.0.0.0:9090"}
No microvms found.
  5. Create a new microVM:
  • I've tried with fl, but I didn't manage to set the parameters properly:
pruser@raspberrypi:~ $ fl microvm create --host 0.0.0.0:9090 --name-autogenerate --network-interface eth1:TAP::192.168.100.30/32
debug	creating a microvm	{"action": "create"}
2025/03/05 20:52:53 failed executing root command: creating microvm: creating microvm: rpc error: code = Unknown desc = creating microvm: macvtap network interfaces not supported by the microvm provider
pruser@raspberrypi:~ $ fl microvm create --host 0.0.0.0:9090 --name-autogenerate --network-interface eth1:1::192.168.100.30/32
debug	creating a microvm	{"action": "create"}
2025/03/05 20:53:58 failed executing root command: creating microvm: creating microvm: rpc error: code = Unknown desc = creating microvm: macvtap network interfaces not supported by the microvm provider
pruser@raspberrypi:~ $ fl microvm create --host 0.0.0.0:9090 --name-autogenerate --network-interface eth1:0::192.168.100.30/32
debug	creating a microvm	{"action": "create"}
2025/03/05 20:54:03 failed executing root command: creating microvm: creating microvm: rpc error: code = Unknown desc = creating microvm: macvtap network interfaces not supported by the microvm provider
  • Using the provided JSON payload and the BloomRPC client:
# Response after CreateMicroVM operation
{
  "microvm": {
    "version": 1,
    "spec": {
      "additional_volumes": [],
      "interfaces": [
        {
          "device_id": "eth0",
          "type": "TAP",
          "guest_mac": "AA:FF:00:00:00:01",
          "_guest_mac": "guest_mac",
          "address": {
            "nameservers": [],
            "address": "169.254.0.1/16"
          },
          "_address": "address"
        },
        {
          "device_id": "eth1",
          "type": "TAP",
          "guest_mac": "",
          "_guest_mac": "guest_mac",
          "address": {
            "nameservers": [],
            "address": "192.168.100.30/32"
          },
          "_address": "address"
        }
      ],
      "labels": {},
      "metadata": {
        "meta-data": "aW5zdGFuY2VfaWQ6IG5zMS9tdm0wCmxvY2FsX2hvc3RuYW1lOiBtdm0wCnBsYXRmb3JtOiBsaXF1aWRfbWV0YWwK",
        "user-data": "I2Nsb3VkLWNvbmZpZwpob3N0bmFtZTogbXZtMApmcWRuOiBtdm0wLmZydWl0Y2FzZQp1c2VyczoKICAgIC0gbmFtZTogcm9vdAogICAgICBzc2hfYXV0aG9yaXplZF9rZXlzOgogICAgICAgIC0gfAogICAgICAgICAgc3NoLWVkMjU1MTkgQUFBQUMzTnphQzFsWkRJMU5URTVBQUFBSUdzbStWSSsyVk5WWFBDRmVmbFhrQTVKY21zMzByajFGUFFjcFNTdDFrdVYgcmljaGFyZEB3ZWF2ZS53b3JrcwpkaXNhYmxlX3Jvb3Q6IGZhbHNlCnBhY2thZ2VfdXBkYXRlOiBmYWxzZQpmaW5hbF9tZXNzYWdlOiBUaGUgcmVpZ25pdGVkIGJvb3RlZCBzeXN0ZW0gaXMgZ29vZCB0byBnbyBhZnRlciAkVVBUSU1FIHNlY29uZHMKcnVuY21kOgogICAgLSBkaGNsaWVudCAtcgogICAgLSBkaGNsaWVudAo="
      },
      "id": "mvm1",
      "namespace": "ns1",
      "vcpu": 2,
      "memory_in_mb": 2048,
      "kernel": {
        "cmdline": {},
        "image": "docker.io/richardcase/ubuntu-bionic-kernel:0.0.11",
        "add_network_config": true,
        "filename": "vmlinux",
        "_filename": "filename"
      },
      "root_volume": null,
      "created_at": null,
      "updated_at": null,
      "deleted_at": null,
      "initrd": {
        "image": "docker.io/richardcase/ubuntu-bionic-kernel:0.0.11",
        "filename": "initrd-generic",
        "_filename": "filename"
      },
      "_initrd": "initrd",
      "uid": "01JNKXRA9M9JKXFW6MBPMR2APT",
      "_uid": "uid"
    },
    "status": {
      "volumes": {},
      "network_interfaces": {},
      "state": "PENDING",
      "kernel_mount": null,
      "initrd_mount": null,
      "retry": 0
    }
  }
}
# Result of ListMicroVMs (namespace: ns1)
{
  "microvm": [
    {
      "version": 2,
      "spec": {
        "additional_volumes": [],
        "interfaces": [
          {
            "device_id": "eth0",
            "type": "TAP",
            "guest_mac": "AA:FF:00:00:00:01",
            "_guest_mac": "guest_mac",
            "address": {
              "nameservers": [],
              "address": "169.254.0.1/16"
            },
            "_address": "address"
          },
          {
            "device_id": "eth1",
            "type": "TAP",
            "guest_mac": "",
            "_guest_mac": "guest_mac",
            "address": {
              "nameservers": [],
              "address": "192.168.100.30/32"
            },
            "_address": "address"
          }
        ],
        "labels": {},
        "metadata": {
          "meta-data": "aW5zdGFuY2VfaWQ6IG5zMS9tdm0wCmxvY2FsX2hvc3RuYW1lOiBtdm0wCnBsYXRmb3JtOiBsaXF1aWRfbWV0YWwK",
          "user-data": "I2Nsb3VkLWNvbmZpZwpob3N0bmFtZTogbXZtMApmcWRuOiBtdm0wLmZydWl0Y2FzZQp1c2VyczoKICAgIC0gbmFtZTogcm9vdAogICAgICBzc2hfYXV0aG9yaXplZF9rZXlzOgogICAgICAgIC0gfAogICAgICAgICAgc3NoLWVkMjU1MTkgQUFBQUMzTnphQzFsWkRJMU5URTVBQUFBSUdzbStWSSsyVk5WWFBDRmVmbFhrQTVKY21zMzByajFGUFFjcFNTdDFrdVYgcmljaGFyZEB3ZWF2ZS53b3JrcwpkaXNhYmxlX3Jvb3Q6IGZhbHNlCnBhY2thZ2VfdXBkYXRlOiBmYWxzZQpmaW5hbF9tZXNzYWdlOiBUaGUgcmVpZ25pdGVkIGJvb3RlZCBzeXN0ZW0gaXMgZ29vZCB0byBnbyBhZnRlciAkVVBUSU1FIHNlY29uZHMKcnVuY21kOgogICAgLSBkaGNsaWVudCAtcgogICAgLSBkaGNsaWVudAo="
        },
        "id": "mvm1",
        "namespace": "ns1",
        "vcpu": 2,
        "memory_in_mb": 2048,
        "kernel": {
          "cmdline": {},
          "image": "docker.io/richardcase/ubuntu-bionic-kernel:0.0.11",
          "add_network_config": true,
          "filename": "vmlinux",
          "_filename": "filename"
        },
        "root_volume": null,
        "created_at": null,
        "updated_at": null,
        "deleted_at": null,
        "initrd": {
          "image": "docker.io/richardcase/ubuntu-bionic-kernel:0.0.11",
          "filename": "initrd-generic",
          "_filename": "filename"
        },
        "_initrd": "initrd",
        "uid": "01JNKXRA9M9JKXFW6MBPMR2APT",
        "_uid": "uid"
      },
      "status": {
        "volumes": {
          "": {
            "mount": {
              "type": "DEV",
              "source": ""
            }
          }
        },
        "network_interfaces": {
          "eth0": {
            "host_device_name": "fltapb29f02d",
            "index": 7,
            "mac_address": "C2:9A:81:C6:BD:EB"
          },
          "eth1": {
            "host_device_name": "fltap61eda7e",
            "index": 8,
            "mac_address": "96:22:77:B8:88:A1"
          }
        },
        "state": "CREATED",
        "kernel_mount": {
          "type": "HOSTPATH",
          "source": "/var/lib/containerd-dev/io.containerd.snapshotter.v1.native/snapshots/2"
        },
        "initrd_mount": {
          "type": "HOSTPATH",
          "source": "/var/lib/containerd-dev/io.containerd.snapshotter.v1.native/snapshots/3"
        },
        "retry": 0
      }
    }
  ]
}
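For anyone without BloomRPC, the same call can be made from the CLI; the service path below is an assumption based on flintlock's proto definitions, so verify it via reflection first:

```shell
# Assumed service path (verify with: grpcurl -plaintext 0.0.0.0:9090 list):
#   grpcurl -plaintext -d @ 0.0.0.0:9090 \
#     microvm.services.api.v1alpha1.MicroVM/CreateMicroVM < CreateMicroVM.json
#
# Pull the reconciliation state out of a saved ListMicroVMs response
# (the sample file below stands in for real grpcurl output):
printf '%s' '{"microvm":[{"status":{"state":"CREATED"}}]}' > list-response.json
python3 -c 'import json,sys; print(json.load(sys.stdin)["microvm"][0]["status"]["state"])' \
  < list-response.json
# CREATED
```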

Note: @chris-aeviator claims that the images link to a 404 container, but I successfully pulled them with my Docker containerd instance, so I don't think that's my problem.

  6. Check flintlock's logs:
pruser@raspberrypi:~ $ journalctl -fu flintlockd.service
Mar 05 21:01:40 raspberrypi flintlockd[1452]: time="2025-03-05T21:01:40+01:00" level=info msg="checking state of microvm" controller=microvm service=firecracker_microvm vmid=ns1/mvm1/01JNKXRA9M9JKXFW6MBPMR2APT
Mar 05 21:01:40 raspberrypi flintlockd[1452]: time="2025-03-05T21:01:40+01:00" level=debug msg="execute step" controller=microvm execution_id=01JNKYCCKZS1W97Z0NRTZK3B8Z plan_name=microvm_create_update step=microvm_start
Mar 05 21:01:40 raspberrypi flintlockd[1452]: time="2025-03-05T21:01:40+01:00" level=debug msg="starting microvm" controller=microvm step=microvm_start vmid=ns1/mvm1/01JNKXRA9M9JKXFW6MBPMR2APT
Mar 05 21:01:40 raspberrypi flintlockd[1452]: time="2025-03-05T21:01:40+01:00" level=error msg="failed executing plan" controller=microvm execution_id=01JNKYCCKZS1W97Z0NRTZK3B8Z execution_time=6.49437387s num_steps=363 plan_name=microvm_create_update
Mar 05 21:01:40 raspberrypi flintlockd[1452]: time="2025-03-05T21:01:40+01:00" level=info msg="[4/10] reconciliation failed, rescheduled for next attempt at 2025-03-05 21:03:00 +0100 CET" action=reconcile controller=microvm vmid=ns1/mvm1/01JNKXRA9M9JKXFW6MBPMR2APT
Mar 05 21:01:40 raspberrypi flintlockd[1452]: time="2025-03-05T21:01:40+01:00" level=debug msg="saving microvm spec ns1/mvm1/01JNKXRA9M9JKXFW6MBPMR2APT" controller=microvm repo=containerd_microvm
Mar 05 21:01:41 raspberrypi flintlockd[1452]: time="2025-03-05T21:01:41+01:00" level=error msg="failed to reconcile vmid ns1/mvm1/01JNKXRA9M9JKXFW6MBPMR2APT: executing plan: executing plan steps: executing steps: executing step microvm_start: starting microvm: start is not supported" controller=microvm
Mar 05 21:01:41 raspberrypi flintlockd[1452]: time="2025-03-05T21:01:41+01:00" level=debug msg="Getting spec for ns1/mvm1/01JNKXRA9M9JKXFW6MBPMR2APT" action=reconcile controller=microvm
Mar 05 21:01:41 raspberrypi flintlockd[1452]: time="2025-03-05T21:01:41+01:00" level=info msg="Starting reconciliation" action=reconcile controller=microvm vmid=ns1/mvm1/01JNKXRA9M9JKXFW6MBPMR2APT
Mar 05 21:01:41 raspberrypi flintlockd[1452]: time="2025-03-05T21:01:41+01:00" level=info msg="Generate plan" action=reconcile controller=microvm stage=plan vmid=ns1/mvm1/01JNKXRA9M9JKXFW6MBPMR2APT
  7. Check the Firecracker logs:
pruser@raspberrypi:/var/lib/flintlock/vm/ns1/mvm1/01JNKXRA9M9JKXFW6MBPMR2APT $ cat firecracker.log 
2025-03-05T21:03:08.950930929 [01JNKXRA9M9JKXFW6MBPMR2APT:main:ERROR:src/firecracker/src/main.rs:96] RunWithoutApiError error: Failed to build MicroVM from Json: Configuration for VMM from one single json failed: Block device error: Unable to create the virtio block device: Virtio backend error: Error manipulating the backing file: No such file or directory (os error 2) 
2025-03-05T21:03:08.951275927 [01JNKXRA9M9JKXFW6MBPMR2APT:main:ERROR:src/firecracker/src/main.rs:99] Firecracker exiting with error. exit_code=1
pruser@raspberrypi:/var/lib/flintlock/vm/ns1/mvm1/01JNKXRA9M9JKXFW6MBPMR2APT $ cat firecracker.cfg 
{
 "drives": [
  {
   "drive_id": "",
   "path_on_host": "",
   "is_root_device": true,
   "is_read_only": false,
   "cache_type": "Unsafe"
  }
 ],
 "boot-source": {
  "kernel_image_path": "/var/lib/containerd-dev/io.containerd.snapshotter.v1.native/snapshots/2/vmlinux",
  "initrd_path": "/var/lib/containerd-dev/io.containerd.snapshotter.v1.native/snapshots/3/initrd-generic",
  "boot_args": "pci=off i8042.noaux i8042.nomux i8042.dumbkbd ds=nocloud-net;s=http://169.254.169.254/latest/ console=ttyS0 panic=1 i8042.nopnp network-config=dmVyc2lvbjogMgpldGhlcm5ldHM6CiAgZXRoMDoKICAgIG1hdGNoOgogICAgICBtYWNhZGRyZXNzOiBBQTpGRjowMDowMDowMDowMQogICAgYWRkcmVzc2VzOgogICAgLSAxNjkuMjU0LjAuMS8xNgogICAgZGhjcDQ6IGZhbHNlCiAgICBkaGNwNjogZmFsc2UKICAgIGRoY3AtaWRlbnRpZmllcjogbWFjCiAgZXRoMToKICAgIG1hdGNoOgogICAgICBuYW1lOiBldGgxCiAgICBhZGRyZXNzZXM6CiAgICAtIDE5Mi4xNjguMTAwLjMwLzMyCiAgICBkaGNwNDogZmFsc2UKICAgIGRoY3A2OiBmYWxzZQogICAgZGhjcC1pZGVudGlmaWVyOiBtYWMK reboot=k"
 },
 "logger": {
  "log_path": "/var/lib/flintlock/vm/ns1/mvm1/01JNKXRA9M9JKXFW6MBPMR2APT/firecracker.log",
  "level": "Debug",
  "show_level": true,
  "show_log_origin": true
 },
 "machine-config": {
  "vcpu_count": 2,
  "mem_size_mib": 2048,
  "smt": false,
  "track_dirty_pages": false
 },
 "metrics": {
  "metrics_path": "/var/lib/flintlock/vm/ns1/mvm1/01JNKXRA9M9JKXFW6MBPMR2APT/firecracker.metrics"
 },
 "mmds-config": {
  "version": "V1",
  "network_interfaces": [
   "eth0"
  ]
 },
 "network-interfaces": [
  {
   "iface_id": "eth0",
   "host_dev_name": "fltapb29f02d",
   "guest_mac": "AA:FF:00:00:00:01"
  },
  {
   "iface_id": "eth1",
   "host_dev_name": "fltap61eda7e"
  }
 ]
}
pruser@raspberrypi:/var/lib/flintlock/vm/ns1/mvm1/01JNKXRA9M9JKXFW6MBPMR2APT $ cat firecracker.stderr 
Error: RunWithoutApiError(BuildMicroVMFromJson(ParseFromJson(BlockDevice(CreateBlockDevice(VirtioBackend(BackingFile(Os { code: 2, kind: NotFound, message: "No such file or directory" }, "")))))))
Error: RunWithoutApiError(BuildMicroVMFromJson(ParseFromJson(BlockDevice(CreateBlockDevice(VirtioBackend(BackingFile(Os { code: 2, kind: NotFound, message: "No such file or directory" }, "")))))))
Error: RunWithoutApiError(BuildMicroVMFromJson(ParseFromJson(BlockDevice(CreateBlockDevice(VirtioBackend(BackingFile(Os { code: 2, kind: NotFound, message: "No such file or directory" }, "")))))))
Error: RunWithoutApiError(BuildMicroVMFromJson(ParseFromJson(BlockDevice(CreateBlockDevice(VirtioBackend(BackingFile(Os { code: 2, kind: NotFound, message: "No such file or directory" }, "")))))))
# The message repeats over and over
pruser@raspberrypi:/var/lib/flintlock/vm/ns1/mvm1/01JNKXRA9M9JKXFW6MBPMR2APT $ cat firecracker.stdout
2025-03-05T20:50:54.887829200 [01JNKXRA9M9JKXFW6MBPMR2APT:main] Running Firecracker v1.10.1
2025-03-05T20:59:24.181797758 [01JNKXRA9M9JKXFW6MBPMR2APT:main] Running Firecracker v1.10.1
2025-03-05T20:59:24.206946131 [01JNKXRA9M9JKXFW6MBPMR2APT:main] Running Firecracker v1.10.1
2025-03-05T20:59:24.231225184 [01JNKXRA9M9JKXFW6MBPMR2APT:main] Running Firecracker v1.10.1
# The message repeats over and over
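Comparing the payload with the rendered firecracker.cfg points at the likely cause: the CreateMicroVM response above shows "root_volume": null, and the config's only drive has an empty path_on_host, so Firecracker fails with ENOENT trying to open "" as the backing file. A quick check against the generated config (run from the VM's state directory):

```shell
# Show the drive's backing file path in the rendered Firecracker config;
# an empty string here means flintlock attached no root volume.
grep '"path_on_host"' firecracker.cfg || echo "no firecracker.cfg found here"
```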

Aaaand, that's where I am. I get the same error with flintlock v0.8, and I tried looking up the Firecracker Virtio error online but didn't find anything.

P.S.: @richardcase, as soon as we manage to close this issue, count on me to update the documentation with the solution and start contributing to the project :)

@richardcase
Member

@PruserPC - thanks for the detailed write-up of what you have done. I will work through it this week and let you know what I find. Will get back to you asap.


PruserPC commented Mar 26, 2025

Hey @richardcase !

I saw your latest commit to provision.sh today, and now it's working properly on my arm machine. Thanks!

However, I dug deeper and found out that the problem was with the Raspberry Pi's KVM module configuration, so I decided to keep trying with my amd64 machine (Intel x86_64 with 6.13.0-061300-generic kernel).

I tried to create a new MicroVM with the following payload:

{
  "microvm": {
    "id": "mvm1",
    "namespace": "ns1",
    "labels": {
      "env": "lab"
    },
    "vcpu": 2,
    "memory_in_mb": 1024,
    "kernel": {
      "image": "ghcr.io/liquidmetal-dev/kernel-bin:5.10.77",
      "cmdline": {},
      "filename": "boot/vmlinux",
      "add_network_config": true
    },
    "additional_volumes": [{
        "id": "modules",
        "is_read_only": false,
        "mount_point": "/lib/modules/5.10.77",
        "source": {
          "container_source": "ghcr.io/liquidmetal-dev/kernel-modules:5.10.77"
        }
    }],
    "root_volume": {
        "id": "root",
        "is_read_only": false,
        "source": {
          "container_source": "docker.io/library/ubuntu:20.04"
        }
    },
    "interfaces": [
      {
        "device_id": "eth1",
        "type": 1,
        "address": {
          "address": "192.168.100.30/32"
        }
      }
    ]
  }
}

And the result in /var/lib/flintlock/vm/ns1/mvm1/<ID>/firecracker.log is:

2025-03-26T20:17:23.305880407 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:INFO:src/vmm/src/resources.rs:167] Successfully added metadata to mmds from file
2025-03-26T20:17:23.305921561 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/builder.rs:391] event_start: build microvm for boot
2025-03-26T20:17:23.320871980 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/device_manager/mmio.rs:95] acpi: Building AML for VirtIO device _SB_.V000. memory range: 0xd0001000:4096 irq: 5
2025-03-26T20:17:23.320951254 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/device_manager/mmio.rs:95] acpi: Building AML for VirtIO device _SB_.V001. memory range: 0xd0002000:4096 irq: 6
2025-03-26T20:17:23.320988928 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/device_manager/mmio.rs:95] acpi: Building AML for VirtIO device _SB_.V002. memory range: 0xd0003000:4096 irq: 7
2025-03-26T20:17:23.321027028 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/device_manager/mmio.rs:95] acpi: Building AML for VirtIO device _SB_.V003. memory range: 0xd0004000:4096 irq: 8
2025-03-26T20:17:23.321041568 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/devices/acpi/vmgenid.rs:61] vmgenid: building VMGenID device. Address: 0x000dfff0. IRQ: 9
2025-03-26T20:17:23.321080208 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/devices/acpi/vmgenid.rs:69] vmgenid: writing new generation ID to guest: 0x4bfb53955176123340346ab8fbbb6df2
2025-03-26T20:17:23.321250611 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/arch/x86_64/mptable.rs:129] mptable: Allocated 324 bytes for MPTable 2 vCPUs at address 0x0009fc00
2025-03-26T20:17:23.321317781 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/acpi/mod.rs:70] acpi: Wrote table (826 bytes) at address: 0x0009fd44
2025-03-26T20:17:23.321337733 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/acpi/mod.rs:70] acpi: Wrote table (276 bytes) at address: 0x000a007e
2025-03-26T20:17:23.321344143 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/acpi/mod.rs:70] acpi: Wrote table (72 bytes) at address: 0x000a0192
2025-03-26T20:17:23.321350498 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/acpi/mod.rs:70] acpi: Wrote table (52 bytes) at address: 0x000a01da
2025-03-26T20:17:23.321357934 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/acpi/mod.rs:151] acpi: Wrote RSDP (36 bytes) at address: 0x000e0000
2025-03-26T20:17:23.321668572 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/builder.rs:393] event_end: build microvm for boot
2025-03-26T20:17:23.321691494 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/builder.rs:395] event_start: boot microvm
2025-03-26T20:17:23.321707620 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:INFO:src/vmm/src/device_manager/mmio.rs:453] Artificially kick devices.
2025-03-26T20:17:23.321759271 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:fc_vcpu 1:WARN:src/vmm/src/vstate/vcpu/mod.rs:402] Received a VcpuEvent::Resume message with immediate_exit enabled. immediate_exit was disabled before proceeding
2025-03-26T20:17:23.321773584 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:fc_vcpu 0:WARN:src/vmm/src/vstate/vcpu/mod.rs:402] Received a VcpuEvent::Resume message with immediate_exit enabled. immediate_exit was disabled before proceeding
2025-03-26T20:17:23.321801302 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:DEBUG:src/vmm/src/builder.rs:400] event_end: boot microvm
2025-03-26T20:17:23.321807326 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:INFO:src/firecracker/src/main.rs:578] Successfully started microvm that was configured from one single json
2025-03-26T20:17:23.321943907 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:WARN:src/vmm/src/devices/legacy/serial.rs:270] Detached the serial input due to peer close/error.
2025-03-26T20:17:24.706187885 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:INFO:src/vmm/src/lib.rs:823] Vmm is stopping.
2025-03-26T20:17:24.706364743 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:INFO:src/vmm/src/lib.rs:823] Vmm is stopping.
2025-03-26T20:17:24.736697540 [01JQ9Y4CMDQBB4BJ3N9N6MAYHM:main:INFO:src/firecracker/src/main.rs:103] Firecracker exiting successfully. exit_code=0

Any ideas on why it exits? Is this the expected behaviour? I tried to look for something similar online without much success.

Thanks beforehand

@richardcase
Member

Thanks @PruserPC. I have been working through the quick start guide; I still have a bunch of things to change. Glad the change to provision.sh helped a bit.

This log line seems suspect to me:

Detached the serial input due to peer close/error.

I'll need to have a look, but I suspect that to fix this we need to change the kernel command line... but I need to try.
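For reference while experimenting: extra kernel arguments go through the payload's kernel.cmdline map (assuming flintlock renders entries as key=value). For example, raising console log verbosity to see more of why the guest exits (loglevel=7 is illustrative only, not a confirmed fix):

```json
"kernel": {
  "image": "ghcr.io/liquidmetal-dev/kernel-bin:5.10.77",
  "cmdline": {
    "loglevel": "7"
  },
  "filename": "boot/vmlinux",
  "add_network_config": true
}
```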
