
virtiofsd error on Debian 12 #1702

Closed
lukerops opened this issue Feb 28, 2025 · 2 comments · Fixed by #1718
Labels: Documentation (Documentation needs updating), Easy (Good for new contributors)

@lukerops

Hi,
I am trying to create a storage volume of type filesystem to share between two VMs, but when I add this volume to the VM, it breaks startup and I have to kill the QEMU process to stop incus from trying to start the VM. Every time I try to stop the VM, I get the following message:

$ incus delete --force v1 
Error: Failed deleting instance "v1" in project "test": Failed to create instance delete operation: Instance is busy running a "start" operation

I am using incus 6.0 LTS installed from the Debian 12 (Bookworm) backports repository.

PS: I was following this video to execute the commands.


I tried to debug this and found that for this type of volume (filesystem), incus has two options: virtiofs and 9p. Whenever it finds the virtiofsd binary, it uses it. The log file /var/log/incus/test_v1/disk.foo.log shows that the --cache parameter is wrong for the version of virtiofsd I have installed. After checking the accepted parameters, I found that this version must use -o cache instead.
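The flag mismatch can be illustrated with a minimal sketch: the QEMU-bundled C virtiofsd (version < 8) takes `-o cache=<mode>` while the standalone Rust rewrite takes `--cache <mode>`. The version-parsing logic below is a hypothetical illustration (not anything incus actually does), using the version string from this report:

```shell
# Hypothetical sketch: derive the right cache-flag syntax from the
# virtiofsd version string. Versions before 8 are the C implementation
# bundled with QEMU; 8+ is the standalone Rust rewrite.
version_line="virtiofsd version 7.2.15 (Debian 1:7.2+dfsg-7+deb12u12)"

# Strip the prefix, then keep only the major version number.
major="${version_line#virtiofsd version }"
major="${major%%.*}"

if [ "$major" -ge 8 ]; then
    cache_flag="--cache never"   # Rust virtiofsd (standalone)
else
    cache_flag="-o cache=none"   # C virtiofsd bundled with QEMU < 8
fi
echo "$cache_flag"
```

For the 7.2.15 binary from this report, this prints `-o cache=none`, matching the help output below.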

The workaround I found is to set io.bus to 9p on every custom filesystem volume device in the instance. With this, the VM starts working again.
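For reference, the workaround as a command would look something like the sketch below, using the instance ("v1") and device name ("foo") from this report; the echo is a dry run, so drop it to actually change the device:

```shell
# Hypothetical sketch of the workaround: pin a custom filesystem
# volume device to the 9p bus so incus does not invoke virtiofsd.
# "v1" and "foo" are the instance and device names from this report.
dev=foo
cmd="incus config device set v1 $dev io.bus=9p"
echo "$cmd"   # dry run; remove the echo to apply
```

Repeat for each custom filesystem volume device attached to the instance.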

$ /usr/lib/qemu/virtiofsd --help
usage: /usr/lib/qemu/virtiofsd [options]

    -h   --help                print help
    -V   --version             print version
    --print-capabilities       print vhost-user.json
    -d   -o debug              enable debug output (implies -f)
    --syslog                   log to syslog (default stderr)
    -f                         foreground operation
    --daemonize                run in background
    -o cache=<mode>            cache mode. could be one of "auto, always, none"
                               default: auto
    -o flock|no_flock          enable/disable flock
                               default: no_flock
    -o log_level=<level>       log level, default to "info"
                               level could be one of "debug, info, warn, err"
    -o max_idle_threads        the maximum number of idle worker threads
                               allowed (default: 10)
    -o posix_lock|no_posix_lock
                               enable/disable remote posix lock
                               default: no_posix_lock
    -o readdirplus|no_readdirplus
                               enable/disable readirplus
                               default: readdirplus except with cache=none
    -o sandbox=namespace|chroot
                               sandboxing mode:
                               - namespace: mount, pid, and net
                                 namespaces with pivot_root(2)
                                 into shared directory
                               - chroot: chroot(2) into shared
                                 directory (use in containers)
                               default: namespace
    -o timeout=<number>        I/O timeout (seconds)
                               default: depends on cache= option.
    -o writeback|no_writeback  enable/disable writeback cache
                               default: no_writeback
    -o xattr|no_xattr          enable/disable xattr
                               default: no_xattr
    -o xattrmap=<mapping>      Enable xattr mapping (enables xattr)
                               <mapping> is a string consists of a series of rules
                               e.g. -o xattrmap=:map::user.virtiofs.:
    -o modcaps=CAPLIST         Modify the list of capabilities
                               e.g. -o modcaps=+sys_admin:-chown
    --rlimit-nofile=<num>      set maximum number of file descriptors
                               (0 leaves rlimit unchanged)
                               default: min(1000000, fs.file-max - 16384)
                                        if the current rlimit is lower
    -o allow_direct_io|no_allow_direct_io
                               retain/discard O_DIRECT flags passed down
                               to virtiofsd from guest applications.
                               default: no_allow_direct_io
    -o announce_submounts      Announce sub-mount points to the guest
    -o posix_acl/no_posix_acl  Enable/Disable posix_acl. (default: disabled)
    -o security_label/no_security_label  Enable/Disable security label. (default: disabled)
    -o killpriv_v2/no_killpriv_v2
                               Enable/Disable FUSE_HANDLE_KILLPRIV_V2.
                               (default: enabled as long as client supports it)
    -o source=PATH             shared directory tree
    -o allow_root              allow access by root
    --socket-path=PATH         path for the vhost-user socket
    --socket-group=GRNAME      name of group for the vhost-user socket
    --fd=FDNUM                 fd number of vhost-user socket
    --thread-pool-size=NUM     thread pool size limit (default 0)
$ cat /var/log/incus/test_v1/disk.foo.log
fuse: unknown option(s): `--cache=never'
$ /usr/lib/qemu/virtiofsd --version
virtiofsd version 7.2.15 (Debian 1:7.2+dfsg-7+deb12u12)
Copyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers
using FUSE kernel interface version 7.36
$ qemu-system-x86_64 --version
QEMU emulator version 7.2.15 (Debian 1:7.2+dfsg-7+deb12u12)
Copyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers
$ incus version
Client version: 6.0.3
Server version: 6.0.3
$ incus storage info standard
info:
  description: ""
  driver: btrfs
  name: standard
  space used: 72.33GiB
  total space: 3.64TiB
$ incus info
config:
  core.https_address: 0.0.0.0:8443
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- network_sriov
- console
- restrict_dev_incus
- migration_pre_copy
- infiniband
- dev_incus_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- dev_incus_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- backup_compression
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- images_all_projects
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- instances_placement_scriptlet
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- zfs_delegate
- storage_api_remote_volume_snapshot_copy
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- disk_initial_volume_configuration
- operation_wait
- image_restriction_privileged
- cluster_internal_custom_volume_copy
- disk_io_bus
- storage_cephfs_create_missing
- instance_move_config
- ovn_ssl_config
- certificate_description
- disk_io_bus_virtio_blk
- loki_config_instance
- instance_create_start
- clustering_evacuation_stop_options
- boot_host_shutdown_action
- agent_config_drive
- network_state_ovn_lr
- image_template_permissions
- storage_bucket_backup
- storage_lvm_cluster
- shared_custom_block_volumes
- auth_tls_jwt
- oidc_claim
- device_usb_serial
- numa_cpu_balanced
- image_restriction_nesting
- network_integrations
- instance_memory_swap_bytes
- network_bridge_external_create
- network_zones_all_projects
- storage_zfs_vdev
- container_migration_stateful
- profiles_all_projects
- instances_scriptlet_get_instances
- instances_scriptlet_get_cluster_members
- instances_scriptlet_get_project
- network_acl_stateless
- instance_state_started_at
- networks_all_projects
- network_acls_all_projects
- storage_buckets_all_projects
- resources_load
- instance_access
- project_access
- projects_force_delete
- resources_cpu_flags
- disk_io_bus_cache_filesystem
- instances_lxcfs_per_instance
- disk_volume_subpath
- projects_limits_disk_pool
- network_ovn_isolated
- qemu_raw_qmp
- network_load_balancer_health_check
- oidc_scopes
- network_integrations_peer_name
- qemu_scriptlet
- instance_auto_restart
- storage_lvm_metadatasize
- ovn_nic_promiscuous
- ovn_nic_ip_address_none
- instances_state_os_info
- network_load_balancer_state
- instance_nic_macvlan_mode
- storage_lvm_cluster_create
- network_ovn_external_interfaces
- instances_scriptlet_get_instances_count
- cluster_rebalance
- custom_volume_refresh_exclude_older_snapshots
- storage_initial_owner
- storage_live_migration
- instance_console_screenshot
- image_import_alias
- authorization_scriptlet
- console_force
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
auth_user_name: auth_user_name
auth_user_method: tls
environment:
  addresses:
  - 192.168.15.6:8443
  - 192.168.15.11:8443
  - 10.106.87.1:8443
  - 10.229.115.1:8443
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    Certificate
    -----END CERTIFICATE-----
  certificate_fingerprint: fingerprint
  driver: lxc | qemu
  driver_version: 5.0.2 | 7.2.15
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    uevent_injection: "true"
    unpriv_binfmt: "false"
    unpriv_fscaps: "true"
  kernel_version: 6.1.0-31-amd64
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Debian GNU/Linux
  os_version: "12"
  project: test
  server: incus
  server_clustered: true
  server_event_mode: full-mesh
  server_name: localhost
  server_pid: 471618
  server_version: 6.0.3
  storage: btrfs
  storage_version: "6.2"
  storage_supported_drivers:
  - name: dir
    version: "1"
    remote: false
  - name: lvm
    version: 2.03.16(2) (2022-05-18) / 1.02.185 (2022-05-18) / 4.47.0
    remote: false
  - name: lvmcluster
    version: 2.03.16(2) (2022-05-18) / 1.02.185 (2022-05-18) / 4.47.0
    remote: true
  - name: btrfs
    version: "6.2"
    remote: false

lukerops commented Feb 28, 2025

More context:

As I see it, incus expects you to be running QEMU version 8+, because it uses virtiofsd-rs, and QEMU stopped bundling its own version of virtiofsd in favor of virtiofsd-rs in version 8.

I installed QEMU from the stable repository (not the backported version), so I was using QEMU version 7.2.15, which still ships virtiofsd. I upgraded QEMU to the backported version (9.2.0) and the problem is gone, because there is no bundled virtiofsd installed anymore.

Since I didn't see anything about this version requirement in the documentation, I think it should be updated.


stgraber commented Feb 28, 2025

We'll add a mention that while we do work with the older QEMU, we do indeed require the external virtiofsd to be installed as the old built-in one doesn't support what we need.

@stgraber stgraber added this to the incus-6.11 milestone Feb 28, 2025
@stgraber stgraber added the Documentation and Easy labels Feb 28, 2025
stgraber added a commit to stgraber/incus that referenced this issue Mar 3, 2025
Closes lxc#1702

Signed-off-by: Stéphane Graber <[email protected]>
stgraber added a commit that referenced this issue Mar 15, 2025
Closes #1702

Signed-off-by: Stéphane Graber <[email protected]>