Description
The `VhostUserInflight` struct implements `ByteValued`, but it has 4 bytes of padding at the end, so reading that uninitialized memory is UB, making the API unsound.
```rust
/// Inflight I/O descriptor, used by SET_INFLIGHT_FD and GET_INFLIGHT_FD
/// requests.
#[repr(C)]
#[derive(Copy, Clone, Default)]
pub struct VhostUserInflight {
    /// Size of the area to track inflight I/O.
    pub mmap_size: u64,
    /// Offset of this area from the start of the supplied file descriptor.
    pub mmap_offset: u64,
    /// Number of virtqueues.
    pub num_queues: u16,
    /// Size of virtqueues.
    pub queue_size: u16,
}

// ...

unsafe impl ByteValued for VhostUserInflight {}
```
The problem is basically reading that uninitialized memory as `&[u8]`: reading uninitialized memory is UB, and creating a reference to uninitialized memory is also UB.
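The trailing padding is easy to observe: the fields sum to 8 + 8 + 2 + 2 = 20 bytes, but on typical 64-bit targets the struct occupies 24. A minimal, self-contained sketch (struct layout copied from above):

```rust
use std::mem::size_of;

// Same layout as the struct in question: C layout, no packing.
#[repr(C)]
#[derive(Copy, Clone, Default)]
struct VhostUserInflight {
    mmap_size: u64,
    mmap_offset: u64,
    num_queues: u16,
    queue_size: u16,
}

fn main() {
    // The u64 fields force 8-byte alignment, so the compiler appends
    // 4 bytes of trailing padding after queue_size (on targets where
    // u64 is 8-byte aligned, e.g. x86_64 and aarch64).
    assert_eq!(size_of::<VhostUserInflight>(), 24);
    // Viewing a value of this type as &[u8] therefore exposes those 4
    // uninitialized padding bytes, which is undefined behavior.
}
```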
Possible solution
Although the right thing to do is to fix QEMU and make this struct packed, it would be difficult to do that without breaking backward compatibility. Perhaps we can fix this while keeping backward compatibility:
Make `VhostUserInflight` `#[repr(C, packed)]` and, after reading a `VhostUserInflight`, read the trailing `MaybeUninit<u32>`; for that we could add a new method `Endpoint::<R>::skip(len: usize)`. But we would need to skip those bytes at the beginning of `BackendReqHandler::handle_request()`:
```rust
pub fn handle_request(&mut self) -> Result<()> {
    // ...
    let (size, buf) = match hdr.get_size() {
        0 => (0, vec![0u8; 0]),
        len => {
            // The UB happens inside recv_data()
            let (size2, rbuf) = self.main_sock.recv_data(len as usize)?;
            if size2 != len as usize {
                return Err(Error::InvalidMessage);
            }
            (size2, rbuf)
        }
    };
    // ...
}
```
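The proposed `skip` method could be sketched as follows; this is hypothetical, not the crate's API, and is written here against a generic `std::io::Read` source rather than the crate's `Endpoint<R>`:

```rust
use std::io::{self, Read};

// Hypothetical helper mirroring the proposed Endpoint::<R>::skip(len):
// read and discard `len` bytes from the underlying channel, so trailing
// padding never becomes part of a typed payload.
fn skip<R: Read>(sock: &mut R, len: usize) -> io::Result<()> {
    let mut scratch = [0u8; 64];
    let mut remaining = len;
    while remaining > 0 {
        let chunk = remaining.min(scratch.len());
        sock.read_exact(&mut scratch[..chunk])?;
        remaining -= chunk;
    }
    Ok(())
}

fn main() -> io::Result<()> {
    // Simulate a packed payload followed by 4 padding bytes on the wire.
    let wire: Vec<u8> = vec![1, 2, 3, 4, /* padding */ 0, 0, 0, 0];
    let mut cursor = io::Cursor::new(wire);

    let mut payload = [0u8; 4];
    cursor.read_exact(&mut payload)?; // read the packed payload
    skip(&mut cursor, 4)?;            // discard the padding
    assert_eq!(payload, [1, 2, 3, 4]);
    assert_eq!(cursor.position(), 8); // padding was consumed
    Ok(())
}
```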
Alternatively, fix QEMU (and the frontend) by adding an explicit padding field, guaranteeing that it will be initialized to 0:
```c
typedef struct VhostUserInflight {
    uint64_t mmap_size;
    uint64_t mmap_offset;
    uint16_t num_queues;
    uint16_t queue_size;
    uint32_t padding; // MUST be zero initialized
} VhostUserInflight;
```
But this forces us to have two `VhostUserInflight` definitions, one for the backend and one for the frontend, because the frontend must continue sending the unpacked version with padding.
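On the Rust side, the frontend's variant of this idea would keep `#[repr(C)]` but name the padding, so every one of the 24 bytes belongs to a real, zero-initializable field and none is compiler-inserted. A sketch (the `VhostUserInflightWire` name and the `padding` field are hypothetical):

```rust
use std::mem::size_of;

// Sketch of a frontend-side definition with the padding made explicit.
// The layout is identical to the original 24-byte struct, but no byte
// is ever uninitialized when a value is viewed as raw bytes.
#[repr(C)]
#[derive(Copy, Clone, Default)]
struct VhostUserInflightWire {
    mmap_size: u64,
    mmap_offset: u64,
    num_queues: u16,
    queue_size: u16,
    padding: u32, // MUST be zero; hypothetical field name
}

fn main() {
    // Same size as before: the explicit field replaces the implicit padding.
    assert_eq!(size_of::<VhostUserInflightWire>(), 24);
    // Default zero-initializes every field, including the padding.
    let msg = VhostUserInflightWire::default();
    assert_eq!(msg.padding, 0);
}
```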
This issue is to continue the discussion started in #208.