This is a breaking change but is probably worthwhile to consider:

Removing the vspace_t dependency from libsel4vm and libsel4vmmplatsupport and interacting with the seL4 vspace objects directly mostly resolves a series of issues and doesn't really introduce any significant new problems for guests or a client VMM, other than needing to update VMM drivers that expect to use the vspace objects. It also provides a reasonably small stepping stone for eventually removing more of sel4_libs, as discussed recently (https://sel4.discourse.group/t/sel4-virtualization-support-current-efforts/665/3).
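For a sense of what interacting with the vspace objects directly looks like, mapping a guest frame comes down to invoking the kernel's page-mapping call on a frame capability against the guest's vspace root, with no vspace_t in between. A minimal sketch, using ARM naming and a hypothetical wrapper (real code would also have to handle missing intermediate page tables):

```c
#include <sel4/sel4.h>

/* Hypothetical helper: map a single 4 KiB frame capability into a guest's
 * vspace by invoking the kernel object directly, without going through
 * vspace_t. Assumes the covering page table is already mapped; otherwise
 * the invocation fails with seL4_FailedLookup. */
static int guest_map_frame(seL4_CPtr frame_cap, seL4_CPtr guest_vspace_root,
                           seL4_Word guest_vaddr)
{
    return seL4_ARM_Page_Map(frame_cap, guest_vspace_root, guest_vaddr,
                             seL4_AllRights, seL4_ARM_Default_VMAttributes);
}
```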
vspace_t is an abstraction that wraps seL4 vspace objects and provides the following functionality:

1. Helper functions for creating OS objects like IPC buffers and stacks
2. Tracking reserved regions in the address space, like kernel window mappings
3. Unmapping vspace entries indexed by virtual address
4. Creating shared mappings from one vspace into another vspace, indexed by virtual address
5. Automatic placement of anonymous memory regions (e.g. mmap)
6. Automatic allocation and mapping of any internal book-keeping data
7. Self-hosted or remote management of vspaces, where a vspace can refer to the current process's address space or a different address space
8. A way to iteratively and dynamically map frames from another vspace into the caller's vspace and call a callback function with the mapped window as an argument; the mapping is then removed when the callback returns (sketched below)
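The last feature describes, roughly, the following map/callback/unmap pattern. This is an illustrative sketch with placeholder names, not the actual libsel4vspace signatures:

```c
#include <stddef.h>
#include <stdint.h>
#include <vspace/vspace.h>

/* Placeholders for the real mapping/unmapping operations. */
void *map_frames_from(vspace_t *remote, uintptr_t remote_vaddr, size_t size);
void unmap_frames(void *window, size_t size);

typedef int (*window_cb_fn)(void *mapped_window, size_t size, void *cookie);

int with_remote_window(vspace_t *remote, uintptr_t remote_vaddr, size_t size,
                       window_cb_fn cb, void *cookie)
{
    /* Map the frames backing [remote_vaddr, remote_vaddr + size) from the
     * remote vspace into the caller's vspace. */
    void *window = map_frames_from(remote, remote_vaddr, size);
    if (window == NULL) {
        return -1;
    }
    /* Let the caller operate on the temporarily mapped window... */
    int err = cb(window, size, cookie);
    /* ...and remove the mapping again before returning. */
    unmap_frames(window, size);
    return err;
}
```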
Unfortunately there are the following issues:

- The book-keeping data allocation can only happen through a vspace_t instance. To book-keep a VM's vspace, the host must then also have a vspace_t for managing its own vspace, and a host may not want this level of dynamism.
- The book-keeping data doesn't currently do anything clever for large frames and still keeps a unit of data for every 4 KiB range of memory used in the vspace. So a 2 MiB mapping would have 512 duplicate book-keeping entries, and a 1 GiB mapping would have 512 * 512 * 16 bytes = 4 MiB of overhead (a quick check of this arithmetic follows this list).
- Encouraging implementations to use the shared-memory iterator leads to very inefficient emulated drivers, as mapping and unmapping are usually more expensive than short memcpys and dominate the operation time.
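A back-of-envelope check of that overhead, assuming roughly 16 bytes of book-keeping per tracked 4 KiB page (which is what the 4 MiB figure implies):

```c
#include <stdio.h>

int main(void)
{
    const unsigned long granule = 4UL * 1024;  /* book-keeping is per 4 KiB */
    const unsigned long per_entry = 16;        /* assumed bytes per entry */
    const unsigned long mappings[] = { 2UL << 20, 1UL << 30 };  /* 2 MiB, 1 GiB */

    for (int i = 0; i < 2; i++) {
        unsigned long entries = mappings[i] / granule;
        printf("%4lu MiB mapping: %7lu entries, %6lu KiB of book-keeping\n",
               mappings[i] >> 20, entries, entries * per_entry / 1024);
    }
    /* Prints 512 entries (8 KiB) for 2 MiB, and 262144 entries (4 MiB) for 1 GiB. */
    return 0;
}
```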
The VMM only really needs features 1, 2 and 9, and it already implements its own version of region reservations (2) separately. There are also some helper functions in libsel4utils that implement just 1 and depend on a vka_t interface that the VMM also already has a copy of.
As for 6, the VMM does need a more efficient copyin/copyout mechanism, but it would be somewhat system specific and shouldn't create new mappings on each access.
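One plausible shape for that mechanism, purely as a sketch under the assumption that the VMM keeps the guest's RAM permanently mapped at a known base in its own address space (all names here are hypothetical): copyin/copyout then becomes a bounds check plus a memcpy, with nothing mapped or unmapped per access.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical state: guest RAM [gpa_base, gpa_base + size) is kept
 * permanently mapped at vmm_base in the VMM's own vspace. */
typedef struct guest_ram {
    uintptr_t gpa_base;   /* guest-physical base of RAM */
    size_t size;          /* size of the RAM region */
    char *vmm_base;       /* where that RAM is mapped in the VMM */
} guest_ram_t;

static char *gpa_to_vmm(guest_ram_t *ram, uintptr_t gpa, size_t len)
{
    if (gpa < ram->gpa_base || len > ram->size ||
        gpa - ram->gpa_base > ram->size - len) {
        return NULL;   /* not backed by the persistent mapping */
    }
    return ram->vmm_base + (gpa - ram->gpa_base);
}

/* Copy data from the VMM into guest memory without creating new mappings. */
int vmm_guest_copyout(guest_ram_t *ram, uintptr_t gpa, const void *src, size_t len)
{
    char *dst = gpa_to_vmm(ram, gpa, len);
    if (dst == NULL) {
        return -1;
    }
    memcpy(dst, src, len);
    return 0;
}
```

A real VMM would also need to handle guest RAM that is split across several regions, but the lookup stays a table walk rather than a map/copy/unmap cycle.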