Yes, it looks awful. I think the key question is: how often will we write to guest memory in hypervisor mode? If it is not uncommon, and involves large data chunks, then the overhead of the second approach could be intolerable, not to mention the complexity. So in general I think the first one is better. Missing functions like … However, there IS something to worry about: dynamic memory allocation. It's a feature we do not need to implement now, but it is clearly good to have in the future, and the first approach effectively eliminates the possibility of it. Luckily, we have ⬇️
It's a good design. Some extra methods for memory copying allow us to isolate the extra complexity in … So, with everything mentioned above, my preference is:
Another possible solution (though it may be kind of strange and not very applicable) is to mirror the EPT in a reserved range of the page table of …
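To make that idea a bit more concrete, here is a minimal sketch, assuming the intent is to mirror guest mappings into the hypervisor's own page table; `GUEST_PHYS_WINDOW`, `HostPageTable`, and `map_page` are made-up names for illustration, not existing axvisor APIs:

```rust
/// Hypothetical reserved HVA window: every guest physical page is also mapped
/// at GUEST_PHYS_WINDOW + gpa in the hypervisor's own page table.
const GUEST_PHYS_WINDOW: usize = 0xffff_a000_0000_0000;

/// Minimal stand-in for the host page-table interface (assumed, not real).
trait HostPageTable {
    /// Map one 4 KiB page `hva -> hpa` in the hypervisor's page table.
    fn map_page(&mut self, hva: usize, hpa: usize) -> Result<(), ()>;
}

/// Whenever a GPA -> HPA mapping is installed in the EPT, mirror it here, so
/// the hypervisor can reach any guest byte at GUEST_PHYS_WINDOW + gpa
/// without a per-access translation.
fn mirror_ept_mapping<P: HostPageTable>(
    host_pt: &mut P,
    gpa: usize,
    hpa: usize,
) -> Result<(), ()> {
    host_pt.map_page(GUEST_PHYS_WINDOW + gpa, hpa)
}
```

The obvious costs would be reserving a large enough HVA window and keeping the mirror in sync with every EPT update.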
Physical memory allocation for guest VM
Currently, in `axaddrspace`, we use `alloc_frame` from the `PagingHandler` trait to allocate backing physical memory for each `MemoryArea` from `memory_set`. It allocates one frame at a time, and the frames may not be contiguous.

As a result, the backing memory of a contiguous `MemoryArea` may be spread across different physical frames, which makes it complicated to access the guest VM's address space from within the hypervisor. A guest `MemoryArea` with a contiguous GPA (Guest Physical Address) range may be non-contiguous in HPA (Host Physical Address). For operations like loading the guest image, we cannot simply access guest memory by converting the image load address (GPA) to an HPA and then using `phys_to_virt` to get the HVA (Host Virtual Address), because contiguous GPAs may be mapped to non-contiguous HPAs.

Two possible solutions:
1. Allocate contiguous physical frames for `MemoryArea`
One possible solution is to make a `MemoryArea`'s physical frames contiguous, through methods like `alloc_contiguous` in `GlobalPage` of `axalloc`. In that case we can easily translate a GPA into an HPA, and then access the corresponding HVA through raw pointer dereference, within the bounds of the `MemoryArea`'s size.

However, the current `PagingHandler` trait provided by `page_table_multiarch` does not support these semantics; it can only allocate one page frame at a time (through `alloc_frame()`). To use the `alloc_contiguous` method of `GlobalPage`, we would need to rely on `axalloc`, but I only want `axaddrspace` to hold a `PagingHandler` generic.

So the first method might require modifying the `page_table_multiarch` crate, which seems like a significant change. Furthermore, allocating a large block of contiguous physical memory at once is not flexible enough, making it difficult to support demand paging.
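For comparison, this is what the contiguous case buys us: GPA-to-HVA translation becomes plain offset arithmetic. A minimal sketch, assuming a `MemoryArea` whose GPA range `[base_gpa, base_gpa + size)` is backed by contiguous HPAs starting at `base_hpa`; the `phys_to_virt` parameter just stands in for the usual HPA-to-HVA conversion, and none of this is existing axaddrspace code:

```rust
/// Sketch only: translate a GPA inside one contiguously-backed MemoryArea
/// into an HVA that the hypervisor can dereference directly.
fn gpa_to_hva(
    gpa: usize,
    base_gpa: usize,   // GPA where this MemoryArea starts
    base_hpa: usize,   // HPA of its (contiguous) backing memory
    size: usize,       // size of the MemoryArea in bytes
    phys_to_virt: impl Fn(usize) -> usize, // HPA -> HVA conversion
) -> Option<*mut u8> {
    if gpa < base_gpa || gpa >= base_gpa + size {
        return None; // outside this MemoryArea
    }
    // Contiguous backing means GPA -> HPA is a constant offset.
    let hpa = base_hpa + (gpa - base_gpa);
    Some(phys_to_virt(hpa) as *mut u8)
}
```

With this, loading an image segment is just a bounds check plus a single memcpy into the returned pointer.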
2. Manually concatenate non-contiguous physical address segments
The other solution is more complicated but more flexible. We can keep the current frame-by-frame allocation strategy, while using methods like `translated_byte_buffer` (as rCore-Tutorial does) to obtain the physical page frames corresponding to the virtual pages one by one, convert them into byte slices (`&'static mut [u8]`), and store them individually in a `Vec`.

Current implementation:
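Roughly, such a helper boils down to the following minimal sketch (not the actual code; `gpa_to_hpa` and `phys_to_virt` are placeholder closures standing in for the real GPA-to-HPA lookup and HPA-to-HVA conversion, not the actual axaddrspace API):

```rust
const PAGE_SIZE: usize = 0x1000;

/// Sketch only: collect the guest region [gpa, gpa + len) as a list of
/// per-page byte slices, one slice per (possibly non-contiguous) host frame.
fn translated_byte_buffer(
    mut gpa: usize,
    mut len: usize,
    gpa_to_hpa: impl Fn(usize) -> Option<usize>, // page-table walk, GPA -> HPA
    phys_to_virt: impl Fn(usize) -> usize,       // HPA -> HVA
) -> Option<Vec<&'static mut [u8]>> {
    let mut buffers = Vec::new();
    while len > 0 {
        // Translate one page at a time; the resulting HPAs need not be contiguous.
        let hpa = gpa_to_hpa(gpa)?;
        let chunk = (PAGE_SIZE - gpa % PAGE_SIZE).min(len);
        let hva = phys_to_virt(hpa) as *mut u8;
        // SAFETY: assumes the frame is mapped in the hypervisor and exclusively
        // owned by this guest region for the lifetime of the buffer.
        buffers.push(unsafe { core::slice::from_raw_parts_mut(hva, chunk) });
        gpa += chunk;
        len -= chunk;
    }
    Some(buffers)
}
```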
The downside of this approach is undoubtedly increased complexity. Also, `translated_byte_buffer` (or any similar method) needs to know the size of the corresponding GPA region in advance. For image loading, that means we need to know the size of the image file; currently the image file size is not included in the VM configuration and is only obtained when the image is read (though this is not a real problem).

We might need to rely on an enum to wrap the GPA data type from the hypervisor's (HPA & HVA) perspective, encapsulating the complexity of the `Vec<&'static mut [u8]>` structure within the enum. Additionally, the enum would need to implement the `std::io::Read` and `Write` traits.

Something like:
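A rough sketch of that wrapper (hedged: `GuestMemBacking` and `GuestRegionWriter` are made-up names, and only `Write` is shown; a real version would also implement `Read` and report out-of-range writes as errors):

```rust
use std::io::{self, Write};

/// Sketch only: the backing of one guest memory region, seen from the host.
enum GuestMemBacking {
    /// Solution 1: one contiguous host slice covers the whole region.
    Contiguous(&'static mut [u8]),
    /// Solution 2: one slice per host frame, as returned by a
    /// translated_byte_buffer-style helper.
    Scattered(Vec<&'static mut [u8]>),
}

/// Writes sequentially into the region, hiding which backing is used.
struct GuestRegionWriter {
    backing: GuestMemBacking,
    pos: usize, // current write offset inside the region, in bytes
}

impl Write for GuestRegionWriter {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        let written = match &mut self.backing {
            GuestMemBacking::Contiguous(slice) => {
                let n = buf.len().min(slice.len().saturating_sub(self.pos));
                if n > 0 {
                    slice[self.pos..self.pos + n].copy_from_slice(&buf[..n]);
                }
                n
            }
            GuestMemBacking::Scattered(chunks) => {
                // Skip whole chunks up to `pos`, then copy across chunk borders.
                let mut skip = self.pos;
                let mut src = buf;
                let mut n = 0;
                for chunk in chunks.iter_mut() {
                    if src.is_empty() {
                        break;
                    }
                    if skip >= chunk.len() {
                        skip -= chunk.len();
                        continue;
                    }
                    let step = src.len().min(chunk.len() - skip);
                    chunk[skip..skip + step].copy_from_slice(&src[..step]);
                    src = &src[step..];
                    skip = 0;
                    n += step;
                }
                n
            }
        };
        self.pos += written;
        Ok(written)
    }

    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}
```

With `Write` implemented, the image loader does not need to care which backing is behind the region.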
Memory management for passthrough devices & other emulated devices
My second question is simpler and more straightforward: how do we design the memory management part of `AxVMConfig`?

We can keep the current design of `VMMemConfig` for normal memory regions like the guest VM's RAM. But should we place the MMIO memory spaces for passthrough devices and emulated devices together with the normal memory regions, or should we place them in the device configuration part?
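One possible shape for the first option, as a hedged sketch (these type and field names are assumptions, not the existing `AxVMConfig`/`VMMemConfig` definitions), is to keep a single list of memory regions but tag each one with how it should be handled:

```rust
/// Sketch only: how each configured region should be treated by the hypervisor.
enum VmMemKind {
    /// Normal guest RAM, backed by host frames allocated by the hypervisor.
    Ram,
    /// Passthrough MMIO: mapped straight through to the physical device.
    PassthroughMmio,
    /// Emulated MMIO: left unmapped so accesses trap into a device model.
    EmulatedMmio,
}

/// Sketch only: one entry in the VM's memory configuration.
struct VmMemRegionConfig {
    gpa_start: usize,
    size: usize,
    flags: usize, // read / write / execute, cacheability, ...
    kind: VmMemKind,
}
```

The alternative would be to keep only RAM entries in the memory list and describe MMIO ranges inside each device's own configuration entry; the tagged-region sketch above is just one way to make the trade-off concrete.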
Note: