
Mapping aliased memory regions will fail on the HAXM module #21

Closed
AlexAltea opened this issue Apr 26, 2019 · 4 comments

AlexAltea commented Apr 26, 2019

Background

Many emulators, e.g. QEMU, represent the guest physical address space as a directed acyclic graph whose leaves provide backing memory (for RAM/ROM) and/or hooks for reading/writing (for IO). See: https://github.com/qemu/qemu/blob/master/docs/devel/memory.rst

As a result, the same backing HVA range can be referenced at two different guest physical locations. For example, the 256 KB (0x40000 bytes) BIOS image is mapped at 0xFFFC0000-0xFFFFFFFF, while an alias of the 128 KB at the bottom of the BIOS image is mapped at 0xE0000-0xFFFFF.
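
To make the aliasing pattern concrete, here is a minimal C++ sketch of the two mappings described above. The function name mirrors the library's MapGuestMemoryLarge, but the signature, the flags parameter, and the placeholder declaration are assumptions for illustration, not the actual virt86 API.

```cpp
#include <cstdint>

// Placeholder declaration standing in for the actual mapping API.
bool MapGuestMemoryLarge(uint64_t gpa, uint64_t size, uint32_t flags, void *hva);

void MapBios(uint8_t *biosImage) {  // 256 KB (0x40000 bytes) host buffer
    // Full BIOS image at the top of the 32-bit guest physical address space.
    MapGuestMemoryLarge(0xFFFC0000, 0x40000, /*flags=*/0, biosImage);

    // 128 KB alias at the legacy BIOS window, backed by the same host buffer:
    // two guest physical ranges now reference the same HVA range.
    MapGuestMemoryLarge(0x000E0000, 0x20000, /*flags=*/0, biosImage);
}
```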

Issue

The implementations of MapGuestMemory and MapGuestMemoryLarge in the HAXM module each issue two ioctls in succession. Specifically:

  • MapGuestMemory:
    • HAX_VM_IOCTL_ALLOC_RAM
    • HAX_VM_IOCTL_SET_RAM
  • MapGuestMemoryLarge:
    • HAX_VM_IOCTL_ADD_RAMBLOCK
    • HAX_VM_IOCTL_SET_RAM2

If I understand HAXM correctly, the first ioctl is supposed to create the HVA-to-HPA mappings, while the second one takes care of the GPA-to-HPA mappings (via EPT).

EPT translation works much like the CR3 translation that turns HVAs into HPAs, and it allows different GPA ranges to point at overlapping HPA ranges, so the second ioctl has no issue with overlapping ranges (like the BIOS alias mentioned above). However, the first ioctl won't allow adding the same HVA range twice.

Since your approach merges the two steps, it won't allow aliases.
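
To illustrate the failure mode, here is a rough sketch of the ioctl sequence for the BIOS example, assuming thin placeholder wrappers around the two HAXM calls; the real payload structures carry more fields than shown here.

```cpp
#include <cstdint>

// Hypothetical wrappers (declarations only) standing in for the driver ioctls.
bool AllocRam(uint64_t hva, uint32_t size);              // HAX_VM_IOCTL_ALLOC_RAM
bool SetRam(uint64_t gpa, uint32_t size, uint64_t hva);  // HAX_VM_IOCTL_SET_RAM

void MapBiosWithAlias(uint8_t *bios) {  // 256 KB host buffer
    const uint64_t hva = reinterpret_cast<uint64_t>(bios);

    // Primary mapping of the full BIOS image.
    AllocRam(hva, 0x40000);            // succeeds: HVA range registered
    SetRam(0xFFFC0000, 0x40000, hva);  // succeeds: GPA -> HPA via EPT

    // Alias at the legacy BIOS window, backed by the same host buffer.
    if (!AllocRam(hva, 0x20000)) {
        // Fails here: the HVA range was already registered above, so a merged
        // MapGuestMemory call bails out before SetRam(0xE0000, ...) ever runs,
        // even though the EPT would have accepted the alias.
        return;
    }
    SetRam(0x000E0000, 0x20000, hva);
}
```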

Possible solution

This might not be the most efficient way of dealing with this, but to fix it without changing your API, you could cache which HVA ranges have already been "added" to HAXM and skip re-adding them in future MapGuestMemory calls.
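
A minimal sketch of such a cache, assuming the backend tracks registered host ranges by base address and length; the class and method names are made up for illustration, and partially overlapping ranges are not merged.

```cpp
#include <cstdint>
#include <map>

// Tracks which HVA ranges have already been registered with HAXM, so that
// HAX_VM_IOCTL_ALLOC_RAM / HAX_VM_IOCTL_ADD_RAMBLOCK is only issued for new ones.
class HvaRangeCache {
public:
    // Returns true if [hva, hva + size) still needs to be registered and
    // records it; returns false if it is fully contained in a known range.
    bool NeedsRegistration(uint64_t hva, uint64_t size) {
        auto it = m_ranges.upper_bound(hva);
        if (it != m_ranges.begin()) {
            --it;  // largest registered base <= hva
            if (hva + size <= it->first + it->second) {
                return false;  // already covered; skip the ALLOC_RAM/ADD_RAMBLOCK ioctl
            }
        }
        m_ranges[hva] = size;  // simplification: partial overlaps are not merged
        return true;
    }

private:
    std::map<uint64_t, uint64_t> m_ranges;  // HVA base -> length
};
```

MapGuestMemory[Large] would consult the cache before the first ioctl and then issue HAX_VM_IOCTL_SET_RAM[2] unconditionally for every mapping, including aliases.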

AlexAltea (Author) commented

Also, it's worth noting that I consider this issue a limitation of the HAXM API. I'm planning a complete redesign in intel/haxm#121, inspired by the KVM API, to prevent issues like this.

StrikerX3 (Owner) commented May 3, 2019

@AlexAltea, can you check if the issue is resolved in PR #22?

I simply took advantage of the fact that HAXM returns an error when HAX_VM_IOCTL_ALLOC_RAM and HAX_VM_IOCTL_ADD_RAMBLOCK are called with HVAs that are already mapped. This does cost a user-to-kernel transition, but memory mapping is typically a one-time setup procedure done at VM creation so the impact should be minimal.

No changes were made to the public API except for the new feature flag. I'm simply relaxing the constraints for MapGuestMemory[Large] so that it can be used to map one host memory range to multiple guest memory ranges.
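
For reference, a rough sketch of this approach with placeholder wrappers for the driver calls; how the "already mapped" error is distinguished from other failures is an assumption here and not taken from the actual PR.

```cpp
#include <cstdint>

// Placeholder result type and wrappers (declarations only) for the HAXM driver calls.
enum class HaxStatus { Success, AlreadyMapped, Error };
HaxStatus HaxAddRamBlock(uint64_t hva, uint64_t size);                           // HAX_VM_IOCTL_ADD_RAMBLOCK
HaxStatus HaxSetRam(uint64_t gpa, uint64_t size, uint64_t hva, uint32_t flags);  // HAX_VM_IOCTL_SET_RAM2

bool MapGuestMemoryLarge(uint64_t gpa, uint64_t size, uint32_t flags, void *hva) {
    const uint64_t hvaAddr = reinterpret_cast<uint64_t>(hva);

    // Step 1: register the host range. An "already mapped" error simply means
    // this HVA range was added earlier (e.g. by the primary mapping of an
    // aliased region), so it is treated as success.
    const HaxStatus status = HaxAddRamBlock(hvaAddr, size);
    if (status != HaxStatus::Success && status != HaxStatus::AlreadyMapped) {
        return false;
    }

    // Step 2: create the GPA -> HPA mapping; the EPT happily points multiple
    // guest physical ranges at the same host pages.
    return HaxSetRam(gpa, size, hvaAddr, flags) == HaxStatus::Success;
}
```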

AlexAltea (Author) commented

> I simply took advantage of the fact that HAXM returns an error when HAX_VM_IOCTL_ALLOC_RAM and HAX_VM_IOCTL_ADD_RAMBLOCK are called with HVAs that are already mapped.

Yeah, I think that's a good workaround. The user-to-kernel transition cost should be negligible, as you said.
This issue can be closed after #22 is merged.

StrikerX3 (Owner) commented

Thanks for confirming. #22 has been merged. Closing this issue.
