clock_gettime returning 4ms increments #236
Hi,

The vDSO implementation inside ertgo is patched out because the vDSO pointers would need to be initialized to point to some emulation code inside the enclave. We found it easier to replace the call with a call to the (fast, emulated) clock_gettime instead of additionally implementing such code.

vDSO in Open Enclave is not used for time, but only as a way to handle exceptions. When this feature was added, we experienced system freezes in some high-load situations and thus disabled it. The alternative exception handling, which had previously been the standard method, has since been sufficient for EGo.

You already found out that our clock_gettime implementation loads the vDSO entry directly and doesn't do an actual syscall. I assume you stepped into https://github.com/edgelesssys/edgelessrt/blob/5365e10d10ff4ec108cb278b566e60222f812917/src/ertlibc/time.cpp#L97

Here you can also see that the implementation only supports coarse time because of the missing RDTSC. This is the reason for the 4ms increments. On Icelake and newer, SGX actually supports RDTSC, so it may be possible to extend the implementation to provide fine-grained time. If you have the required hardware and are able to implement this, we would welcome a contribution.

Regarding the loop taking longer when running inside the enclave: the (relative) difference gets much smaller on my machine if I increase the loop count, e.g., 100 times.
Thank you for the explanation. Would you agree that this would fix it:
These changes are required, yes, but there's more. The pointers are not the vDSO routines, but the memory where the kernel writes the timestamps. These are used for reading the time here: If we want high-res time, the timestamp would need to be adjusted by the value obtained via RDTSC. I think the corresponding Linux code is this: So in addition to your suggested changes:
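The adjustment described above is essentially a bit of arithmetic: the vDSO fast path takes the base timestamp the kernel last published and advances it by the TSC delta since that update. A rough illustrative sketch in Go (the struct fields and function names here are hypothetical simplifications, not the actual kernel `vdso_data` layout):

```go
package main

import "fmt"

// vdsoTimeData loosely models the per-clock fields the kernel publishes
// in the vDSO data page. Names and layout are illustrative only.
type vdsoTimeData struct {
	cycleLast uint64 // TSC value at the last kernel clock update
	mult      uint32 // cycles-to-nanoseconds multiplier
	shift     uint32 // right shift applied after multiplication
	sec       uint64 // seconds part of the base timestamp
	nsec      uint64 // nanoseconds part, pre-shifted left by 'shift'
}

// highResNanos sketches the high-resolution adjustment: the coarse base
// timestamp is advanced by the scaled TSC delta. 'tsc' would come from
// RDTSC, which SGX only permits inside enclaves on Icelake and newer.
func highResNanos(d vdsoTimeData, tsc uint64) (sec, nsec uint64) {
	delta := (tsc - d.cycleLast) * uint64(d.mult)
	ns := (d.nsec + delta) >> d.shift
	const nsecPerSec = 1_000_000_000
	return d.sec + ns/nsecPerSec, ns % nsecPerSec
}

func main() {
	// Toy values: 1000 "cycles" elapsed since the last kernel update.
	d := vdsoTimeData{cycleLast: 1000, mult: 1, shift: 0, sec: 5, nsec: 500}
	fmt.Println(highResNanos(d, 2000)) // prints "5 1500"
}
```

Without the RDTSC term, only the kernel-written base timestamp is available, which is exactly the coarse 4ms granularity observed in the enclave.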
@thomasten I'm calling `time.Now()` (clock_gettime under the hood) and compiling the program with ego:
When running inside the enclave vs. outside of it, I am getting 4ms gaps between time measurements. Also, the loop itself takes a lot longer in the enclave:
I can see that you have patched out the vDSO implementation of clock_gettime inside ertgo (and also disabled vDSO in the Open Enclave patch) and instead go for the syscall. The whole point of vDSO is that clock_gettime calls are faster: the data structures are updated by the kernel itself, so no syscall needs to happen. I can also see the removal of rdtsc calls; these, I understand, are not supported on production SGX.
I wonder why you removed vDSO support and went for the slower syscall instead?
Regardless, this alone doesn't explain why there are 4ms between clock_gettime calls; if I run outside of the enclave, I see 1ms increments. I have stepped through the `time.Now()` call via `ego-gdb` (having configured the signed binary with `debug: true`), and I can see that the patched syscall ends up in a custom libc implementation that loads the vDSO entry directly.

Is this something that Intel SGX does to prevent timing attacks? Is there a way to get actually accurate time inside the enclave?