Calibration math #1
Just in case, here's actual code. It uses a matrix class and a transformation class, but should be obvious. There are three parts: the state, the update run once per tracking sample pair, and the final solve after all samples have been entered.
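(The code listing itself did not survive in this copy of the thread. Below is a minimal numpy sketch of what those three parts plausibly looked like, reconstructed from the math in the issue body further down; the function names, the layout of the unknown vector, and the choice of numpy are my own assumptions, not the original code.)

```python
import numpy as np

# State: the 24x24 normal-equations matrix A^T*A and the 24x1 right-hand
# side A^T*b, accumulated one tracking sample pair at a time.
AtA = np.zeros((24, 24))
Atb = np.zeros(24)

def add_sample_pair(AtA, Atb, A, B):
    """Per tracking sample pair: add the 12 linear equations implied by
    A * L = G * B (A and B are 4x4 pose matrices) to the normal equations.
    The unknown vector x stacks the top 3x4 blocks of L and G, row-major."""
    for r in range(3):
        for c in range(4):
            row = np.zeros(24)
            for k in range(3):
                row[4 * k + c] = A[r, k]        # coefficient of L[k, c]
            for k in range(4):
                row[12 + 4 * r + k] = -B[k, c]  # coefficient of G[r, k]
            rhs = -A[r, 3] if c == 3 else 0.0   # known translation term of A * L
            AtA += np.outer(row, row)           # build A^T*A and A^T*b on the fly
            Atb += rhs * row

def solve_calibration(AtA, Atb):
    """After all samples have been entered: solve A^T*A x = A^T*b
    (numpy uses a pivoting LU solve) and unpack L and G as 4x4 matrices."""
    x = np.linalg.solve(AtA, Atb)
    L = np.vstack([x[:12].reshape(3, 4), [0.0, 0.0, 0.0, 1.0]])
    G = np.vstack([x[12:].reshape(3, 4), [0.0, 0.0, 0.0, 1.0]])
    return L, G
```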
Thanks for the comment! This idea simplifies the approach a lot. Since the initial implementation, I figured out why I was having trouble getting a single-step solution to work: the origin of rotation for G was not always zero (see matzman666/OpenVR-InputEmulator#113). With the new driver in 1bed4af, this problem should be fixed.
@pushrax Hi! Since the 3D models of the controllers are known, why not calibrate their location in space by simply placing the controllers next to each other in a predetermined position?
@Doc-Ok Hi professor, thanks for sharing; this is really helpful for me, and it took me nearly half a day to understand your idea. I think I made some silly mistake, and there is no one at my side who can help. Really, thanks, professor.
@Doc-Ok Finally, I got it... thanks! I had negated the wrong side.
I implemented a single-system solver as described here in 9040063, but at the time could not get it to match the reliability of the previous algorithm. I tried a variant with polar decomposition as well. The single system seemed more sensitive to error induced by sample-timing mismatch between the reference devices. I have a feeling the old algorithm's reliance on rotation data alone to calibrate rotation could explain its better performance: under movement, it depends much more on gyro data and rejects accelerometer data. I'm going to close this issue now, since the current system is performing well and I don't have much reason to keep trying to understand/fix the performance gap. Thanks very much for the information though! It's been a good exercise.
This allows a single sample collection to be used for both the rotation and translation calibration, by applying the computed rotation offset to the previously-collected samples. Unlike the experimental approach [referenced here][1], this does not change the actual solving process, but instead just allows both steps to share their raw input, thus speeding up calibration by 2x. [1]: pushrax#1 (comment)
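A hedged sketch of that sharing (names hypothetical, not the actual implementation): once the rotation offset has been solved, the previously-collected raw samples can be rotated in place and handed straight to the translation step.

```python
import numpy as np

def reuse_samples(samples, R):
    """Apply the solved 3x3 rotation offset R to previously-collected
    (rotation, position) samples so the translation calibration can
    consume the same collection instead of gathering a new one."""
    return [(R @ rot, R @ pos) for rot, pos in samples]
```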
It turns out the calibration math is quite simple. If you represent a tracking result from either system as a 4x4 matrix, where the upper-left 3x3 matrix is a rotation matrix, the upper-right 3x1 matrix is a translation vector (or position), and the lower 1x4 matrix is (0, 0, 0, 1), then you can write the calibration system as
for all i: Ai * L = G * Bi
where Ai, Bi are a tracking sample pair, L is the local transformation between the two devices, and G is the calibration matrix you're looking for, which takes a tracking sample from system B and transforms it to system A. All matrices are 4x4.
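For concreteness, a tiny helper (illustrative, not from the thread) that assembles such a 4x4 pose matrix from a rotation matrix R and a position t:

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 pose: rotation in the upper-left 3x3, position in the
    upper-right 3x1, and (0, 0, 0, 1) as the bottom row."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M
```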
If you multiply that formula out for a single tracking sample pair, you get 12 linear equations in 24 unknowns (the top 3x4 blocks of L and G). If you put a bunch of observations into a standard linear least-squares system, you can solve for matrices L and G directly (and then ignore L). Approach: each tracking pair adds 12 rows to matrix A (which has 24 columns) and 12 rows to the right-hand-side vector b. After n pairs, matrix A is 12n x 24, and b is 12n x 1. You then solve the normal system A^T * A * x = A^T * b using a pivoting Gaussian solver, which yields the 24 unknown coefficients of L and G. (For efficiency, don't construct A or b; instead construct A^T * A, which is 24x24, and A^T * b, which is 24x1, on the fly, one tracking sample pair at a time.)
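As a quick sanity check of this approach (reusing the hypothetical `pose`, `add_sample_pair`, and `solve_calibration` helpers sketched earlier in the thread), one can generate a random ground-truth L and G, derive consistent sample pairs via Ai = G * Bi * L^-1, and confirm the solve recovers both:

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
L_true = pose(Rotation.from_rotvec(rng.normal(size=3)).as_matrix(), rng.normal(size=3))
G_true = pose(Rotation.from_rotvec(rng.normal(size=3)).as_matrix(), rng.normal(size=3))

AtA, Atb = np.zeros((24, 24)), np.zeros(24)
for _ in range(10):  # varied poses keep the normal system well-conditioned
    B = pose(Rotation.from_rotvec(rng.normal(size=3)).as_matrix(), rng.normal(size=3))
    A = G_true @ B @ np.linalg.inv(L_true)  # consistent pair: A * L = G * B
    add_sample_pair(AtA, Atb, A, B)

L, G = solve_calibration(AtA, Atb)
assert np.allclose(L, L_true, atol=1e-6) and np.allclose(G, G_true, atol=1e-6)
```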
The upper-left 3x3 matrix of G will be close to a scaled rotation matrix (with scaling close to 1 if the two tracking systems use the same linear unit), and you can clean it up via polar decomposition.
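One standard way to do that cleanup (my addition, not from the thread) is polar decomposition via the SVD; the mean singular value also recovers the scale factor when the two systems use different linear units:

```python
import numpy as np

def clean_rotation(M):
    """Return the nearest pure rotation to a (near-)scaled-rotation 3x3
    matrix M, plus its scale, via polar decomposition through the SVD."""
    U, s, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:     # flip a column to avoid a reflection
        U[:, -1] = -U[:, -1]
        R = U @ Vt
    return R, s.mean()           # scale is ~1 if both systems use meters
```

Usage would be something like `R, scale = clean_rotation(G[:3, :3])`.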