From 38b8cf1539e90635327ee01255a84d69e36d4d6a Mon Sep 17 00:00:00 2001
From: Marc Tonsen
Date: Tue, 3 Sep 2024 10:20:46 +0200
Subject: [PATCH 1/4] Iterated on content

---
 alpha-lab/imu-transformations/index.md | 142 ++++++-------------------
 1 file changed, 30 insertions(+), 112 deletions(-)

diff --git a/alpha-lab/imu-transformations/index.md b/alpha-lab/imu-transformations/index.md
index 7090574f2..9425e1854 100644
--- a/alpha-lab/imu-transformations/index.md
+++ b/alpha-lab/imu-transformations/index.md
@@ -26,17 +26,13 @@ import TagLinks from '@components/TagLinks.vue'
 
-::: tip
-Want to compare IMU and gaze data in the same coordinate system to better understand how people coordinate head and eye movements? The transformation functions in this tutorial will show you how!
-:::
-
-This guide contains various transformation functions that can assist when working with Neon's IMU data.
+This guide contains various transformation functions that help relate [Neon's IMU data](https://docs.pupil-labs.com/neon/data-collection/data-streams/#movement-imu-data) to other data streams.
 
-## IMU to World Coordinates
+## Rotation between the IMU and the World
 
-One of the key steps when working with IMU data is the transformation that takes coordinates in the local IMU coordinate system to their corresponding coordinates in [the world coordinate system](http://docs.pupil-labs.com/neon/data-collection/data-streams/#movement-imu-data). The quaternion values provided by the IMU can be used to convert between the two coordinate systems. The `transform_imu_to_world` function, defined below, will be used throughout this article.
+The IMU data includes a description of how the IMU is rotated in relation to the world. Concretely, the IMU data contains quaternions that define a rotation transformation between [the world coordinate system](http://docs.pupil-labs.com/neon/data-collection/data-streams/#movement-imu-data) and the IMU's local coordinate system at different points in time.
 
-Note that the origin of the IMU coordinate system is the same as the origin of the world coordinate system.
+The `transform_imu_to_world` function below demonstrates how to use these quaternions to transform data from the IMU's local coordinate system to the world coordinate system.
 
 ```python
 from scipy.spatial.transform import Rotation as R
@@ -57,11 +53,11 @@ def transform_imu_to_world(imu_coordinates, imu_quaternions):
     ])
 ```
 
-Now that we have the `transform_imu_to_world` function, let's use it!
+### Example: Heading Vectors in World Coordinates
 
-### Obtain IMU Heading Vectors
+The `transform_imu_to_world` function can be used to calculate heading vectors of the IMU in world coordinates. The heading vector essentially describes the direction the IMU is facing. If we imagine the IMU inside the Neon module while it is worn on somebody's head, the heading vector describes the direction the wearer's face is pointing.
 
-An alternative representation of IMU orientation data is a heading vector that points outwards from the center of the IMU. It can be useful to compare this heading vector with the 3D gaze vectors in world coordinates.
+The "forward-facing axis" is the y-axis, so we can calculate the heading vector by transforming the `(0, 1, 0)` vector.
 ```python
 def imu_heading_in_world(imu_quaternions):
@@ -71,11 +67,13 @@ def imu_heading_in_world(imu_quaternions):
     )
 ```
 
-Neutral orientation of the IMU would correspond to a heading vector that points at magnetic North and that is oriented perpendicular to the line of gravity.
+::: tip
+Neutral orientation (i.e. an identity rotation in the quaternion) of the IMU would correspond to a heading vector that points at magnetic North and that is oriented perpendicular to the line of gravity.
+:::
 
-### IMU Acceleration in World
+### Example: Acceleration in World Coordinates
 
-The IMU’s acceleration data are specified in its local coordinate system. If you want to understand how the observer is accelerating through their environment, then it can be easier to have the acceleration data specified in the world coordinate system:
+The IMU’s translational acceleration data is given in the IMU's local coordinate system. To understand how the observer is accelerating through the world, it can be helpful to transform the data into the world coordinate system:
 
 ```python
 accelerations_in_world = transform_imu_to_world(
@@ -85,17 +83,16 @@ accelerations_in_world = transform_imu_to_world(
 
 ## Scene to World Coordinates
 
-Neon simultaneously records data in the scene camera and IMU coordinate systems, making it possible to study the relationship between head and eye movements.
+A lot of the data generated by Neon is provided in the scene camera's coordinate system, including gaze, fixation, and eye state data. This coordinate system is **not** equal to the IMU's coordinate system! There is a translation between them (simply because there is a physical distance between the camera and the IMU in the module) and also a rotation (because of how the scene camera's coordinate system is defined).
 
-To facilitate the comparison, it can be useful to represent these data streams in the same coordinate system. An important step is accounting for [the fixed 102 degree rotation offset between the scene camera and IMU coordinate systems](https://docs.pupil-labs.com/neon/data-collection/data-streams/#movement-imu-data), as depicted below.
+The rotation is a 102-degree rotation around the x-axis of the IMU coordinate system, and the translation is along the vector `(0.0 mm, -1.3 mm, -6.62 mm)`.
 
 ![Diagrams showing the fixed 102 degree rotation offset between the IMU and scene camera coordinate systems.](./imu-scene_camera_offset-black.png)
 
-We can use data from the IMU to transform gaze in scene camera coordinates to world coordinates. We proceed by building a `transform_scene_to_imu` function that handles the rotation between the two coordinate systems. It also accepts a `translation_in_imu` keyword argument to specify if points should be shifted in the IMU system. This will be relevant when converting 3D eyestate to world coordinates.
-
+We can define a `transform_scene_to_imu` function that handles the rotation and translation between the two coordinate systems.
 ```python
-def transform_scene_to_imu(coords_in_scene, translation_in_imu=np.zeros((3,))):
+def transform_scene_to_imu(coords_in_scene, translation_in_imu=np.array([0.0, -1.3, -6.62])):
     imu_scene_rotation_diff = np.deg2rad(-90 - 12)
     scene_to_imu = np.array(
         [
@@ -122,27 +119,25 @@ def transform_scene_to_imu(coords_in_scene, translation_in_imu=np.zeros((3,))):
     return coords_in_imu.T
 ```
 
-Putting together the `transform_scene_to_imu` and `transform_imu_to_world` functions, we can build a composite `transform_scene_to_world` function:
+Combining the `transform_scene_to_imu` function with the `transform_imu_to_world` function allows us to go all the way from the scene camera coordinate system to the world coordinate system:
 
 ```python
-def transform_scene_to_world(coords_in_scene, imu_quaternions, translation_in_imu=np.zeros((3,))):
+def transform_scene_to_world(coords_in_scene, imu_quaternions, translation_in_imu=np.array([0.0, -1.3, -6.62])):
     coords_in_imu = transform_scene_to_imu(coords_in_scene, translation_in_imu)
     return transform_imu_to_world(coords_in_imu, imu_quaternions)
 ```
 
-You can now use this function to transform data in scene camera coordinates to world coordinates. Head to the [Application Example](#application-example) to see how!
+### Example: Eyestate in World Coordinates
 
-## Eyestate to World Coordinates
+The `transform_scene_to_world` function allows us to easily convert [eye state data](https://docs.pupil-labs.com/neon/data-collection/data-streams/#_3d-eye-states) given in scene camera coordinates to world coordinates.
 
-The [3D eyestate estimates](https://docs.pupil-labs.com/neon/data-collection/data-streams/#_3d-eye-states) provided by Neon are aligned with the scene camera coordinate system. This means we can use the `transform_scene_to_world` function above to reconstruct the pose of the eyes in the world coordinate system. We just need to consider that the scene camera is displaced a bit from the IMU.
-Since the eyeball center and optical axis values provided by the 3D eyestate estimates are intrinsically linked, we provide the `eyestate_to_world` function to simplify doing the conversions:
+::: warning
+Note that to do this correctly in practice, you need to make sure you sample the quaternions and the eye state data at the same timestamps. Since both data streams are generated independently and do not share the same set of timestamps, this is a challenge in itself.
 
-```python
-# The 3D eyestate data and the IMU quaternions should be sampled
-# at the same timestamps. You can linearly interpolate the IMU data.
-# See the Application Example:
-# http://docs.pupil-labs.com/alpha-lab/imu-transformations/#application-example
+We are glossing over this here, but one possible solution is to interpolate the IMU data to match the timestamps of the eye state data, as demonstrated [here](http://docs.pupil-labs.com/alpha-lab/imu-transformations/#application-example).
+:::
 
+```python
 def eyestate_to_world(eyeball_centers, optical_axes, imu_quaternions):
     """
     The eyeball_centers and optical_axes inputs are for the same eye.
@@ -165,9 +160,8 @@ def eyestate_to_world(eyeball_centers, optical_axes, imu_quaternions):
     return eyeball_centers_in_world, optical_axes_in_world
 ```
 
-## 3D Gaze to World Coordinates
-
-Neon provides 3D gaze data in [spherical coordinates (i.e., `azimuth/elevation [deg]`)](https://docs.pupil-labs.com/neon/data-collection/data-format/#gaze-csv). The `transform_scene_to_world` function above expects 3D Cartesian coordinates, so to convert spherical 3D gaze to world coordinates, we will need the `spherical_to_cartesian_scene` function:
+### Example: 3D Gaze Direction in World Coordinates
+Neon provides 3D gaze directions in [spherical coordinates (i.e., `azimuth/elevation [deg]`)](https://docs.pupil-labs.com/neon/data-collection/data-format/#gaze-csv). The `transform_scene_to_world` function above expects 3D Cartesian coordinates, so we need to convert the data first.
 
 ```python
 def spherical_to_cartesian_scene(elevations, azimuths):
@@ -202,93 +196,17 @@ def spherical_to_cartesian_scene(elevations, azimuths):
     ).T
 ```
 
-Now we have the tools to convert 3D gaze data to world coordinates:
+Now we can transform the data to world coordinates. Since we are dealing with 3D directions rather than 3D points here, it does not make sense to apply the translation that we used in the `transform_scene_to_world` function above. We are thus setting it to zero here.
 
 ```python
 def gaze_3d_to_world(gaze_elevation, gaze_azimuth, imu_quaternions):
     cart_gazes_in_scene = spherical_to_cartesian_scene(gaze_elevation, gaze_azimuth)
-    return transform_scene_to_world(cart_gazes_in_scene, imu_quaternions)
-```
-
-## 2D Gaze to World Coordinates
-
-If you are starting from [the 2D gaze values in scene image coordinates (i.e., `gaze x/y [px]`)](https://docs.pupil-labs.com/neon/data-collection/data-format/#gaze-csv), then you will need to first [undistort](https://docs.pupil-labs.com/alpha-lab/undistort/) and unproject those points to obtain the corresponding 3D gaze vectors. Note that this requires [loading the scene camera calibration data](https://docs-staging.pupil-labs.com/alpha-lab/undistort/#reading-from-the-cloud-download-json-file).
-
-```python
-def unproject_2d_gaze(gaze_points_2d, scene_camera_matrix, scene_distortion_coefficients):
-    """
-    Transform the 2D gaze values from Neon to 3D gaze vectors.
-    """
-
-    gaze_points_2d_undist = cv2.undistortPoints(gaze_points_2d, scene_camera_matrix, scene_distortion_coefficients)
-    gaze_vectors_3d = cv2.convertPointsToHomogeneous(gaze_points_2d_undist)
-    gaze_vectors_3d /= np.linalg.norm(gaze_vectors_3d, axis=2)[:, np.newaxis]
-
-    return gaze_vectors_3d
-```
-
-Then, we can use the functions from [the previous section](#3d-gaze-to-world-coordinates) to convert 2D gaze to 3D world coordinates:
-
-```python
-# The gaze data and the IMU quaternions should be sampled at the
-# same timestamps. You can linearly interpolate the IMU data to
-# ensure this.
-# See the Application Example:
-# http://docs.pupil-labs.com/alpha-lab/imu-transformations/#application-example
-
-gaze_points_2d = gaze[["gaze x [px]", "gaze y [px]"]].to_numpy()
-def gaze_2d_to_world(gaze_points_2d, scene_camera_matrix, scene_distortion_coefficients, imu_quaternions):
-    cart_gazes_in_scene = undistort_and_unproject(
-        gaze_points_2d, scene_camera_matrix, scene_distortion_coefficients
-    )
-    return transform_scene_to_world(
-        cart_gazes_in_scene, imu_quaternions
-    )
-```
-
-## World Spherical Coordinates
-
-When studying head orientation and gaze orientation as observers navigate a 3D environment, it can be useful to know how much these quantities deviate from pointing at a given landmark or direction. For instance, you might want to know when someone’s gaze or heading deviates from parallel with the horizon. This can be simplified by converting world points from Cartesian to spherical coordinates. The [Euler angles from the IMU](https://docs.pupil-labs.com/neon/data-collection/data-streams/#euler-angles) are already in a compatible format. For gaze data in world coordinates, the `cartesian_to_spherical_world` function below will do the necessary transformation. When wearing Neon normally, an elevation and azimuth of 0 degrees corresponds to a neutral orientation: i.e., aimed at magnetic North and parallel to the horizon.
-
-```python
-def cartesian_to_spherical_world(world_points_3d):
-    """
-    Convert points in 3D Cartesian world coordinates to spherical coordinates.
-
-    For elevation:
-        - Neutral orientation = 0 (i.e., parallel with horizon)
-        - Upwards is positive
-        - Downwards is negative
-
-    For azimuth:
-        - Neutral orientation = 0 (i.e., aligned with magnetic North)
-        - Leftwards is positive
-        - Rightwards is negative
-    """
-
-    x = world_points_3d[:, 0]
-    y = world_points_3d[:, 1]
-    z = world_points_3d[:, 2]
-
-    radii = np.sqrt(x**2 + y**2 + z**2)
-
-    elevation = -(np.arccos(z / radii) - np.pi / 2)
-    azimuth = np.arctan2(y, x) - np.pi / 2
-
-    # Keep all azimuth values in the range of [-180, 180] to remain
-    # consistent with the yaw orientation values provided by the IMU.
-    azimuth[azimuth < -np.pi] += 2 * np.pi
-    azimuth[azimuth > np.pi] -= 2 * np.pi
-
-    elevation = np.rad2deg(elevation)
-    azimuth = np.rad2deg(azimuth)
-
-    return elevation, azimuth
+    return transform_scene_to_world(cart_gazes_in_scene, imu_quaternions, translation_in_imu=np.zeros(3))
 ```
 
 ## Application Example
 
-Below, we present a video showing how some of the functions in this article where used to visualize different combinations of head and eye movements in world coordinates. The code for producing the visualization [can be found here](https://gist.github.com/rennis250/8a684ea1e2f92c79fa2104b7a0f30e20).
+Below, we present a video showing how some of the functions in this article were used to visualize different combinations of head and eye movements in world coordinates. The code for producing the visualization [can be found here](https://gist.github.com/rennis250/8a684ea1e2f92c79fa2104b7a0f30e20).

From 548ae4d3e4ca9560e10c8e4fdc94f6037d3bb140 Mon Sep 17 00:00:00 2001
From: Robert Ennis
Date: Tue, 3 Sep 2024 11:50:25 +0200
Subject: [PATCH 2/4] remove tip about neon positioning and leftward pos/neg rotations

---
 neon/data-collection/data-streams/index.md | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/neon/data-collection/data-streams/index.md b/neon/data-collection/data-streams/index.md
index cb0945ae3..4c3312943 100644
--- a/neon/data-collection/data-streams/index.md
+++ b/neon/data-collection/data-streams/index.md
@@ -102,8 +102,4 @@ When exporting recordings from Pupil Cloud or Neon Player the IMU's orientation
 - Yaw is a rotation around the world z-axis with a value range of -180° to +180°. With a calibrated IMU, a yaw of 0° indicates alignment with magnetic north. Leftward turn is positive, rightward is negative.
 - Roll is a rotations around the world y-axis with a value range of -180° to +180°. Wearing Neon upright with a neutral head pose roughly corresponds to a roll of 0°. Rightward tilt is positive, leftward is negative.
 
-::: tip
-Neon may sit differently on each wearer’s face, causing deviations from a neutral orientation even when facing magnetic north. Also, leftward rotations for the yaw values reported by the IMU are positive, whereas leftward rotations about the y-axis of the scene camera coordinate system are negative.
-:::
-
 ![IMU Pitch, Yaw, Roll](./imu-pitch-yaw-roll-black.png)
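As a hedged cross-check of the conventions this patch touches: the pitch/yaw/roll values described above can also be recovered from the IMU quaternions used throughout this series. The sketch below is illustrative only. `euler_from_imu_quaternions` is a hypothetical helper, and the extrinsic `"xzy"` axis order is an assumption chosen to mirror the pitch (world x-axis), yaw (world z-axis), roll (world y-axis) description above; the exact decomposition used by the Pupil Cloud exporter may differ.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def euler_from_imu_quaternions(imu_quaternions):
    # Hypothetical helper: decompose scalar-last [x, y, z, w] quaternions
    # (the format used by transform_imu_to_world in PATCH 1/4) into
    # Euler angles in degrees.
    rotations = R.from_quat(np.asarray(imu_quaternions))
    # Extrinsic rotations about the world x, z, and y axes, as a stand-in
    # for pitch/yaw/roll. Verify the order and signs against real exports
    # before relying on this.
    pitch, yaw, roll = rotations.as_euler("xzy", degrees=True).T
    return pitch, yaw, roll
```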
From 42af1c1b51b669f9d7044349e07d0163ddc68fa3 Mon Sep 17 00:00:00 2001
From: Marc Tonsen
Date: Tue, 3 Sep 2024 13:25:00 +0200
Subject: [PATCH 4/4] Bring back the spherical world coordinates section

---
 alpha-lab/imu-transformations/index.md | 43 ++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/alpha-lab/imu-transformations/index.md b/alpha-lab/imu-transformations/index.md
index 9425e1854..0628c1986 100644
--- a/alpha-lab/imu-transformations/index.md
+++ b/alpha-lab/imu-transformations/index.md
@@ -204,6 +204,49 @@ def gaze_3d_to_world(gaze_elevation, gaze_azimuth, imu_quaternions):
     return transform_scene_to_world(cart_gazes_in_scene, imu_quaternions, translation_in_imu=np.zeros(3))
 ```
 
+## World Spherical Coordinates
+Using the transformations introduced above, we can transform various data into Cartesian world coordinates. For some purposes, however, it is more intuitive to have the data in spherical coordinates. For instance, you might want to know when someone’s gaze or heading deviates from parallel with the horizon, i.e. if they are looking/facing upwards or downwards.
+
+Converting data into spherical world coordinates makes this obvious. When wearing Neon, an elevation and azimuth of 0 degrees corresponds to a neutral orientation: i.e., aimed at magnetic North and parallel to the horizon. A positive elevation corresponds to looking upwards, and a negative elevation corresponds to looking downwards.
+
+The [Euler angles from the IMU](https://docs.pupil-labs.com/neon/data-collection/data-streams/#euler-angles) are already in a compatible format. For gaze data in world coordinates, the `cartesian_to_spherical_world` function below will do the necessary transformation.
+
+```python
+def cartesian_to_spherical_world(world_points_3d):
+    """
+    Convert points in 3D Cartesian world coordinates to spherical coordinates.
+
+    For elevation:
+        - Neutral orientation = 0 (i.e., parallel with horizon)
+        - Upwards is positive
+        - Downwards is negative
+
+    For azimuth:
+        - Neutral orientation = 0 (i.e., aligned with magnetic North)
+        - Leftwards is positive
+        - Rightwards is negative
+    """
+
+    x = world_points_3d[:, 0]
+    y = world_points_3d[:, 1]
+    z = world_points_3d[:, 2]
+
+    radii = np.sqrt(x**2 + y**2 + z**2)
+
+    elevation = -(np.arccos(z / radii) - np.pi / 2)
+    azimuth = np.arctan2(y, x) - np.pi / 2
+
+    # Keep all azimuth values in the range of [-180, 180] to remain
+    # consistent with the yaw orientation values provided by the IMU.
+    azimuth[azimuth < -np.pi] += 2 * np.pi
+    azimuth[azimuth > np.pi] -= 2 * np.pi
+
+    elevation = np.rad2deg(elevation)
+    azimuth = np.rad2deg(azimuth)
+
+    return elevation, azimuth
+```
+
 ## Application Example
 
 Below, we present a video showing how some of the functions in this article were used to visualize different combinations of head and eye movements in world coordinates. The code for producing the visualization [can be found here](https://gist.github.com/rennis250/8a684ea1e2f92c79fa2104b7a0f30e20).
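Taken together, the patches in this series cover the full pipeline from raw gaze and IMU samples to spherical world coordinates. The following sketch shows how the pieces could be combined end to end; it is an illustration under stated assumptions, not part of the patches themselves. The file and column names are assumptions based on Neon's CSV exports, and `interpolate_quaternions` is a hypothetical helper that uses component-wise linear interpolation (a slerp would be more principled) to address the timestamp-matching caveat raised in the eye-state warning.

```python
import numpy as np
import pandas as pd

# Assumed file and column names; adjust to your recording's export.
imu = pd.read_csv("imu.csv")
gaze = pd.read_csv("gaze.csv")

def interpolate_quaternions(imu_ts, quaternions, target_ts):
    # Hypothetical helper: component-wise linear interpolation of the
    # quaternions onto the gaze timestamps, followed by re-normalization.
    interpolated = np.stack(
        [np.interp(target_ts, imu_ts, quaternions[:, i]) for i in range(4)],
        axis=1,
    )
    return interpolated / np.linalg.norm(interpolated, axis=1, keepdims=True)

quaternions = interpolate_quaternions(
    imu["timestamp [ns]"].to_numpy(),
    imu[["quaternion x", "quaternion y", "quaternion z", "quaternion w"]].to_numpy(),
    gaze["timestamp [ns]"].to_numpy(),
)

# gaze_3d_to_world and cartesian_to_spherical_world are the functions
# defined in the article that PATCH 1/4 and PATCH 4/4 modify.
gazes_in_world = gaze_3d_to_world(
    gaze["elevation [deg]"].to_numpy(),
    gaze["azimuth [deg]"].to_numpy(),
    quaternions,
)
elevation, azimuth = cartesian_to_spherical_world(gazes_in_world)
```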