---
title: "Map Gaze into a Reference 3D Coordinate System"
description: "Map gaze, head pose, and observer position into a 3D coordinate system of your choice using our Tag Aligner tool."
permalink: /alpha-lab/tag-aligner/
meta:
  - name: twitter:card
    content: summary
  - name: twitter:image
    content: "https://i.ytimg.com/vi/nt_zNSBMJWI/maxresdefault.jpg"
  - name: twitter:player
    content: "https://www.youtube.com/embed/vEshPxgWs3E"
  - name: twitter:width
    content: "1280"
  - name: twitter:height
    content: "720"
  - property: og:image
    content: "https://i.ytimg.com/vi/nt_zNSBMJWI/maxresdefault.jpg"
tags: [Neon, Cloud]
---

<script setup>
import TagLinks from '@components/TagLinks.vue'
</script>

# Map Gaze into a Reference 3D Coordinate System

<TagLinks :tags="$frontmatter.tags" />

<Youtube src="FuzhGwN5t8U"/>

::: tip
Experience your participants' journey and gaze direction in a real-world environment, digitally reimagined, using Tag Aligner! Our new tool combines Neon eye tracking and head-pose data with a user-supplied third-party 3D model.
:::

## Real-world Positions and Rotations

![Comparison of the different ways to project Neon data into various coordinate systems](./coord-sys-comparisons.png)

It's often important to know the position of an observer in an environment, and how they orient their head and gaze while navigating it. For example, you might want to know when museum-goers pause in thought while viewing artworks, or how engineers move about and gaze at their surroundings during mission-critical tasks.

With a digital twin of a real-world environment, you can visualize an observer's trajectory, and how they direct their gaze spatially onto objects within the scene, by mapping Neon's gaze and head pose into the digital twin.

In this guide, we'll show you how to do this using data from Neon + Reference Image Mapper (RIM) and the digital twin of your choice.

## Transforming Poses from RIM Data

The first step is to transform camera poses from the RIM 3D model into the digital twin's coordinate system. For context, our RIM enrichment uses 3D features of a scene to map gaze onto a reference image regardless of the subject's position or orientation. Under the hood, RIM builds a sparse 3D model of the environment and calculates camera poses relative to the 3D features of the scene. However, the origin, scale, and orientation of the coordinate system for these camera poses are arbitrary, so the poses cannot be used directly for real-world metrics or visualizations.

![Depiction of the sparse 3D model produced by our Reference Image Mapper](./rim_3d_model.png)

The white dots in this image (a statue of [Theodor Koch-Grünberg](https://en.wikipedia.org/wiki/Theodor_Koch-Grunberg)) represent key points of the sparse 3D model built from a RIM enrichment's scanning recording. RIM uses this model to calculate scene camera positions in an arbitrary coordinate system.

Rather usefully, we found that a stationary AprilTag marker with a known size, position, and rotation, placed in a RIM-enriched recording, can be used to align the camera poses to a useful coordinate system, such as that of a digital twin or real-world measurements. We implemented this in a transformation function and put it into a package called "Tag Aligner".
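
The idea can be sketched with a little linear algebra. The snippet below is an illustrative outline, not Tag Aligner's actual code — the function and variable names are our own. Given the tag's pose as recovered in RIM's arbitrary frame, and its known pose and physical size in the target frame, a similarity transform (scale, rotation, translation) follows directly:

```python
# Hypothetical sketch: derive the similarity transform that maps RIM's
# arbitrary coordinates into a chosen reference frame from one AprilTag.
import numpy as np

def alignment_from_tag(R_tag_rim, t_tag_rim, tag_size_rim,
                       R_tag_ref, t_tag_ref, tag_size_ref):
    """Return (scale, R, t) such that p_ref = scale * R @ p_rim + t.

    R_tag_*: (3, 3) tag rotation; t_tag_*: (3,) tag position;
    tag_size_*: the tag's edge length in each frame's units.
    """
    scale = tag_size_ref / tag_size_rim   # fixes RIM's arbitrary scale
    R = R_tag_ref @ R_tag_rim.T           # rotates RIM frame onto the reference frame
    t = t_tag_ref - scale * R @ t_tag_rim # moves the tag to its known position
    return scale, R, t
```

By construction, the tag's RIM-frame position maps exactly onto its known reference-frame position, and its orientation likewise.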

After calculating the transformation from the RIM enrichment's arbitrary coordinate system to your desired coordinate system, you can apply it to all of the poses from every recording in that enrichment.
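
Applying such a similarity transform to a whole recording is then a matter of mapping every camera pose. Again, this is a hedged sketch with hypothetical names, assuming positions and rotation matrices are stored as NumPy arrays:

```python
# Illustrative sketch, not Tag Aligner's actual code: apply a previously
# computed similarity transform (scale, R, t) to every camera pose.
import numpy as np

def apply_alignment(positions, rotations, scale, R, t):
    """Map camera poses into the target frame.

    positions: (N, 3) camera centers; rotations: (N, 3, 3) orientations.
    Each point transforms as p' = scale * R @ p + t.
    """
    aligned_positions = scale * positions @ R.T + t
    aligned_rotations = R @ rotations  # matmul broadcasts over the N poses
    return aligned_positions, aligned_rotations
```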

These aligned poses can be used for analysis or to visualize observer motion and gaze within a digital twin. The Tag Aligner package includes a simple 3D viewer that can load a glTF model and lets you explore the scene while playing back the recording. A set of Neon frames and the gaze ray are also added to the scene for visualization.

## Steps to Recreate

1. AprilTags are the key to Tag Aligner, hence the name, so make sure you have one printed and at the ready! You will want a tag from the "tag36h11" family, printed at a good, visible size. We have already prepared [a PDF of them](https://github.com/pupil-labs/pupil-helpers/blob/master/markers_stickersheet/tag36h11_full.pdf?raw=True) for you. Note that you need to include a white border around the printed AprilTag.
2. Grab a copy of [Tag Aligner](https://github.com/pupil-labs/tag-aligner) and follow the instructions in the README.
3. If you have a glTF model of your environment and want to visualize the aligned poses and gaze, be sure to check out the "Bonus" section of the Tag Aligner repo, where we offer a real-time visualization, a Blender plugin, and a Python notebook with some basic analysis.

## Working with Aligned Poses

You have now expanded the analysis possibilities of Neon + RIM to the third dimension!

After running the Tag Aligner tool, you will find a file called "aligned_poses.csv" in the recording folder, containing the scaled and aligned poses of the scene camera over time.

If you ran the bonus section, you will also have an interactive pop-up window that renders a glTF model of your environment, which you can use to visualize the aligned poses and gaze.

Finally, you can analyze the results further to gain other insights. For example, you might want to plot an overhead view of the wearer's trajectory. To get you started, we plotted the "translation_z" column against the "translation_x" column - check out the result below.
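
An overhead plot like this takes only a few lines. The sketch below is illustrative rather than the exact code from our notebook: it assumes "aligned_poses.csv" contains "translation_x" and "translation_z" columns as described above, and the helper name and output path are our own choices.

```python
# Illustrative sketch: plot an overhead (x-z) view of the wearer's
# trajectory from a Tag Aligner output CSV.
import csv
import matplotlib
matplotlib.use("Agg")  # headless backend, so no display is required
import matplotlib.pyplot as plt

def plot_overhead(csv_path, out_path="overhead_view.png"):
    xs, zs = [], []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            xs.append(float(row["translation_x"]))
            zs.append(float(row["translation_z"]))
    fig, ax = plt.subplots()
    ax.plot(xs, zs)
    ax.set_xlabel("translation_x")
    ax.set_ylabel("translation_z")
    ax.set_aspect("equal")  # keep real-world proportions
    fig.savefig(out_path)
    return xs, zs
```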

![Overhead projection of observer trajectory and gaze mapped onto statue scene](./observer_position.png)

## Related Content

Be sure to check out our Alpha Lab article about how to [Map Gaze Onto a 3D Model of an Environment](https://docs.pupil-labs.com/alpha-lab/nerfs/) using Neural Radiance Fields.

::: tip
Need assistance with aligning your AprilTags or applying the transformations to your RIM recordings? Or do you have something more custom in mind? Reach out to us via email at [[email protected]](mailto:[email protected]), on our [Discord server](https://pupil-labs.com/chat/), or visit our [Support Page](https://pupil-labs.com/products/support/) for dedicated support options.
:::