diff --git a/README.md b/README.md
index c0e605e47..f45fcfdb5 100644
--- a/README.md
+++ b/README.md
@@ -1,18 +1,20 @@
# Recommended Defaults
+
- When importing images, use markdown-style imports
+
```markdown
![alt text](/path/to/image.png)
```
+
- When trying to create a gallery view, consider using the `PhotoGrid` component
- Avoid using custom HTML and CSS as much as possible.
-
# Capitalization
+
- We use title case for all titles, headings, menu sections, etc., as defined by the Associated Press Stylebook. There is a converter available [here](https://titlecaseconverter.com/) to double-check correctness.
- Fixed product and feature names are capitalized, e.g.
- - Reference Image Mapper
- - Neon Companion App
- - Neon Companion Device
- - Pupil Invisible Glasses
- - Video Renderer
- - Heatmap Visualization
+ - Reference Image Mapper
+ - Neon Companion
+ - Pupil Invisible Glasses
+ - Video Renderer
+ - Heatmap Visualization
diff --git a/alpha-lab/imu-transformations/pl-imu-transformations b/alpha-lab/imu-transformations/pl-imu-transformations
new file mode 160000
index 000000000..542043d8a
--- /dev/null
+++ b/alpha-lab/imu-transformations/pl-imu-transformations
@@ -0,0 +1 @@
+Subproject commit 542043d8a4500ae908af6abe9846266a76bd1eeb
diff --git a/alpha-lab/neon-with-capture/index.md b/alpha-lab/neon-with-capture/index.md
index d6eabb55d..fc7baaafe 100644
--- a/alpha-lab/neon-with-capture/index.md
+++ b/alpha-lab/neon-with-capture/index.md
@@ -1,6 +1,6 @@
# Use Neon with Pupil Capture
-It's possible to make recordings using Neon in Pupil Capture under MacOS and Linux. You can use the Neon module as if it was a Pupil Core headset by plugging it into your Computer and running Pupil Capture from [source](https://github.com/pupil-labs/pupil/tree/master).
+It's possible to make recordings using Neon in Pupil Capture on macOS and Linux. You can use the Neon module as if it were a Pupil Core headset by plugging it into your computer and running Pupil Capture from [source](https://github.com/pupil-labs/pupil/tree/master).
Note that in this case gaze estimation will **NOT** be done using **NeonNet**, but using Pupil Core's gaze estimation pipeline. This means you will have to do a calibration, and gaze estimation will be less robust than with NeonNet.
@@ -11,10 +11,10 @@ To get started with Neon in Capture, follow these steps:
1. Connect **Neon** to your computer.
2. Open **Pupil Capture**.
3. Under Video Source, click **"Activate Device"** and choose **Neon** to activate the scene and eye cameras.
-4. In the eye camera's window, you can adjust the absolute exposure time and gain. Keep in mind that the Neon eye cameras do not have individual controls, so changes will affect both cameras.
-Additionally, you may want to switch to ROI mode in the general settings to define a specific area for pupil detection. For more guidance, check out this [video](https://drive.google.com/file/d/1tr1KQ7QFmFUZQjN9aYtSzpMcaybRnuqi/view?usp=sharing).
+4. In the eye camera's window, you can adjust the absolute exposure time and gain. Keep in mind that the Neon eye cameras do not have individual controls, so changes will affect both cameras.
+ Additionally, you may want to switch to ROI mode in the general settings to define a specific area for pupil detection. For more guidance, check out this [video](https://drive.google.com/file/d/1tr1KQ7QFmFUZQjN9aYtSzpMcaybRnuqi/view?usp=sharing).
5. Select either `Neon 3D` or `2D` as the **Gaze Mapping** option in the **Calibration** tab.
::: warning
Recordings made with the Neon Companion app (rather than Pupil Capture) are **NOT** compatible with Pupil Player.
-:::
\ No newline at end of file
+:::
diff --git a/invisible/data-collection/offset-correction/index.md b/invisible/data-collection/offset-correction/index.md
index 1e0ff52c7..079773957 100644
--- a/invisible/data-collection/offset-correction/index.md
+++ b/invisible/data-collection/offset-correction/index.md
@@ -1,7 +1,8 @@
# Using Offset Correction to Improve Gaze Accuracy
-For some wearers, you may find a constant offset in their gaze estimates. Especially in setups where wearers gaze at somethings that is less than 1 meter away from them this can happen due to parallax error.
-To compensate for those offsets, you can use the Offset Correction feature in the Pupil Invisible Companion App. See the video below to learn how it works! The video is using demonstrating the feature within the Neon Companion App, but the approach is very similar.
+For some wearers, you may find a constant offset in their gaze estimates. This can happen due to parallax error, especially in setups where wearers gaze at something that is less than 1 meter away from them.
+
+To compensate for those offsets, you can use the Offset Correction feature in the Pupil Invisible Companion app. See the video below to learn how it works! The video demonstrates the feature within the Neon Companion app, but the approach is very similar.
@@ -9,4 +10,4 @@ To compensate for those offsets, you can use the Offset Correction feature in th
Note that the amount of offset introduced by parallax error is highly dependent on the distance between the wearer and the object they are looking at. The closer the object, the larger the offset.
The offset correction is only valid for the specific distance at which it was recorded. If the wearer changes their distance to the object, you will need to record a new offset correction.
-:::
\ No newline at end of file
+:::
diff --git a/neon/.vitepress/config.mts b/neon/.vitepress/config.mts
index 1c9f38d17..dc74e1bf4 100644
--- a/neon/.vitepress/config.mts
+++ b/neon/.vitepress/config.mts
@@ -106,6 +106,10 @@ let theme_config_additions = {
text: "Measuring the IED",
link: "/data-collection/measuring-ied/",
},
+ {
+ text: "Gaze Mode",
+ link: "/data-collection/gaze-mode/",
+ },
{
text: "Scene Camera Exposure",
link: "/data-collection/scene-camera-exposure/",
diff --git a/neon/data-collection/data-format/index.md b/neon/data-collection/data-format/index.md
index 7fe76fdfb..e1aceaf61 100644
--- a/neon/data-collection/data-format/index.md
+++ b/neon/data-collection/data-format/index.md
@@ -34,6 +34,7 @@ This file contains meta-information on the recording.
| **start_time** | Timestamp of when the recording was started. Given as UTC timestamp in nanoseconds. |
| **template_data** | Data regarding the selected template for the recording as well as the response values. |
| **wearer_id** | Unique identifier of the wearer selected for this recording. |
+| **gaze_mode** | Indicates whether the binocular or a monocular (right/left) pipeline was used to infer gaze. |
| **wearer_name** | Name of the wearer selected for this recording. |
| **workspace_id** | The ID of the Pupil Cloud workspace this recording has been assigned to. |
diff --git a/neon/data-collection/first-recording/index.md b/neon/data-collection/first-recording/index.md
index 2b81450d4..108c71966 100644
--- a/neon/data-collection/first-recording/index.md
+++ b/neon/data-collection/first-recording/index.md
@@ -27,7 +27,7 @@ Next, install the Neon Companion app on your device:
-![Neon Companion App](/ne-companion_app_logo-bg.png)
+![Neon Companion app](/ne-companion_app_logo-bg.png)
diff --git a/neon/data-collection/gaze-mode/index.md b/neon/data-collection/gaze-mode/index.md
new file mode 100644
index 000000000..32b9acb29
--- /dev/null
+++ b/neon/data-collection/gaze-mode/index.md
@@ -0,0 +1,25 @@
+# Gaze Mode
+You can configure Neon to generate binocular or monocular gaze data by changing the `Gaze Mode` in the Neon Companion app
+settings.
+
+In `Binocular` mode, gaze data is generated using images from both the left and right eyes. This is the default setting
+and is recommended for most users.
+
+Some specialist applications, like ophthalmic testing, require gaze data to be generated from just one eye. This can
+be achieved by switching to a `Monocular` gaze mode. `Monocular Left` generates gaze data using only images of the left
+eye, while `Monocular Right` uses only images of the right eye.
+
+## Changing Gaze Modes
+You can switch between gaze modes in the Neon Companion app settings.
+
+:::info
+After selecting a new gaze mode, be sure to unplug and re-plug the Neon device.
+:::
+
+## Considerations When Switching to Monocular Gaze
+
+- If a monocular gaze mode is selected, no binocular gaze signal will be generated. This means all downstream data, including fixations and enrichment data, will be based on monocular gaze data.
+
+- [Eye State](/data-collection/data-streams/#_3d-eye-states) and [Pupillometry](/data-collection/data-streams/#pupil-diameters) are unaffected by the gaze mode configuration and will always be generated using images from **both** eyes.
+
+- If a monocular gaze mode is selected, Pupil Cloud will **not** re-process a recording to obtain a 200 Hz signal. Instead, Pupil Cloud will use the real-time signal, which may be lower than 200 Hz depending on which Companion device was used and which gaze rate was selected in the Neon Companion app settings.
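+
+Downstream analysis scripts can check which pipeline produced a recording's gaze data by reading the `gaze_mode` field in the recording's `info.json` (see the [Recording Format](/data-collection/data-format/) documentation). Below is a minimal sketch; the folder path and the literal strings stored in `gaze_mode` are assumptions to adapt to your data:
+
+```python
+import json
+from pathlib import Path
+
+# Hypothetical path to an exported recording folder
+recording = Path("~/recordings/my_recording").expanduser()
+info = json.loads((recording / "info.json").read_text())
+
+# Branch the analysis on the pipeline that was used
+gaze_mode = info.get("gaze_mode")
+if gaze_mode != "binocular":  # literal value is an assumption
+    print(f"Gaze mode {gaze_mode!r}: gaze and downstream data are monocular.")
+```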
diff --git a/neon/data-collection/index.md b/neon/data-collection/index.md
index 884f07589..5ead192df 100644
--- a/neon/data-collection/index.md
+++ b/neon/data-collection/index.md
@@ -1,10 +1,11 @@
# Data Collection with Neon
+
Data collection is a key step for any application of Neon! In this section you can learn everything about it, starting with how to make [your first recording](/data-collection/first-recording/)!
You can find an overview of what data is contained in Neon recordings including all the [data streams](/data-collection/data-streams/) from the various sensors, as well as any additional data like [events](/data-collection/events/), [wearers](/data-collection/wearers/), and [templates](/data-collection/templates/).
-You can find introductions on how to use the Neon Companion App, e.g. on [Offset Correction](/data-collection/offset-correction/), and how-to guides on common tasks during data collection, e.g. [Time Synchronization](/data-collection/time-synchronization/).
+You can find introductions on how to use the Neon Companion app, e.g. on [Offset Correction](/data-collection/offset-correction/), and how-to guides on common tasks during data collection, e.g. [Time Synchronization](/data-collection/time-synchronization/).
Documentation on useful software and integrations for data collection is also available, see e.g. [Monitor App](/data-collection/monitor-app/) or [Lab Streaming Layer](/data-collection/lab-streaming-layer/).
-Finally, you can find a list of [troubleshooting](/data-collection/troubleshooting/) tips and tricks for common issues during data collection.
\ No newline at end of file
+Finally, you can find a list of [troubleshooting](/data-collection/troubleshooting/) tips and tricks for common issues during data collection.
diff --git a/neon/data-collection/lab-streaming-layer/index.md b/neon/data-collection/lab-streaming-layer/index.md
index e7696970b..ee2e97f96 100644
--- a/neon/data-collection/lab-streaming-layer/index.md
+++ b/neon/data-collection/lab-streaming-layer/index.md
@@ -2,7 +2,7 @@
[Lab Streaming Layer](https://labstreaminglayer.org/) (LSL) is an open-source framework that connects, manages, and synchronizes data streams from multiple sources, such as EEG, GSR, and motion capture systems. Check out the [LSL documentation](https://labstreaminglayer.readthedocs.io/info/intro.html) for a full overview of supported devices.
-The Neon Companion App has built-in support for LSL, streaming Neon’s real-time generated data over the LSL network. This allows you to easily synchronize Neon with other LSL-supported devices.
+The Neon Companion app has built-in support for LSL, streaming Neon’s real-time generated data over the LSL network. This allows you to easily synchronize Neon with other LSL-supported devices.
## **Usage**
@@ -10,7 +10,7 @@ LSL streaming can be initiated in the Companion App by enabling the "Stream over
When enabled, data will be streamed over the LSL network and, subsequently, to any listening LSL inlet (such as the LSL LabRecorder App, or another third-party system with inlet functionality). Like the [Real-Time API](https://docs.pupil-labs.com/neon/real-time-api/tutorials/), the Companion App does not need to be actively recording, but streaming LSL data while simultaneously making a recording is supported.
-Note that you'll need to ensure the Neon Companion App is connected to the same network as the other devices streaming via LSL.
+Note that you'll need to ensure the Neon Companion app is connected to the same network as the other devices streaming via LSL.
## **LSL Outlets**
@@ -40,11 +40,11 @@ If your devices are on the same network but you have trouble connecting, it is l
- UDP broadcasts to port `16571` and/or
- UDP multicast to port `16571` at
- - `FF02:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2`
- - `FF05:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2`
- - `FF08113D:6FDD:2C17:A643:FFE2:1BD1:3CD2`
- - `FF0E:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2`
- - `224.0.0.1`, `224.0.0.183`, `239.255.172.215`
+ - `FF02:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2`
+ - `FF05:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2`
+ - `FF08:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2`
+ - `FF0E:113D:6FDD:2C17:A643:FFE2:1BD1:3CD2`
+ - `224.0.0.1`, `224.0.0.183`, `239.255.172.215`
- TCP and UDP connections to the ports `16572`-`16604`
-More troubleshooting tips can be found in the [Network Troubleshooting](https://labstreaminglayer.readthedocs.io/info/network-connectivity.html) page in LSL’s documentation.
\ No newline at end of file
+More troubleshooting tips can be found in the [Network Troubleshooting](https://labstreaminglayer.readthedocs.io/info/network-connectivity.html) page in LSL’s documentation.
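+
+As a quick end-to-end check that Neon's outlets are reachable, you can pull a few samples with [pylsl](https://github.com/labstreaminglayer/pylsl). This is a minimal sketch; the stream type `"Gaze"` used for discovery is an assumption, so verify the advertised outlet names and types in a tool like LabRecorder:
+
+```python
+from pylsl import StreamInlet, resolve_byprop
+
+# Discover a gaze outlet on the local network (type "Gaze" is an assumption)
+streams = resolve_byprop("type", "Gaze", timeout=10)
+if not streams:
+    raise SystemExit("No gaze outlet found - is 'Stream over LSL' enabled?")
+
+# Pull one sample from the first matching outlet
+inlet = StreamInlet(streams[0])
+sample, timestamp = inlet.pull_sample()
+print(f"LSL time {timestamp}: {sample}")
+```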
diff --git a/neon/data-collection/measuring-ied/index.md b/neon/data-collection/measuring-ied/index.md
index 9b1ab5953..00c2c6db6 100644
--- a/neon/data-collection/measuring-ied/index.md
+++ b/neon/data-collection/measuring-ied/index.md
@@ -1,15 +1,16 @@
# Measuring the Inter-Eye-Distance (IED)
-The wearer's IED can be set in the Neon Companion app for applications requiring
-precise pupillometry or eye-state measurements.
-This does not require prior calibration of the device. However, the accuracy of Neon’s 3D eye-state and pupil-size
+The wearer's IED can be set in the Neon Companion app for applications requiring
+precise pupillometry or eye-state measurements.
+This does not require prior calibration of the device. However, the accuracy of Neon’s 3D eye-state and pupil-size
measurements can be enhanced by correctly setting the IED for each wearer.
-To add the wearer's IED, input the value into their ‘Wearer Profile’ in the Neon Companion app before
-starting
+To add the wearer's IED, input the value into their ‘Wearer Profile’ in the Neon Companion app before
+starting
a recording. The default IED is set to 63 mm, which is the average for adults.
Here's a simple way to measure IED:
+
1. Ask the person to look at a distant object (to avoid convergence)
2. Hold a ruler in front of their eyes and measure from the center of one pupil to the center of the
-other
\ No newline at end of file
+ other
diff --git a/neon/data-collection/offset-correction/index.md b/neon/data-collection/offset-correction/index.md
index 18e1883f2..3c41d8ffe 100644
--- a/neon/data-collection/offset-correction/index.md
+++ b/neon/data-collection/offset-correction/index.md
@@ -1,5 +1,6 @@
# Using Offset Correction to Improve Gaze Accuracy
-For some wearers, you may find a constant offset in their gaze estimates. To compensate for those, you can use the Offset Correction feature in the Neon Companion App. See the video below to learn how it works!
+
+For some wearers, you may find a constant offset in their gaze estimates. To compensate for these offsets, you can use the Offset Correction feature in the Neon Companion app. See the video below to learn how it works!
diff --git a/neon/data-collection/psychopy/index.md b/neon/data-collection/psychopy/index.md
index b550d37ef..1538e260c 100644
--- a/neon/data-collection/psychopy/index.md
+++ b/neon/data-collection/psychopy/index.md
@@ -8,11 +8,12 @@ We have created a dedicated plugin for PsychoPy that enables Neon to be used in
- [Coder](https://psychopy.org/coder/index.html) – Gives users the option to generate experiments or do other things programmatically, [using PsychoPy like any other Python package](https://psychopy.org/api/).
## Using PsychoPy with Neon
+
When using PsychoPy with Neon, you can save eyetracking data in PsychoPy's hdf5 format, by enabling the "Save hdf5 file" option within the experiment settings. But we also recommend recording in the Neon Companion app for the duration of the experiment for data redundancy. PsychoPy’s standard "Eyetracker Record" component can be used to start and stop recordings on the Companion Device accordingly. If desired, custom timestamped events can be triggered from PsychoPy and saved in the Neon recording.
-* For experiments that only require pupillometry/eye state, make sure the "Compute Eye State" setting is enabled in the companion app. For experiments that do not require screen-based gaze coordinates, this is all that is required.
+- For experiments that only require pupillometry/eye state, make sure the "Compute Eye State" setting is enabled in the Neon Companion app. For experiments that do not require screen-based gaze coordinates, this is all that is required.
-* To use Neon for screen-based work in PsychoPy, the screen needs to be robustly located within the scene camera’s field of view, and Neon’s gaze data subsequently transformed from scene camera-based coordinates to screen-based coordinates. The plugin for PsychoPy achieves this with the use of AprilTag Markers and the [real-time-screen-gaze](https://github.com/pupil-labs/real-time-screen-gaze) Python package (installed automatically with the plugin).
+- To use Neon for screen-based work in PsychoPy, the screen needs to be robustly located within the scene camera’s field of view, and Neon’s gaze data subsequently transformed from scene camera-based coordinates to screen-based coordinates. The plugin for PsychoPy achieves this with the use of AprilTag Markers and the [real-time-screen-gaze](https://github.com/pupil-labs/real-time-screen-gaze) Python package (installed automatically with the plugin).
## Builder
@@ -38,7 +39,7 @@ Three new Builder components will be available in the components list under the
- April Tag Markers: for screen-based work, you will need to render AprilTag markers on your display. These components make it easy to do so. We recommend at least four markers, but more markers will improve gaze mapping.
- **April Tag Frame**: this component is recommended for most users. Using it in your Builder experiment will display an array of AprilTag markers around the edge of the screen. You can configure the number of markers to display along the horizontal and vertical edges of the screen, the size and contrast of the markers, and (optionally) the marker IDs. A minimum of four markers (2 horizontally by 2 vertically) is recommended, but more markers will provide more robust detection and accurate mapping. Marker IDs are automatically chosen but can be manually specified if needed.
- ![AprilTag Frame](./apriltag-frame.png)
+ ![AprilTag Frame](./apriltag-frame.png)
- **April Tag**: this component will add a single AprilTag marker to your display. It is intended for use when the April Tag Frame component cannot be used (e.g., you need to display stimuli on the edges of the display where the April Tag Frame component would place markers in the way). Using this component will give you control over the size and position of each marker. You will need to ensure that a unique marker ID is assigned to each AprilTag marker.
@@ -51,11 +52,12 @@ Three new Builder components will be available in the components list under the
[PsychoPy saves eyetracking data in its own format](https://psychopy.org/hardware/eyeTracking.html#what-about-the-data). Screen gaze data will be saved as `MonocularEyeSampleEvent` records (even when using the binocular gaze mode). Eye state data, if enabled, will appear in `BinocularEyeSampleEvent` records.
For eye state data in `BinocularEyeSampleEvent` records (a reading sketch follows the list below):
+
- For eye state records
- - `[left|right]_gaze_[x|y|z]` will be the optical axis vectors
- - `[left|right]_eye_cam_[x|y|z]` will be eye positions
- - `[left|right]_pupil_measure1` will be pupil diameters in mm
- - `[left|right]_pupil_measure1_type` will be `77`
+ - `[left|right]_gaze_[x|y|z]` will be the optical axis vectors
+ - `[left|right]_eye_cam_[x|y|z]` will be eye positions
+ - `[left|right]_pupil_measure1` will be pupil diameters in mm
+ - `[left|right]_pupil_measure1_type` will be `77`
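+
+As a sketch of how these records could be read back, assuming `h5py` and ioHub's usual group layout (which may differ between PsychoPy versions):
+
+```python
+import h5py
+
+# Hypothetical file name; PsychoPy writes one hdf5 file per session
+with h5py.File("session_data.hdf5", "r") as f:
+    # ioHub typically stores samples under this group (assumption)
+    samples = f["data_collection/events/eyetracker/BinocularEyeSampleEvent"][:]
+    left_pupil_mm = samples["left_pupil_measure1"]    # pupil diameter in mm
+    right_pupil_mm = samples["right_pupil_measure1"]  # pupil diameter in mm
+```
+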
### Example Builder Experiment
@@ -66,6 +68,7 @@ Check out our simple but complete [gaze contingent demo designed in PsychoPy Bui
To use Neon with PsychoPy coder, you'll need to configure ioHub, add AprilTag markers to the screen, and register the screen surface with the eyetracker. The example below shows how to collect realtime gaze position and pupil diameter in PsychoPy Coder.
### Example Coder Experiment
+
```python
from psychopy import visual, event
from psychopy.core import getTime
diff --git a/neon/data-collection/recordings/index.md b/neon/data-collection/recordings/index.md
index 1ab2ff740..3573406bc 100644
--- a/neon/data-collection/recordings/index.md
+++ b/neon/data-collection/recordings/index.md
@@ -1,4 +1,5 @@
# Recordings
+
A recording starts and stops when you press the record button in the Neon Companion app. While this should feel similar to recording a regular video on your phone, there is a lot more happening behind the scenes. When you are recording with the Neon Companion app, you are capturing not only video data but several more sensors (see [Data Streams](/data-collection/data-streams/)).
Recordings are designed to be as robust as possible. If at any point the Neon module is temporarily disconnected from the Companion phone, it will automatically start capturing again as soon as it is reconnected. You could start a recording with no Neon connected and plug it in at a later time. As soon as it is connected, data will be captured.
@@ -6,10 +7,11 @@ Recordings are designed to be as robust as possible. If at any point the Neon mo
The Neon Companion app has several more features to ensure robust data collection and will e.g. warn you in case the Companion device's battery is running low or if you run out of storage space.
## Events, Wearers, & Templates
+
When making a recording you can capture various additional data to record things like metadata or key events that happened during data collection. These will be saved as part of the recording itself.
[**Events**](/data-collection/events/) are key points in time in a recording that have been marked.
[**Wearers**](/data-collection/wearers/) are the people who wear the Neon device while recording.
-[**Templates**](/data-collection/templates/) are questionnaires that can be filled out at recording time.
\ No newline at end of file
+[**Templates**](/data-collection/templates/) are questionnaires that can be filled out at recording time.
diff --git a/neon/data-collection/scene-camera-exposure/index.md b/neon/data-collection/scene-camera-exposure/index.md
index a2276e518..95989e2e4 100644
--- a/neon/data-collection/scene-camera-exposure/index.md
+++ b/neon/data-collection/scene-camera-exposure/index.md
@@ -1,30 +1,34 @@
# Scene Camera Exposure
-The [scene camera’s](https://docs.pupil-labs.com/neon/data-collection/data-streams/#scene-video) exposure can be adjusted to improve image quality in different lighting conditions. There are four modes:
+
+The [scene camera’s](/data-collection/data-streams/#scene-video) exposure can be adjusted to improve image quality in different lighting conditions. There are four modes:
- **Manual:** This mode lets you set the exposure time manually.
- **Automatic**: `Highlights`, `Balanced`, and `Shadows` automatically adjust exposure according to the surrounding lighting.
-::: tip
-The mode you choose should depend on the lighting conditions in your environment. The images below provide some
+::: tip
+The mode you choose should depend on the lighting conditions in your environment. The images below provide some
examples and important considerations.
:::
## Changing Exposure Modes
-From the home screen of the Neon Companion app, tap
-the [Scene and Eye Camera preview](https://docs.pupil-labs.com/neon/data-collection/first-recording/#_4-open-the-live-preview),
+
+From the home screen of the Neon Companion app, tap
+the [Scene and Eye Camera preview](/data-collection/first-recording/#_4-open-the-live-preview),
and then select `Balanced` to reveal all four modes.
## Manual Exposure Mode
-Allows you to set the exposure time between 1 ms and 1000 ms.
+
+This mode allows you to set the exposure time between 1 ms and 1000 ms.
::: tip
Exposure duration is inversely related to camera frame rate. Exposure values above 330 ms will reduce the scene camera rate below 30fps.
:::
## Automatic Exposure Modes
+
`Highlights`- optimizes the exposure to capture bright areas in the environment, while potentially underexposing dark areas.
![This mode optimizes the exposure to capture bright areas in the environment, while potentially underexposing dark areas.](Highlight.webp)
-
+
`Balanced` - optimizes the exposure to capture brighter and darker areas equally.
![This mode optimizes the exposure to capture brighter and darker areas in the environment equally.](./Balance.webp)
diff --git a/neon/data-collection/time-synchronization/index.md b/neon/data-collection/time-synchronization/index.md
index e035c1243..dcce7c5bf 100644
--- a/neon/data-collection/time-synchronization/index.md
+++ b/neon/data-collection/time-synchronization/index.md
@@ -1,12 +1,13 @@
# Achieve Super-Precise Time Sync
-For some applications, it is critical to accurately synchronize your Neon with another clock. That could be from a second Neon device, an external sensor, or a computer you use for stimulus presentation.
+
+For some applications, it is critical to accurately synchronize your Neon with another clock. That could be the clock of a second Neon device, an external sensor, or a computer you use for stimulus presentation.
Neon provides UTC timestamps for all the data it generates, which makes it easy to sync the data to anything. Those timestamps are generated using the clock of the Companion device. However, digital clocks can suffer from drift, meaning that they sometimes run slightly too fast or slow. Over time this error accumulates and can lead to errors when comparing two clocks.
Therefore digital clocks regularly readjust themselves by syncing to a master clock over the internet every other day or so. Two freshly synced clocks should have <~20 ms of an offset. From there, the offset increases in the order of a couple 10s of milliseconds per hour. After 24 hours it may reach about 1 second.
-
### Force Syncing to the Master Clock on Demand
+
The easiest way to achieve accurate time synchronization is to force a fresh sync-up to the master clock of all devices before starting data collection. This will ensure drift error is minimized for at least a few hours.
A sync-up can usually be forced by toggling the automatic determination of the current time off and back on in the operating system's settings. In Android 11, for example, the `Date & Time` settings in the `System` settings have a toggle called `Use network-provided time`. In Android 12, the toggle is called `Set time automatically`. Whenever this toggle is turned on, the system syncs up.
@@ -25,12 +26,14 @@ NTP_SERVER=north-america.pool.ntp.org
```
### Improving Synchronization further
+
While an error of `<20 ms` is sufficient for most applications, some require even better synchronization. To achieve this, you can estimate the offset between the clock used by Neon and the external clock down to single millisecond accuracy.
This can be done using the `TimeOffsetEstimator` of the [real-time API](/real-time-api/tutorials/). Using the following code, you can estimate the offset between the Neon clock and the clock of the host executing the code.
::: tip
**Dependency**: `pip install "pupil-labs-realtime-api>=1.1.0"`
:::
+
```python
from pupil_labs.realtime_api.simple import discover_one_device
@@ -43,7 +46,7 @@ if device is None:
estimate = device.estimate_time_offset()
if estimate is None:
device.close()
raise SystemExit("Neon Companion app is too old")
print(f"Mean time offset: {estimate.time_offset_ms.mean} ms")
print(f"Mean roundtrip duration: {estimate.roundtrip_duration_ms.mean} ms")
@@ -53,9 +56,11 @@ device.close()
Using continuous offset estimates like this, you can precisely compensate for clock drifts by correcting the respective timestamps with it.
The calculation would look like this:
+
```python
companion_app_time = external_clock_time - offset
```
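+
+For example, reusing the `estimate` from the snippet above, an external UNIX timestamp in nanoseconds could be mapped onto the Companion clock like this (a sketch; it follows the sign convention of the formula above):
+
+```python
+import time
+
+# Mean offset in nanoseconds, from the TimeOffsetEstimator above
+offset_ns = int(estimate.time_offset_ms.mean * 1e6)
+
+# Map "now" on this computer onto the Companion app clock
+companion_now_ns = time.time_ns() - offset_ns
+```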
+
::: tip
**Note:** In very busy wifi networks the transfer speeds might fluctuate wildly and potentially impact the clock offset measurement. In such cases it would be helpful to connect the phone to the network via [ethernet](/hardware/using-a-usb-hub/) instead.
:::
diff --git a/neon/data-collection/transfer-recordings-via-usb/index.md b/neon/data-collection/transfer-recordings-via-usb/index.md
index 74cd3febd..0c86dbab0 100644
--- a/neon/data-collection/transfer-recordings-via-usb/index.md
+++ b/neon/data-collection/transfer-recordings-via-usb/index.md
@@ -4,21 +4,22 @@
The recommended way for transferring recordings off of the phone is to upload them to [Pupil Cloud](/pupil-cloud/). For some use-cases, however, this may not be possible and users may want to transfer the recordings via USB.
:::
-To transfer recordings directly to a computer you first need to export the recordings to the Android filesystem. Then you need to access the filesystem to copy the data over to your computer.
+To transfer recordings directly to a computer, you first need to export the recordings to the Android filesystem. Then you need to access the filesystem to copy the data over to your computer.
Recordings downloaded directly from the phone will be in a raw binary format.
-#### Export from Neon Companion app
+#### Export from Neon Companion App
+
1. Open the recordings view in the Neon Companion app
2. Select desired recording/s
3. Export:
- - For single recordings, the export button is found by clicking on the 3 vertical dots to
+ - For single recordings, the export button is found by clicking on the 3 vertical dots to
the right of the cloud symbol
- - For multiple recordings, click the download symbol at the bottom of the screen
+ - For multiple recordings, click the download symbol at the bottom of the screen
4. The app will show you a dialog indicating to which folder the recordings will be exported. Confirm this by clicking `Yes`.
-
#### Transfer Exported Recordings to a Computer
+
1. Connect your OnePlus device to a PC via USB (using the USB cable supplied)
2. Slide down from the top of the device's home-screen and click on 'Android System - USB charging this device'
3. Click on 'Tap for more options'
@@ -27,12 +28,11 @@ Recordings downloaded directly from the phone will be in a raw binary format.
6. Locate the export folder on the phone. Usually, it is in `Documents/Neon Export`.
7. Copy the recordings to your computer.
+Note that the export process does not delete the recordings from the Neon Companion app, and you can still upload
+to Pupil Cloud at a later date if required.
-Note that the export process does not delete the recordings from the Neon Companion app, and you can still upload
-to Pupil Cloud at a later date if required.
-
-Recordings that are deleted from the Neon Companion app, e.g. to free up storage space, cannot be transferred back
-to the Neon Companion app from your backup location (including Pupil Cloud, a laptop/desktop PC, or external HD).
+Recordings that are deleted from the Neon Companion app, e.g. to free up storage space, cannot be transferred back
+to the Neon Companion app from your backup location (including Pupil Cloud, a laptop/desktop PC, or external HD).
This means that if you delete the recordings prior to uploading them to Pupil Cloud, they cannot be uploaded at a later date.
diff --git a/neon/data-collection/wearers/index.md b/neon/data-collection/wearers/index.md
index 38bd6bbe6..ea44ee926 100644
--- a/neon/data-collection/wearers/index.md
+++ b/neon/data-collection/wearers/index.md
@@ -1,5 +1,6 @@
# Wearers
-Wearers are the people wearing Neon. In a typical study, each subject would be a wearer. Every recording you make is assigned to a wearer to help you organize your recordings. You can create new wearers on the fly in the Neon Companion app or in advance using [Pupil Cloud](/pupil-cloud/). It is also possible to change the assigned wearer in a recording post hoc in Pupil Cloud. Simply select *Change Wearer* from the context menu of a recording!
+
+Wearers are the people wearing Neon. In a typical study, each subject would be a wearer. Every recording you make is assigned to a wearer to help you organize your recordings. You can create new wearers on the fly in the Neon Companion app or in advance using [Pupil Cloud](/pupil-cloud/). It is also possible to change the assigned wearer in a recording post hoc in Pupil Cloud. Simply select _Change Wearer_ from the context menu of a recording!
Every wearer is assigned a unique ID, such that you can edit the name and profile picture at any time without mixing up your recordings.
diff --git a/neon/hardware/compatible-devices/index.md b/neon/hardware/compatible-devices/index.md
index a8f6e1e54..e8a19a7b4 100644
--- a/neon/hardware/compatible-devices/index.md
+++ b/neon/hardware/compatible-devices/index.md
@@ -1,7 +1,8 @@
# Companion Device
-The Companion device is a flagship Android smartphone. It is a regular phone that is not customized or modified in any way. To ensure maximum stability and performance we can only support a small number of carefully selected and tested models. The Neon Companion app is tuned to work with these particular models as we require full control over various low-level functions of the hardware.
-The supported models are: OnePlus 8, OnePlus 8T, OnePlus 10 Pro, and Motorola Edge 40 Pro. Currently, Neon ships with a Motorola Edge 40 Pro. We highly recommend the Edge 40 Pro, giving you the best performance, endurance and stability.
+The Companion device is a flagship Android smartphone. It is a regular phone that is not customized or modified in any way. To ensure maximum stability and performance we can only support a small number of carefully selected and tested models. The Neon Companion app is tuned to work with these particular models as we require full control over various low-level functions of the hardware.
+
+The supported models are: OnePlus 8, OnePlus 8T, OnePlus 10 Pro, and Motorola Edge 40 Pro. Currently, Neon ships with a Motorola Edge 40 Pro. We highly recommend the Edge 40 Pro, as it gives you the best performance, endurance, and stability.
If you want to replace or add an extra Companion device you can purchase it [directly from us](https://pupil-labs.com/products/neon) or from any other distributor. The Neon Companion app is free and can be downloaded from the [Play Store](https://play.google.com/store/apps/details?id=com.pupillabs.neoncomp).
@@ -9,13 +10,14 @@ Using a fully charged Motorola Edge 40 Pro device you get around 4 hours of cont
## Companion Device Updates
-### Neon Companion app
-Make sure to update the Neon Companion app on a regular basis. The latest version will always be available on the
+### Neon Companion App
+
+Make sure to update the Neon Companion app on a regular basis. The latest version will always be available on the
[Play Store](https://play.google.com/store/apps/details?id=com.pupillabs.neoncomp).
### Android OS
-We ship each Companion Device with a specific Android Version, carefully tested to ensure robustness and stability.
+We ship each Companion Device with a specific Android version, carefully tested to ensure robustness and stability.
The currently supported Android versions are as follows:
diff --git a/neon/neon-player/eye-state-timeline/index.md b/neon/neon-player/eye-state-timeline/index.md
index f1ab4fd3c..0a3a06b3a 100644
--- a/neon/neon-player/eye-state-timeline/index.md
+++ b/neon/neon-player/eye-state-timeline/index.md
@@ -1,21 +1,22 @@
# Eye State Timeline
-This plugin visualizes [3D eye state](/data-collection/data-streams/#_3d-eye-states) and [pupil diameter](/data-collection/data-streams/#pupil-diameters) data.
+This plugin visualizes [3D eye state](/data-collection/data-streams/#_3d-eye-states) and [pupil diameter](/data-collection/data-streams/#pupil-diameters) data.
![Eye State Timeline](./eye-state-timeline.webp)
::: info
-The data will only be visualized if Eye State computation was enabled in the Neon Companion app during recording.
-:::
+The data will only be visualized if Eye State computation was enabled in the Neon Companion app during recording.
+:::
## Export Format
+
Results are exported to `3d_eye_states.csv` with the following fields:
-| Field | Description |
-| ------------------------- | -------- |
-| **section id** | Unique identifier of the corresponding section. |
-| **recording id** | Unique identifier of the recording this sample belongs to. |
-| **timestamp [ns]** | UTC timestamp in nanoseconds of the sample. Equal to the timestamp of the eye video frame this sample was generated with. |
-| **pupil diameter left [mm]** | Physical diameter of the pupil of the left eye. |
-| **pupil diameter right [mm]** | Physical diameter of the pupil of the right eye. |
+| Field | Description |
+| ------------------------- | -------- |
+| **section id** | Unique identifier of the corresponding section. |
+| **recording id** | Unique identifier of the recording this sample belongs to. |
+| **timestamp [ns]** | UTC timestamp in nanoseconds of the sample. Equal to the timestamp of the eye video frame this sample was generated with. |
+| **pupil diameter left [mm]** | Physical diameter of the pupil of the left eye. |
+| **pupil diameter right [mm]** | Physical diameter of the pupil of the right eye. |
| **eye ball center left x [mm]**
**eye ball center left y [mm]**
**eye ball center left z [mm]**
**eye ball center right x [mm]**
**eye ball center right y [mm]**
**eye ball center right z [mm]** | Location of left and right eye ball centers in millimeters in relation to the scene camera of the Neon module. For details on the coordinate systems see [here](/data-collection/data-streams/#_3d-eye-states). |
| **optical axis left x**
**optical axis left y**
**optical axis left z**
**optical axis right x**
**optical axis right y**
**optical axis right z** | Directional vector describing the optical axis of the left and right eye, i.e. the vector pointing from eye ball center to pupil center of the respective eye. For details on the coordinate systems see [here](/data-collection/data-streams/#_3d-eye-states). |
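+
+For a quick look at the export, the CSV can be loaded with pandas. A minimal sketch using the column names from the table above (the folder path is hypothetical; adjust it to your export directory):
+
+```python
+import pandas as pd
+
+eye_states = pd.read_csv("exports/000/3d_eye_states.csv")
+
+# Physical pupil diameters in mm, per the field table above
+print(eye_states["pupil diameter left [mm]"].describe())
+print(eye_states["pupil diameter right [mm]"].describe())
+```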
diff --git a/neon/neon-xr/index.md b/neon/neon-xr/index.md
index 29a9a71d4..43a378a71 100644
--- a/neon/neon-xr/index.md
+++ b/neon/neon-xr/index.md
@@ -1,4 +1,5 @@
# Neon XR
+
Neon XR allows you to equip XR devices with research-grade eye tracking powered by Neon. This enables both gaze-based interaction for XR applications and visual behaviour analysis in XR environments.
Thanks to the small form factor of the [Neon module](/hardware/module-technical-overview/), it can easily be integrated into a variety of XR devices. A hardware mount for the [Pico 4](https://pupil-labs.com/products/vr-ar) headset is available for purchase and additional mounts for other headsets are in development. You can also [build a mount yourself](/neon-xr/build-your-own-mount/) for any headset!
@@ -8,20 +9,22 @@ Thanks to the small form factor of the [Neon module](/hardware/module-technical-
Neon XR includes software integration with Unity. The [Neon XR Core Unity Package](/neon-xr/neon-xr-core-package/) allows you to receive gaze data from a Neon Module in your Unity project in real-time. We also provide a [template project](/neon-xr/MRTK3-template-project/) for the [Mixed Reality Toolkit 3.0](https://learn.microsoft.com/en-us/windows/mixed-reality/mrtk-unity/mrtk3-overview/) that makes it easy to get started with the development of your XR application.
## System Overview
+
The **Neon Module** is attached to its **mount**, which is in turn attached to the **XR device**. Similar to regular Neon frames, the mount is tethered to the **Neon Companion Device** with a USB-C cable.
-The Neon Companion Device provides power to the module and runs the the **Neon Companion App**, which does all the real-time computation.
+The Neon Companion Device provides power to the module and runs the **Neon Companion app**, which does all the real-time computation.
-Depending on the XR device used, the **Unity application** is running on the XR device itself or on a separate computer the XR device is tethered to.
-It communicates with the Neon Companion App over the network, receives gaze data in real-time, and projects it into the virtual world thanks to the **Neon XR Core** Unity Package.
+Depending on the XR device used, the **Unity application** runs either on the XR device itself or on a separate computer the XR device is tethered to.
+It communicates with the Neon Companion app over the network, receives gaze data in real-time, and projects it into the virtual world thanks to the **Neon XR Core** Unity Package.
![System Overview](./system_overview.png)
## Getting Started
+
The easiest starting point for building XR applications with Neon is to use our [MRTK3 template project](/neon-xr/MRTK3-template-project/). It uses MRTK3 as the foundation, and contains several demo scenes you can work off of.
If you don't want to use MRTK3, you can also integrate the [`Neon XR Core` Unity Package](/neon-xr/neon-xr-core-package/) into your project directly, which contains only the ability to receive gaze data in real-time in Unity and to map it into the virtual world.
-If the available Pico 4 mount does not fit your needs, [build your own mount](/neon-xr/build-your-own-mount/)!
\ No newline at end of file
+If the available Pico 4 mount does not fit your needs, [build your own mount](/neon-xr/build-your-own-mount/)!
diff --git a/neon/neon-xr/neon-xr-core-package/index.md b/neon/neon-xr/neon-xr-core-package/index.md
index b4751d488..412e2e83d 100644
--- a/neon/neon-xr/neon-xr-core-package/index.md
+++ b/neon/neon-xr/neon-xr-core-package/index.md
@@ -1,19 +1,21 @@
# The Neon XR Core Package
+
Using the Neon XR Core Package in your Unity project enables you to receive eye tracking data from a Neon device over the local network in real-time.
## Adding Neon XR to Your Project
-The [**Neon XR Unity package**](https://github.com/pupil-labs/neon-xr) enables you to receive eye tracking data from a Neon module in your Unity project in real-time.
+
+The [**Neon XR Unity package**](https://github.com/pupil-labs/neon-xr) enables you to receive eye tracking data from a Neon module in your Unity project in real-time.
To integrate it in your project, follow these steps:
1. Add the `Neon XR` package in the Package Manager.
- 1. Select `Window -> Package Manager`
- 2. Select `+ -> Add Package from git URL…`
- 3. Insert ` https://github.com/pupil-labs/neon-xr.git?path=/com.pupil-labs.neon-xr.core`.
+ 1. Select `Window -> Package Manager`
+ 2. Select `+ -> Add Package from git URL…`
+ 3. Insert `https://github.com/pupil-labs/neon-xr.git?path=/com.pupil-labs.neon-xr.core`.
1. If your project does not use Addressables, create default Addressables settings.
- 1. Select `Window -> Asset Management -> Addressables -> Groups`.
- 2. Click on `Create Addressables Settings`.
- 3. If legacy bundles are detected click on `Ignore`.
+ 1. Select `Window -> Asset Management -> Addressables -> Groups`.
+ 2. Click on `Create Addressables Settings`.
+ 3. If legacy bundles are detected click on `Ignore`.
1. Select `Pupil Labs -> Addressables -> Import Groups`. After this step the `NeonXR Group` should appear in the `Addressables Groups` window (you can open this window again following step 2.1).
1. In the `Addressable Groups` window, select `Build -> New Build -> Default Build Script`.
1. Copy the `NeonXR` prefab from the imported package into the scene.
@@ -21,17 +23,19 @@ To integrate it in your project, follow these steps:
1. Add your own listener for the `gazeDataReady` event (see for example, `GazeDataVisualizer.OnGazeDataReady`).
## Connecting to Neon
-The Neon Companion App publishes the data it generates to the local network using the [real-time API](/real-time-api/tutorials/). The Neon XR Core package contains a client to receive this data and map it into the 3D virtual world. By default, it tries to connect the first Neon device it detects on the network.
+
+The Neon Companion app publishes the data it generates to the local network using the [real-time API](/real-time-api/tutorials/). The Neon XR Core package contains a client to receive this data and map it into the 3D virtual world. By default, it tries to connect to the first Neon device it detects on the network.
This behavior can be further configured by editing the `config.json` file of the app, which is located at the following path:
+
```
\Android\data\com.MixedRealityToolkitOrganization.MRTK3Sample\files\config.json
-```
+```
It contains a field `rtspSettings` with the following keys:
-| Field | Description |
-| --- | --- |
-| `autoIP` | Enables the automatic discovery of Neon devices connected to the local network. The first detected device will be used. |
-| `deviceName` | If not empty, only devices with the provided name can be discovered. |
-| `ip` | The IP address that will be used if automatic discovery is disabled. |
\ No newline at end of file
+| Field | Description |
+| ------------ | ----------------------------------------------------------------------------------------------------------------------- |
+| `autoIP` | Enables the automatic discovery of Neon devices connected to the local network. The first detected device will be used. |
+| `deviceName` | If not empty, only devices with the provided name can be discovered. |
+| `ip` | The IP address that will be used if automatic discovery is disabled. |
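+
+Putting these keys together, a hypothetical `config.json` that pins the client to a fixed IP address could look like this (values are illustrative; the real file may contain additional fields):
+
+```json
+{
+  "rtspSettings": {
+    "autoIP": false,
+    "deviceName": "",
+    "ip": "192.168.1.168"
+  }
+}
+```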
diff --git a/neon/pupil-cloud/offset-correction/index.md b/neon/pupil-cloud/offset-correction/index.md
index 321a498e6..b68ffdfaf 100644
--- a/neon/pupil-cloud/offset-correction/index.md
+++ b/neon/pupil-cloud/offset-correction/index.md
@@ -1,10 +1,10 @@
# Offset Correction on Pupil Cloud
-For some subjects, you may find a constant offset in their gaze estimates. This gaze offset can be [compensated for in the Neon Companion app](https://docs.pupil-labs.com/neon/data-collection/offset-correction/) before starting a recording, or post hoc in Pupil Cloud as shown below. Click on the gaze offset icon to start the gaze offset correction.
+For some subjects, you may find a constant offset in their gaze estimates. This gaze offset can be [compensated for in the Neon Companion app](/data-collection/offset-correction/) before starting a recording, or post hoc in Pupil Cloud as shown below. Click on the gaze offset icon to start the gaze offset correction.
![Offset correction on Cloud header image](./offset-cloud-timeline.png)
-Once you enter the `Edit Gaze Offset` view, simply drag the gaze circle to apply the correction. See the video below for reference.
+Once you enter the `Edit Gaze Offset` view, simply drag the gaze circle to apply the correction. See the video below for reference.
- The grey circle indicates the raw gaze estimate provided by Neon’s gaze estimation pipeline.
- The red circle indicates the gaze position with the current offset applied.
@@ -15,5 +15,5 @@ Once you enter the `Edit Gaze Offset` view, simply drag the gaze circle to appl
::: tip
-Modifying the gaze offset impacts all downstream data, such as fixations, mapped gaze from enrichments, and visualizations. Where possible, data is updated instantly. If not, the respective data will be deleted, requiring (partial) re-computation of enrichments or visualizations. In that case, the enrichment or visualization will be shown as *Not Started* and you need to re-run them.
-:::
\ No newline at end of file
+Modifying the gaze offset impacts all downstream data, such as fixations, mapped gaze from enrichments, and visualizations. Where possible, data is updated instantly. If not, the respective data will be deleted, requiring (partial) re-computation of enrichments or visualizations. In that case, the enrichment or visualization will be shown as _Not Started_ and you will need to re-run them.
+:::
diff --git a/neon/pupil-cloud/troubleshooting/index.md b/neon/pupil-cloud/troubleshooting/index.md
index b4585eb9c..03d8e2eb8 100644
--- a/neon/pupil-cloud/troubleshooting/index.md
+++ b/neon/pupil-cloud/troubleshooting/index.md
@@ -1,12 +1,16 @@
# Troubleshooting
+
Below you can find a list of issues we have observed in the past and recommendations on how to fix them. If you cannot find your issue in the list, please reach out to us on [Discord](https://pupil-labs.com/chat/) or via email to `info@pupil-labs.com`.
## Recordings Are Not Uploading to Pupil Cloud Successfully
+
1. Make sure **Cloud upload** is enabled in the Neon Companion app's settings.
1. Try logging out of the app and back in.
## My Enrichment Download Contains Only an ‘info.json’ File and Nothing Else!
+
Did you use **Safari browser** to make the download?
- - Enrichment downloads come as ZIP files. By default, Safari will automatically extract ZIP files when the download is finished. However, it will only extract file types that are considered "safe". Surprisingly, CSV and MP4 files are not considered safe and Safari will by default only extract the remaining JSON files.
- To fix this you can either use a different browser to make the download, or disable the **"Open 'safe' files after downloading"** setting in Safari.
\ No newline at end of file
+- Enrichment downloads come as ZIP files. By default, Safari will automatically extract ZIP files when the download is finished. However, it will only extract file types that are considered "safe". Surprisingly, CSV and MP4 files are not considered safe and Safari will by default only extract the remaining JSON files.
+
+ To fix this you can either use a different browser to make the download, or disable the **"Open 'safe' files after downloading"** setting in Safari.
diff --git a/neon/real-time-api/tutorials/index.md b/neon/real-time-api/tutorials/index.md
index f04a4a748..953fcc7d1 100644
--- a/neon/real-time-api/tutorials/index.md
+++ b/neon/real-time-api/tutorials/index.md
@@ -36,6 +36,7 @@ print(f"Battery level: {device.battery_level_percent}%")
print(f"Free storage: {device.memory_num_free_bytes / 1024**3:.1f} GB")
print(f"Serial number of connected glasses: {device.serial_number_glasses}")
```
+
```
Phone IP address: 192.168.1.168
Phone name: OnePlus8
@@ -63,6 +64,7 @@ time.sleep(5)
device.recording_stop_and_save()
```
+
```
Started recording with id 2f99d9f9-f009-4015-97dd-eb253de443b0
```
@@ -89,6 +91,7 @@ print(device.send_event("test event 2", event_timestamp_unix_ns=time.time_ns()))
device.recording_stop_and_save()
```
+
```
Event(name=None recording_id=None timestamp_unix_ns=1642599117043000000 datetime=2022-01-19 14:31:57.043000)
Event(name=None recording_id=fd8c98ca-cd6c-4d3f-9a05-fbdb0ef42668 timestamp_unix_ns=1642599122555200500 datetime=2022-01-19 14:32:02.555201)
@@ -121,7 +124,9 @@ scene_image_rgb = cv2.cvtColor(scene_sample.bgr_pixels, cv2.COLOR_BGR2RGB)
plt.imshow(scene_image_rgb)
plt.scatter(gaze_sample.x, gaze_sample.y, s=200, facecolors='none', edgecolors='r')
```
+
The output data would look as follows:
+
```
This sample contains the following data:
@@ -135,6 +140,7 @@ For the left eye x, y, z: -30.087890625, 10.048828125, -52.4462890625 and for th
Directional vector describing the optical axis of the left and right eye.
For the left eye x, y, z: -0.05339553952217102, 0.12345726788043976, 0.9909123182296753 and for the right eye x, y, z: -0.40384653210639954, 0.11708031594753265, 0.9073038101196289.
```
+
Alternatively, you could also use the [`receive_scene_video_frame`](https://pupil-labs-realtime-api.readthedocs.io/en/stable/api/simple.html#pupil_labs.realtime_api.simple.Device.receive_scene_video_frame) and [`receive_gaze_datum`](https://pupil-labs-realtime-api.readthedocs.io/en/stable/api/simple.html#pupil_labs.realtime_api.simple.Device.receive_gaze_datum) methods to obtain each sample separately. The [`receive_matched_scene_video_frame_and_gaze`](https://pupil-labs-realtime-api.readthedocs.io/en/stable/api/simple.html#pupil_labs.realtime_api.simple.Device.receive_matched_scene_video_frame_and_gaze) method, however, additionally ensures that both samples are matched temporally.
## IMU Data
@@ -161,7 +167,9 @@ print(imu_sample.accel_data)
print(f"Gyro data:")
print(imu_sample.gyro_data)
```
+
The output data would look as follows:
+
```
This IMU sample was recorded at 2023-05-25 11:23:05.749155
It contains the following data:
@@ -200,6 +208,7 @@ print(calibration["left_distortion_coefficients"][0])
```
## Template Data
+
You can access the response data entered into the template questionnaire on the phone and also set those responses remotely.
Using the [`get_template`](https://pupil-labs-realtime-api.readthedocs.io/en/stable/api/simple.html#pupil_labs.realtime_api.simple.Device.get_template) method, you can receive the definition of the template containing all questions and sections.
@@ -209,6 +218,7 @@ template = device.get_template()
fstring = "{i.id}\t{i.title}\t{i.widget_type} \t{i.choices}"
print("\n".join(fstring.format(i=i) for i in template.items))
```
+
```
e3b94cc7-dce4-4781-a818-f769574c31d2 Section 1 SECTION_HEADER []
a54e85aa-5474-42f8-90c0-19f40e9ca825 Question 1 TEXT []
@@ -224,6 +234,7 @@ Using the [`get_template_data`](https://pupil-labs-realtime-api.readthedocs.io/e
data = device.get_template_data()
print("\n".join(f"{k}\t{v}" for k, v in data.items()))
```
+
```
6169276c-91f4-4ef9-8e03-45759ff61477 ['a']
3c7d620f-9f98-4556-92dd-b66df329999c An example paragraph.
@@ -243,12 +254,13 @@ questionnaire = {
device.post_template_data(questionnaire)
```
-You can also retrieve individual questions by their ID using the [`get_question_by_id`](https://pupil-labs-realtime-api.readthedocs.io/en/stable/api/models.html#pupil_labs.realtime_api.models.Template.get_question_by_id) method and check the validity of a response using the [`validate_answer`]( https://pupil-labs-realtime-api.readthedocs.io/en/stable/api/models.html#pupil_labs.realtime_api.models.TemplateItem.validate_answer) method.
+You can also retrieve individual questions by their ID using the [`get_question_by_id`](https://pupil-labs-realtime-api.readthedocs.io/en/stable/api/models.html#pupil_labs.realtime_api.models.Template.get_question_by_id) method and check the validity of a response using the [`validate_answer`](https://pupil-labs-realtime-api.readthedocs.io/en/stable/api/models.html#pupil_labs.realtime_api.models.TemplateItem.validate_answer) method.
```python
question = template.get_question_by_id("6169276c-91f4-4ef9-8e03-45759ff61477")
question.validate_answer(["invalid_option"])
```
+
```
pupil_labs/realtime_api/models.py", line 346, in validate_answer
raise InvalidTemplateAnswersError(self, answers, errors)