
Add monocular settings doc for the app #717

Merged: 15 commits merged on Nov 12, 2024
1 change: 1 addition & 0 deletions alpha-lab/imu-transformations/pl-imu-transformations
Submodule pl-imu-transformations added at 542043
4 changes: 4 additions & 0 deletions neon/.vitepress/config.mts
@@ -106,6 +106,10 @@ let theme_config_additions = {
text: "Measuring the IED",
link: "/data-collection/measuring-ied/",
},
{
text: "Monocular Gaze",
link: "/data-collection/gaze-mode/",
},
{
text: "Scene Camera Exposure",
link: "/data-collection/scene-camera-exposure/",
1 change: 1 addition & 0 deletions neon/data-collection/data-format/index.md
@@ -1,6 +1,6 @@
# Recording Format

This page describes the data format of Neon recordings downloaded from Pupil Cloud in the "Timeseries Data" and "Timeseries Data + Scene Video" formats. In these formats, the data from the Neon Companion app is augmented with gaze and eye state estimates wherever they have not been computed in realtime. Furthermore, fixation and blink data as well as some IMU transformations are computed.

When downloading Native Recording Data from Pupil Cloud, or directly [extracting it via USB from the Companion device](/data-collection/transfer-recordings-via-usb/), you can use the [pl-neon-recording](https://github.com/pupil-labs/pl-neon-recording) Python library to read and access the data, or load it into [Neon Player](/neon-player/). This format contains all video data as well as all data that was computed in realtime on the Companion device.
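
For instance, a minimal sketch of reading such a recording might look like the following; the `load` entry point and the gaze sample attributes are assumptions based on the library's README, so check its documentation for the exact API:

```python
# Minimal sketch, assuming pl-neon-recording's `load` entry point and
# gaze stream attributes; verify against the library's documentation.
import pupil_labs.neon_recording as nr

# Point this at a Native Recording Data folder (from Cloud or USB).
recording = nr.load("path/to/native/recording")

# Iterate over the gaze samples computed in realtime on the device.
for sample in recording.gaze:
    print(sample.ts, sample.x, sample.y)
```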

@@ -20,7 +20,7 @@
| **android_device_model** | Model name of the Companion device. |
| **android_device_name** | Device name of the Companion device. |
| **app_version** | Version of the Neon Companion app used to make the recording. |
| **calib_version** | Version of the offset correction used by the Neon Companion app. |
| **data_format_version** | Version of the data format used by the Neon Companion app. |
| **duration** | Duration of the recording in nanoseconds. |
| **firmware_version** | Version numbers of the firmware and FPGA. |
@@ -34,6 +34,7 @@
| **start_time** | Timestamp of when the recording was started. Given as UTC timestamp in nanoseconds. |
| **template_data** | Data regarding the selected template for the recording as well as the response values. |
| **wearer_id** | Unique identifier of the wearer selected for this recording. |
| **gaze_mode** | Indicates whether the binocular or a monocular (right/left) pipeline was used to infer gaze. |
| **wearer_name** | Name of the wearer selected for this recording. |
| **workspace_id** | The ID of the Pupil Cloud workspace this recording has been assigned to. |

@@ -48,7 +49,7 @@
| Field | Description |
| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **camera_matrix** | The camera matrix of the scene camera. |
| **dist_coefs** | The distortion coefficients of the scene camera. The order of the values is `(k1, k2, p1, p2, k3, k4, k5, k6)` following [OpenCV's distortion model](https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga3207604e4b1a1758aa66acb6ed5aa65d). |
| **serial_number** | Serial number of Neon module used for the recording. This number is encoded in the QR code on the back of the Neon module. |
| **version** | The version of the intrinsics data format. |
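
As an illustration, the sketch below uses these intrinsics with OpenCV to undistort a pixel coordinate. The numeric values are placeholders; in practice, parse `camera_matrix` and `dist_coefs` from the intrinsics file described above:

```python
# Sketch of using the scene camera intrinsics with OpenCV;
# the values below are placeholders, not real calibration data.
import cv2
import numpy as np

camera_matrix = np.array([[890.0,   0.0, 800.0],
                          [  0.0, 890.0, 600.0],
                          [  0.0,   0.0,   1.0]])
# Order (k1, k2, p1, p2, k3, k4, k5, k6), as documented above.
dist_coefs = np.array([-0.13, 0.11, 0.0, 0.0, 0.0, 0.24, 0.0, 0.0])

# Undistort a pixel coordinate (e.g. a gaze point). Passing
# P=camera_matrix maps the result back to pixel coordinates.
point = np.array([[[900.0, 700.0]]], dtype=np.float32)
undistorted = cv2.undistortPoints(point, camera_matrix, dist_coefs, P=camera_matrix)
print(undistorted.ravel())  # pixel coordinates with lens distortion removed
```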

@@ -108,17 +109,17 @@
| **azimuth [deg]** | The [azimuth](https://en.wikipedia.org/wiki/Horizontal_coordinate_system) of the gaze ray corresponding to the fixation location in relation to the scene camera in degrees. |
| **elevation [deg]** | The [elevation](https://en.wikipedia.org/wiki/Horizontal_coordinate_system) of the gaze ray corresponding to the fixation location in relation to the scene camera in degrees. |
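
For illustration, a sketch of converting such azimuth/elevation pairs into a unit gaze direction in the scene camera frame is shown below. The axis and sign conventions used here (x right, y down, z forward, elevation positive upward) are assumptions, so verify them against the coordinate-system documentation:

```python
# Sketch: spherical (azimuth, elevation) in degrees -> unit direction
# vector in the scene camera frame. Conventions are assumptions.
import numpy as np

def gaze_direction(azimuth_deg: float, elevation_deg: float) -> np.ndarray:
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    x = np.cos(el) * np.sin(az)   # right
    y = -np.sin(el)               # down (elevation positive up)
    z = np.cos(el) * np.cos(az)   # forward, along the camera axis
    return np.array([x, y, z])

print(gaze_direction(0.0, 0.0))  # straight ahead: [0. 0. 1.]
```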

## saccades.csv

This file contains [saccades](/data-collection/data-streams/#fixations-saccades) detected by the fixation detector.

| Field | Description |
| ---------------------------------- | ------------------------------------------------------------------------------------------------------- |
| **section id** | Unique identifier of the corresponding section. |
| **recording id** | Unique identifier of the recording this sample belongs to. |
| **saccade id** | Identifier of the saccade. The counter starts at the beginning of the recording. |
| **start timestamp [ns]** | UTC timestamp in nanoseconds of the start of the saccade. |
| **end timestamp [ns]** | UTC timestamp in nanoseconds of the end of the saccade. |
| **duration [ms]** | Duration of the saccade in milliseconds. |
| **amplitude [px]** | Float value representing the amplitude of the saccade in world camera pixel coordinates. |
| **amplitude [deg]** | Float value representing the amplitude of the saccade in degrees of visual angle. |
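
As a quick illustration, saccades.csv can be loaded with pandas and summarized using the column names from the table above (a minimal sketch, assuming the export sits in the working directory):

```python
# Sketch: load saccades.csv and summarize it with pandas.
import pandas as pd

saccades = pd.read_csv("saccades.csv")

# Durations are given in milliseconds, amplitudes in degrees.
print("n saccades:", len(saccades))
print("mean duration [ms]:", saccades["duration [ms]"].mean())
print("mean amplitude [deg]:", saccades["amplitude [deg]"].mean())
```
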
56 changes: 56 additions & 0 deletions neon/data-collection/gaze-mode/index.md
@@ -0,0 +1,56 @@
# Binocular vs. Monocular Gaze Mode

Starting from version 2.8.33, the Neon Companion App allows you to choose between using images from both eyes (binocular) or from a single eye (monocular) to compute gaze positions. This flexibility enables you to isolate gaze from a specific eye, e.g. when recording from participants/wearers with phorias, or in other experimental paradigms that require monocular gaze.

## Modes

- `Binocular` _(default)_: Utilizes images from both the right and left eyes to infer gaze position. This mode offers higher accuracy and robustness by leveraging information from both eyes.
- `Mono Right`: Uses only the right eye's image to infer gaze position. This mode may be useful in scenarios where only the right eye can be used.
- `Mono Left`: Uses only the left eye's image to infer gaze position. Similar to `Mono Right`, but using the left eye.

::: warning
**Monocular gaze is less accurate and robust** since it relies on a single eye image. Use this mode only if binocular tracking is not feasible or if there's a specific need for single-eye tracking.
:::

## Changing Gaze Modes

To switch between gaze modes, follow these steps:

1. From the home screen of the Neon Companion App, tap the gear icon located at the top-right corner to open **Companion Settings**.
2. Scroll down to the **NeonNet** section.
3. Choose your desired **Gaze Mode** (`Binocular`, `Mono Right`, or `Mono Left`).
4. After selecting the new gaze mode, **unplug and re-plug** the Neon device to apply the changes.

::: tip
After altering the gaze mode to monocular, it's recommended to perform a new [Offset Correction](/data-collection/offset-correction/) to improve accuracy.
:::

## Other Considerations

- Changing the gaze mode modifies the existing gaze stream. It does **not** create an additional stream.
- All downstream processes, including fixations and enrichments, will utilize this monocular gaze data.
- Eye State and Pupillometry remain unaffected by changes to the gaze mode and will continue to output data for each eye.

## In Pupil Cloud

Pupil Cloud handles gaze data processing as follows:

- **Default Behavior**: Pupil Cloud reprocesses recordings to maintain a consistent sampling rate of **200 Hz**, regardless of the real-time sampling rate set in the app.

- **Monocular Mode**: If a monocular gaze mode is selected, Pupil Cloud **will not** reprocess the recordings. Ensure that this aligns with your data analysis requirements; the sketch below shows one way to check an export's effective sampling rate.
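
A minimal sketch of such a check, assuming a Timeseries export's gaze.csv with a `timestamp [ns]` column:

```python
# Sketch: estimate the effective gaze sampling rate of an export,
# e.g. to see whether a recording was reprocessed to 200 Hz.
import pandas as pd

gaze = pd.read_csv("gaze.csv")
ts_s = gaze["timestamp [ns]"] * 1e-9  # nanoseconds -> seconds
rate_hz = (len(ts_s) - 1) / (ts_s.iloc[-1] - ts_s.iloc[0])
print(f"effective sampling rate: {rate_hz:.1f} Hz")
```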

## Where Can I Find Which Mode Was Used on a Recording?

In the recording view of the Neon Companion App, you can tap the three dots to view the recording's metadata.

Additionally, the [info.json](/data-collection/data-format/#info-json) file now includes a new field `gaze_mode`.
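
For example, a minimal sketch of reading this field; the exact value strings stored in `gaze_mode` are not specified here, so treat them as an assumption:

```python
# Sketch: read the gaze mode from a recording's info.json.
import json

with open("info.json") as f:
    info = json.load(f)

# Older app versions may not include the field at all.
print("gaze mode:", info.get("gaze_mode", "not recorded"))
```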

---

### Best Practices / Additional Recommendations

- **Testing**: After changing the gaze mode, perform tests to verify that the gaze tracking meets your accuracy and performance needs.

- **Update your Team**: Keep your team informed about changes in gaze modes to ensure consistency in data collection and analysis.

---