AOI Editor docs and pupillometry update #678

Merged 2 commits on Apr 16, 2024
3 changes: 2 additions & 1 deletion neon/data-collection/data-format/index.md
@@ -15,7 +15,7 @@
| **android_device_model** | Model name of the Companion device. |
| **android_device_name** | Device name of the Companion device. |
| **app_version** | Version of the Neon Companion app used to make the recording. |
| **calib_version** | Version of the offset correction used by the Neon Companion app. |
| **data_format_version** | Version of the data format used by the Neon Companion app. |
| **duration** | Duration of the recording in nanoseconds. |
| **firmware_version** | Version numbers of the firmware and FPGA. |
@@ -43,7 +43,7 @@
| Field | Description |
| -------- | -------- |
| **camera_matrix** | The camera matrix of the scene camera. |
| **dist_coefs** | The distortion coefficients of the scene camera. The order of the values is `(k1, k2, p1, p2, k3, k4, k5, k6)` following [OpenCV's distortion model](https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html#ga3207604e4b1a1758aa66acb6ed5aa65d). |
| **serial_number** | Serial number of Neon module used for the recording. This number is encoded in the QR code on the back of the Neon module. |
| **version** | The version of the intrinsics data format. |
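To make the coefficient order concrete, the rational distortion model can be applied by hand. The intrinsic values below are invented for illustration (real values come from a recording's scene camera calibration file), and in practice OpenCV's `cv2.projectPoints` handles this for you. A minimal sketch:

```python
import numpy as np

# Hypothetical intrinsics, for illustration only; real values come from
# the scene camera calibration of a recording.
camera_matrix = np.array([
    [890.0, 0.0, 800.0],
    [0.0, 890.0, 600.0],
    [0.0, 0.0, 1.0],
])
# Order follows OpenCV: (k1, k2, p1, p2, k3, k4, k5, k6)
dist_coefs = np.array([-0.13, 0.11, 0.0, 0.0, -0.02, 0.0, 0.0, 0.0])

def project_point(p_cam):
    """Project a 3D point in camera coordinates to distorted pixel
    coordinates using OpenCV's rational distortion model."""
    x, y = p_cam[0] / p_cam[2], p_cam[1] / p_cam[2]
    r2 = x * x + y * y
    k1, k2, p1, p2, k3, k4, k5, k6 = dist_coefs
    radial = (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3) / (
        1 + k4 * r2 + k5 * r2**2 + k6 * r2**3
    )
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    u = camera_matrix[0, 0] * x_d + camera_matrix[0, 2]
    v = camera_matrix[1, 1] * y_d + camera_matrix[1, 2]
    return u, v

# A point on the optical axis maps to the principal point.
u, v = project_point(np.array([0.0, 0.0, 1.0]))
print(u, v)  # 800.0 600.0
```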

@@ -115,9 +115,10 @@
| **section id** | Unique identifier of the corresponding section. |
| **recording id** | Unique identifier of the recording this sample belongs to. |
| **timestamp [ns]** | UTC timestamp in nanoseconds of the sample. Equal to the timestamp of the eye video frame this sample was generated with. |
| **pupil diameter [mm]** | Physical diameter of the pupils of the left and right eye. |
| **pupil diameter left [mm]** | Physical diameter of the pupil of the left eye. |
| **pupil diameter right [mm]** | Physical diameter of the pupil of the right eye. |
| **eye&nbsp;ball&nbsp;center&nbsp;left&nbsp;x&nbsp;[mm]**<br /> **eye ball center left y [mm]**<br /> **eye ball center left z [mm]**<br /> **eye&nbsp;ball&nbsp;center&nbsp;right&nbsp;x&nbsp;[mm]**<br /> **eye&nbsp;ball&nbsp;center&nbsp;right&nbsp;y&nbsp;[mm]**<br /> **eye ball center right z [mm]** | Location of left and right eye ball centers in millimeters in relation to the scene camera of the Neon module. For details on the coordinate systems see [here](/data-collection/data-streams/#_3d-eye-states). |
| **optical axis left x**<br /> **optical axis left y**<br /> **optical axis left z**<br /> **optical axis right x**<br /> **optical axis right y**<br /> **optical axis right z** | Directional vector describing the optical axis of the left and right eye, i.e. the vector pointing from eye ball center to pupil center of the respective eye. For details on the coordinate systems see [here](/data-collection/data-streams/#_3d-eye-states). |
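The per-eye columns can be analyzed directly with pandas. The sketch below uses inline sample rows in place of a real export file; the filename and values are assumptions for illustration:

```python
import io

import pandas as pd

# Two sample rows standing in for a real export
# (in practice: pd.read_csv("3d_eye_states.csv") -- filename assumed).
sample = io.StringIO(
    "timestamp [ns],pupil diameter left [mm],pupil diameter right [mm]\n"
    "1700000000000000000,3.1,3.3\n"
    "1700000000005000000,3.2,3.4\n"
)
df = pd.read_csv(sample)

# Mean physical pupil diameter per eye, in millimeters.
mean_left = df["pupil diameter left [mm]"].mean()
mean_right = df["pupil diameter right [mm]"].mean()
print(mean_left, mean_right)
```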

## blinks.csv
This file contains [blinks](/data-collection/data-streams/#blinks) detected in the eye video.
2 changes: 1 addition & 1 deletion neon/data-collection/data-streams/index.md
@@ -20,20 +20,20 @@

The achieved framerate can vary based on what Companion device is used and environmental conditions. On the OnePlus 10, the full 200 Hz can generally be achieved outside of especially hot environments. On the OnePlus 8, the framerate typically drops to ~120 Hz within a few minutes of starting a recording. Other apps running simultaneously on the phone may decrease the framerate.

After a recording is uploaded to Pupil Cloud, gaze data is automatically re-computed at the full 200 Hz framerate and can be downloaded from there.

The gaze estimation algorithm is based on end-to-end deep learning and provides gaze data robustly without requiring a calibration. We are currently working on a white paper that thoroughly evaluates the algorithm and will link it here once it is published.

## Fixations

<Badge>Pupil Cloud</Badge><Badge>Neon Player</Badge>
The two primary types of eye movements exhibited by the visual system are fixations and saccades. During fixations, the eyes are directed at a specific point in the environment. A saccade is a very quick movement where the eyes jump from one fixation to the next. Properties like the fixation duration are of significant importance for studying gaze behavior.

![Fixations](./fixations.jpg)

Fixations are calculated automatically in Pupil Cloud after uploading a recording and are included in relevant downloads. The downloads for gaze mapping enrichments ([Reference Image Mapper](/pupil-cloud/enrichments/reference-image-mapper/#export-format), [Marker Mapper](/pupil-cloud/enrichments/marker-mapper/#export-format)) also include "mapped fixations".

The deployed fixation detection algorithm was specifically designed for head-mounted eye trackers and offers increased robustness in the presence of head movements. Especially movements due to the vestibulo-ocular reflex are compensated for, which is not the case for most other fixation detection algorithms. Read more about that in the [Pupil Labs fixation detector whitepaper](https://docs.google.com/document/d/1dTL1VS83F-W1AZfbG-EogYwq2PFk463HqwGgshK3yJE/export?format=pdf).

## 3D Eye States

@@ -49,7 +49,7 @@
## Pupil Diameters

<Badge>Pupil Cloud</Badge>
After uploading a recording to Pupil Cloud, pupil diameters are computed automatically at 200 Hz. The computed pupil diameters correspond to the physical pupil size in mm, rather than the apparent pupil size in pixels as observed in the eye videos. The algorithm does not provide independent measurements per eye but reports a single value for both eyes.
After uploading a recording to Pupil Cloud, pupil diameters are computed automatically at 200 Hz, separately for the left and right eye. The computed pupil diameters correspond to the physical pupil size in mm, rather than the apparent pupil size in pixels as observed in the eye videos.

Similar to the 3D eye states, the accuracy of the pupil diameter measurements improves when supplying the wearer's IED in the wearer profile before making a recording.

64 changes: 63 additions & 1 deletion neon/pupil-cloud/visualizations/areas-of-interest/index.md
@@ -1,2 +1,64 @@
# Areas of Interest (AOIs)

Coming soon!

The AOI Editor allows you to draw areas of interest (AOIs) on top of the reference image or surface. You can draw anything from simple polygons to multiple disconnected shapes. This tool is available for use after a [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) or a [Marker Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/marker-mapper/) enrichment is completed.

## Setup

### AOI Editing and Drawing

Upon completion of the Reference Image Mapper or Marker Mapper enrichment, access the main view of the enrichment by navigating to the **`Enrichments`** tab and selecting **`Edit AOIs`** under the section **`Tools`**.

![Edit AOIs](./AOI_enrichment_view.png)

From there, you will enter the AOI editing view and you are ready to start drawing AOIs on your reference image or surface.

<Youtube src="7-9m3Mq-fio"/>

### AOI Heatmap and Metrics

To visualize your AOI heatmap:

- Navigate to the **`Visualizations`** tab.
- Click on **`Create Visualization`**.
- Select **`AOI heatmap`** and the enrichment to which it should be applied.

![View AOI heatmap](./View_AOI_heatmap.png)

Within the AOI Heatmap view, users can specify the recordings to be included, the metric to be displayed, and which AOIs should be incorporated into the visualization.

<Youtube src="Rrb6OKmTCOs"/>

## Export Format

Through the **`Visualizations`** tab, in the AOI Heatmap view, you can download the final visualization displaying the metric of your interest in **`.png`** format.

Through the **`Downloads`** tab, you can download the AOI-related files as part of the enrichment folder. Note that the following CSV files will be empty if no AOIs are defined for a specific enrichment.

### aoi_fixations.csv

This file contains fixation events mapped on each area of interest.

| Field | Description |
| ---------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **aoi id** | Unique identifier of the corresponding area of interest. |
| **section id** | Unique identifier of the corresponding section. |
| **recording id** | Unique identifier of the recording this sample belongs to. |
| **fixation id** | Identifier of fixation within the section. The counter starts at the beginning of the recording. |
| **fixation&nbsp;duration&nbsp;[ms]** | Duration of the fixation in milliseconds. |
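A per-AOI summary can be derived from this file with a simple group-by. Inline sample rows stand in for a real `aoi_fixations.csv` here; the identifiers are hypothetical:

```python
import io

import pandas as pd

# Sample rows standing in for aoi_fixations.csv; column names follow
# the table above, identifier values are made up.
sample = io.StringIO(
    "aoi id,section id,recording id,fixation id,fixation duration [ms]\n"
    "aoi-1,s-1,r-1,1,250\n"
    "aoi-1,s-1,r-1,2,180\n"
    "aoi-2,s-1,r-1,3,400\n"
)
df = pd.read_csv(sample)

# Fixation count and mean fixation duration per area of interest.
summary = df.groupby("aoi id")["fixation duration [ms]"].agg(["count", "mean"])
print(summary)
```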

### aoi_metrics.csv

This file contains standard fixation and gaze metrics on AOIs.

| Field | Description |
| ---------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **aoi id** | Unique identifier of the corresponding area of interest. |
| **recording id** | Unique identifier of the recording this sample belongs to. |
| **recording name** | Name of the recording this sample belongs to. |
| **aoi name** | Name of the corresponding area of interest. |
| **average&nbsp;fixation&nbsp;duration&nbsp;[ms]** | Average fixation duration for the corresponding area of interest in milliseconds. |
| **total fixations** | Total number of fixations for the corresponding area of interest. |
| **time&nbsp;to&nbsp;first&nbsp;fixation&nbsp;[ms]** | Average time in milliseconds until the corresponding area of interest gets fixated on for the first time in a recording. |
| **time&nbsp;to&nbsp;first&nbsp;gaze&nbsp;[ms]** | Average time in milliseconds until the corresponding area of interest gets gazed at for the first time in a recording. |
| **total&nbsp;fixation&nbsp;duration&nbsp;[ms]** | Total fixation duration for the corresponding area of interest in milliseconds. |
| **total&nbsp;gaze&nbsp;duration&nbsp;[ms]** | Total gaze duration for the corresponding area of interest in milliseconds. |
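Once loaded, AOIs can for instance be ranked by total fixation duration. The sample rows below are hypothetical, standing in for a real `aoi_metrics.csv`:

```python
import io

import pandas as pd

# Hypothetical sample rows in place of a downloaded aoi_metrics.csv.
sample = io.StringIO(
    "aoi name,total fixations,total fixation duration [ms]\n"
    "Logo,12,3400\n"
    "Price,25,6100\n"
    "Headline,8,1500\n"
)
df = pd.read_csv(sample)

# Rank areas of interest by how long they were fixated in total.
ranked = df.sort_values("total fixation duration [ms]", ascending=False)
print(ranked["aoi name"].tolist())  # ['Price', 'Logo', 'Headline']
```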