fix error in aoi_metrics.csv output #684

Merged · 3 commits · May 31, 2024
6 changes: 3 additions & 3 deletions neon/data-collection/data-streams/index.md
@@ -18,22 +18,22 @@

![Gaze](./gaze.jpg)

- The achieved framerate can vary based on what Companion device is used and environmental conditions. On the OnePlus 10, the full 200 Hz can generally be achieved outside of especially hot environments. On the OnePlus 8, the framerate typically drops to ~120 Hz within a few minutes of starting a recording. Other apps running simultaneously on the phone may decrease the framerate.
+ The achieved framerate can vary based on what Companion device is used and environmental conditions. On the OnePlus 10 and Motorola Edge 40 Pro, the full 200 Hz can generally be achieved outside of especially hot environments. On the OnePlus 8, the framerate typically drops to ~120 Hz within a few minutes of starting a recording. Other apps running simultaneously on the phone may decrease the framerate.

- After a recording is uploaded to Pupil Cloud, gaze data is automatically re-computed at the full 200 Hz framerat and can be downloaded from there.
+ After a recording is uploaded to Pupil Cloud, gaze data is automatically re-computed at the full 200 Hz framerate and can be downloaded from there.
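If you want to verify what framerate a recording actually achieved, you can inspect the timestamps in the exported gaze data. A minimal sketch, assuming a Pupil Cloud recording download with a `gaze.csv` containing a `timestamp [ns]` column; the folder path is hypothetical:

```python
import pandas as pd

# Load the gaze stream from a (hypothetical) recording download folder.
gaze = pd.read_csv("recording/gaze.csv")

# Timestamps are in nanoseconds; convert the total span to seconds.
ts = gaze["timestamp [ns]"]
duration_s = (ts.iloc[-1] - ts.iloc[0]) / 1e9

# Mean sampling rate over the recording (~200 Hz after Cloud re-computation).
rate_hz = (len(ts) - 1) / duration_s
print(f"Mean gaze sampling rate: {rate_hz:.1f} Hz")
```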

- The gaze estimation algorithm is based on end-2-end deep learning and provides gaze data robustly without requiring a calibration. We are currently working on a white paper that thoroughly evaluated the algorithm and will link it here once it is published.
+ The gaze estimation algorithm is based on end-2-end deep learning and provides gaze data robustly without requiring a calibration. You can find a high-level description as well as a thorough evaluation of the accuracy and robustness of the algorithm in our [white paper](https://zenodo.org/doi/10.5281/zenodo.10420388).

## Fixations & Saccades

<Badge>Pupil Cloud</Badge><Badge>Neon Player</Badge>
The two primary types of eye movements exhibited by the visual system are fixations and saccades. During fixations, the eyes are directed at a specific point in the environment. A saccade is a very quick movement where the eyes jump from one fixation to the next. Properties like the fixation duration are of significant importance for studying gaze behavior.

![Fixations](./fixations.jpg)

Fixations and saccades are calculated automatically in Pupil Cloud after uploading a recording and are included in the recording downloads. The deployed fixation detection algorithm was specifically designed for head-mounted eye trackers and offers increased robustness in the presence of head movements. Especially movements due to vestibulo-ocular reflex are compensated for, which is not the case for most other fixation detection algorithms. You can learn more about it in the [Pupil Labs fixation detector whitepaper](https://docs.google.com/document/d/1dTL1VS83F-W1AZfbG-EogYwq2PFk463HqwGgshK3yJE/export?format=pdf) and in our [publication](https://link.springer.com/article/10.3758/s13428-024-02360-0) in *Behavior Research Methods* discussing fixation detection strategies.

We detect saccades based on the fixation results, considering the gaps between fixations to be saccades. Note that this assumption only holds in the absence of smooth pursuit eye movements. Additionally, the fixation detector does not compensate for blinks, which can cause a break in a fixation and thus introduce a false saccade.
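As a rough illustration of this gap-based definition, the sketch below derives saccade intervals from an exported `fixations.csv`. The column names (`start timestamp [ns]`, `end timestamp [ns]`) are assumptions based on the standard Pupil Cloud download, and the sketch inherits the caveats above: smooth pursuit and blink-induced fixation breaks will show up as false saccades.

```python
import pandas as pd

# Hypothetical path to a Pupil Cloud recording download.
fixations = pd.read_csv("recording/fixations.csv")
fixations = fixations.sort_values("start timestamp [ns]").reset_index(drop=True)

# Treat each gap between consecutive fixations as one saccade.
saccades = pd.DataFrame(
    {
        "start timestamp [ns]": fixations["end timestamp [ns]"].iloc[:-1].to_numpy(),
        "end timestamp [ns]": fixations["start timestamp [ns]"].iloc[1:].to_numpy(),
    }
)
saccades["duration [ms]"] = (
    saccades["end timestamp [ns]"] - saccades["start timestamp [ns]"]
) / 1e6
print(saccades.head())
```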

The downloads for gaze mapping enrichments ([Reference Image Mapper](/pupil-cloud/enrichments/reference-image-mapper/#export-format), [Marker Mapper](/pupil-cloud/enrichments/marker-mapper/#export-format)) also include mapped fixations, i.e. fixations in reference image or surface coordinates respectively.

3 changes: 1 addition & 2 deletions neon/pupil-cloud/visualizations/areas-of-interest/index.md
@@ -1,4 +1,4 @@
# Areas of Interest (AOIs)

The AOI Editor allows you to draw areas of interest (AOIs) on top of the reference image or surface. You can draw anything from simple polygons to multiple disconnected shapes. This tool is available for use after a [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) or a [Marker Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/marker-mapper/) enrichment is completed.

@@ -59,6 +59,5 @@
| **average&nbsp;fixation&nbsp;duration&nbsp;[ms]** | Average fixation duration for the corresponding area of interest in milliseconds. |
| **total fixations** | Total number of fixations for the corresponding area of interest. |
| **time&nbsp;to&nbsp;first&nbsp;fixation&nbsp;[ms]** | Average time in milliseconds until the corresponding area of interest gets fixated on for the first time in a recording. |
| **time&nbsp;to&nbsp;first&nbsp;gaze&nbsp;[ms]** | Average time in milliseconds until the corresponding area of interest gets gazed at for the first time in a recording. |
| **total&nbsp;fixation&nbsp;duration&nbsp;[ms]** | Total fixation duration for the corresponding area of interest in milliseconds. |
- | **total&nbsp;gaze&nbsp;duration&nbsp;[ms]** | Total fixation duration for the corresponding area of interest in milliseconds. |
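To make the definitions in the table above concrete, here is a minimal sketch of how such per-AOI metrics could be aggregated from mapped fixation data. The input DataFrame and its column names are hypothetical stand-ins for an enrichment export; the actual `aoi_metrics.csv` is computed by Pupil Cloud.

```python
import pandas as pd

# Hypothetical mapped fixations: one row per fixation with its AOI and recording.
fx = pd.DataFrame(
    {
        "recording id": ["r1", "r1", "r1", "r2"],
        "aoi name": ["logo", "logo", "text", "logo"],
        "start timestamp [ns]": [1.0e9, 3.0e9, 5.0e9, 2.0e9],
        "duration [ms]": [250.0, 180.0, 300.0, 220.0],
    }
)

# Per-AOI aggregates matching the table above.
metrics = fx.groupby("aoi name").agg(
    average_fixation_duration_ms=("duration [ms]", "mean"),
    total_fixations=("duration [ms]", "size"),
    total_fixation_duration_ms=("duration [ms]", "sum"),
)

# Time to first fixation: onset of the first fixation on the AOI relative to
# the first fixation of the recording (a stand-in for the recording start),
# averaged across recordings.
rec_start = fx.groupby("recording id")["start timestamp [ns]"].transform("min")
fx["ttff [ms]"] = (fx["start timestamp [ns]"] - rec_start) / 1e6
metrics["time_to_first_fixation_ms"] = (
    fx.groupby(["aoi name", "recording id"])["ttff [ms]"]
    .min()
    .groupby("aoi name")
    .mean()
)
print(metrics)
```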
