Merge master to GH #642

Merged · 5 commits · Nov 28, 2023
alpha-lab/gaze-contingency-assistive/index.md (2 changes: 1 addition & 1 deletion)

@@ -24,12 +24,12 @@

# A practical guide to implementing gaze contingency for assistive technology

<TagLinks :tags="$frontmatter.tags" />

<Youtube src="cuvWqVOAc5M"/>

::: tip
Imagine a world where transformative assistive solutions enable you to browse the internet with a mere glance or write an email using only your eyes. This is not science fiction; it is the realm of gaze-contingent technology.

:::

## Hacking the eyes with gaze contingency
@@ -54,7 +54,7 @@
To locate the screen, we use [AprilTags](https://april.eecs.umich.edu/software/apriltag) to identify the image of the
screen as it appears in Neon’s scene camera. Gaze data is transferred to the computer via Neon's
[Real-time API](https://docs.pupil-labs.com/neon/real-time-api/tutorials/). We then transform gaze from _scene camera_ to _screen-based_
coordinates using a [homography](<https://en.m.wikipedia.org/wiki/Homography_(computer_vision)>) approach like the [Marker Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/marker-mapper/)
enrichment we offer in Pupil Cloud as a post-hoc solution. The heavy lifting of all this is handled by
our [Real-time Screen Gaze](https://github.com/pupil-labs/realtime-screen-gaze/) package (written for this guide).
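
To make this transform concrete, here is a minimal sketch of the homography step using OpenCV. It is purely illustrative (the Real-time Screen Gaze package handles this internally): the marker corner positions, screen resolution, and gaze point below are made-up stand-ins for real AprilTag detections and Neon gaze data.

```python
import cv2
import numpy as np

# Hypothetical screen corners as located via the AprilTags in the scene camera
# image (top-left, top-right, bottom-right, bottom-left).
scene_pts = np.array([[212, 148], [1410, 139], [1423, 867], [201, 880]], dtype=np.float32)

# The same corners in screen coordinates, here for a 1920x1080 display.
screen_pts = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]], dtype=np.float32)

# Estimate the homography mapping scene-camera pixels to screen pixels.
H, _ = cv2.findHomography(scene_pts, screen_pts)

# Transform one gaze point from scene-camera to screen coordinates.
gaze_scene = np.array([[[830.0, 510.0]]], dtype=np.float32)
gaze_screen = cv2.perspectiveTransform(gaze_scene, H)[0, 0]
print(gaze_screen)  # approximate on-screen gaze position in pixels
```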

@@ -73,7 +73,7 @@
## Steps

1. Follow the instructions in [Gaze-controlled Cursor Demo](https://github.com/pupil-labs/gaze-controlled-cursor-demo) to download and run it locally on your computer.
-2. Start up [Neon](https://docs.pupil-labs.com/neon/getting-started/first-recording/), make sure it’s detected in the demo window, then check out the settings:
+2. Start up [Neon](https://docs.pupil-labs.com/neon/data-collection/first-recording/), make sure it’s detected in the demo window, then check out the settings:
- Adjust the `Tag Size` and `Tag Brightness` settings as necessary until all four AprilTag markers are successfully tracked (markers that are not tracked will display a red border as shown in the image below).
   - Modify the `Dwell Radius` and `Dwell Time` values to customize the size of the gaze circle and the dwell time required for gaze to trigger a mouse action (a minimal dwell-detection sketch follows this list).
- Click on `Mouse Control` and embark on your journey into the realm of gaze contingency.
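
As a rough illustration of what the `Dwell Radius` and `Dwell Time` settings control, here is a minimal dwell detector. This is a sketch, not the demo's actual implementation: it assumes gaze samples arrive as screen-pixel coordinates, and the parameter names merely echo the demo's settings.

```python
import math
import time

class DwellDetector:
    """Report a dwell when gaze stays within a radius for a set time."""

    def __init__(self, dwell_radius_px=50, dwell_time_s=1.0):
        self.dwell_radius_px = dwell_radius_px
        self.dwell_time_s = dwell_time_s
        self.anchor = None       # where the current dwell candidate started
        self.anchor_time = None

    def update(self, x, y):
        """Feed one gaze sample; returns True once per completed dwell."""
        now = time.monotonic()
        if self.anchor is None or math.dist((x, y), self.anchor) > self.dwell_radius_px:
            # Gaze left the radius: restart the dwell timer at the new location.
            self.anchor, self.anchor_time = (x, y), now
            return False
        if now - self.anchor_time >= self.dwell_time_s:
            self.anchor = None   # reset so each dwell triggers only once
            return True
        return False

detector = DwellDetector(dwell_radius_px=50, dwell_time_s=1.0)
# for x, y in gaze_stream: fire a click whenever detector.update(x, y) is True
```
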
alpha-lab/map-your-gaze-to-a-2d-screen/index.md (14 changes: 7 additions & 7 deletions)

@@ -8,21 +8,21 @@
import TagLinks from '@components/TagLinks.vue'
</script>

# Map and visualise gaze onto display content using the Reference Image Mapper

<TagLinks :tags="$frontmatter.tags" />

<Youtube src="OXIUjIzCplc"/>

-In this guide, we will show you how to map and visualise gaze onto a screen with dynamic content, e.g. a video, web browsing or any other content of your choice, using the [Reference Image Mapper](https://docs.pupil-labs.com/pupil-cloud/enrichments/reference-image-mapper/) enrichment and a few clicks.
+In this guide, we will show you how to map and visualise gaze onto a screen with dynamic content, e.g. a video, web browsing or any other content of your choice, using the [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) enrichment and a few clicks.

::: tip
**Note:** This tutorial requires some technical knowledge, but don't worry. We made it almost click-and-run for you! You can learn as much or as little as you like.
:::

## What you'll need

-Before continuing, ensure you are familiar with the [Reference Image Mapper](https://docs.pupil-labs.com/pupil-cloud/enrichments/reference-image-mapper/) enrichment. Check out [this explainer video](https://www.youtube.com/watch?v=ygqzQEzUIS4&t=56s) for reference.
+Before continuing, ensure you are familiar with the [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) enrichment. Check out [this explainer video](https://www.youtube.com/watch?v=ygqzQEzUIS4&t=56s) for reference.

We recommend first running the enrichment, e.g. with a short recording of your desk and monitor/screen, to ensure it's working okay. Once satisfied, you can use the same reference image and scanning recording for your dynamic screen content.

@@ -37,27 +37,27 @@

## Making the recording

Let's assume you have everything ready to go – your participant is sitting in front of the screen wearing the eye tracker, and your screen content is ready to play.

So that we can capture your participant's visual interactions with the screen content, we will need to make sure that both the _eye tracking_ **and** _screen recordings_ happen at the same time.

-Importantly, both sources (eye tracking and screen recording) record individually. As such, you'll need what we call an [event annotation](https://docs.pupil-labs.com/neon/general/events/) to synchronise them later.
+Importantly, both sources (eye tracking and screen recording) record individually. As such, you'll need what we call an [event annotation](https://docs.pupil-labs.com/neon/data-collection/events/) to synchronise them later.

-The [event annotation](https://docs.pupil-labs.com/neon/general/events/) should be used to indicate the beginning of the _screen content recording_ in the _eye tracking recording_, and be named `start.video`.
-Check [here](https://docs.pupil-labs.com/neon/general/events/) how you can create these events in the Cloud.
+The [event annotation](https://docs.pupil-labs.com/neon/data-collection/events/) should be used to indicate the beginning of the _screen content recording_ in the _eye tracking recording_, and be named `start.video`.
+Check [here](https://docs.pupil-labs.com/neon/data-collection/events/) how you can create these events in the Cloud.

::: tip
**Tip:**
When you initiate your recordings, you'll need to know when the screen recording started, relative to your eye tracking recording. Thus, start your eye tracker recording first, and make sure that the eye tracker scene camera faces the OBS program on the screen. Then, start the screen recording.

-By looking at the screen when you press the button, you'll have a visual reference to create the [event annotation](https://docs.pupil-labs.com/neon/general/events/) later in Cloud.
+By looking at the screen when you press the button, you'll have a visual reference to create the [event annotation](https://docs.pupil-labs.com/neon/data-collection/events/) later in Cloud.

**Recap**: Eye tracking **first**; screen recording **second**
:::
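
With the `start.video` event in place, aligning the two recordings comes down to subtracting its timestamp. Below is a minimal sketch under assumed Pupil Cloud export conventions (an `events.csv` with `name` and `timestamp [ns]` columns, and a `gaze.csv` with `timestamp [ns]`); verify the exact column names in your download.

```python
import pandas as pd

events = pd.read_csv("events.csv")
gaze = pd.read_csv("gaze.csv")

# Timestamp of the "start.video" event, i.e. when the screen recording began.
video_start_ns = events.loc[events["name"] == "start.video", "timestamp [ns]"].iloc[0]

# Re-express gaze timestamps as seconds since the screen recording started.
gaze["video time [s]"] = (gaze["timestamp [ns]"] - video_start_ns) / 1e9

# Samples with negative times were recorded before the screen capture began.
gaze = gaze[gaze["video time [s]"] >= 0]
```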

## Once you have everything recorded

-- Create a new [Reference Image Mapper](https://docs.pupil-labs.com/pupil-cloud/enrichments/reference-image-mapper/) enrichment, or add your new eye tracking recordings to an existing enrichment. Run the enrichment, and download the results by right-clicking the enrichment in Cloud once it's computed (see the screenshot below).
+- Create a new [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) enrichment, or add your new eye tracking recordings to an existing enrichment. Run the enrichment, and download the results by right-clicking the enrichment in Cloud once it's computed (see the screenshot below).

![Download Reference Image Mapper results](./download_rim.png)

@@ -94,7 +94,7 @@

::: warning
**Tip:**
You might find some libav mp4 warnings. These are due to issues with the AAC codec and timestamping, and they only appear when adding the audio stream. You can safely ignore them, or you can disable audio within the code.

:::

## How the code works
alpha-lab/multiple-rim/index.md (12 changes: 6 additions & 6 deletions)

@@ -39,21 +39,21 @@ Level-up your Reference Image Mapper workflow to extract insights from participa
## Exploring gaze patterns in multiple regions of an environment

Understanding where people focus their gaze while exploring their environment is a topic of interest for researchers in
-diverse fields, ranging from Art and Architecture to Zoology. The [Reference Image Mapper](https://docs.pupil-labs.com/pupil-cloud/enrichments/reference-image-mapper/)
+diverse fields, ranging from Art and Architecture to Zoology. The [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/)
enrichment in Pupil Cloud makes it possible to map gaze onto 3D real-world environments and generate heatmaps. These provide
an informative overview of visual exploration patterns and also pave the way for further analysis, such as region of interest analysis.

-In this guide, we will demonstrate how to use the [Reference Image Mapper](https://docs.pupil-labs.com/pupil-cloud/enrichments/reference-image-mapper/) to map a
+In this guide, we will demonstrate how to use the [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) to map a
participant's gaze onto various regions of a living environment as they freely navigate through it.

::: tip
-Before continuing, ensure you are familiar with the [Reference Image Mapper](https://docs.pupil-labs.com/pupil-cloud/enrichments/reference-image-mapper/) enrichment.
+Before continuing, ensure you are familiar with the [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) enrichment.
Check out [this explainer video](https://www.youtube.com/watch?v=ygqzQEzUIS4&t=56s) for reference.
:::

## The tools at hand

-The [Reference Image Mapper](https://docs.pupil-labs.com/pupil-cloud/enrichments/reference-image-mapper/) enables mapping of gaze onto a
+The [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) enables mapping of gaze onto a
_single_ reference image of an environment. However, there is often a need to analyze _multiple_ regions for a more in-depth
understanding of visual exploration. This guide demonstrates how to accomplish this by applying the enrichment multiple
times during the same recording to generate mappings and heatmaps for different regions.
@@ -66,7 +66,7 @@ For the analysis, we will need the following:
- Single or multiple scanning recordings. The choice of whether to use single or multiple scanning recordings depends on
the dimensions of the space to be explored (see below for examples)
- An eye tracking recording taken as the participant(s) move freely within the environment
-- User-inputted [events](https://docs.pupil-labs.com/neon/general/events/) to segment the recording(s) into [sections](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/#enrichment-sections) based on
+- User-inputted [events](https://docs.pupil-labs.com/neon/data-collection/events/) to segment the recording(s) into [sections](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/#enrichment-sections) based on
the areas the person was looking at

1. **Capture Reference Images:** Take pictures of the areas or objects within the environment you wish to investigate. Here are some example pictures of different areas and pieces of furniture in our environment (a living room, dining area, and kitchen):
@@ -152,7 +152,7 @@

<div style="margin-bottom: 5px;"></div>

-4. **Add Custom Events:** During the eye tracking recording, users may focus on a specific region once or multiple times. I.e. they may revisit that region. By adding custom [event](https://docs.pupil-labs.com/neon/general/events/) annotations corresponding to these periods, you can create [sections](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/#enrichment-sections) for the enrichments to be computed. This enables you to run each enrichment only on the section(s) of recording where a certain region is being gazed at. For this guide, we used the following event annotations to run five Reference Image Mapper enrichments:
+4. **Add Custom Events:** During the eye tracking recording, users may focus on a specific region once or multiple times. I.e. they may revisit that region. By adding custom [event](https://docs.pupil-labs.com/neon/data-collection/events/) annotations corresponding to these periods, you can create [sections](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/#enrichment-sections) for the enrichments to be computed. This enables you to run each enrichment only on the section(s) of recording where a certain region is being gazed at. For this guide, we used the following event annotations to run five Reference Image Mapper enrichments:

- Desk: `desk.begin` and `desk.end`
- TV area 1: `tv1.begin` and `tv1.end`
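
To illustrate how such `<region>.begin` / `<region>.end` pairs carve a recording into sections, here is a small sketch. It assumes the Pupil Cloud export layout (`events.csv` with `name` and `timestamp [ns]` columns, `gaze.csv` with `timestamp [ns]`) and, for brevity, a single begin/end pair per region.

```python
import pandas as pd

events = pd.read_csv("events.csv")
gaze = pd.read_csv("gaze.csv")

def gaze_in_section(region):
    """Return the gaze samples recorded between <region>.begin and <region>.end."""
    begin = events.loc[events["name"] == f"{region}.begin", "timestamp [ns]"].iloc[0]
    end = events.loc[events["name"] == f"{region}.end", "timestamp [ns]"].iloc[0]
    return gaze[gaze["timestamp [ns]"].between(begin, end)]

desk_gaze = gaze_in_section("desk")  # the samples the "desk" enrichment maps
```
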
alpha-lab/scanpath-rim/index.md (2 changes: 1 addition & 1 deletion)

@@ -36,7 +36,7 @@ Thus, we chose to develop a script that shows you how to build your own scanpath


## Steps
-1. Run a [Reference Image Mapper enrichment](https://docs.pupil-labs.com/enrichments/reference-image-mapper/) and download the results
+1. Run a [Reference Image Mapper enrichment](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) and download the results
2. Download [this script](https://gist.github.com/elepl94/9f669c4d81e455cf2095957831219664) and follow the [installation instructions](https://gist.github.com/elepl94/9f669c4d81e455cf2095957831219664#installation) (a sketch of the script's core drawing step follows below)
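
Before running the script, it may help to see the core idea: a scanpath is just circles scaled by fixation duration, connected in visit order. A rough sketch, assuming the Reference Image Mapper export provides a `fixations.csv` with `fixation x [px]` / `fixation y [px]` columns in reference-image coordinates and a `duration [ms]` column (verify against your download):

```python
import cv2
import pandas as pd

img = cv2.imread("reference_image.jpeg")
fixations = pd.read_csv("fixations.csv")

prev = None
for _, fix in fixations.iterrows():
    center = (int(fix["fixation x [px]"]), int(fix["fixation y [px]"]))
    radius = max(5, int(fix["duration [ms]"] / 50))  # longer fixation, bigger circle
    cv2.circle(img, center, radius, (0, 0, 255), 2)
    if prev is not None:
        cv2.line(img, prev, center, (0, 255, 255), 1)  # connect consecutive fixations
    prev = center

cv2.imwrite("scanpath.png", img)
```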

## Review the scanpaths
components/Youtube.vue (5 changes: 1 addition & 4 deletions)

@@ -20,10 +20,7 @@
</script>

<template>
-  <div
-    class="pb-4"
-    style="display: flex; justify-content: center; aspect-ratio: 16 / 9"
-  >
+  <div style="display: flex; justify-content: center; aspect-ratio: 16 / 9">
<iframe
width="100%"
height="height"
custom.css (4 changes: 4 additions & 0 deletions)

@@ -561,6 +561,10 @@
background-color: var(--vp-c-default-1);
}

+.subgrid {
+  grid-template-rows: subgrid;
+}

@media (min-width: 768px) {
#app .VPNavScreen {
display: unset;
default_config.mts (16 changes: 15 additions & 1 deletion)

@@ -42,7 +42,21 @@ type ThemeConfigProps = {
};

export const config: ConfigProps = {
-  head: [["link", { rel: "icon", href: "/favicon.png" }]],
+  head: [
+    ["link", { rel: "icon", href: "/favicon.png" }],
+    [
+      "script",
+      {
+        async: "",
+        src: "https://www.googletagmanager.com/gtag/js?id=G-YSCHB0T6ML",
+      },
+    ],
+    [
+      "script",
+      {},
+      "window.dataLayer = window.dataLayer || [];\nfunction gtag(){dataLayer.push(arguments);}\ngtag('js', new Date());\ngtag('config', 'G-YSCHB0T6ML');",
+    ],
+  ],
appearance: true,
cleanUrls: true,
};
invisible/.vitepress/config.mts (4 changes: 2 additions & 2 deletions)

@@ -72,7 +72,7 @@ let theme_config_additions = {
{
text: "Recordings",
items: [
{ text: "Overview", link: "/hardware/recordings/" },
{ text: "Overview", link: "/data-collection/recordings/" },
{ text: "Data Streams", link: "/data-collection/data-streams/" },
{ text: "Data Format", link: "/data-collection/data-format/" },
],
@@ -111,7 +111,7 @@
},
{
text: "Publications & Citation",
link: "/hardware/publications-and-citation/",
link: "/data-collection/publications-and-citation/",
},
{ text: "Troubleshooting", link: "/data-collection/troubleshooting/" },
],
Binary file not shown.
invisible/tailwind.config.js (7 changes: 7 additions & 0 deletions)

@@ -14,4 +14,11 @@ module.exports = {
},
},
plugins: [],
+  safelist: [
+    {
+      // responsive variants are safelisted via `variants`, not regex strings
+      pattern: /grid-cols-(1|2|3|4|5)/,
+      variants: ["sm"],
+    },
+    "m-auto",
+  ],
};
landing-page/.vitepress/config.mts (13 changes: 13 additions & 0 deletions)

@@ -35,6 +35,19 @@ const config_additions = {
titleTemplate: ":title - Pupil Labs Docs",
description: "Documentation for all Pupil Labs products.",
head: [
["link", { rel: "icon", href: "/favicon.png" }],
[
"script",
{
async: "",
src: "https://www.googletagmanager.com/gtag/js?id=G-YSCHB0T6ML",
},
],
[
"script",
{},
"window.dataLayer = window.dataLayer || [];\nfunction gtag(){dataLayer.push(arguments);}\ngtag('js', new Date());\ngtag('config', 'G-YSCHB0T6ML');",
],
[
"script",
{},