
Commit

Merge pull request #642 from pupil-labs/master
Merge master to GH
mikelgg93 authored Nov 28, 2023
2 parents 1b652a3 + 2404319 commit 37dfa53
Showing 15 changed files with 144 additions and 95 deletions.
2 changes: 1 addition & 1 deletion alpha-lab/gaze-contingency-assistive/index.md
@@ -73,7 +73,7 @@ Follow the steps in the next section to be able to use your gaze to navigate a w
## Steps

1. Follow the instructions in [Gaze-controlled Cursor Demo](https://github.com/pupil-labs/gaze-controlled-cursor-demo) to download and run it locally on your computer.
- 2. Start up [Neon](https://docs.pupil-labs.com/neon/getting-started/first-recording/), make sure it’s detected in the demo window, then check out the settings:
+ 2. Start up [Neon](https://docs.pupil-labs.com/neon/data-collection/first-recording/), make sure it’s detected in the demo window, then check out the settings:
- Adjust the `Tag Size` and `Tag Brightness` settings as necessary until all four AprilTag markers are successfully tracked (markers that are not tracked will display a red border as shown in the image below).
- Modify the `Dwell Radius` and `Dwell Time` values to customize the size of the gaze circle and the dwell time required for gaze to trigger a mouse action.
- Click on `Mouse Control` and embark on your journey into the realm of gaze contingency.
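To illustrate what the `Dwell Radius` and `Dwell Time` settings control, here is a minimal dwell-trigger sketch; the thresholds, data, and logic are illustrative assumptions, not the demo's actual implementation:

```python
import math

DWELL_RADIUS = 40.0   # pixels (illustrative value)
DWELL_TIME = 0.5      # seconds (illustrative value)

def detect_dwell(samples, radius=DWELL_RADIUS, dwell_time=DWELL_TIME):
    """Return the (x, y) anchor of the first dwell, or None.

    `samples` is an iterable of (t_seconds, x, y) gaze points."""
    anchor = None
    for t, x, y in samples:
        if anchor is None or math.hypot(x - anchor[1], y - anchor[2]) > radius:
            anchor = (t, x, y)          # gaze moved outside the radius: restart
        elif t - anchor[0] >= dwell_time:
            return (anchor[1], anchor[2])  # gaze held long enough: trigger
    return None

# Steady gaze near (200, 300) sampled at 10 Hz triggers after 0.5 s.
samples = [(i * 0.1, 200 + (i % 3), 300) for i in range(10)]
print(detect_dwell(samples))  # (200, 300)
```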
14 changes: 7 additions & 7 deletions alpha-lab/map-your-gaze-to-a-2d-screen/index.md
@@ -14,15 +14,15 @@ import TagLinks from '@components/TagLinks.vue'

<Youtube src="OXIUjIzCplc"/>

- In this guide, we will show you how to map and visualise gaze onto a screen with dynamic content, e.g. a video, web browsing or any other content of your choice, using the [Reference Image Mapper](https://docs.pupil-labs.com/pupil-cloud/enrichments/reference-image-mapper/) enrichment and a few clicks.
+ In this guide, we will show you how to map and visualise gaze onto a screen with dynamic content, e.g. a video, web browsing or any other content of your choice, using the [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) enrichment and a few clicks.

::: tip
**Note:** This tutorial requires some technical knowledge, but don't worry. We made it almost click and run for you! You can learn as much or as little as you like.
:::

## What you'll need

- Before continuing, ensure you are familiar with the [Reference Image Mapper](https://docs.pupil-labs.com/pupil-cloud/enrichments/reference-image-mapper/) enrichment. Check out [this explainer video](https://www.youtube.com/watch?v=ygqzQEzUIS4&t=56s) for reference.
+ Before continuing, ensure you are familiar with the [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) enrichment. Check out [this explainer video](https://www.youtube.com/watch?v=ygqzQEzUIS4&t=56s) for reference.

We recommend you run the enrichment, e.g. with a short recording of your desktop + monitor/screen to ensure it's working okay. Once satisfied, you can use the same reference image + scanning recording for your dynamic screen content.

@@ -41,23 +41,23 @@ Let's assume you have everything ready to go – your participant is sat infron

So that we can capture your participant's visual interactions with the screen content, we will need to make sure that both the _eye tracking_ **and** _screen recordings_ happen at the same time.

- Importantly, both sources (eye tracking and screen recording) record individually. As such, you'll need what we call an [event annotation](https://docs.pupil-labs.com/neon/general/events/) to synchronise them later.
+ Importantly, both sources (eye tracking and screen recording) record individually. As such, you'll need what we call an [event annotation](https://docs.pupil-labs.com/neon/data-collection/events/) to synchronise them later.

- The [event annotation](https://docs.pupil-labs.com/neon/general/events/) should be used to indicate the beginning of the _screen content recording_ in the _eye tracking recording_, and be named `start.video`.
- Check [here](https://docs.pupil-labs.com/neon/general/events/) how you can create these events in the Cloud.
+ The [event annotation](https://docs.pupil-labs.com/neon/data-collection/events/) should be used to indicate the beginning of the _screen content recording_ in the _eye tracking recording_, and be named `start.video`.
+ Check [here](https://docs.pupil-labs.com/neon/data-collection/events/) how you can create these events in the Cloud.

::: tip
**Tip:**
When you initiate your recordings, you'll need to know when the screen recording started, relative to your eye tracking recording. Thus, start your eye tracker recording first, and make sure that the eye tracker scene camera faces the OBS program on the screen. Then, start the screen recording.

- By looking at the screen when you press the button, you'll have a visual reference to create the [event annotation](https://docs.pupil-labs.com/neon/general/events/) later in Cloud.
+ By looking at the screen when you press the button, you'll have a visual reference to create the [event annotation](https://docs.pupil-labs.com/neon/data-collection/events/) later in Cloud.

**Recap**: Eye tracking **first**; screen recording **second**
:::
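Once exported, the `start.video` event is what converts eye-tracking timestamps into screen-recording time. A minimal sketch, assuming an events.csv export with `name` and `timestamp [ns]` columns (the column names and all timestamp values below are illustrative assumptions, not from this guide):

```python
import csv
import io

# Hypothetical sample of an events.csv export (column names assumed).
events_csv = """name,timestamp [ns]
recording.begin,1700000000000000000
start.video,1700000002500000000
recording.end,1700000060000000000
"""

def video_start_ns(csv_text, event_name="start.video"):
    """Return the nanosecond timestamp of the named event."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["name"] == event_name:
            return int(row["timestamp [ns]"])
    raise ValueError(f"event {event_name!r} not found")

t0 = video_start_ns(events_csv)

# Map an eye-tracking gaze timestamp onto the screen-recording timeline.
gaze_ts_ns = 1700000012500000000
video_time_s = (gaze_ts_ns - t0) / 1e9
print(video_time_s)  # 10.0
```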

## Once you have everything recorded

- - Create a new [Reference Image Mapper](https://docs.pupil-labs.com/pupil-cloud/enrichments/reference-image-mapper/) enrichment, or add your new eye tracking recordings to an existing enrichment. Run the enrichment, and download the results by right-clicking the enrichment in Cloud once it's computed (see the screenshot below).
+ - Create a new [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) enrichment, or add your new eye tracking recordings to an existing enrichment. Run the enrichment, and download the results by right-clicking the enrichment in Cloud once it's computed (see the screenshot below).

![Download Reference Image Mapper results](./download_rim.png)

12 changes: 6 additions & 6 deletions alpha-lab/multiple-rim/index.md
@@ -39,21 +39,21 @@ Level-up your Reference Image Mapper workflow to extract insights from participa
## Exploring gaze patterns in multiple regions of an environment

Understanding where people focus their gaze while exploring their environment is a topic of interest for researchers in
- diverse fields, ranging from Art and Architecture to Zoology. The [Reference Image Mapper](https://docs.pupil-labs.com/pupil-cloud/enrichments/reference-image-mapper/)
+ diverse fields, ranging from Art and Architecture to Zoology. The [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/)
enrichment in Pupil Cloud makes it possible to map gaze onto 3D real-world environments and generate heatmaps. These provide
an informative overview of visual exploration patterns and also pave the way for further analysis, such as region of interest analysis.

- In this guide, we will demonstrate how to use the [Reference Image Mapper](https://docs.pupil-labs.com/pupil-cloud/enrichments/reference-image-mapper/) to map a
+ In this guide, we will demonstrate how to use the [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) to map a
participant's gaze onto various regions of a living environment as they freely navigate through it.

::: tip
- Before continuing, ensure you are familiar with the [Reference Image Mapper](https://docs.pupil-labs.com/pupil-cloud/enrichments/reference-image-mapper/) enrichment.
+ Before continuing, ensure you are familiar with the [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) enrichment.
Check out [this explainer video](https://www.youtube.com/watch?v=ygqzQEzUIS4&t=56s) for reference.
:::

## The tools at hand

- The [Reference Image Mapper](https://docs.pupil-labs.com/pupil-cloud/enrichments/reference-image-mapper/) enables mapping of gaze onto a
+ The [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) enables mapping of gaze onto a
_single_ reference image of an environment. However, there is often a need to analyze _multiple_ regions for a more in-depth
understanding of visual exploration. This guide demonstrates how to accomplish this by applying the enrichment multiple
times during the same recording to generate mappings and heatmaps for different regions.
@@ -66,7 +66,7 @@ For the analysis, we will need the following:
- Single or multiple scanning recordings. The choice of whether to use single or multiple scanning recordings depends on
the dimensions of the space to be explored (see below for examples)
- An eye tracking recording taken as the participant(s) move freely within the environment
- - User-inputted [events](https://docs.pupil-labs.com/neon/general/events/) to segment the recording(s) into [sections](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/#enrichment-sections) based on
+ - User-inputted [events](https://docs.pupil-labs.com/neon/data-collection/events/) to segment the recording(s) into [sections](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/#enrichment-sections) based on
the areas the person was looking at

1. **Capture Reference Images:** Take pictures of the areas or objects within the environment you wish to investigate. Here are some example pictures of different areas and pieces of furniture in our environment (a living room, dining area, and kitchen):
@@ -152,7 +152,7 @@ consider placing some strategic items within the environment to increase the cha

<div style="margin-bottom: 5px;"></div>

- 4. **Add Custom Events:** During the eye tracking recording, users may focus on a specific region once or multiple times. I.e. they may revisit that region. By adding custom [event](https://docs.pupil-labs.com/neon/general/events/) annotations corresponding to these periods, you can create [sections](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/#enrichment-sections) for the enrichments to be computed. This enables you to run each enrichment only on the section(s) of recording where a certain region is being gazed at. For this guide, we used the following event annotations to run five Reference Image Mapper enrichments:
+ 4. **Add Custom Events:** During the eye tracking recording, users may focus on a specific region once or multiple times. I.e. they may revisit that region. By adding custom [event](https://docs.pupil-labs.com/neon/data-collection/events/) annotations corresponding to these periods, you can create [sections](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/#enrichment-sections) for the enrichments to be computed. This enables you to run each enrichment only on the section(s) of recording where a certain region is being gazed at. For this guide, we used the following event annotations to run five Reference Image Mapper enrichments:

- Desk: `desk.begin` and `desk.end`
- TV area 1: `tv1.begin` and `tv1.end`
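These begin/end event pairs define the sections each enrichment runs on. As a minimal sketch of the idea (made-up timestamps and data, not part of the guide's tooling), gaze samples can be sliced into per-region sections like this:

```python
# Paired begin/end events such as `desk.begin` / `desk.end` (timestamps illustrative).
events = [
    ("desk.begin", 100), ("desk.end", 250),
    ("tv1.begin", 300), ("tv1.end", 420),
]
gaze = [(t, f"sample-{t}") for t in range(0, 500, 50)]  # (timestamp, data)

def sections(events):
    """Pair each `<name>.begin` event with its matching `<name>.end`."""
    starts = {}
    out = {}
    for name, ts in events:
        region, _, edge = name.rpartition(".")
        if edge == "begin":
            starts[region] = ts
        elif edge == "end":
            out[region] = (starts.pop(region), ts)
    return out

# Keep only the gaze samples that fall inside each region's section.
by_region = {
    region: [s for ts, s in gaze if start <= ts <= end]
    for region, (start, end) in sections(events).items()
}
print(sorted(by_region))       # ['desk', 'tv1']
print(len(by_region["desk"]))  # 4
```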
2 changes: 1 addition & 1 deletion alpha-lab/scanpath-rim/index.md
@@ -36,7 +36,7 @@ Thus, we chose to develop a script that shows you how to build your own scanpath


## Steps
- 1. Run a [Reference Image Mapper enrichment](https://docs.pupil-labs.com/enrichments/reference-image-mapper/) and download the results
+ 1. Run a [Reference Image Mapper enrichment](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) and download the results
2. Download [this script](https://gist.github.com/elepl94/9f669c4d81e455cf2095957831219664) and follow the [installation instructions](https://gist.github.com/elepl94/9f669c4d81e455cf2095957831219664#installation)

## Review the scanpaths
5 changes: 1 addition & 4 deletions components/Youtube.vue
@@ -20,10 +20,7 @@
</script>

<template>
- <div
-   class="pb-4"
-   style="display: flex; justify-content: center; aspect-ratio: 16 / 9"
- >
+ <div style="display: flex; justify-content: center; aspect-ratio: 16 / 9">
<iframe
width="100%"
height="height"
4 changes: 4 additions & 0 deletions custom.css
@@ -561,6 +561,10 @@
background-color: var(--vp-c-default-1);
}

+ .subgrid {
+   grid-template-rows: subgrid;
+ }

@media (min-width: 768px) {
#app .VPNavScreen {
display: unset;
16 changes: 15 additions & 1 deletion default_config.mts
@@ -42,7 +42,21 @@ type ThemeConfigProps = {
};

export const config: ConfigProps = {
- head: [["link", { rel: "icon", href: "/favicon.png" }]],
+ head: [
+   ["link", { rel: "icon", href: "/favicon.png" }],
+   [
+     "script",
+     {
+       async: "",
+       src: "https://www.googletagmanager.com/gtag/js?id=G-YSCHB0T6ML",
+     },
+   ],
+   [
+     "script",
+     {},
+     "window.dataLayer = window.dataLayer || [];\nfunction gtag(){dataLayer.push(arguments);}\ngtag('js', new Date());\ngtag('config', 'G-YSCHB0T6ML');",
+   ],
+ ],
appearance: true,
cleanUrls: true,
};
2 changes: 1 addition & 1 deletion invisible/.vitepress/config.mts
@@ -110,7 +110,7 @@ let theme_config_additions = {
},
{
text: "Publications & Citation",
- link: "/hardware/publications-and-citation/",
+ link: "/data-collection/publications-and-citation/",
},
{ text: "Troubleshooting", link: "/data-collection/troubleshooting/" },
],
Binary file not shown.
7 changes: 7 additions & 0 deletions invisible/tailwind.config.js
@@ -14,4 +14,11 @@ module.exports = {
},
},
plugins: [],
+ safelist: [
+   {
+     pattern: /grid-cols-(1|2|3|4|5)/,
+   },
+   "^sm:grid-cols-",
+   "m-auto",
+ ],
};
13 changes: 13 additions & 0 deletions landing-page/.vitepress/config.mts
@@ -35,6 +35,19 @@ const config_additions = {
titleTemplate: ":title - Pupil Labs Docs",
description: "Documentation for all Pupil Labs products.",
head: [
+ ["link", { rel: "icon", href: "/favicon.png" }],
+ [
+   "script",
+   {
+     async: "",
+     src: "https://www.googletagmanager.com/gtag/js?id=G-YSCHB0T6ML",
+   },
+ ],
+ [
+   "script",
+   {},
+   "window.dataLayer = window.dataLayer || [];\nfunction gtag(){dataLayer.push(arguments);}\ngtag('js', new Date());\ngtag('config', 'G-YSCHB0T6ML');",
+ ],
[
"script",
{},