
Al_fixes_2404 #673

Merged 3 commits on Apr 1, 2024
4 changes: 2 additions & 2 deletions alpha-lab/dense-pose/index.md
@@ -1,6 +1,6 @@
---
title: "Map Your Gaze Onto Body Parts With DensePose"
description: "Using DensePose to map gaze onto body parts. 'To be or not to be?' proclaims Prince Hamlet while holding a skull in his hand. But where is the audience looking? At the hand, at the arm, or the face?"

permalink: /alpha-lab/dense-pose/
meta:
- name: twitter:card
@@ -24,9 +24,9 @@

# Map Gaze Onto Body Parts

<TagLinks :tags="$frontmatter.tags" />


<Youtube src="nt_zNSBMJWI"/>


**Act 3, Scene 1:** _"To be or not to be?"_ But where is the audience looking? At the hand, the face or the arm? <br>

@@ -34,12 +34,12 @@
Have you ever wondered which body parts we gaze upon while conversing with others? Where a professional basketball player looks just before passing? Does hand movement play a role when delivering a speech? This guide will show you how to get data that can be used to answer these questions!
:::

## Understanding Visual Behaviour on Body Parts


Understanding which body parts people look at during interactions is an important topic in
fields ranging from sports science to psycholinguistics. This guide shows you how to use Neon or Pupil Invisible eye
tracking with [DensePose](https://github.com/facebookresearch/DensePose) (the GitHub repository of [Dense Human Pose Estimation In The Wild](https://arxiv.org/abs/1802.00434))
to characterise gaze behaviour on body parts that appear in the scene video, as shown above.
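
To make this concrete, here is a minimal sketch of the core lookup, not the notebook's actual code: given DensePose's per-pixel part-index map for one scene-video frame and a gaze point in pixel coordinates, it returns the body part under gaze. The `part_map` array, the partial label table, and the function name are illustrative assumptions.

```python
import numpy as np

# A few of DensePose's 24 coarse part indices (0 means no person at that
# pixel); see the paper for the full mapping. These labels are illustrative.
PART_LABELS = {0: "background", 1: "torso", 2: "torso", 23: "head", 24: "head"}

def gazed_part(part_map: np.ndarray, gaze_xy: tuple) -> str:
    """Return the body-part label under the gaze point for one frame.

    part_map -- (H, W) array of DensePose part indices for the frame
    gaze_xy  -- gaze position in scene-camera pixels (x, y)
    """
    h, w = part_map.shape
    x = int(np.clip(round(gaze_xy[0]), 0, w - 1))  # clamp gaze samples
    y = int(np.clip(round(gaze_xy[1]), 0, h - 1))  # that fall off-frame
    part = int(part_map[y, x])
    return PART_LABELS.get(part, f"part {part}")
```

Counting these labels over every frame of the scene video yields the kind of per-body-part gaze statistics this guide produces.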


## What Tools Enable This?

@@ -49,19 +49,19 @@

## Steps

- 1. Download a [Raw data export]() from your project in Pupil Cloud.
- 2. Upload (uncompressed) one of the subfolders (recording folder) that you're interested in to Google Drive from the raw data export.
+ 1. Download a recording from your project in Pupil Cloud (in Timeseries & Scene Video Format).

+ 2. Upload (uncompressed) one of the recording folders that you're interested in to your Google Drive account. (Don't want to use Google Drive? Check out how to [run it locally](#running-locally).)
3. Access our [Google Colab Notebook](https://colab.research.google.com/drive/1s6mBNAhcnxhJlqxeaQ2IZMk_Ca381p25?usp=sharing) and carefully follow the instructions.


<div class="mb-4" style="display:flex;justify-content:center;">
<a href="https://colab.research.google.com/drive/1s6mBNAhcnxhJlqxeaQ2IZMk_Ca381p25?usp=sharing" target="_blank">
<img style="width:180px" src="https://img.shields.io/static/v1?label=&message=Open%20in%20Google%20Colab&color=blue&labelColor=grey&logo=Google%20Colab&logoColor=#F9AB00" alt="colab badge">

</a>
</div>
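
Before uploading, it can help to sanity-check that the recording folder contains what the notebook expects. A minimal sketch, assuming the file names of a typical Timeseries & Scene Video download; adjust the path to your own folder:

```python
from pathlib import Path

recording = Path("/path/to/your/recording_folder")  # the folder you upload

# Files the notebook relies on; names assumed from a typical Cloud download.
expected = ["gaze.csv", "world_timestamps.csv"]
missing = [name for name in expected if not (recording / name).exists()]

print("missing files:", missing or "none")
print("scene video(s):", [v.name for v in recording.glob("*.mp4")])
```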

## Results

After executing the code, new files will be generated. Check the new DensePoseColab folder for the results:


1. A video showing a bounding box that delimits each detected person, a blue-shaded mask over the body parts, a yellow-highlighted body part when it's gazed at, and the typical red circle for the gaze position.
2. An image showing the body segments and the number of frames in which each was gazed at, as shown below:
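
For a feel of how such a summary can be computed, here is a hedged sketch that counts gazed frames per body part. The `gaze_per_frame.csv` file and its `gazed_part` column are hypothetical; the notebook's real outputs may be named and structured differently.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical per-frame table: one row per scene-video frame with the
# body part under gaze. File and column names are assumptions.
df = pd.read_csv("DensePoseColab/gaze_per_frame.csv")

counts = df["gazed_part"].value_counts()  # frames gazed per body part
counts.plot(kind="bar", title="Frames gazed per body part")
plt.tight_layout()
plt.show()
```
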
11 changes: 4 additions & 7 deletions alpha-lab/map-your-gaze-to-a-2d-screen/index.md
@@ -56,11 +56,9 @@ By looking at the screen when you press the button, you'll have a visual referen

## Once You Have Everything Recorded

- - Create a new [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) enrichment, or add your new eye tracking recordings to an existing enrichment. Run the enrichment, and download the results by right-clicking the enrichment in Cloud once it's computed (see the screenshot below).
+ - Create a new [Reference Image Mapper](https://docs.pupil-labs.com/neon/pupil-cloud/enrichments/reference-image-mapper/) enrichment, or add your new eye tracking recordings to an existing enrichment. Run the enrichment, and download its results from Cloud once it's computed.

- ![Download Reference Image Mapper results](./download_rim.png)

- - Now you'll need to get the raw data from your new recording(s). Download the raw data by clicking on downloads at the bottom left side of the page in the project's view.
+ - Now you'll need to get the timeseries and scene video from your new recording(s). Download them by clicking on downloads at the bottom left side of the page in the project's view.

## Running the Code

@@ -71,7 +69,7 @@ Now you can run the code by executing the following command in your console:
The application/script will prompt for a series of user inputs:

1. Select the folder path to the Reference Image Mapper download. This folder should contain the gaze.csv and sections.csv files and the reference image.
- 2. Select the subfolder corresponding to raw data export of the eye tracking recording you are using. This directory should contain event.csv, gaze.csv, world_timestamps.csv, and video files (don't select the folder with the scanning recording you made of the scene)
+ 2. Select the subfolder corresponding to the Timeseries & Scene Video export of the eye tracking recording you are using. This directory should contain event.csv, gaze.csv, world_timestamps.csv, and video files (don't select the folder with the scanning recording you made of the scene)
3. Select the video from the screen display
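
To avoid mixing up the folders in the prompts above, a quick pre-flight check can help. This is only a sketch; the paths are placeholders, and the file names are the ones listed in the steps above:

```python
from pathlib import Path

rim_download = Path("/path/to/reference_image_mapper_download")  # prompt 1
recording = Path("/path/to/timeseries_recording")                # prompt 2

for folder, names in [
    (rim_download, ["gaze.csv", "sections.csv"]),
    (recording, ["event.csv", "gaze.csv", "world_timestamps.csv"]),
]:
    for name in names:
        status = "ok" if (folder / name).exists() else "MISSING"
        print(f"{folder.name}/{name}: {status}")
```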

::: danger
@@ -211,7 +209,6 @@ Go to lines **76 & 77** and modify them according to the parameters we had in th
**Do not use 8080!** Pupil Invisible uses this one for the real-time API.
:::


- **L77:** Password -> Obvious, isn't it?

Once everything is set, you only have to run _recording.py_.
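
As an illustration only, the parameters on lines 76 & 77 might look like the sketch below. The actual variable names in _recording.py_ may differ, and the password is assumed here to be the one you configured for OBS.

```python
# Hypothetical shape of recording.py lines 76-77; check the actual file.
PORT = 4455                       # any free port EXCEPT 8080, which the
                                  # Pupil Invisible real-time API occupies
PASSWORD = "your-obs-password"    # assumed: the password you set in OBS
```
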
@@ -222,4 +219,4 @@ This will automatically connect to Pupil Invisible, launch OBS in your system, w

::: tip
If you need assistance in implementing this guide, reach out to us via email at [[email protected]](mailto:[email protected]), on our [Discord server](https://pupil-labs.com/chat/), or visit our [Support Page](https://pupil-labs.com/products/support/) for dedicated support options.
:::