diff --git a/.lycheeignore b/.lycheeignore
index f5ea878fd..f4596189c 100644
--- a/.lycheeignore
+++ b/.lycheeignore
@@ -85,10 +85,19 @@ https://docs.pupil-labs.com/eEl3sswsTms
https://docs.pupil-labs.com/aPLnqu26tWI
https://docs.pupil-labs.com/_Jnxi1OMMTc
https://docs.pupil-labs.com/uoq11XtNH5E
+https://docs.pupil-labs.com/BZoO7zxHaiw?si=I3zF-EV4O-ancTY0
+https://docs.pupil-labs.com/jeL8gs053lg?si=6wlx4fjxlfiqrbRq
+https://docs.pupil-labs.com/zksTzVkGifk?si=3bxl0eKOgRbfoes-
+https://docs.pupil-labs.com/Bg_SiFByceY?si=d2koC7-V7bbrYL3h
+https://docs.pupil-labs.com/0r8oAn2AZMQ?si=SbSVHedGTJ4Zshfw
+https://docs.pupil-labs.com/fmy9F8Q9eW0?si=F7q399iZHGW2kArv
+https://docs.pupil-labs.com/7V3X4XmbRAM
+https://docs.pupil-labs.com/X43aTIRjwgQ?si=aTzAkRrYNqdOEf0T
+
# TCP
https://docs.pupil-labs.com/f'tcp://%7Bip%7D:%7Bpub_port%7D
https://docs.pupil-labs.com/f'tcp://%7Bip%7D:%7Bport%7D
https://docs.pupil-labs.com/f'tcp://%7Bip%7D:%7Bsub_port%7D
# Twitter as it seems to timeout
https://twitter.com/pupil_labs
-https://twitter.com/
\ No newline at end of file
+https://twitter.com/
diff --git a/src/alpha-lab/README.md b/src/alpha-lab/README.md
index da4b9ca38..939c20ed2 100644
--- a/src/alpha-lab/README.md
+++ b/src/alpha-lab/README.md
@@ -98,7 +98,7 @@ export default {
title: "RIM Room",
text: "We pushed the limits of markerless mapping with Pupil Cloud’s Reference Image Mapper - scanning an entire apartment.",
to: "/alpha-lab/multiple-rim/",
- img: "desk-overlay.png",
+ img: "desk-heatmap.jpeg",
},
{
title: "Look at my hand!",
diff --git a/src/alpha-lab/multiple-rim.md b/src/alpha-lab/multiple-rim.md
index dd7daba45..2b8e52398 100644
--- a/src/alpha-lab/multiple-rim.md
+++ b/src/alpha-lab/multiple-rim.md
@@ -9,88 +9,176 @@ tags: [Pupil Invisible, Neon, Cloud]
-
+
-In the [Reference Image Mapper](/enrichments/reference-image-mapper/) guide, we learnt how to properly set up a Reference Image Mapper enrichment, with a single reference image. However, there are some cases in which it would be useful to map gaze onto multiple reference images taken from the same environment - for example, moving in a room while interacting with certain parts of it.
+::: tip
+Level up your Reference Image Mapper workflow to extract insights from participants freely exploring their environment!
+:::
+
+## Exploring gaze patterns on multiple regions of an environment
+Understanding where subjects focus their gaze in relation to their environment is a common area of study for researchers in fields as diverse as art, architecture, and fall safety. Recently, powerful scene recognition tools such as the Reference Image Mapper enrichment in Pupil Cloud have made it possible to map gaze onto 3D environments and generate heatmap visualizations. This offers a high-level overview of visual exploration patterns and also paves the way for further analysis, such as region of interest analysis.
+
+In this guide, we will show you how to use the [Reference Image Mapper](/enrichments/reference-image-mapper/) to map a participant's gaze onto multiple areas of a living environment as they freely navigate around it.
::: tip
Before continuing, ensure you are familiar with the [Reference Image Mapper](/enrichments/reference-image-mapper) enrichment. Check out [this explainer video](https://www.youtube.com/watch?v=ygqzQEzUIS4&t=56s) for reference.
:::
-## Reference Images
+## The tools at hand
+The [Reference Image Mapper](/enrichments/reference-image-mapper/) enrichment in Pupil Cloud maps gaze onto a single reference image of an environment. However, mapping gaze onto *multiple* regions of an environment is often of interest, since it enables a deeper understanding of visual exploration patterns. This guide shows you how to leverage the Reference Image Mapper for that purpose.
-First, we will take pictures of the areas and/or furniture of the room we are interested in.
+Because the Reference Image Mapper maps gaze onto a *single* reference image at a time, we need a few things to generate *multiple* reference image mappings in a larger environment:
-| | |
-| ------------------------------------------------------ | -------------------------------------------- |
-| | |
-| | |
+- Multiple reference images of the environment
+- One or more scanning recordings. Whether a single scanning recording suffices depends on the dimensions of the space to be explored; further details on this are given in ‘Steps’ below
+- An eye tracking recording taken as the participant(s) move freely within the environment, combined with custom user-defined [events](/neon/basic-concepts/events) to segment the recording into [sections](/enrichments/#enrichment-sections) based on the specific areas the participant was looking at
-## Scanning recordings
+## Steps
+1. **Capture Reference Images:** Take pictures of the areas or objects within the environment you wish to investigate. Here are some example pictures of different areas and pieces of furniture in the living room:
-In this guide, we want to map gaze onto different parts of a living room, for this reason, we recorded **two** scanning videos. We chose to use more than one scanning recording because the environment is a bit too big to be effectively scanned just by a single one.
+*(Reference images: the desk, two TV areas, the table, the kitchen, and the cupboard)*
-Based on the environment dimension/complexity, you might need to do the same and record separate scanning videos.
+
-Please follow our [best practices](/enrichments/reference-image-mapper/#scanning-best-practices) for optimal scanning.
+2. **Record Scanning Videos:** For this tutorial, we used *five* separate scanning recordings to cover the living room environment. A larger or more complex environment might require even more scanning recordings, which is fine. Conversely, a single scanning recording may suffice for a smaller environment, provided it captures sufficient data. Remember, each scanning recording must be **under 3 minutes in duration**.
+
+Check out these videos, which show how we made the scans (and be sure to follow our [best practices](/enrichments/reference-image-mapper/#scanning-best-practices) for optimal scanning):
::: tip
-To ensure good scanning of big plain surfaces - like tables and kitchen countertops - enrich them with features. Use a printed tablecloth and/or place items to produce a successful mapping!
+To ensure accurate scanning of large plain surfaces like tables and kitchen countertops, enrich them with features. Consider using a printed tablecloth or placing items to enhance the mapping process.
:::
-
-
-
-
-
+*(Videos of the scanning recordings)*
-::: danger
-**Scanning Recording Duration**
-
-
-Please record a scanning video that is less than 3 minutes long!
-
-
-The Reference Image Mapper enrichment does **not** accept longer recordings.
-:::
-
-## Run the enrichments
-
-Here we recorded just one video where the wearer was asked to walk and freely explore the living room. Now it is time to map the subject's gaze from this video into the five pictures above.
+
-
-
+
-During the recording, the user looked at the same furniture and parts of the room multiple times. We suggest you focus on
-specific [sections](/enrichments/#enrichment-sections) of the recording based on which part of the
-room the user is exploring.
+
-For this recording, we used the following [event annotations](/invisible/basic-concepts/events) to run five Reference Image Mapper enrichments:
+3. **Eye Tracking Recordings:** Make an eye tracking recording while the participant(s) freely explore and visually interact with various elements within the environment. (You can, of course, make these recordings before capturing the reference images and scanning videos.)
-- Cupboard: `cupboard.begin` and `cupboard.end`
-- Desk: `desk.begin` and `desk.end`
-- Kitchen: `kitchen.begin` and `kitchen.end`
-- TV: `tv.begin` and `tv.end`
-- Table: `table.begin` and `table.end`
+
-## Final results
+4. **Add Custom Events:** During the eye tracking recording, participants may look at specific furniture or parts of the room multiple times. By adding custom [event](/neon/basic-concepts/events) annotations corresponding to these areas or objects, you can create [sections](/enrichments/#enrichment-sections) for the enrichments to run on. This approach allows you to run each enrichment only on the portions of the recording where a certain object is present; the sketch after this list shows how the same events can be used to segment exported data. For this tutorial, we used the following event annotations to run six Reference Image Mapper enrichments:
+ - Desk: `desk.begin` and `desk.end`
+ - TV area 1: `tv1.begin` and `tv1.end`
+ - TV area 2: `tv2.begin` and `tv2.end`
+ - Table: `table.begin` and `table.end`
+ - Kitchen: `kitchen.begin` and `kitchen.end`
+ - Cupboard: `cupboard.begin` and `cupboard.end`
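+
+The same `.begin` and `.end` events can also be used to segment the exported timeseries data offline. Here is a minimal sketch, assuming a Pupil Cloud download with `events.csv` and `gaze.csv` files that both contain a `timestamp [ns]` column (check the headers in your own export, as the layout may differ):
+
+```python
+import pandas as pd
+
+# File paths and column names are assumptions based on a typical
+# Pupil Cloud timeseries export; adjust them to your own download.
+events = pd.read_csv("raw-data-export/events.csv")
+gaze = pd.read_csv("raw-data-export/gaze.csv")
+
+def gaze_between(label):
+    """Return gaze rows recorded between `<label>.begin` and `<label>.end`."""
+    begins = events.loc[events["name"] == f"{label}.begin", "timestamp [ns]"]
+    ends = events.loc[events["name"] == f"{label}.end", "timestamp [ns]"]
+    mask = pd.Series(False, index=gaze.index)
+    for b, e in zip(begins, ends):  # a label may repeat if an area was revisited
+        mask |= gaze["timestamp [ns]"].between(b, e)
+    return gaze[mask]
+
+for label in ["desk", "tv1", "tv2", "table", "kitchen", "cupboard"]:
+    print(f"{label}: {len(gaze_between(label))} gaze samples")
+```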
-It may take several minutes to run these enrichments depending on how long your recordings are. Once everything is finished, you can visualize how gaze is simultaneously mapped both on the recording and the reference images from the Project Editor view (as shown in the video at the very beginning of this guide).
+5. **Create and run the enrichments:** You will need to create a separate enrichment for each reference image. A reasonable naming scheme *could* correspond to each area of the environment, like ‘cupboard’, ‘desk’, etc. In the temporal selection of each enrichment, be sure to use the appropriate event labels, e.g. for ‘cupboard’, use `cupboard.begin` and `cupboard.end`. Now run the enrichments to map the subject's gaze from the recording onto the reference images you captured.
-From the Enrichment view, you can visualize heatmaps of each reference image:
+## Final results
-| | |
-| ------------------------------------------------------ | --------------------------------------------------------- |
-| | |
-| | |
-| | |
-| | |
+Once the enrichments are completed, heatmaps are automatically generated illustrating the areas that attracted the most gaze. Additionally, you can download the gaze and fixation data mapped within the bounds of each reference image, enabling further in-depth analyses; see the sketch at the end of this section.
-That's it. We look forward to seeing your own mapped environments!
+*(Heatmaps for each area: desk, TV area 1, TV area 2, table, kitchen, and cupboard)*
+
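+As a starting point for such analyses, here is a minimal sketch that totals fixation dwell time per area. It assumes each enrichment download sits in a folder named after its area and that `fixations.csv` uses the column names of a typical Reference Image Mapper export; verify both against your own files:
+
+```python
+import pandas as pd
+
+# Folder layout and column names are assumptions; check your own export.
+areas = ["desk", "tv1", "tv2", "table", "kitchen", "cupboard"]
+
+for area in areas:
+    fixations = pd.read_csv(f"{area}/fixations.csv")
+    # Keep only fixations that were actually mapped onto the reference image
+    mapped = fixations[fixations["fixation detected in reference image"] == True]
+    dwell_s = mapped["duration [ms]"].sum() / 1000
+    print(f"{area}: {len(mapped)} fixations, {dwell_s:.1f} s total dwell time")
+```
+
+From here, you could rank areas by dwell time or compare visual exploration across participants.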
diff --git a/src/alpha-lab/scanpath-rim.md b/src/alpha-lab/scanpath-rim.md
index d471510ba..08d0ee681 100644
--- a/src/alpha-lab/scanpath-rim.md
+++ b/src/alpha-lab/scanpath-rim.md
@@ -13,123 +13,72 @@ tags: [Pupil Invisible, Neon, Cloud]
-The [Reference Image Mapper](/enrichments/reference-image-mapper/) is a powerful tool that maps gaze onto 2D
-images of real-world environments and generates heatmaps. Now, we offer a new way to visualize your Reference Image Mapper
-data. We have created a ready-to-use script that generates static and dynamic scanpaths, providing deeper insights into
-patterns of visual behavior.
-
::: tip
-Before continuing, ensure you are familiar with the [Reference Image Mapper](/enrichments/reference-image-mapper)
-enrichment. Check out [this explainer video](https://www.youtube.com/watch?v=ygqzQEzUIS4&t=56s) for reference.
+Picture this: Build and customise scanpath visualisations with your Reference Image Mapper exports!
:::
-## What is a scanpath?
-A scanpath is a graphical representation of an individual's gaze movements. It shows the sequence of fixations, or pauses
-in gaze, and the rapid eye movements made between fixations, known as saccades. The scanpath offers a glimpse into what
-the observer is focusing on and the duration and frequency of their attention to different aspects of the scene. This
-information is a valuable tool for understanding a person's visual attention and perception.
-
-
-
-## What you'll need:
-- A Reference Image Mapper export download
-- Python 3.7 or higher
-- [This](https://gist.github.com/elepl94/9f669c4d81e455cf2095957831219664) ready-to-go script
-
-## Running the code
-All you need to do is run the command `python3 RIM_scanpath.py` in your terminal. A prompt will then appear asking for
-the location of the Reference Image Mapper export folder. After this, just sit back and wait for the processing to finish.
-Upon completion, the resulting scanpath visualisations will be saved in a newly created sub-folder called "scanpath”.
-
-If you wish to enhance the appearance of your scanpaths, keep reading for additional instructions!
-
-## Personalization
-### To each their own color
-
-This function generates random colors for each participant based on their names.
-
-```python
-def color_generator(...):
- colors = {
- subj: (
- random.randint(0, 255),
- random.randint(0, 255),
- random.randint(0, 255),
- )
- for subj in names
- }
-
- return colors
-```
-
-
-However, if you prefer to assign specific colors to each participant, you can easily modify the function to suit your needs. An example could be:
+## Unlocking visual exploration with scanpaths
+A scanpath is a graphical representation of an individual's gaze. It shows the sequence of fixations (pauses in gaze) and the rapid eye movements made between fixations (saccades). Scanpaths offer a glimpse into how an observer focused their attention on different aspects of a scene, making them a valuable tool for understanding visual attention and perception.
-``` python
-def color_generator():
- colors = {}
- colors['Subject1'] = (0, 0, 255)
- colors['Subject2'] = (255, 0, 0)
- colors['Subject3'] = (0, 255, 0)
-
- return colors
-```
-
+In this guide, we will show you how to generate both static and dynamic scanpath visualisations from your Reference Image Mapper exported data, as in the video above.
-### Make it font-tastic
-If you have a preferred font or would like to change the size, simply edit the draw_on_frame() function. The fixation
-IDs are displayed in black text with a white border to make them stand out from the background. If you adjust the font
-size, it's also recommended to increase the values of `font_thick_w` and `font_thick_b` to maintain visual contrast.
-``` python
-def draw_on_frame(...):
-# text aesthetics
- font = cv2.FONT_HERSHEY_DUPLEX
- font_size = 1
- font_thick_w = 3
- font_thick_b = 1
-...
-```
-### My name is legend
-The script includes two functions for creating a legend to display the wearer names and corresponding colors:
-
-1. `draw_name_legend()`: This function creates a legend box that displays only the name of the wearer on their individual scanpath video and image.
-2. `draw_all_names_legend()`: This function creates a legend that displays all the wearer names on the final general scanpath image.
-
-To customize the appearance of the legend, such as the position, dimensions, or colors of the rectangular white box or the colored line,
-you can modify the following parameters in both functions:
+::: tip
+Before continuing, ensure you are familiar with the [Reference Image Mapper](/enrichments/reference-image-mapper)
+enrichment. Check out [this explainer video](https://www.youtube.com/watch?v=ygqzQEzUIS4&t=56s) for reference.
+:::
-- `r_end_point` - x and y values of the ending coordinates of the rectangular legend box
-- `r_start_point` - x and y values of the starting coordinates of the rectangular legend box
-- `l_end_point` - x and y values of the ending coordinates of the colored line
-- `l_start_point` - x and y values of the starting coordinates of the colored line
-- In `cv2.rectangle`, edit `color` to set a new color for the legend box
-- In `cv2.line`, edit `thickness` to set a new width for the colored line
+## Building the visualisations in an offline context
+The [Reference Image Mapper](/enrichments/reference-image-mapper) available in Pupil Cloud maps gaze onto 2D images of real-world environments and can generate heatmaps, but it does not currently produce scanpath visualisations. Since scanpaths characterise *where*, *when*, and *for how long* attention was focused on various elements, we developed a script that generates both static and dynamic scanpaths from your Reference Image Mapper data exported from Pupil Cloud.
+
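+To make the idea concrete, here is a minimal sketch of a static scanpath, assuming the file and column names of a typical Reference Image Mapper export. The gist linked below handles the real details, including the dynamic (video) version:
+
+```python
+import pandas as pd
+import matplotlib.pyplot as plt
+
+# File paths and column names are assumptions; adjust to your own export.
+fixations = pd.read_csv("export/fixations.csv")
+fixations = fixations[fixations["fixation detected in reference image"] == True]
+
+img = plt.imread("export/reference_image.jpeg")
+x, y = fixations["fixation x [px]"], fixations["fixation y [px]"]
+
+plt.imshow(img)
+plt.plot(x, y, color="red", alpha=0.5, linewidth=1)         # connecting lines approximate saccades
+plt.scatter(x, y, s=fixations["duration [ms]"], alpha=0.5)  # marker area scales with fixation duration
+for i, (xi, yi) in enumerate(zip(x, y), start=1):           # number fixations in temporal order
+    plt.annotate(str(i), (xi, yi), ha="center", va="center", color="white", fontsize=7)
+plt.axis("off")
+plt.savefig("static_scanpath.png", dpi=200, bbox_inches="tight")
+```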
+## Steps
+1. Run a [Reference Image Mapper](https://docs.pupil-labs.com/enrichments/reference-image-mapper/) enrichment and download the results
+2. Download [this](https://gist.github.com/elepl94/9f669c4d81e455cf2095957831219664) gist and follow the instructions in the [readme](https://gist.github.com/elepl94/9f669c4d81e455cf2095957831219664#file-readme-md)
+
+## Final results
+Once the script finishes, the resulting scanpath visualisations are saved in a newly created sub-folder named "scanpath". For each participant, you get the reference image with their scanpath superimposed, along with a video that shows gaze behavior on the reference image with a dynamic scanpath overlay. You also get an aggregated visualisation combining all participants' scanpaths, which supports more comprehensive analyses.
+
+*(Example outputs: an individual participant's scanpath and the aggregated scanpath across all participants)*
diff --git a/src/media/alpha-lab/Jack_scanpath.jpeg b/src/media/alpha-lab/Jack_scanpath.jpeg
new file mode 100644
index 000000000..b7fdb2a4d
Binary files /dev/null and b/src/media/alpha-lab/Jack_scanpath.jpeg differ
diff --git a/src/media/alpha-lab/cupboard-heatmap.jpeg b/src/media/alpha-lab/cupboard-heatmap.jpeg
new file mode 100644
index 000000000..1c396296a
Binary files /dev/null and b/src/media/alpha-lab/cupboard-heatmap.jpeg differ
diff --git a/src/media/alpha-lab/cupboard-img.png b/src/media/alpha-lab/cupboard-img.png
deleted file mode 100644
index 6b14afcbe..000000000
Binary files a/src/media/alpha-lab/cupboard-img.png and /dev/null differ
diff --git a/src/media/alpha-lab/cupboard-overlay.png b/src/media/alpha-lab/cupboard-overlay.png
deleted file mode 100644
index 620518f5f..000000000
Binary files a/src/media/alpha-lab/cupboard-overlay.png and /dev/null differ
diff --git a/src/media/alpha-lab/cupboard.jpeg b/src/media/alpha-lab/cupboard.jpeg
new file mode 100644
index 000000000..bbfc9a84d
Binary files /dev/null and b/src/media/alpha-lab/cupboard.jpeg differ
diff --git a/src/media/alpha-lab/desk-heatmap.jpeg b/src/media/alpha-lab/desk-heatmap.jpeg
new file mode 100644
index 000000000..b8491a9f9
Binary files /dev/null and b/src/media/alpha-lab/desk-heatmap.jpeg differ
diff --git a/src/media/alpha-lab/desk-img.png b/src/media/alpha-lab/desk-img.png
deleted file mode 100644
index 67a8c3183..000000000
Binary files a/src/media/alpha-lab/desk-img.png and /dev/null differ
diff --git a/src/media/alpha-lab/desk-overlay.png b/src/media/alpha-lab/desk-overlay.png
deleted file mode 100644
index cccfd0a64..000000000
Binary files a/src/media/alpha-lab/desk-overlay.png and /dev/null differ
diff --git a/src/media/alpha-lab/desk.jpeg b/src/media/alpha-lab/desk.jpeg
new file mode 100644
index 000000000..264c568e9
Binary files /dev/null and b/src/media/alpha-lab/desk.jpeg differ
diff --git a/src/media/alpha-lab/general_scanpath.jpeg b/src/media/alpha-lab/general_scanpath.jpeg
new file mode 100644
index 000000000..bfb3e4c0a
Binary files /dev/null and b/src/media/alpha-lab/general_scanpath.jpeg differ
diff --git a/src/media/alpha-lab/kitchen+table-img.jpeg b/src/media/alpha-lab/kitchen+table-img.jpeg
deleted file mode 100644
index 000eb6c34..000000000
Binary files a/src/media/alpha-lab/kitchen+table-img.jpeg and /dev/null differ
diff --git a/src/media/alpha-lab/kitchen+table-overlay.png b/src/media/alpha-lab/kitchen+table-overlay.png
deleted file mode 100644
index 5527740b9..000000000
Binary files a/src/media/alpha-lab/kitchen+table-overlay.png and /dev/null differ
diff --git a/src/media/alpha-lab/kitchen-heatmap.jpeg b/src/media/alpha-lab/kitchen-heatmap.jpeg
new file mode 100644
index 000000000..011dde700
Binary files /dev/null and b/src/media/alpha-lab/kitchen-heatmap.jpeg differ
diff --git a/src/media/alpha-lab/kitchen-imgs.png b/src/media/alpha-lab/kitchen-imgs.png
deleted file mode 100644
index e6d01ec22..000000000
Binary files a/src/media/alpha-lab/kitchen-imgs.png and /dev/null differ
diff --git a/src/media/alpha-lab/kitchen-overlay.png b/src/media/alpha-lab/kitchen-overlay.png
deleted file mode 100644
index 38935e109..000000000
Binary files a/src/media/alpha-lab/kitchen-overlay.png and /dev/null differ
diff --git a/src/media/alpha-lab/kitchen.jpeg b/src/media/alpha-lab/kitchen.jpeg
new file mode 100644
index 000000000..4441c33a8
Binary files /dev/null and b/src/media/alpha-lab/kitchen.jpeg differ
diff --git a/src/media/alpha-lab/table-heatmap.jpeg b/src/media/alpha-lab/table-heatmap.jpeg
new file mode 100644
index 000000000..916d1043a
Binary files /dev/null and b/src/media/alpha-lab/table-heatmap.jpeg differ
diff --git a/src/media/alpha-lab/table.jpeg b/src/media/alpha-lab/table.jpeg
new file mode 100644
index 000000000..60d40e54e
Binary files /dev/null and b/src/media/alpha-lab/table.jpeg differ
diff --git a/src/media/alpha-lab/tv-img.png b/src/media/alpha-lab/tv-img.png
deleted file mode 100644
index c04f1a02b..000000000
Binary files a/src/media/alpha-lab/tv-img.png and /dev/null differ
diff --git a/src/media/alpha-lab/tv-overlay.png b/src/media/alpha-lab/tv-overlay.png
deleted file mode 100644
index a39b30583..000000000
Binary files a/src/media/alpha-lab/tv-overlay.png and /dev/null differ
diff --git a/src/media/alpha-lab/tv1-heatmap.jpeg b/src/media/alpha-lab/tv1-heatmap.jpeg
new file mode 100644
index 000000000..0607cc30d
Binary files /dev/null and b/src/media/alpha-lab/tv1-heatmap.jpeg differ
diff --git a/src/media/alpha-lab/tv1.jpeg b/src/media/alpha-lab/tv1.jpeg
new file mode 100644
index 000000000..7a84a8a2a
Binary files /dev/null and b/src/media/alpha-lab/tv1.jpeg differ
diff --git a/src/media/alpha-lab/tv2-heatmap.jpeg b/src/media/alpha-lab/tv2-heatmap.jpeg
new file mode 100644
index 000000000..cd5c0a48c
Binary files /dev/null and b/src/media/alpha-lab/tv2-heatmap.jpeg differ
diff --git a/src/media/alpha-lab/tv2.jpeg b/src/media/alpha-lab/tv2.jpeg
new file mode 100644
index 000000000..75d023772
Binary files /dev/null and b/src/media/alpha-lab/tv2.jpeg differ