From d21d7bdc9f887a614795aa0939b7a7ba09b0d633 Mon Sep 17 00:00:00 2001
From: elepl94 <113358967+elepl94@users.noreply.github.com>
Date: Thu, 28 Sep 2023 11:00:29 +0200
Subject: [PATCH 1/7] Reworked tutorial contents
---
src/alpha-lab/scanpath-rim.md | 167 ++++++++++++----------------------
1 file changed, 58 insertions(+), 109 deletions(-)
diff --git a/src/alpha-lab/scanpath-rim.md b/src/alpha-lab/scanpath-rim.md
index d471510ba..08d0ee681 100644
--- a/src/alpha-lab/scanpath-rim.md
+++ b/src/alpha-lab/scanpath-rim.md
@@ -13,123 +13,72 @@ tags: [Pupil Invisible, Neon, Cloud]
-The [Reference Image Mapper](/enrichments/reference-image-mapper/) is a powerful tool that maps gaze onto 2D
-images of real-world environments and generates heatmaps. Now, we offer a new way to visualize your Reference Image Mapper
-data. We have created a ready-to-use script that generates static and dynamic scanpaths, providing deeper insights into
-patterns of visual behavior.
-
::: tip
-Before continuing, ensure you are familiar with the [Reference Image Mapper](/enrichments/reference-image-mapper)
-enrichment. Check out [this explainer video](https://www.youtube.com/watch?v=ygqzQEzUIS4&t=56s) for reference.
+Picture this: Build and customise scanpath visualisations with your Reference Image Mapper exports!
:::
-## What is a scanpath?
-A scanpath is a graphical representation of an individual's gaze movements. It shows the sequence of fixations, or pauses
-in gaze, and the rapid eye movements made between fixations, known as saccades. The scanpath offers a glimpse into what
-the observer is focusing on and the duration and frequency of their attention to different aspects of the scene. This
-information is a valuable tool for understanding a person's visual attention and perception.
-
-
-
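The scanpath definition above reduces to an ordered list of fixations joined by saccade segments. As a minimal sketch of that idea (the `(x, y, duration)` tuple layout here is an illustrative assumption, not the format of any particular export):

```python
import math

# Hypothetical fixations: (x, y, duration_ms) in reference-image pixel coordinates.
fixations = [(120, 80, 250), (300, 90, 400), (310, 240, 180)]

def saccades(fixations):
    """Segments between consecutive fixations, with their amplitude in pixels."""
    segments = []
    for (x0, y0, _), (x1, y1, _) in zip(fixations, fixations[1:]):
        amplitude = math.hypot(x1 - x0, y1 - y0)
        segments.append(((x0, y0), (x1, y1), amplitude))
    return segments

for start, end, amplitude in saccades(fixations):
    print(f"saccade {start} -> {end}: {amplitude:.1f} px")
```

A static scanpath draws each fixation as a circle (often scaled by duration) and each of these segments as a connecting line.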
-## What you'll need:
-- A Reference Image Mapper export download
-- Python 3.7 or higher
-- [This](https://gist.github.com/elepl94/9f669c4d81e455cf2095957831219664) ready-to-go script
-
-## Running the code
-All you need to do is run the command `python3 RIM_scanpath.py` in your terminal. A prompt will then appear asking for
-the location of the Reference Image Mapper export folder. After this, just sit back and wait for the processing to finish.
-Upon completion, the resulting scanpath visualisations will be saved in a newly created sub-folder called "scanpath".
-
-If you wish to enhance the appearance of your scanpaths, keep reading for additional instructions!
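The prompt-then-output flow described above can be sketched roughly as follows; this is a simplified stand-in for the gist's actual code, and the function name is an assumption:

```python
from pathlib import Path

def prepare_output(export_path: str) -> Path:
    """Validate the export folder the user typed at the prompt and create
    the "scanpath" sub-folder that the visualisations are written into."""
    export_dir = Path(export_path).expanduser()
    if not export_dir.is_dir():
        raise NotADirectoryError(f"Not a folder: {export_dir}")
    out_dir = export_dir / "scanpath"
    out_dir.mkdir(exist_ok=True)  # reuse the folder if it already exists
    return out_dir
```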
-
-## Personalization
-### To each their own color
-
-This function generates random colors for each participant based on their names.
-
-```python
-def color_generator(...):
-    colors = {
-        subj: (
-            random.randint(0, 255),
-            random.randint(0, 255),
-            random.randint(0, 255),
-        )
-        for subj in names
-    }
-
-    return colors
-```
-
-However, if you prefer to assign specific colors to each participant, you can easily modify the function to suit your needs. An example could be:
+## Unlocking visual exploration with scanpaths
+A scanpath is a graphical representation of an individual's gaze. It shows the sequence of fixations (pauses in gaze), and rapid eye movements made between fixations (saccades). Scanpaths offer a glimpse into how the observer has focused their attention on different aspects of the scene, which is a valuable tool for understanding a person's visual attention and perception.
-
-```python
-def color_generator():
-    colors = {}
-    colors['Subject1'] = (0, 0, 255)
-    colors['Subject2'] = (255, 0, 0)
-    colors['Subject3'] = (0, 255, 0)
-
-    return colors
-```
-
+In this guide, we will show you how to generate both static and dynamic scanpath visualisations using your Reference Image Mapper exported data, like in the video above.
-
-### Make it font-tastic
-If you have a preferred font or would like to change the size, simply edit the `draw_on_frame()` function. The fixation
-IDs are displayed in black text with a white border to make them stand out from the background. If you adjust the font
-size, it's also recommended to increase the values of `font_thick_w` and `font_thick_b` to maintain visual contrast.
-
-```python
-def draw_on_frame(...):
-    # text aesthetics
-    font = cv2.FONT_HERSHEY_DUPLEX
-    font_size = 1
-    font_thick_w = 3
-    font_thick_b = 1
-    ...
-```
-
-### My name is legend
-The script includes two functions for creating a legend to display the wearer names and corresponding colors:
-
-1. `draw_name_legend()`: This function creates a legend box that displays only the name of the wearer on their individual scanpath video and image.
-2. `draw_all_names_legend()`: This function creates a legend that displays all the wearer names on the final general scanpath image.
-
-To customize the appearance of the legend, such as the position, dimensions, or colors of the rectangular white box or the colored line,
-you can modify the following parameters in both functions:
-
+::: tip
+Before continuing, ensure you are familiar with the [Reference Image Mapper](/enrichments/reference-image-mapper)
+enrichment. Check out [this explainer video](https://www.youtube.com/watch?v=ygqzQEzUIS4&t=56s) for reference.
+:::
-
-- `r_end_point` - x and y values of the ending coordinates of the rectangular legend box
-- `r_start_point` - x and y values of the starting coordinates of the rectangular legend box
-- `l_end_point` - x and y values of the ending coordinates of the colored line
-- `l_start_point` - x and y values of the starting coordinates of the colored line
-- In `cv2.rectangle`, edit `color` to set a new color for the legend box
-- In `cv2.line`, edit `thickness` to set a new width for the colored line
+
+## Building the visualisations in an offline context
+The [Reference Image Mapper](/enrichments/reference-image-mapper) available in Pupil Cloud is a tool that maps gaze onto 2D images and can subsequently generate heatmaps. However, it currently does not support the production of scanpath visualizations. Since scanpaths provide a useful characterization of *where*, *when*, and *how long* attention was focused on various elements, we developed a script that enables you to generate both static and dynamic scanpaths using your Reference Image Mapper data exported from Pupil Cloud.
+
+## Steps
+1. Run a [Reference Image Mapper](https://docs.pupil-labs.com/enrichments/reference-image-mapper/) enrichment and download the results
+2. Download [this](https://gist.github.com/elepl94/9f669c4d81e455cf2095957831219664) gist and follow the instructions in the [readme](https://gist.github.com/elepl94/9f669c4d81e455cf2095957831219664#file-readme-md)
+
+## Final results
+After the script has completed its execution, you'll find the resulting scanpath visualizations stored in a newly created sub-folder named "scanpath". For each participant, you will obtain a reference image with the scanpath superimposed on it, along with a video that illustrates gaze behavior on the reference image, featuring a dynamic scanpath overlay.
+Furthermore, an aggregated visualization, combining all participants' scanpaths, will be at your disposal, providing the opportunity for more comprehensive and in-depth analyses.
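If you want to post-process the results, the outputs can be picked up programmatically. The sketch below only assumes the "scanpath" sub-folder name mentioned above; the individual file names inside it will depend on your recording and wearer names:

```python
from pathlib import Path

def list_scanpath_outputs(export_dir):
    """Return the names of the files (scanpath images and videos) that the
    script wrote into the "scanpath" sub-folder, or [] if it doesn't exist."""
    out_dir = Path(export_dir) / "scanpath"
    if not out_dir.is_dir():
        return []
    return sorted(p.name for p in out_dir.iterdir() if p.is_file())
```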