diff --git a/.lycheeignore b/.lycheeignore
index f5ea878fd..7e304ed2d 100644
--- a/.lycheeignore
+++ b/.lycheeignore
@@ -85,10 +85,11 @@ https://docs.pupil-labs.com/eEl3sswsTms
 https://docs.pupil-labs.com/aPLnqu26tWI
 https://docs.pupil-labs.com/_Jnxi1OMMTc
 https://docs.pupil-labs.com/uoq11XtNH5E
+https://docs.pupil-labs.com/X43aTIRjwgQ?si=aTzAkRrYNqdOEf0T
 # TCP
 https://docs.pupil-labs.com/f'tcp://%7Bip%7D:%7Bpub_port%7D
 https://docs.pupil-labs.com/f'tcp://%7Bip%7D:%7Bport%7D
 https://docs.pupil-labs.com/f'tcp://%7Bip%7D:%7Bsub_port%7D
 # Twitter as it seems to timeout
 https://twitter.com/pupil_labs
-https://twitter.com/
\ No newline at end of file
+https://twitter.com/
diff --git a/src/alpha-lab/scanpath-rim.md b/src/alpha-lab/scanpath-rim.md
index d471510ba..8a525ee55 100644
--- a/src/alpha-lab/scanpath-rim.md
+++ b/src/alpha-lab/scanpath-rim.md
@@ -13,123 +13,86 @@ tags: [Pupil Invisible, Neon, Cloud]
-The [Reference Image Mapper](/enrichments/reference-image-mapper/) is a powerful tool that maps gaze onto 2D
-images of real-world environments and generates heatmaps. Now, we offer a new way to visualize your Reference Image Mapper
-data. We have created a ready-to-use script that generates static and dynamic scanpaths, providing deeper insights into
-patterns of visual behavior.
+::: tip
+Picture this: custom scanpath visualisations built from your Reference Image Mapper exports!
+:::
+
+## Visualising gaze exploration with scanpaths
+Scanpaths are graphical representations of gaze over time. They offer a glimpse into how an observer distributed their
+attention across different parts of a scene, which makes them a valuable tool for understanding visual attention and
+perception. The video above shows:
+- Fixation locations are visualized as blue numbered circles.
+- Fixation duration is mapped to circle size: longer fixations correspond to bigger circles.
+- Saccades are shown as blue lines connecting consecutive fixations: longer lines correspond to larger saccadic amplitudes (larger shifts in gaze).
+
+
+In this guide, we will show you how to generate both static and dynamic scanpath visualisations using your Reference
+Image Mapper export data. A minimal code sketch of the fixation-and-saccade mapping described above follows the tip below.
 
 ::: tip
 Before continuing, ensure you are familiar with the [Reference Image Mapper](/enrichments/reference-image-mapper) enrichment. Check out
 [this explainer video](https://www.youtube.com/watch?v=ygqzQEzUIS4&t=56s) for reference.
 :::
 
-## What is a scanpath?
-A scanpath is a graphical representation of an individual's gaze movements. It shows the sequence of fixations, or pauses
-in gaze, and the rapid eye movements made between fixations, known as saccades. The scanpath offers a glimpse into what
-the observer is focusing on and the duration and frequency of their attention to different aspects of the scene. This
-information is a valuable tool for understanding a person's visual attention and perception.
-
-
-
-## What you'll need:
-- A Reference Image Mapper export download
-- Python 3.7 or higher
-- [This](https://gist.github.com/elepl94/9f669c4d81e455cf2095957831219664) ready-to-go script
-
-## Running the code
-All you need to do is run the command `python3 RIM_scanpath.py` in your terminal. A prompt will then appear asking for
-the location of the Reference Image Mapper export folder. After this, just sit back and wait for the processing to finish.
-Upon completion, the resulting scanpath visualisations will be saved in a newly created sub-folder called "scanpath”.
-
-If you wish to enhance the appearance of your scanpaths, keep reading for additional instructions!
-
-## Personalization
-### To each their own color
-
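+
+Below is a small, self-contained sketch of this mapping. It uses made-up fixation data rather than a real export, and is
+only meant to illustrate how fixation duration can drive circle size and how consecutive fixations are joined into a
+scanpath. Fuller sketches that work on an actual export follow further down.
+
+```python
+# Toy illustration of the mapping described above: circle size follows fixation
+# duration, and consecutive fixations are connected by (saccade) lines.
+import matplotlib.pyplot as plt
+
+# Made-up fixations: (x [px], y [px], duration [ms]) -- not real export data
+fixations = [(120, 80, 150), (300, 140, 420), (260, 310, 240), (480, 330, 600)]
+
+xs = [f[0] for f in fixations]
+ys = [f[1] for f in fixations]
+sizes = [f[2] for f in fixations]  # longer fixation -> bigger circle
+
+fig, ax = plt.subplots()
+ax.plot(xs, ys, color="tab:blue", linewidth=1.5, zorder=1)          # saccade lines
+ax.scatter(xs, ys, s=sizes, color="tab:blue", alpha=0.5, zorder=2)  # fixation circles
+for order, (x, y) in enumerate(zip(xs, ys), start=1):
+    ax.annotate(str(order), (x, y), ha="center", va="center")       # fixation order
+ax.invert_yaxis()  # image coordinates: origin in the top-left corner
+ax.set_title("Toy scanpath")
+plt.show()
+```
+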

-This function generates random colors for each participant based on their names.
-
-```python
-def color_generator(...):
-    colors = {
-        subj: (
-            random.randint(0, 255),
-            random.randint(0, 255),
-            random.randint(0, 255),
-        )
-        for subj in names
-    }
+## Extending current tools
+The [Reference Image Mapper](/enrichments/reference-image-mapper) enrichment in Pupil Cloud maps gaze onto
+2D images and can subsequently generate heatmaps. However, it does not yet support scanpath visualizations.
+We therefore developed a script that shows you how to build your own scanpaths from Reference Image Mapper data.
 
-    return colors
-```
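+
+To make the idea concrete before you dive into the full script, here is a minimal sketch (not the script itself) of how a
+static scanpath could be drawn from an export with pandas and OpenCV. The folder layout and column names below are
+assumptions based on a typical Reference Image Mapper download; check them against your own export before running it.
+
+```python
+"""Minimal sketch: draw one static scanpath on a Reference Image Mapper export.
+
+Assumed (adjust to your export): the folder contains `reference_image.jpeg` and
+`fixations.csv`, and the CSV has the columns `fixation x [px]`,
+`fixation y [px]`, `duration [ms]` and `fixation detected in reference image`.
+"""
+from pathlib import Path
+
+import cv2
+import pandas as pd
+
+export_dir = Path("path/to/reference-image-mapper-export")  # <-- your export folder
+
+image = cv2.imread(str(export_dir / "reference_image.jpeg"))
+fixations = pd.read_csv(export_dir / "fixations.csv")
+
+# Keep only fixations that were actually mapped onto the reference image.
+# The string comparison works whether the column is read as bool or as text.
+mapped = fixations["fixation detected in reference image"].astype(str) == "True"
+fixations = fixations[mapped]
+# If your enrichment covers several recordings, you may also want to filter to
+# a single one here (e.g. on a `recording id` column) before drawing.
+
+xs = fixations["fixation x [px]"].round().astype(int).to_list()
+ys = fixations["fixation y [px]"].round().astype(int).to_list()
+durations = fixations["duration [ms]"].to_list()
+
+colour = (255, 0, 0)  # BGR
+previous = None
+for order, (x, y, duration) in enumerate(zip(xs, ys, durations), start=1):
+    radius = max(10, int(duration / 20))              # longer fixation -> bigger circle
+    if previous is not None:
+        cv2.line(image, previous, (x, y), colour, 2)  # saccade line
+    cv2.circle(image, (x, y), radius, colour, 2)      # fixation circle
+    cv2.putText(image, str(order), (x, y), cv2.FONT_HERSHEY_DUPLEX, 1, colour, 2)
+    previous = (x, y)
+
+cv2.imwrite(str(export_dir / "scanpath_sketch.png"), image)
+```
+
+The same drawing loop can be extended to individual video frames to obtain a dynamic scanpath overlay, similar to what
+the full script produces.
+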

-

-However, if you prefer to assign specific colors to each participant, you can easily modify the function to suit your needs. An example could be:
-
-``` python
-def color_generator():
-    colors = {}
-    colors['Subject1'] = (0, 0, 255)
-    colors['Subject2'] = (255, 0, 0)
-    colors['Subject3'] = (0, 255, 0)
-
-    return colors
-```
-
-### Make it font-tastic
-If you have a preferred font or would like to change the size, simply edit the draw_on_frame() function. The fixation
-IDs are displayed in black text with a white border to make them stand out from the background. If you adjust the font
-size, it's also recommended to increase the values of `font_thick_w` and `font_thick_b` to maintain visual contrast.
-``` python
-def draw_on_frame(...):
-# text aesthetics
-    font = cv2.FONT_HERSHEY_DUPLEX
-    font_size = 1
-    font_thick_w = 3
-    font_thick_b = 1
-...
-```
-### My name is legend
-The script includes two functions for creating a legend to display the wearer names and corresponding colors:
-
-1. `draw_name_legend()`: This function creates a legend box that displays only the name of the wearer on their individual scanpath video and image.
-2. `draw_all_names_legend()`: This function creates a legend that displays all the wearer names on the final general scanpath image.
-
-To customize the appearance of the legend, such as the position, dimensions, or colors of the rectangular white box or the colored line,
-you can modify the following parameters in both functions:
-
-- `r_end_point` - x and y values of the ending coordinates of the rectangular legend box
-- `r_start_point` - x and y values of the starting coordinates of the rectangular legend box
-- `l_end_point` - x and y values of the ending coordinates of the colored line
-- `l_start_point` - x and y values of the starting coordinates of the colored line
-- In `cv2.rectangle`, edit `color` to set a new color for the legend box
-- In `cv2.line`, edit `thickness` to set a new width for the colored line
+
+## Steps
+1. Run a [Reference Image Mapper enrichment](https://docs.pupil-labs.com/enrichments/reference-image-mapper/) and download the results
+2. Download [this script](https://gist.github.com/elepl94/9f669c4d81e455cf2095957831219664) and follow the [installation instructions](https://gist.github.com/elepl94/9f669c4d81e455cf2095957831219664#installation)
+
+## Review the scanpaths
+
+ +
+
+
+After the script has finished running, you'll find the resulting scanpath visualizations in a newly created
+sub-folder named "scanpath". For each participant, you will obtain a reference image with their scanpath superimposed
+on it, as well as a video featuring a dynamic scanpath overlay. Finally, if you had multiple participants, an aggregated
+visualization combining all participants' scanpaths will be available, giving a more comprehensive overview of the
+group's gaze behavior.
+
+ Jack Scanpath
+
+ General Scanpath
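+
+If you would like to compose an aggregated view like the one above yourself, the sketch below overlays each recording's
+scanpath in its own colour, echoing the per-wearer colours used by the script. As before, the file and column names are
+assumptions based on a typical Reference Image Mapper download; adjust them to match your export.
+
+```python
+"""Sketch: overlay every participant's scanpath, in its own colour, on one image.
+
+Assumed (adjust to your export): a single `fixations.csv` covers all recordings
+and contains a `recording id` column in addition to the coordinate columns used
+above.
+"""
+import random
+from pathlib import Path
+
+import cv2
+import pandas as pd
+
+export_dir = Path("path/to/reference-image-mapper-export")  # <-- your export folder
+
+image = cv2.imread(str(export_dir / "reference_image.jpeg"))
+fixations = pd.read_csv(export_dir / "fixations.csv")
+mapped = fixations["fixation detected in reference image"].astype(str) == "True"
+fixations = fixations[mapped]
+
+random.seed(0)  # keep the colour assignment stable across runs
+for recording_id, group in fixations.groupby("recording id"):
+    colour = tuple(random.randint(0, 255) for _ in range(3))  # one BGR colour per recording
+    xs = group["fixation x [px]"].round().astype(int).to_list()
+    ys = group["fixation y [px]"].round().astype(int).to_list()
+    points = list(zip(xs, ys))
+    for start, end in zip(points, points[1:]):
+        cv2.line(image, start, end, colour, 2)    # saccades
+    for x, y in points:
+        cv2.circle(image, (x, y), 12, colour, 2)  # fixations (fixed radius for clarity)
+
+cv2.imwrite(str(export_dir / "general_scanpath_sketch.png"), image)
+```
+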
diff --git a/src/media/alpha-lab/Jack_scanpath.jpeg b/src/media/alpha-lab/Jack_scanpath.jpeg
new file mode 100644
index 000000000..b7fdb2a4d
Binary files /dev/null and b/src/media/alpha-lab/Jack_scanpath.jpeg differ
diff --git a/src/media/alpha-lab/general_scanpath.jpeg b/src/media/alpha-lab/general_scanpath.jpeg
new file mode 100644
index 000000000..bfb3e4c0a
Binary files /dev/null and b/src/media/alpha-lab/general_scanpath.jpeg differ