Visual World Paradigm

A classic visual world study showing how people predict upcoming words, recorded with a Gazepoint eye tracker.

Authors: Kapil Mulchandani, Manpa Barman, Pritom Gogoi (listed alphabetically).

Course: Acquisition and analysis of eye-tracking data

Semester: Summer semester 2023

💡 Project Abstract

This study explores the dynamics of predictive language processing through the visual world paradigm (VWP), a widely used method in cognitive psychology. The primary objective is to examine how individuals anticipate upcoming words as spoken instructions unfold, using a Gazepoint eye tracker for precise gaze-pattern analysis. The investigation focuses on the impact of competitor words on gaze patterns in order to study the cognitive mechanisms underlying real-time language comprehension. Our experiment uses a set of competitor words that share phonetic or semantic similarities with the target, and tests the hypothesis that the presence of such competitors leads to an increased number of fixations on them, reflecting participants' evolving predictions of the upcoming word.

Reference Paper: Allopenna, P., Magnuson, J., & Tanenhaus, M. (1998). Tracking the time course of spoken word recognition using eye movements: evidence for continuous mapping models. Journal of Memory and Language, 38, 419-439.

💻 Installation

Software requirements:

  • OpenSesame (with the PsychoPy backend) to build and run the experiment
  • Python 3 to run the preprocessing and analysis scripts in scripts/ and src/
  • Quarto to build the report

Hardware requirements:

  • A Gazepoint eye tracker (the experiment can also be run with the Advanced mouse click option, see below)

Our report is created with Quarto; you can install Quarto from https://quarto.org.

📄 Overview of Folder Structure

.
├── data # Contains the raw and processed data for the project.
├── docs # Contains the documentation for the project.
│   ├── analysis
│   └── experiment
├── experiment # Contains the Opensesame experiment file for the project.
├── plots  # Contains all the plots generated during the analysis
│   ├── agg_plot_square.png
│   ├── final
│   └── fixation
├── presentation # Contains the final presentation for the project.
├── project_milestones # Contains the project milestones for the project.
├── README.md
├── report # Contains the report files for the project.
│   ├── custom.scss
│   ├── docs
│   ├── frontiers.csl
│   ├── Makefile
│   ├── README.md
│   └── src
├── scripts # Contains all the scripts used for preprocessing the stimuli
│   ├── conv_to_wav.sh
│   ├── remove_trailing_silence.py
│   ├── resize_imgs.py
│   └── run_py_for_all.sh
├── src # Contains all the source code for the analysis
│   ├── constants.py
│   ├── create_analysis_plot.py
│   ├── create_fixation_plots.py
│   ├── utils.py
│   ├── plots.py
│   ├── run_preprocessing.py
│   └── core.py
├── stimuli # Contains all the stimuli files used in the experiment
│   ├── audio-stimuli
│   ├── grid.png
│   ├── image-stimuli
│   ├── README.md
│   └── stimuli-final.csv
└── LICENSE
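The scripts folder contains small standalone utilities for preparing the stimuli. As a purely illustrative sketch (not the actual contents of resize_imgs.py), a Pillow-based resize pass over a folder of image stimuli could look like the following; the 300×300 px target size and the output folder name are assumptions:

# Hypothetical sketch of an image-resize pass, not the actual resize_imgs.py.
# Assumes Pillow is installed; the 300x300 target size and paths are assumptions.
from pathlib import Path
from PIL import Image

def resize_all(src_dir: str, dst_dir: str, size=(300, 300)) -> None:
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for img_path in Path(src_dir).glob("*.png"):
        with Image.open(img_path) as img:
            img.resize(size, Image.Resampling.LANCZOS).save(out / img_path.name)

resize_all("stimuli/image-stimuli", "resized-stimuli")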

📊 How to run the experiment

  1. Clone the repository

    git clone https://github.com/ReboreExplore/visual_word_paradigm_project
  2. Download the experiment file from the experiment folder and open it in OpenSesame.

    [!NOTE]
    For the next two steps (3 and 4), the stimuli files are already bundled with the downloaded experiment file. If you wish to design the experiment from scratch, follow steps 3 and 4 to load the stimuli; otherwise jump directly to step 5.

  3. Load the audio-stimuli and image-stimuli files into the OpenSesame file pool. The files are available in the stimuli folder.

  4. Load the randomization file stimuli-final.csv into the loop item. The file is available in the stimuli folder.

  5. Select PsychoPy as the backend.

  6. Select Gazepoint as the eye tracker.

    [!NOTE]
    If you are not using the eye tracker, you can also choose the Advanced mouse click option to run the experiment.

  7. Run the experiment.

The experiment runs for 24 trials and takes around 5 minutes to complete. Calibrate the eye tracker independently for each participant before starting the experiment, following the eye tracker manual.

Important

If you use other stimuli files or generate your own stimuli, make sure you follow the preprocessing steps described in the stimuli folder before running the experiment.
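For reference, a trailing-silence trim like the one remove_trailing_silence.py performs can be sketched with pydub roughly as follows. This is an illustration under assumptions, not the project's actual script; the -40 dBFS threshold and file names are placeholders:

# Hypothetical sketch of trailing-silence removal, not the actual remove_trailing_silence.py.
# Assumes pydub (and ffmpeg) are installed; threshold and file names are placeholders.
from pydub import AudioSegment
from pydub.silence import detect_nonsilent

def trim_trailing_silence(in_wav: str, out_wav: str, silence_thresh_dbfs: int = -40) -> None:
    audio = AudioSegment.from_wav(in_wav)
    nonsilent = detect_nonsilent(audio, min_silence_len=100, silence_thresh=silence_thresh_dbfs)
    if nonsilent:
        audio = audio[: nonsilent[-1][1]]  # keep everything up to the last non-silent chunk
    audio.export(out_wav, format="wav")

trim_trailing_silence("input.wav", "input_trimmed.wav")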

📕 Experiment Guidelines

Follow the guidelines below to ensure that the experiment runs smoothly; the linked documents cover each stage in detail.

  1. Before the Experiment
  2. Consent Form
  3. After the Experiment
  4. Study Information Sheet

🗃 Dataset

The data recorded during our experiments can be obtained here. The data has been anonymized and cannot be associated with individual participants.

The processed and raw folders need to be placed in the data/ folder.
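After downloading and extracting, the expected layout is:

data
├── processed
└── raw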

📈 How to run the analysis

Fixation plots

Run the create_fixation_plots.py script on the raw data recording whose fixation plots you want to view.

# this runs the script for subject 12.
# The data folder `sub-12` should exist in the given path.
python create_fixation_plots.py --subject 12 --path ../

This step will generate the fixation plots for subject 12 in the folder audio_target_plots_12/.
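For context, a fixation plot simply overlays the recorded fixation coordinates on the stimulus grid. The sketch below is illustrative only, not the project's create_fixation_plots.py; the CSV path and the Gazepoint-style FPOGX/FPOGY column names (normalized screen coordinates) are assumptions:

# Illustrative sketch, not the actual create_fixation_plots.py.
# Assumes a CSV with normalized Gazepoint FPOGX/FPOGY columns and the grid background.
import matplotlib.pyplot as plt
import pandas as pd

fix = pd.read_csv("sub-12/trial_01.csv")    # hypothetical file name
grid = plt.imread("../stimuli/grid.png")

fig, ax = plt.subplots()
ax.imshow(grid, extent=[0, 1, 1, 0])        # align image with normalized gaze coordinates
ax.scatter(fix["FPOGX"], fix["FPOGY"], s=10, c="red", alpha=0.6)
ax.set_xlim(0, 1)
ax.set_ylim(1, 0)                           # screen y grows downward
ax.set_title("Fixations: subject 12, trial 1")
fig.savefig("fixation_trial_01.png", dpi=150)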

Final analysis plot

  1. Run the run_preprocessing.py script on the raw data recording that you want to preprocess.
# this runs the script for subject 12.
# The data folder `sub-12` should exist in the given path.
python run_preprocessing.py --subject 12 --path ../

This step will generate the intermediate CSV file for subject 12 in the intermediate_csv folder.

  2. Run the create_analysis_plot.py script to generate and view the final analysis plot. Optionally, add the --save flag to save it.
python create_analysis_plot.py --path ./intermediate_csv/
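The final plot is the classic VWP curve: the proportion of fixations on each interest area (target, competitor, distractors) in successive time bins after audio onset. A rough sketch of that aggregation, assuming an intermediate CSV with time_ms and roi columns (both column names are assumptions, not the script's actual format):

# Illustrative sketch of the aggregation behind a proportion-of-fixations plot;
# not the actual create_analysis_plot.py. File and column names are assumptions.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("intermediate_csv/sub-12.csv")   # hypothetical file name
df["bin"] = (df["time_ms"] // 50) * 50            # 50 ms time bins

counts = df.groupby(["bin", "roi"]).size().unstack("roi", fill_value=0)
props = counts.div(counts.sum(axis=1), axis=0)    # per-bin fixation proportions

props.plot()                                      # one curve per interest area
plt.xlabel("Time from audio onset (ms)")
plt.ylabel("Fixation proportion")
plt.show()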

📝 License

MIT License

👨‍🚀 Show your support

Give a ⭐️ if this project helped you!

Resources

  1. https://www.sciencedirect.com/science/article/abs/pii/S0749596X97925584
  2. https://people.ku.edu/~sjpa/Classes/CBS592/EyeTracking/visual-world-paradigm.html
  3. https://osdoc.cogsci.nl/3.3/tutorials/visual-world/
