Are Large Language Models Aligned with People's Social Intuitions for Human–Robot Interactions?

In this paper, we investigate how well the responses of LLMs align with people's responses in experiments from social HRI. If you use our code or data, please cite:

@misc{wachowiak2024large,
      title={Are Large Language Models Aligned with People's Social Intuitions for Human-Robot Interactions?}, 
      author={Lennart Wachowiak and Andrew Coles and Oya Celiktutan and Gerard Canal},
      year={2024},
      eprint={2403.05701},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}

Results

Correlations are highest with GPT-4, as shown in the following scatterplots:

Experiment 1: scatterplots of correlations with GPT-4

Experiment 2: scatterplots of correlations with GPT-4

For full results, refer to the paper. Scatterplots for other models can be found here for Experiment 1 and here for Experiment 2.
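
As a rough illustration of how such model–human correlations can be computed (a minimal sketch with made-up placeholder ratings, not the paper's data or code), one can pair each survey item's mean human rating with the model's rating for the same item and compute a Pearson correlation:

# Minimal sketch: Pearson correlation between human and LLM survey ratings.
# All rating values below are made-up placeholders, not data from the paper.
from scipy.stats import pearsonr

# One entry per survey item: the mean human rating and the LLM's rating
# for the same item (e.g., on a Likert scale).
human_ratings = [4.2, 3.1, 2.5, 4.8, 1.9, 3.7]  # placeholder values
llm_ratings = [4.0, 3.4, 2.2, 4.5, 2.3, 3.9]    # placeholder values

r, p = pearsonr(human_ratings, llm_ratings)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

Plotting llm_ratings against human_ratings, one point per item, yields the kind of scatterplot shown above.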

Video Stimuli

The video stimuli are available in a separate GitHub repository: https://github.com/lwachowiak/HRI-Video-Survey-on-Preferred-Robot-Responses
