Did you try few-shot prompting GPT-4? #2

Open

lukasberglund opened this issue Feb 6, 2024 · 1 comment
@lukasberglund

Hi! I enjoyed reading your paper, and I appreciate that you provided all your code. I suspect that GPT-4 would do a lot better on some of the questions (for example, the accessibility questions) if you gave it a few-shot prompt (e.g., a five-shot prompt), along the lines of the sketch below. Did you try this out at all? If so, how well did the models do?
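For concreteness, here is a minimal sketch of how such a five-shot prompt could be assembled with the OpenAI Python client. The model name, the system instruction, and the `FEW_SHOT_EXAMPLES` pairs are hypothetical placeholders for illustration, not items from the benchmark.

```python
# Hypothetical sketch of a five-shot prompt; the Q/A pairs below are
# illustrative placeholders, not examples from the actual benchmark.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FEW_SHOT_EXAMPLES = [
    ("Anne leaves before Bob moves the ball. Does Anne know where the ball is now?", "No"),
    ("Carol hears the announcement. Is Carol aware of the meeting time?", "Yes"),
    # ... three more solved examples would go here for a full five-shot prompt
]

def build_messages(question: str) -> list[dict]:
    """Prepend solved examples so the model sees the expected answer format."""
    messages = [{"role": "system",
                 "content": "Answer the theory-of-mind question with Yes or No."}]
    for q, a in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})
    return messages

response = client.chat.completions.create(
    model="gpt-4",
    messages=build_messages("Does Dana know what Eve whispered after Dana left?"),
    temperature=0.0,
)
print(response.choices[0].message.content)
```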

@skywalker023 (Owner) commented Feb 9, 2024

Hello, thanks for your interest in our paper, and great question!
Indeed, GPT-4 will do a lot better if you give it few-shot examples. However, providing few-shot examples for theory-of-mind questions encourages the model to rely directly on lower-level processes (e.g., shortcut pattern matching) rather than mentalizing. This violates the "mentalizing" criterion for ToM validation that we discuss in the paper. Hope this helps!
