[![GitHub Workflow Status](https://github.com/phelps-sg/llm-cooperation/actions/workflows/tests.yaml/badge.svg)](https://github.com/phelps-sg/llm-cooperation/actions/workflows/tests.yaml)

This repo contains code, explanations, and results of experiments to ascertain the propensity of large language models to cooperate in social dilemmas. The experiments are described in the following papers.

S. Phelps and Y. I. Russell, *Investigating Emergent Goal-Like Behaviour in Large Language Models Using Experimental Economics*, working paper, May 2023, [arXiv:2305.07970](https://arxiv.org/abs/2305.07970)

S. Phelps and R. Rannson, *Of Models and Tin Men - a behavioural economics study of principal-agent problems in AI alignment using large-language models*, working paper, July 2023, [arXiv:2307.11137](https://arxiv.org/abs/2307.11137)
## Getting started
1. Install [mambaforge](https://github.com/conda-forge/miniforge#mambaforge).
2. In a shell:
~~~bash
make install
make run
~~~
## Configuration
To run specific experiments and parameter combinations, follow the instructions below; a sketch of how the parameter grid might be expanded into individual runs appears after these steps.

1. In a shell:
~~~bash
mkdir ~/.llm-cooperation
cat > ~/.llm-cooperation/llm_config.py << EOF
grid = {
"temperature": [0.1, 0.6],
"model": ["gpt-3.5-turbo", "gpt-4"],
"max_tokens": [300]
}
sample_size = 3
experiments = ["dictator", "dilemma"]
EOF
~~~
2. Edit `$HOME/.llm-cooperation/llm_config.py` with the required values.

3. In a shell:
~~~bash
export OPENAI_API_KEY='<key>'
make run
~~~
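
The sketch below is purely illustrative and is not code from this repository; it shows one plausible way a grid like the one in `llm_config.py` could be expanded into individual experiment conditions, assuming the runner takes the Cartesian product of the parameter lists and repeats each condition `sample_size` times.

~~~python
# Hypothetical sketch only; the real expansion logic lives inside the
# llm-cooperation package and may differ.
from itertools import product

grid = {
    "temperature": [0.1, 0.6],
    "model": ["gpt-3.5-turbo", "gpt-4"],
    "max_tokens": [300],
}
sample_size = 3
experiments = ["dictator", "dilemma"]

# Cartesian product of all parameter values -> one dict per condition.
keys = list(grid)
conditions = [dict(zip(keys, values)) for values in product(*grid.values())]

for experiment in experiments:
    for condition in conditions:
        for replication in range(sample_size):
            # A real runner would invoke the experiment here; we just report it.
            print(experiment, replication, condition)
~~~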
## Contributing
If you have a new experiment, please submit a pull request. All code should have corresponding tests, and all experiments should be replicable.
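
As a purely illustrative sketch (the payoff function and test below are hypothetical and not taken from this repository), a new experiment could ship with a small pytest-style unit test along these lines:

~~~python
# Hypothetical example of a unit test accompanying a new experiment.
import pytest


def payoff(i_cooperate: bool, they_cooperate: bool) -> int:
    """Toy Prisoner's Dilemma payoff for the focal player."""
    if i_cooperate and they_cooperate:
        return 3  # mutual cooperation
    if i_cooperate and not they_cooperate:
        return 0  # sucker's payoff
    if not i_cooperate and they_cooperate:
        return 5  # temptation to defect
    return 1  # mutual defection


@pytest.mark.parametrize(
    "i_cooperate, they_cooperate, expected",
    [(True, True, 3), (True, False, 0), (False, True, 5), (False, False, 1)],
)
def test_payoff(i_cooperate, they_cooperate, expected):
    assert payoff(i_cooperate, they_cooperate) == expected
~~~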
