
# Evaluation scripts for code models

## Compute quality

```bash
python experiments/compute_quality.py --config-name {conf}
```
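For example, to run with a specific config (the config name `code_quality` is hypothetical; the `--config-name` flag suggests a Hydra-style CLI):

```bash
python experiments/compute_quality.py --config-name code_quality
```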

## Create random splits and finetune models on them

```bash
python experiments/dataset_random_split.py
```

Add the `-r` (`--run`) flag to run the experiment immediately (on a cluster).
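For example, to create the random splits and immediately launch the finetuning runs on the cluster:

```bash
python experiments/dataset_random_split.py -r
```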

## Create quality-based splits and finetune models on them

```bash
python experiments/dataset_quality_split.py
```

Args:

- `-d`, `--data-path`: path to the desired dataset
- `-g`, `--gpu`: preferred GPU (one of `a100`, `v100`)
- `-q`, `--quality-key`: which quality key to use for the split
- `-r`, `--run`: whether to run the experiment with Slurm immediately after the split

Use the `-q` flag to change the key of the quality metric used for the split (we assume higher is better); see the example below.
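A minimal example invocation, assuming a dataset at `data/my_dataset` with a per-example quality field named `quality_score` (both names are hypothetical):

```bash
python experiments/dataset_quality_split.py \
    -d data/my_dataset \
    -g a100 \
    -q quality_score \
    -r
```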

## Train a model

```bash
python run.py --config-name {conf}
```
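As above, `{conf}` stands for a config name; a hypothetical invocation:

```bash
python run.py --config-name finetune_random_split
```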