## Evaluation
Because there is no objective metric indicating how well a predicted function name suits the function body, this project is evaluated manually.
Please follow these steps to create your own `examples.c2s`. You can skip this if you want to use the existing example functions.
- Write your own `examples.py`. Feel free to update the existing functions as a starting point.
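As a minimal illustration, an entry in `examples.py` could look like the following (this particular function is not part of the repository):

```python
def count_vowels(text):
    """Count the vowels in a string -- a small, self-contained
    function whose name the model should be able to predict
    from the body."""
    return sum(1 for ch in text.lower() if ch in "aeiou")
```

Short functions with descriptive names work well here, since the evaluation is a manual comparison of the predicted name against the function body.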
- Write the parsed ASTs. Use the `parse_python.py` script included in the py150 dataset to parse your `examples.py`. (Unfortunately, this script was written in Python 2.7 and fails when executed in a Python 3 environment.)

```shell
python parse_python.py "<PATH_TO_EXAMPLES_FOLDER>/examples.py" > "<PATH_TO_EXAMPLES_FOLDER>/examples.json"
```
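Since `parse_python.py` only runs under Python 2.7, it can help to check that your `examples.py` is syntactically valid before parsing it. A sketch using Python 3's built-in `ast` module (note: this does *not* produce the py150 JSON format the next step expects, it is only a sanity check):

```python
import ast

def function_names(source):
    """Return the names of all functions defined in a Python source
    string -- a quick syntax check before handing the file to the
    Python 2.7 py150 parser."""
    tree = ast.parse(source)  # raises SyntaxError on invalid code
    return [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
```

For example, `function_names(open("examples.py").read())` lists the functions the parser should find.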
- Finally, the parsed ASTs have to be transformed so that the model can read them. The `extract.py` script is included in the examples folder. (Like the rest of the project, it uses Python 3.8.)

```shell
python extract.py examples.json examples.c2s
```
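To verify the transformation worked, you can inspect `examples.c2s`. Assuming it follows the usual code2seq convention (one function per line: the target name first, then space-separated context triples), a small check could be:

```python
def summarize_c2s(lines):
    """Yield (target_name, number_of_contexts) for each function,
    assuming the common code2seq .c2s layout: target token first,
    followed by space-separated 'start,path,end' context triples."""
    for line in lines:
        parts = line.split()
        if parts:  # skip blank lines
            yield parts[0], len(parts) - 1
```

Running `list(summarize_c2s(open("examples.c2s")))` should show one entry per function in your `examples.py`.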
To evaluate a trained model, enter the code-embeddings Docker container and run the evaluation script. (When creating a new container, remember to copy the model checkpoints folder.)
```shell
python ./src/evaluate.py \
    <OUTPUT_FILE> \
    --dict <PATH_TO_PREPROCESSED_DICT> \
    --data <PATH_TO_EXAMPLES>
```