SEVAL: a library for software engineering task evaluation
Task | Metric | Reference | Integrated? |
---|---|---|---|
Code Generation | EM (Exact Match) | CodeXGLUE - Text2Code Generation | ✔️ |
Code Generation | BLEU | CodeXGLUE - Text2Code Generation | ✔️ |
Code Translation | EM (Exact Match) | CodeXGLUE - Code Translator | ✔️ |
Code Translation | BLEU | CodeXGLUE - Code Translator | ✔️ |
Code Repair | EM (Exact Match) | CodeXGLUE - Code Refinement | ✔️ |
Code Repair | BLEU | CodeXGLUE - Code Refinement | ✔️ |
Code Completion | EM (Exact Match) | CodeXGLUE - Code Completion (token level) | ✔️ |
Code Search | | | |
Code Summarization | EM (Exact Match) | CodeXGLUE - Code-Text | ✔️ |
Clone Detection | MAP@R score | CodeXGLUE - Clone Detection | ✔️ |
Bug/Defect Prediction - Binary | EM (Exact Match) | CodeXGLUE - Defect Detection | ✔️ |
Bug/Vulnerability Type Prediction - Multi-class | | Paper with Replication Package | |
Fault/Bug Localization | | Paper with Replication Package | |
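As a quick illustration of the two most common metrics in the table, here is a minimal sketch that computes EM and a sentence-level BLEU on toy data. It uses NLTK's BLEU purely for illustration; the integrated evaluators use the scripts that ship with CodeXGLUE, whose tokenization and smoothing may differ.

```python
# Illustrative only: toy EM and BLEU computation, not SEVAL's integrated evaluators.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

predictions = ["return a + b", "x = 1"]
references = ["return a + b", "x = 2"]

# Exact Match: fraction of predictions identical to their reference.
em = sum(p == r for p, r in zip(predictions, references)) / len(references)

# Sentence-level BLEU on whitespace tokens, averaged over the examples.
smooth = SmoothingFunction().method1
bleu = sum(
    sentence_bleu([r.split()], p.split(), smoothing_function=smooth)
    for p, r in zip(predictions, references)
) / len(references)

print(f"EM: {em:.2f}, BLEU: {bleu:.2f}")  # EM: 0.50
```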
Thank you for your interest in contributing! This document outlines the process for contributing to our project; every contribution makes a real difference, and we appreciate your effort to help improve it.
- Identify your target software engineering task (Unfamiliar with SE tasks? Find them here!)
You can either integrate an existing evaluation technique or add a new one.
Note: some evaluation tasks may already be in progress. Check the pull requests tab to see whether a task is already being worked on.
- Integrate the evaluation method
Ensure that you include a detailed readme that describes how to use the evaluation method; a hypothetical sketch of such a method follows this list.
An example of an evaluation method and appropriate readme can be found here.
- Add a test script for your evaluation
To ensure the validity of the evaluation method, we require that you provide a test script as well; a sketch of one also follows this list.
Add your tests to the separate test folder, and include a 'how-to-test' section in your readme that details how to test the evaluation method.
An example test script can be found here.
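To make the expected shape of a contribution concrete, below is a minimal, hypothetical sketch of an evaluation-method script. The CLI flags and the one-example-per-line file format are assumptions modeled on the CodeXGLUE evaluators, not SEVAL's actual interface.

```python
# evaluator.py -- hypothetical evaluation-method script (not SEVAL's actual API).
# Assumes predictions and references are text files with one example per line,
# mirroring the CodeXGLUE evaluator convention.
import argparse

def exact_match(predictions, references):
    """Fraction of predictions that exactly match their reference string."""
    assert len(predictions) == len(references), "prediction/reference count mismatch"
    matches = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return matches / len(references)

def main():
    parser = argparse.ArgumentParser(description="Toy Exact Match evaluator")
    parser.add_argument("--predictions", required=True, help="one prediction per line")
    parser.add_argument("--references", required=True, help="one reference per line")
    args = parser.parse_args()
    with open(args.predictions) as f:
        preds = f.read().splitlines()
    with open(args.references) as f:
        refs = f.read().splitlines()
    print(f"EM: {exact_match(preds, refs) * 100:.2f}")

if __name__ == "__main__":
    main()
```

It would be invoked as, for example, `python evaluator.py --predictions preds.txt --references refs.txt`.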
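And a matching test script, equally hypothetical: the `evaluator` module name and the test layout are assumptions, so adapt them to the repository's actual test folder and 'how-to-test' conventions.

```python
# test_evaluator.py -- hypothetical test script (stdlib unittest only).
import unittest

from evaluator import exact_match  # hypothetical module from the sketch above

class TestExactMatch(unittest.TestCase):
    def test_all_match(self):
        self.assertEqual(exact_match(["a"], ["a"]), 1.0)

    def test_partial_match(self):
        self.assertEqual(exact_match(["a", "b"], ["a", "c"]), 0.5)

    def test_surrounding_whitespace_ignored(self):
        self.assertEqual(exact_match(["a "], ["a"]), 1.0)

if __name__ == "__main__":
    unittest.main()
```

Run it with `python -m unittest test_evaluator` from the test folder.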
Please contact Mitchell Huggins at [email protected] with any questions about SEVAL.
- Python 3.6 or 3.7
- numpy