SEEDEVAL

A library for software engineering task evaluation

Supported Tasks

Text-to-Code

| Task | Metric | Reference | Integrated? |
| --- | --- | --- | --- |
| Code Generation | EM (Exact Match) | CodeXGLUE - Text2Code Generation | ✔️ |
| Code Generation | BLEU | CodeXGLUE - Text2Code Generation | ✔️ |
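
As a quick illustration of the metrics above (a generic, hedged sketch rather than SEVAL's actual API), exact match checks string equality between each prediction and its reference, while BLEU scores n-gram overlap with a brevity penalty:

```python
# Minimal sketch of EM and sentence-level BLEU-4; not SEVAL's actual interface.
from collections import Counter
import math

def exact_match(predictions, references):
    """Fraction of predictions that match their reference exactly (after stripping whitespace)."""
    assert len(predictions) == len(references)
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(predictions)

def bleu4(prediction, reference):
    """BLEU-4 on whitespace tokens with uniform n-gram weights and a brevity penalty."""
    pred, ref = prediction.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        pred_ngrams = Counter(tuple(pred[i:i + n]) for i in range(len(pred) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((pred_ngrams & ref_ngrams).values())   # clipped n-gram matches
        total = max(sum(pred_ngrams.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)        # smoothed to avoid log(0)
    bp = 1.0 if len(pred) > len(ref) else math.exp(1 - len(ref) / max(len(pred), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)

print(exact_match(["return a + b"], ["return a + b"]))   # 1.0
print(round(bleu4("return a + b", "return a + b"), 2))   # 1.0
```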

Code-to-Code

| Task | Metric | Reference | Integrated? |
| --- | --- | --- | --- |
| Code Translation | EM (Exact Match) | CodeXGLUE - Code Translator | ✔️ |
| Code Translation | BLEU | CodeXGLUE - Code Translator | ✔️ |
| Code Repair | EM (Exact Match) | CodeXGLUE - Code Refinement | ✔️ |
| Code Repair | BLEU | CodeXGLUE - Code Refinement | ✔️ |
| Code Completion | EM (Exact Match) | CodeXGLUE - Code Completion (token level) | ✔️ |
| Code Search | | | |

Code-to-Text

| Task | Metric | Reference | Integrated? |
| --- | --- | --- | --- |
| Code Summarization | EM (Exact Match) | CodeXGLUE - Code-Text | ✔️ |

Code Classification

| Task | Metric | Reference | Integrated? |
| --- | --- | --- | --- |
| Clone Detection | MAP@R score | CodeXGLUE - Clone Detection | ✔️ |
| Bug/Defect Prediction - Binary | EM (Exact Match) | CodeXGLUE - Defect Detection | ✔️ |
| Bug/Vulnerability Type Prediction - Multi-class | | Paper with Replication Package | |
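
MAP@R (mean average precision at R) rewards retrieving all clones of a query near the top of the ranking. Below is a minimal sketch, assuming examples are compared by cosine similarity over embeddings; it is not SEVAL's actual implementation:

```python
# Hedged sketch of MAP@R for clone detection: each example retrieves its R nearest
# neighbours, where R is the number of other examples sharing its label, and we
# average precision@k over the positions of the true clones.
import numpy as np

def map_at_r(embeddings, labels):
    """embeddings: (N, D) float array; labels: length-N array of class ids."""
    labels = np.asarray(labels)
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # never retrieve the query itself

    scores = []
    for i in range(len(labels)):
        r = int((labels == labels[i]).sum()) - 1   # number of true clones of example i
        if r == 0:
            continue
        ranked = np.argsort(-sims[i])[:r]          # top-R retrieved indices
        hits = (labels[ranked] == labels[i]).astype(float)
        precisions = np.cumsum(hits) / (np.arange(r) + 1)
        scores.append((precisions * hits).sum() / r)   # average precision at R
    return float(np.mean(scores))
```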

Others

| Task | Metric | Reference | Integrated? |
| --- | --- | --- | --- |
| Fault/Bug Localization | | Paper with Replication Package | |

How to Contribute

Thank you for your interest in contributing! This section outlines the process for contributing to the project. Your contributions make a real difference, and we appreciate every effort to help improve it.

Getting Started

  1. Identify your target software engineering task (Unfamiliar with SE tasks? Find them here!)

You can either integrate an existing evaluation technique or add a new one.

Note that some evaluation tasks may already be in progress; check the pull requests tab to see whether a task is already being worked on.

  2. Integrate the evaluation method

Ensure that you include a detailed README that describes how to use the evaluation method.

An example of an evaluation method and appropriate readme can be found here.
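
As a rough sketch of the expected shape (the file name, function name, and flags below are hypothetical, not a mandated interface), an evaluation method usually exposes a small command-line entry point that reads predictions and references and prints the metric:

```python
# evaluator.py -- hypothetical entry point for an exact-match evaluation.
# SEVAL does not require this exact interface; it only illustrates the idea.
import argparse

def evaluate(pred_path, ref_path):
    """Return the fraction of prediction lines that exactly match the reference lines."""
    with open(pred_path) as f:
        predictions = [line.strip() for line in f]
    with open(ref_path) as f:
        references = [line.strip() for line in f]
    matches = sum(p == r for p, r in zip(predictions, references))
    return matches / len(references)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Evaluate predictions against references.")
    parser.add_argument("--predictions", required=True)
    parser.add_argument("--references", required=True)
    args = parser.parse_args()
    print(f"Exact match: {evaluate(args.predictions, args.references):.4f}")
```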

  3. Add a test script for your evaluation

To ensure the validity of the evaluation method, we also require that you provide a test script.

Add your tests to the separate test folder. We also ask that you include a 'how-to-test' section in your README, detailing how to test the evaluation method.

An example test script can be found here.
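
A minimal test could look like the following, assuming the hypothetical `evaluator.py` sketched above (the module and file names are placeholders, not part of SEVAL):

```python
# test_exact_match.py -- placeholder test for the hypothetical evaluator.py above.
import tempfile
import unittest

from evaluator import evaluate  # hypothetical module name, see the sketch above

class TestExactMatch(unittest.TestCase):
    def test_half_of_predictions_match(self):
        preds = ["return a + b", "x = 1"]
        refs = ["return a + b", "x = 2"]
        # Write predictions and references to temporary files the evaluator can read.
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as p, \
             tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as r:
            p.write("\n".join(preds))
            r.write("\n".join(refs))
        self.assertAlmostEqual(evaluate(p.name, r.name), 0.5)

if __name__ == "__main__":
    unittest.main()
```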

Coordinator

Mitchell Huggins. Please contact [email protected] with any questions about SEVAL.

Contributors

mrhuggins03, chaseltb, ArsalaanK7, BrennenFa, EZ7051, ywang146, kritipat

Dependencies

  • Python 3.6 or 3.7
  • numpy
