CodiumAI Cover Agent aims to help efficiently increase code coverage by automatically generating qualified tests that enhance existing test suites.
Cover-Agent now supports nearly any LLM model, using the LiteLLM package.
Note that GPT-4 still outperforms almost any open-source model when it comes to code tasks and following complicated instructions.
However, we have updated the post-processing scripts to be more comprehensive, and were able to successfully run the baseline script with the llama3-8B and llama3-70B models, for example.
This repository includes the first known implementation of TestGen-LLM, described in the paper Automated Unit Test Improvement using Large Language Models at Meta.
Welcome to Cover-Agent. This focused project utilizes Generative AI to automate and enhance the generation of tests (currently mostly unit tests), aiming to streamline development workflows. Cover-Agent can run via a terminal, and is planned to be integrated into popular CI platforms.
We invite the community to collaborate and help extend the capabilities of Cover Agent, continuing its development as a cutting-edge solution in the automated unit test generation domain. We also wish to inspire researchers to leverage this open-source tool to explore new test-generation techniques.
This tool is part of a broader suite of utilities designed to automate the creation of unit tests for software projects. Utilizing advanced Generative AI models, it aims to simplify and expedite the testing process, ensuring high-quality software development. The system comprises several components (a rough sketch of how they fit together follows the list below):
- Test Runner: Executes the command or scripts to run the test suite and generate code coverage reports.
- Coverage Parser: Validates that code coverage increases as tests are added, ensuring that new tests contribute to the overall test effectiveness.
- Prompt Builder: Gathers necessary data from the codebase and constructs the prompt to be passed to the Large Language Model (LLM).
- AI Caller: Interacts with the LLM to generate tests based on the prompt provided.
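As a rough illustration only, the sketch below shows how these components could cooperate in a generate-and-validate loop. All of the function names (`measure_coverage`, `build_prompt`, `call_llm`, `improve_tests`) are hypothetical and do not reflect Cover Agent's actual internal API.

```python
# Hypothetical sketch: how a Test Runner, Coverage Parser, Prompt Builder and AI Caller
# could cooperate. These names are illustrative, not Cover Agent's real code.
from pathlib import Path


def measure_coverage(test_command: str) -> float:
    """Test Runner + Coverage Parser: run the suite, return line coverage in percent.
    (See the Cobertura parsing sketch further below for one way to read the report.)"""
    raise NotImplementedError


def build_prompt(source_file: Path, test_file: Path, coverage: float) -> str:
    """Prompt Builder: combine the code under test, existing tests and current coverage."""
    return (
        f"Current line coverage: {coverage:.1f}%\n\n"
        f"Source under test:\n{source_file.read_text()}\n\n"
        f"Existing tests:\n{test_file.read_text()}\n\n"
        "Write additional unit tests that raise coverage."
    )


def call_llm(prompt: str) -> str:
    """AI Caller: placeholder for the LLM round trip (e.g. via LiteLLM)."""
    raise NotImplementedError


def improve_tests(source_file: Path, test_file: Path, test_command: str,
                  desired_coverage: float, max_iterations: int) -> None:
    for _ in range(max_iterations):
        coverage = measure_coverage(test_command)
        if coverage >= desired_coverage:
            break
        new_tests = call_llm(build_prompt(source_file, test_file, coverage))
        before = test_file.read_text()
        test_file.write_text(before + "\n" + new_tests)
        if measure_coverage(test_command) <= coverage:
            test_file.write_text(before)  # revert tests that fail or do not raise coverage
```

The key design point, reflected in the component descriptions above, is that newly generated tests are only kept if code coverage actually increases after they are added.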
Before you begin, make sure you have the following:
- `OPENAI_API_KEY` set in your environment variables, which is required for calling the OpenAI API.
- Code coverage tool: a Cobertura XML code coverage report is required for the tool to function correctly.
  - For example, in Python one could use `pytest-cov`: add the `--cov-report=xml` option when running Pytest (a minimal sketch follows this list).
  - Note: We are actively working on adding more coverage types, but please feel free to open a PR and contribute to `cover_agent/CoverageProcessor.py`.
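As a minimal illustration of this prerequisite (not Cover Agent code), the sketch below runs pytest with `pytest-cov` and reads the overall line coverage from the resulting `coverage.xml`; in the Cobertura format, the root `<coverage>` element carries a `line-rate` attribute between 0 and 1. It assumes `pytest` and `pytest-cov` are installed and that the report is written to the current directory.

```python
# Minimal sketch: produce a Cobertura XML report with pytest-cov and read the line-rate.
import subprocess
import xml.etree.ElementTree as ET

subprocess.run(["pytest", "--cov=.", "--cov-report=xml"], check=False)

root = ET.parse("coverage.xml").getroot()       # Cobertura root element: <coverage line-rate="...">
line_rate = float(root.get("line-rate", "0"))   # fraction of covered lines, between 0 and 1
print(f"Line coverage: {line_rate * 100:.1f}%")
```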
If running directly from the repository you will also need:
- Python installed on your system.
- Poetry installed for managing Python package dependencies. Installation instructions for Poetry can be found at https://python-poetry.org/docs/.
The Cover Agent can be installed as a Python Pip package or run as a standalone executable.
To install the Python Pip package directly from GitHub, run the following command:
pip install git+https://github.com/Codium-ai/cover-agent.git
The binary can be run without any Python environment installed on your system (e.g. within a Docker container that does not contain Python). You can download the release for your system by navigating to the project's release page.
Run the following command to install all the dependencies and run the project from source:
poetry install
After downloading the executable or installing the Pip package you can run the Cover Agent to generate and validate unit tests. Execute it from the command line by using the following command:
cover-agent \
--source-file-path "<path_to_source_file>" \
--test-file-path "<path_to_test_file>" \
--code-coverage-report-path "<path_to_coverage_report>" \
--test-command "<test_command_to_run>" \
--test-command-dir "<directory_to_run_test_command>" \
--coverage-type "<type_of_coverage_report>" \
--desired-coverage <desired_coverage_between_0_and_100> \
--max-iterations <max_number_of_llm_iterations> \
--included-files "<optional_list_of_files_to_include>"
You can use the example projects within this repository to try out Cover Agent.
Follow the steps in the README.md file located in the `templated_tests/python_fastapi/` directory, then return to the root of the repository and run the following command to add tests to the Python FastAPI example:
cover-agent \
--source-file-path "templated_tests/python_fastapi/app.py" \
--test-file-path "templated_tests/python_fastapi/test_app.py" \
--code-coverage-report-path "templated_tests/python_fastapi/coverage.xml" \
--test-command "pytest --cov=. --cov-report=xml --cov-report=term" \
--test-command-dir "templated_tests/python_fastapi" \
--coverage-type "cobertura" \
--desired-coverage 70 \
--max-iterations 10
For an example using Go, `cd` into `templated_tests/go_webservice` and set up the project by following its README.md.
To work with coverage reporting, you need to install `gocov` and `gocov-xml`. Run the following commands to install these tools:
go install github.com/axw/gocov/gocov@latest
go install github.com/AlekSi/gocov-xml@latest
and then run the following command:
cover-agent \
--source-file-path "app.go" \
--test-file-path "app_test.go" \
--code-coverage-report-path "coverage.xml" \
--test-command "go test -coverprofile=coverage.out && gocov convert coverage.out | gocov-xml > coverage.xml" \
--test-command-dir $(pwd) \
--coverage-type "cobertura" \
--desired-coverage 70 \
--max-iterations 1
Try adding more tests to Cover Agent itself by running the following command from the root of this repository:
poetry run cover-agent \
--source-file-path "cover_agent/main.py" \
--test-file-path "tests/test_main.py" \
--code-coverage-report-path "coverage.xml" \
--test-command "poetry run pytest --junitxml=testLog.xml --cov=templated_tests --cov=cover_agent --cov-report=xml --cov-report=term --log-cli-level=INFO" \
--coverage-type "cobertura" \
--desired-coverage 70 \
--max-iterations 1 \
--model "gpt-4o"
Note: If you are using Poetry, run `poetry run cover-agent` instead of the plain `cover-agent` command.
A few debug files will be output locally within the repository (these are part of the `.gitignore`):
- `generated_prompt.md`: The full prompt that is sent to the LLM
- `run.log`: A copy of the logger output that gets dumped to your `stdout`
- `test_results.html`: A results table that contains the following for each generated test:
  - Test status
  - Failure reason (if applicable)
  - Exit code, `stderr`, and `stdout`
  - Generated test
This project uses LiteLLM to communicate with OpenAI and other hosted LLMs (supporting 100+ LLMs to date). To use a model other than the OpenAI default you'll need to:
- Export any environment variables needed by the supported LLM, following the LiteLLM instructions.
- Pass the name of the model using the `--model` option when calling Cover Agent.
For example (as found in the LiteLLM Quick Start guide):
export VERTEX_PROJECT="hardy-project"
export VERTEX_LOCATION="us-west"
cover-agent \
...
--model "vertex_ai/gemini-pro"
This section discusses the development of this project.
Before merging to main, make sure to manually increment the version number in `cover_agent/version.txt` at the root of the repository.
Set up your development environment by running the `poetry install` command as you did above.
Note: for older versions of Poetry you may need to include the `--dev` option to install dev dependencies.
After setting up your environment run the following command:
poetry run pytest --junitxml=testLog.xml --cov=templated_tests --cov=cover_agent --cov-report=xml --cov-report=term --log-cli-level=INFO
This will also generate all of the logs and output reports that are produced by the CI pipeline defined in `.github/workflows/ci_pipeline.yml`.
Below is the roadmap of planned features, with the current implementation status:
- Automatically generates unit tests for your software projects, utilizing advanced AI models to ensure comprehensive test coverage and quality assurance. (similar to Meta's TestGen-LLM)
- Being able to generate tests for different programming languages
- Being able to deal with a large variety of testing scenarios
- Generate a behavior analysis for the code under test, and generate tests accordingly
- Check test flakiness, e.g. by running 5 times as suggested by TestGen-LLM
- Cover more test generation pains
- Generate new tests that are focused on the PR changeset
- Run over an entire repo/code-base and attempt to enhance all existing test suites
- Improve usability
- Connectors for GitHub Actions, Jenkins, CircleCI, Travis CI, and more
- Integrate with databases, APIs, OpenTelemetry, and other data sources to extract relevant I/O for test generation
- Add a setting file
CodiumAI's mission is to enable busy dev teams to increase and maintain their code integrity. We offer various tools, including "Pro" versions of our open-source tools, which are meant to handle enterprise-level code complexity and are multi-repo codebase aware.