First commit #1

Merged: 4 commits merged into main from first_commit on Oct 28, 2024

Conversation

@ternaus (Contributor) commented Oct 28, 2024

Summary by Sourcery

Introduce a benchmarking suite for image augmentation libraries, supporting Albumentations, imgaug, torchvision, Kornia, and Augly. Implement a BenchmarkRunner class to manage the benchmarking process, including adaptive warmup and variance stability checks. Add scripts for running benchmarks and generating comparison outputs. Set up CI for code formatting and type checking. Update README with detailed documentation.

New Features:

  • Introduce a comprehensive benchmarking suite for comparing the performance of popular image augmentation libraries, including Albumentations, imgaug, torchvision, Kornia, and Augly.

Enhancements:

  • Implement a BenchmarkRunner class to manage the benchmarking process, including loading images, running transforms, and collecting results.
  • Add support for adaptive warmup and variance stability checks to ensure reliable benchmarking results (a sketch of this logic follows below).
  • Include detailed performance metrics and system information in the benchmarking output.
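
For illustration, here is a minimal sketch of what adaptive warmup with a variance stability check can look like. The class name mirrors the description above, but the method names, thresholds, and metrics are assumptions, not the actual benchmark/runner.py API.

```python
import time
import numpy as np

class BenchmarkRunner:
    """Illustrative runner: warm a transform up until timings stabilize,
    then collect throughput over several timed runs."""

    def __init__(self, max_warmup: int = 100, window: int = 10, max_cv: float = 0.05):
        self.max_warmup = max_warmup  # upper bound on warmup iterations
        self.window = window          # number of recent timings to inspect
        self.max_cv = max_cv          # coefficient-of-variation threshold for "stable"

    def _time_once(self, transform, image) -> float:
        start = time.perf_counter()
        transform(image)
        return time.perf_counter() - start

    def warmup(self, transform, image) -> int:
        """Repeat the transform until the last `window` timings have low variance."""
        timings: list[float] = []
        for i in range(self.max_warmup):
            timings.append(self._time_once(transform, image))
            if len(timings) >= self.window:
                recent = np.array(timings[-self.window:])
                if recent.std() / recent.mean() < self.max_cv:
                    return i + 1  # stable enough: stop warming up early
        return self.max_warmup

    def run(self, transform, images, runs: int = 5) -> dict[str, float]:
        """Return median and spread of throughput in images per second."""
        self.warmup(transform, images[0])
        throughputs = []
        for _ in range(runs):
            start = time.perf_counter()
            for image in images:
                transform(image)
            throughputs.append(len(images) / (time.perf_counter() - start))
        return {"median_ips": float(np.median(throughputs)),
                "std_ips": float(np.std(throughputs))}
```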

Build:

  • Add a script to run benchmarks for a single library and generate output in JSON format.
  • Add a script to run benchmarks for all supported libraries and generate a comparison CSV file (a sketch of the comparison step follows below).
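
As a rough sketch of the comparison step, the snippet below merges per-library JSON result files into a single CSV with pandas. The directory layout and the JSON keys (`results`, `median_ips`) are assumptions for illustration, not the format produced by the actual scripts.

```python
import json
from pathlib import Path

import pandas as pd

def build_comparison(results_dir: str, output_csv: str) -> pd.DataFrame:
    """Merge per-library JSON results into one transform-by-library table."""
    rows: dict[str, dict[str, float]] = {}
    for path in Path(results_dir).glob("*.json"):
        library = path.stem  # e.g. "albumentations", "kornia"
        data = json.loads(path.read_text())
        for transform, stats in data["results"].items():
            rows.setdefault(transform, {})[library] = stats["median_ips"]
    # Transforms as rows, libraries as columns, throughput (images/s) as values.
    table = pd.DataFrame(rows).T.sort_index()
    table.to_csv(output_csv)
    return table

# Example: build_comparison("output", "comparison.csv")
```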

CI:

  • Set up a CI workflow that checks code formatting with ruff and runs type checking with mypy.

Documentation:

  • Update README.md with detailed documentation on the benchmarking suite, including setup instructions, usage, and methodology.

Tests:

  • Add implementations for various image augmentation libraries to support benchmarking, including Albumentations, imgaug, torchvision, Kornia, and Augly (a sketch of the spec-driven implementation pattern follows below).
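
To make the implementation structure concrete, here is a hedged sketch of how library-specific implementations can be driven by shared transform specifications. The `TransformSpec` dataclass, the spec list, and the parameter names are illustrative and not the actual contents of benchmark/transforms/specs.py; only the Albumentations calls themselves are real library API.

```python
from dataclasses import dataclass, field
from typing import Any

import albumentations as A
import numpy as np

@dataclass
class TransformSpec:
    """Illustrative, library-agnostic description of one benchmarked transform."""
    name: str
    params: dict[str, Any] = field(default_factory=dict)

# A single shared spec list drives every library-specific implementation,
# so all libraries are benchmarked on the same operations and parameters.
SPECS = [
    TransformSpec("HorizontalFlip"),
    TransformSpec("Rotate", {"angle": 45}),
]

def build_albumentations(spec: TransformSpec):
    """Map a spec to an Albumentations transform."""
    if spec.name == "HorizontalFlip":
        return A.HorizontalFlip(p=1)
    if spec.name == "Rotate":
        angle = spec.params["angle"]
        return A.Rotate(limit=(angle, angle), p=1)
    raise ValueError(f"Unsupported transform: {spec.name}")

if __name__ == "__main__":
    image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
    for spec in SPECS:
        transform = build_albumentations(spec)
        _ = transform(image=image)["image"]  # Albumentations returns a dict
```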


sourcery-ai bot commented Oct 28, 2024

Reviewer's Guide by Sourcery

This pull request establishes a comprehensive benchmarking suite for comparing the performance of popular image augmentation libraries. The implementation includes a modular architecture for running benchmarks, collecting metrics, and generating comparison reports. The suite supports multiple libraries (Albumentations, imgaug, torchvision, Kornia, and Augly) and features adaptive warmup, statistical analysis, and thread control settings.

No diagrams generated as the changes look simple and do not need a visual representation.

File-Level Changes

Implemented the core benchmarking infrastructure (benchmark/runner.py, benchmark/utils.py)

  • Created a BenchmarkRunner class to manage benchmark execution
  • Added support for adaptive warmup to ensure stable measurements
  • Implemented thread control settings for consistent single-threaded performance (see the sketch after this list)
  • Added system information collection and environment verification

Added library-specific transform implementations (benchmark/transforms/albumentations_impl.py, imgaug_impl.py, torchvision_impl.py, kornia_impl.py, augly_impl.py, specs.py)

  • Created transform implementations for Albumentations
  • Created transform implementations for imgaug
  • Created transform implementations for torchvision
  • Created transform implementations for Kornia
  • Created transform implementations for Augly
  • Defined standardized transform specifications

Implemented results processing and comparison functionality (benchmark/compare_results.py)

  • Added JSON result file generation for each library
  • Created comparison table generation with statistical analysis
  • Added support for markdown report generation

Added shell scripts for running benchmarks (run_single.sh, run_all.sh)

  • Created a script for running a single-library benchmark
  • Created a script for running all libraries and generating a comparison
  • Added virtual environment management for each library

Added project configuration and CI setup (.pre-commit-config.yaml, .github/workflows/ci.yml, .github/FUNDING.yml)

  • Added pre-commit configuration for code quality checks
  • Set up a GitHub Actions workflow for CI
  • Added GitHub funding configuration
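
The thread-control settings mentioned in the first entry above can be sketched as follows. The exact environment variables and calls used in benchmark/utils.py are not visible from this page, so treat these as common conventions for pinning libraries to a single thread rather than the project's actual code.

```python
import os

# Common environment variables that cap BLAS/OpenMP thread pools at one thread,
# so libraries with different internal parallelism are compared fairly.
SINGLE_THREAD_ENV = {
    "OMP_NUM_THREADS": "1",         # OpenMP (OpenCV, some NumPy builds)
    "OPENBLAS_NUM_THREADS": "1",    # OpenBLAS
    "MKL_NUM_THREADS": "1",         # Intel MKL
    "VECLIB_MAXIMUM_THREADS": "1",  # Apple Accelerate
    "NUMEXPR_NUM_THREADS": "1",     # numexpr
}

def set_single_threaded() -> None:
    os.environ.update(SINGLE_THREAD_ENV)

    # PyTorch-backed libraries (torchvision, Kornia) have their own setting.
    try:
        import torch
        torch.set_num_threads(1)
    except ImportError:
        pass

    # OpenCV-backed libraries (Albumentations, imgaug) manage threads via cv2;
    # passing 0 disables OpenCV's internal threading.
    try:
        import cv2
        cv2.setNumThreads(0)
    except ImportError:
        pass
```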



@sourcery-ai (bot) left a comment:


Hey @ternaus - I've reviewed your changes - here's some feedback:

Overall Comments:

  • Consider adding a LICENSE file to clarify the terms under which this benchmarking suite can be used and distributed.

Here's what I looked at during the review:
  • 🟢 General issues: all looks good
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟡 Complexity: 1 issue found
  • 🟡 Documentation: 3 issues found


Review comments were left on the following files (all now resolved):

  • README.md (3 comments)
  • benchmark/runner.py (4 comments, outdated)
  • benchmark/compare_results.py (1 comment)
  • benchmark/transforms/imgaug_impl.py (1 comment, outdated)
@ternaus merged commit 8a40869 into main on Oct 28, 2024 (1 check passed).
@ternaus deleted the first_commit branch on October 28, 2024 at 02:45.