Repositories list
103 repositories
- This repository contains automation scripts designed to run MLPerf Inference benchmarks. Originally developed for the Collective Mind (CM) automation framework, these scripts have been adapted to leverage the MLC automation framework, maintained by the MLCommons Benchmark Infrastructure Working Group.
- Collective Knowledge (CK) and Common Metadata eXchange (CMX): community-driven projects to learn how to run AI, ML, and other emerging workloads more efficiently and cost-effectively across diverse models, datasets, software, and hardware, using MLPerf automations, the CK playground, and open reproducibility and optimization challenges.
- CM interface and automation recipes to analyze MLPerf Inference, Tiny, and Training results. The goal is to make it easier for the community to visualize, compare, and reproduce MLPerf results and to add derived metrics such as Performance/Watt or Performance/$.
- MLCFlow: Simplifying MLPerf Automations
- Automated test submissions for validating the MLPerf inference workflows.
- A collection of portable, reusable, and cross-platform CM automations for MLOps and MLPerf that simplify building, benchmarking, and optimizing AI systems across diverse models, datasets, software, and hardware.
- mlperf_inference_submissions (Public template)
- chakra (Public)
- GaNDLF (Public): A generalizable application framework for segmentation, regression, and classification using PyTorch
- policies (Public)
- medperf (Public)
- mobile_open (Public)
- power-dev (Public)
- croissant (Public): Croissant is a high-level format for machine learning datasets that brings together four rich layers.
- training_results_v3.1 (Public)
- inference_results_visualization_template (Public template)
- logging (Public)
- ailuminate (Public)