Training ops kernels: Speeding up the Llama-based MoE architectures #15837
Workflow file for this run

name: Formatting

on:
  workflow_dispatch:
  pull_request:
    branches:
      '**'
  merge_group:
    branches: [ master ]
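  # scheduled run once a day at 00:00 UTC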
  schedule:
    - cron: "0 0 * * *"
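
# cancel any still-running check for the same ref when a newer run starts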
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  # formatting and basic install on cpu-only machine
  unit-tests:
    runs-on: ubuntu-22.04

    steps:
      - uses: actions/checkout@v4

      - name: environment
        run: |
          which python
          python --version

      - name: Install dependencies
        run: |
          # Previously we would do `pip install .[dev]`, but this was causing
          # out-of-space errors starting with the torch 2.1.0 release
          grep -E "clang-format|pre-commit" requirements/requirements-dev.txt | xargs pip install

      - name: Formatting checks
        run: |
          pip show pre-commit clang-format
          pre-commit run --all-files
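
The same checks can be reproduced locally before pushing. A minimal sketch, reusing the exact commands from the job above and assuming a repository checkout where requirements/requirements-dev.txt exists:

  # Install only the formatting tools, mirroring the CI's selective install
  # (the full `pip install .[dev]` hit out-of-space errors on the runner)
  grep -E "clang-format|pre-commit" requirements/requirements-dev.txt | xargs pip install

  # Run the same hooks the Formatting job runs against all files
  pre-commit run --all-files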