Update MoE examples #192

Merged 7 commits from moe-fp8-update into main on Sep 23, 2024
Conversation

mgoin (Collaborator) commented Sep 20, 2024

Update the FP8 Mixtral example to stop using GPTQ and add it to the examples list.
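For context, a minimal sketch of what a data-free FP8 dynamic quantization flow for Mixtral looks like with llm-compressor's oneshot API. The model ID, ignore patterns, and save path below are illustrative assumptions, not necessarily the exact contents of the updated example script:

```python
# Rough sketch of an FP8 dynamic-quantization flow for Mixtral with llm-compressor.
# Model ID, ignore patterns, and output directory are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot

MODEL_ID = "mistralai/Mixtral-8x7B-Instruct-v0.1"
SAVE_DIR = "Mixtral-8x7B-Instruct-v0.1-FP8"

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# FP8 dynamic quantization needs no calibration data, unlike GPTQ.
# Keep the MoE router (gate) and lm_head in higher precision.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["lm_head", "re:.*block_sparse_moe.gate"],
)

oneshot(model=model, recipe=recipe)

model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```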


👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

mgoin added the ready label on Sep 20, 2024
dsikka (Collaborator) commented Sep 23, 2024

Slightly unrelated, but once your FP8 MoE PR in vLLM lands, we should add an e2e test case: https://github.com/vllm-project/llm-compressor/blob/main/tests/e2e/vLLM/test_vllm.py
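A hedged sketch of the kind of end-to-end check meant here: load the compressed checkpoint in vLLM and run a quick generation as a smoke test. The model path and prompt are placeholders, not the actual fixtures in test_vllm.py:

```python
# Illustrative only: load an FP8 MoE checkpoint produced by llm-compressor in vLLM
# and generate a short completion. Paths and prompts are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="Mixtral-8x7B-Instruct-v0.1-FP8")  # directory saved by the example above
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```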

mgoin merged commit 2e0035f into main on Sep 23, 2024, with 6 of 7 checks passing.
mgoin deleted the moe-fp8-update branch on September 23, 2024 at 14:37.
dbarbuzzi added a commit to dbarbuzzi/llm-compressor that referenced this pull request Sep 23, 2024
mgoin pushed a commit that referenced this pull request Sep 27, 2024
* Add tests for examples

* Ignore examples tests by default

* Trailing comma

* Add test for "quantizing_moe_fp8" example

* Update "quantizing_moe" example tests

* Add test for "compressed_inference" example folder

* Remove unused import

* Add new dependency

* Different approach for "flash_attn"

* Add comment about flash_attn requirement

* Test additional quantizing_moe example script

* Add decorator to skip based on available VRAM

* Limit GPU usage in 'cpu_offloading' example

* Add optional pytest-xdist parallelization

* Reduce persistent /tmp usage

* Fix parametrization in big_models

* Add pytest mark for GPU count requirement

* Add 'multi_gpu' pytest marker

* Skip 'deepseek_moe_w4a16.py' by default

* Fix skip mark

* Mark 'ex_trl_distillation.py' as multi_gpu

* Abstract command copy/run to helper functions

* Update for MoE examples PR #192

* Reduce test marker/decorator redundancy

* style fixes

* Rip out unused run parallelization

* Exclude 'deepseek_moe_w8a8_fp8' from multi-GPU

* Use variable for repeated string literal

* Use `requires_gpu_count` over `requires_gpu`
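
Two of the commits above add a decorator to skip tests based on available VRAM and a pytest mark for GPU count. A minimal sketch of how such helpers could look; the names `requires_gpu_mem`/`requires_gpu_count` and the thresholds are assumptions, not the repository's actual test utilities:

```python
# Hypothetical sketch of VRAM/GPU-count gating for example tests; helper names
# and thresholds are assumptions, not the repo's actual test utilities.
import pytest
import torch


def requires_gpu_mem(required_gib: float):
    """Skip unless the first CUDA device has at least `required_gib` GiB of memory."""
    if not torch.cuda.is_available():
        return pytest.mark.skip(reason="CUDA is not available")
    total_gib = torch.cuda.get_device_properties(0).total_memory / 2**30
    return pytest.mark.skipif(
        total_gib < required_gib,
        reason=f"needs >= {required_gib} GiB of GPU memory, found {total_gib:.1f} GiB",
    )


def requires_gpu_count(n: int):
    """Skip unless at least `n` CUDA devices are visible."""
    return pytest.mark.skipif(
        torch.cuda.device_count() < n,
        reason=f"needs {n} GPUs, found {torch.cuda.device_count()}",
    )


@requires_gpu_mem(40)
@requires_gpu_count(1)
def test_quantizing_moe_example():
    ...  # run the example script here
```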
markmc pushed a commit to markmc/llm-compressor that referenced this pull request Nov 13, 2024
Labels: ready (When a PR is ready for review)
Participants: 2