Support FP8 KV Cache #652

Merged: 27 commits merged into main from fp8-kv-flash-infer on Oct 29, 2024

Conversation

@ajtejankar (Contributor) commented Oct 17, 2024

Tested GLUE tasks with and without an adapter, and the accuracy is as expected. The previous implementation using vLLM kernels didn't work in this setting. This one uses static scales obtained from 2k UltraChat samples, and the cache is stored in E4M3 format. Test weights are at ajinkya-tejankar/Mistral-7B-Instruct-v0.2-FP8-UltraChat-2000-KV.
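
For context, here is a minimal sketch (assuming PyTorch 2.1+; not the code merged in this PR) of what a static-scale E4M3 KV cache boils down to. The helper names and the example scale below are hypothetical; in the actual setup the scales are calibrated offline and loaded with the checkpoint.

```python
import torch

def quantize_kv(kv: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Divide by the static scale, clamp to the E4M3 range, and store as FP8."""
    finfo = torch.finfo(torch.float8_e4m3fn)
    return (kv / scale).clamp(min=finfo.min, max=finfo.max).to(torch.float8_e4m3fn)

def dequantize_kv(kv_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Upcast back to half precision and re-apply the scale before attention."""
    return kv_fp8.to(torch.float16) * scale

if __name__ == "__main__":
    k = torch.randn(2, 8, 128, dtype=torch.float16)  # (heads, seq_len, head_dim)
    # Stand-in for a calibrated per-tensor scale (here derived from the tensor itself).
    scale = k.abs().max() / torch.finfo(torch.float8_e4m3fn).max
    k_fp8 = quantize_kv(k, scale)
    err = (dequantize_kv(k_fp8, scale) - k).abs().max()
    print(f"cache dtype: {k_fp8.dtype}, max abs error: {err:.4f}")
```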

@ajtejankar marked this pull request as draft October 17, 2024 08:00
@ajtejankar marked this pull request as ready for review October 18, 2024 21:54
@ajtejankar requested a review from tgaddair October 18, 2024 21:54

@tgaddair (Contributor) left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

LGTM!

@ajtejankar merged commit 2ff1c71 into main on Oct 29, 2024
2 checks passed
@ajtejankar deleted the fp8-kv-flash-infer branch October 29, 2024 19:41

@samagra14

Suddenly started getting this error a few hours ago on both L40S and A10G GPUs: 'FlashLlamaAttention' object has no attribute 'fp8_kv'.

I am guessing it is because of this merge. Any idea how to resolve it?

@ajtejankar (Contributor, Author)

Should be fixed with #662
