Replies: 1 comment · 1 reply
- FSDP+QLoRA only supports 4-bit quantization.
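For reference, a minimal sketch of a 4-bit (NF4) QLoRA load that is compatible with FSDP sharding, assuming Hugging Face transformers (≥ 4.39 for `bnb_4bit_quant_storage`), peft, and bitsandbytes are installed; the model ID and LoRA hyperparameters are illustrative, not from this thread:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# FSDP+QLoRA requires 4-bit quantization; load_in_8bit is not an option here.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    # Store quantized weights in bf16 so FSDP can shard them across GPUs.
    bnb_4bit_quant_storage=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B",   # illustrative model ID
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)

# Illustrative LoRA hyperparameters; tune for your own setup.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```

The FSDP wrapping itself would come from launching with an FSDP-enabled config (e.g. via `accelerate launch` or `torchrun`); that part is omitted here.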
- Hi,
  In the README it says QLoRA 8-bit training of a 70B model is possible with 80 GB of RAM. My cards are 2 × 48 GB = 96 GB. How do I do QLoRA training of a 70B Llama 3?
  Cards: 2× RTX A6000
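As a rough sanity check on the 2 × 48 GB setup, a back-of-the-envelope estimate of the 4-bit base-model footprint that the reply above implies, assuming 0.5 bytes per parameter and even FSDP sharding (LoRA adapters, activations, and optimizer state not counted):

```python
# Rough 4-bit footprint of a 70B base model; ignores adapters,
# activations, gradients, and framework overhead.
params = 70e9
weights_gb = params * 0.5 / 1e9   # 4 bits = 0.5 bytes per parameter
per_gpu_gb = weights_gb / 2       # assume even FSDP sharding over 2 GPUs
print(f"~{weights_gb:.0f} GB total, ~{per_gpu_gb:.1f} GB per 48 GB card")
```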