Enabling Sequence Parallelism for finetuning GPT with LoRA #8620
AnirudhVIyer asked this question in Q&A · Unanswered · 0 replies
Hi,
I am trying to enable sequence parallelism when finetuning a GPT model with LoRA. Is this possible? I cannot find any documentation on this.
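For clarity, what I mean by sequence parallelism is the Megatron-LM-style scheme where the activations in the layernorm/dropout regions are sharded along the sequence dimension across the tensor-parallel group, and gathered back before the tensor-parallel matmuls. A minimal single-process sketch of that sharding pattern (illustrative only; `TP_WORLD_SIZE` and the shapes are assumptions, and no framework or distributed APIs are used):

```python
# Single-process illustration of Megatron-style sequence parallelism.
# No real framework calls; TP_WORLD_SIZE and the tensor shapes are
# illustrative assumptions.
import torch

TP_WORLD_SIZE = 4          # size of the tensor-parallel group (assumption)
seq_len, hidden = 2048, 1024
assert seq_len % TP_WORLD_SIZE == 0

full_activation = torch.randn(seq_len, hidden)

# "Scatter" along the sequence dimension: each rank holds seq_len / TP rows.
shards = list(torch.chunk(full_activation, TP_WORLD_SIZE, dim=0))

# Sequence-parallel region: per-token ops (layernorm, dropout) only need the
# local shard, so activation memory per rank drops by the TP group size.
norm = torch.nn.LayerNorm(hidden)
local_out = [norm(s) for s in shards]  # each rank would compute its own shard

# Before a tensor-parallel matmul the shards are gathered back into the full
# sequence (in a real distributed run this is an all-gather collective).
gathered = torch.cat(local_out, dim=0)
assert gathered.shape == (seq_len, hidden)
print("per-rank shard:", shards[0].shape, "gathered:", gathered.shape)
```

This is the memory-saving behaviour I would like to get during LoRA finetuning of a GPT model; I am asking whether the existing finetuning path supports it and, if so, how to turn it on.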