Is it possible to continue fine-tuning after the training has been interrupted? #997
Unanswered
hschaeufler asked this question in Q&A
-
Every now and then I have had to interrupt the fine-tuning. I then tried to continue the training by setting the resume_adapter_file field in my config.yaml. However, the training then starts from iteration 1 and not at iteration 3000, where the last adapter file was written (0003000_adapters.safetensors). Am I doing something wrong? Or do I have to manually adjust the number of iterations to the remaining iterations?
Which safetensors file do I have to specify in the field: the adapters.safetensors or the 0003000_adapters.safetensors file?
I tried this, and I also tried it like this.
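For illustration, a minimal sketch of the kind of config I mean; the resume_adapter_file key is the one mentioned above, while the other keys and all values are placeholders based on the usual mlx_lm LoRA options, not my actual setup:

```yaml
# Sketch of a LoRA fine-tuning config (values are placeholders).
model: "some/model-path"                 # hypothetical model
train: true
data: "data/"                            # hypothetical dataset directory
iters: 6000                              # total iterations planned
adapter_path: "adapters"                 # where the NNNNNNN_adapters.safetensors files are written
save_every: 1000
# Point at the last checkpoint to continue from the partially trained adapters:
resume_adapter_file: "adapters/0003000_adapters.safetensors"
```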
-
You can resume training by specifying resume_adapter_file. We don't keep the iteration state, which is why it resumes at iteration 1, but it is actually continuing from the partially trained adapters. Does that work OK for you?
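If the resumed run should only cover the remaining work, one option (my own workaround, not something the tooling does automatically) is to set the iteration count to what is left, for example:

```yaml
# Sketch: resume from the 3000-iteration checkpoint and run the remaining
# 3000 of an originally planned 6000 iterations (numbers are illustrative).
resume_adapter_file: "adapters/0003000_adapters.safetensors"
iters: 3000
```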