
8-bit AdamW Optimizer #15

Closed
RyanJDick wants to merge 1 commit into main from ryan/adam-8

Conversation

RyanJDick
Collaborator

Add the 8-bit AdamW optimizer. This optimizer reduces the VRAM requirements for training (a little for LoRA training, quite a bit for full model training).

  • Merge earlier PRs and change target branch to main.
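
For reference, the usual way to wire an 8-bit AdamW into a PyTorch training loop is via bitsandbytes' `AdamW8bit` class. The snippet below is only an illustrative sketch of that pattern, not the code from this PR; the model and hyperparameter values are placeholders.

```python
# A minimal sketch of the bitsandbytes 8-bit AdamW pattern (not the code from this PR);
# the model and the hyperparameter values below are placeholders.
import bitsandbytes as bnb
import torch

model = torch.nn.Linear(1024, 1024)  # stand-in for the real trainable parameters

optimizer = bnb.optim.AdamW8bit(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.999),
    weight_decay=1e-2,
)

# The training step is identical to torch.optim.AdamW; only the optimizer state
# (the first/second moment buffers) is stored in 8 bits, which is where the
# VRAM savings come from.
loss = model(torch.randn(8, 1024)).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Because only the optimizer state is quantized to 8 bits, the savings scale with the number of trainable parameters, which is consistent with full-model training benefiting much more than LoRA training.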

@RyanJDick RyanJDick marked this pull request as ready for review August 11, 2023 00:27
@RyanJDick RyanJDick marked this pull request as draft August 14, 2023 14:16
@RyanJDick RyanJDick removed the request for review from brandonrising August 14, 2023 14:16
@RyanJDick
Collaborator Author

Moved back to draft because Kent reported an issue on Windows with this that I need to investigate.

Base automatically changed from ryan/sdxl-lora-2 to main August 14, 2023 19:16
@RyanJDick RyanJDick closed this May 13, 2024
@RyanJDick RyanJDick deleted the ryan/adam-8 branch May 13, 2024 21:07