
Commit

Typo fix: torwards -> towards (huggingface#2134)
RahulBhalley authored Jan 27, 2023
1 parent c750a82 commit 43c5ac2
Showing 1 changed file with 1 addition and 1 deletion.
examples/dreambooth/README.md (2 changes: 1 addition & 1 deletion)
@@ -279,7 +279,7 @@ Low-Rank Adaption of Large Language Models was first introduced by Microsoft in
 In a nutshell, LoRA allows to adapt pretrained models by adding pairs of rank-decomposition matrices to existing weights and **only** training those newly added weights. This has a couple of advantages:
 - Previous pretrained weights are kept frozen so that the model is not prone to [catastrophic forgetting](https://www.pnas.org/doi/10.1073/pnas.1611835114)
 - Rank-decomposition matrices have significantly fewer parameters than the original model, which means that trained LoRA weights are easily portable.
-- LoRA attention layers allow to control to which extent the model is adapted torwards new training images via a `scale` parameter.
+- LoRA attention layers allow to control to which extent the model is adapted towards new training images via a `scale` parameter.
 
 [cloneofsimo](https://github.com/cloneofsimo) was the first to try out LoRA training for Stable Diffusion in
 the popular [lora](https://github.com/cloneofsimo/lora) GitHub repository.
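
The README section touched by this commit summarizes how LoRA works: the pretrained weights stay frozen, a pair of small rank-decomposition matrices are the only trained parameters, and a `scale` parameter controls how strongly the adaptation is applied. The sketch below illustrates that idea in plain PyTorch; it is not the diffusers or cloneofsimo implementation, and the `LoRALinear` name, default rank, and initialization choices here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Minimal sketch of LoRA: y = W x + scale * B(A(x)), with the pretrained W frozen."""

    def __init__(self, base: nn.Linear, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weights stay frozen, guarding against forgetting
        # Rank-decomposition pair: down-projection A and up-projection B are the only trained weights
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.normal_(self.down.weight, std=1.0 / rank)
        nn.init.zeros_(self.up.weight)  # adaptation starts as a no-op
        self.scale = scale  # controls how far outputs move toward the adapted behavior

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.up(self.down(x))


# Usage sketch: wrap an existing projection and train only the LoRA weights.
layer = LoRALinear(nn.Linear(320, 320), rank=4, scale=1.0)
trainable = [p for p in layer.parameters() if p.requires_grad]  # only the A and B matrices
out = layer(torch.randn(2, 320))
```

Because only the two small matrices receive gradients, the trainable state is a tiny fraction of the base model, which is what keeps LoRA weights portable; setting `scale` to 0 recovers the original model, while larger values push outputs further toward the new training images.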
