New feature : Low-Rank Pivotal Tuning #53
-
By the way, compare these results with a non-pivotal-tuning LoRA trained for 3000 steps at 1e-4 with the same prompt: notice how reconstruction fidelity has decreased, as the concept is simply overwhelmed by the other prompts.
-
Thing to note: I've tried simply training them jointly (text embedding + LoRA), but that makes it overfit very easily, with entangled features. Extensive comparison of the tradeoff between training the inversion and training the adaptation is needed here. I think this is kind of the ultimate form of the parameter space we are going to get, although a reasonable next step could be multi-token inversion...
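For contrast, here is a minimal toy sketch of what joint training means in this context: the token embedding and the low-rank matrices share one optimizer and are updated together every step. All names and the objective below are hypothetical stand-ins, not the repo's actual training loop.

```python
# Toy sketch of JOINT training: token embedding and LoRA matrices are
# updated together in one optimizer (hypothetical stand-ins, not the
# repo's actual training loop or loss).
import torch
import torch.nn as nn

dim, rank = 16, 4
token_embedding = nn.Parameter(torch.randn(dim))   # the inverted token
lora_down = nn.Linear(dim, rank, bias=False)       # low-rank down-projection
lora_up = nn.Linear(rank, dim, bias=False)         # low-rank up-projection
target = torch.randn(dim)                          # stand-in for the real denoising target

params = [token_embedding, *lora_down.parameters(), *lora_up.parameters()]
opt = torch.optim.AdamW(params, lr=1e-4)

for _ in range(3000):
    # Both the embedding and the adapter absorb the concept at the same
    # time, which is what makes the features entangle and overfit easily.
    loss = ((lora_up(lora_down(token_embedding)) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```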
-
Hello, sorry if this is a dumb question: so this helps with the flexibility of the concept with regard to prompts? I'm trying it, but I'm not sure whether batch_size affects the embedding training or the LoRA training, or whether it applies to both.
-
Oh, there goes my weekend ...
-
New feature! Pivotal Tuning with LoRA
I have successfully implemented low-rank pivotal tuning, and it seems to perform very well compared to other settings.
Pivotal Tuning paper: https://arxiv.org/abs/2106.05744
With a learning rate of 1e-4 and 2500 steps, here are the results:
Basically, we first perform textual inversion for 1500 steps, then fine-tune LoRA for another 1000 steps.
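For illustration, here is a minimal, self-contained sketch of that two-phase schedule, assuming a toy LoRA layer and a stand-in objective. Everything below is a hypothetical placeholder; the actual training code is in run_lorpt.sh.

```python
# Toy sketch of the two-phase schedule: (1) textual inversion trains only
# the new token embedding, (2) the embedding is frozen ("pivoted") and only
# the low-rank LoRA matrices are trained. Hypothetical stand-ins throughout.
import torch
import torch.nn as nn

class ToyLoRALinear(nn.Module):
    """A frozen base linear layer plus a trainable low-rank residual."""
    def __init__(self, dim, rank=4):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.requires_grad_(False)                  # base weights stay frozen
        self.lora_down = nn.Linear(dim, rank, bias=False)
        self.lora_up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.lora_up.weight)              # LoRA starts as a no-op

    def forward(self, x):
        return self.base(x) + self.lora_up(self.lora_down(x))

dim = 16
layer = ToyLoRALinear(dim)
token_embedding = nn.Parameter(torch.randn(dim))         # the token being inverted
target = torch.randn(dim)                                # stand-in for the real objective

def loss_fn():
    return ((layer(token_embedding) - target) ** 2).mean()

# Phase 1 (1500 steps): textual inversion -- only the embedding is optimized.
opt = torch.optim.AdamW([token_embedding], lr=1e-4)
for _ in range(1500):
    opt.zero_grad(); loss_fn().backward(); opt.step()

# Phase 2 (1000 steps): pivot -- freeze the embedding, train only LoRA weights.
token_embedding.requires_grad_(False)
lora_params = [*layer.lora_down.parameters(), *layer.lora_up.parameters()]
opt = torch.optim.AdamW(lora_params, lr=1e-4)
for _ in range(1000):
    opt.zero_grad(); loss_fn().backward(); opt.step()
```

In the real setup the objective would be the diffusion denoising loss and the low-rank layers would sit inside the model's attention projections; the toy only shows which parameters are trainable in each phase.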
You can check out run_lorpt.sh for the training script, and scripts/run_lorpt.ipynb, in the develop branch.