Hi, thanks for the great work and for releasing the code to reproduce it.
I have a few questions about the Kronecker adaptation forward pass through the adapter modules:
(1) The scaling factor you use for KAdaptation is 1/5 of the scaling used in standard LoRA:
PEViT/vision_benchmark/evaluation/model.py, line 564 (commit be6fb43)
Is there a justification for this, or is it simply an empirical magic number?
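For context, here is a minimal sketch of where such a scaling factor typically enters a LoRA-style forward pass. All names here (`lora_style_forward`, `alpha`, `r`, `extra_scale`) are illustrative assumptions, not the repo's actual code:

```python
import torch

def lora_style_forward(x, W, A, B, alpha=8.0, r=4, extra_scale=0.2):
    """Hypothetical LoRA-style update, not PEViT's actual implementation.

    Standard LoRA scales the low-rank update by alpha / r; the question
    is about an additional empirical factor (extra_scale = 1/5) on top.
    """
    scaling = (alpha / r) * extra_scale
    # Frozen weight path plus scaled low-rank update
    # (shapes: A is (r, d_in), B is (d_out, r)).
    return x @ W.T + scaling * (x @ A.T) @ B.T
```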
(2) When forwarding through your adapter for the value matrix, you seem to reuse the query weight matrix (the matrix A, as defined in the paper, if I understand correctly). Is this a typo/bug?
PEViT/vision_benchmark/evaluation/model.py, lines 571 to 580 (commit be6fb43)
Shouldn't line 580 be `H = kronecker_product_einsum_batched(phm_rule2, Wv).sum(0)` instead?

Hi, many thanks for your interest! The scaling factor is a hyper-parameter; you can adjust it manually, but in my experience it does not affect performance much. As for the value matrix: we share the same decomposition there, which is why it is reused.
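To make the sharing described in the reply concrete, here is a minimal sketch of a batched Kronecker reconstruction under assumed shapes. The helper matches the `kronecker_product_einsum_batched` name used in the question, but `phm_rule1`, `Wq`, `Wv`, and all dimensions are illustrative assumptions:

```python
import torch

def kronecker_product_einsum_batched(A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Batched Kronecker product: (n, p, q) x (n, r, s) -> (n, p*r, q*s)."""
    n, p, q = A.shape
    _, r, s = B.shape
    return torch.einsum("npq,nrs->nprqs", A, B).reshape(n, p * r, q * s)

# Assumed shapes: n Kronecker factors reconstructing a 768 x 768 update.
n = 4
phm_rule1 = torch.randn(n, 8, 8)   # shared "slow" factors (assumption)
Wq = torch.randn(n, 96, 96)        # "fast" factors for the query update
Wv = torch.randn(n, 96, 96)        # "fast" factors for the value update

delta_q = kronecker_product_einsum_batched(phm_rule1, Wq).sum(0)
# Per the reply, the decomposition is shared, so the value update reuses
# phm_rule1 rather than introducing a separate phm_rule2:
delta_v = kronecker_product_einsum_batched(phm_rule1, Wv).sum(0)
assert delta_q.shape == delta_v.shape == (768, 768)
```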