I'm planning on using the config for FFHQ-1024 and just wanted to double-check that it's correct.
Is the "Conditioning Dropout Rate", the same as mapping_dropout_rate, or something else?
Is the number of attention heads (width / head dim) configured automatically based on the "widths" and "depths"?
For the levels (3 local + 2 global attention), I assume I add three {"type": "shifted-window", "d_head": 64, "window_size": 7} entries and two {"type": "global", "d_head": 64} entries?
The type for those self-attention blocks should be neighborhood unless you do want to use Swin, and we used a mapping dropout rate of 0. Apart from that, the config matches what we used.
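For reference, here is a minimal sketch of what the self-attention portion of the config might look like with that correction applied, assuming the k-diffusion image_transformer_v2 config schema. The "self_attns" key and the kernel_size parameter name for neighborhood attention are my assumptions and worth verifying against the repo's example configs; "type", "d_head", and mapping_dropout_rate come from the thread above. The // comments are annotation only and must be stripped for valid JSON.

```jsonc
// Hedged sketch, not the exact released config; verify field names
// other than "type" and "d_head" against the repo's example configs.
"self_attns": [
    {"type": "neighborhood", "d_head": 64, "kernel_size": 7},  // 3 local levels
    {"type": "neighborhood", "d_head": 64, "kernel_size": 7},
    {"type": "neighborhood", "d_head": 64, "kernel_size": 7},
    {"type": "global", "d_head": 64},                           // 2 global levels
    {"type": "global", "d_head": 64}
],
"mapping_dropout_rate": 0.0  // a mapping dropout rate of 0 was used, per the answer above
```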
And to answer your other two questions:
Something else, IIRC.
Yes
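(As a purely illustrative example of that width / head dim relationship: a level with width 512 and d_head 64 would get 512 / 64 = 8 attention heads at that level; the specific width is hypothetical.)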
Could you release the pre-trained HDiT models for FFHQ-1024?