I appreciate your code :)

I would like to raise an issue about the hyperparameters. According to Appendix B of the paper, the hyperparameters of the MLP part should be changed: "mlp_hidden_mults" is multiplied by "input_size", and "l" is the shared embedding dimension.

The code should be changed as below (in `class TabTransformer`, `def __init__()`).

[Original code]

[Modified code]

```python
input_size = (dim * self.num_categories) + num_continuous
l = dim // 8  # to be used for the shared embedding
hidden_dimensions = list(map(lambda t: input_size * t, mlp_hidden_mults))
```
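To make the difference concrete, here is a small standalone sketch comparing the two ways of computing the MLP hidden dimensions. The values (`dim`, `num_categories`, etc.) are illustrative assumptions, not taken from the repo, and the "original" branch assumes the code scaled `mlp_hidden_mults` by `l = input_size // 8`, as the discussion above suggests.

```python
# Illustrative values (assumptions, not from the repo)
dim = 32              # embedding dimension per categorical feature
num_categories = 10   # number of categorical columns
num_continuous = 5    # number of continuous columns
mlp_hidden_mults = (4, 2)

input_size = (dim * num_categories) + num_continuous  # 325

# Assumed original behavior: multipliers applied to l = input_size // 8
l_orig = input_size // 8
hidden_orig = [l_orig * t for t in mlp_hidden_mults]

# Proposed behavior: multipliers applied to input_size;
# l = dim // 8 is the shared-embedding dimension, unrelated to MLP widths
l_shared = dim // 8
hidden_mod = [input_size * t for t in mlp_hidden_mults]

print(hidden_orig)  # [160, 80]
print(hidden_mod)   # [1300, 650]
```

With these numbers the proposed change makes the MLP considerably wider, since it scales with the full flattened input size rather than with `input_size // 8`.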
I think this is confusing because the author of the paper used "l" for two different things: the size of the input and the dimension of the shared embedding.
Someone has already opened an issue about the shared embedding (#12), so the code should be modified with that issue in mind as well.
Please check whether my understanding is correct.
Thank you.