Casting to long #236
Comments
Link is broken for me, but I think casting to long is a habit I still have from learning from the PyTorch tutorial that came out about seven years ago with the initial release :D. I am not sure that it is at all necessary.
@Adamits if I removed these casts and things worked as before, would you approve that PR?
32-bit seems pretty safe to me. My one hesitation is that I think the tensors are going to be 32-bit on most CUDA GPUs anyway, and I don't know whether 64-bit will really be a slowdown on CPU. It could be that I don't have a good understanding, but I don't see many cases where just making it 32-bit would affect us.
I know of no reason to think these are necessary. Closes CUNY-CL#236.
Here we cast encoded tensors to the `Long` dtype, i.e., int64. Why? Surely int32 would be enough. Is there any particular reason for this? Could we save memory by getting rid of the dtype spec?
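For context on the memory question: in PyTorch, `torch.long` is an alias for int64, so index tensors stored as long use twice the bytes of int32. A minimal NumPy sketch (the array name and size are illustrative, not from the codebase) showing the halving:

```python
import numpy as np

# Hypothetical batch of 10,000 encoded token indices (illustrative only).
batch = np.arange(10_000)

as_int64 = batch.astype(np.int64)  # what torch.long corresponds to
as_int32 = batch.astype(np.int32)

print(as_int64.nbytes)  # 80000 bytes
print(as_int32.nbytes)  # 40000 bytes -- half the memory
```

One caveat before dropping the casts: some PyTorch operations have historically required int64 index tensors (e.g., targets for `F.cross_entropy`), so any PR removing the `Long` casts should be verified against the full training loop rather than assumed safe.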