Number of threads for CPU training #19
Hi, I'm looking to train a dataset to test DeepNOG with. Could you provide a bit more information on how to do this? I'm working in a cluster environment, if that helps at all. Do I need to make any changes to deepnog train relative to the docs once I set OMP_NUM_THREADS?
Last time I tried this, setting this env variable was sufficient to control the number of CPU cores. In general, I'd suggest training models on GPU rather than CPU for massively improved efficiency.
I'm trying out GPU training, but our GPU partitions here are not properly detected by torch as having CUDA available. I'm investigating that on my end, but wanted to try training a dataset in the meantime :) Thanks for the clarification!
In this case you might want to try training a small model, that is, not on the root or bacteria level, but something like taxon 5794. This might finish in finite time even on CPUs :)
We should document how to set the number of threads for training on CPUs (in case anyone would like to do that).
Basically, it's

export OMP_NUM_THREADS=8

for intra-op parallelism. Alternatively, this may be set programmatically with torch.set_num_threads() and torch.set_num_interop_threads().
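To make the two options above concrete, here is a minimal sketch. It assumes PyTorch is installed; the thread count 8 is just an example value. Note that OMP_NUM_THREADS must be set before torch is imported to take effect, whereas the torch calls work after import.

```python
import os

# Option 1: set the environment variable before torch is imported
# (e.g. export OMP_NUM_THREADS=8 in the shell, or programmatically here).
os.environ["OMP_NUM_THREADS"] = "8"

# Option 2: use torch's thread controls after import.
try:
    import torch
    torch.set_num_threads(8)          # intra-op parallelism
    torch.set_num_interop_threads(8)  # inter-op parallelism
except ImportError:
    # torch not installed in this environment; the env variable alone
    # still limits OpenMP-based parallelism once torch is available.
    pass
```

In a batch script on a cluster, the environment-variable route is usually simplest, since it requires no changes to the deepnog train invocation itself.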