Replies: 1 comment
Hi @mfoglio, the file `config/local/default.yaml` currently looks like this:

```yaml
# @package _global_
trainer: cpu
datamodule:
  batch_size: 2
  num_workers: 1
```

You'll need to change it to look something like this:

```yaml
# @package _global_
defaults:
  - override /trainer: cpu

datamodule:
  batch_size: 2
  num_workers: 1
```
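To expand on why (this reasoning and the file contents below are my assumptions for illustration, not taken from the thread): a top-level `trainer: cpu` entry in a `# @package _global_` file is merged into the config as a plain value; it does not select the `cpu` option of the `trainer` config group, so `config/trainer/cpu.yaml` is never composed and the `_target_` it inherits never appears. Selecting a group option from inside another config file has to go through the defaults list, which is what `override /trainer: cpu` does. A minimal sketch of what such a trainer group might contain, assuming PyTorch Lightning 1.x-style `Trainer` arguments:

```yaml
# config/trainer/default.yaml -- hypothetical sketch, not the actual file
_target_: pytorch_lightning.Trainer
gpus: 1
max_epochs: 10
```

```yaml
# config/trainer/cpu.yaml -- hypothetical sketch, not the actual file
# inherit everything from default.yaml (including _target_), then override the device
defaults:
  - default

gpus: 0
```

This also explains why `trainer=cpu` on the command line works even without the fix: command-line overrides of a config group are applied to the defaults list directly.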
-
I am using Hydra to set up a PyTorch Lightning project to train a neural network. By default, I'd like to train on the GPU with several dataloader workers and a larger batch size.
I also want to create an optional config file for the settings that need to be used when training locally: when running the project locally, I need to change those three parameters and train on the CPU with 1 worker and `batch_size=2`. The idea is that by simply starting the script with the parameter `local=default`, the script will use all the settings needed to run the code locally. So far, I have achieved everything I wanted except using the CPU.
Here's a simplified structure of my Hydra files (a sketch of how they compose follows the list):

- `config/train.yaml` (entrypoint)
- `config/datamodule/vehicles.yaml`
- `config/trainer/default.yaml`
- `config/trainer/cpu.yaml` (alternative to the default above)
- `config/local/default.yaml`
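A minimal sketch of how an entrypoint like this is typically wired up, assuming Hydra 1.1-style defaults (every value below is an assumption for illustration, not the actual project files):

```yaml
# config/train.yaml -- hypothetical sketch of the entrypoint
defaults:
  - trainer: default      # composes config/trainer/default.yaml into cfg.trainer
  - datamodule: vehicles  # composes config/datamodule/vehicles.yaml into cfg.datamodule
  - local: null           # nothing is loaded unless local=... is passed on the CLI
  - _self_
```

With a layout like this, `local=default` on the command line selects `config/local/default.yaml`, whose `# @package _global_` header lets it override keys anywhere in the config tree.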
If I add the line `trainer: cpu` to `config/local/default.yaml`, I get an error when accessing `cfg.trainer._target_`.
Why would that be? I thought that the two defaults-list lines in `config/trainer/cpu.yaml` would still load the `_target_` value (from `config/trainer/default.yaml`) even when `cpu` is selected via `config/local/default.yaml`.
However, if I remove `trainer: cpu` from `config/local/default.yaml` and run the script with the parameters `local=default trainer=cpu`, everything works fine. What am I doing wrong?