push trained model weights for 0_baseline_LSTM and 2_multitask_dense … #178
Conversation
…with new features from [USGS-R#177]
This looks great - thank you, Galen!
In 0dbac60, I wondered whether we may as well keep the multitask models commented out in 2a_model.R since the training weights only include the 0_baseline_LSTM and the 2_metab_dense models. But I think it's fine to leave them - if I understand the snakemake workflow, when building the pipeline the models would not re-train for the baseline and metab_dense models, but would trigger training for the two multitask models. Is that right, @jsadler2?
```diff
@@ -1,8 +1,8 @@
-exp_name: "0_baseline_LSTM"
+exp_name: 0_baseline_LSTM
```
I specify that these should be quoted in the function that builds the config files, so I was surprised to see that the quoted formatting gets lost when built on Tallgrass. I've played with this a bit before, and I think if we updated the version of the {yaml} R package and added it to the container, these config files would keep the quoted format as before. I've added a note in #158.

It's good to know that the models still run even though these new config files don't have quotes around `exp_name` and `y_vars`.
I think Python will automatically read these as strings since they aren't numbers.
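For anyone curious why the unquoted values still work: YAML's implicit typing only treats an unquoted scalar as a number if the whole token parses as one; anything else falls back to a string. A simplified Python sketch of that resolution rule (an illustration only, not a real YAML resolver, which handles more cases like booleans and nulls):

```python
def yaml_scalar_type(token: str):
    """Simplified YAML implicit-typing rule: an unquoted scalar is a
    number only if it fully parses as one; otherwise it is a string."""
    for cast in (int, float):
        try:
            cast(token)
            return cast
        except ValueError:
            pass
    return str
```

So `0_baseline_LSTM` resolves to a string (the letters make numeric parsing fail), which is why dropping the quotes doesn't break the config.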
I've linked #176 to this PR since we've committed training weights for 10 model replicates here.
I just have one question. Otherwise it looks good.
```r
#the 1_ models use the same model and therefore the same Snakefile
#as the 0_baseline_LSTM run
#list(model_id = "1_metab_multitask",
#     snakefile_dir = "0_baseline_LSTM",
#     config_path = stringr::str_remove(p2a_config_metab_multitask_yml, "2a_model/src/models/")),
#list(model_id = "1a_multitask_do_gpp_er",
#     snakefile_dir = "0_baseline_LSTM",
#     config_path = stringr::str_remove(p2a_config_1a_metab_multitask_yml, "2a_model/src/models/")),
```
I commented these lines out since we didn't train new model weights for these models.
This undoes 0dbac60. @galengorski - was there a reason you uncommented the 1 and 1a models?
…with new features from #177. The only manual edit that I made was to change the `num_replicates` parameter from 1 to 10 in the `_targets.R` file, then I ran the model to create the weights.