Modification and reorganization of NeuralSnow Extension #1118
Conversation
Actionable comments posted: 3
🧹 Nitpick comments (3)
src/Artifacts.jl (1)

427-438: Well-documented artifact access function.

The function is appropriately documented and leverages the artifact system for reliable weight file access. The docstring could be enhanced with a brief description of the weights format and usage:

```diff
 """
     neural_depth_model_weights_path(; context = nothing)

 Triggers the download of the text file containing the trained weights for the
 neural network for predicting snow depth as described in Charbonneau et al.,
 2025, if not already downloaded, using Julia Artifacts, and returns the path
 to this file.
+
+The weights file contains parameters for a Flux neural network with the architecture
+specified in NeuralDepthModel.jl.
 """
```

src/standalone/Snow/NeuralDepthModel.jl (2)
200-208: Prefer `&&` over `&` for Boolean logic

`&` performs bitwise operations and evaluates both sides; `&&` short-circuits and better conveys intent.

```diff
- if (Δt > 43200) & (isnothing(α))
+ if (Δt > 43200) && isnothing(α)
```
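For illustration, a minimal self-contained snippet (not from the PR) showing the behavioural difference:

```julia
# `&&` never evaluates its second operand when the first is false;
# `&` is bitwise and always evaluates both operands.
f() = (println("second operand evaluated"); true)
false && f()   # prints nothing
false & f()    # prints "second operand evaluated"
```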
263-270: Docstring signature drift

The docstring mentions `model::SnowModel`, but the actual method no longer accepts it. Keep them in sync to avoid confusion.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- .buildkite/Project.toml (1 hunks)
- Artifacts.toml (1 hunks)
- Project.toml (2 hunks)
- src/Artifacts.jl (1 hunks)
- src/standalone/Snow/NeuralDepthModel.jl (1 hunks)
- src/standalone/Snow/Snow.jl (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`*`: # CodeRabbit Style Guide (CliMA Inspired)

Leverage CodeRabbit for code reviews aligning with CliMA's practices.
I. Key Areas for CodeRabbit:
- Style: Naming (TitleCase, lowercase_with_underscores), line length (<78), indentation (4 spaces), import order.
- DRY: Flag duplicated code; encourage modularization.
- Docstrings: Flag missing docstrings for modules, structs, functions.
- Tests: Detect missing unit tests (if configured).
- Complexity: Report on cyclomatic complexity.
II. Conventions (CodeRabbit Can Help):
- Naming: Follow CliMA/CMIP conventions. Avoid `l`, `O`, `I` as single-char vars.
- Unicode: Human review needed. Acceptable chars.
III. Documentation (CodeRabbit Flags Deficiencies):
- Docstrings: Describe purpose, inputs, outputs, examples.
IV. Testing (CodeRabbit: Missing Tests):
- Unit tests required for new/modified code.
V. CodeRabbit Config:
- `.coderabbit.yaml`: Customize language, line length, tests.
VI. Workflow:
- Review CodeRabbit's suggestions; investigate, provide context, address issues, improve config.
VII. Abandoned Code: Delete or clearly mark as deprecated.
VIII. Reviewer Responsibilities: Ensure adherence to best practices.
Project.toml
Artifacts.toml
🪛 GitHub Actions: JuliaFormatter
src/standalone/Snow/NeuralDepthModel.jl
[error] 1-400: Process completed with exit code 1, indicating a failure. The diff shows multiple code-style and syntax adjustments, including a missing newline at end of file and trailing commas, which likely caused the pipeline failure.
⏰ Context from checks skipped due to timeout of 90000ms (9)
- GitHub Check: ci 1.11 - windows-latest
- GitHub Check: ci 1.11 - ubuntu-latest
- GitHub Check: ci 1.10 - windows-latest
- GitHub Check: downstream ClimaCoupler.jl (1.11)
- GitHub Check: downstream ClimaCoupler.jl (1.10)
- GitHub Check: test (1.11)
- GitHub Check: docbuild
- GitHub Check: test (1.10)
- GitHub Check: ci 1.10 - ubuntu-latest
🔇 Additional comments (6)
.buildkite/Project.toml (1)

47-47: Appropriate version expansion for Flux.

Expanding Flux compatibility to include version 0.16 aligns with the updated requirements in the main Project.toml file.
Project.toml (2)

13-13: Good transition from weakdep to direct dependency.

Moving Flux from weakdeps to direct deps is appropriate since the neural network model is now integrated directly into the Snow module.

29-29: Properly removed Flux from weakdeps.

The line is correctly empty after removing Flux from the weakdeps section.
Artifacts.toml (1)

189-194: Well-structured artifact entry for neural network weights.

The artifact entry includes all necessary components: git-tree-sha1, sha256 checksum, and download URL.
src/standalone/Snow/Snow.jl (1)

696-696: Properly integrates the NeuralDepthModel.

Inclusion of the new neural model file is correctly positioned after the other parameterization includes.
src/standalone/Snow/NeuralDepthModel.jl (1)

259-260: Guard against zero snow-cover fraction

`min(z, swe / max(scf, eps(FT)))` works, yet if `scf` is exactly zero the ratio still evaluates but may propagate NaNs upstream when both `swe` and `scf` are zero. Consider an early return or clearer handling of snow-free points.
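A minimal sketch of such an early return, with a hypothetical helper name (`effective_depth` is not in the PR):

```julia
# Hypothetical illustration: return zero depth at snow-free points
# instead of dividing by a clamped snow-cover fraction.
function effective_depth(z::FT, swe::FT, scf::FT) where {FT <: AbstractFloat}
    scf <= eps(FT) && return zero(FT)  # snow-free: avoid 0/0-style artifacts
    return min(z, swe / scf)
end
```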
Actionable comments posted: 1
🧹 Nitpick comments (1)
docs/tutorials/standalone/Snow/base_tutorial.jl (1)

82-91: Redundant re-assignment of `nfeatures`

`nfeatures` is computed on line 82 and then recomputed (same expression) on lines 89-91. Removing the duplicate keeps the tutorial concise and avoids the impression that two different values are intended.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- Project.toml (1 hunks)
- docs/Project.toml (1 hunks)
- docs/tutorials/standalone/Snow/base_tutorial.jl (5 hunks)
- ext/NeuralSnowExt.jl (1 hunks)
- ext/neural_snow/ModelTools.jl (9 hunks)
- ext/neural_snow/NeuralSnow.jl (0 hunks)
- src/Artifacts.jl (1 hunks)
- test/Project.toml (1 hunks)
💤 Files with no reviewable changes (1)
- ext/neural_snow/NeuralSnow.jl
✅ Files skipped from review due to trivial changes (4)
- Project.toml
- test/Project.toml
- docs/Project.toml
- ext/NeuralSnowExt.jl
🚧 Files skipped from review as they are similar to previous changes (1)
- src/Artifacts.jl
⏰ Context from checks skipped due to timeout of 90000ms (9)
- GitHub Check: ci 1.11 - windows-latest
- GitHub Check: ci 1.11 - ubuntu-latest
- GitHub Check: ci 1.10 - windows-latest
- GitHub Check: test (1.11)
- GitHub Check: downstream ClimaCoupler.jl (1.11)
- GitHub Check: docbuild
- GitHub Check: downstream ClimaCoupler.jl (1.10)
- GitHub Check: test (1.10)
- GitHub Check: ci 1.10 - ubuntu-latest
🔇 Additional comments (1)
ext/neural_snow/ModelTools.jl (1)
61-72: Shape assumption in `connection_optimized` could mis-align columns

`[upperb(pred, input) lowerb(pred, input) pred]` performs an `hcat`. If any of the supplied arrays are vectors rather than 1 × K matrices, Julia will turn them into column vectors, giving a K × 3 matrix instead of the expected 3 × K. Consider enforcing row-matrix outputs or explicitly using `hcat` on transposed vectors to guarantee correct orientation.
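A quick illustration of the pitfall (self-contained, not from the PR):

```julia
# Square-bracket concatenation treats plain vectors as columns.
a, b, c = [1.0, 2.0], [3.0, 4.0], [5.0, 6.0]
size([a b c])          # (2, 3): hcat stacks the vectors as columns
size(vcat(a', b', c')) # (3, 2): transposing first yields one row per array
```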
Actionable comments posted: 3
♻️ Duplicate comments (2)
src/standalone/Snow/NeuralDepthModel.jl (1)

175-181: ⚠️ Potential issue: `l3.bias` assignment breaks on immutable struct

`Dense` is immutable – you can mutate the existing bias array, but you cannot re-bind the field. Using plain `=` will raise `setfield!` at runtime.

```diff
- network[:pred].layers.layers[:l3].bias = data["l3_BIAS"]
+ network[:pred].layers.layers[:l3].bias .= data["l3_BIAS"]
```
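A minimal demo of the distinction (assuming only that Flux is installed):

```julia
using Flux
layer = Dense(3 => 2)
layer.bias .= [0.5, -0.5]    # OK: mutates the existing bias array in place
# layer.bias = [0.5, -0.5]   # ERROR: setfield!: immutable struct of type Dense cannot be changed
```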
ext/neural_snow/ModelTools.jl (1)

171-182: ⚠️ Potential issue: `convert_model!` still returns `nothing` → every caller gets `nothing`

Previous review already flagged this. `make_model(...)` and `make_model_paper(...)` now return `nothing`, breaking all downstream code.

```diff
-        model = Flux.f32(model)
-    elseif FT == Float64
-        model = Flux.f64(model)
+        return Flux.f32(model)
+    elseif FT == Float64
+        return Flux.f64(model)
     else
         error("…")
     end
-end # ← dead code; the function falls through returning `nothing`
+    return model
```
🧹 Nitpick comments (2)

src/standalone/Snow/NeuralDepthModel.jl (1)

73-82: Element-wise broadcast already handled – drop `@.` for clarity

`@. (input[7,:]' > 0) * relu(pred)` is fine, but the surrounding broadcast macro also dots the `'` transpose, creating an unnecessary allocation. Consider explicit `.*` and keep `@.` off to avoid hidden surprises when someone tweaks the expression.
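A sketch of the explicit form with illustrative data (`input` as features × samples, `pred` as the 1 × K output; both are assumptions for the demo):

```julia
using Flux  # for relu
input = rand(8, 5)         # illustrative feature matrix
pred = rand(1, 5) .- 0.5   # illustrative 1 × K prediction
mask = input[7, :]' .> 0   # 1 × K row of Bools
out = mask .* relu.(pred)  # explicit broadcasts only; no hidden dots from @.
```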
ext/neural_snow/ModelTools.jl (1)

381-383: `setoutscale!` mutates weight but ignores type mismatch

If `dtype != eltype(model)`, the assignment silently widens/narrows only one element. Recommend:

```julia
@assert eltype(model[:apply_relus].weight) == dtype
```

or just drop the `dtype` kwarg and infer from the weight.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- .buildkite/Manifest-v1.11.toml (12 hunks)
- .buildkite/Project.toml (1 hunks)
- Project.toml (2 hunks)
- docs/Manifest-v1.11.toml (16 hunks)
- docs/Project.toml (1 hunks)
- docs/tutorials/standalone/Snow/analysis_tools.jl (10 hunks)
- docs/tutorials/standalone/Snow/base_tutorial.jl (5 hunks)
- docs/tutorials/standalone/Snow/data_tutorial.jl (1 hunks)
- docs/tutorials/standalone/Snow/display_tools.jl (1 hunks)
- experiments/standalone/Snow/snowmip_simulation.jl (1 hunks)
- ext/neural_snow/ModelTools.jl (9 hunks)
- src/Artifacts.jl (1 hunks)
- src/standalone/Snow/NeuralDepthModel.jl (1 hunks)
- src/standalone/Snow/snow_parameterizations.jl (1 hunks)
- test/Project.toml (1 hunks)
✅ Files skipped from review due to trivial changes (4)
- src/standalone/Snow/snow_parameterizations.jl
- docs/tutorials/standalone/Snow/data_tutorial.jl
- docs/tutorials/standalone/Snow/analysis_tools.jl
- docs/Manifest-v1.11.toml
🚧 Files skipped from review as they are similar to previous changes (5)
- docs/Project.toml
- test/Project.toml
- Project.toml
- .buildkite/Project.toml
- src/Artifacts.jl
⏰ Context from checks skipped due to timeout of 90000ms (11)
- GitHub Check: ci 1.11 - windows-latest
- GitHub Check: ci 1.11 - macOS-latest
- GitHub Check: ci 1.11 - ubuntu-latest
- GitHub Check: ci 1.10 - windows-latest
- GitHub Check: test (1.11)
- GitHub Check: downstream ClimaCoupler.jl (1.11)
- GitHub Check: ci 1.10 - macOS-latest
- GitHub Check: docbuild
- GitHub Check: test (1.10)
- GitHub Check: downstream ClimaCoupler.jl (1.10)
- GitHub Check: ci 1.10 - ubuntu-latest
🔇 Additional comments (13)
experiments/standalone/Snow/snowmip_simulation.jl (1)

50-50: Neural model import path updated

Updated to use the integrated Snow module's NeuralDepthModel rather than the previous extension. This matches the PR's goal of moving neural network code directly into the Snow module.
docs/tutorials/standalone/Snow/display_tools.jl (1)

36-36: Removed specialized visualization functions

Several visualization functions have been removed (show_net_weights, hyperparam_plots, dplot), likely because they were specific to the old neural network implementation.
.buildkite/Manifest-v1.11.toml (4)

3-5: Julia version downgrade

Rolled back from 1.11.4 to 1.11.3. This change appears deliberate, but verify it's intentional.

416-433: Updated dependency configuration for ClimaLand

Flux now appears as a direct dependency rather than a weak dependency, and BSON has been removed. This aligns with the core changes in the PR to improve GPU support.

1000-1003: Flux version update

Updated from 0.15.2 to 0.16.3, which should provide improved GPU support for the neural network models.

3079-3096: Zygote dependency restructuring

Removed GPUArrays and Requires while adding Atom as a weak dependency.
docs/tutorials/standalone/Snow/base_tutorial.jl (7)

46-46: Removed unnecessary imports

Removed Flux and BSON imports, consistent with the modified architecture.

101-103: Improved data source management

Replaced hardcoded URLs with artifact-based retrieval through ClimaLand.Artifacts functions. This is more maintainable.

126-163: Added enhanced model creation workflow

New sample code shows how to define custom boundary functions and build models with specific constraints. This directly supports the PR goal of facilitating new constrained networks.

164-176: Added paper model reference implementation

Added a reference to the faster predefined method to recreate the model from the paper. This provides both flexibility and convenience.

179-183: Clarified model scaling function compatibility

Added important notes about which scaling functions work with which model creation methods. This prevents potential user errors.

186-192: Updated training documentation

Added notes about compatibility with custom models and fixed-weight layers, expanding the usefulness of the tutorial.

239-251: Added model persistence functionality

New section on saving and loading model weights, including artifact-based retrieval for the paper model. This improves reusability.
Actionable comments posted: 0
🔭 Outside diff range comments (1)
ext/neural_snow/ModelTools.jl (1)

747-758: ⚠️ Potential issue: Default callback value throws a `MethodError`

`cb = Nothing` and the later `cb()` (l. 781) raises `MethodError: objects of type Nothing are not callable`.

```diff
-    cb = Nothing,
+    cb = () -> nothing,
```

Guarantees a no-op callback when the caller does not supply one.
🧹 Nitpick comments (5)

ext/neural_snow/ModelTools.jl (4)

61-72: Avoid unnecessary allocation in `connection_optimized`

`vcat` allocates a new matrix on every forward pass. For small nets this is fine, but once the model is called inside a solver loop this becomes a hot-spot.

```diff
-function connection_optimized(upperb::Function, lowerb::Function,
-    pred::AbstractArray{T}, input::AbstractArray{T})::AbstractArray{T} where {T<:AbstractFloat}
-    return [
-        upperb(pred, input)
-        lowerb(pred, input)
-        pred
-    ]
+function connection_optimized(upperb::Function, lowerb::Function,
+    pred::AbstractArray{T}, input::AbstractArray{T}) where {T<:AbstractFloat}
+    upper = upperb(pred, input)
+    lower = lowerb(pred, input)
+    out = similar(pred, size(pred, 1) + 2, size(pred, 2)) # 2 extra rows
+    @views out[1, :] .= upper
+    @views out[2, :] .= lower
+    @views out[3:end, :] .= pred
+    return out
 end
```

This keeps the allocation to a single array (which can even be reused by the caller) and removes intermediate temporaries.
166-183: `convert_model!` is not in-place – rename or drop the bang

The function re-binds `model` and then returns the new object; the original reference is untouched. In Julia the convention is that a trailing `!` signals in-place mutation. Consider:

```diff
-function convert_model!(model, FT::Type{<:AbstractFloat})
+function convert_model(model, FT::Type{<:AbstractFloat})
```

or mutate the weights and return `nothing`. Misleading naming makes downstream code harder to reason about.
461-473: Robustness of `freeze_fixed_layers!` checks

- `hasproperty.([model.layers], …)` broadcasts over a length-1 Vector, producing a 3 × 1 matrix – it works but is needlessly convoluted and allocates.
- Failure messages mention "Flux Setup State" even when the model is at fault, which can confuse users.

```diff
-@assert all(hasproperty(model, s) for s in (:apply_relus, :apply_upper, :sumlayer)) "...msg..."
-@assert all(hasproperty(opt_state.layers, s) for s in (:apply_relus, :apply_upper, :sumlayer)) "...msg..."
+required = (:apply_relus, :apply_upper, :sumlayer)
+@assert all(s -> hasproperty(model, s), required) "Model missing fixed layers $(required)."
+@assert all(s -> hasproperty(opt_state.layers, s), required) "Optimizer state missing layers $(required)."
```

Cleaner, allocation-free, and the error message tells exactly which symbol is missing.
803-820: Order of layers in `save_predictive_model_weights` is non-deterministic

`collect(keys(layers))` relies on `NamedTuple` field order, which is stable today but unspecified by Julia, and will change if the predictive sub-chain is reordered.

```diff
-for layer in collect(keys(layers))[2:end]
+for layer in filter(!=(:scale), sort(keys(layers)))
```

A deterministic ordering prevents silent mismatches when weights are re-loaded in another environment.
test/standalone/Snow/tool_tests.jl (1)

305-308: Hard-coded URLs hinder offline testing

The tests pull weight files from Box at runtime. CI and users without internet will see brittle failures unrelated to code quality. Consider vendoring tiny test weights into `test/artifacts/` and loading them from the filesystem.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- docs/tutorials/standalone/Snow/base_tutorial.jl (5 hunks)
- ext/neural_snow/DataTools.jl (2 hunks)
- ext/neural_snow/ModelTools.jl (9 hunks)
- src/Artifacts.jl (1 hunks)
- src/standalone/Snow/NeuralDepthModel.jl (1 hunks)
- src/standalone/Snow/snow_parameterizations.jl (1 hunks)
- test/standalone/Snow/tool_tests.jl (8 hunks)
✅ Files skipped from review due to trivial changes (1)
- src/standalone/Snow/snow_parameterizations.jl
🚧 Files skipped from review as they are similar to previous changes (3)
- docs/tutorials/standalone/Snow/base_tutorial.jl
- src/Artifacts.jl
- src/standalone/Snow/NeuralDepthModel.jl
🧰 Additional context used
🧠 Learnings (1)
ext/neural_snow/ModelTools.jl (1)
Learnt from: a-charbon
PR: CliMA/ClimaLand.jl#1118
File: ext/neural_snow/ModelTools.jl:460-466
Timestamp: 2025-04-22T22:45:03.223Z
Learning: In Flux versions 0.15 and newer, the `freeze!` function must be applied to components of the optimization state tree returned by `setup`, not directly to model components. Attempting to apply `freeze!` directly to model layers will result in an ArgumentError.
⏰ Context from checks skipped due to timeout of 90000ms (11)
- GitHub Check: ci 1.11 - windows-latest
- GitHub Check: ci 1.11 - macOS-latest
- GitHub Check: ci 1.11 - ubuntu-latest
- GitHub Check: ci 1.10 - windows-latest
- GitHub Check: downstream ClimaCoupler.jl (1.11)
- GitHub Check: ci 1.10 - macOS-latest
- GitHub Check: test (1.11)
- GitHub Check: ci 1.10 - ubuntu-latest
- GitHub Check: downstream ClimaCoupler.jl (1.10)
- GitHub Check: test (1.10)
- GitHub Check: docbuild
🔇 Additional comments (2)
ext/neural_snow/DataTools.jl (2)

32-32: Good shift to standard library dependency

Replacing `StatsBase` with `Statistics` reduces external dependencies by leveraging Julia's standard library.

607-609: Manual IQR calculation preserves functionality

The replacement of `iqr()` with manual quantile calculation maintains the same functionality while eliminating the dependency on `StatsBase`.
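For reference, a minimal sketch of such a manual IQR (the helper name is illustrative):

```julia
using Statistics
manual_iqr(x) = quantile(x, 0.75) - quantile(x, 0.25)  # interquartile range
manual_iqr(1:100)  # 49.5
```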
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- .buildkite/Manifest-v1.11.toml (17 hunks)
- Project.toml (2 hunks)
- docs/Manifest-v1.11.toml (20 hunks)
- docs/tutorials/standalone/Snow/analysis_tools.jl (11 hunks)
- docs/tutorials/standalone/Snow/base_tutorial.jl (5 hunks)
- docs/tutorials/standalone/Snow/data_tutorial.jl (1 hunks)
- experiments/integrated/fluxnet/data_tools.jl (1 hunks)
- experiments/integrated/fluxnet/ozark_pft.jl (0 hunks)
- experiments/integrated/fluxnet/plot_utils.jl (1 hunks)
- experiments/integrated/fluxnet/run_fluxnet.jl (1 hunks)
- experiments/integrated/performance/integrated_timestep_test.jl (2 hunks)
- experiments/standalone/Vegetation/timestep_test.jl (1 hunks)
- lib/ClimaLandSimulations/src/Fluxnet/Fluxnet.jl (1 hunks)
- test/standalone/Snow/tool_tests.jl (8 hunks)
💤 Files with no reviewable changes (1)
- experiments/integrated/fluxnet/ozark_pft.jl
✅ Files skipped from review due to trivial changes (4)
- lib/ClimaLandSimulations/src/Fluxnet/Fluxnet.jl
- experiments/integrated/fluxnet/run_fluxnet.jl
- docs/tutorials/standalone/Snow/data_tutorial.jl
- docs/Manifest-v1.11.toml
🚧 Files skipped from review as they are similar to previous changes (5)
- Project.toml
- docs/tutorials/standalone/Snow/analysis_tools.jl
- test/standalone/Snow/tool_tests.jl
- docs/tutorials/standalone/Snow/base_tutorial.jl
- .buildkite/Manifest-v1.11.toml
🔇 Additional comments (5)
experiments/integrated/performance/integrated_timestep_test.jl (2)

44-44: Removed StatsBase dependency in favor of custom percentile implementation.

The import of `percentile` from `StatsBase` is now commented out, with a custom implementation added below.

70-72: Clean implementation of percentile function using the base Statistics package.

Good replacement that maintains the same functionality while reducing external dependencies.
experiments/integrated/fluxnet/plot_utils.jl (1)

104-105: Replaced StatsBase functions with base Statistics equivalents.

The implementation now uses built-in functions for RMSD and R² calculations:

- RMSD: Manual implementation with `sqrt(mean(abs2, ...))`
- R²: Using the base `cor` function

This change successfully removes the StatsBase dependency while maintaining identical functionality.
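A sketch of the described replacements (helper names are illustrative, not the file's actual code):

```julia
using Statistics
rmsd(a, b) = sqrt(mean(abs2, a .- b))  # root-mean-square deviation
r2(a, b) = cor(a, b)^2                 # R² via the base correlation function
```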
experiments/integrated/fluxnet/data_tools.jl (2)

9-9: Replaced StatsBase import with the Statistics package.

Clean replacement of the dependency, aligning with similar changes across other files.

14-23: Well-documented percentile function implementation.

Good addition of a utility function with:

- Clear documentation explaining its purpose
- Input validation to ensure the percentile is between 0-100
- Correct implementation using `quantile`

This implementation maintains the original functionality while reducing external dependencies.
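A sketch matching that description (the file's exact implementation may differ):

```julia
using Statistics

# Compute the p-th percentile (0 ≤ p ≤ 100) of collection `x`.
function percentile(x, p)
    @assert 0 <= p <= 100 "percentile must be between 0 and 100"
    return quantile(x, p / 100)
end
```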
Actionable comments posted: 1
🧹 Nitpick comments (3)
ext/neural_snow/WebTools.jl (2)

19-19: URL validation could be more robust

The URL validation is basic – only checking for http/https prefixes. Consider using Julia's `URIs` package for more comprehensive validation.

```diff
-    @assert any(startswith.([url], ["http://", "https://"])) "Please provide a valid HTTP url.\n" #is there a better way to check the validity of the url without importing any more packages?
+    using URIs
+    @assert try URI(url); true; catch; false; end "Please provide a valid HTTP url.\n"
```
70-88: Consider handling malformed data more robustly

The parsing assumes well-formed data with consistent delimiters. Consider adding validation for inconsistent column counts across rows.
ext/neural_snow/ModelTools.jl (1)

844-896: Consider using more structured error handling for weight loading

The weight loading logic assumes a specific file format. Consider adding more validation of the parsed data structure before attempting to assign weights.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (19)
- .buildkite/Manifest-v1.11.toml (22 hunks)
- .buildkite/Project.toml (2 hunks)
- Project.toml (2 hunks)
- docs/Manifest-v1.11.toml (20 hunks)
- docs/Project.toml (2 hunks)
- docs/tutorials/standalone/Snow/base_tutorial.jl (5 hunks)
- docs/tutorials/standalone/Snow/data_tutorial.jl (2 hunks)
- experiments/integrated/fluxnet/data_tools.jl (1 hunks)
- experiments/integrated/fluxnet/plot_utils.jl (1 hunks)
- experiments/integrated/fluxnet/run_fluxnet.jl (1 hunks)
- experiments/integrated/performance/integrated_timestep_test.jl (1 hunks)
- experiments/standalone/Vegetation/timestep_test.jl (0 hunks)
- ext/neural_snow/DataTools.jl (6 hunks)
- ext/neural_snow/ModelTools.jl (9 hunks)
- ext/neural_snow/WebTools.jl (1 hunks)
- lib/ClimaLandSimulations/Project.toml (0 hunks)
- lib/ClimaLandSimulations/src/Fluxnet/Fluxnet.jl (1 hunks)
- test/Project.toml (2 hunks)
- test/standalone/Snow/tool_tests.jl (10 hunks)
💤 Files with no reviewable changes (2)
- experiments/standalone/Vegetation/timestep_test.jl
- lib/ClimaLandSimulations/Project.toml
✅ Files skipped from review due to trivial changes (1)
- docs/tutorials/standalone/Snow/data_tutorial.jl
🚧 Files skipped from review as they are similar to previous changes (14)
- lib/ClimaLandSimulations/src/Fluxnet/Fluxnet.jl
- experiments/integrated/fluxnet/run_fluxnet.jl
- docs/Project.toml
- .buildkite/Project.toml
- experiments/integrated/performance/integrated_timestep_test.jl
- experiments/integrated/fluxnet/data_tools.jl
- experiments/integrated/fluxnet/plot_utils.jl
- Project.toml
- test/Project.toml
- ext/neural_snow/DataTools.jl
- test/standalone/Snow/tool_tests.jl
- docs/tutorials/standalone/Snow/base_tutorial.jl
- .buildkite/Manifest-v1.11.toml
- docs/Manifest-v1.11.toml
🧰 Additional context used
🧠 Learnings (1)
ext/neural_snow/ModelTools.jl (1)
Learnt from: a-charbon
PR: CliMA/ClimaLand.jl#1118
File: ext/neural_snow/ModelTools.jl:460-466
Timestamp: 2025-04-22T22:45:03.223Z
Learning: In Flux versions 0.15 and newer, the `freeze!` function must be applied to components of the optimization state tree returned by `setup`, not directly to model components. Attempting to apply `freeze!` directly to model layers will result in an ArgumentError.
🔇 Additional comments (7)
ext/neural_snow/WebTools.jl (1)

1-88: Clean lightweight web utilities implementation

This module provides a good lightweight alternative to HTTP and CSV dependencies, with proper error handling and type conversion. The CSV parsing with automatic type inference is particularly well-implemented.

ext/neural_snow/ModelTools.jl (6)

172-184: Fixed: `convert_model!` now returns the converted model

The function now properly returns the model after conversion, fixing the issue where it previously returned `nothing`.

462-474: Improved error messaging with assertions

The added assertions in `freeze_fixed_layers!` will make debugging easier by providing clear error messages when the model structure doesn't match expectations.

25-36: Well-designed custom `MulLayer` for optimization

This custom Flux layer implementation avoids unnecessary allocations when no activation or bias is needed. Good optimization.
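A hypothetical reconstruction of such a layer (the actual `MulLayer` definition lives in ModelTools.jl and may differ):

```julia
using Flux

# Matrix multiply only: no bias vector, no activation closure.
struct MulLayer{M <: AbstractMatrix}
    weight::M
end
Flux.@layer MulLayer             # register `weight` with Flux as a trainable parameter
(l::MulLayer)(x) = l.weight * x
```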
62-73: Good abstraction for boundary enforcement

The `connection_optimized` function provides a clean abstraction for combining upper and lower boundary functions with predictions.

413-429: Well-designed boundary scaling function

The `setboundscale!` function provides a clean interface for scaling boundary enforcement, with good error handling for invalid bounds.

187-263: Flexible model construction with constraint options

The redesigned `make_model` function allows for custom predictive models with arbitrary boundary functions, enhancing flexibility.
Actionable comments posted: 3
🔭 Outside diff range comments (1)
.buildkite/Project.toml (1)

18-23: 🛠️ Refactor suggestion: Stale `HTTP` entry contradicts the new lightweight goal

`Downloads` has been added (👍), but `HTTP` is still listed in `[deps]` even though the PR description says it was dropped. If nothing in the buildkite environment uses `HTTP` any more, please remove it to avoid an unnecessary compile + pre-compilation time hit and inadvertent re-introduction.

```diff
-HTTP = "cd3eb016-35fb-5094-929b-558a96fad6f3"
```
♻️ Duplicate comments (1)

src/standalone/Snow/NeuralDepthModel.jl (1)

114-117: `false` is not a bias vector – Dense constructor will fail

(See earlier review; still present.) `Dense(W, bias, σ)` expects `bias::AbstractVector`. Passing a Boolean lands in an ambiguous or missing method and raises `MethodError`.

```diff
-        apply_relus = Dense(scale_and_relus, false, relu),
-        apply_upper = Dense(get_min, false, relu),
-        sumlayer = Dense(final_mul, false),
+        apply_relus = Dense(scale_and_relus; bias = false, σ = relu),
+        apply_upper = Dense(get_min; bias = false, σ = relu),
+        sumlayer = Dense(final_mul; bias = false),
```

Using keyword arguments makes intent explicit and matches the supported signature.
🧹 Nitpick comments (4)

ext/neural_snow/WebTools.jl (2)

18-32: `@assert` is user-facing; prefer `ArgumentError`

`@assert` is compiled out with `--compile=min --check-bounds=no` and gives a rather cryptic stack trace. Use an explicit condition + `throw(ArgumentError(...))`.

70-92: Potential type inflation + unnecessary `Array{Vector}`

- `parsed_cols = Array{Vector}(undef, length(header))` stores un-typed vectors → type-unstable DataFrame construction.
- `promote_type(typeof.(parsed)...)` already includes `Missing`; wrapping again in `Union{Missing,T}` duplicates `Missing`.

Not critical for small CSVs, but can hurt large loads. Consider:

```diff
-    parsed_cols = Vector{Vector}(undef, length(header))
+    parsed_cols = Vector{Vector}(undef, length(header))
     ...
-    if any(ismissing, parsed)
-        parsed_cols[i] = convert(Vector{Union{Missing, T}}, parsed)
+    if any(ismissing, parsed)
+        parsed_cols[i] = Vector{Union{Missing, T}}(parsed)
```

src/standalone/Snow/AndersonModel.jl (1)
53-66: Vectorised compaction: risk of divide-by-zero

`factor = (W_i > 0.001) ? (exp(B) - 1) / B : FT(1)` – when `B ≈ 0` but `W_i > 0.001`, we can hit overflow / NaN. A safe guard like `abs(B) < eps(FT)` would be safer.
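A sketch of the suggested guard (the function name is illustrative); note that (exp(B) - 1)/B → 1 as B → 0, so returning 1 matches the limit:

```julia
function compaction_factor(W_i::FT, B::FT) where {FT <: AbstractFloat}
    # Fall back to the limiting value 1 for trace SWE or near-zero B:
    (W_i > FT(0.001) && abs(B) >= eps(FT)) || return one(FT)
    return (exp(B) - FT(1)) / B
end
```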
11-12
: Unused importLP
import ClimaLand.Parameters as LP
is never referenced. Consider removing to avoid dead code warnings.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- .buildkite/Manifest-v1.11.toml (22 hunks)
- .buildkite/Project.toml (2 hunks)
- experiments/standalone/Snow/snowmip_simulation.jl (1 hunks)
- ext/neural_snow/WebTools.jl (1 hunks)
- src/standalone/Snow/AndersonModel.jl (1 hunks)
- src/standalone/Snow/NeuralDepthModel.jl (1 hunks)
- src/standalone/Snow/Snow.jl (1 hunks)
- test/standalone/Snow/tool_tests.jl (12 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- src/standalone/Snow/Snow.jl
- experiments/standalone/Snow/snowmip_simulation.jl
- test/standalone/Snow/tool_tests.jl
- .buildkite/Manifest-v1.11.toml
🔇 Additional comments (1)
.buildkite/Project.toml (1)

45-46: Double-check Flux upper bound

`Flux = "0.15, 0.16"` leaves the upper side open. 0.17 (when released) could introduce breaking API changes. Consider capping at `<0.17` unless you have CI coverage for nightly Flux.
Actionable comments posted: 2
♻️ Duplicate comments (2)

src/standalone/Snow/NeuralDepthModel.jl (1)

113-116: ⚠️ Potential issue: `false` is not a bias vector – `Dense` will throw

`Dense(W, b, σ)` expects `b::AbstractVecOrNothing`. Passing `false` triggers a `MethodError` at compile-time.

```diff
-        apply_relus = Dense(scale_and_relus, false, relu),
-        apply_upper = Dense(get_min, false, relu),
-        sumlayer = Dense(final_mul, false),
+        apply_relus = Dense(scale_and_relus; bias = false, σ = relu),
+        apply_upper = Dense(get_min; bias = false, σ = relu),
+        sumlayer = Dense(final_mul; bias = false),
```

Using the keyword form is the supported way to create a bias-free layer.
ext/neural_snow/ModelTools.jl (1)

443-469: Model layer freezing implementation improved based on feedback.

The function now includes assertions to verify model structure before freezing layers, which addresses feedback from previous reviews.
🧹 Nitpick comments (1)

ext/neural_snow/WebTools.jl (1)

84-88: Possible `Any` column type – `promote_type` with `Missing` degrades type stability

`promote_type(typeof.(parsed)...)` returns `Union{Missing, T}` once any element is `missing`, so the subsequent `Union{Missing, T}` wrapping gives `Union{Missing, Union{Missing, T}}` ⇒ collapses to `Any`. A cheaper & type-stable route is to inspect a non-missing sample:

```diff
-    T = promote_type(typeof.(parsed)...)
+    T = eltype(skipmissing(parsed))
```

(This is available since Julia 1.8.) Keeps column eltypes concrete and avoids `Any` vectors.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- experiments/standalone/Snow/snowmip_simulation.jl (2 hunks)
- ext/neural_snow/ModelTools.jl (9 hunks)
- ext/neural_snow/WebTools.jl (1 hunks)
- src/standalone/Snow/AndersonModel.jl (1 hunks)
- src/standalone/Snow/NeuralDepthModel.jl (1 hunks)
- test/standalone/Snow/tool_tests.jl (12 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- experiments/standalone/Snow/snowmip_simulation.jl
🧰 Additional context used
🧠 Learnings (2)
src/standalone/Snow/NeuralDepthModel.jl (2)
Learnt from: a-charbon
PR: CliMA/ClimaLand.jl#1118
File: src/standalone/Snow/NeuralDepthModel.jl:96-112
Timestamp: 2025-04-24T20:24:39.781Z
Learning: In Flux.jl, when defining a layer in a Chain using syntax like `scale = x -> x .* in_scales`, the variable `in_scales` captured in the closure remains accessible as a property of that layer. This allows modifications via `model[:pred].layers.layers[:scale].in_scales .= new_values`, which works even though this behavior might not be expected from typical Julia closures.
Learnt from: a-charbon
PR: CliMA/ClimaLand.jl#1118
File: src/standalone/Snow/NeuralDepthModel.jl:96-112
Timestamp: 2025-04-24T20:24:39.781Z
Learning: In Flux.jl, when defining a layer in a Chain using the syntax `scale = x -> x .* in_scales`, the variable `in_scales` remains accessible as a property of that layer, allowing it to be modified later with syntax like `model[:pred].layers.layers[:scale].in_scales .= new_values`. This is contrary to what might be expected from typical closure behavior in Julia, but is a feature of how Flux handles named functions in Chains.
ext/neural_snow/ModelTools.jl (1)
Learnt from: a-charbon
PR: CliMA/ClimaLand.jl#1118
File: ext/neural_snow/ModelTools.jl:460-466
Timestamp: 2025-04-22T22:45:03.223Z
Learning: In Flux versions 0.15 and newer, the `freeze!` function must be applied to components of the optimization state tree returned by `setup`, not directly to model components. Attempting to apply `freeze!` directly to model layers will result in an ArgumentError.
🔇 Additional comments (14)
ext/neural_snow/ModelTools.jl (14)

5-16: Exports updated to match new functionality.

Added exports match all the new functions implemented in this file, which enhances the module's usability.

18-18: New dependency added.

WebTools.jl is included to support loading models from URLs.

20-39: Good optimization with custom MulLayer.

Custom layer implementation reduces memory allocations when only matrix multiplication is needed without bias or activation.

42-73: Well-designed connection function with clear documentation.

The boundary enforcement mechanism is clearly documented and properly typed with a generic approach.

76-119: Example boundary function with thorough documentation.

The upper bound implementation shows good use of broadcasting and conditional logic. Documentation includes helpful examples.

121-164: Lower bound function implementation looks solid.

Documentation clearly explains how the boundary is calculated and provides examples.

167-184: Model conversion function fixed.

This function properly returns the converted model, addressing the issue mentioned in previous review comments.

187-262: Strong model construction with boundary enforcement.

The model creation function provides a flexible way to build constrained networks with custom boundaries.

265-361: Paper-specific model implementation is well structured.

The specialized constructor includes input validation and default scaling values from the paper.

364-385: Output scaling function implementation is clear.

Good documentation explaining the scaling factor behavior and limitations.

387-424: Boundary scaling function properly documented.

Implementation correctly handles upper and lower bound scaling with appropriate error handling.

763-763: Updated training function to use layer freezing.

Integration of `freeze_fixed_layers!` simplifies the training workflow.

781-816: Model weight saving implementation looks robust.

The function saves predictive component weights in a human-readable format, enhancing interoperability.

818-895: Model weight loading supports both files and URLs.

Implementation properly handles different input sources and includes appropriate error handling.
Actionable comments posted: 0
♻️ Duplicate comments (1)

ext/neural_snow/ModelTools.jl (1)

443-469: Improved layer freezing with assertions

The `freeze_fixed_layers!` function now includes assertions to verify model structure, addressing feedback from previous review comments. The implementation correctly applies freezing to optimizer state components, as required by Flux 0.15+.
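A minimal sketch of freezing through the optimiser state tree, per the Flux 0.15+ requirement (layer names here are illustrative, not from ModelTools):

```julia
using Flux
model = Chain(enc = Dense(4 => 4, relu), head = Dense(4 => 1))
opt_state = Flux.setup(Adam(1e-3), model)
Flux.freeze!(opt_state.layers.enc)  # freeze via the state tree, not the model layer
```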
🧹 Nitpick comments (1)

ext/neural_snow/ModelTools.jl (1)

819-895: Comprehensive weight loading functionality

The `load_model_weights!` function supports both local files and URLs. Good error handling with the try/finally block. However, there's a potential improvement:

```diff
-            push!(current_values, parse.(T, split(line)))
+            # Use the actual element type of the model component being loaded
+            push!(current_values, parse.(eltype(model), split(line)))
```

This ensures that the parsed values match the model's actual element type, avoiding potential precision issues.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- docs/tutorials/standalone/Snow/base_tutorial.jl (5 hunks)
- experiments/standalone/Snow/snowmip_simulation.jl (1 hunks)
- ext/neural_snow/ModelTools.jl (9 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- experiments/standalone/Snow/snowmip_simulation.jl
- docs/tutorials/standalone/Snow/base_tutorial.jl
🧰 Additional context used
🧠 Learnings (1)
ext/neural_snow/ModelTools.jl (1)
Learnt from: a-charbon
PR: CliMA/ClimaLand.jl#1118
File: ext/neural_snow/ModelTools.jl:460-466
Timestamp: 2025-04-22T22:45:03.223Z
Learning: In Flux versions 0.15 and newer, the `freeze!` function must be applied to components of the optimization state tree returned by `setup`, not directly to model components. Attempting to apply `freeze!` directly to model layers will result in an ArgumentError.
🔇 Additional comments (13)
ext/neural_snow/ModelTools.jl (13)

6-6: Exported new boundary scaling function `setboundscale!`

Good to see this new export for setting boundary scaling parameters. This complements the existing `setoutscale!` function.

12-16: New exports expand module capabilities

Added exports for paired time series, model weight management, and layer freezing functions enhance the module's functionality and provide better support for model persistence.

18-18: WebTools.jl added for URL handling

Including WebTools.jl enables loading model weights from URLs, improving model distribution capabilities.

20-39: New MulLayer improves performance

The custom MulLayer reduces memory allocations when only matrix multiplication is needed without bias or activation. Good optimization for simpler computation paths.

42-73: Well-documented connection optimizer

The `connection_optimized` function provides a clean abstraction for combining upper and lower bounds with predictions. Documentation clearly explains usage and extension points.

76-119: Thorough documentation of paper_upper_bound

Excellent documentation with examples of how to create custom boundary functions. The use of the `@.` macro for vectorized operations is a good performance choice.

121-164: Well-documented paper_lower_bound

Clear explanation of the lower bound function with examples. The implementation is simple but effective.

167-184: Fixed convert_model function

The function now correctly returns the converted model, addressing the issue mentioned in past review comments where it was returning `nothing`.

187-263: Flexible model creation with boundary functions

The new `make_model` implementation allows for customizable constraints through user-provided boundary functions. The local binding with `let` ensures the model captures all necessary variables.

266-362: Paper-specific model implementation

The `make_model_paper` function provides a specialized version with defaults from the paper. The error message on lines 307-313 is helpful for preventing common usage mistakes.

365-385: Clear scaling implementation

The `setoutscale!` function now has improved documentation about its intended use and limitations. The direct matrix manipulation is efficient.

388-424: New boundary scaling function

The `setboundscale!` function provides granular control over individual bounds. Documentation clearly explains constraints and usage limitations.

780-816: New weight saving functionality

The `save_predictive_model_weights` function provides a human-readable format for model persistence that works across Flux versions. The implementation correctly handles the model structure.
85247cd → 0380057 (compare)
Actionable comments posted: 0
♻️ Duplicate comments (1)

src/standalone/Snow/AndersonModel.jl (1)

124-129: ⚠️ Potential issue: Temperature conversion issue for new snow temperature.

The code is passing Kelvin temperatures to `newsnow_temp()`, which expects Celsius.

```diff
-    T_newsnow = newsnow_temp.(p.drivers.T)
+    T_newsnow = newsnow_temp.(air_temp)
```
🧹 Nitpick comments (5)

ext/neural_snow/WebTools.jl (2)

18-32: Consider using system temp directory instead of extension directory.

Using `extension_dir_path` for temporary files could accumulate files if cleanup fails. Consider using Julia's built-in `tempdir()` or `mktempdir()` instead.

```diff
-mktemp(extension_dir_path) do path, _
+mktemp() do path, _
```

70-92: Consider adding error handling for malformed CSV data.

The current implementation may fail with index errors if the data is malformed (e.g., inconsistent column counts). Consider adding validation.

```diff
 function df_from_url(url; comment = "#", delim = ",")
     body = read_webpage_body(url)
     lines = split(body, "\n", keepempty = false)
     data_rows = filter(
         l -> !(startswith(strip(l), comment) || isempty(strip(l))),
         lines,
     )
+    isempty(data_rows) && return DataFrame()
     header = split(strip(data_rows[1]), delim; keepempty = true)
+    length(header) == 0 && return DataFrame()
     table = [split(strip(row), delim; keepempty = true) for row in data_rows[2:end]]
+    # Validate consistent column counts
+    if !all(length(row) == length(header) for row in table)
+        @warn "Inconsistent column counts in CSV data"
+        # Filter or pad rows as needed
+        table = [row for row in table if length(row) == length(header)]
+    end
```
412-424
: Consider adding value validation in setboundscale!The function doesn't check if
scale
is positive or reasonable, which could lead to physically invalid model behavior.function setboundscale!(model::Chain, bound::Symbol, scale::Real;) FT = eltype(model[:apply_relus].weight) + @assert scale > 0 "Scale factor must be positive" if bound == :upper model[:apply_relus].weight[1, 1] = FT(1.0 * scale) model[:apply_relus].weight[2, 1] = FT(-1.0 * scale) model[:apply_relus].weight[5, 1] = FT(-1.0 * scale)
src/standalone/Snow/AndersonModel.jl (2)

156-159: Consider DRYing redundant code between models.

As noted in the comment, several functions are identical between this and NeuralDepthModel. Consider refactoring them into a common utility module.

148-148: Potential division-by-zero risk.

Setting ρ_ice to 0.1 when W_ice ≤ eps(FT) is a workaround, but consider a more robust approach with clearer physical meaning.

```diff
-    parent(ρ_ice)[parent(W_ice) .<= eps(FT)] .= FT(0.1) #any nonzero value for no-snowpack to avoid NaN and return 0.0
+    # Handle the case of no snowpack more explicitly
+    no_snow_mask = parent(W_ice) .<= eps(FT)
+    if any(no_snow_mask)
+        parent(ρ_ice)[no_snow_mask] .= FT(0.1) # Default density for trace snow
+    end
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (31)
- .buildkite/Manifest-v1.11.toml (22 hunks)
- .buildkite/Project.toml (2 hunks)
- Artifacts.toml (1 hunks)
- Project.toml (2 hunks)
- docs/Manifest-v1.11.toml (20 hunks)
- docs/Project.toml (2 hunks)
- docs/tutorials/standalone/Snow/analysis_tools.jl (11 hunks)
- docs/tutorials/standalone/Snow/base_tutorial.jl (5 hunks)
- docs/tutorials/standalone/Snow/data_tutorial.jl (2 hunks)
- docs/tutorials/standalone/Snow/display_tools.jl (1 hunks)
- experiments/integrated/fluxnet/data_tools.jl (1 hunks)
- experiments/integrated/fluxnet/ozark_pft.jl (0 hunks)
- experiments/integrated/fluxnet/plot_utils.jl (1 hunks)
- experiments/integrated/fluxnet/run_fluxnet.jl (1 hunks)
- experiments/integrated/performance/integrated_timestep_test.jl (1 hunks)
- experiments/standalone/Snow/snowmip_simulation.jl (1 hunks)
- experiments/standalone/Vegetation/timestep_test.jl (0 hunks)
- ext/NeuralSnowExt.jl (1 hunks)
- ext/neural_snow/DataTools.jl (6 hunks)
- ext/neural_snow/ModelTools.jl (9 hunks)
- ext/neural_snow/NeuralSnow.jl (0 hunks)
- ext/neural_snow/WebTools.jl (1 hunks)
- lib/ClimaLandSimulations/Project.toml (0 hunks)
- lib/ClimaLandSimulations/src/Fluxnet/Fluxnet.jl (1 hunks)
- src/Artifacts.jl (1 hunks)
- src/standalone/Snow/AndersonModel.jl (1 hunks)
- src/standalone/Snow/NeuralDepthModel.jl (1 hunks)
- src/standalone/Snow/Snow.jl (1 hunks)
- src/standalone/Snow/snow_parameterizations.jl (1 hunks)
- test/Project.toml (2 hunks)
- test/standalone/Snow/tool_tests.jl (12 hunks)
💤 Files with no reviewable changes (4)
- experiments/integrated/fluxnet/ozark_pft.jl
- experiments/standalone/Vegetation/timestep_test.jl
- lib/ClimaLandSimulations/Project.toml
- ext/neural_snow/NeuralSnow.jl
✅ Files skipped from review due to trivial changes (2)
- src/standalone/Snow/snow_parameterizations.jl
- docs/tutorials/standalone/Snow/display_tools.jl
🚧 Files skipped from review as they are similar to previous changes (21)
- experiments/integrated/fluxnet/run_fluxnet.jl
- experiments/integrated/performance/integrated_timestep_test.jl
- lib/ClimaLandSimulations/src/Fluxnet/Fluxnet.jl
- src/standalone/Snow/Snow.jl
- experiments/integrated/fluxnet/plot_utils.jl
- experiments/standalone/Snow/snowmip_simulation.jl
- docs/tutorials/standalone/Snow/analysis_tools.jl
- Project.toml
- test/Project.toml
- experiments/integrated/fluxnet/data_tools.jl
- ext/NeuralSnowExt.jl
- docs/tutorials/standalone/Snow/data_tutorial.jl
- src/Artifacts.jl
- ext/neural_snow/DataTools.jl
- Artifacts.toml
- docs/tutorials/standalone/Snow/base_tutorial.jl
- test/standalone/Snow/tool_tests.jl
- docs/Project.toml
- docs/Manifest-v1.11.toml
- .buildkite/Manifest-v1.11.toml
- src/standalone/Snow/NeuralDepthModel.jl
🧰 Additional context used
🧠 Learnings (1)
ext/neural_snow/ModelTools.jl (1)
Learnt from: a-charbon
PR: CliMA/ClimaLand.jl#1118
File: ext/neural_snow/ModelTools.jl:460-466
Timestamp: 2025-04-22T22:45:03.223Z
Learning: In Flux versions 0.15 and newer, the `freeze!` function must be applied to components of the optimization state tree returned by `setup`, not directly to model components. Attempting to apply `freeze!` directly to model layers will result in an ArgumentError.
🔇 Additional comments (10)
ext/neural_snow/WebTools.jl (2)

1-3: Imports confirmed with appropriate dependencies.

The module correctly includes all necessary imports, including `Dates`, which was previously missing.

46-53: Robust type conversion implementation.

The type detection cascade properly handles empty cells and tries parsing in order of specificity.
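A sketch of such a cascade (hypothetical; the real `parse_cell` may differ in detail):

```julia
using Dates

function parse_cell_sketch(cell::AbstractString)
    isempty(cell) && return missing            # empty cell → missing
    for T in (Int, Float64, Date, DateTime)    # most specific first; order matters
        v = tryparse(T, cell)
        v !== nothing && return v
    end
    return String(cell)                        # fall back to the raw string
end
```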
ext/neural_snow/ModelTools.jl (5)

18-19: WebTools inclusion supports lightweight data fetching.

Including WebTools.jl enables loading model weights from URLs without heavy HTTP dependencies.

25-39: Custom MulLayer improves performance.

The MulLayer provides memory and speed optimizations by eliminating unnecessary bias and activation terms.

172-184: Fixed convert_model function now returns model.

The previous implementation had a bug where it didn't return the converted model. This version correctly returns the model after conversion.

457-469: Robust assertions in freeze_fixed_layers!

Great addition of assertions that check model structure before attempting to freeze layers, making the function more robust against incorrect usage.

871-872: Use element type from model for parsing weights.

The code now correctly uses the model's element type when parsing weights, addressing the previous precision issues.

.buildkite/Project.toml (2)

18-18: Adding the Downloads package supports new lightweight data fetching.

The addition aligns with the removal of the HTTP dependency and the new data fetching approach.

45-45: Expanded Flux compatibility ensures broader version support.

Supporting Flux 0.16 alongside 0.15 allows for more flexibility in development environments.

src/standalone/Snow/AndersonModel.jl (1)

26-27: Correct parameter values for cx and ρ_d.

The values now match the docstring description, where cx = 23 and ρ_d = 0.15.
0380057 → b91ffcd (compare)
Actionable comments posted: 1
🧹 Nitpick comments (2)

ext/neural_snow/ModelTools.jl (1)

799-816: Consider adding checksum validation for saved weights.

The weight saving functionality is well-implemented but lacks checksum validation, which would help detect corruption when loading weights from remote sources. You could add a simple checksum calculation:

```diff
 function save_predictive_model_weights(path::String, model::Chain)
     layers = model[:pred].layers.layers
     scalings = layers.scale.in_scales
+    checksums = Dict{String, UInt}()
     open(path, "w") do file
         write(file, "SCALING:\n")
         println(file, join(scalings, " "))
+        checksums["SCALING:"] = hash(scalings)
         for layer in collect(keys(layers))[2:end]
             write(file, "\n" * string(layer) * "-WEIGHT:\n")
             for row in eachrow(layers[layer].weight)
                 println(file, join(row, " "))
             end
+            checksums[string(layer) * "-WEIGHT:"] = hash(layers[layer].weight)
             write(file, "\n" * string(layer) * "-BIAS:\n")
             println(file, join(layers[layer].bias, " "))
+            checksums[string(layer) * "-BIAS:"] = hash(layers[layer].bias)
         end
         write(file, "\nFINALSCALE:\n")
         println(file, -model[:apply_relus].weight[end, 3])
+        checksums["FINALSCALE:"] = hash(-model[:apply_relus].weight[end, 3])
+        write(file, "\nCHECKSUMS:\n")
+        for (key, value) in checksums
+            println(file, key, " ", value)
+        end
     end
 end
```
156-159
: Consider addressing the commented question about code redundancy.The comment raises a valid point about potential redundancy with NeuralDepthModel.jl. Consider extracting shared functionality to a common utility module instead of duplicating or commenting out code.
You could create a
SnowUtils.jl
module with common functions likeclip_dZdt
andswe_snow_area
that both models could import.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (31)
- .buildkite/Manifest-v1.11.toml (7 hunks)
- .buildkite/Project.toml (2 hunks)
- Artifacts.toml (1 hunks)
- Project.toml (2 hunks)
- docs/Manifest-v1.11.toml (20 hunks)
- docs/Project.toml (2 hunks)
- docs/tutorials/standalone/Snow/analysis_tools.jl (11 hunks)
- docs/tutorials/standalone/Snow/base_tutorial.jl (5 hunks)
- docs/tutorials/standalone/Snow/data_tutorial.jl (2 hunks)
- docs/tutorials/standalone/Snow/display_tools.jl (1 hunks)
- experiments/integrated/fluxnet/data_tools.jl (1 hunks)
- experiments/integrated/fluxnet/ozark_pft.jl (0 hunks)
- experiments/integrated/fluxnet/plot_utils.jl (1 hunks)
- experiments/integrated/fluxnet/run_fluxnet.jl (1 hunks)
- experiments/integrated/performance/integrated_timestep_test.jl (1 hunks)
- experiments/standalone/Snow/snowmip_simulation.jl (1 hunks)
- experiments/standalone/Vegetation/timestep_test.jl (0 hunks)
- ext/NeuralSnowExt.jl (1 hunks)
- ext/neural_snow/DataTools.jl (6 hunks)
- ext/neural_snow/ModelTools.jl (9 hunks)
- ext/neural_snow/NeuralSnow.jl (0 hunks)
- ext/neural_snow/WebTools.jl (1 hunks)
- lib/ClimaLandSimulations/Project.toml (0 hunks)
- lib/ClimaLandSimulations/src/Fluxnet/Fluxnet.jl (1 hunks)
- src/Artifacts.jl (1 hunks)
- src/standalone/Snow/AndersonModel.jl (1 hunks)
- src/standalone/Snow/NeuralDepthModel.jl (1 hunks)
- src/standalone/Snow/Snow.jl (1 hunks)
- src/standalone/Snow/snow_parameterizations.jl (1 hunks)
- test/Project.toml (2 hunks)
- test/standalone/Snow/tool_tests.jl (12 hunks)
💤 Files with no reviewable changes (4)
- experiments/integrated/fluxnet/ozark_pft.jl
- lib/ClimaLandSimulations/Project.toml
- experiments/standalone/Vegetation/timestep_test.jl
- ext/neural_snow/NeuralSnow.jl
✅ Files skipped from review due to trivial changes (1)
- src/standalone/Snow/snow_parameterizations.jl
🚧 Files skipped from review as they are similar to previous changes (23)
- lib/ClimaLandSimulations/src/Fluxnet/Fluxnet.jl
- src/standalone/Snow/Snow.jl
- ext/NeuralSnowExt.jl
- experiments/integrated/fluxnet/run_fluxnet.jl
- experiments/integrated/fluxnet/data_tools.jl
- docs/Project.toml
- test/Project.toml
- experiments/integrated/performance/integrated_timestep_test.jl
- experiments/standalone/Snow/snowmip_simulation.jl
- Project.toml
- docs/tutorials/standalone/Snow/data_tutorial.jl
- experiments/integrated/fluxnet/plot_utils.jl
- docs/tutorials/standalone/Snow/analysis_tools.jl
- .buildkite/Project.toml
- Artifacts.toml
- docs/tutorials/standalone/Snow/display_tools.jl
- test/standalone/Snow/tool_tests.jl
- ext/neural_snow/DataTools.jl
- docs/Manifest-v1.11.toml
- docs/tutorials/standalone/Snow/base_tutorial.jl
- .buildkite/Manifest-v1.11.toml
- src/Artifacts.jl
- src/standalone/Snow/NeuralDepthModel.jl
🧰 Additional context used
🧠 Learnings (1)
ext/neural_snow/ModelTools.jl (1)
Learnt from: a-charbon
PR: CliMA/ClimaLand.jl#1118
File: ext/neural_snow/ModelTools.jl:460-466
Timestamp: 2025-04-22T22:45:03.223Z
Learning: In Flux versions 0.15 and newer, the `freeze!` function must be applied to components of the optimization state tree returned by `setup`, not directly to model components. Attempting to apply `freeze!` directly to model layers will result in an ArgumentError.
⏰ Context from checks skipped due to timeout of 90000ms (9)
- GitHub Check: ci 1.11 - windows-latest
- GitHub Check: ci 1.11 - ubuntu-latest
- GitHub Check: ci 1.10 - windows-latest
- GitHub Check: ci 1.10 - ubuntu-latest
- GitHub Check: test (1.11)
- GitHub Check: downstream ClimaCoupler.jl (1.11)
- GitHub Check: docbuild
- GitHub Check: test (1.10)
- GitHub Check: downstream ClimaCoupler.jl (1.10)
🔇 Additional comments (12)
ext/neural_snow/WebTools.jl (4)
1-1
: Correctly added theDates
dependency.Good job including the
Dates
module, which is needed for theparse_cell
function to handle Date and DateTime types.
18-33
: Check if temporary files are cleaned up properly.The function creates temporary files in the extension directory using
mktemp(extension_dir_path)
. While Julia'smktemp
should handle cleanup automatically, consider verifying that these files don't accumulate over time, especially if exceptions are thrown inside the try block.
47-54
: Well-implemented type detection with fallback to string.The implementation correctly attempts parsing in order of specificity and falls back to returning the original string when no parsing succeeds.
71-93
: Robust CSV parsing without CSV.jl dependency.The implementation handles all the key aspects of CSV parsing - headers, comments, type inference, and missing values - without requiring the heavier CSV.jl dependency.
ext/neural_snow/ModelTools.jl (4)
18-18: Good clean dependency management. Including WebTools.jl locally rather than adding a direct dependency keeps the module structure clean.
25-36: Efficient custom Flux layer for matrix multiplication. The `MulLayer` provides memory and performance optimization for cases where only matrix multiplication is needed, without bias or activation functions.
172-184: Model converter now properly returns the model. Fixed the issue where `convert_model` wasn't returning the modified model, which would have broken downstream code.
459-464: Good defensive programming with assertions. The assertions ensure the model and optimizer state have the expected structure before freezing layers, preventing cryptic errors later. This implementation addresses the learning about Flux's `freeze!` function requiring application to components of the optimization state tree, not directly to model layers.
src/standalone/Snow/AndersonModel.jl (4)
26-27: Parameter ordering fixed for `cx` and `ρ_d`. The parameter values now match the documentation (`cx = 23` and `ρ_d = 0.15`), ensuring density-dependent settling works correctly.
125-129: Temperature conversion issue resolved. The code now correctly converts temperature from Kelvin to Celsius before passing it to the `newsnow_temp` and `newsnow_density` functions, fixing the previous Kelvin/Celsius mix-up.
42-67: Robust compaction model with physical bounds. The `compact_density` function handles edge cases well, including very small SWE values, and enforces physically reasonable density ranges (0.05 to 0.45).
148-148: Good handling of the zero-snowpack case. Setting a non-zero density for grid cells with no snowpack (line 148) prevents NaN issues in calculations without affecting the physics.
function load_model_weights!(filepath::String, model::Chain; FT = Float32)
    T =
        hasproperty(model.layers, :apply_relus) ?
        eltype(model[:apply_relus].weight) : FT
    data = Dict() # Dictionary to store the parsed data
    # Check if the filepath is a URL
    if startswith(filepath, "http://") || startswith(filepath, "https://")
        file_io = IOBuffer(read_webpage_body(filepath))
    else
        file_io = open(filepath, "r") # Open as a regular file
    end
    try
        current_name = nothing
        current_values = [] # Temporary storage for current block

        for line in readlines(file_io)
            line = strip(line)
            if isempty(line) && current_name !== nothing
                # If an empty line is encountered, finalize the current item
                if length(current_values[1]) == 1 # one column of data (vector)
                    data[current_name] = vcat(current_values...)
                else # It's a matrix (multiple rows)
                    data[current_name] = hcat(current_values...)'
                end
                current_name = nothing
                current_values = []
            elseif isempty(line)
                continue
            elseif isnothing(current_name)
                current_name = line
            else
                push!(current_values, parse.(T, split(line)))
            end
        end
        # Handle the last item (if file doesn't end with a blank line)
        if current_name !== nothing
            if length(current_values[1]) == 1
                data[current_name] = vcat(current_values...)
            else
                data[current_name] = hcat(current_values...)'
            end
        end
    finally
        # Close the file only if it's an actual file, not an IOBuffer
        if isa(file_io, IO)
            close(file_io)
        end
    end
    layers = model[:pred].layers.layers
    layers[:scale].in_scales .= data["SCALING:"]'
    for layer in collect(keys(layers))[2:end]
        layers[layer].weight .= data[string(layer) * "-WEIGHT:"]
        layers[layer].bias .= data[string(layer) * "-BIAS:"]'
    end
    model[:apply_relus].weight[end, 3] = -1 * data["FINALSCALE:"][1]
end
🛠️ Refactor suggestion
Error handling for malformed weight files could be improved.
The weight loading function assumes a well-formed file but doesn't gracefully handle malformed content that might cause parsing errors or produce unexpected results.
Add validation for expected block structure:
try
current_name = nothing
current_values = [] # Temporary storage for current block
+ expected_blocks = ["SCALING:", "l1-WEIGHT:", "l1-BIAS:", "l2-WEIGHT:", "l2-BIAS:", "l3-WEIGHT:", "l3-BIAS:", "FINALSCALE:"]
+ found_blocks = String[]
for line in readlines(file_io)
line = strip(line)
if isempty(line) && current_name !== nothing
# If an empty line is encountered, finalize the current item
+ push!(found_blocks, current_name)
if length(current_values[1]) == 1 #one column of data (vector)
data[current_name] = vcat(current_values...)
else # It's a matrix (multiple rows)
data[current_name] = hcat(current_values...)'
end
And at the end of the try block:
+ # Verify all required blocks were found
+ missing_blocks = setdiff(expected_blocks, found_blocks)
+ if !isempty(missing_blocks)
+ error("Missing required blocks in weight file: $(join(missing_blocks, ", "))")
+ end
… remove dependencies in extension, add new functionality (force-pushed from ea854c2 to afd1cb4)
Actionable comments posted: 1
♻️ Duplicate comments (1)
ext/neural_snow/ModelTools.jl (1)
819-895: 🛠️ Refactor suggestion: Weight loading needs improved error handling.
The weight loading function could benefit from better validation of expected block structure to prevent cryptic errors with malformed files.
Add validation for required blocks:

 try
     current_name = nothing
     current_values = [] # Temporary storage for current block
+    expected_blocks = ["SCALING:", "l1-WEIGHT:", "l1-BIAS:", "l2-WEIGHT:", "l2-BIAS:", "l3-WEIGHT:", "l3-BIAS:", "FINALSCALE:"]
+    found_blocks = String[]
     for line in readlines(file_io)
         line = strip(line)
         if isempty(line) && current_name !== nothing
             # If an empty line is encountered, finalize the current item
+            push!(found_blocks, current_name)
             if length(current_values[1]) == 1 # one column of data (vector)
                 data[current_name] = vcat(current_values...)
             else # It's a matrix (multiple rows)
                 data[current_name] = hcat(current_values...)'
             end

At the end of the try block, add:

+    # Verify all required blocks were found
+    missing_blocks = setdiff(expected_blocks, found_blocks)
+    if !isempty(missing_blocks)
+        error("Missing required blocks in weight file: $(join(missing_blocks, ", "))")
+    end
🧹 Nitpick comments (2)
src/standalone/Snow/AndersonModel.jl (2)
148-148: Non-physical fallback value for zero snowpack. Setting `ρ_ice` to 0.1 for zero-snowpack cases prevents NaN but introduces non-physical values.
Consider a more explicit approach:

-parent(ρ_ice)[parent(W_ice) .<= eps(FT)] .= FT(0.1) # any nonzero value for no-snowpack to avoid NaN and return 0.0
+# Handle zero snowpack case explicitly
+zero_snow_mask = parent(W_ice) .<= eps(FT)
+if any(zero_snow_mask)
+    parent(ρ_ice)[zero_snow_mask] .= FT(0.1)
+    # Ensure dz_compaction will be zero for these cases
+    parent(dz_compaction)[zero_snow_mask] .= FT(0.0)
+end
156-159: Remove design questions from code comments. The comment about the type assertion should be resolved rather than left as a question in the code. Either implement the suggested type union or remove the comment.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (31)
- .buildkite/Manifest-v1.11.toml (7 hunks)
- .buildkite/Project.toml (2 hunks)
- Artifacts.toml (1 hunks)
- Project.toml (2 hunks)
- docs/Manifest-v1.11.toml (20 hunks)
- docs/Project.toml (2 hunks)
- docs/tutorials/standalone/Snow/analysis_tools.jl (11 hunks)
- docs/tutorials/standalone/Snow/base_tutorial.jl (5 hunks)
- docs/tutorials/standalone/Snow/data_tutorial.jl (2 hunks)
- docs/tutorials/standalone/Snow/display_tools.jl (1 hunks)
- experiments/integrated/fluxnet/data_tools.jl (1 hunks)
- experiments/integrated/fluxnet/ozark_pft.jl (0 hunks)
- experiments/integrated/fluxnet/plot_utils.jl (1 hunks)
- experiments/integrated/fluxnet/run_fluxnet.jl (1 hunks)
- experiments/integrated/performance/integrated_timestep_test.jl (1 hunks)
- experiments/standalone/Snow/snowmip_simulation.jl (1 hunks)
- experiments/standalone/Vegetation/timestep_test.jl (0 hunks)
- ext/NeuralSnowExt.jl (1 hunks)
- ext/neural_snow/DataTools.jl (6 hunks)
- ext/neural_snow/ModelTools.jl (9 hunks)
- ext/neural_snow/NeuralSnow.jl (0 hunks)
- ext/neural_snow/WebTools.jl (1 hunks)
- lib/ClimaLandSimulations/Project.toml (0 hunks)
- lib/ClimaLandSimulations/src/Fluxnet/Fluxnet.jl (1 hunks)
- src/Artifacts.jl (1 hunks)
- src/standalone/Snow/AndersonModel.jl (1 hunks)
- src/standalone/Snow/NeuralDepthModel.jl (1 hunks)
- src/standalone/Snow/Snow.jl (1 hunks)
- src/standalone/Snow/snow_parameterizations.jl (1 hunks)
- test/Project.toml (2 hunks)
- test/standalone/Snow/tool_tests.jl (12 hunks)
💤 Files with no reviewable changes (4)
- experiments/integrated/fluxnet/ozark_pft.jl
- lib/ClimaLandSimulations/Project.toml
- experiments/standalone/Vegetation/timestep_test.jl
- ext/neural_snow/NeuralSnow.jl
✅ Files skipped from review due to trivial changes (5)
- experiments/integrated/fluxnet/run_fluxnet.jl
- src/standalone/Snow/Snow.jl
- src/standalone/Snow/snow_parameterizations.jl
- src/Artifacts.jl
- src/standalone/Snow/NeuralDepthModel.jl
🚧 Files skipped from review as they are similar to previous changes (20)
- ext/NeuralSnowExt.jl
- experiments/standalone/Snow/snowmip_simulation.jl
- lib/ClimaLandSimulations/src/Fluxnet/Fluxnet.jl
- docs/Project.toml
- experiments/integrated/performance/integrated_timestep_test.jl
- experiments/integrated/fluxnet/plot_utils.jl
- docs/tutorials/standalone/Snow/analysis_tools.jl
- .buildkite/Project.toml
- Project.toml
- test/Project.toml
- ext/neural_snow/WebTools.jl
- experiments/integrated/fluxnet/data_tools.jl
- ext/neural_snow/DataTools.jl
- Artifacts.toml
- docs/tutorials/standalone/Snow/display_tools.jl
- docs/tutorials/standalone/Snow/data_tutorial.jl
- docs/tutorials/standalone/Snow/base_tutorial.jl
- docs/Manifest-v1.11.toml
- test/standalone/Snow/tool_tests.jl
- .buildkite/Manifest-v1.11.toml
🧰 Additional context used
🧠 Learnings (1)
ext/neural_snow/ModelTools.jl (1)
Learnt from: a-charbon
PR: CliMA/ClimaLand.jl#1118
File: ext/neural_snow/ModelTools.jl:460-466
Timestamp: 2025-04-22T22:45:03.223Z
Learning: In Flux versions 0.15 and newer, the `freeze!` function must be applied to components of the optimization state tree returned by `setup`, not directly to model components. Attempting to apply `freeze!` directly to model layers will result in an ArgumentError.
⏰ Context from checks skipped due to timeout of 90000ms (11)
- GitHub Check: ci 1.11 - windows-latest
- GitHub Check: ci 1.11 - macOS-latest
- GitHub Check: ci 1.11 - ubuntu-latest
- GitHub Check: ci 1.10 - windows-latest
- GitHub Check: ci 1.10 - macOS-latest
- GitHub Check: ci 1.10 - ubuntu-latest
- GitHub Check: test (1.11)
- GitHub Check: test (1.10)
- GitHub Check: docbuild
- GitHub Check: downstream ClimaCoupler.jl (1.11)
- GitHub Check: downstream ClimaCoupler.jl (1.10)
🔇 Additional comments (13)
ext/neural_snow/ModelTools.jl (9)
18-39: Good addition of MulLayer for optimized matrix multiplication. The custom layer implementation reduces allocations when only matrix multiplication is needed, without bias or activation. The `@layer` macro properly integrates with Flux's system.
42-73: Well-structured connection function with clear boundary handling. The `connection_optimized` function provides a clean approach to implementing boundary functions. Good use of type parameters to ensure consistent behavior.
76-164: Example boundary functions are well-documented. The `paper_upper_bound` and `paper_lower_bound` implementations provide excellent starting templates, with practical examples for users to customize boundary functions.
167-184: Fixed model conversion function. The `convert_model` function now correctly returns the converted model rather than attempting to rebind it locally, addressing the previously identified issue.
187-263: Generalized model creation with robust boundary enforcement. The `make_model` function now properly handles arbitrary boundary functions, with appropriate error checking for required parameters.
367-385: Output scaling implementation is clear. The `setoutscale!` function handles the output scaling with a clear explanation of its purpose and limitations.
388-424: Flexible boundary scaling with good constraints. The `setboundscale!` function properly handles scaling for both upper and lower boundaries, with appropriate error handling.
444-469: Robust layer freezing with comprehensive assertions. The `freeze_fixed_layers!` implementation has been updated to work with Flux 0.15+ as noted in the learnings, using appropriate assertions to verify model structure before freezing.
782-816: Clean weight saving implementation. The `save_predictive_model_weights` function creates human-readable text files with a clear structure for model weights.
src/standalone/Snow/AndersonModel.jl (4)
20-28: The Anderson1976 type looks good, but parameter values need verification. The struct implementation provides a physics-based snow density model with reasonable parameter defaults.
42-67: Clear implementation of snow density compaction. The compaction function correctly implements the Snow17 model physics, with proper bounds checking for extreme values.
76-92: New snow density and temperature parameterizations are well implemented. The parameterizations follow established models, with appropriate handling of temperature thresholds.
110-153: Potential temperature unit mismatch. The code gets Celsius temperatures using `air_temp = p.drivers.T .- model.parameters.earth_param_set.T_freeze` and then passes this directly to `newsnow_temp`, which expects Celsius input. This approach is correct as long as `T` is in Kelvin and `T_freeze` is also in Kelvin. Please verify that `p.drivers.T` is indeed in Kelvin and that the subtraction correctly yields temperatures in Celsius.
#=
"""
    swe_snow_area(S::FT, scf::FT, z::FT)::FT where {FT}

Estimates the SWE over the snow-covered portion of a grid cell, assuming Z is provided
as the depth over the snow-covered portion and S is provided as the depth over the whole cell area.
Used in computing the time derivative of snow depth.
"""
function swe_snow_area(S::FT, scf::FT, z::FT)::FT where {FT}
    min_scf = 0.05
    return min(z, S / max(scf, min_scf))
end
=#
🛠️ Refactor suggestion
Duplicated commented code should be removed or extracted. The commented `swe_snow_area` and `clip_dZdt` functions duplicate code from NeuralDepthModel.
Instead of commenting out these functions, consider one of the following (a sketch follows this list):
- Move them to a common utility module
- Use import statements to reuse the existing implementations
- Remove the comments if they're not needed
Also applies to: 179-212
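For the first option, a minimal sketch under the assumption that a shared file (say, a hypothetical density_utils.jl included once by Snow.jl) would hold the single definition; the body is copied from the commented block above:

# density_utils.jl (hypothetical shared file)
function swe_snow_area(S::FT, scf::FT, z::FT)::FT where {FT}
    min_scf = FT(0.05)
    return min(z, S / max(scf, min_scf))
end

Both density parameterizations could then call this one definition instead of keeping a commented-out copy.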
Actionable comments posted: 0
♻️ Duplicate comments (1)
ext/neural_snow/ModelTools.jl (1)
848-903: 🛠️ Refactor suggestion: Error handling for malformed weight files could be improved.
The weight loading function should validate that all required blocks are present before processing.

 try
     current_name = nothing
     current_values = [] # Temporary storage for current block
+    expected_blocks = ["SCALING:", "l1-WEIGHT:", "l1-BIAS:", "l2-WEIGHT:", "l2-BIAS:", "l3-WEIGHT:", "l3-BIAS:", "FINALSCALE:"]
+    found_blocks = String[]
     for line in readlines(file_io)
         line = strip(line)
         if isempty(line) && current_name !== nothing
             # If an empty line is encountered, finalize the current item
+            push!(found_blocks, current_name)
             if length(current_values[1]) == 1 # one column of data (vector)
                 data[current_name] = vcat(current_values...)
             else # It's a matrix (multiple rows)
                 data[current_name] = hcat(current_values...)'
             end

And before line 896:

+    # Verify all required blocks were found
+    missing_blocks = setdiff(expected_blocks, found_blocks)
+    if !isempty(missing_blocks)
+        error("Missing required blocks in weight file: $(join(missing_blocks, ", "))")
+    end
 layers = model[:pred].layers.layers
🧹 Nitpick comments (3)
ext/neural_snow/ModelTools.jl (3)
128-128: Docstring references the wrong function. The line refers to `paper_upper_bound` instead of `paper_lower_bound`.

-`lowerb(pred, input) = paper_upper_bound(precip_idx, pred, input)`
+`lowerb(pred, input) = paper_lower_bound(depth_idx, pred, input)`

471-472: Assertions could be more specific. Add more descriptive error messages to help users troubleshoot issues when freezing layers.

-@assert hasproperty(model[:pred].layers.layers, :scale) "Provided Flux Setup State does not have the fixed input scaling vector for which this function was written, please implement a custom extension for the supplied model."
-@assert hasproperty(l[:pred].layers.layers, :scale) "Provided Flux Setup State does not have the fixed input scaling vector for which this function was written, please implement a custom extension for the supplied model."
+@assert hasproperty(model[:pred].layers.layers, :scale) "Model doesn't have expected input scaling vector (:scale). This function requires models created with make_model() or make_model_paper()."
+@assert hasproperty(l[:pred].layers.layers, :scale) "Optimizer state doesn't have expected input scaling vector (:scale). This function requires models created with make_model() or make_model_paper()."

221-226: Floating point type parameter name inconsistency. Use `FT` to match CliMA's naming convention for floating point types.

-    ftype::Type{FT} = Float32,
+    FT::Type{FT} = Float32,
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- Artifacts.toml (1 hunks)
- Project.toml (2 hunks)
- docs/tutorials/standalone/Snow/base_tutorial.jl (5 hunks)
- docs/tutorials/standalone/Snow/data_tutorial.jl (2 hunks)
- experiments/integrated/fluxnet/data_tools.jl (1 hunks)
- ext/neural_snow/ModelTools.jl (9 hunks)
- ext/neural_snow/WebTools.jl (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
- Project.toml
- docs/tutorials/standalone/Snow/data_tutorial.jl
- Artifacts.toml
- experiments/integrated/fluxnet/data_tools.jl
- ext/neural_snow/WebTools.jl
- docs/tutorials/standalone/Snow/base_tutorial.jl
🧰 Additional context used
🧠 Learnings (1)
ext/neural_snow/ModelTools.jl (1)
Learnt from: a-charbon
PR: CliMA/ClimaLand.jl#1118
File: ext/neural_snow/ModelTools.jl:460-466
Timestamp: 2025-04-22T22:45:03.245Z
Learning: In Flux versions 0.15 and newer, the `freeze!` function must be applied to components of the optimization state tree returned by `setup`, not directly to model components. Attempting to apply `freeze!` directly to model layers will result in an ArgumentError.
🪛 GitHub Actions: JuliaFormatter
ext/neural_snow/ModelTools.jl
[error] 229-237: Process failed with exit code 1 during execution. The log shows a warning about both `nfeatures` and `in_scale` being provided to `make_model()`, but the actual error cause is not explicitly detailed.
⏰ Context from checks skipped due to timeout of 90000ms (13)
- GitHub Check: ci 1.11 - windows-latest
- GitHub Check: ci 1.11 - macOS-latest
- GitHub Check: ci 1.11 - ubuntu-latest
- GitHub Check: downstream ClimaCoupler.jl (1.11)
- GitHub Check: ci 1.10 - windows-latest
- GitHub Check: test (1.11)
- GitHub Check: ci 1.10 - macOS-latest
- GitHub Check: downstream ClimaCoupler.jl (1.10)
- GitHub Check: lib-climalandsimulations (1.11)
- GitHub Check: docbuild
- GitHub Check: test (1.10)
- GitHub Check: ci 1.10 - ubuntu-latest
- GitHub Check: lib-climalandsimulations (1.10)
🔇 Additional comments (4)
ext/neural_snow/ModelTools.jl (4)
229-237: Both nfeatures and in_scale logic. The warning message is good but could be more precise about which takes precedence.
🧰 Tools
🪛 GitHub Actions: JuliaFormatter
[error] 229-237: Process failed with exit code 1 during execution. The log shows a warning about both `nfeatures` and `in_scale` being provided to `make_model()`, but the actual error cause is not explicitly detailed.
465-477: Freezing logic matches Flux 0.15+ requirements. Correctly implements `freeze!` by applying it to optimizer state components per the learning note.
771-771: Training now uses freeze_fixed_layers!. Updated to properly freeze layers using the new implementation.
879-879: Type safety when parsing weights. Correctly uses the model element type when parsing weight values.
function update_density_and_depth!(
    ρ_snow,
    z_snow,
    density::NeuralDepthModel,
    Y,
    p,
    params::SnowParameters,
)
    @. z_snow = Y.snow.Z
    @. ρ_snow = snow_bulk_density(Y.snow.S, z_snow, params)
end
This function is repeated exactly in AndersonModel.jl, but for a different `density` model type. Maybe this could be
function update_density_and_depth!(
ρ_snow,
z_snow,
density::Union{NeuralDepthModel, Anderson1976},
Y,
p,
params::SnowParameters,
)
and defined in the same location as `clip_dZdt`, which is also shared.
Actionable comments posted: 0
♻️ Duplicate comments (1)
ext/neural_snow/ModelTools.jl (1)
850-905: 🛠️ Refactor suggestion: Error handling for malformed weight files could be improved.
The weight loading function assumes a well-formed file but doesn't validate the expected structure, which could lead to silent failures.
Add validation for expected block structure:

 try
     current_name = nothing
     current_values = [] # Temporary storage for current block
+    expected_blocks = ["SCALING:", "l1-WEIGHT:", "l1-BIAS:", "l2-WEIGHT:", "l2-BIAS:", "l3-WEIGHT:", "l3-BIAS:", "FINALSCALE:"]
+    found_blocks = String[]
     for line in readlines(file_io)
         line = strip(line)
         if isempty(line) && current_name !== nothing
             # If an empty line is encountered, finalize the current item
+            push!(found_blocks, current_name)
             if length(current_values[1]) == 1 # one column of data (vector)
                 data[current_name] = vcat(current_values...)
             else # It's a matrix (multiple rows)
                 data[current_name] = hcat(current_values...)'
             end

And at the end of the try block:

+    # Verify all required blocks were found
+    missing_blocks = setdiff(expected_blocks, found_blocks)
+    if !isempty(missing_blocks)
+        error("Missing required blocks in weight file: $(join(missing_blocks, ", "))")
+    end
🧹 Nitpick comments (2)
ext/neural_snow/ModelTools.jl (2)
25-27: Type parameters could be more specific. The `MulLayer` struct uses `AbstractMatrix{<:AbstractFloat}` as a type parameter, but since this is for neural networks, consider using `AbstractMatrix{<:Real}` for broader compatibility, as sketched below.
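A sketch of that widening, assuming the layer is the plain matrix-multiply wrapper described earlier in this review:

using Flux

struct MulLayer{M <: AbstractMatrix{<:Real}}  # was AbstractMatrix{<:AbstractFloat}
    weight::M
end
(m::MulLayer)(x) = m.weight * x  # multiply only: no bias, no activation
Flux.@layer MulLayer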
257-260: Matrix definitions could benefit from type stability. These matrix definitions use `Matrix{FT}`, which is good for type stability. Consider adding a comment explaining the matrix structure for future maintenance.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- ext/neural_snow/ModelTools.jl (9 hunks)
🧰 Additional context used
🧠 Learnings (1)
ext/neural_snow/ModelTools.jl (1)
Learnt from: a-charbon
PR: CliMA/ClimaLand.jl#1118
File: ext/neural_snow/ModelTools.jl:460-466
Timestamp: 2025-04-22T22:45:03.245Z
Learning: In Flux versions 0.15 and newer, the `freeze!` function must be applied to components of the optimization state tree returned by `setup`, not directly to model components. Attempting to apply `freeze!` directly to model layers will result in an ArgumentError.
⏰ Context from checks skipped due to timeout of 90000ms (12)
- GitHub Check: ci 1.11 - windows-latest
- GitHub Check: ci 1.11 - ubuntu-latest
- GitHub Check: ci 1.10 - windows-latest
- GitHub Check: test (1.11)
- GitHub Check: ci 1.10 - macOS-latest
- GitHub Check: downstream ClimaCoupler.jl (1.11)
- GitHub Check: test (1.10)
- GitHub Check: ci 1.10 - ubuntu-latest
- GitHub Check: lib-climalandsimulations (1.11)
- GitHub Check: docbuild
- GitHub Check: downstream ClimaCoupler.jl (1.10)
- GitHub Check: lib-climalandsimulations (1.10)
🔇 Additional comments (7)
ext/neural_snow/ModelTools.jl (7)
39-39: Good use of the Flux.@layer macro. Using the `@layer` macro properly endows your custom layer with all required Flux properties.
62-73: Function signatures are well-defined. The `connection_optimized` function has proper type constraints and a return type annotation. The function is also well-documented.
114-118: Optimized boundary function implementation. Using the `@.` macro for vectorization is a good performance choice for the boundary functions.
172-184: Fixed model conversion function. The function now correctly returns the converted model in all branches, addressing a previous issue.
221-240: Good error handling for model parameters. The function appropriately errors when neither `in_scale` nor `nfeatures` is provided, and warns when both are provided.
467-479: Robust implementation of layer freezing. The function now correctly freezes layers in the optimization state, with good assertions to verify the expected structure exists. This implementation properly addresses Flux 0.15+ requirements for applying `freeze!` to optimization state components rather than directly to model layers.
881-881: Good fix for parsing with the correct float type. Now correctly uses the inferred type `T` for parsing instead of hardcoding `Float64`, addressing a previous issue.
Could you add a couple of examples running on buildkite?
[[snow_depth_model_weights.download]]
sha256 = "8d3b59da970ef7bf1fb92d325947f632bb6f98ed0d79387adb20fbc9d8810bd9"
url = "https://caltech.box.com/shared/static/miza4cec2b91qgn9kacqoiuqhd3obehd.gz"
Please add a newline here
@@ -52,6 +53,8 @@ export sitedata_daily,
    qc_filter,
    scrape_site_paper

include("./WebTools.jl")
include("./WebTools.jl") | |
include("WebTools.jl") |
    freeze_fixed_layers!,
    convert_model

include("./WebTools.jl")
include("./WebTools.jl") | |
include("WebTools.jl") |
""" | ||
paper_upper_bound(precip_idx, pred, input) | ||
Generator function to create the memory/speed-optimized upper-bound function | ||
from the paper when given which index of the input features pertains to the | ||
precipitation value. This also serves as an example of one way to write a |
Let's not call things "paper". There are many papers associated with ClimaLand. Assume your readers are general ClimaLand users.
    l3 = Dense(nfeatures, 1),
    nfeatures::Int = 0,
)::Chain where {FT <: AbstractFloat}
    if (nfeatures == 0) & isnothing(in_scale)
-if (nfeatures == 0) & isnothing(in_scale)
+if nfeatures == 0 & isnothing(in_scale)
neural_snow_swenetwork_link() =
    "https://caltech.box.com/shared/static/nb2y4puindm5ncr7eldqk6l1q49lzi12.txt"

neural_snow_training_data_link() =
    "https://caltech.box.com/shared/static/1gfyh71c44ljzb9xbnza3lbzj6p9723x.csv"

neural_snow_testing_data_link() =
    "https://caltech.box.com/shared/static/qb2ze1wcc1a37fgt5k9wsj27gpoh39ax.csv"
Can we put what we need in artifacts?
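A hedged sketch of the Artifacts route; the artifact name, hash, and file name below are placeholders, and the data files would need to be repackaged as tarballs first:

# Artifacts.toml (placeholder values):
# [neural_snow_training_data]
# git-tree-sha1 = "<tree-sha1>"
#     [[neural_snow_training_data.download]]
#     sha256 = "<sha256-of-tarball>"
#     url = "<tarball-url>"

# Accessor mirroring neural_depth_model_weights_path():
using Artifacts
neural_snow_training_data_path() =
    joinpath(artifact"neural_snow_training_data", "training_data.csv")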
Estimates the density of newly fallen snow as a function of air temperature (in Celsius), in relative units (i.e., as a fraction of the density
of water), according to the Snow17 model.
Used in computing the time derivative of snow depth under the Anderson1976 density parameterization.
If this comes from a paper, could you add the eq number?
""" | ||
compact_density(W_i::FT, sliq::FT, ρ::FT, Δt_t::FT, Tsnow::FT, density::Anderson1976) where {FT} | ||
Returns the new compressed density ρ_new from the input ρ under the compaction model defined |
If this comes from a paper, could you add the eq number?
function get_network_weights(FT::DataType)
    weightfile =
        readlines(ClimaLand.Artifacts.neural_depth_model_weights_path())
    @assert length(weightfile) == 56 "Artifact path might need updating, expected 56 lines and got $(length(weightfile))"
    data = Dict()
    data["SCALING"] = parse.(FT, split(weightfile[2], " "))
    data["l1_WEIGHT"] =
        hcat([parse.(FT, split(weightfile[i], " ")) for i in 5:32]...)'
    data["l1_BIAS"] = parse.(FT, split(weightfile[35], " "))
    data["l2_WEIGHT"] =
        hcat([parse.(FT, split(weightfile[i], " ")) for i in 38:44]...)'
    data["l2_BIAS"] = parse.(FT, split(weightfile[47], " "))
    data["l3_WEIGHT"] = parse.(FT, split(weightfile[50], " "))'
    data["l3_BIAS"] = parse.(FT, split(weightfile[53], " "))
    data["FINALSCALE"] = parse(FT, strip(weightfile[56]))
    return data
Could we maybe use a YAML file instead of a custom type?
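A sketch of the YAML alternative, assuming a YAML.jl dependency and a hypothetical weights.yaml keyed by block name, which would remove the hard-coded line offsets above:

using YAML  # assumed dependency, not currently in Project.toml

function get_network_weights_yaml(FT::DataType, path::String)
    raw = YAML.load_file(path)  # nested Dicts / Vectors keyed by block name
    data = Dict{String, Any}()
    data["SCALING"] = FT.(raw["scaling"])
    # each weight block is a list of rows; stack the rows into a matrix
    data["l1_WEIGHT"] =
        permutedims(reduce(hcat, [FT.(row) for row in raw["l1"]["weight"]]))
    data["l1_BIAS"] = FT.(raw["l1"]["bias"])
    data["FINALSCALE"] = FT(raw["finalscale"])
    return data
end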
include("./snow_parameterizations.jl") | ||
include("./boundary_fluxes.jl") | ||
include("./NeuralDepthModel.jl") | ||
include("./AndersonModel.jl") |
include("./snow_parameterizations.jl") | |
include("./boundary_fluxes.jl") | |
include("./NeuralDepthModel.jl") | |
include("./AndersonModel.jl") | |
include("snow_parameterizations.jl") | |
include("boundary_fluxes.jl") | |
include("NeuralDepthModel.jl") | |
include("AndersonModel.jl") |
This commit allows NeuralDepthModel to be used with a simulation on the GPU.
- Changes snowmip_simulation.jl to move arrays to CPU memory before plotting.
- Adds Adapt.jl as a dependency. This is used to convert all arrays in the density model into static arrays. This could be done without Adapt.jl, but it would require far more code, because adapt can use Flux's internal adapt_structure definitions.
- Adds a buildkite GPU run of snowmip.
Note that this does not add cuDNN to the buildkite project. cuDNN is required for full Flux GPU support, but the current implementation of the NeuralDepthModel evaluates the model inside a broadcasted statement, which prevents the usual GPU optimizations from batch evaluation from occurring.
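For reference, a condensed sketch of the Adapt.jl conversion described above, with toy layer sizes (the real network lives in NeuralDepthModel.jl):

using Adapt, Flux, StaticArrays

toy_model = Chain(Dense(7 => 4, relu), Dense(4 => 1))
# adapt walks the Chain via Flux's adapt_structure definitions and turns
# each weight/bias Array into an immutable static array:
static_model = Adapt.adapt(SArray, toy_model)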
Actionable comments posted: 2
🧹 Nitpick comments (4)
Project.toml (2)
29-34: `Downloads` is a stdlib; listing it as a weak-dep inflates the resolver.
`Downloads` ships with Julia, so the package manager will always have it. Keeping it in `[weakdeps]` adds no benefit but lets the resolver believe it could be optional, which may confuse tooling and version bounds. Unless you intend to overload `using Downloads` behind `Requires.jl`, consider removing it:

-[weakdeps]
-DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
-Downloads = "f43a241f-c20a-4ad4-852c-f6b1247861c6"
+[weakdeps]
+DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
35-46: Over-tight compat pins hinder forward patch releases.
`Adapt = "4.3.0"` and `Downloads = "1.6.0"` lock the project to a single patch version. Bumping patch releases is generally safe, so a looser bound (`"4.3"` / `"1.6"`) avoids unnecessary downstream conflicts.

-Adapt = "4.3.0"
-Downloads = "1.6.0"
+Adapt = "4.3"
+Downloads = "1.6"

test/standalone/Snow/tool_tests.jl (1)
206-225: Matrix construction relies on implicit promotion; safer to spell it out.
`Float32.([2 2 2; -1 -1 -1; test_pred])` works only because `test_pred` happens to be a 1×3 `Matrix{Float32}`. If its shape changes to a `Vector` (the common representation of a row), the construction will dimension-mismatch at runtime.

-test_pred = Float32.([1.0 3.0 -2.0])
-expected_out = Float32.([2 2 2; -1 -1 -1; test_pred])
+test_pred = reshape(Float32.([1, 3, -2]), 1, :)
+expected_out = vcat(
+    fill(Float32(2), 1, 3),
+    fill(Float32(-1), 1, 3),
+    test_pred,
+)
src/standalone/Snow/NeuralDepthModel.jl (1)
352-387: Docstring and signature diverge; missing `dY` argument.
`update_density_prog!` is documented as `update_density_prog!(density::NeuralDepthModel, model::SnowModel, Y, p)` but the implementation is `function update_density_prog!(density::NeuralDepthModel, model::SnowModel, dY, Y, p)`. Update the docstring to match, or vice versa, to avoid confusion.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- .buildkite/Manifest-v1.11.toml (24 hunks)
- .buildkite/Manifest.toml (50 hunks)
- .buildkite/pipeline.yml (1 hunks)
- Project.toml (2 hunks)
- experiments/standalone/Snow/snowmip_simulation.jl (5 hunks)
- src/standalone/Snow/NeuralDepthModel.jl (1 hunks)
- test/standalone/Snow/tool_tests.jl (10 hunks)
✅ Files skipped from review due to trivial changes (2)
- .buildkite/pipeline.yml
- .buildkite/Manifest.toml
🚧 Files skipped from review as they are similar to previous changes (2)
- experiments/standalone/Snow/snowmip_simulation.jl
- .buildkite/Manifest-v1.11.toml
🧰 Additional context used
📓 Path-based instructions (1)
`*`: # CodeRabbit Style Guide (CliMA Inspired)
Leverage CodeRabbit for code reviews aligning with CliMA's practices.
I. Key Areas for CodeRabbit:
- Style: Naming (TitleCase, lowercase_with_underscores), line length (<78), indentation (4 spaces), import order.
- DRY: Flag duplicated code; encourage modularization.
- Docstrings: Flag missing docstrings for modules, structs, functions.
- Tests: Detect missing unit tests (if configured).
- Complexity: Report on cyclomatic complexity.
II. Conventions (CodeRabbit Can Help):
- Naming: Follow CliMA/CMIP conventions. Avoid `l`, `O`, `I` as single-char vars.
- Unicode: Human review needed. Acceptable chars.
IV. Testing (CodeRabbit: Missing Tests):
- Unit tests required for new/modified code.
V. CodeRabbit Config:
.coderabbit.yaml
: Customize language, line length, tests.VI. Workflow:
- Review CodeRabbit's suggestions; investigate, provide context, address issues, improve config.
VII. Abandoned Code: Delete or clearly mark as deprecated.
VIII. Reviewer Responsibilities: Ensure adherence to best practices.
Project.toml
🧠 Learnings (2)
test/standalone/Snow/tool_tests.jl (1)
Learnt from: a-charbon
PR: CliMA/ClimaLand.jl#1118
File: test/standalone/Snow/tool_tests.jl:270-274
Timestamp: 2025-04-24T21:08:28.113Z
Learning: In the ClimaLand.jl codebase, models created with ModelTools.make_model() have an apply_relus matrix of size 5×3, while models created with ModelTools.make_model_paper() have an apply_relus matrix of size 3×3. This affects how indices should be used when testing the weight matrices.
src/standalone/Snow/NeuralDepthModel.jl (2)
Learnt from: a-charbon
PR: CliMA/ClimaLand.jl#1118
File: src/standalone/Snow/NeuralDepthModel.jl:0-0
Timestamp: 2025-04-25T00:56:04.807Z
Learning: In Flux.jl, when providing an explicit weight matrix to the Dense layer constructor, the second positional argument represents the bias parameter, and `false` can be used to indicate no bias (e.g., `Dense(matrix, false, activation)`). This is different from the dimension-based constructor where keyword arguments are typically used.
Learnt from: a-charbon
PR: CliMA/ClimaLand.jl#1118
File: src/standalone/Snow/NeuralDepthModel.jl:96-112
Timestamp: 2025-04-24T20:24:39.795Z
Learning: In Flux.jl, when defining a layer in a Chain using the syntax `scale = x -> x .* in_scales`, the variable `in_scales` remains accessible as a property of that layer, allowing it to be modified later with syntax like `model[:pred].layers.layers[:scale].in_scales .= new_values`. This is contrary to what might be expected from typical closure behavior in Julia, but is a feature of how Flux handles named functions in Chains.
⏰ Context from checks skipped due to timeout of 90000ms (13)
- GitHub Check: ci 1.11 - windows-latest
- GitHub Check: ci 1.11 - macOS-latest
- GitHub Check: ci 1.11 - ubuntu-latest
- GitHub Check: ci 1.10 - windows-latest
- GitHub Check: ci 1.10 - macOS-latest
- GitHub Check: test (1.11)
- GitHub Check: downstream ClimaCoupler.jl (1.11)
- GitHub Check: docbuild
- GitHub Check: ci 1.10 - ubuntu-latest
- GitHub Check: test (1.10)
- GitHub Check: downstream ClimaCoupler.jl (1.10)
- GitHub Check: lib-climalandsimulations (1.11)
- GitHub Check: lib-climalandsimulations (1.10)
🔇 Additional comments (1)
test/standalone/Snow/tool_tests.jl (1)
270-274
: Hidden assumption onapply_relus
size – add an explicit guard.The assertion
model[:apply_relus].weight[5,3]
relies on the
make_model
helper always producing a 5×3 matrix.
If the layer size ever changes (e.g. after a refactor) this will throw
BoundsError
long before the test failure reveals the real problem.
Add a shape check first:@test size(model[:apply_relus].weight, 1) ≥ 5 @test model[:apply_relus].weight[5, 3] == -0.001f0
function setoutscale!(model, scale::Real)
    FT = eltype(model[:apply_relus].weight)
    model[:apply_relus].weight[end, 3] = FT(-1 * scale)
end


"""
    settimescale!(model, dt)

Set the timescale parameter for model usage.

# Arguments
- `model::Chain`: the neural model to be used.
- `dt::Real`: the number of seconds per timestep for usage.
"""
function settimescale!(model, dt::Real)
    FT = eltype(model[:apply_relus].weight)
    model[:apply_relus].weight[2, 2] = FT(1.0 / dt)
end
🛠️ Refactor suggestion
`setoutscale!` / `settimescale!` assume mutable arrays.
These helpers directly mutate `model[:apply_relus].weight`. If a caller passes the `z_model` stored inside a `NeuralDepthModel` (the static-array case above) they will fail. Consider:
- Documenting that they must be called before wrapping in `NeuralDepthModel`, or
- Returning a fresh `Chain` with updated weights.
Either path prevents runtime surprises.
function NeuralDepthModel(z_model::MD, α::FT) where {FT, MD}
    static_z_model = Adapt.adapt(SArray, z_model)
    new{FT, typeof(static_z_model)}(static_z_model, α)
end
Post-construction mutability is lost after `Adapt.adapt(SArray, …)`.
`Adapt.adapt` converts all weight matrices to `SMatrix`, which is immutable. Any later call to `setoutscale!` or `settimescale!` on `density.z_model` will error with "attempt to setindex!".
Two possible fixes:

-    static_z_model = Adapt.adapt(SArray, z_model)
-    new{FT, typeof(static_z_model)}(static_z_model, α)
+    # keep the mutable copy so scaling utilities remain usable
+    new{FT, typeof(z_model)}(z_model, α)

or keep the static arrays but make `set*scale!` return a new model instead of mutating in-place.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-function NeuralDepthModel(z_model::MD, α::FT) where {FT, MD}
-    static_z_model = Adapt.adapt(SArray, z_model)
-    new{FT, typeof(static_z_model)}(static_z_model, α)
-end
+function NeuralDepthModel(z_model::MD, α::FT) where {FT, MD}
+    # keep the mutable copy so scaling utilities remain usable
+    new{FT, typeof(z_model)}(z_model, α)
+end
Purpose
The PR serves 4 purposes:
The Anderson (1976) parameterization was also added to the Snow module as it shares a few runtime utilities with the NeuralDepthModel and was a relatively straightforward add with only a few functions in a standalone file.
To-do
- Handle GPU enabling of neural models, perhaps also their mapping over ClimaFields (with Teja)
- Look into runtime handling of some instabilities for small snowpacks (currently handled with the `min_scf` parameter) (with Katherine)
Content