Losses / Validation Metrics Implementation #41

Open
Rilwan-Adewoyin opened this issue Sep 2, 2024 · 0 comments
Assignees: Rilwan-Adewoyin
Labels: enhancement (New feature or request)

Comments

Rilwan-Adewoyin commented Sep 2, 2024

Issues with Loss Implementation in anemoi-training

  1. Nomenclature:

    • The current implementation uses self.loss_weights for weighting in the spatial dimension and self.data_variances for weighting in the feature dimension. This naming is confusing because the default strategy for feature-dimension weighting is not based on data variances.
    • Proposal: Rename self.loss_weights to self.node_weight (spatial dimension) and self.data_variances to self.feature_weight (feature dimension). Additionally, modify the function responsible for generating feature scaling values so that it supports multiple feature scaling strategies (see the first sketch after this list).
  2. Consistency in Feature Scaling for Loss Calculation:

    • The loss computed by self.metrics does not currently apply feature scaling, whereas the loss used during training does apply the prescribed feature scaling. Previously, feature scaling was applied consistently to both metrics and training losses.
    • Proposal: Either revert to the previous behaviour, where feature scaling is applied to both, or introduce an option that lets users specify whether feature scaling should be applied in self.metrics (see the second sketch after this list).
  3. Monitoring and Recording Loss Types During Validation:

    • Currently, only WeightedMSELoss is available for monitoring and recording during validation, which restricts flexibility in evaluating model performance.
    • Proposal: Extend the configuration options to allow for (see the third sketch after this list):
      a) Selection of the type of loss/metric to record.
      b) Choice of whether to record metrics on processed or unprocessed features when evaluating single features. This is crucial for evaluating the performance of models that use different weighting strategies.
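
A minimal sketch of the renaming proposed in point 1, assuming a PyTorch module; the class name, the build_feature_weight factory, and the strategy names are illustrative rather than the existing anemoi-training API:

```python
import torch
from torch import nn


class NodeFeatureWeightedMSE(nn.Module):
    """MSE with the proposed names: node_weight (was self.loss_weights)
    for the spatial dimension and feature_weight (was self.data_variances)
    for the feature dimension."""

    def __init__(self, node_weight: torch.Tensor, feature_weight: torch.Tensor) -> None:
        super().__init__()
        self.register_buffer("node_weight", node_weight)        # shape (num_nodes,)
        self.register_buffer("feature_weight", feature_weight)  # shape (num_features,)

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # pred, target: (batch, num_nodes, num_features)
        sq_err = (pred - target) ** 2
        sq_err = sq_err * self.feature_weight              # feature-dimension scaling
        sq_err = sq_err * self.node_weight[None, :, None]  # spatial-dimension scaling
        return sq_err.mean()


def build_feature_weight(strategy: str, stdev: torch.Tensor) -> torch.Tensor:
    """Hypothetical factory so feature scaling is not hard-wired to variances."""
    if strategy == "inverse_variance":
        return 1.0 / stdev**2
    if strategy == "inverse_stdev":
        return 1.0 / stdev
    if strategy == "unit":
        return torch.ones_like(stdev)
    raise ValueError(f"unknown feature scaling strategy: {strategy!r}")
```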
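
For point 2, the opt-in could be as small as one flag on the metric computation; apply_feature_scaling is an assumed name, and defaulting it to True would instead restore the previous behaviour where metrics and training loss were scaled identically:

```python
import torch


def weighted_mse_metric(
    pred: torch.Tensor,
    target: torch.Tensor,
    feature_weight: torch.Tensor,
    apply_feature_scaling: bool = False,
) -> torch.Tensor:
    """Validation metric that applies feature scaling only when requested,
    making self.metrics optionally consistent with the training loss."""
    sq_err = (pred - target) ** 2
    if apply_feature_scaling:
        sq_err = sq_err * feature_weight  # broadcast over the feature dimension
    return sq_err.mean()
```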
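
And for point 3, a sketch of what a configurable set of validation metrics could look like; the registry contents, the "name"/"space" config keys, and build_validation_metrics are all invented for illustration:

```python
from torch import nn

# Hypothetical registry: today only WeightedMSELoss is recorded during
# validation; a registry would let the config select among several metrics.
METRICS: dict[str, type[nn.Module]] = {
    "mse": nn.MSELoss,
    "mae": nn.L1Loss,
    "huber": nn.HuberLoss,
}

# Illustrative config: each entry picks a metric type and whether it is
# evaluated on processed (normalised) or unprocessed (de-normalised) features.
VALIDATION_METRICS_CFG = [
    {"name": "mse", "space": "unprocessed"},
    {"name": "mae", "space": "processed"},
]


def build_validation_metrics(cfg: list[dict]) -> list[tuple[str, nn.Module, str]]:
    return [(m["name"], METRICS[m["name"]](), m["space"]) for m in cfg]
```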

What are the steps to reproduce the bug?

NA - Inspect code

Version

develop

Platform (OS and architecture)

NA

Relevant log output

No response

Accompanying data

No response

Organisation

ECMWF

Rilwan-Adewoyin added the enhancement (New feature or request) label and removed the bug (Something isn't working) label on Sep 2, 2024
Rilwan-Adewoyin self-assigned this on Sep 2, 2024
Rilwan-Adewoyin changed the title from "Losses Implementation" to "Losses / Validation Metrics Implementation" on Sep 2, 2024