Is your feature request related to a problem? Please describe.
Currently, the only value logged to the progress bar and to WandB is the loss value. However, there are many situations in which the loss on its own is of little to no value, or another metric is more interesting. Examples include negative log-likelihood-based loss functions (von Mises-Fisher, Gaussian, or cross-entropy), where quantities like RMSE, MAE, accuracy, or AUC might be more relevant metrics to go by. Another example is when a combination of multiple losses is used, in which case each component might be of interest.
Describe the solution you'd like
I would like to have the option to log additional, more human-friendly quantities. There is a library called torchmetrics that defines a plethora of metrics, which we could probably integrate with.
For the implementation, I think the easiest approach would be to allow adding extra metrics to a Task, along with a name for each metric. The StandardModel would then process all metrics from all of its tasks and be responsible for feeding them the proper data and logging them during training/validation.
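To make the idea concrete, here is a minimal, stdlib-only sketch of what this could look like. All names (`Task`, `StandardModel`, `add_metric`, `update_metrics`, `metric_logs`) are illustrative, not the actual graphnet API; the metric objects mimic torchmetrics' `update()`/`compute()` interface so that real torchmetrics instances could slot in unchanged.

```python
class MeanAbsoluteError:
    """Stand-in for torchmetrics.MeanAbsoluteError (stdlib-only sketch)."""

    def __init__(self):
        self._abs_sum = 0.0
        self._count = 0

    def update(self, preds, targets):
        # Accumulate state batch by batch, like a torchmetrics Metric.
        for p, t in zip(preds, targets):
            self._abs_sum += abs(p - t)
            self._count += 1

    def compute(self):
        return self._abs_sum / max(self._count, 1)


class Task:
    """A task carrying named metric objects alongside its loss."""

    def __init__(self, name):
        self.name = name
        self.metrics = {}  # metric name -> metric object

    def add_metric(self, name, metric):
        self.metrics[name] = metric


class StandardModel:
    """Collects metrics from all tasks and builds a log dict,
    analogous to calling self.log(...) in a LightningModule."""

    def __init__(self, tasks):
        self.tasks = tasks

    def update_metrics(self, preds_per_task, targets_per_task):
        for task in self.tasks:
            for metric in task.metrics.values():
                metric.update(preds_per_task[task.name], targets_per_task[task.name])

    def metric_logs(self, stage="train"):
        return {
            f"{stage}/{task.name}/{name}": metric.compute()
            for task in self.tasks
            for name, metric in task.metrics.items()
        }


# Usage: one task, one metric, one batch.
energy = Task("energy")
energy.add_metric("mae", MeanAbsoluteError())
model = StandardModel([energy])
model.update_metrics({"energy": [1.0, 2.0]}, {"energy": [1.5, 1.0]})
print(model.metric_logs())  # {'train/energy/mae': 0.75}
```

In a real LightningModule, `metric_logs()` would be replaced by `self.log(...)` calls in `training_step`/`validation_step`, which WandB picks up automatically.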
Describe alternatives you've considered
We could write our own implementation of a Metric-like class and use that, but I think it is easier to use a library that already exists, especially since torchmetrics is developed by Lightning AI, so we probably won't run into incompatible versions.
Additional context
Some of the metrics also define a plot method, which might be integrated in a way that would also solve #507.
I think this would be a nice addition to the library, and I agree that tying metrics to Tasks is the right approach.
Just like we have default prediction names for tasks, we could add default metrics for each task that would automatically be logged to WandB. At run time, users could override these defaults. That would be very elegant :-)
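The default-with-override pattern could be as simple as a keyword argument, in the spirit of how default prediction names work. A hedged sketch, where the class and names (`EnergyReconstruction`, `default_metrics`, `metrics=`) are hypothetical stand-ins, not the actual graphnet API:

```python
def mean_error(preds, targets):
    """Simple default metric: signed mean error (bias)."""
    return sum(p - t for p, t in zip(preds, targets)) / len(preds)


class EnergyReconstruction:
    """A task with default metrics, analogous to default prediction names."""

    @staticmethod
    def default_metrics():
        # In practice these could be torchmetrics objects, e.g.
        # {"rmse": torchmetrics.MeanSquaredError(squared=False)}.
        return {"bias": mean_error}

    def __init__(self, metrics=None):
        # User-supplied metrics replace the defaults entirely.
        self.metrics = dict(metrics) if metrics is not None else self.default_metrics()


# Default behaviour: the task comes with built-in metrics.
task = EnergyReconstruction()
print(sorted(task.metrics))  # ['bias']

# Run-time override: the user picks their own metric set.
custom = EnergyReconstruction(metrics={"mae": mean_error})
print(sorted(custom.metrics))  # ['mae']
```

Whether overriding replaces the defaults (as here) or merges with them is a design choice worth settling explicitly.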