
Commit

[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
pre-commit-ci[bot] committed Sep 22, 2023
1 parent 4145c59 commit 775ffec
Showing 121 changed files with 315 additions and 315 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/clear-cache.yml
@@ -4,7 +4,7 @@ on:
workflow_dispatch:
inputs:
pattern:
description: "patter for cleaning cache"
description: "pattern for cleaning cache"
default: "pip|conda"
required: false
type: string
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -216,7 +216,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Fixed padding removal for 3d input in `MSSSIM` ([#1674](https://github.com/Lightning-AI/torchmetrics/pull/1674))
- Fixed `max_det_threshold` in MAP detection ([#1712](https://github.com/Lightning-AI/torchmetrics/pull/1712))
- Fixed states being saved in metrics that use `register_buffer` ([#1728](https://github.com/Lightning-AI/torchmetrics/pull/1728))
- Fixed states not being correctly synced and device transfered in `MeanAveragePrecision` for `iou_type="segm"` ([#1763](https://github.com/Lightning-AI/torchmetrics/pull/1763))
- Fixed states not being correctly synced and device transferred in `MeanAveragePrecision` for `iou_type="segm"` ([#1763](https://github.com/Lightning-AI/torchmetrics/pull/1763))
- Fixed use of `prefix` and `postfix` in nested `MetricCollection` ([#1773](https://github.com/Lightning-AI/torchmetrics/pull/1773))
- Fixed `ax` plotting logging in `MetricCollection` ([#1783](https://github.com/Lightning-AI/torchmetrics/pull/1783))
- Fixed lookup for punkt sources being downloaded in `RougeScore` ([#1789](https://github.com/Lightning-AI/torchmetrics/pull/1789))
4 changes: 2 additions & 2 deletions README.md
@@ -201,7 +201,7 @@ def metric_ddp(rank, world_size):
acc = metric.compute()
print(f"Accuracy on all data: {acc}, accelerator rank: {rank}")

# Reseting internal state such that metric ready for new data
# Resetting internal state such that metric ready for new data
metric.reset()

# cleanup
@@ -298,7 +298,7 @@ Each domain may require some additional dependencies which can be installed with
#### Plotting

Visualization of metrics can be important to help understand what is going on with your machine learning algorithms.
Torchmetrics have build-in plotting support (install dependencies with `pip install torchmetrics[visual]`) for nearly
Torchmetrics have built-in plotting support (install dependencies with `pip install torchmetrics[visual]`) for nearly
all modular metrics through the `.plot` method. Simply call the method to get a simple visualization of any metric!

```python
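# Illustrative sketch only; the commit view collapses the README's real snippet here.
# BinaryAccuracy is chosen purely for demonstration.
import torch
from torchmetrics.classification import BinaryAccuracy

metric = BinaryAccuracy()
metric.update(torch.rand(10), torch.randint(2, (10,)))
fig, ax = metric.plot()   # returns a matplotlib figure and axis with the current value
```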
2 changes: 1 addition & 1 deletion docs/paper_JOSS/paper.bib
@@ -66,7 +66,7 @@ @article{scikit_learn

@misc{keras,
title={Keras},
author={Chollet, Fran\c{c}is and others},
author={Chollet, Fran\c{c}ois and others},
year={2015},
publisher={GitHub},
howpublished={\url{https://github.com/fchollet/keras}},
2 changes: 1 addition & 1 deletion docs/paper_JOSS/paper.md
@@ -103,6 +103,6 @@ TorchMetrics is released under the Apache 2.0 license. The source code is availa

# Acknowledgement

The TorchMetrics team thanks Thomas Chaton, Ethan Harris, Carlos Mocholí, Sean Narenthiran, Adrian Wälchli, and Ananth Subramaniam for contributing ideas, participating in discussions on API design, and completing Pull Request reviews. We also thank all of our open-source contributors for reporting and resolving issues with this package. We are grateful to the PyTorch Lightning team for their ongoing and dedicated support of this project, and Grid.ai for providing computing resources and cloud credits needed to run our Continuos Integrations.
The TorchMetrics team thanks Thomas Chaton, Ethan Harris, Carlos Mocholí, Sean Narenthiran, Adrian Wälchli, and Ananth Subramaniam for contributing ideas, participating in discussions on API design, and completing Pull Request reviews. We also thank all of our open-source contributors for reporting and resolving issues with this package. We are grateful to the PyTorch Lightning team for their ongoing and dedicated support of this project, and Grid.ai for providing computing resources and cloud credits needed to run our Continuous Integrations.

# References
2 changes: 1 addition & 1 deletion docs/source/all-metrics.rst
@@ -1,4 +1,4 @@
.. this page is refering other pages with `customcarditem`; bypass hierarchy is patch with redirect
.. this page is referring other pages with `customcarditem`; bypass hierarchy is patch with redirect
All TorchMetrics
================
2 changes: 1 addition & 1 deletion docs/source/links.rst
@@ -86,7 +86,7 @@
.. _SpectralDistortionIndex: https://www.semanticscholar.org/paper/Multispectral-and-panchromatic-data-fusion-without-Alparone-Aiazzi/b6db12e3785326577cb95fd743fecbf5bc66c7c9
.. _RelativeAverageSpectralError: https://www.semanticscholar.org/paper/Data-Fusion.-Definitions-and-Architectures-Fusion-Wald/51b2b81e5124b3bb7ec53517a5dd64d8e348cadf
.. _WMAPE: https://en.wikipedia.org/wiki/WMAPE
.. _CER: https://rechtsprechung-im-ostseeraum.archiv.uni-greifswald.de/word-error-rate-character-error-rate-how-to-evaluate-a-model
.. _CER: https://rechtsprechung-im-ostseeraum.archive.uni-greifswald.de/word-error-rate-character-error-rate-how-to-evaluate-a-model
.. _MER: https://www.isca-speech.org/archive/interspeech_2004/morris04_interspeech.html
.. _WIL: https://www.isca-speech.org/archive/interspeech_2004/morris04_interspeech.html
.. _WIP: https://infoscience.epfl.ch/record/82766
2 changes: 1 addition & 1 deletion docs/source/pages/implement.rst
@@ -215,7 +215,7 @@ can behave in two ways:
5. Calls ``compute()`` to calculate metric for current batch.
6. Restores the global state.

2. If ``full_state_update`` is ``False`` (default) the metric state of one batch is completly independent of the state
2. If ``full_state_update`` is ``False`` (default) the metric state of one batch is completely independent of the state
of other batches, which means that we only need to call ``update`` once.

1. Caches the global state.
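To make the distinction concrete, here is a minimal sketch of a custom metric that sets `full_state_update = False` because its batch states simply add up (class and state names are illustrative, not taken from the torchmetrics codebase):

```python
import torch
from torchmetrics import Metric


class SumOfValues(Metric):
    # Each batch contributes independently to the running sum, so ``forward``
    # only needs a single ``update`` call per batch.
    full_state_update = False

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.add_state("total", default=torch.tensor(0.0), dist_reduce_fx="sum")

    def update(self, value: torch.Tensor) -> None:
        self.total += value.sum()

    def compute(self) -> torch.Tensor:
        return self.total
```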
4 changes: 2 additions & 2 deletions docs/source/pages/lightning.rst
@@ -165,7 +165,7 @@ The following contains a list of pitfalls to be aware of:

* Modular metrics contain internal states that should belong to only one DataLoader. In case you are using multiple DataLoaders,
it is recommended to initialize a separate modular metric instances for each DataLoader and use them separately. The same holds
for using seperate metrics for training, validation and testing.
for using separate metrics for training, validation and testing.

.. testcode:: python

@@ -194,7 +194,7 @@ The following contains a list of pitfalls to be aware of:

* Calling ``self.log("val", self.metric(preds, target))`` with the intention of logging the metric object. Because
``self.metric(preds, target)`` corresponds to calling the forward method, this will return a tensor and not the
metric object. Such logging will be wrong in this case. Instead it is important to seperate into seperate lines:
metric object. Such logging will be wrong in this case. Instead it is important to separate into separate lines:

.. testcode:: python

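A minimal sketch of the recommended pattern inside a LightningModule; the module itself and the way predictions are produced are assumptions, not the documentation's exact example:

```python
import torch
from pytorch_lightning import LightningModule
from torchmetrics.classification import BinaryAccuracy


class LitModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.metric = BinaryAccuracy()

    def validation_step(self, batch, batch_idx):
        preds, target = batch                 # assumes the dataloader already yields preds/target
        self.metric(preds, target)            # forward call: updates internal state, returns batch value
        self.log("val_acc", self.metric)      # log the metric *object*; Lightning handles compute/reset
```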
8 changes: 4 additions & 4 deletions docs/source/pages/overview.rst
@@ -117,7 +117,7 @@ the native `MetricCollection`_ module can also be used to wrap multiple metrics.
self.metric1 = BinaryAccuracy()
self.metric2 = nn.ModuleList([BinaryAccuracy()])
self.metric3 = nn.ModuleDict({'accuracy': BinaryAccuracy()})
self.metric4 = MetricCollection([BinaryAccuracy()]) # torchmetrics build-in collection class
self.metric4 = MetricCollection([BinaryAccuracy()]) # torchmetrics built-in collection class

def forward(self, batch):
data, target = batch
@@ -205,8 +205,8 @@ Most metrics in our collection can be used with 16-bit precision (``torch.half``
the following limitations:

* In general ``pytorch`` had better support for 16-bit precision much earlier on GPU than CPU. Therefore, we
recommend that anyone that want to use metrics with half precision on CPU, upgrade to atleast pytorch v1.6
where support for operations such as addition, subtraction, multiplication ect. was added.
recommend that anyone that want to use metrics with half precision on CPU, upgrade to at least pytorch v1.6
where support for operations such as addition, subtraction, multiplication etc. was added.
* Some metrics does not work at all in half precision on CPU. We have explicitly stated this in their docstring,
but they are also listed below:

@@ -217,7 +217,7 @@ the following limitations:
You can always check the precision/dtype of the metric by checking the `.dtype` property.
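As a rough sketch of what checking and switching precision looks like (assuming a metric that supports half precision on your device):

```python
import torch
from torchmetrics.regression import MeanSquaredError

metric = MeanSquaredError()
print(metric.dtype)      # torch.float32 by default
metric = metric.half()   # move internal metric states to 16-bit precision
print(metric.dtype)      # torch.float16
metric.update(torch.randn(10).half(), torch.randn(10).half())
print(metric.compute())
```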

******************
Metric Arithmetics
Metric Arithmetic
******************

Metrics support most of python built-in operators for arithmetic, logic and bitwise operations.
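For example, two metrics can be combined with ordinary operators into a new compositional metric; a small sketch (metric choice is arbitrary):

```python
import torch
from torchmetrics.classification import BinaryAccuracy, BinaryPrecision

acc, prec = BinaryAccuracy(), BinaryPrecision()
mean_of_both = (acc + prec) * 0.5   # compositional metric built from arithmetic on metric objects

preds, target = torch.rand(10), torch.randint(2, (10,))
mean_of_both.update(preds, target)
print(mean_of_both.compute())
```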
8 changes: 4 additions & 4 deletions docs/source/pages/plotting.rst
@@ -17,7 +17,7 @@ Plotting
`Scienceplot package <https://github.com/garrettj403/SciencePlots>`_ is also installed and all plots in
Torchmetrics will default to using that style.

Torchmetrics comes with build-in support for quick visualization of your metrics, by simply using the ``.plot`` method
Torchmetrics comes with built-in support for quick visualization of your metrics, by simply using the ``.plot`` method
that all modular metrics implement. This method provides a consistent interface for basic plotting of all metrics.

.. code-block:: python
@@ -146,7 +146,7 @@ a model over time, we could do it like this:
:include-source: false

Do note that metrics that do not return simple scalar tensors, such as `ConfusionMatrix`, `ROC` that have specialized
visualzation does not support plotting multiple steps, out of the box and the user needs to manually plot the values
visualization does not support plotting multiple steps, out of the box and the user needs to manually plot the values
for each step.

********************************
@@ -235,7 +235,7 @@ to rely on ``MetricTracker`` to keep track of the metrics over multiple steps.
# Extract all metrics from all steps
all_results = tracker.compute_all()
# Constuct a single figure with appropriate layout for all metrics
# Construct a single figure with appropriate layout for all metrics
fig = plt.figure(layout="constrained")
ax1 = plt.subplot(2, 2, 1)
ax2 = plt.subplot(2, 2, 2)
@@ -245,7 +245,7 @@ to rely on ``MetricTracker`` to keep track of the metrics over multiple steps.
confmat.plot(val=all_results[-1]['BinaryConfusionMatrix'], ax=ax1)
roc.plot(all_results[-1]["BinaryROC"], ax=ax2)
# For the remainig we plot the full history, but we need to extract the scalar values from the results
# For the remaining we plot the full history, but we need to extract the scalar values from the results
scalar_results = [
{k: v for k, v in ar.items() if isinstance(v, torch.Tensor) and v.numel() == 1} for ar in all_results
]
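A condensed sketch of the `MetricTracker` flow shown above, reduced to a single metric (not the documentation's full multi-metric example):

```python
import torch
from torchmetrics import MetricTracker
from torchmetrics.classification import BinaryAccuracy

tracker = MetricTracker(BinaryAccuracy())
for step in range(3):
    tracker.increment()   # start tracking a new step
    tracker.update(torch.rand(20), torch.randint(2, (20,)))

all_results = tracker.compute_all()   # one accuracy value per tracked step
print(all_results)
```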
2 changes: 1 addition & 1 deletion docs/source/pages/quickstart.rst
@@ -101,7 +101,7 @@ The code below shows how to use the class-based interface:
acc = metric.compute()
print(f"Accuracy on all data: {acc}")

# Reseting internal state such that metric ready for new data
# Resetting internal state such that metric ready for new data
metric.reset()

.. testoutput::
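For context, a self-contained sketch of the class-based loop that the snippet above ends (metric and batch shapes are chosen for illustration):

```python
import torch
from torchmetrics.classification import MulticlassAccuracy

metric = MulticlassAccuracy(num_classes=5)

for _ in range(10):                            # pretend these are batches from a dataloader
    preds = torch.randn(16, 5).softmax(dim=-1)
    target = torch.randint(5, (16,))
    metric.update(preds, target)               # accumulate state batch by batch

acc = metric.compute()                         # metric over all seen data
print(f"Accuracy on all data: {acc}")

metric.reset()                                 # ready for new data
```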
2 changes: 1 addition & 1 deletion docs/source/references/utilities.rst
@@ -59,7 +59,7 @@ dim_zero_sum
torchmetrics.utilities.distributed
**********************************

The `distributed` utilities are used to help with syncronization of metrics across multiple processes.
The `distributed` utilities are used to help with synchronization of metrics across multiple processes.

gather_all_tensors
~~~~~~~~~~~~~~~~~~
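As a sketch, `gather_all_tensors` collects a tensor from every process; it only works inside an already-initialized process group (e.g. a script launched with `torchrun`):

```python
import torch
import torch.distributed as dist
from torchmetrics.utilities.distributed import gather_all_tensors

# assumes dist.init_process_group(...) has already run in each process
local_state = torch.tensor([float(dist.get_rank())])
all_states = gather_all_tensors(local_state)   # list with one tensor per process
```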
2 changes: 1 addition & 1 deletion requirements/image.txt
@@ -3,5 +3,5 @@

scipy >1.0.0, <1.11.0
torchvision >=0.8, <=0.15.2
torch-fidelity <=0.4.0 # bumping to alow install version from master, now used in testing
torch-fidelity <=0.4.0 # bumping to allow install version from master, now used in testing
lpips <=0.1.4
2 changes: 1 addition & 1 deletion src/torchmetrics/__about__.py
@@ -12,7 +12,7 @@
Pytorch Lightning, but got split off so users could take advantage of the large collection of metrics
implemented without having to install Pytorch Lightning (even though we would love for you to try it out).
We currently have around 100+ metrics implemented and we continuously are adding more metrics, both within
already covered domains (classification, regression ect.) but also new domains (object detection ect.).
already covered domains (classification, regression etc.) but also new domains (object detection etc.).
We make sure that all our metrics are rigorously tested such that you can trust them.
"""

16 changes: 8 additions & 8 deletions src/torchmetrics/aggregation.py
@@ -117,7 +117,7 @@ class MaxMetric(BaseAggregator):
As input to ``forward`` and ``update`` the metric accepts the following input
- ``value`` (:class:`~float` or :class:`~torch.Tensor`): a single float or an tensor of float values with
arbitary shape ``(...,)``.
arbitrary shape ``(...,)``.
As output of `forward` and `compute` the metric returns the following output
@@ -222,7 +222,7 @@ class MinMetric(BaseAggregator):
As input to ``forward`` and ``update`` the metric accepts the following input
- ``value`` (:class:`~float` or :class:`~torch.Tensor`): a single float or an tensor of float values with
arbitary shape ``(...,)``.
arbitrary shape ``(...,)``.
As output of `forward` and `compute` the metric returns the following output
@@ -327,7 +327,7 @@ class SumMetric(BaseAggregator):
As input to ``forward`` and ``update`` the metric accepts the following input
- ``value`` (:class:`~float` or :class:`~torch.Tensor`): a single float or an tensor of float values with
arbitary shape ``(...,)``.
arbitrary shape ``(...,)``.
As output of `forward` and `compute` the metric returns the following output
@@ -432,7 +432,7 @@ class CatMetric(BaseAggregator):
As input to ``forward`` and ``update`` the metric accepts the following input
- ``value`` (:class:`~float` or :class:`~torch.Tensor`): a single float or an tensor of float values with
arbitary shape ``(...,)``.
arbitrary shape ``(...,)``.
As output of `forward` and `compute` the metric returns the following output
@@ -496,9 +496,9 @@ class MeanMetric(BaseAggregator):
As input to ``forward`` and ``update`` the metric accepts the following input
- ``value`` (:class:`~float` or :class:`~torch.Tensor`): a single float or an tensor of float values with
arbitary shape ``(...,)``.
arbitrary shape ``(...,)``.
- ``weight`` (:class:`~float` or :class:`~torch.Tensor`): a single float or an tensor of float value with
arbitary shape ``(...,)``. Needs to be broadcastable with the shape of ``value`` tensor.
arbitrary shape ``(...,)``. Needs to be broadcastable with the shape of ``value`` tensor.
As output of `forward` and `compute` the metric returns the following output
@@ -623,7 +623,7 @@ class RunningMean(Running):
As input to ``forward`` and ``update`` the metric accepts the following input
- ``value`` (:class:`~float` or :class:`~torch.Tensor`): a single float or an tensor of float values with
arbitary shape ``(...,)``.
arbitrary shape ``(...,)``.
As output of `forward` and `compute` the metric returns the following output
@@ -680,7 +680,7 @@ class RunningSum(Running):
As input to ``forward`` and ``update`` the metric accepts the following input
- ``value`` (:class:`~float` or :class:`~torch.Tensor`): a single float or an tensor of float values with
arbitary shape ``(...,)``.
arbitrary shape ``(...,)``.
As output of `forward` and `compute` the metric returns the following output
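A short sketch of how these aggregation metrics are used (values are arbitrary):

```python
import torch
from torchmetrics.aggregation import MeanMetric, SumMetric

mean = MeanMetric()
mean.update(1.0)                                                         # plain float
mean.update(torch.tensor([2.0, 3.0]), weight=torch.tensor([1.0, 3.0]))  # weighted tensor update
print(mean.compute())                                                    # weighted mean of everything seen

total = SumMetric()
total.update(torch.arange(5.0))
print(total.compute())                                                   # tensor(10.)
```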
4 changes: 2 additions & 2 deletions src/torchmetrics/audio/pesq.py
@@ -30,7 +30,7 @@ class PerceptualEvaluationSpeechQuality(Metric):
"""Calculate `Perceptual Evaluation of Speech Quality`_ (PESQ).
It's a recognized industry standard for audio quality that takes into considerations characteristics such as:
audio sharpness, call volume, background noise, clipping, audio interference ect. PESQ returns a score between
audio sharpness, call volume, background noise, clipping, audio interference etc. PESQ returns a score between
-0.5 and 4.5 with the higher scores indicating a better quality.
This metric is a wrapper for the `pesq package`_. Note that input will be moved to ``cpu`` to perform the metric
@@ -54,7 +54,7 @@ class PerceptualEvaluationSpeechQuality(Metric):
fs: sampling frequency, should be 16000 or 8000 (Hz)
mode: ``'wb'`` (wide-band) or ``'nb'`` (narrow-band)
keep_same_device: whether to move the pesq value to the device of preds
n_processes: integer specifiying the number of processes to run in parallel for the metric calculation.
n_processes: integer specifying the number of processes to run in parallel for the metric calculation.
Only applies to batches of data and if ``multiprocessing`` package is installed.
kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
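A usage sketch (requires the `pesq` package, e.g. via `pip install torchmetrics[audio]`; real speech waveforms are expected, random tensors appear here only to show the call):

```python
import torch
from torchmetrics.audio import PerceptualEvaluationSpeechQuality

pesq = PerceptualEvaluationSpeechQuality(fs=16000, mode="wb", n_processes=1)
preds = torch.randn(2, 16000)    # batch of 1-second degraded signals
target = torch.randn(2, 16000)   # reference signals
print(pesq(preds, target))
```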
8 changes: 4 additions & 4 deletions src/torchmetrics/classification/accuracy.py
@@ -40,7 +40,7 @@ class BinaryAccuracy(BinaryStatScores):
- ``preds`` (:class:`~torch.Tensor`): An int or float tensor of shape ``(N, ...)``. If preds is a floating
point tensor with values outside [0,1] range we consider the input to be logits and will auto apply sigmoid
per element. Addtionally, we convert to int tensor with thresholding using the value in ``threshold``.
per element. Additionally, we convert to int tensor with thresholding using the value in ``threshold``.
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, ...)``
As output to ``forward`` and ``compute`` the metric returns the following output:
@@ -177,7 +177,7 @@ class MulticlassAccuracy(MulticlassStatScores):
- If ``average=None/'none'``, the shape will be ``(N, C)``
Args:
num_classes: Integer specifing the number of classes
num_classes: Integer specifying the number of classes
average:
Defines the reduction that is applied over labels. Should be one of the following:
@@ -307,7 +307,7 @@ class MultilabelAccuracy(MultilabelStatScores):
- ``preds`` (:class:`~torch.Tensor`): An int or float tensor of shape ``(N, C, ...)``. If preds is a floating
point tensor with values outside [0,1] range we consider the input to be logits and will auto apply sigmoid per
element. Addtionally, we convert to int tensor with thresholding using the value in ``threshold``.
element. Additionally, we convert to int tensor with thresholding using the value in ``threshold``.
- ``target`` (:class:`~torch.Tensor`): An int tensor of shape ``(N, C, ...)``
As output to ``forward`` and ``compute`` the metric returns the following output:
@@ -326,7 +326,7 @@ class MultilabelAccuracy(MultilabelStatScores):
- If ``average=None/'none'``, the shape will be ``(N, C)``
Args:
num_labels: Integer specifing the number of labels
num_labels: Integer specifying the number of labels
threshold: Threshold for transforming probability to binary (0,1) predictions
average:
Defines the reduction that is applied over labels. Should be one of the following:
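A brief sketch of the three accuracy variants documented in this file (shapes and values are arbitrary):

```python
import torch
from torchmetrics.classification import BinaryAccuracy, MulticlassAccuracy, MultilabelAccuracy

bacc = BinaryAccuracy(threshold=0.5)
# values outside [0, 1] are treated as logits: sigmoid is applied, then thresholding
print(bacc(torch.tensor([-2.0, 1.5, 0.3]), torch.tensor([0, 1, 1])))

macc = MulticlassAccuracy(num_classes=3, average="macro")
print(macc(torch.randn(8, 3), torch.randint(3, (8,))))

mlacc = MultilabelAccuracy(num_labels=4)
print(mlacc(torch.rand(8, 4), torch.randint(2, (8, 4))))
```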
6 changes: 3 additions & 3 deletions src/torchmetrics/classification/auroc.py
@@ -173,7 +173,7 @@ class MulticlassAUROC(MulticlassPrecisionRecallCurve):
corresponds to random guessing.
For multiclass the metric is calculated by iteratively treating each class as the positive class and all other
classes as the negative, which is refered to as the one-vs-rest approach. One-vs-one is currently not supported by
classes as the negative, which is referred to as the one-vs-rest approach. One-vs-one is currently not supported by
this metric. By default the reported metric is then the average over all classes, but this behavior can be changed
by setting the ``average`` argument.
@@ -199,7 +199,7 @@ class MulticlassAUROC(MulticlassPrecisionRecallCurve):
size :math:`\mathcal{O}(n_{thresholds} \times n_{classes})` (constant memory).
Args:
num_classes: Integer specifing the number of classes
num_classes: Integer specifying the number of classes
average:
Defines the reduction that is applied over classes. Should be one of the following:
@@ -346,7 +346,7 @@ class MultilabelAUROC(MultilabelPrecisionRecallCurve):
size :math:`\mathcal{O}(n_{thresholds} \times n_{labels})` (constant memory).
Args:
num_labels: Integer specifing the number of labels
num_labels: Integer specifying the number of labels
average:
Defines the reduction that is applied over labels. Should be one of the following:
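A minimal sketch of the one-vs-rest multiclass case (class count and batch size are arbitrary):

```python
import torch
from torchmetrics.classification import MulticlassAUROC

# thresholds=None uses the exact, non-binned computation described above
auroc = MulticlassAUROC(num_classes=4, average="macro", thresholds=None)
preds = torch.softmax(torch.randn(32, 4), dim=-1)
target = torch.randint(4, (32,))
print(auroc(preds, target))
```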