check codespell & fixing some... #2103

Merged (22 commits) on Sep 26, 2023
8 changes: 4 additions & 4 deletions .github/actions/push-caches/action.yml
@@ -32,11 +32,11 @@ runs:
  # run: |
  #   import os
  #   fp = 'requirements.dump'
- #   with open(fp) as fo:
- #       lines = [ln.strip() for ln in fo.readlines()]
+ #   with open(fp) as fopen:
+ #       lines = [ln.strip() for ln in fopen.readlines()]
  #   lines = [ln.split('+')[0] for ln in lines if '-e ' not in ln]
- #   with open(fp, 'w') as fw:
- #       fw.writelines([ln + os.linesep for ln in lines])
+ #   with open(fp, 'w') as fwrite:
+ #       fwrite.writelines([ln + os.linesep for ln in lines])
  # shell: python

  - name: Dump wheels
2 changes: 1 addition & 1 deletion .github/workflows/clear-cache.yml
@@ -4,7 +4,7 @@ on:
  workflow_dispatch:
    inputs:
      pattern:
-       description: "patter for cleaning cache"
+       description: "pattern for cleaning cache"
        default: "pip|conda"
        required: false
        type: string
7 changes: 7 additions & 0 deletions .pre-commit-config.yaml
@@ -44,6 +44,13 @@ repos:
        args: [--py38-plus]
        name: Upgrade code

+ - repo: https://github.com/codespell-project/codespell
+   rev: v2.2.5
+   hooks:
+     - id: codespell
+       additional_dependencies: [tomli]
+       #args: ["--write-changes"]
+
  - repo: https://github.com/PyCQA/docformatter
    rev: v1.7.5
    hooks:
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -238,7 +238,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  - Fixed padding removal for 3d input in `MSSSIM` ([#1674](https://github.com/Lightning-AI/torchmetrics/pull/1674))
  - Fixed `max_det_threshold` in MAP detection ([#1712](https://github.com/Lightning-AI/torchmetrics/pull/1712))
  - Fixed states being saved in metrics that use `register_buffer` ([#1728](https://github.com/Lightning-AI/torchmetrics/pull/1728))
- - Fixed states not being correctly synced and device transfered in `MeanAveragePrecision` for `iou_type="segm"` ([#1763](https://github.com/Lightning-AI/torchmetrics/pull/1763))
+ - Fixed states not being correctly synced and device transferred in `MeanAveragePrecision` for `iou_type="segm"` ([#1763](https://github.com/Lightning-AI/torchmetrics/pull/1763))
  - Fixed use of `prefix` and `postfix` in nested `MetricCollection` ([#1773](https://github.com/Lightning-AI/torchmetrics/pull/1773))
  - Fixed `ax` plotting logging in `MetricCollection` ([#1783](https://github.com/Lightning-AI/torchmetrics/pull/1783))
  - Fixed lookup for punkt sources being downloaded in `RougeScore` ([#1789](https://github.com/Lightning-AI/torchmetrics/pull/1789))
6 changes: 3 additions & 3 deletions README.md
@@ -201,7 +201,7 @@ def metric_ddp(rank, world_size):
      acc = metric.compute()
      print(f"Accuracy on all data: {acc}, accelerator rank: {rank}")

-     # Reseting internal state such that metric ready for new data
+     # Resetting internal state such that metric ready for new data
      metric.reset()

      # cleanup
@@ -278,7 +278,7 @@ acc = torchmetrics.functional.classification.multiclass_accuracy(
  ### Covered domains and example metrics

  In total TorchMetrics contains [100+ metrics](https://lightning.ai/docs/torchmetrics/stable/all-metrics.html), which
- convers the following domains:
+ covers the following domains:

  - Audio
  - Classification
@@ -298,7 +298,7 @@ Each domain may require some additional dependencies which can be installed with
  #### Plotting

  Visualization of metrics can be important to help understand what is going on with your machine learning algorithms.
- Torchmetrics have build-in plotting support (install dependencies with `pip install torchmetrics[visual]`) for nearly
+ Torchmetrics have built-in plotting support (install dependencies with `pip install torchmetrics[visual]`) for nearly
  all modular metrics through the `.plot` method. Simply call the method to get a simple visualization of any metric!

  ```python
2 changes: 1 addition & 1 deletion docs/paper_JOSS/paper.bib
@@ -66,7 +66,7 @@ @article{scikit_learn

  @misc{keras,
    title={Keras},
-   author={Chollet, Fran\c{c}ois and others},
+   author={Chollet, François and others},
    year={2015},
    publisher={GitHub},
    howpublished={\url{https://github.com/fchollet/keras}},
2 changes: 1 addition & 1 deletion docs/paper_JOSS/paper.md
@@ -103,6 +103,6 @@ TorchMetrics is released under the Apache 2.0 license. The source code is availa

  # Acknowledgement

- The TorchMetrics team thanks Thomas Chaton, Ethan Harris, Carlos Mocholí, Sean Narenthiran, Adrian Wälchli, and Ananth Subramaniam for contributing ideas, participating in discussions on API design, and completing Pull Request reviews. We also thank all of our open-source contributors for reporting and resolving issues with this package. We are grateful to the PyTorch Lightning team for their ongoing and dedicated support of this project, and Grid.ai for providing computing resources and cloud credits needed to run our Continuos Integrations.
+ The TorchMetrics team thanks Thomas Chaton, Ethan Harris, Carlos Mocholí, Sean Narenthiran, Adrian Wälchli, and Ananth Subramaniam for contributing ideas, participating in discussions on API design, and completing Pull Request reviews. We also thank all of our open-source contributors for reporting and resolving issues with this package. We are grateful to the PyTorch Lightning team for their ongoing and dedicated support of this project, and Grid.ai for providing computing resources and cloud credits needed to run our Continuous Integrations.

  # References
2 changes: 1 addition & 1 deletion docs/source/all-metrics.rst
@@ -1,4 +1,4 @@
- .. this page is refering other pages with `customcarditem`; bypass hierarchy is patch with redirect
+ .. this page is referring other pages with `customcarditem`; bypass hierarchy is patch with redirect

  All TorchMetrics
  ================
8 changes: 4 additions & 4 deletions docs/source/conf.py
@@ -67,14 +67,14 @@

  def _set_root_image_path(page_path: str):
      """Set relative path to be from the root, drop all `../` in images used gallery."""
-     with open(page_path, encoding="UTF-8") as fo:
-         body = fo.read()
+     with open(page_path, encoding="UTF-8") as fopen:
+         body = fopen.read()
      found = re.findall(r" :image: (.*)\.svg", body)
      for occur in found:
          occur_ = occur.replace("../", "")
          body = body.replace(occur, occur_)
-     with open(page_path, "w", encoding="UTF-8") as fo:
-         fo.write(body)
+     with open(page_path, "w", encoding="UTF-8") as fopen:
+         fopen.write(body)


  if SPHINX_FETCH_ASSETS:
2 changes: 1 addition & 1 deletion docs/source/pages/implement.rst
@@ -215,7 +215,7 @@ can behave in two ways:
     5. Calls ``compute()`` to calculate metric for current batch.
     6. Restores the global state.

- 2. If ``full_state_update`` is ``False`` (default) the metric state of one batch is completly independent of the state
+ 2. If ``full_state_update`` is ``False`` (default) the metric state of one batch is completely independent of the state
     of other batches, which means that we only need to call ``update`` once.

     1. Caches the global state.
4 changes: 2 additions & 2 deletions docs/source/pages/lightning.rst
@@ -165,7 +165,7 @@ The following contains a list of pitfalls to be aware of:

  * Modular metrics contain internal states that should belong to only one DataLoader. In case you are using multiple DataLoaders,
    it is recommended to initialize a separate modular metric instances for each DataLoader and use them separately. The same holds
-   for using seperate metrics for training, validation and testing.
+   for using separate metrics for training, validation and testing.

  .. testcode:: python

@@ -194,7 +194,7 @@ The following contains a list of pitfalls to be aware of:

  * Calling ``self.log("val", self.metric(preds, target))`` with the intention of logging the metric object. Because
    ``self.metric(preds, target)`` corresponds to calling the forward method, this will return a tensor and not the
-   metric object. Such logging will be wrong in this case. Instead it is important to seperate into seperate lines:
+   metric object. Such logging will be wrong in this case. Instead, it is essential to separate into several lines:

  .. testcode:: python

14 changes: 7 additions & 7 deletions docs/source/pages/overview.rst
@@ -117,7 +117,7 @@ the native `MetricCollection`_ module can also be used to wrap multiple metrics.
          self.metric1 = BinaryAccuracy()
          self.metric2 = nn.ModuleList(BinaryAccuracy())
          self.metric3 = nn.ModuleDict({'accuracy': BinaryAccuracy()})
-         self.metric4 = MetricCollection([BinaryAccuracy()]) # torchmetrics build-in collection class
+         self.metric4 = MetricCollection([BinaryAccuracy()]) # torchmetrics built-in collection class

      def forward(self, batch):
          data, target = batch
@@ -205,8 +205,8 @@ Most metrics in our collection can be used with 16-bit precision (``torch.half``) with
  the following limitations:

  * In general ``pytorch`` had better support for 16-bit precision much earlier on GPU than CPU. Therefore, we
-   recommend that anyone that want to use metrics with half precision on CPU, upgrade to atleast pytorch v1.6
-   where support for operations such as addition, subtraction, multiplication ect. was added.
+   recommend that anyone that want to use metrics with half precision on CPU, upgrade to at least pytorch v1.6
+   where support for operations such as addition, subtraction, multiplication etc. was added.
  * Some metrics does not work at all in half precision on CPU. We have explicitly stated this in their docstring,
    but they are also listed below:

@@ -216,9 +216,9 @@ the following limitations:

  You can always check the precision/dtype of the metric by checking the `.dtype` property.

- ******************
- Metric Arithmetics
- ******************
+ *****************
+ Metric Arithmetic
+ *****************

  Metrics support most of python built-in operators for arithmetic, logic and bitwise operations.

@@ -484,7 +484,7 @@ argument can help:
    of GPU. Only applies to metric states that are lists.

  - ``compute_with_cache``: This argument indicates if the result after calling the ``compute`` method should be cached.
-   By default this is ``True`` meaning that repeated calls to ``compute`` (with no change to the metric state inbetween)
+   By default this is ``True`` meaning that repeated calls to ``compute`` (with no change to the metric state in between)
    does not recompute the metric but just returns the cache. By setting it to ``False`` the metric will be recomputed
    every time ``compute`` is called, but it can also help clean up a bit of memory.
8 changes: 4 additions & 4 deletions docs/source/pages/plotting.rst
@@ -17,7 +17,7 @@ Plotting
  `Scienceplot package <https://github.com/garrettj403/SciencePlots>`_ is also installed and all plots in
  Torchmetrics will default to using that style.

- Torchmetrics comes with build-in support for quick visualization of your metrics, by simply using the ``.plot`` method
+ Torchmetrics comes with built-in support for quick visualization of your metrics, by simply using the ``.plot`` method
  that all modular metrics implement. This method provides a consistent interface for basic plotting of all metrics.

  .. code-block:: python
@@ -146,7 +146,7 @@ a model over time, we could do it like this:
      :include-source: false

  Do note that metrics that do not return simple scalar tensors, such as `ConfusionMatrix`, `ROC` that have specialized
- visualzation does not support plotting multiple steps, out of the box and the user needs to manually plot the values
+ visualization does not support plotting multiple steps, out of the box and the user needs to manually plot the values
  for each step.

  ********************************
@@ -235,7 +235,7 @@ to rely on ``MetricTracker`` to keep track of the metrics over multiple steps.
      # Extract all metrics from all steps
      all_results = tracker.compute_all()

-     # Constuct a single figure with appropriate layout for all metrics
+     # Construct a single figure with appropriate layout for all metrics
      fig = plt.figure(layout="constrained")
      ax1 = plt.subplot(2, 2, 1)
      ax2 = plt.subplot(2, 2, 2)
@@ -245,7 +245,7 @@ to rely on ``MetricTracker`` to keep track of the metrics over multiple steps.
      confmat.plot(val=all_results[-1]['BinaryConfusionMatrix'], ax=ax1)
      roc.plot(all_results[-1]["BinaryROC"], ax=ax2)

-     # For the remainig we plot the full history, but we need to extract the scalar values from the results
+     # For the remaining we plot the full history, but we need to extract the scalar values from the results
      scalar_results = [
          {k: v for k, v in ar.items() if isinstance(v, torch.Tensor) and v.numel() == 1} for ar in all_results
      ]
2 changes: 1 addition & 1 deletion docs/source/pages/quickstart.rst
@@ -101,7 +101,7 @@ The code below shows how to use the class-based interface:
      acc = metric.compute()
      print(f"Accuracy on all data: {acc}")

-     # Reseting internal state such that metric ready for new data
+     # Resetting internal state such that metric ready for new data
      metric.reset()

  .. testoutput::
2 changes: 1 addition & 1 deletion docs/source/references/utilities.rst
@@ -59,7 +59,7 @@ dim_zero_sum
  torchmetrics.utilities.distributed
  **********************************

- The `distributed` utilities are used to help with syncronization of metrics across multiple processes.
+ The `distributed` utilities are used to help with synchronization of metrics across multiple processes.

  gather_all_tensors
  ~~~~~~~~~~~~~~~~~~
22 changes: 19 additions & 3 deletions pyproject.toml
@@ -19,8 +19,7 @@ addopts = [
      "--color=yes",
      "--disable-pytest-warnings",
  ]
- # ToDo
- #filterwarnings = ["error::FutureWarning"]
+ #filterwarnings = ["error::FutureWarning"]  # ToDo
  xfail_strict = true
  junit_duration_report = "call"

@@ -40,10 +39,27 @@ exclude = "(.eggs|.git|.hg|.mypy_cache|.venv|_build|buck-out|build|dist)"

  [tool.docformatter]
  recursive = true
- wrap-summaries = 120
+ # some docstrings start with r"""
+ wrap-summaries = 119
  wrap-descriptions = 120
  blank = true

+ [tool.codespell]
+ #skip = '*.py'
+ quiet-level = 3
+ # ToDo: comma-separated list of words; waiting for:
+ #   https://github.com/codespell-project/codespell/issues/2839#issuecomment-1731601603
+ # ToDo: also add links, until they are ignored by their nature:
+ #   https://github.com/codespell-project/codespell/issues/2243#issuecomment-1732019960
+ ignore-words-list = """
+   rouge, \
+   mape, \
+   wil, \
+   fpr, \
+   raison, \
+   archiv
+ """
+

  [tool.ruff]
  line-length = 120
2 changes: 1 addition & 1 deletion requirements/image.txt
@@ -3,5 +3,5 @@

  scipy >1.0.0, <1.11.0
  torchvision >=0.8, <=0.15.2
- torch-fidelity <=0.4.0 # bumping to alow install version from master, now used in testing
+ torch-fidelity <=0.4.0 # bumping to allow install version from master, now used in testing
  lpips <=0.1.4
2 changes: 1 addition & 1 deletion src/torchmetrics/__about__.py
@@ -12,7 +12,7 @@
  Pytorch Lightning, but got split off so users could take advantage of the large collection of metrics
  implemented without having to install Pytorch Lightning (even though we would love for you to try it out).
  We currently have around 100+ metrics implemented and we continuously are adding more metrics, both within
- already covered domains (classification, regression ect.) but also new domains (object detection ect.).
+ already covered domains (classification, regression etc.) but also new domains (object detection etc.).
  We make sure that all our metrics are rigorously tested such that you can trust them.
  """