@@ -0,0 +1,4 @@

# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: c6e25873cc397085b21e3815fa910d32
tags: 645f666f9bcd5a90fca523b33c5a78b7

Large diffs are not rendered by default.

@@ -0,0 +1,32 @@

ignite.contrib.engines
======================

Contribution module of engines and helper tools:

ignite.contrib.engines.tbptt

.. currentmodule:: ignite.contrib.engines.tbptt

.. autosummary::
    :nosignatures:
    :autolist:

ignite.contrib.engines.common

.. currentmodule:: ignite.contrib.engines.common

.. autosummary::
    :nosignatures:
    :autolist:

Truncated Backpropagation Through Time
---------------------------------------

.. automodule:: ignite.contrib.engines.tbptt
    :members:

Helper methods to set up trainer/evaluator
-------------------------------------------

.. automodule:: ignite.contrib.engines.common
    :members:
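
As a rough orientation, here is a minimal sketch of building a trainer with the TBPTT helper. It assumes the module exposes ``create_supervised_tbptt_trainer`` with the arguments shown (check the API reference generated above); the model, sizes and ``tbtt_step`` value are purely illustrative.

.. code-block:: python

    import torch
    from torch import nn, optim

    from ignite.contrib.engines.tbptt import create_supervised_tbptt_trainer

    class TinyRNN(nn.Module):
        # illustrative model: returns a prediction tensor for each time step
        def __init__(self):
            super().__init__()
            self.rnn = nn.RNN(input_size=4, hidden_size=8)
            self.fc = nn.Linear(8, 1)

        def forward(self, x):
            out, _ = self.rnn(x)
            return self.fc(out)

    model = TinyRNN()
    optimizer = optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    # split each sequence into chunks of `tbtt_step` time steps and
    # backpropagate within each chunk only (truncated BPTT)
    trainer = create_supervised_tbptt_trainer(model, optimizer, loss_fn, tbtt_step=2)
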
@@ -0,0 +1,64 @@

ignite.contrib.handlers
=======================

Contribution module of handlers

Parameter scheduler [deprecated]
--------------------------------

.. deprecated:: 0.4.4
    Use :class:`~ignite.handlers.param_scheduler.ParamScheduler` instead, will be removed in version 0.6.0.

Was moved to :ref:`param-scheduler-label`.

LR finder [deprecated]
----------------------

.. deprecated:: 0.4.4
    Use :class:`~ignite.handlers.lr_finder.FastaiLRFinder` instead, will be removed in version 0.6.0.

Time profilers [deprecated]
---------------------------

.. deprecated:: 0.4.6
    Use :class:`~ignite.handlers.time_profilers.BasicTimeProfiler` instead, will be removed in version 0.6.0.
    Use :class:`~ignite.handlers.time_profilers.HandlersTimeProfiler` instead, will be removed in version 0.6.0.

Loggers
-------

.. currentmodule:: ignite.contrib.handlers

.. autosummary::
    :nosignatures:
    :toctree: ../generated
    :recursive:

    base_logger
    clearml_logger
    mlflow_logger
    neptune_logger
    polyaxon_logger
    tensorboard_logger
    tqdm_logger
    visdom_logger
    wandb_logger

.. seealso::

    Below is a comprehensive list of examples for the various loggers.

    * See the `tensorboardX mnist example <https://github.com/pytorch/ignite/blob/master/examples/contrib/mnist/mnist_with_tensorboard_logger.py>`_
      and the `CycleGAN and EfficientNet notebooks <https://github.com/pytorch/ignite/tree/master/examples/notebooks>`_ for detailed usage.

    * See the `visdom mnist example <https://github.com/pytorch/ignite/blob/master/examples/contrib/mnist/mnist_with_visdom_logger.py>`_ for detailed usage.

    * See the `neptune mnist example <https://github.com/pytorch/ignite/blob/master/examples/contrib/mnist/mnist_with_neptune_logger.py>`_ for detailed usage.

    * See the `tqdm mnist example <https://github.com/pytorch/ignite/blob/master/examples/contrib/mnist/mnist_with_tqdm_logger.py>`_ for detailed usage.

    * See the `wandb mnist example <https://github.com/pytorch/ignite/blob/master/examples/contrib/mnist/mnist_with_wandb_logger.py>`_ for detailed usage.

    * See the `clearml mnist example <https://github.com/pytorch/ignite/blob/master/examples/contrib/mnist/mnist_with_clearml_logger.py>`_ for detailed usage.
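
For a quick orientation, here is a minimal sketch of attaching one of the loggers listed above to a trainer. ``TensorboardLogger`` and its ``attach_output_handler`` method are used for the example; the log directory, event frequency and dummy training step are purely illustrative.

.. code-block:: python

    from ignite.engine import Engine, Events
    from ignite.contrib.handlers import TensorboardLogger

    def train_step(engine, batch):
        # dummy training step that just returns a "loss" value
        return 0.0

    trainer = Engine(train_step)

    tb_logger = TensorboardLogger(log_dir="tb-logs")  # example directory
    tb_logger.attach_output_handler(
        trainer,
        event_name=Events.ITERATION_COMPLETED(every=100),
        tag="training",
        output_transform=lambda loss: {"loss": loss},
    )

    # remember to close the logger once training is done
    # tb_logger.close()
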
@@ -0,0 +1,56 @@

ignite.contrib.metrics
======================

Contrib module metrics
----------------------

.. currentmodule:: ignite.contrib.metrics

.. autosummary::
    :nosignatures:
    :toctree: ../generated

    AveragePrecision
    CohenKappa
    GpuInfo
    PrecisionRecallCurve
    ROC_AUC
    RocCurve

Regression metrics
------------------

.. currentmodule:: ignite.contrib.metrics.regression

.. automodule:: ignite.contrib.metrics.regression

Module :mod:`ignite.contrib.metrics.regression` provides implementations of
metrics useful for regression tasks. Definitions of metrics are based on `Botchkarev 2018`_, page 30 "Appendix 2. Metrics mathematical definitions".

.. _`Botchkarev 2018`:
    https://arxiv.org/ftp/arxiv/papers/1809/1809.03006.pdf

Complete list of metrics:

.. currentmodule:: ignite.contrib.metrics.regression

.. autosummary::
    :nosignatures:
    :toctree: ../generated

    CanberraMetric
    FractionalAbsoluteError
    FractionalBias
    GeometricMeanAbsoluteError
    GeometricMeanRelativeAbsoluteError
    ManhattanDistance
    MaximumAbsoluteError
    MeanAbsoluteRelativeError
    MeanError
    MeanNormalizedBias
    MedianAbsoluteError
    MedianAbsolutePercentageError
    MedianRelativeAbsoluteError
    R2Score
    WaveHedgesDistance
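
As a small, hedged sketch of how one of the metrics above can be attached to an evaluation engine (the toy engine, tensors and metric name below are only illustrative and are not part of this page):

.. code-block:: python

    import torch

    from ignite.engine import Engine
    from ignite.contrib.metrics.regression import R2Score

    def eval_step(engine, batch):
        # in this toy setup the batch already contains (y_pred, y)
        return batch

    evaluator = Engine(eval_step)
    R2Score().attach(evaluator, "r2")

    y_pred = torch.tensor([[2.0], [1.0], [0.0]])
    y_true = torch.tensor([[1.5], [1.0], [0.5]])
    state = evaluator.run([[y_pred, y_true]])
    print(state.metrics["r2"])
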
@@ -0,0 +1,49 @@

:orphan:

.. toggle::

    .. testcode:: default, 1, 2, 3, 4, 5

        from collections import OrderedDict

        import torch
        from torch import nn, optim

        from ignite.engine import *
        from ignite.handlers import *
        from ignite.metrics import *
        from ignite.utils import *
        from ignite.contrib.metrics.regression import *
        from ignite.contrib.metrics import *

        # create default evaluator for doctests

        def eval_step(engine, batch):
            return batch

        default_evaluator = Engine(eval_step)

        # create default optimizer for doctests

        param_tensor = torch.zeros([1], requires_grad=True)
        default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

        # create default trainer for doctests
        # as handlers could be attached to the trainer,
        # each test must define its own trainer using `.. testsetup::`

        def get_default_trainer():

            def train_step(engine, batch):
                return batch

            return Engine(train_step)

        # create default model for doctests

        default_model = nn.Sequential(OrderedDict([
            ('base', nn.Linear(4, 2)),
            ('fc', nn.Linear(2, 1))
        ]))

        manual_seed(666)
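
As a hedged illustration of how a doctest then uses these defaults (the metric and tensors below are only an example and are not part of the setup file itself):

.. code-block:: python

    from ignite.metrics import MeanAbsoluteError

    metric = MeanAbsoluteError()
    metric.attach(default_evaluator, "mae")

    y_pred = torch.tensor([[2.0], [1.0]])
    y_true = torch.tensor([[1.0], [1.0]])
    state = default_evaluator.run([[y_pred, y_true]])
    print(state.metrics["mae"])  # expected to print 0.5
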
@@ -0,0 +1,117 @@

ignite.distributed
==================

Helper module to use distributed settings for multiple backends:

- backends from native torch distributed configuration: "nccl", "gloo", "mpi"

- XLA on TPUs via `pytorch/xla <https://github.com/pytorch/xla>`_

- using `Horovod framework <https://horovod.readthedocs.io/en/stable/>`_ as a backend


Distributed launcher and `auto` helpers
---------------------------------------

We provide a context manager to simplify the setup code of the distributed configuration for all supported backends listed above.
In addition, methods like :meth:`~ignite.distributed.auto.auto_model`, :meth:`~ignite.distributed.auto.auto_optim` and
:meth:`~ignite.distributed.auto.auto_dataloader` help to transparently adapt the provided model, optimizer and data
loaders to the existing configuration:

.. code-block:: python

    # main.py

    import ignite.distributed as idist

    def training(local_rank, config, **kwargs):
        print(idist.get_rank(), ": run with config:", config, "- backend=", idist.backend())

        train_loader = idist.auto_dataloader(dataset, batch_size=32, num_workers=12, shuffle=True, **kwargs)
        # batch size, num_workers and sampler are automatically adapted to existing configuration
        # ...

        model = resnet50()
        model = idist.auto_model(model)
        # model is DDP or DP or just itself according to existing configuration
        # ...

        optimizer = optim.SGD(model.parameters(), lr=0.01)
        optimizer = idist.auto_optim(optimizer)
        # optimizer is returned as-is, except in the XLA configuration, where its `step()` method is overridden.
        # Users can safely call `optimizer.step()` (`xm.optimizer_step(optimizer)` is performed behind the scenes)

    backend = "nccl"  # torch native distributed configuration on multiple GPUs
    # backend = "xla-tpu"  # XLA TPUs distributed configuration
    # backend = None  # no distributed configuration
    #
    # dist_configs = {'nproc_per_node': 4}  # Use specified distributed configuration if launched as `python main.py`
    # dist_configs["start_method"] = "fork"  # Add start_method as "fork" if using Jupyter Notebook

    with idist.Parallel(backend=backend, **dist_configs) as parallel:
        parallel.run(training, config, a=1, b=2)

The above code may be executed with the `torch.distributed.launch`_ tool or with plain python by specifying the distributed
configuration in the code. For more details, please see :class:`~ignite.distributed.launcher.Parallel`,
:meth:`~ignite.distributed.auto.auto_model`, :meth:`~ignite.distributed.auto.auto_optim` and
:meth:`~ignite.distributed.auto.auto_dataloader`.

A complete example of CIFAR10 training can be found
`here <https://github.com/pytorch/ignite/tree/master/examples/contrib/cifar10>`_.


.. _torch.distributed.launch: https://pytorch.org/docs/stable/distributed.html#launch-utility

ignite.distributed.auto
-----------------------

.. currentmodule:: ignite.distributed.auto

.. autosummary::
    :nosignatures:
    :toctree: generated

    DistributedProxySampler
    auto_dataloader
    auto_model
    auto_optim

.. note::
    In distributed configuration, the methods :meth:`~ignite.distributed.auto.auto_model`, :meth:`~ignite.distributed.auto.auto_optim`
    and :meth:`~ignite.distributed.auto.auto_dataloader` take effect only when a distributed process group is initialized.
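
As a minimal, hedged sketch of that fallback behaviour (the toy dataset and model are only for illustration): without an initialized process group, the ``auto_*`` helpers are expected to return ordinary, non-wrapped objects.

.. code-block:: python

    import torch
    from torch.utils.data import TensorDataset

    import ignite.distributed as idist

    dataset = TensorDataset(torch.rand(8, 2), torch.rand(8, 1))  # toy dataset

    # with no distributed group initialized, these behave like a regular
    # DataLoader, model and optimizer
    loader = idist.auto_dataloader(dataset, batch_size=4, shuffle=True)
    model = idist.auto_model(torch.nn.Linear(2, 1))
    optimizer = idist.auto_optim(torch.optim.SGD(model.parameters(), lr=0.1))
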
ignite.distributed.launcher
---------------------------

.. currentmodule:: ignite.distributed.launcher

.. autosummary::
    :nosignatures:
    :toctree: generated

    Parallel

ignite.distributed.utils
------------------------

This module wraps common methods to fetch information about distributed configuration, initialize/finalize process
group or spawn multiple processes.

.. currentmodule:: ignite.distributed.utils

.. autosummary::
    :nosignatures:
    :autolist:

.. automodule:: ignite.distributed.utils
    :members:

.. attribute:: has_native_dist_support

    True if `torch.distributed` is available

.. attribute:: has_xla_support

    True if `torch_xla` package is found
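
As a short, hedged sketch of the information helpers in this module (the values in the comments assume a plain, non-distributed run):

.. code-block:: python

    import ignite.distributed as idist

    # safe to call even without an initialized distributed group
    print("backend:", idist.backend())            # None when not running distributed
    print("world size:", idist.get_world_size())  # 1 in a non-distributed setting
    print("rank:", idist.get_rank())              # 0 in a non-distributed setting
    print("device:", idist.device())              # e.g. cpu, or a cuda device if available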