src/blog/2020-09-10-pytorch-ignite.md (+3 -3)
@@ -990,7 +990,7 @@ with idist.Parallel(backend=backend, **dist_configs) as parallel:
 Please note that these `auto_*` methods are optional; a user is free to use some of them and manually set up certain parts of the code if required. The advantage of this approach is that there is no inevitable under-the-hood patching and overriding of objects.

 More details about distributed helpers provided by PyTorch-Ignite can be found in [the documentation](https://pytorch.org/ignite/distributed.html).
-A complete example of training on CIFAR10 can be found [here](https://github.com/pytorch/ignite/tree/master/examples/contrib/cifar10).
+A complete example of training on CIFAR10 can be found [here](https://github.com/pytorch/ignite/tree/master/examples/cifar10).

 A detailed tutorial with distributed helpers is published [here](https://pytorch-ignite.ai/posts/distributed-made-easy-with-ignite/).
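For context on the `auto_*` helpers mentioned in the hunk above: each one wraps a single object (model, optimizer, data loader) for whatever distributed configuration is currently active, and they can be used independently of one another. The snippet below is a minimal illustrative sketch, not part of this diff; the toy model, dataset, and config values are placeholders.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset
import ignite.distributed as idist


def training(local_rank, config):
    device = idist.device()

    # Each auto_* helper is optional: it adapts the object to the active
    # distributed configuration (DDP, Horovod, XLA) and leaves it
    # essentially untouched in a non-distributed run.
    model = idist.auto_model(nn.Linear(10, 2))
    optimizer = idist.auto_optim(torch.optim.SGD(model.parameters(), lr=config["lr"]))

    dataset = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
    loader = idist.auto_dataloader(dataset, batch_size=32)

    criterion = nn.CrossEntropyLoss().to(device)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()


# backend=None runs the same function in a single, non-distributed process.
with idist.Parallel(backend=None) as parallel:
    parallel.run(training, {"lr": 0.01})
```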
@@ -1012,11 +1012,11 @@ In addition, PyTorch-Ignite also provides several tutorials:
 - [Basic example of LR finder on MNIST](https://github.com/pytorch/ignite/blob/master/examples/notebooks/FastaiLRFinder_MNIST.ipynb)
 - [Benchmark mixed precision training on Cifar100: torch.cuda.amp vs nvidia/apex](https://github.com/pytorch/ignite/blob/master/examples/notebooks/Cifar100_bench_amp.ipynb)
 - [MNIST training on a single TPU](https://github.com/pytorch/ignite/blob/master/examples/notebooks/MNIST_on_TPU.ipynb)
-- [CIFAR10 Training on multiple TPUs](https://github.com/pytorch/ignite/tree/master/examples/contrib/cifar10)
+- [CIFAR10 Training on multiple TPUs](https://github.com/pytorch/ignite/tree/master/examples/cifar10)
src/blog/2021-06-28-pytorch-ignite-distributed.md (+2 -2)
@@ -145,7 +145,7 @@ The code snippets below highlight the API's specificities of each of the distrib
 PyTorch-Ignite's unified code snippet can be run with the standard PyTorch backends like `gloo` and `nccl`, and also with Horovod and XLA for TPU devices. Note that the code is less verbose; however, the user still has full control of the training loop.

-The following examples are introductory. For a more robust, production-grade example that uses PyTorch-Ignite, refer [here](https://github.com/pytorch/ignite/tree/master/examples/contrib/cifar10).
+The following examples are introductory. For a more robust, production-grade example that uses PyTorch-Ignite, refer [here](https://github.com/pytorch/ignite/tree/master/examples/cifar10).

 The complete source code of these experiments can be found [here](https://github.com/pytorch-ignite/idist-snippets).
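To make the "unified code snippet" claim in the hunk above concrete, here is a rough sketch of how one training function could be launched against different backends via `idist.Parallel`. The backend strings are real `ignite.distributed` options, but the process counts and the trivial training function are illustrative only and are not taken from the post.

```python
import ignite.distributed as idist


def training(local_rank, config):
    # Identical training code runs under every backend; only the launcher
    # configuration below changes.
    print(f"rank {idist.get_rank()}/{idist.get_world_size()} on {idist.device()}")


config = {}

# Native PyTorch distributed: "nccl" for GPUs, "gloo" for CPUs.
with idist.Parallel(backend="nccl", nproc_per_node=2) as parallel:
    parallel.run(training, config)

# Horovod backend.
with idist.Parallel(backend="horovod", nproc_per_node=2) as parallel:
    parallel.run(training, config)

# XLA backend for TPU devices.
with idist.Parallel(backend="xla-tpu", nproc_per_node=8) as parallel:
    parallel.run(training, config)
```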
@@ -285,7 +285,7 @@ while maintaining control and simplicity.
 with distributed data parallel: native pytorch, pytorch-ignite,