
Commit e36c20b

Replaced links from examples/contrib to examples

1 parent df95465 · commit e36c20b

2 files changed (+5, -5 lines)


src/blog/2020-09-10-pytorch-ignite.md

+3 -3
@@ -990,7 +990,7 @@ with idist.Parallel(backend=backend, **dist_configs) as parallel:
 Please note that these `auto_*` methods are optional; a user is free to use some of them and manually set up certain parts of the code if required. The advantage of this approach is that there is no inevitable under-the-hood patching and overriding of objects.
 
 More details about distributed helpers provided by PyTorch-Ignite can be found in [the documentation](https://pytorch.org/ignite/distributed.html).
-A complete example of training on CIFAR10 can be found [here](https://github.com/pytorch/ignite/tree/master/examples/contrib/cifar10).
+A complete example of training on CIFAR10 can be found [here](https://github.com/pytorch/ignite/tree/master/examples/cifar10).
 
 A detailed tutorial with distributed helpers is published [here](https://pytorch-ignite.ai/posts/distributed-made-easy-with-ignite/).

@@ -1012,11 +1012,11 @@ In addition, PyTorch-Ignite also provides several tutorials:
 - [Basic example of LR finder on MNIST](https://github.com/pytorch/ignite/blob/master/examples/notebooks/FastaiLRFinder_MNIST.ipynb)
 - [Benchmark mixed precision training on Cifar100: torch.cuda.amp vs nvidia/apex](https://github.com/pytorch/ignite/blob/master/examples/notebooks/Cifar100_bench_amp.ipynb)
 - [MNIST training on a single TPU](https://github.com/pytorch/ignite/blob/master/examples/notebooks/MNIST_on_TPU.ipynb)
-- [CIFAR10 Training on multiple TPUs](https://github.com/pytorch/ignite/tree/master/examples/contrib/cifar10)
+- [CIFAR10 Training on multiple TPUs](https://github.com/pytorch/ignite/tree/master/examples/cifar10)
 
 and examples:
 
-- [cifar10](https://github.com/pytorch/ignite/tree/master/examples/contrib/cifar10) (single/multi-GPU, DDP, AMP, TPUs)
+- [cifar10](https://github.com/pytorch/ignite/tree/master/examples/cifar10) (single/multi-GPU, DDP, AMP, TPUs)
 - [basic RL](https://github.com/pytorch/ignite/tree/master/examples/reinforcement_learning)
 - [reproducible baselines for vision tasks:](https://github.com/pytorch/ignite/tree/master/examples/references)
   - classification on ImageNet (single/multi-GPU, DDP, AMP)

src/blog/2021-06-28-pytorch-ignite-distributed.md

+2 -2
@@ -145,7 +145,7 @@ The code snippets below highlight the API's specificities of each of the distrib
 
 PyTorch-Ignite's unified code snippet can be run with the standard PyTorch backends like `gloo` and `nccl` and also with Horovod and XLA for TPU devices. Note that the code is less verbose; however, the user still has full control of the training loop.
 
-The following examples are introductory. For a more robust, production-grade example that uses PyTorch-Ignite, refer [here](https://github.com/pytorch/ignite/tree/master/examples/contrib/cifar10).
+The following examples are introductory. For a more robust, production-grade example that uses PyTorch-Ignite, refer [here](https://github.com/pytorch/ignite/tree/master/examples/cifar10).
 
 The complete source code of these experiments can be found [here](https://github.com/pytorch-ignite/idist-snippets).

@@ -285,7 +285,7 @@ while maintaining control and simplicity.
   with distributed data parallel: native pytorch, pytorch-ignite,
   slurm.
 
-- [CIFAR10 example](https://github.com/pytorch/ignite/tree/master/examples/contrib/cifar10)
+- [CIFAR10 example](https://github.com/pytorch/ignite/tree/master/examples/cifar10)
   of distributed training on CIFAR10 with multiple configurations: 1 or
   multiple GPUs, multiple nodes and GPUs, TPUs.

0 commit comments
