Commit fbf5970

Update README and change timmdocs link in documentation
1 parent 01a0e25 commit fbf5970

File tree

5 files changed: +268 -133 lines

README.md

+7 -1

@@ -23,6 +23,12 @@ I'm fortunate to be able to dedicate significant time and money of my own suppor
## What's New

### April 22, 2022
* `timm` models are now officially supported in [fast.ai](https://www.fast.ai/)! Just in time for the new Practical Deep Learning course. The `timmdocs` documentation link has been updated to [timm.fast.ai](http://timm.fast.ai/). A minimal usage sketch follows this list.
* Two more model weights added in the TPU trained [series](https://github.com/rwightman/pytorch-image-models/releases/tag/v0.1-tpu-weights). Some In22k pretraining is still in progress.
  * `seresnext101d_32x8d` - 83.69 @ 224, 84.35 @ 288
  * `seresnextaa101d_32x8d` (anti-aliased w/ AvgPool2d) - 83.85 @ 224, 84.57 @ 288
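
As a minimal sketch of the new fast.ai integration (assuming fastai >= 2.6 with `timm` installed; the dataset, seed, and epoch count are just placeholders), `vision_learner` now accepts a `timm` architecture name as a plain string:

```python
from fastai.vision.all import *

def is_cat(fname):  # pets filenames: cat breeds are capitalized
    return fname[0].isupper()

path = untar_data(URLs.PETS) / 'images'
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

# The timm-specific part: pass the architecture name as a string, and
# vision_learner resolves it through timm, e.g. one of the new weights above.
learn = vision_learner(dls, 'seresnext101d_32x8d', metrics=error_rate)
learn.fine_tune(1)
```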

### March 23, 2022
* Add `ParallelBlock` and `LayerScale` option to base vit models to support model configs in [Three things everyone should know about ViT](https://arxiv.org/abs/2203.09795)
* `convnext_tiny_hnf` (head norm first) weights trained with (close to) A2 recipe, 82.2% top-1, could do better with more epochs.

@@ -462,7 +468,7 @@ My current [documentation](https://rwightman.github.io/pytorch-image-models/) fo

[Getting Started with PyTorch Image Models (timm): A Practitioner’s Guide](https://towardsdatascience.com/getting-started-with-pytorch-image-models-timm-a-practitioners-guide-4e77b4bf9055) by [Chris Hughes](https://github.com/Chris-hughes10) is an extensive blog post covering many aspects of `timm` in detail.

- [timmdocs](https://fastai.github.io/timmdocs/) is quickly becoming a much more comprehensive set of documentation for `timm`. A big thanks to [Aman Arora](https://github.com/amaarora) for his efforts creating timmdocs.
+ [timmdocs](http://timm.fast.ai/) is quickly becoming a much more comprehensive set of documentation for `timm`. A big thanks to [Aman Arora](https://github.com/amaarora) for his efforts creating timmdocs.

[paperswithcode](https://paperswithcode.com/lib/timm) is a good resource for browsing the models within `timm`.

docs/archived_changes.md

+129
@@ -1,5 +1,134 @@
# Archived Changes

### June 8, 2021
* Add first ResMLP weights, trained in PyTorch XLA on TPU-VM w/ my XLA branch. 24 block variant, 79.2 top-1.
* Add ResNet51-Q model w/ pretrained weights at 82.36 top-1.
  * NFNet inspired block layout with quad layer stem and no maxpool
  * Same param count (35.7M) and throughput as ResNetRS-50 but +1.5 top-1 @ 224x224 and +2.5 top-1 at 288x288

### May 25, 2021
* Add LeViT, Visformer, ConViT (PR by Aman Arora), Twins (PR by paper authors) transformer models
* Cleanup input_size/img_size override handling and testing for all vision transformer models
* Add `efficientnetv2_rw_m` model and weights (started training before official code). 84.8 top-1, 53M params.

### May 14, 2021
* Add EfficientNet-V2 official model defs w/ ported weights from official [Tensorflow/Keras](https://github.com/google/automl/tree/master/efficientnetv2) impl; a quick way to enumerate them follows this list.
  * 1k trained variants: `tf_efficientnetv2_s/m/l`
  * 21k trained variants: `tf_efficientnetv2_s/m/l_in21k`
  * 21k pretrained -> 1k fine-tuned: `tf_efficientnetv2_s/m/l_in21ft1k`
  * v2 models w/ v1 scaling: `tf_efficientnetv2_b0` through `b3`
  * Rename my prev V2 guess `efficientnet_v2s` -> `efficientnetv2_rw_s`
  * Some blank `efficientnetv2_*` models in-place for future native PyTorch training
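
As a quick sketch of discovering these variants (which names are available depends on the installed `timm` version):

```python
import timm

# Wildcard filter over registered model names; pretrained=True narrows
# the list to names that have downloadable weights.
print(timm.list_models('tf_efficientnetv2_*'))
print(timm.list_models('tf_efficientnetv2_*', pretrained=True))
```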

### May 5, 2021
* Add MLP-Mixer models and port pretrained weights from [Google JAX impl](https://github.com/google-research/vision_transformer/tree/linen)
* Add CaiT models and pretrained weights from [FB](https://github.com/facebookresearch/deit)
* Add ResNet-RS models and weights from [TF](https://github.com/tensorflow/tpu/tree/master/models/official/resnet/resnet_rs). Thanks [Aman Arora](https://github.com/amaarora)
* Add CoaT models and weights. Thanks [Mohammed Rizin](https://github.com/morizin)
* Add new ImageNet-21k weights & finetuned weights for TResNet, MobileNet-V3, ViT models. Thanks [mrT](https://github.com/mrT23)
* Add GhostNet models and weights. Thanks [Kai Han](https://github.com/iamhankai)
* Update ByoaNet attention modules
  * Improve SA module inits
  * Hack together experimental stand-alone Swin based attn module and `swinnet`
  * Consistent '26t' model defs for experiments.
* Add improved EfficientNet-V2S (prelim model def) weights. 83.8 top-1.
* WandB logging support

### April 13, 2021
* Add Swin Transformer models and weights from https://github.com/microsoft/Swin-Transformer

### April 12, 2021
* Add ECA-NFNet-L1 (slimmed down F1 w/ SiLU, 41M params) trained with this code. 84% top-1 @ 320x320. Trained at 256x256.
* Add EfficientNet-V2S model (unverified model definition) weights. 83.3 top-1 @ 288x288. Only trained single res 224. Working on progressive training.
* Add ByoaNet model definition (Bring-your-own-attention) w/ SelfAttention block and corresponding SA/SA-like modules and model defs
  * Lambda Networks - https://arxiv.org/abs/2102.08602
  * Bottleneck Transformers - https://arxiv.org/abs/2101.11605
  * Halo Nets - https://arxiv.org/abs/2103.12731
* AdaBelief optimizer contributed by Juntang Zhuang

### April 1, 2021
* Add snazzy `benchmark.py` script for bulk `timm` model benchmarking of train and/or inference
* Add Pooling-based Vision Transformer (PiT) models (from https://github.com/naver-ai/pit)
  * Merged distilled variant into main for torchscript compatibility
  * Some `timm` cleanup/style tweaks and weights have hub download support
* Cleanup Vision Transformer (ViT) models
  * Merge distilled (DeiT) model into main so that torchscript can work
  * Support updated weight init (defaults to old still) that closer matches original JAX impl (possibly better training from scratch)
  * Separate hybrid model defs into different file and add several new model defs to fiddle with, support patch_size != 1 for hybrids
  * Fix fine-tuning num_class changes (PiT and ViT) and pos_embed resizing (ViT) with distilled variants
  * nn.Sequential for block stack (does not break downstream compat)
* TnT (Transformer-in-Transformer) models contributed by author (from https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/TNT)
* Add RegNetY-160 weights from DeiT teacher model
* Add new NFNet-L0 w/ SE attn (rename `nfnet_l0b` -> `nfnet_l0`) weights 82.75 top-1 @ 288x288
* Some fixes/improvements for TFDS dataset wrapper

### March 7, 2021
* First 0.4.x PyPI release w/ NFNets (& related), ByoB (GPU-Efficient, RepVGG, etc).
* Change feature extraction for pre-activation nets (NFNets, ResNetV2) to return features before activation.

### Feb 18, 2021
* Add pretrained weights and model variants for NFNet-F* models from [DeepMind Haiku impl](https://github.com/deepmind/deepmind-research/tree/master/nfnets).
  * Models are prefixed with `dm_`. They require SAME padding conv, skipinit enabled, and activation gains applied in act fn.
  * These models are big, expect to run out of GPU memory. With the GELU activation + other options, they are roughly 1/2 the inference speed of my SiLU PyTorch optimized `s` variants.
  * Original model results are based on pre-processing that is not the same as all other models so you'll see different results in the results csv (once updated).
* Matching the original pre-processing as closely as possible I get these results:
  * `dm_nfnet_f6` - 86.352
  * `dm_nfnet_f5` - 86.100
  * `dm_nfnet_f4` - 85.834
  * `dm_nfnet_f3` - 85.676
  * `dm_nfnet_f2` - 85.178
  * `dm_nfnet_f1` - 84.696
  * `dm_nfnet_f0` - 83.464

### Feb 16, 2021
* Add Adaptive Gradient Clipping (AGC) as per https://arxiv.org/abs/2102.06171. Integrated w/ PyTorch gradient clipping via a mode arg that defaults to the previous 'norm' mode. For backward arg compat, the clip-grad arg must be specified to enable it when using train.py. A simplified sketch of the idea follows this list.
  * AGC w/ default clipping factor: `--clip-grad .01 --clip-mode agc`
  * PyTorch global norm of 1.0 (old behaviour, always norm): `--clip-grad 1.0`
  * PyTorch value clipping of 10: `--clip-grad 10. --clip-mode value`
* AGC performance is definitely sensitive to the clipping factor. More experimentation is needed to determine good values for smaller batch sizes and optimizers besides those in the paper. So far I've found .001-.005 is necessary for stable RMSProp training w/ NFNet/NF-ResNet.
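
For intuition, here is a minimal per-tensor sketch of AGC (a deliberate simplification: the paper and the integrated implementation clip unit-wise, e.g. per conv output channel, rather than per whole tensor):

```python
import torch

def agc_(parameters, clip_factor=0.01, eps=1e-3):
    # Clip each gradient in place so that ||g|| / max(||w||, eps) <= clip_factor.
    for p in parameters:
        if p.grad is None:
            continue
        w_norm = p.detach().norm().clamp(min=eps)  # guard tiny weights
        g_norm = p.grad.detach().norm()
        max_norm = w_norm * clip_factor
        if g_norm > max_norm:
            p.grad.detach().mul_(max_norm / (g_norm + 1e-6))

# Call between backward() and the optimizer step:
#   loss.backward(); agc_(model.parameters()); optimizer.step()
```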

### Feb 12, 2021
* Update Normalization-Free nets to include new NFNet-F (https://arxiv.org/abs/2102.06171) model defs

### Feb 10, 2021
* More model archs, incl a flexible ByobNet backbone ('Bring-your-own-blocks')
  * GPU-Efficient-Networks (https://github.com/idstcv/GPU-Efficient-Networks), impl in `byobnet.py`
  * RepVGG (https://github.com/DingXiaoH/RepVGG), impl in `byobnet.py`
  * classic VGG (from torchvision, impl in `vgg`)
* Refinements to normalizer layer arg handling and normalizer+act layer handling in some models
* Default AMP mode changed to native PyTorch AMP instead of APEX. Issues with APEX are not being fixed, and native AMP works with `--channels-last` and `--torchscript` model training while APEX does not. A generic native-AMP step is sketched after this list.
* Fix a few bugs introduced since last PyPI release
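
For reference, a minimal native PyTorch AMP training step looks like the following (a generic sketch, not the actual `train.py` internals; `model`, `loader`, `criterion`, and `optimizer` are placeholders):

```python
import torch

def train_one_epoch_amp(model, loader, criterion, optimizer, device='cuda'):
    scaler = torch.cuda.amp.GradScaler()  # scales loss so fp16 grads don't underflow
    model.train()
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():   # forward pass in mixed precision
            loss = criterion(model(images), targets)
        scaler.scale(loss).backward()     # backward on the scaled loss
        scaler.step(optimizer)            # unscales grads, skips step on inf/nan
        scaler.update()
```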

### Feb 8, 2021
* Add several ResNet weights with ECA attention. 26t & 50t trained @ 256, test @ 320. 269d trained @ 256, fine-tuned @ 320, test @ 352.
  * `ecaresnet26t` - 79.88 top-1 @ 320x320, 79.08 @ 256x256
  * `ecaresnet50t` - 82.35 top-1 @ 320x320, 81.52 @ 256x256
  * `ecaresnet269d` - 84.93 top-1 @ 352x352, 84.87 @ 320x320
* Remove separate tiered (`t`) vs tiered_narrow (`tn`) ResNet model defs, all `tn` changed to `t` and `t` models removed (`seresnext26t_32x4d` only model w/ weights that was removed).
* Support model default_cfgs with separate train vs test resolution `test_input_size` and remove extra `_320` suffix ResNet model defs that were just for test. A sketch of reading this config follows below.
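
As a small illustration of the `test_input_size` support (a sketch; assumes a `timm` version where `default_cfg` is exposed as a dict with these keys):

```python
import timm

model = timm.create_model('ecaresnet50t', pretrained=True)
cfg = model.default_cfg

# Train-time resolution vs the higher test-time resolution, when present.
print(cfg['input_size'])                              # e.g. (3, 256, 256)
print(cfg.get('test_input_size', cfg['input_size']))  # e.g. (3, 320, 320)
```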

### Jan 30, 2021
* Add initial "Normalization Free" NF-RegNet-B* and NF-ResNet model definitions based on [paper](https://arxiv.org/abs/2101.08692)

### Jan 25, 2021
* Add ResNetV2 Big Transfer (BiT) models w/ ImageNet-1k and 21k weights from https://github.com/google-research/big_transfer
* Add official R50+ViT-B/16 hybrid models + weights from https://github.com/google-research/vision_transformer
* ImageNet-21k ViT weights are added w/ model defs and representation layer (pre logits) support
  * NOTE: ImageNet-21k classifier heads were zero'd in original weights, they are only useful for transfer learning
* Add model defs and weights for DeiT Vision Transformer models from https://github.com/facebookresearch/deit
* Refactor dataset classes into ImageDataset/IterableImageDataset + dataset specific parser classes
* Add Tensorflow-Datasets (TFDS) wrapper to allow use of TFDS image classification sets with train script (see the Python-side sketch after this list)
  * Ex: `train.py /data/tfds --dataset tfds/oxford_iiit_pet --val-split test --model resnet50 -b 256 --amp --num-classes 37 --opt adamw --lr 3e-4 --weight-decay .001 --pretrained -j 2`
* Add improved .tar dataset parser that reads images from .tar, folder of .tar files, or .tar within .tar
  * Run validation on full ImageNet-21k directly from tar w/ BiT model: `validate.py /data/fall11_whole.tar --model resnetv2_50x1_bitm_in21k --amp`
* Models in this update should be stable w/ possible exception of ViT/BiT, possibility of some regressions with train/val scripts and dataset handling
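
On the Python side, the equivalent setup looks roughly like this (a hedged sketch using the dataset/loader factories as found in recent `timm` versions; requires `tensorflow-datasets`, and `/data/tfds` is a placeholder download root):

```python
from timm.data import create_dataset, create_loader

# The 'tfds/' prefix routes to the TFDS wrapper; split names follow the TFDS dataset.
dataset = create_dataset('tfds/oxford_iiit_pet', root='/data/tfds', split='test')
loader = create_loader(
    dataset, input_size=(3, 224, 224), batch_size=256,
    is_training=False, use_prefetcher=False)  # the prefetcher assumes CUDA

for images, targets in loader:
    break  # batches of transformed images + labels
```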

### Jan 3, 2021
* Add SE-ResNet-152D weights
  * 256x256 val, 0.94 crop top-1 - 83.75
  * 320x320 val, 1.0 crop - 84.36
* Update results files

### Dec 18, 2020
* Add ResNet-101D, ResNet-152D, and ResNet-200D weights trained @ 256x256
  * 256x256 val, 0.94 crop (top-1) - 101D (82.33), 152D (83.08), 200D (83.25)
