
Commit dda8fd8

Merge branch 'gcucurull-master'
2 parents 5c7d298 + 74fcd4c commit dda8fd8

File tree

135 files changed: +30599 -0 lines changed

+62
@@ -0,0 +1,62 @@

## How do I use this model on an image?

To load a pretrained model:

```python
import timm
model = timm.create_model('{{ model_name }}', pretrained=True)
model.eval()
```
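
As a quick optional check (a sketch, not part of the original snippet), you can inspect the preprocessing defaults that timm attaches to the model and its parameter count:

```python
print(model.default_cfg)  # preprocessing defaults: input_size, mean, std, interpolation, ...
print(sum(p.numel() for p in model.parameters()))  # total parameter count
```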

To load and preprocess the image:

```python
import urllib.request
from PIL import Image
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform

config = resolve_data_config({}, model=model)
transform = create_transform(**config)

url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
urllib.request.urlretrieve(url, filename)
img = Image.open(filename).convert('RGB')
tensor = transform(img).unsqueeze(0)  # transform and add batch dimension
```
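
If you are unsure what preprocessing the model expects, a small optional sketch is to print the resolved config before building the transform:

```python
print(config)
# a dict with keys such as 'input_size', 'interpolation', 'mean', 'std', 'crop_pct'
```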

To get the model predictions:

```python
import torch
with torch.no_grad():
    out = model(tensor)
probabilities = torch.nn.functional.softmax(out[0], dim=0)
print(probabilities.shape)
# prints: torch.Size([1000])
```
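
If you only need the single most likely class, a minimal sketch using the `probabilities` tensor from above:

```python
top1_prob, top1_catid = torch.max(probabilities, dim=0)
print(top1_catid.item(), top1_prob.item())  # class index and its probability
```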

To get the top-5 predicted class names:

```python
# Get ImageNet class mappings
url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
urllib.request.urlretrieve(url, filename)
with open("imagenet_classes.txt", "r") as f:
    categories = [s.strip() for s in f.readlines()]

# Print top categories per image
top5_prob, top5_catid = torch.topk(probabilities, 5)
for i in range(top5_prob.size(0)):
    print(categories[top5_catid[i]], top5_prob[i].item())
# prints class names and probabilities like:
# [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```

Replace the model name with the variant you want to use, e.g. `{{ model_name }}`. You can find the IDs in the model summaries at the top of this page.
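
If you are unsure which IDs are available, a quick sketch for listing them from timm's model registry:

```python
import timm

print(timm.list_models(pretrained=True)[:5])  # first few models that ship with pretrained weights
print(timm.list_models('*inception*'))        # wildcard filtering by name
```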

To extract image features with this model, follow the [timm feature extraction examples](https://rwightman.github.io/pytorch-image-models/feature_extraction/); just change the name of the model you want to use.
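
As a minimal sketch of that interface, `features_only=True` returns intermediate feature maps instead of classification logits (the input size below is illustrative; match it to your model's expected resolution):

```python
import torch
import timm

feature_model = timm.create_model('{{ model_name }}', pretrained=True, features_only=True)
with torch.no_grad():
    features = feature_model(torch.randn(1, 3, 224, 224))  # dummy batch
for f in features:
    print(f.shape)  # one tensor per feature stage
```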

## How do I finetune this model?

You can finetune any of the pre-trained models just by changing the classifier (the last layer).

```python
model = timm.create_model('{{ model_name }}', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```

To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
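
A minimal sketch of such a training loop, using dummy data in place of a real dataset (the batch size, image size, optimizer, and learning rate are illustrative assumptions, not part of timm):

```python
import timm
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy data standing in for a real finetuning dataset; the image size (224)
# and hyperparameters are illustrative and should match your model/data.
NUM_FINETUNE_CLASSES = 2
dummy_images = torch.randn(8, 3, 224, 224)
dummy_labels = torch.randint(0, NUM_FINETUNE_CLASSES, (8,))
train_loader = DataLoader(TensorDataset(dummy_images, dummy_labels), batch_size=4)

model = timm.create_model('{{ model_name }}', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```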
+64
@@ -0,0 +1,64 @@

"""
Run this script to generate the model-index files in `models` from the templates in `.templates/models`.
"""

import argparse
from pathlib import Path

from jinja2 import Environment, FileSystemLoader

import modelindex


def generate_readmes(templates_path: Path, dest_path: Path):
    """Add the code snippet template to the readmes"""
    readme_templates_path = templates_path / "models"
    code_template_path = templates_path / "code_snippets.md"

    env = Environment(
        loader=FileSystemLoader([readme_templates_path, readme_templates_path.parent]),
    )

    for readme in readme_templates_path.iterdir():
        if readme.suffix == ".md":
            template = env.get_template(readme.name)

            # get the first model_name for this model family
            mi = modelindex.load(str(readme))
            model_name = mi.models[0].name

            full_content = template.render(model_name=model_name)

            # generate full_readme
            with open(dest_path / readme.name, "w") as f:
                f.write(full_content)


def main():
    parser = argparse.ArgumentParser(description="Model index generation config")
    parser.add_argument(
        "-t",
        "--templates",
        default=Path(__file__).parent / ".templates",
        type=str,
        help="Location of the markdown templates",
    )
    parser.add_argument(
        "-d",
        "--dest",
        default=Path(__file__).parent / "models",
        type=str,
        help="Destination folder that contains the generated model-index files.",
    )
    args = parser.parse_args()
    templates_path = Path(args.templates)
    dest_readmes_path = Path(args.dest)

    generate_readmes(
        templates_path,
        dest_readmes_path,
    )


if __name__ == "__main__":
    main()
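
For reference, a small self-contained sketch of how the Jinja2 environment above resolves `{% include 'code_snippets.md' %}` and the `{{ model_name }}` placeholder (the in-memory templates here are made up; the real ones live under `.templates/`):

```python
from jinja2 import DictLoader, Environment

# Hypothetical in-memory templates standing in for `.templates/models/*.md`
# and `.templates/code_snippets.md`.
env = Environment(loader=DictLoader({
    "code_snippets.md": "Use `{{ model_name }}` with timm.",
    "example-model.md": "# Example Model\n\n{% include 'code_snippets.md' %}\n",
}))

print(env.get_template("example-model.md").render(model_name="adv_inception_v3"))
# # Example Model
#
# Use `adv_inception_v3` with timm.
```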
@@ -0,0 +1,98 @@

# Adversarial Inception v3

**Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements, including the use of [Label Smoothing](https://paperswithcode.com/method/label-smoothing), factorized 7 x 7 convolutions, and an [auxiliary classifier](https://paperswithcode.com/method/auxiliary-classifier) to propagate label information lower down the network (along with the use of batch normalization for layers in the sidehead). The key building block is an [Inception Module](https://paperswithcode.com/method/inception-v3-module).

This particular model was trained for the study of adversarial examples (adversarial training).

The weights from this model were ported from [Tensorflow/Models](https://github.com/tensorflow/models).

{% include 'code_snippets.md' %}

## How do I train this model?

You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
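
If you would rather set things up yourself than use those scripts, a minimal sketch of creating this architecture with randomly initialized weights, ready to train from scratch:

```python
import timm

# random-initialized weights, ready to train from scratch
model = timm.create_model('adv_inception_v3', pretrained=False, num_classes=1000)
```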

## Citation

```BibTeX
@article{DBLP:journals/corr/abs-1804-00097,
  author    = {Alexey Kurakin and
               Ian J. Goodfellow and
               Samy Bengio and
               Yinpeng Dong and
               Fangzhou Liao and
               Ming Liang and
               Tianyu Pang and
               Jun Zhu and
               Xiaolin Hu and
               Cihang Xie and
               Jianyu Wang and
               Zhishuai Zhang and
               Zhou Ren and
               Alan L. Yuille and
               Sangxia Huang and
               Yao Zhao and
               Yuzhe Zhao and
               Zhonglin Han and
               Junjiajia Long and
               Yerkebulan Berdibekov and
               Takuya Akiba and
               Seiya Tokui and
               Motoki Abe},
  title     = {Adversarial Attacks and Defences Competition},
  journal   = {CoRR},
  volume    = {abs/1804.00097},
  year      = {2018},
  url       = {http://arxiv.org/abs/1804.00097},
  archivePrefix = {arXiv},
  eprint    = {1804.00097},
  timestamp = {Thu, 31 Oct 2019 16:31:22 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1804-00097.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

<!--
Type: model-index
Collections:
- Name: Adversarial Inception v3
  Paper:
    Title: Adversarial Attacks and Defences Competition
    URL: https://paperswithcode.com/paper/adversarial-attacks-and-defences-competition
Models:
- Name: adv_inception_v3
  In Collection: Adversarial Inception v3
  Metadata:
    FLOPs: 7352418880
    Parameters: 23830000
    File Size: 95549439
    Architecture:
    - 1x1 Convolution
    - Auxiliary Classifier
    - Average Pooling
    - Average Pooling
    - Batch Normalization
    - Convolution
    - Dense Connections
    - Dropout
    - Inception-v3 Module
    - Max Pooling
    - ReLU
    - Softmax
    Tasks:
    - Image Classification
    Training Data:
    - ImageNet
    ID: adv_inception_v3
    Crop Pct: '0.875'
    Image Size: '299'
    Interpolation: bicubic
    Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/inception_v3.py#L456
    Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/adv_inception_v3-9e27bd63.pth
    Results:
    - Task: Image Classification
      Dataset: ImageNet
      Metrics:
        Top 1 Accuracy: 77.58%
        Top 5 Accuracy: 93.74%
-->
