From dffd8c4ff52cdb12c48f2120b60711ded47e9ddc Mon Sep 17 00:00:00 2001
From: Shubham Panchal
Date: Tue, 1 Apr 2025 20:17:02 +0530
Subject: [PATCH] [docs] Update model-card for DINOv2

---
 docs/source/en/model_doc/dinov2.md | 137 +++++++++++++++++++++--------
 1 file changed, 100 insertions(+), 37 deletions(-)

diff --git a/docs/source/en/model_doc/dinov2.md b/docs/source/en/model_doc/dinov2.md
index acf7b2060038..5d495de4d4e5 100644
--- a/docs/source/en/model_doc/dinov2.md
+++ b/docs/source/en/model_doc/dinov2.md
@@ -10,71 +10,134 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o
 specific language governing permissions and limitations under the License. -->

-# DINOv2
-
-
-PyTorch
-Flax
-FlashAttention
-SDPA
+
+
+    PyTorch
+    Flax
+    FlashAttention
+    SDPA
+
-## Overview
-
-The DINOv2 model was proposed in [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by
-Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski.
-DINOv2 is an upgrade of [DINO](https://arxiv.org/abs/2104.14294), a self-supervised method applied on [Vision Transformers](vit). This method enables all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning.
+# DINOv2

-The abstract from the paper is the following:
+[DINOv2](https://huggingface.co/papers/2304.07193) is a vision foundation model that uses [ViT](./vit) as a feature extractor for multiple downstream tasks like image classification and depth estimation. It focuses on stabilizing and accelerating training through techniques like a faster, memory-efficient attention implementation, sequence packing, improved stochastic depth, Fully Sharded Data Parallel (FSDP), and model distillation.

-*The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels.*
+You can find all the original DINOv2 checkpoints under the [Dinov2](https://huggingface.co/collections/facebook/dinov2-6526c98554b3d2576e071ce3) collection.

-This model was contributed by [nielsr](https://huggingface.co/nielsr).
-The original code can be found [here](https://github.com/facebookresearch/dinov2).
+> [!TIP]
+> Click on the DINOv2 models in the right sidebar for more examples of how to apply DINOv2 to different vision tasks.

-## Usage tips
+The example below demonstrates how to classify an image with [`Pipeline`] or the [`AutoModel`] class.

-The model can be traced using `torch.jit.trace` which leverages JIT compilation to optimize the model making it faster to run. Note this still produces some mis-matched elements and the difference between the original model and the traced model is of the order of 1e-4.

+<hfoptions id="usage">
+<hfoption id="Pipeline">

-```python
+```py
 import torch
-from transformers import AutoImageProcessor, AutoModel
+from transformers import pipeline
+
+pipe = pipeline(
+    task="image-classification",
+    model="facebook/dinov2-small-imagenet1k-1-layer",
+    torch_dtype=torch.float16,
+    device=0
+)
+
+pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg")
+```
+
+</hfoption>
+<hfoption id="AutoModel">

+```py
+import torch
+import requests
+from transformers import AutoImageProcessor, AutoModelForImageClassification
 from PIL import Image
+
+url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+image = Image.open(requests.get(url, stream=True).raw)
+
+processor = AutoImageProcessor.from_pretrained("facebook/dinov2-small-imagenet1k-1-layer")
+model = AutoModelForImageClassification.from_pretrained(
+    "facebook/dinov2-small-imagenet1k-1-layer",
+    torch_dtype=torch.float16,
+    device_map="auto",
+    attn_implementation="sdpa"
+)
+
+inputs = processor(images=image, return_tensors="pt")
+logits = model(**inputs).logits
+predicted_class_idx = logits.argmax(-1).item()
+print("Predicted class:", model.config.id2label[predicted_class_idx])
+```
+
+</hfoption>
+</hfoptions>

+Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.

+The example below uses [torchao](../quantization/torchao) to quantize only the weights to int4.

+```py
+# pip install torchao
+import torch
 import requests
+from transformers import TorchAoConfig, AutoImageProcessor, AutoModelForImageClassification
+from torchao.quantization import Int4WeightOnlyConfig
+from PIL import Image

 url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
 image = Image.open(requests.get(url, stream=True).raw)

-processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base')
-model = AutoModel.from_pretrained('facebook/dinov2-base')
+processor = AutoImageProcessor.from_pretrained('facebook/dinov2-giant-imagenet1k-1-layer')
+
+quant_config = Int4WeightOnlyConfig(group_size=128)
+quantization_config = TorchAoConfig(quant_type=quant_config)
+
+model = AutoModelForImageClassification.from_pretrained(
+    'facebook/dinov2-giant-imagenet1k-1-layer',
+    torch_dtype=torch.bfloat16,
+    device_map="auto",
+    quantization_config=quantization_config
+)

 inputs = processor(images=image, return_tensors="pt")
 outputs = model(**inputs)
-last_hidden_states = outputs[0]
+logits = outputs.logits
+predicted_class_idx = logits.argmax(-1).item()
+print("Predicted class:", model.config.id2label[predicted_class_idx])
+```
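
+DINOv2 checkpoints without a classification head, such as `facebook/dinov2-base`, can also be used as plain feature extractors. The snippet below is a minimal sketch of pulling patch-level features (`last_hidden_state`) and the pooled CLS embedding (`pooler_output`) with [`AutoModel`], which can then feed a linear probe or a downstream head.
+
+```py
+import torch
+import requests
+from transformers import AutoImageProcessor, AutoModel
+from PIL import Image
+
+url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+image = Image.open(requests.get(url, stream=True).raw)
+
+processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
+model = AutoModel.from_pretrained("facebook/dinov2-base")
+
+inputs = processor(images=image, return_tensors="pt")
+with torch.no_grad():
+    outputs = model(**inputs)
+
+# patch-level features, shape (batch_size, 1 + num_patches, hidden_size)
+patch_features = outputs.last_hidden_state
+# pooled CLS token, usable as a global image embedding
+image_embedding = outputs.pooler_output
+print(patch_features.shape, image_embedding.shape)
+```
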
+ processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base') + model = AutoModel.from_pretrained('facebook/dinov2-base') -- Demo notebooks for DINOv2 can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/DINOv2). 🌎 + inputs = processor(images=image, return_tensors="pt") + outputs = model(**inputs) + last_hidden_states = outputs[0] - + # We have to force return_dict=False for tracing + model.config.return_dict = False -- [`Dinov2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb). -- See also: [Image classification task guide](../tasks/image_classification) + with torch.no_grad(): + traced_model = torch.jit.trace(model, [inputs.pixel_values]) + traced_outputs = traced_model(inputs.pixel_values) -If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. + print((last_hidden_states - traced_outputs[0]).abs().max()) + ``` ## Dinov2Config