[Community contributions] Model cards #36979
Comments
Hi. I would like to work on the model card for gemma 2.
Hi. I would like to work on the model card for mistral.
Hi @stevhliu, this is my first contribution so I have a really basic question. Should I clone every repo under mistralai? I just cloned the repo mistralai/Ministral-8B-Instruct-2410, but there are many other repos under mistralai. It's ok if I need to, but I just want to be sure.
Hey, I would like to work on the model card for llama3.
Hey @NahieliV, welcome! You only need to modify the mistral.md file. This is just for the model cards in the Transformers docs rather than the Hub.
Hey @stevhliu, I would like to work on the model card for qwen2_5_vl.
@stevhliu Is it not possible to automate this with an LLM?
Hi @stevhliu, I would be super grateful if you could let me work on the model card for code_llama.
Hey @stevhliu, I would like to work on the …
Hey @stevhliu, I would like to contribute to …
Hey @stevhliu, I would like to contribute to the vitpose model card.
Hey @stevhliu, I would like to work on the …
Hey @stevhliu, I would like to contribute to …
To the folks who have been raising PRs so far, a quick question: did you get to install … EDIT: Got it up and running; I had to install all the libraries to make it run successfully. I initially felt doubtful about the need to install libraries such as flax, but it seems they have to be installed too.
Hey @stevhliu, I would like to work on the phi3 model card.
As you're just going to edit the docs, you don't need a complete development setup. Fork the …
Hi @stevhliu, may I work on GPT-2?
Hi @stevhliu, I can work on MobileBERT.
Hi @stevhliu, can I work on blip_2?
Hey, I would like to work on the model card for mistral3.
Hey @wadkisson, GPT-2 is already taken. Feel free to pick another model you're interested in! 🤗
* Update code_llama.md: aims to handle #36979 (comment), sub part of #36979
* Update docs/source/en/model_doc/code_llama.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/model_doc/code_llama.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/model_doc/code_llama.md (Co-authored-by: Steven Liu <[email protected]>)
* make changes as per code review
* chore: make the function smaller for attention mask visualizer
* chore[docs]: update code_llama.md with some more suggested changes
* Update docs/source/en/model_doc/code_llama.md (Co-authored-by: Steven Liu <[email protected]>)
* chore[docs]: update code_llama.md with indentation changes

Co-authored-by: Steven Liu <[email protected]>
I will now take fastspeech2_conformer if that is alright. @stevhliu
Hi @stevhliu, I'd love to contribute to the Falcon model card. Please let me know if I can take it up!
Hi @stevhliu, I would like to work on the model card for …
Hey @Sudhesh-Rajan27, Falcon has already been completed. Feel free to choose another model you're interested in! 🤗
Ohh!! Then can I do efficientnet?
Hey @stevhliu, …
Hey @stevhliu! Really excited about this initiative to improve the model cards. I'd be happy to help out by tackling the …
Hey @stevhliu! Quick question: should the model cards for …
Thanks @Vishesh-Mistry, you can combine mbart and mbart50 in one card like the original! I'll be off this week, but I'll review your PRs once I return 🤗
Hey friends! 👋
We are currently in the process of improving the Transformers model cards by making them more directly useful for everyone. The main goals are to:

- Show how to use a model with `Pipeline`, `AutoModel`, and `transformers-cli`, with available optimizations included.
- For large models, provide a quantization example so it's easier for everyone to run the model.

Compare the before and after model cards below:
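To give a concrete feel for the kind of usage snippets the updated cards include, here is a minimal sketch of the `Pipeline` and `AutoModel` patterns. The choice of `gpt2` and the prompt text are just illustrative assumptions, not part of the card template itself:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

prompt = "The secret to baking a good cake is"

# Highest-level API: a Pipeline handles tokenization and decoding for you
pipe = pipeline("text-generation", model="gpt2")
print(pipe(prompt, max_new_tokens=20)[0]["generated_text"])

# Lower-level API: load the tokenizer and model explicitly with the Auto classes
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For large models, the same `from_pretrained` call can additionally take a quantization config (for example via bitsandbytes) so the model fits in less memory; the new cards add such an example where it applies.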
With so many models in Transformers, we could really use a hand with standardizing the existing model cards. If you're interested in making a contribution, pick a model from the list below and you can get started!
Steps
Each model card should follow the format below. You can copy the text exactly as it is!
For examples, take a look at #36469 or the BERT, Llama, Llama 2, Gemma 3, PaliGemma, ViT, and Whisper model cards on the `main` version of the docs.

Once you're done, or if you have any questions, feel free to ping @stevhliu to review. Don't add `fix` to your PR to avoid closing this issue.

I'll also be right there working alongside you and opening PRs to convert the model cards so we can complete this faster together! 🤗
Models