Commit 25b7f27

Authored by ArthurZucker, yonigozlan, pcuenca, molbap, and youngkent
Add llama4 (#37307)
* remove one of the last deps
* update fast image processor after refactor
* styling
* more quality of life improvements
* nit
* update
* cleanups
* some cleanups
* vllm updates
* update fake image token
* [convert] Fix typo
* [convert] Strip extraneous bytes from shards
* [convert] Minor fixes
* [convert] Use num_experts
* multi-image fixes in modeling + processor
* fixup size
* 128 experts
* Use default rope
* Unfuse mlp
* simplify a lot inputs embeds merging
* remove .item() 👀
* fix from review
* Address feedback
* Use None "default" for rope_scaling. Add eot.
* set seed
* return aspect ratios and bug fixes
* Moe 128 rebased (#8)
* 128 experts
* Use default rope
* Unfuse mlp
* Address feedback
* Use None "default" for rope_scaling. Add eot.
* Meta/llama quant compat (#7)
* add quant compatible model & conversion code for llama4
* fix a few issues
* fix a few issues
* minor type mapping fix
---------
Co-authored-by: Lu Fang <[email protected]>
* use a new config parameter to determine which model definition to use for MoE
---------
Co-authored-by: Pedro Cuenca <[email protected]>
Co-authored-by: Lu Fang <[email protected]>
* un-comment write_tokenizer from converting script
* remove un-used imports
* [llama4] Pop aspect_ratios from image processor output in Llama4Processor
  Signed-off-by: Jon Swenson <[email protected]>
* Fix parameter_count name
* Update src/transformers/models/llama4/configuration_llama4.py
* nit
* Add changes for no_rope, moe_layers, chunked attention. Just need to test all
* Update src/transformers/models/llama4/image_processing_llama4_fast.py
* nit
* fix post merge with main
* support flex attention
* fixes
* fix
* add layer
* small updates
* rebase and delete llm_compressor
* nit
* [llama4/mm] Add back <|image|> token that delimits global tile
* [llama4/mm] Fix Llama 4 image processing unit tests
* add explicit dtype
  Signed-off-by: Jon Swenson <[email protected]>
* sdpa works
* comment todo small
* fix model loading
  Signed-off-by: Zijing Liu <[email protected]>
* revert
* nits
* small fix for TP on 1 node
* Read new params from config
* Add <|eom|>
* lol don't know how this got here
* adding fp8
* Save processor, fix chat template
* style
* Add boi/eoi tokens We don't use them.
* fixes for now flex seems to work :)
* updates
* nits
* updates
* missking keys
* add context parallel
* update
* update
* fix
* nits
* add worldsize and make eager attn work for vision
* Ignore new key present in base models
* add tp_plan
* fix nope
  Signed-off-by: Zijing Liu <[email protected]>
* minor fix
  Signed-off-by: Zijing Liu <[email protected]>
* Clean up Llama4 vision model
* current updates
* add support for `attn_temperature_tuning`
* add floor scale
* add missing attn scales
* push what works, dirty trick for the device synch
* oups
* Fix pad_token_id See https://huggingface.co/ll-re/Llama-4-Scout-17B-16E/discussions/2/files Confirmed in the original codebase.
* fix causallml loading
* rm
* fix tied-weights
* fix sdpa
* push current version
* should work with both short and long
* add compressed_tensos & fix fbgemm tp
* Fix flex impl
* style
* chunking
* try to revert the potentially breaking change
* fix auto factory
* fix shapes in general
* rm processing
* commit cache utils cleanup
* Fix context length
* fix
* allocate
* update tp_plan
* fix SDPA!
* Add support for sparse `Llama4TextMoe` layer from the kernel hub
* cleanup
* better merge
* update
* still broken fixing now
* nits
* revert print
* Write max_position_embeddings and max_model_length
* Update modeling_llama4.py
* Save attention_chunk_size
* Sync eos terminators
* Read initializer_range
* style
* remove `dict`
* fix
* eager should use `chunked_attention_mask`
* revert
* fixup
* fix config
* Revert "Merge pull request #36 from huggingface/sparse-llama4-moe" This reverts commit ccda19f, reversing changes made to a515579.
* Fix typo and remove warning with compiled flex and chunked prefill
* Fix MoE vs FF (#41)
* fix
* Use correct no_rope_layers if provided one is empty list
* update tests
* fix
* skipping some tests
* fix fp8 loading
  Signed-off-by: Zijing Liu <[email protected]>
* fix text geneartion pipeline
  Signed-off-by: Zijing Liu <[email protected]>
* eager needs 4D mask
* fix
* Some cleanup
* fix
* update
* fix
* replace correctly module
* patch
* modulelist
* update
* update
* clean up
* Don't move to `cuda:0` in distributed mode
* restrict to compressed tensors for now
* rm print
* Docs!
* Fixes
* Update docs/source/en/model_doc/llama4.md
  Co-authored-by: Pedro Cuenca <[email protected]>
* Fixes
* cuda graph fix
* revert some stuff
* fixup
* styling
* Update src/transformers/models/llama4/modeling_llama4.py
  Co-authored-by: Arthur <[email protected]>
* fixup
* commit licence, cleanup here and there and style
* more styling changes
* fix dummies
* fix and clean docstrings
* remove comment
* remove warning
* Only fast image processor is supported
* nit
* trigger CI
* fix issue with flex encoder
* fix dynamic cache
* Code quality
* Code quality
* fix more tests for now
* Code quality
* Code quality
* Nuke bunch of failing stuff
* Code quality
* Code quality
* cleanup removal of slow image processor
* ruff fix fast image processor
* fix
* fix styling
* Docs
* Repo consistency
* Repo consistency
* fix sliding window issue
* separate llama cache
* styling
* Repo consistency
* Repo consistency
* push waht works
* L4 Repo consistency
* Docs
* fix last last alst alst alst alstsaltlsltlaslt
---------
Signed-off-by: Jon Swenson <[email protected]>
Signed-off-by: Zijing Liu <[email protected]>
Co-authored-by: yonigozlan <[email protected]>
Co-authored-by: Pedro Cuenca <[email protected]>
Co-authored-by: Pablo Montalvo <[email protected]>
Co-authored-by: Pablo Montalvo <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: Zijing Liu <[email protected]>
Co-authored-by: Lu Fang <[email protected]>
Co-authored-by: Zijing Liu <[email protected]>
Co-authored-by: Jon Swenson <[email protected]>
Co-authored-by: jmswen <[email protected]>
Co-authored-by: MekkCyber <[email protected]>
Co-authored-by: Mohamed Mekkouri <[email protected]>
Co-authored-by: Mohit Sharma <[email protected]>
Co-authored-by: Yong Hoon Shin <[email protected]>
Co-authored-by: Marc Sun <[email protected]>
Co-authored-by: drisspg <[email protected]>
Co-authored-by: Cyril Vallez <[email protected]>
Co-authored-by: Daniël de Kok <[email protected]>
Co-authored-by: Lysandre <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: ydshieh <[email protected]>
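The message above references several new text-model configuration knobs (128 experts, no_rope_layers, moe_layers, chunked attention via attention_chunk_size, attn_temperature_tuning, initializer_range). As a rough illustration only, the sketch below shows how these fields could be inspected once the model ships; the field names are taken from the message, but their exact placement on the config object (top level vs. a nested text_config) and the checkpoint id are assumptions, not something this page confirms.

```python
# Illustrative sketch only: field names come from the commit message above; their exact
# location on the config (top level vs. config.text_config) and the checkpoint id are assumed.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("meta-llama/Llama-4-Scout-17B-16E-Instruct")  # placeholder id
text_cfg = getattr(config, "text_config", config)  # Llama 4 is multimodal, so text params may nest here

for name in ("no_rope_layers", "moe_layers", "attention_chunk_size",
             "attn_temperature_tuning", "initializer_range"):
    print(name, "=", getattr(text_cfg, name, "<not present in this config>"))
```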
1 parent aa40fda commit 25b7f27

45 files changed: +5,527 −222 lines

Diff for: docs/source/en/_toctree.yml (+2 lines)
@@ -507,6 +507,8 @@
         title: Llama2
       - local: model_doc/llama3
         title: Llama3
+      - local: model_doc/llama4
+        title: Llama4
       - local: model_doc/longformer
         title: Longformer
       - local: model_doc/longt5
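Beyond this docs entry, the commit adds the Llama 4 model, configuration, processor, and fast image processor under src/transformers/models/llama4/. A minimal usage sketch follows; the class name Llama4ForConditionalGeneration, the AutoProcessor entry point, the chat-template call, the example image URL, and the checkpoint id are illustrative assumptions rather than something this page states, so adjust them to the released API.

```python
# Minimal sketch, assuming the class/checkpoint names below; they are not taken from this page.
import torch
from transformers import AutoProcessor, Llama4ForConditionalGeneration

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # placeholder checkpoint id

processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # explicit dtype; bf16 is a common choice for these checkpoints
    device_map="auto",
)

# Multimodal chat: the processor builds the prompt (including image placeholder tokens)
# from a chat-template message list.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/some_image.jpg"},  # placeholder URL
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

generated = model.generate(**inputs, max_new_tokens=64)
new_tokens = generated[:, inputs["input_ids"].shape[-1]:]  # keep only the newly generated part
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```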
