Commit 493c275

Fix(models/siglip): Add compatibility for Gemma models quantized by llm-compressor (#19643)
Signed-off-by: Vensenmu <[email protected]>
1 parent f39ab2d commit 493c275

File tree: 1 file changed (+1, −0 lines)


vllm/model_executor/models/gemma3_mm.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -479,6 +479,7 @@ class Gemma3ForConditionalGeneration(nn.Module, SupportsMultiModal, SupportsPP,
         "model.vision_tower.": "vision_tower.",
         "model.multi_modal_projector.": "multi_modal_projector.",
         "lm_head.": "language_model.lm_head.",
+        "vision_tower.vision_model.": "vision_model.",
     })

     def __init__(self, *, vllm_config: VllmConfig, prefix: str = ""):
```
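The one-line change adds a prefix mapping so that Gemma checkpoints quantized by llm-compressor, which store SigLIP weights under `vision_tower.vision_model.`, resolve to the `vision_model.` names vLLM expects. A minimal sketch of how such a longest-prefix weight-name remapping behaves (`remap_weight_names` is a hypothetical helper for illustration, not vLLM's actual `WeightsMapper` implementation):

```python
def remap_weight_names(prefix_map: dict, names: list) -> list:
    """Rewrite each checkpoint parameter name using the first matching
    prefix in prefix_map (insertion order), leaving others unchanged."""
    remapped = []
    for name in names:
        for old, new in prefix_map.items():
            if name.startswith(old):
                name = new + name[len(old):]
                break  # apply at most one substitution per name
        remapped.append(name)
    return remapped

# Mapping from the diff above; the last entry is the one this commit adds
# for llm-compressor-quantized Gemma checkpoints.
prefix_map = {
    "model.vision_tower.": "vision_tower.",
    "model.multi_modal_projector.": "multi_modal_projector.",
    "lm_head.": "language_model.lm_head.",
    "vision_tower.vision_model.": "vision_model.",
}
```

With this mapping, a quantized checkpoint's `vision_tower.vision_model.embeddings.weight` loads as `vision_model.embeddings.weight`, while names that match no prefix pass through untouched.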
