
Add Gemma3 #390


Draft · wants to merge 13 commits into base: main

Conversation

vbaddi
Contributor

@vbaddi vbaddi commented May 6, 2025

No description provided.

@vbaddi vbaddi marked this pull request as draft May 6, 2025 07:53
@vbaddi vbaddi mentioned this pull request May 14, 2025
vbaddi and others added 7 commits May 14, 2025 10:06
Signed-off-by: vbaddi <[email protected]>
Signed-off-by: Mohit Soni <[email protected]>
Signed-off-by: vbaddi <[email protected]>
Signed-off-by: Mohit Soni <[email protected]>
Signed-off-by: Mohit Soni <[email protected]>
Signed-off-by: Mohit Soni <[email protected]>
Signed-off-by: Rishin Raj <[email protected]>
Signed-off-by: Mohit Soni <[email protected]>
Signed-off-by: Abukhoyer Shaik <[email protected]>
Signed-off-by: Asmita Goswami <[email protected]>
Signed-off-by: vbaddi <[email protected]>
Signed-off-by: Meet Patel <[email protected]>
Co-authored-by: Rishin Raj <[email protected]>
Co-authored-by: Abukhoyer Shaik <[email protected]>
Co-authored-by: asmigosw <[email protected]>
Co-authored-by: Vinayak Baddi <[email protected]>
Co-authored-by: Meet Patel <[email protected]>
Signed-off-by: Mohit Soni <[email protected]>
chunk_inputs["input_ids"] = lang_inputs["input_ids"][:, i * prefill_seq_len : (i + 1) * prefill_seq_len]
chunk_inputs["position_ids"] = lang_inputs["position_ids"][
:, i * prefill_seq_len : (i + 1) * prefill_seq_len
]
outputs = lang_session.run(chunk_inputs)
chunk_inputs["index"] = outputs["index_output"]
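For readers outside the diff context, a self-contained sketch of this chunked-prefill loop: the prompt is sliced into prefill_seq_len-sized chunks and each chunk's output index is fed back into the next chunk's inputs. Note that run_session, the index semantics, and the shapes here are stand-ins inferred from the hunk, not the actual QEfficient lang_session runtime:

```python
# Minimal sketch of the chunked-prefill loop, assuming lang_inputs holds
# full-length "input_ids"/"position_ids" arrays of shape (1, seq_len) and
# seq_len is padded to a multiple of prefill_seq_len.
import numpy as np

prefill_seq_len = 4
seq_len = 12  # padded total prompt length
lang_inputs = {
    "input_ids": np.arange(seq_len).reshape(1, seq_len),
    "position_ids": np.arange(seq_len).reshape(1, seq_len),
}

def run_session(chunk):
    # Hypothetical stand-in for lang_session.run: returns the next write
    # index as one-past the last position processed in this chunk.
    return {"index_output": chunk["position_ids"][:, -1:] + 1}

chunk_inputs = {}
for i in range(seq_len // prefill_seq_len):
    sl = slice(i * prefill_seq_len, (i + 1) * prefill_seq_len)
    chunk_inputs["input_ids"] = lang_inputs["input_ids"][:, sl]
    chunk_inputs["position_ids"] = lang_inputs["position_ids"][:, sl]
    outputs = run_session(chunk_inputs)
    # Carry the returned index forward so the next chunk resumes there.
    chunk_inputs["index"] = outputs["index_output"]
```

After the loop, chunk_inputs["index"] points just past the prefilled prompt, which is what makes the batching question above worth clarifying: every array here carries an explicit leading batch dimension.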
@quic-xiyushi quic-xiyushi May 15, 2025

Could you explain what chunk_inputs["index"] and outputs["index_output"] are?
Also, is batching supported with the new approach?

@quic-akuruvil quic-akuruvil self-requested a review June 5, 2025 17:35
node_precision_info="fp32_nodes_gemma3_4b_text.yaml",
)
print(f"qpc path is {qpc_path}")
exec_info = qeff_model.generate(tokenizer, prompts=Constants.INPUT_STR, device_ids=[0])
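To illustrate the compile-then-generate flow this test exercises, here is a mocked sketch; MockQEffModel, its return values, and the QPC path are hypothetical stand-ins, not the real QEfficient API, whose signatures are not shown in full in this hunk:

```python
# Hypothetical mock of the flow above: compile with a node-precision
# override, then run generation against the compiled artifact.
class MockQEffModel:
    def compile(self, node_precision_info=None):
        # Pretend compilation consumes the fp32-nodes YAML and returns
        # the path of the compiled QPC binary.
        self.node_precision_info = node_precision_info
        return "/tmp/qpc/gemma3"

    def generate(self, tokenizer, prompts, device_ids):
        # Echo the inputs to mimic an exec-info style result object.
        return {"prompts": prompts, "device_ids": device_ids}

qeff_model = MockQEffModel()
qpc_path = qeff_model.compile(node_precision_info="fp32_nodes_gemma3_4b_text.yaml")
print(f"qpc path is {qpc_path}")
exec_info = qeff_model.generate(None, prompts=["Hello"], device_ids=[0])
```

The point of the sketch is the ordering: the precision override must be fixed at compile time, before any generate call targets the resulting QPC.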
Contributor

@quic-akuruvil quic-akuruvil Jun 8, 2025


@qcdipankar Is a match obtained between the original torch outputs and AIC? Please report the single-layer match.

@@ -0,0 +1,48 @@
# -----------------------------------------------------------------------------
Contributor


We should not have a separate script for running the language model. Please remove the script and add examples for running text-only input, text + image input, and text with multiple image inputs using QEFFAutoModelForImageTextToText.

@quic-rishinr
Contributor

Please add the test and update the model in the validated models list.

@@ -0,0 +1,6 @@
# -----------------------------------------------------------------------------
Contributor


please update it to:

# -----------------------------------------------------------------------------
# Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
# SPDX-License-Identifier: BSD-3-Clause
# -----------------------------------------------------------------------------
