Issue on page /internvl2.5/evaluation #974

Open
Andrew-Koulogeorge opened this issue Apr 2, 2025 · 1 comment

Comments

@Andrew-Koulogeorge

Hi,
I am trying to reproduce your results for the InternVL2.5 family on the MMMU dataset. When running InternVL2.5 models larger than 2B with the VLMEvalKit framework on MMMU, some examples contain many images (and therefore many tokens), and this gives me CUDA out-of-memory errors (the GPU I am using has 48 GB).

When you evaluated these models, did you have more GPU memory? Did you implement model parallelism yourself?
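For reference, here is a minimal sketch of the kind of model parallelism I have in mind, assuming the checkpoint can be loaded through Hugging Face `transformers` with `accelerate` installed; the model path is illustrative and not necessarily how you ran the evaluation:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative checkpoint name; substitute the InternVL2.5 size being evaluated.
path = "OpenGVLab/InternVL2_5-8B"

# device_map="auto" (handled by accelerate) shards the weights across all
# visible GPUs instead of placing the whole model on a single 48 GB card.
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
```

Is this roughly what you did, or did you use a custom device map / larger single GPUs when producing the reported MMMU numbers?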

@yuecao0119
Collaborator
