Local apps: add MLX LM #1443
Conversation
> (maybe let's add it as a local app and change the library snippets to python?)
Yes, I think that's what we should do:
- (modeling) libraries are for actual code snippets, including in Python
- local apps are for CLIs

That's consistent with what we've done for llama.cpp: llama.cpp and node-llama-cpp are apps, whereas llama-cpp-python is a "library".
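For reference, a minimal sketch of the full entry, following the shape of the existing llama.cpp entries in local-apps.ts (the registry key and the snippet helper name below are placeholders, not part of this PR):

```ts
"mlx-lm": {
	prettyLabel: "MLX LM",
	docsUrl: "https://github.com/ml-explore/mlx-lm",
	mainTask: "text-generation",
	displayOnModelPage: isMlxModel,
	snippet: snippetMlxLm, // placeholder name for the CLI snippet builder
},
```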
prettyLabel: "MLX LM",
docsUrl: "https://github.com/ml-explore/mlx-lm",
mainTask: "text-generation",
displayOnModelPage: isMlxModel,
I would only display it on actually supported models (so check the pipeline_tag).
(Note: mainTask does not check the model's task on a model page; it's only used for the settings page.)
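A sketch of a display gate that combines both checks, assuming `displayOnModelPage` receives the `ModelData` object with its `pipeline_tag`:

```ts
// `isMlxModel` comes from the diff; the pipeline_tag check implements
// the suggestion above to only show the app for supported tasks.
displayOnModelPage: (model) => isMlxModel(model) && model.pipeline_tag === "text-generation",
```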
return [
	{
		title: "Generate or start a chat session",
		setup: ["# Install MLX LM", "pip install mlx-lm"].join("\n"),
Or use `uvx` and drop the pip install? Or even better, `uv tool install mlx-lm`.
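A sketch of what the setup line could become with uv (assuming `uv` itself is already available on the user's machine):

```ts
// Option 1 (hypothetical): install mlx-lm as a standalone tool; this
// exposes its entry points (mlx_lm.generate, mlx_lm.chat, ...).
setup: ["# Install MLX LM", "uv tool install mlx-lm"].join("\n"),

// Option 2 (hypothetical): drop the install step and run through uvx, e.g.:
//   uvx --from mlx-lm mlx_lm.generate --model <model-id> --prompt "Hello"
```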
		setup: ["# Install MLX LM", "pip install mlx-lm"].join("\n"),
		content: [
			"# One-shot generation",
			`mlx_lm.generate --model "${model.id}" --prompt "Hello"`,
No need for this one if the model is conversational, IMO.
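A sketch of how the snippet could branch on that, assuming conversational models carry the "conversational" tag in `model.tags`:

```ts
// Hypothetical: point conversational models at the interactive chat REPL
// and keep the one-shot example only for plain text-generation models.
content: model.tags.includes("conversational")
	? [`mlx_lm.chat --model "${model.id}"`]
	: ["# One-shot generation", `mlx_lm.generate --model "${model.id}" --prompt "Hello"`],
```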
Still hesitating between this (adding it as a local app) and updating the library snippets directly. Wdyt @pcuenca? (Maybe let's add it as a local app and change the library snippets to Python?)