
How do I deploy LM Studio locally? What should the .env file and config file contain? Looking for help #1498

Open
Marsedward opened this issue Mar 31, 2025 · 2 comments

Comments

@Marsedward

1. To connect to LM Studio, do I need to modify the .env file? If so, how?
[image]
2. How should the config file be set up?
My config is as follows (many thanks in advance):
```yaml
type: llm
provider: litellm_llm
timeout: 120
models:
- model: lm_studio/deepseek-coder-v2-lite-instruct-mlx
  # api_key: deepseek-coder-v2-lite-instruct-mlx
  api_base: http://172.24.12.23:1234
  kwargs:
    n: 1
    seed: 0
    max_completion_tokens: 4096
    reasoning_effort: low
---
type: embedder
provider: litellm_embedder
models:
- model: lm_studio/text-embedding-bge-m3
  alias: default
  api_base: http://172.24.12.23:1234
  timeout: 120
```
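Since the posted config is a multi-document YAML file (an `llm` section and an `embedder` section separated by `---`), a quick way to catch indentation mistakes like the ones above is to parse it before starting the service. This is a minimal sketch assuming PyYAML (`pip install pyyaml`) is available; the field names simply mirror the config in this issue, not an official schema.

```python
# Parse the two-document YAML config and verify its basic shape.
# Assumes PyYAML; the config text mirrors the one posted in this issue.
import yaml

config_text = """\
type: llm
provider: litellm_llm
timeout: 120
models:
- model: lm_studio/deepseek-coder-v2-lite-instruct-mlx
  api_base: http://172.24.12.23:1234
  kwargs:
    n: 1
    seed: 0
    max_completion_tokens: 4096
    reasoning_effort: low
---
type: embedder
provider: litellm_embedder
models:
- model: lm_studio/text-embedding-bge-m3
  alias: default
  api_base: http://172.24.12.23:1234
  timeout: 120
"""

# safe_load_all yields one dict per `---`-separated document.
docs = list(yaml.safe_load_all(config_text))
for doc in docs:
    assert "type" in doc and "models" in doc, f"malformed section: {doc}"
print([d["type"] for d in docs])  # expect ['llm', 'embedder']
```

If the indentation is wrong (e.g. `kwargs` keys at the same level as `model`), the assertions or the parse itself will fail, which is much easier to debug than a red health indicator in the UI.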

LM Studio model: deepseek-coder-v2-lite-instruct-mlx
Local IP: http://172.24.12.23:1234
Deployed via OrbStack.
There is a persistent red light, as shown below:

[image]
Many thanks to anyone who can assist. Also, could the author please make model integration simpler?

@wwwy3y3
Member

wwwy3y3 commented Apr 2, 2025

@cyyeh could you reply to this one? Thanks

@cyyeh
Member

cyyeh commented Apr 7, 2025

@Marsedward please use WREN_AI_SERVICE_VERSION=0.19.3 in ~/.wrenai/.env and use this config example: https://github.com/Canner/WrenAI/blob/main/wren-ai-service/docs/config_examples/config.lm_studio.yaml
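For reference, pinning the version as suggested above means the `~/.wrenai/.env` file should contain a line like the following (only the `WREN_AI_SERVICE_VERSION` line comes from the answer above; any other variables in your .env stay as they are):

```
WREN_AI_SERVICE_VERSION=0.19.3
```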
