TabbyML plugin did not start when using LM Studio and qwen-2.5-coder-7b model #3925
Comments
Hi - can you share your config.toml to help debug the issue?
I got the same issue. My sample config:

```toml
[model.chat.http]
kind = "openai/chat"
model_name = "qwen2.5-coder-7b-instruct"   # Example model
api_endpoint = "http://127.0.0.1:1234/v1"  # LM Studio server endpoint with /v1 path
api_key = ""                               # No API key required for local deployment

[model.completion.http]
kind = "openai/completion"
model_name = "qwen2.5-coder-7b-instruct"   # Example code completion model
api_endpoint = "http://127.0.0.1:1234/v1"
api_key = ""
prompt_template = "<PRE> {prefix} <SUF>{suffix} <MID>"  # Example prompt template for CodeLlama models
```
Were you able to resolve the issue?
Hi @ashm650 @tranghaviet, I have tested chat with LM Studio locally, and it works as expected. I noticed you were using 'qwen2.5-coder-7b-instruct' as the completion model, but that is incorrect. You should use the 'qwen' models without the '-instruct' suffix and with the proper prompt_template. Please use this as a reference:
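As a minimal sketch of such a completion section, assuming LM Studio is serving the base (non-instruct) model under the name 'qwen2.5-coder-7b' and that the Qwen2.5-Coder fill-in-the-middle tokens are the right prompt_template; both the model name and the template tokens are assumptions to verify against your LM Studio model list and the Tabby docs:

```toml
# Hypothetical example: base (non-instruct) Qwen2.5-Coder model for completions.
# Model name and prompt_template tokens are assumptions; check them against
# your local LM Studio setup and the Tabby documentation.
[model.completion.http]
kind = "openai/completion"
model_name = "qwen2.5-coder-7b"            # base model, no '-instruct' suffix
api_endpoint = "http://127.0.0.1:1234/v1"  # LM Studio's OpenAI-compatible endpoint
api_key = ""                               # no key needed for a local server
prompt_template = "<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"  # Qwen2.5-Coder FIM tokens
```

The `[model.chat.http]` section can keep the '-instruct' variant, since chat models are tuned for instruction following, while the completion endpoint expects a base model that understands fill-in-the-middle prompts.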
The Tabby VS Code plugin gives a "Server start failed" error when trying to use a local model served by LM Studio; it is not able to connect to the LM Studio server.