TabbyML plugin did not start when using LM Studio and qwen-2.5-coder-7b model #3925

Open
ashm650 opened this issue Mar 2, 2025 · 5 comments

Comments


ashm650 commented Mar 2, 2025

The Tabby VS Code plugin reports a "Server start failed" error when I try to use a local model served by LM Studio; the Tabby server is not able to connect to the LM Studio server.

wsxiaoys (Member) commented Mar 7, 2025

Hi - can you share your config.toml to help debug the issue?

tranghaviet commented

I got the same issue. Here is my sample config:

[model.chat.http]
kind = "openai/chat"
model_name = "qwen2.5-coder-7b-instruct"  # Example model
api_endpoint = "http://127.0.0.1:1234/v1"    # LM Studio server endpoint with /v1 path
api_key = ""                                 # No API key required for local deployment

[model.completion.http]
kind = "openai/completion"
model_name = "qwen2.5-coder-7b-instruct"                 # Example code completion model
api_endpoint = "http://127.0.0.1:1234/v1"
api_key = ""
prompt_template = "<PRE> {prefix} <SUF>{suffix} <MID>"  # Example prompt template for CodeLlama models

ashm650 (Author) commented Mar 31, 2025

@wsxiaoys

disable = true # set to true to disable

[model.chat.http]
kind = "openai/chat"
model_name = "qwen2.5-coder-7b-instruct"  # Example model
api_endpoint = "http://localhost:1234/v1"    # LM Studio server endpoint with /v1 path
api_key = ""                                 # No API key required for local deployment

[model.completion.http]
kind = "openai/completion"
model_name = "qwen2.5-coder-7b-instruct"                 # Example code completion model
api_endpoint = "http://localhost:1234/v1"
api_key = ""
prompt_template = "<PRE> {prefix} <SUF>{suffix} <MID>"  # Example prompt template for CodeLlama models

[model.embedding.http]
kind = "openai/embedding"
model_name = "qwen2.5-coder-7b-instruct"
api_endpoint = "http://localhost:1234/v1"
api_key = ""

ashm650 (Author) commented Mar 31, 2025

I got the same issue. Here is my sample config:

[model.chat.http]
kind = "openai/chat"
model_name = "qwen2.5-coder-7b-instruct" # Example model
api_endpoint = "http://127.0.0.1:1234/v1" # LM Studio server endpoint with /v1 path
api_key = "" # No API key required for local deployment

[model.completion.http]
kind = "openai/completion"
model_name = "qwen2.5-coder-7b-instruct" # Example code completion model
api_endpoint = "http://127.0.0.1:1234/v1"
api_key = ""
prompt_template = "

 {prefix} {suffix} "  # Example prompt template for CodeLlama models

Were you able to resolve the issue?

zwpaper (Member) commented Apr 2, 2025

Hi @ashm650 @tranghaviet, I have tested chat with LM Studio locally, and it works as expected.

I noticed you were using 'qwen2.5-coder-7b-instruct' as the completion model, but that is incorrect. You should use the 'qwen' base models without the '-instruct' suffix, together with the proper prompt_template. Please use this as a reference:
https://github.com/TabbyML/registry-tabby/blob/d57d6c6bb0d7a0718d401eafeb8f44ff17a81e29/models.json#L320-L335
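For example, a completion section along the following lines should work. This is only a sketch: it assumes LM Studio exposes the base (non-instruct) model under the name "qwen2.5-coder-7b" (adjust model_name to whatever LM Studio actually lists), and uses the FIM-style prompt_template that Qwen2.5-Coder base models expect (double-check it against the registry entry linked above):

# Sketch of a corrected completion config, assuming the base (non-instruct)
# model is served by LM Studio as "qwen2.5-coder-7b".
[model.completion.http]
kind = "openai/completion"
model_name = "qwen2.5-coder-7b"              # base model, no "-instruct" suffix
api_endpoint = "http://127.0.0.1:1234/v1"
api_key = ""
prompt_template = "<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"  # Qwen2.5-Coder FIM format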
