Hi friends, I've tried following the example deepseek.config.yaml, and I cannot get it to work.
Below is my config file. What am I not doing correctly?
I'm using the docker-compose file in ./docker, and I've added the API keys to .env.
I'm running it with docker-compose --env-file .env up -d
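For reference, per the comments in the config below, my ~/.wrenai/.env defines both keys (values redacted here; the OpenAI key is only there because I'm using the OpenAI embedding model):

```shell
# ~/.wrenai/.env
DEEPSEEK_API_KEY=<your_api_key>   # used by the deepseek llm models
OPENAI_API_KEY=<api_key>          # used by text-embedding-3-large
```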
# you should rename this file to config.yaml and put it in ~/.wrenai
# please pay attention to the comments starting with # and adjust the config accordingly, 3 steps basically:
# 1. you need to use your own llm and embedding models
# 2. you need to use the correct pipe definitions based on https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
# 3. you need to fill in correct llm and embedding models in the pipe definitions
type: llm
provider: litellm_llm
models:
  # put DEEPSEEK_API_KEY=<your_api_key> in ~/.wrenai/.env
  - api_base: https://api.deepseek.com/v1
    model: deepseek/deepseek-reasoner
    alias: default
    timeout: 120
    kwargs:
      n: 1
      temperature: 0
      response_format:
        type: text
  - api_base: https://api.deepseek.com/v1
    model: deepseek/deepseek-chat
    timeout: 120
    kwargs:
      n: 1
      temperature: 0
      response_format:
        type: text
  - api_base: https://api.deepseek.com/v1
    model: deepseek/deepseek-coder
    timeout: 120
    kwargs:
      n: 1
      temperature: 0
      response_format:
        type: json_object
---
type: embedder
provider: litellm_embedder
models:
  # define OPENAI_API_KEY=<api_key> in ~/.wrenai/.env if you are using openai embedding model
  # please refer to LiteLLM documentation for more details: https://docs.litellm.ai/docs/providers
  - model: text-embedding-3-large  # put your embedding model name here, if it is not openai embedding model, should be <provider>/<model_name>
    alias: default
    api_base: https://api.openai.com/v1  # change this according to your embedding model
    timeout: 120
---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000
---
type: engine
provider: wren_ibis
endpoint: http://wren-ibis:8000
---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 3072  # put your embedding model dimension here
timeout: 120
recreate_index: true
---
# please change the llm and embedder names to the ones you want to use
# the format of llm and embedder should be <provider>.<model_name> such as litellm_llm.gpt-4o-2024-08-06
# the pipes may be not the latest version, please refer to the latest version: https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
---
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: litellm_embedder.text-embedding-3-large
    document_store: qdrant
  - name: historical_question_indexing
    embedder: litellm_embedder.text-embedding-3-large
    document_store: qdrant
  - name: table_description_indexing
    embedder: litellm_embedder.text-embedding-3-large
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.deepseek-chat
    embedder: litellm_embedder.text-embedding-3-large
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: litellm_embedder.text-embedding-3-large
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.deepseek-chat
    engine: wren_ui
  - name: sql_correction
    llm: litellm_llm.deepseek-chat
    engine: wren_ui
  - name: followup_sql_generation
    llm: litellm_llm.deepseek-chat
    engine: wren_ui
  - name: sql_summary
    llm: litellm_llm.deepseek-chat
  - name: sql_answer
    llm: litellm_llm.deepseek-chat
  - name: sql_breakdown
    llm: litellm_llm.deepseek-chat
    engine: wren_ui
  - name: sql_expansion
    llm: litellm_llm.deepseek-chat
    engine: wren_ui
  - name: semantics_description
    llm: litellm_llm.deepseek-chat
  - name: relationship_recommendation
    llm: litellm_llm.deepseek-chat
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.deepseek-chat
  - name: question_recommendation_db_schema_retrieval
    llm: litellm_llm.deepseek-chat
    embedder: litellm_embedder.text-embedding-3-large
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: litellm_llm.deepseek-chat
    engine: wren_ui
  - name: intent_classification
    llm: litellm_llm.deepseek-chat
    embedder: litellm_embedder.text-embedding-3-large
    document_store: qdrant
  - name: data_assistance
    llm: litellm_llm.deepseek-chat
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: litellm_embedder.text-embedding-3-large
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: litellm_embedder.text-embedding-3-large
    llm: litellm_llm.deepseek-chat
  - name: preprocess_sql_data
    llm: litellm_llm.deepseek-chat
  - name: sql_executor
    engine: wren_ui
  - name: chart_generation
    llm: litellm_llm.deepseek-chat
  - name: chart_adjustment
    llm: litellm_llm.deepseek-chat
  - name: sql_question_generation
    llm: litellm_llm.deepseek-chat
  - name: sql_generation_reasoning
    llm: litellm_llm.deepseek-chat
  - name: sql_regeneration
    llm: litellm_llm.deepseek-chat
    engine: wren_ui
---
settings:
  engine_timeout: 30
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_using_db_schemas_without_pruning: false
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: false
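One thing I'm unsure about: the config's own comment says pipe references must be `<provider>.<model_name>`, but my models are defined as `deepseek/deepseek-chat` while the pipes say `litellm_llm.deepseek-chat`, so I don't know whether the `deepseek/` prefix belongs in the reference too. Here's a minimal sketch of that cross-check in plain Python (no WrenAI code; the sets are copied from my config above, and `unresolved` is just a name I made up):

```python
# Model names as defined in the llm/embedder sections of my config.
defined = {
    "litellm_llm": {
        "deepseek/deepseek-reasoner",
        "deepseek/deepseek-chat",
        "deepseek/deepseek-coder",
    },
    "litellm_embedder": {"text-embedding-3-large"},
}

# The two distinct references my pipes section actually uses.
used = {"litellm_llm.deepseek-chat", "litellm_embedder.text-embedding-3-large"}

def unresolved(refs, models):
    """Return references whose model name is not defined under its provider."""
    bad = set()
    for ref in refs:
        # Split "<provider>.<model_name>" on the first dot only,
        # since model names can contain dots themselves.
        provider, _, name = ref.partition(".")
        if name not in models.get(provider, set()):
            bad.add(ref)
    return bad

print(unresolved(used, defined))  # → {'litellm_llm.deepseek-chat'}
```

If that mismatch is the problem, I'd expect I need either `litellm_llm.deepseek/deepseek-chat` in the pipes or an alias on the deepseek-chat model, but I'd appreciate confirmation.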