When I started it up, I encountered the following problem. Could you tell me whether I need to make any modifications?
This is my conf.yml:
```yaml
type: llm
provider: litellm_llm
models:
- model: deepseek/deepseek-chat
  timeout: 120
  kwargs:
    n: 1
    temperature: 0
    response_format:
      type: text

---
type: embedder
provider: litellm_embedder
models:
# define OPENAI_API_KEY=<api_key> in ~/.wrenai/.env if you are using openai embedding model
# please refer to LiteLLM documentation for more details: https://docs.litellm.ai/docs/providers
- model: deepseek/deepseek-chat  # put your embedding model name here, if it is not openai embedding model, should be <provider>/<model_name>
  alias: default
  api_base: https://api.deepseek.com/v1  # change this according to your embedding model
  api_key_name: EMBEDDER_OLLAMA_API_KEY
  timeout: 120
```
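To rule out the credentials and endpoints on their own, the two model entries above can be exercised directly through LiteLLM. This is only a minimal sketch, assuming the `litellm` package is installed and the same keys are exported in the shell; note that `deepseek/deepseek-chat` is a chat model, and I am not sure DeepSeek serves an embedding endpoint at all, so the second call is the part I would expect to fail:

```python
# Standalone check of the llm/embedder entries above.
# Assumes: `pip install litellm`, and DEEPSEEK_API_KEY / EMBEDDER_OLLAMA_API_KEY
# exported in the current shell (same values as in .env).
import os
import litellm

# Chat model, mirroring the litellm_llm entry.
resp = litellm.completion(
    model="deepseek/deepseek-chat",
    api_base="https://api.deepseek.com/v1",
    api_key=os.environ["DEEPSEEK_API_KEY"],
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)

# Embedding call, mirroring the litellm_embedder entry. If this raises,
# every indexing pipe in the pipeline section below will fail the same way.
emb = litellm.embedding(
    model="deepseek/deepseek-chat",
    api_base="https://api.deepseek.com/v1",
    api_key=os.environ["EMBEDDER_OLLAMA_API_KEY"],
    input=["ping"],
)
print(len(emb.data[0]["embedding"]))
```

The conf.yml then continues with the engine and document-store sections: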
```yaml
---
type: engine
provider: wren_ui
endpoint: http://localhost:3000

---
type: engine
provider: wren_ibis
endpoint: http://localhost:8000

---
type: document_store
provider: qdrant
location: http://localhost:6333
embedding_model_dim: 1024  # put your embedding model dimension here
timeout: 120
recreate_index: true
```
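As a quick reachability check for the Qdrant store configured above (run from the host, assuming port 6333 is published and the `requests` package is installed):

```python
# List Qdrant collections via its REST API; an HTTP error here would mean
# the document store itself is the problem rather than the models.
import requests

r = requests.get("http://localhost:6333/collections", timeout=5)
r.raise_for_status()
print(r.json())
```

After that comes the pipeline section: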
```yaml
---
# please change the llm and embedder names to the ones you want to use
# the format of llm and embedder should be <provider>.<model_name> such as litellm_llm.gpt-4o-2024-08-06
# the pipes may be not the latest version, please refer to the latest version: https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
type: pipeline
pipes:
  - embedder: litellm_embedder.deepseek/deepseek-chat
    document_store: qdrant
  - embedder: litellm_embedder.deepseek/deepseek-chat
    document_store: qdrant
  - embedder: litellm_embedder.deepseek/deepseek-chat
    document_store: qdrant
  - llm: litellm_llm.deepseek/deepseek-chat
    embedder: litellm_embedder.deepseek/deepseek-chat
    document_store: qdrant
  - embedder: litellm_embedder.deepseek/deepseek-chat
    document_store: qdrant
  - llm: litellm_llm.deepseek/deepseek-chat
    engine: wren_ui
  - llm: litellm_llm.deepseek/deepseek-chat
    engine: wren_ui
  - llm: litellm_llm.deepseek/deepseek-chat
    engine: wren_ui
  - llm: litellm_llm.deepseek/deepseek-chat
  - llm: litellm_llm.deepseek/deepseek-chat
    engine: wren_ui
  - llm: litellm_llm.deepseek/deepseek-chat
    engine: wren_ui
  - llm: litellm_llm.deepseek/deepseek-chat
    engine: wren_ui
  - llm: litellm_llm.deepseek/deepseek-chat
  - llm: litellm_llm.deepseek/deepseek-chat
    engine: wren_ui
  - llm: litellm_llm.deepseek/deepseek-chat
  - llm: litellm_llm.deepseek/deepseek-chat
    embedder: litellm_embedder.deepseek/deepseek-chat
    document_store: qdrant
  - llm: litellm_llm.deepseek/deepseek-chat
    engine: wren_ui
  - llm: litellm_llm.deepseek/deepseek-chat
  - llm: litellm_llm.deepseek/deepseek-chat
  - llm: litellm_llm.deepseek/deepseek-chat
    embedder: litellm_embedder.deepseek/deepseek-chat
    document_store: qdrant
  - llm: litellm_llm.deepseek/deepseek-chat
  - document_store: qdrant
    embedder: litellm_embedder.deepseek/deepseek-chat
  - document_store: qdrant
    embedder: litellm_embedder.deepseek/deepseek-chat
    llm: litellm_llm.deepseek/deepseek-chat
  - llm: litellm_llm.deepseek/deepseek-chat
  - engine: wren_ui
  - llm: litellm_llm.deepseek/deepseek-chat
  - llm: litellm_llm.deepseek/deepseek-chat
  - llm: litellm_llm.deepseek/deepseek-chat
  - llm: litellm_llm.deepseek/deepseek-chat
    engine: wren_ui
  - embedder: litellm_embedder.deepseek/deepseek-chat
    document_store: qdrant
  - embedder: litellm_embedder.deepseek/deepseek-chat
    document_store: qdrant
  - engine: wren_ibis
    document_store: qdrant
  - document_store: qdrant
  - document_store: qdrant
    embedder: litellm_embedder.deepseek/deepseek-chat
```
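Since every pipe has to reference a model as `<provider>.<model_name>` (or its alias), here is a small sketch that cross-checks the pipe references against the model definitions, assuming PyYAML is installed and the file above is saved locally as `conf.yml`:

```python
# Cross-check that every llm/embedder reference in the pipeline section
# points at a model (or alias) actually defined in the llm/embedder docs.
import yaml

defined = set()
pipes = []
with open("conf.yml") as f:
    for doc in yaml.safe_load_all(f):
        if not doc:
            continue
        if doc.get("type") in ("llm", "embedder"):
            provider = doc["provider"]
            for m in doc.get("models", []):
                defined.add(f"{provider}.{m['model']}")
                if "alias" in m:
                    defined.add(f"{provider}.{m['alias']}")
        elif doc.get("type") == "pipeline":
            pipes = doc.get("pipes", [])

for pipe in pipes:
    for key in ("llm", "embedder"):
        ref = pipe.get(key)
        if ref and ref not in defined:
            print("undefined reference:", ref)
```

And the settings at the end of conf.yml: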
```yaml
---
settings:
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_using_db_schemas_without_pruning: false  # if you want to use db schemas without pruning, set this to true. It will be faster
  allow_intent_classification: true
  allow_sql_generation_reasoning: true
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: true
  historical_question_retrieval_similarity_threshold: 0.9
  sql_pairs_similarity_threshold: 0.7
  sql_pairs_retrieval_max_size: 10
  instructions_similarity_threshold: 0.7
  instructions_top_k: 10
```
This is my .env; I only filled in the DeepSeek key:
```
COMPOSE_PROJECT_NAME=wrenai
PLATFORM=linux/amd64

PROJECT_DIR=.

# service port
WREN_ENGINE_PORT=8080
WREN_ENGINE_SQL_PORT=7432
WREN_AI_SERVICE_PORT=5555
WREN_UI_PORT=3000
IBIS_SERVER_PORT=8000
WREN_UI_ENDPOINT=http://wren-ui:${WREN_UI_PORT}

# ai service settings
QDRANT_HOST=qdrant
SHOULD_FORCE_DEPLOY=1

# vendor keys
LLM_OPENAI_API_KEY=
EMBEDDER_OPENAI_API_KEY=
LLM_AZURE_OPENAI_API_KEY=
EMBEDDER_AZURE_OPENAI_API_KEY=
QDRANT_API_KEY=
DEEPSEEK_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# version
# CHANGE THIS TO THE LATEST VERSION
WREN_PRODUCT_VERSION=0.15.4
WREN_ENGINE_VERSION=0.14.3
WREN_AI_SERVICE_VERSION=0.15.18
IBIS_SERVER_VERSION=0.14.3
WREN_UI_VERSION=0.20.2
WREN_BOOTSTRAP_VERSION=0.1.5

# user id (uuid v4)
USER_UUID=

# for other services
POSTHOG_API_KEY=phc_nhF32aj4xHXOZb0oqr2cn4Oy9uiWzz6CCP4KZmRq9aE
POSTHOG_HOST=https://app.posthog.com
TELEMETRY_ENABLED=true
# this is for telemetry to know the model, i think ai-service might be able to provide a endpoint to get the information
GENERATION_MODEL=gpt-4o-mini
LANGFUSE_SECRET_KEY=
LANGFUSE_PUBLIC_KEY=

# the port exposes to the host
# OPTIONAL: change the port if you have a conflict
HOST_PORT=3000
AI_SERVICE_FORWARD_PORT=5555

# Wren UI
EXPERIMENTAL_ENGINE_RUST_VERSION=false

EMBEDDER_OLLAMA_API_KEY=123456
```
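To double-check the wiring between conf.yml and this file, the `api_key_name` values in the embedder section have to exist here; a small sketch assuming `python-dotenv` and that the file lives at `~/.wrenai/.env` (adjust the path if your compose setup keeps it elsewhere):

```python
# Verify that the env vars conf.yml refers to are actually set in .env.
from pathlib import Path
from dotenv import dotenv_values

env = dotenv_values(Path.home() / ".wrenai" / ".env")
for name in ("DEEPSEEK_API_KEY", "EMBEDDER_OLLAMA_API_KEY"):
    print(name, "set" if env.get(name) else "MISSING")
```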