
chatchat.init_database:worker:61 #5297

Open

kenvinsuxun opened this issue Mar 31, 2025 · 1 comment

Comments

@kenvinsuxun

Here is the configuration file `model_settings.yaml`:
```yaml
# Model configuration

# Default LLM name
DEFAULT_LLM_MODEL: deepseek-r1:7b

# Default Embedding model name
DEFAULT_EMBEDDING_MODEL: quentinz/bge-large-zh-v1.5:latest

# AgentLM model name (optional; if set, it pins the model used by the Chain once the Agent is entered; if unset, DEFAULT_LLM_MODEL is used)
Agent_MODEL: ''

# Default number of history turns
HISTORY_LEN: 10

# Maximum context length supported by the model. If left empty, the model's default maximum is used; otherwise the value set here applies.
MAX_TOKENS:

# Common LLM chat parameters
TEMPERATURE: 0.7

# Supported Agent models
SUPPORT_AGENT_MODELS:
  - chatglm3-6b
  - glm-4
  - openai-api
  - Qwen-2
  - qwen2-instruct
  - gpt-3.5-turbo
  - gpt-4o

# LLM model configuration, including initialization parameters for the different modalities.
# If model is left empty, DEFAULT_LLM_MODEL is used automatically.
LLM_MODEL_CONFIG:
  preprocess_model:
    model: ''
    temperature: 0.05
    max_tokens: 4096
    history_len: 10
    prompt_name: default
    callbacks: false
  llm_model:
    model: ''
    temperature: 0.9
    max_tokens: 4096
    history_len: 10
    prompt_name: default
    callbacks: true
  action_model:
    model: ''
    temperature: 0.01
    max_tokens: 4096
    history_len: 10
    prompt_name: ChatGLM3
    callbacks: true
  postprocess_model:
    model: ''
    temperature: 0.01
    max_tokens: 4096
    history_len: 10
    prompt_name: default
    callbacks: true
  image_model:
    model: sd-turbo
    size: 256*256

# Model-serving platform configuration (field reference for MODEL_PLATFORMS entries):
#   platform_name: ollama                    # platform name
#   platform_type: ollama                    # platform type, one of ['xinference', 'ollama', 'oneapi', 'fastchat', 'openai', 'custom openai']
#   api_base_url: http://127.0.0.1:9997/v1   # openai api url
#   api_key: EMPTY                           # api key if available
#   api_proxy: ''                            # API proxy
#   api_concurrencies: 5                     # maximum concurrency per model on this platform
#   auto_detect_model: false                 # whether to auto-detect the platform's available models; when True the model lists below are detected automatically
#   llm_models: []                           # LLMs served by this platform; auto-detected when auto_detect_model is True
#   embed_models: []                         # embedding models; auto-detected when auto_detect_model is True
#   text2image_models: []                    # image-generation models; auto-detected when auto_detect_model is True
#   image2text_models: []                    # multimodal models; auto-detected when auto_detect_model is True
#   rerank_models: []                        # rerank models; auto-detected when auto_detect_model is True
#   speech2text_models: []                   # STT models; auto-detected when auto_detect_model is True
#   text2speech_models: []                   # TTS models; auto-detected when auto_detect_model is True

MODEL_PLATFORMS:
  - platform_name: ollama
    platform_type: ollama
    api_base_url: http://127.0.0.1:11434/v1
    api_key: EMPTY
    api_proxy: ''
    api_concurrencies: 5
    auto_detect_model: false
    llm_models:
      - qwen:7b
      - qwen2:7b
      - deepseek-r1:7b
      - deepseek-r1:1.5b
    embed_models:
      - quentinz/bge-large-zh-v1.5:latest
    text2image_models: []
    image2text_models: []
    rerank_models: []
    speech2text_models: []
    text2speech_models: []
```
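A frequent cause of embedding failures during initialization is a mismatch between `DEFAULT_EMBEDDING_MODEL` and a platform's `embed_models` list (or an indentation slip that nests `embed_models` under `llm_models`, as the paste above suggests). A minimal, self-contained sketch of such a check; `find_embedding_platform` and the inline `settings` dict are illustrative, not part of chatchat:

```python
from typing import Optional

def find_embedding_platform(settings: dict) -> Optional[str]:
    """Return the name of the first platform that serves the default embedding model."""
    target = settings.get("DEFAULT_EMBEDDING_MODEL")
    for platform in settings.get("MODEL_PLATFORMS", []):
        # embed_models must be a list directly under the platform entry,
        # not nested under llm_models.
        if target in platform.get("embed_models", []):
            return platform.get("platform_name")
    return None

settings = {
    "DEFAULT_EMBEDDING_MODEL": "quentinz/bge-large-zh-v1.5:latest",
    "MODEL_PLATFORMS": [
        {
            "platform_name": "ollama",
            "embed_models": ["quentinz/bge-large-zh-v1.5:latest"],
        }
    ],
}
print(find_embedding_platform(settings))  # → ollama
```

If this returns `None` for your parsed config, no platform advertises the default embedding model, and every embedding call made by `init_database` will fail.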

Running `chatchat init` fails with the following error:

```
Document chunking example: page_content='See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/372669736\nCreating Large Language Model Applications Utilizing LangChain: A Primer on\nDeveloping LLM Apps Fast\nArticle\xa0\xa0in\xa0\xa0International Conference on Applied Engineering and Natural Sciences · July 2023\nDOI: 10.59287/icaens.1127\nCITATIONS\n0\nREADS\n47\n2 authors:\nSome of the authors of this publication are also working on these related projects:\nTHALIA: Test Harness for the Assessment of Legacy Information Integration Approaches View project\nAnalysis of Feroresonance with Signal Processing Technique View project\nOguzhan Topsakal\nFlorida Polytechnic University\n29 PUBLICATIONS\xa0\xa0\xa0155 CITATIONS\nSEE PROFILE' metadata={'source': 'D:\chatchat_data\data\knowledge_base\samples\content\test_files\langchain.pdf'}
2025-04-01 07:10:18.980 | ERROR | chatchat.init_database:worker:61 - Error raised by inference API HTTP code: 500, {"error":{}}
```


dosubot bot commented Mar 31, 2025

I found a similar discussion regarding issues during the model loading phase in Langchain-Chatchat, which might be relevant to your situation. You can check it out here [1]. However, it seems that no solution was found in the comments.

Regarding your issue, a 500 Internal Server Error typically indicates a problem on the server side. Here are a few things you might want to check in your model_settings.yaml:

  1. API Base URL: Ensure that the api_base_url is correct and the server is running at that address. The URL should be accessible and not blocked by any firewall or network settings.

  2. API Key: If your platform requires an API key, make sure it is correctly set. Currently, it is marked as EMPTY.

  3. Model Configuration: Verify that the models specified in llm_models and embed_models are correctly configured and available on the platform.

  4. Concurrency Settings: Check if the api_concurrencies setting is appropriate for your server's capacity.

  5. Error Logging: Since the error is logged, check the server logs for more detailed error messages that might give you more insight into what is causing the 500 error.

If these configurations are correct and the issue persists, it might be helpful to look into the server-side logs or any additional error messages that could provide more context.
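To narrow down points 1 and 3 above, it can help to reproduce the embedding call outside chatchat. A minimal sketch, assuming an OpenAI-compatible `/v1/embeddings` endpoint (which Ollama exposes); `build_embeddings_request` and `probe_embeddings` are illustrative helpers, not chatchat APIs:

```python
import json
import urllib.error
import urllib.request

def build_embeddings_request(api_base_url: str, model: str,
                             api_key: str = "EMPTY") -> urllib.request.Request:
    """Build a one-sentence embedding request against an OpenAI-compatible API."""
    return urllib.request.Request(
        api_base_url.rstrip("/") + "/embeddings",
        data=json.dumps({"model": model, "input": ["hello"]}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )

def probe_embeddings(api_base_url: str, model: str):
    """Send the request and return (HTTP status, raw body), even on a 500."""
    req = build_embeddings_request(api_base_url, model)
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return resp.status, resp.read().decode()
    except urllib.error.HTTPError as e:
        # A 500 here reproduces the init_database error outside chatchat;
        # the body usually says more than {"error":{}}.
        return e.code, e.read().decode()
    except urllib.error.URLError as e:
        # Connection refused: the server is not running at this address.
        return 0, str(e.reason)

# Example (requires the Ollama server running locally):
#   status, body = probe_embeddings("http://127.0.0.1:11434/v1",
#                                   "quentinz/bge-large-zh-v1.5:latest")
#   print(status, body[:300])
```

A 404-style error body naming the model usually means it has not been pulled into Ollama; a status of 0 means the `api_base_url` is wrong or the server is down.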
