Error using rerank twice #4967
Very similar issue here with v2.26.0-cublas-cuda12-ffmpeg. Model config in helm modelsConfigs:
```yaml
bge-m3.yaml: |
  name: bge-m3
  backend: sentencetransformers
  embeddings: true
  parameters:
    model: BAAI/bge-m3
  context_size: 8192
  cuda: true
  f16: true
bge-reranker-v2-m3.yaml: |
  name: bge-reranker-v2-m3
  backend: rerankers
  parameters:
    model: BAAI/bge-reranker-v2-m3
  context_size: 8192
  cuda: true
  f16: true
```
local-ai log:
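To confirm that both configured models respond outside the chatbot flow, a minimal check like the sketch below can query them directly. This is only an illustration: the base URL, the `/v1/embeddings` and `/v1/rerank` paths, and the response fields are assumptions about a standard LocalAI deployment, not taken from this report.

```python
# Minimal health-check sketch (assumed endpoints and response fields; adjust as needed):
# - /v1/embeddings -> OpenAI-compatible embeddings for bge-m3
# - /v1/rerank     -> Jina-style rerank endpoint for bge-reranker-v2-m3
import requests

BASE = "http://localhost:8080"  # assumed LocalAI address

# Embedding model configured with the sentencetransformers backend.
emb = requests.post(
    f"{BASE}/v1/embeddings",
    json={"model": "bge-m3", "input": "what does a reranker do?"},
    timeout=60,
)
emb.raise_for_status()
print("embedding length:", len(emb.json()["data"][0]["embedding"]))

# Reranker model configured with the rerankers backend.
rr = requests.post(
    f"{BASE}/v1/rerank",
    json={
        "model": "bge-reranker-v2-m3",
        "query": "what does a reranker do?",
        "documents": [
            "A reranker reorders retrieved passages by relevance to the query.",
            "Helm is a package manager for Kubernetes.",
        ],
        "top_n": 2,
    },
    timeout=60,
)
rr.raise_for_status()
print("rerank results:", rr.json().get("results"))
```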
Hi @knaga35 I somehow solved this issue by deleting our existing knowledge base and then creating a new one.
@kyleli666
LocalAI version:
Environment, CPU architecture, OS, and Version:
Describe the bug
-- The first time, reranking is applied simply within the knowledge base.
-- The second time, reranking is applied in Multi-path Retrieval mode. (This is where the error seems to occur.)
To Reproduce
1. Create a knowledge base
   - Retrieval Setting: Vector Search
   - Rerank Model: BAAI/bge-reranker-v2-m3
2. Create a chatbot application
   - Add the knowledge base to the context.
   - Retrieval Setting: Rerank Model
   - Rerank Model: BAAI/bge-reranker-v2-m3
3. Enter content in the box to start debugging the chatbot.
Depending on the content of the question, some requests complete without any problem, but the following error can occur.
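To help narrow down whether the failure happens inside LocalAI or in the application's second (Multi-path Retrieval) pass, a rough sketch like the one below issues two consecutive rerank requests directly, the second with a larger merged candidate list similar to what a multi-path pass would send. The endpoint path, payload shape, and document contents are assumptions, not taken from this report.

```python
# Hypothetical "rerank twice" repro sketch against LocalAI's assumed
# Jina-compatible /v1/rerank endpoint; adjust URL/model to your deployment.
import requests

BASE = "http://localhost:8080"   # assumed LocalAI address
MODEL = "bge-reranker-v2-m3"

def rerank(query: str, documents: list[str], top_n: int = 5) -> dict:
    """Send one rerank request and return the parsed JSON response."""
    resp = requests.post(
        f"{BASE}/v1/rerank",
        json={"model": MODEL, "query": query, "documents": documents, "top_n": top_n},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()

query = "How is the reranker configured?"
kb_chunks = [f"knowledge-base passage {i}" for i in range(10)]

# First pass: rerank candidates from a single knowledge base.
first = rerank(query, kb_chunks)
print("first rerank results:", len(first.get("results", [])))

# Second pass: rerank the merged candidates from several knowledge bases,
# as a Multi-path Retrieval step would.
merged = kb_chunks + [f"second knowledge-base passage {i}" for i in range(10)]
second = rerank(query, merged)
print("second rerank results:", len(second.get("results", [])))
```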
Expected behavior
Logs
Additional context