Add support for the llama-server service provided by llama.cpp #792
Comments
Please connect using the OpenAI-like service.
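For reference, a hedged sketch of what an OpenAI-compatible setup against a local llama-server might look like. The variable names, the service name, and the `-s` flag are assumptions drawn from pdf2zh's advanced documentation pattern and may differ between versions; verify against the docs for your installed release.

```shell
# Hypothetical sketch: point pdf2zh's OpenAI-compatible service at a
# local llama-server endpoint. Exact variable and service names may
# differ between pdf2zh versions -- check the advanced documentation.
export OPENAI_BASE_URL="http://127.0.0.1:8080/v1"
export OPENAI_API_KEY="sk-no-key-required"   # llama-server does not validate the key
export OPENAI_MODEL="qwen2.5"                # placeholder; llama-server serves whatever model it loaded

pdf2zh example.pdf -s openai
```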
I read the service's advanced documentation and tested the following approaches in a bash shell on Rocky Linux 9, without success; it kept hanging at the same point. First approach:
Second approach:
Previously, using the Ollama API worked fine.
Please provide some console output.
Environment:
● llama-qwen2.5.service - llama-qwen2.5.server
Mar 24 12:40:54 192.168.0.110 llama-server[4690]: <|im_start|>user
Activating the virtual environment:
Setting the environment variables:
Contents of config.json:
Command run:
Terminal output:
It looks like a connection timeout. Is connectivity to llama.cpp normal?
Connectivity to llama.cpp is normal: I can translate fine when connecting with Chatbox, and I can also access it and translate through the WebUI.
It connects and translates, but after translating 100-odd pages (the document is 600+ pages), the connection-timeout message appears again.
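To rule the endpoint in or out independently of any client, a minimal reachability probe against an OpenAI-compatible `/v1` base URL can be written with only the standard library. This is a diagnostic sketch, not part of pdf2zh; the base URL and model name are placeholders.

```python
import json
import urllib.error
import urllib.request

def check_openai_endpoint(base_url: str, timeout: float = 5.0) -> bool:
    """Send a tiny chat-completion request to an OpenAI-compatible
    endpoint (e.g. llama-server's /v1) and report whether it answered
    with HTTP 200 before the timeout."""
    payload = json.dumps({
        "model": "local",  # llama-server accepts any label here
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 1,
    }).encode()
    req = urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False
```

Running `check_openai_endpoint("http://127.0.0.1:8080/v1")` while a long translation is hanging can distinguish "server stopped answering" from "client gave up".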
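A timeout that only appears after a hundred-odd pages suggests transient stalls under sustained load rather than a broken endpoint. A generic mitigation on the client side is retrying with exponential backoff; the sketch below is illustrative and is not pdf2zh's actual retry logic.

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    """Call fn(); on a timeout, sleep with exponential backoff and
    retry, re-raising after the final attempt. Illustrative sketch
    only -- not taken from pdf2zh."""
    for i in range(attempts):
        try:
            return fn()
        except TimeoutError:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))  # 1s, 2s, 4s, ...
```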
Strange.
In what scenario do you need the requested feature?
When I try to use pdf2zh to connect to the API served by llama.cpp's llama-server at http://127.0.0.1:8080/v1, it always fails with an "unsupported" message. Please add this service; many thanks!
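For context, llama-server is typically started along these lines to expose the OpenAI-compatible endpoint used above. The model path and context size are placeholders, not values from this issue.

```shell
# Start llama.cpp's HTTP server on the host/port referenced above.
# The GGUF path is a placeholder; -c sets the context window size.
llama-server -m ./qwen2.5.gguf --host 127.0.0.1 --port 8080 -c 4096
```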
Proposed solution
No response
Additional context
No response