
Error reported: [xinference] Server Unavailable Error, Failed to rerank documents, detail: [address=192.168.1.29:57290, pid=11640] Error when sending message #3229

Open
Hiroshi230 opened this issue Apr 11, 2025 · 1 comment

Hiroshi230 commented Apr 11, 2025

System Info

The compute node runs on Windows Server 2019, and Dify runs in Docker inside a virtual machine. I need Dify in the VM to call port 9997 on the compute node. When configuring the rerank model in Dify, the error below is reported. Dify can load the embedding model without problems, and sentence-transformers is already < 4.0.0.
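
For reference, the rerank call Dify makes against the Xinference endpoint is roughly equivalent to the Python sketch below, assuming the standard /v1/rerank REST route; the model UID "bge-reranker-v2-m3", the query, and the documents are placeholders for whatever is actually deployed on 192.168.1.29:9997. Any such request triggers the error shown next.

    import requests

    # Placeholder model UID; substitute the rerank model registered on the server.
    resp = requests.post(
        "http://192.168.1.29:9997/v1/rerank",
        json={
            "model": "bge-reranker-v2-m3",
            "query": "test query",
            "documents": ["first candidate passage", "second candidate passage"],
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())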

[xinference] Server Unavailable Error, Failed to rerank documents, detail: [address=192.168.1.29:57290, pid=11640] Error when sending message 3a11c129d2bb66a0a835e4660d670d2f3dea936d90e9c44cbd331e36a3a00b0f.
Caused by PicklingError('Could not pickle object as excessively deep recursion required.').
Original error: RecursionError('maximum recursion depth exceeded')
Traceback:
  File "D:\anaconda\envs\xinference\lib\site-packages\xoscar\backends\pool.py", line 667, in send
    result = await self._run_coro(message.message_id, coro)
  File "D:\anaconda\envs\xinference\lib\site-packages\xoscar\backends\pool.py", line 370, in _run_coro
    return await coro
  File "D:\anaconda\envs\xinference\lib\site-packages\xoscar\api.py", line 384, in __on_receive__
    return await super().__on_receive__(message)  # type: ignore
  File "xoscar\core.pyx", line 558, in __on_receive__
  File "xoscar\core.pyx", line 520, in xoscar.core._BaseActor.__on_receive__
  File "xoscar\core.pyx", line 521, in xoscar.core._BaseActor.__on_receive__
  File "xoscar\core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__
  File "D:\anaconda\envs\xinference\lib\site-packages\xinference\core\model.py", line 106, in wrapped_func
    ret = await fn(self, *args, **kwargs)
  File "D:\anaconda\envs\xinference\lib\site-packages\xinference\core\utils.py", line 93, in wrapped
    ret = await func(*args, **kwargs)
  File "D:\anaconda\envs\xinference\lib\site-packages\xinference\core\model.py", line 916, in rerank
    return await self._call_wrapper_json(
  File "D:\anaconda\envs\xinference\lib\site-packages\xinference\core\model.py", line 641, in _call_wrapper_json
    return await self._call_wrapper("json", fn, *args, **kwargs)
  File "D:\anaconda\envs\xinference\lib\site-packages\xinference\core\model.py", line 141, in _async_wrapper
    return await fn(self, *args, **kwargs)
  File "D:\anaconda\envs\xinference\lib\site-packages\xinference\core\model.py", line 666, in _call_wrapper
    ret = await asyncio.to_thread(fn, *args, **kwargs)
  File "D:\anaconda\envs\xinference\lib\asyncio\threads.py", line 25, in to_thread
    return await loop.run_in_executor(None, func_call)
  File "D:\anaconda\envs\xinference\lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "D:\anaconda\envs\xinference\lib\site-packages\xinference\model\rerank\core.py", line 275, in rerank
    self._model.model.n_tokens = 0
  File "D:\anaconda\envs\xinference\lib\site-packages\torch\nn\modules\module.py", line 2032, in __setattr__
    super().__setattr__(name, value)
  File "D:\anaconda\envs\xinference\lib\site-packages\xinference\model\rerank\core.py", line 131, in n_tokens
    self._local_data.n_tokens = new_n_tokens
  File "D:\anaconda\envs\xinference\lib\site-packages\xinference\model\rerank\core.py", line 134, in __getattr__
    return getattr(self._module, attr)
  File "D:\anaconda\envs\xinference\lib\site-packages\xinference\model\rerank\core.py", line 134, in __getattr__
    return getattr(self._module, attr)
  File "D:\anaconda\envs\xinference\lib\site-packages\xinference\model\rerank\core.py", line 134, in __getattr__
    return getattr(self._module, attr)
  [Previous line repeate
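
The tail of the traceback shows a proxying __getattr__ in xinference/model/rerank/core.py re-entering itself. The snippet below is only an illustration of that general failure mode with hypothetical classes, not the actual Xinference code: when the wrapped object cannot be found through normal attribute lookup, the proxy's __getattr__ looks it up on itself, re-enters __getattr__, and recurses until RecursionError, which xoscar then reports as the PicklingError above when it tries to send the error back.

    # Illustration only (hypothetical classes), not the Xinference implementation.
    class BrokenProxy:
        def __getattr__(self, attr):
            # "_module" is not set on the instance, so this lookup falls back
            # to __getattr__ again and recurses without bound.
            return getattr(self._module, attr)

    class SafeProxy:
        def __init__(self, module):
            self._module = module

        def __getattr__(self, attr):
            # Read the wrapped object without re-triggering __getattr__ and
            # raise AttributeError instead of recursing when it is absent.
            module = self.__dict__.get("_module")
            if module is None:
                raise AttributeError(attr)
            return getattr(module, attr)

    try:
        BrokenProxy().n_tokens
    except RecursionError as exc:
        print("BrokenProxy:", exc)  # maximum recursion depth exceeded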

Running Xinference with Docker?

  • docker
  • pip install
  • installation from source

Version info

xinference 1.4.0

The command used to start Xinference

Xinference was started listening on 192.168.1.29:9997.

Reproduction

Expected behavior

XprobeBot added this to the v1.x milestone Apr 11, 2025
qinxuye (Contributor) commented Apr 14, 2025

Could you paste the error stack again? It's hard to read as posted.
