
Connected O1 model does not support image input #6382

Open
supersnoopy opened this issue Mar 17, 2025 · 8 comments
Labels
bug Something isn't working

Comments

@supersnoopy

📦 Deployment method

Docker

📌 Software version

v2.15.8

💻 System environment

Other Linux

📌 System version

CentOS 7.9

🌐 Browser

Safari

📌 Browser version

18.1

🐛 Problem description

After connecting the O1 model, image input is not supported when the O1 model is selected, while the already-connected gpt-4o does support image input.

📷 Reproduction steps

No response

🚦 Expected result

No response

📝 Additional information

No response

@supersnoopy supersnoopy added the bug Something isn't working label Mar 17, 2025

@QAbot-zh

You can use this feature:

Image
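The screenshot is not preserved here; judging from the author's follow-up, the suggestion is the `VISION_MODELS` environment variable. A minimal docker-compose sketch under that assumption (the service and image names are assumptions and may differ in your deployment):

```yaml
services:
  chatgpt-next-web:
    image: yidadaa/chatgpt-next-web   # image name may differ in your setup
    environment:
      - OPENAI_API_KEY=sk-xxxx        # placeholder key
      - VISION_MODELS=o1              # mark o1 as vision-capable, per the suggestion above
```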


@supersnoopy
Author

You can use this feature:

Image

Thank you for the guidance. After adding "- VISION_MODELS=o1" as described above, image input now works with the o1 model, but a new error appeared:
{
"error": {
"message": "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead. (request id: )",
"type": "invalid_request_error",
"param": "max_tokens",
"code": "unsupported_parameter"
}
}
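The error indicates that the o1 family rejects `max_tokens` and expects `max_completion_tokens` instead. A hedged sketch of the kind of request-building change this calls for (the names and structure here are illustrative, not NextChat's actual internals):

```typescript
// Illustrative chat-completions payload shape (simplified).
interface ChatPayload {
  model: string;
  messages: { role: string; content: string }[];
  max_tokens?: number;
  max_completion_tokens?: number;
}

// Build the payload, choosing the token-limit parameter by model family:
// o1-family models reject `max_tokens` and expect `max_completion_tokens`.
function buildPayload(
  model: string,
  messages: ChatPayload["messages"],
  limit: number
): ChatPayload {
  const payload: ChatPayload = { model, messages };
  if (model.startsWith("o1")) {
    payload.max_completion_tokens = limit;
  } else {
    payload.max_tokens = limit;
  }
  return payload;
}
```

With this branch in place, gpt-4o requests keep `max_tokens` while o1 requests send `max_completion_tokens`, avoiding the `unsupported_parameter` error above.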


@QAbot-zh

That won't work then; it's a bug in the program, so you'll have to wait for a fix.

Image

@supersnoopy
Author

That won't work then; it's a bug in the program, so you'll have to wait for a fix.

Image
OK, thank you.

@Davidlasky

Image

just split and add one more condition statement lol, works on mine
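The patch in the screenshot is not preserved, but a sketch of what "split and add one more condition" might look like: derive the model family from the full model name and branch on it (a hypothetical helper, not the actual patch):

```typescript
// Hypothetical helper: decide which token-limit parameter a model expects.
// "Split" the model name to get its family, then add one condition on it.
function usesMaxCompletionTokens(model: string): boolean {
  const family = model.split("-")[0]; // e.g. "o1-preview" -> "o1"
  return family === "o1";
}

// Callers would then set either `max_completion_tokens` or `max_tokens`
// on the outgoing request depending on this check.
```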


4 participants