Issue with Custom LLM Configuration: 404 Error and Request Flow #15318
Unanswered
sowjanyagunupuru asked this question in General
Replies: 1 comment · 2 replies
I have configured our custom-built LLM URL in the `customOpenAiModels` section of the settings.json file and selected it as the language model for the @coder agent. However, when I enter input and click "Send," I receive a 404 status code (no body). Since no API key is required to reach my URL, I have set `apiKey` to `true`:
"ai-features.openAiCustom.customOpenAiModels":[
{
"model": "pilot-demo",
"url": "https://pilot-demo.sample.com:2543/sample",
"id": "pilot-demo",
"apiKey": true,
"enableStreaming": true
}
]
When I use the same URL in Postman, it returns the expected response.
I would appreciate guidance on where the chat input is managed and how it can be modified before it is included in the request payload sent to the local LLM's API. I would also like to understand how the response is displayed.
Your suggestions would be invaluable in helping us move forward.
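For reference, here is a sketch of the exchange an OpenAI-compatible client performs, assuming the endpoint follows the standard chat-completions wire format (the message content below is purely illustrative). The OpenAI SDK appends `/chat/completions` to the configured base URL, so with the configuration above it would POST to `https://pilot-demo.sample.com:2543/sample/chat/completions` a body like:

```json
{
  "model": "pilot-demo",
  "messages": [
    { "role": "system", "content": "You are a coding assistant." },
    { "role": "user", "content": "Hello" }
  ],
  "stream": true
}
```

With `"stream": true` it then expects a stream of `chat.completion.chunk` events rather than a single JSON body. If the service only answers on `/sample` itself, the appended `/chat/completions` path could explain the 404 even though Postman, which hits the exact URL, succeeds.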
-
This is where we hand over the key to the OpenAI SDK. We always need to set a key, as the SDK throws an error otherwise. By setting `apiKey` to `true`, the globally configured OpenAI API key is handed to the SDK.

Can you check whether your URL is offering an OpenAI-compatible endpoint? They usually end in `/v1`.
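Following that suggestion, a minimal sketch of what the configuration could look like if the service exposes an OpenAI-compatible base path — the `/v1` suffix here is an assumption, not the poster's actual endpoint:

```json
"ai-features.openAiCustom.customOpenAiModels": [
  {
    "model": "pilot-demo",
    "url": "https://pilot-demo.sample.com:2543/v1",
    "id": "pilot-demo",
    "apiKey": true,
    "enableStreaming": true
  }
]
```

With this base URL the SDK would issue requests against `https://pilot-demo.sample.com:2543/v1/chat/completions`; if the service only answers on `/sample`, a thin proxy or compatibility layer exposing the chat-completions route would be needed.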