I have a use case where I connect to different inference endpoints, each identified by its own base_url/api_key pair.
Is it possible to declare multiple inference endpoints in config.json?
Since we can already bridge several MCP servers, I would like to bridge to several OpenAI-API-compatible endpoints as well.
If not, could we pass the (base_url, api_key) pair at runtime to the completion routes?
If neither is possible, it seems I would need to put yet another bridge in front of MCP-Bridge just to aggregate multiple OpenAI-API-compatible endpoints, which feels like a lot of bridges...
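For illustration, here is roughly the config.json shape I have in mind. To be clear, the plural `inference_servers` list and the `prefix` field below are hypothetical, not existing options:

```json
{
  "inference_servers": [
    {
      "prefix": "ollama",
      "base_url": "http://localhost:11434/v1",
      "api_key": "unused"
    },
    {
      "prefix": "openai",
      "base_url": "https://api.openai.com/v1",
      "api_key": "sk-..."
    }
  ]
}
```

The runtime alternative could be as simple as accepting optional base_url/api_key fields (or headers) on the completion routes and falling back to the configured default endpoint when they are absent.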
When you design an experiment, you essentially provide a dataset, a model, and a set of metrics. A model can be any OpenAI-API-compatible endpoint the user wants to evaluate. I am currently working on integrating an MCP bridge so that tooling can be specified when defining a model within an experiment.
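To make that concrete, here is a minimal sketch of the experiment structure I have in mind. All names (`Experiment`, `ModelEndpoint`, the field names) are hypothetical, not from an existing library:

```python
from dataclasses import dataclass, field

@dataclass
class ModelEndpoint:
    """Any OpenAI-API-compatible endpoint, plus optional MCP tooling."""
    name: str
    base_url: str  # e.g. "http://localhost:8000/v1"
    api_key: str
    mcp_servers: list[str] = field(default_factory=list)  # tool servers bridged in

@dataclass
class Experiment:
    dataset: str                 # path or identifier of the eval dataset
    models: list[ModelEndpoint]  # endpoints to compare
    metrics: list[str]           # e.g. ["accuracy", "latency"]

# Example: evaluate one endpoint on a QA dataset
exp = Experiment(
    dataset="qa_eval.jsonl",
    models=[ModelEndpoint("gpt-4o", "https://api.openai.com/v1", "sk-...")],
    metrics=["accuracy"],
)
```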
+1 to model prefixing. My use case: I have Ollama behind my MCP-Bridge instance, and a chat client configured against both Ollama and MCP-Bridge. I want to distinguish the base models returned from Ollama's models endpoint from those returned from MCP-Bridge's.
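A minimal sketch of what prefixing could look like on the bridge side when it proxies the upstream /v1/models response (the `PREFIX` setting and the function name are hypothetical, not an existing MCP-Bridge option):

```python
import httpx

UPSTREAM = "http://localhost:11434/v1"  # hypothetical: Ollama behind the bridge
PREFIX = "mcp-bridge"                   # hypothetical config option

def list_prefixed_models() -> list[dict]:
    """Fetch the upstream /v1/models list and prefix each model id so a
    client can tell bridge-served models apart from Ollama's own."""
    resp = httpx.get(f"{UPSTREAM}/models", timeout=10.0)
    resp.raise_for_status()
    models = resp.json()["data"]
    for m in models:
        m["id"] = f"{PREFIX}/{m['id']}"  # e.g. "llama3" -> "mcp-bridge/llama3"
    return models
```

The chat client would then see both `llama3` (direct from Ollama) and `mcp-bridge/llama3` (tool-augmented via the bridge) as distinct entries.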