
Commit 221987d

Docs for v0.1.17 (#75)
Authored Feb 10, 2025 · 1 parent 1a93f17

* Add muxing docs
* Add OpenRouter endpoint docs
* Add Kodu docs

31 files changed: +805 -221 lines

‎docs/about/changelog.md

+17 -2

@@ -13,12 +13,27 @@ Major features and changes are noted here. To review all updates, see the

 Related: [Upgrade CodeGate](../how-to/install.md#upgrade-codegate)

-- **New integration: Open Interpreter** - xx Feb\
-  2025 CodeGate v0.1.16 introduces support for
+- **Model muxing** - 7 Feb, 2025\
+  With CodeGate v0.1.17 you can use the new `/v1/mux` endpoint to configure
+  model selection based on your workspace! Learn more in the
+  [model muxing guide](../features/muxing.md).
+
+- **OpenRouter endpoint** - 7 Feb, 2025\
+  CodeGate v0.1.17 adds a dedicated `/openrouter` provider endpoint for
+  OpenRouter users. This endpoint currently works with Continue, Cline, and
+  Kodu (Claude Coder).
+
+- **New integration: Open Interpreter** - 4 Feb, 2025\
+  CodeGate v0.1.16 added support for
   [Open Interpreter](https://github.com/openinterpreter/open-interpreter) with
   OpenAI-compatible APIs. Review the
   [integration guide](../integrations/open-interpreter.mdx) to get started.

+- **New integration: Claude Coder** - 28 Jan, 2025\
+  CodeGate v0.1.14 also introduced support for Kodu's
+  [Claude Coder](https://www.kodu.ai/extension) extension. See the
+  [integration guide](../integrations/kodu.mdx) to learn more.
+
 - **New integration: Cline** - 28 Jan, 2025\
   CodeGate version 0.1.14 adds support for [Cline](https://cline.bot/) with
   Anthropic, OpenAI, Ollama, and LM Studio. See the

‎docs/features/muxing.md

+129 (new file)

---
title: Model muxing
description: Configure a per-workspace LLM
sidebar_position: 35
---

## Overview

_Model muxing_ (or multiplexing) allows you to configure your AI assistant once
and use [CodeGate workspaces](./workspaces.mdx) to switch between LLM providers
and models without reconfiguring your development environment. This feature is
especially useful when you're working on multiple projects or tasks that
require different AI models.

For each CodeGate workspace, you can select the AI provider and model
combination you want to use. Then, configure your AI coding tool to use the
CodeGate muxing endpoint `http://localhost:8989/v1/mux` as an OpenAI-compatible
API provider.

To change the model currently in use, simply switch your active CodeGate
workspace.

```mermaid
flowchart LR
  Client(AI Assistant/Agent)
  CodeGate{CodeGate}
  WS1[Workspace-A]
  WS2[Workspace-B]
  WS3[Workspace-C]
  LLM1(OpenAI/<br>o3-mini)
  LLM2(Ollama/<br>deepseek-r1)
  LLM3(OpenRouter/<br>claude-35-sonnet)

  Client ---|/v1/mux| CodeGate
  CodeGate --> WS1
  CodeGate --> WS2
  CodeGate --> WS3
  WS1 --> |api| LLM1
  WS2 --> |api| LLM2
  WS3 --> |api| LLM3
```

## Use cases

- You have a project that requires a specific model for a particular task, but
  you also need to switch between different models during the course of your
  work.
- You want to experiment with different LLM providers and models without having
  to reconfigure your AI assistant/agent every time you switch.
- Your AI coding assistant doesn't support a particular provider or model that
  you want to use. CodeGate's muxing provides an OpenAI-compatible abstraction
  layer.
- You're working on a sensitive project and want to use a local model, but
  still have the flexibility to switch to hosted models for other work.
- You want to control your LLM provider spend by using lower-cost models for
  tasks that don't require the power of more advanced (and expensive) reasoning
  models.

## Configure muxing

To use muxing with your AI coding assistant, add one or more AI providers to
CodeGate, then select the model you want to use on a workspace.

CodeGate supports the following LLM providers for muxing:

- Anthropic
- llama.cpp
- LM Studio
- Ollama
- OpenAI (and compatible APIs)
- OpenRouter
- vLLM

### Add a provider

1. In the [CodeGate dashboard](http://localhost:9090), open the **Providers**
   page from the **Settings** menu.
1. Click **Add Provider**.
1. Enter a display name for the provider, then select the type from the
   drop-down list. The default endpoint and authentication type are filled in
   automatically.
1. If you are using a non-default endpoint, update the **Endpoint** value.
1. Optionally, add a **Description** for the provider.
1. If the provider requires authentication, select the **API Key**
   authentication option and enter your key.

When you save the settings, CodeGate connects to the provider to retrieve the
available models.

:::note

For locally hosted models, you must use `http://host.docker.internal` instead
of `http://localhost`. For example, enter `http://host.docker.internal:11434`
for a local Ollama server.

:::

### Select the model for a workspace

Open the settings of one of your [workspaces](./workspaces.mdx) from the
Workspace selection menu or the
[Manage Workspaces](http://localhost:9090/workspaces) screen.

In the **Preferred Model** section, select the model to use with the workspace.

### Manage existing providers

To edit a provider's settings, click the **Manage** button next to the provider
in the list. For providers that require authentication, you can leave the API
key field blank to preserve the current value.

To delete a provider, click the trash icon next to it. If the provider was in
use by any workspaces, you will need to update their settings to choose a
different provider/model.

### Refresh available models

To refresh the list of models available from a provider, open the Providers
list, click the **Manage** button next to the provider you want to refresh,
then save it without making any changes.

## Configure your client

Configure the OpenAI-compatible API base URL of your AI coding assistant/agent
to `http://localhost:8989/v1/mux`. If your client requires a model name and/or
API key, you can enter any values since CodeGate manages the model selection
and authentication.

For specific instructions, see the
[integration guide](../integrations/index.mdx) for your client.
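
Because the muxing endpoint presents an OpenAI-compatible API, you can smoke-test it from a terminal before wiring up your client. This is a minimal sketch, assuming the endpoint exposes the standard OpenAI `chat/completions` route under the base URL and that your active workspace has a model selected; the model and key values are placeholders that CodeGate ignores:

```bash
# Send a chat request through the muxing endpoint; CodeGate routes it to the
# provider/model configured for the currently active workspace.
curl http://localhost:8989/v1/mux/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer fake-value-not-used" \
  -d '{
    "model": "fake-value-not-used",
    "messages": [{"role": "user", "content": "codegate version"}]
  }'
```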

‎docs/features/workspaces.mdx

+9 -4

@@ -25,9 +25,13 @@ Workspaces offer several key features:

 - **Custom instructions**: Customize your interactions with LLMs by augmenting
   your AI assistant's system prompt, enabling tailored responses and behaviors
-  for different types of tasks. CodeGate includes a library of community prompts
-  that can be easily customized for specific tasks. You can also create your
-  own.
+  for different types of tasks. Choose from CodeGate's library of community
+  prompts or create your own.
+
+- [**Model muxing**](./muxing.md): Configure the LLM provider/model for each
+  workspace, allowing you to configure your AI assistant/agent once and switch
+  between different models on the fly. This is useful when working on multiple
+  projects or tasks that require different AI models.

 - **Prompt and alert history**: Your LLM interactions (prompt history) and
   CodeGate security detections (alert history) are recorded in the active

@@ -112,7 +116,8 @@ In the workspace list, open the menu (**...**) next to a workspace to
 **Activate**, **Edit**, or **Archive** the workspace.

 **Edit** opens the workspace settings page. From here you can rename the
-workspace, set the custom prompt instructions, or archive the workspace.
+workspace, select the LLM provider and model (see [Model muxing](./muxing.md)),
+set the custom prompt instructions, or archive the workspace.

 **Archived** workspaces can be restored or permanently deleted from the
 workspace list or workspace settings screen.

‎docs/how-to/configure.md

+9 -17

@@ -1,14 +1,15 @@
 ---
-title: Configure CodeGate
+title: Advanced configuration
 description: Customizing CodeGate's application settings
-sidebar_position: 20
+sidebar_position: 30
 ---

 ## Customize CodeGate's behavior

-The CodeGate container runs with default settings to support Ollama, Anthropic,
-and OpenAI APIs with typical settings. To customize the behavior, you can add
-extra configuration parameters to the container as environment variables:
+The CodeGate container runs with defaults that work with supported LLM providers
+using typical settings. To customize CodeGate's application settings like
+provider endpoints and logging level, you can add extra configuration parameters
+to the container as environment variables:

 ```bash {2}
 docker run --name codegate -d -p 8989:8989 -p 9090:9090 \

@@ -31,22 +32,13 @@ CodeGate supports the following parameters:
 | `CODEGATE_OPENAI_URL` | `https://api.openai.com/v1` | Specifies the OpenAI engine API endpoint URL. |
 | `CODEGATE_VLLM_URL` | `http://localhost:8000` | Specifies the URL of the vLLM server to use. |

-## Example: Use CodeGate with OpenRouter
+## Example: Use CodeGate with a remote Ollama server

-[OpenRouter](https://openrouter.ai/) is an interface to many large language
-models. CodeGate's vLLM provider works with OpenRouter's API when used with the
-Continue IDE plugin.
-
-To use OpenRouter, set the vLLM URL when you launch CodeGate:
+Set the Ollama server's URL when you launch CodeGate:

 ```bash {2}
 docker run --name codegate -d -p 8989:8989 -p 9090:9090 \
-  -e CODEGATE_VLLM_URL=https://openrouter.ai/api \
+  -e CODEGATE_OLLAMA_URL=https://my.ollama-server.example \
   --mount type=volume,src=codegate_volume,dst=/app/codegate_volume \
   --restart unless-stopped ghcr.io/stacklok/codegate
 ```
-
-Then,
-[configure the Continue IDE plugin](../integrations/continue.mdx?provider=vllm)
-to use CodeGate's vLLM endpoint (`http://localhost:8989/vllm`) along with the
-model you'd like to use and your OpenRouter API key.
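
To confirm the container picked up the new parameter, you can check its environment with a standard Docker command; this is generic Docker usage, not a CodeGate-specific tool:

```bash
# Print the environment variables the codegate container was started with;
# CODEGATE_OLLAMA_URL should appear in the output.
docker inspect codegate --format '{{range .Config.Env}}{{println .}}{{end}}'
```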

‎docs/how-to/dashboard.md

+1 -1

@@ -1,7 +1,7 @@
 ---
 title: Access the dashboard
 description: View alerts and usage history
-sidebar_position: 30
+sidebar_position: 20
 ---

 ## Enable dashboard access

‎docs/index.md

+20 -3

@@ -31,6 +31,20 @@ sequenceDiagram
     deactivate CodeGate
 ```

+## Key features
+
+CodeGate includes several key features for privacy, security, and coding
+efficiency, including:
+
+- [Secrets encryption](./features/secrets-encryption.md) to protect your
+  sensitive credentials
+- [Dependency risk awareness](./features/dependency-risk.md) to update the
+  LLM's knowledge of malicious or deprecated open source packages
+- [Model muxing](./features/muxing.md) to quickly select the best LLM
+  provider/model for your current task
+- [Workspaces](./features/workspaces.mdx) to organize and customize your LLM
+  interactions
+
 ## Supported environments

 CodeGate supports several development environments and AI providers.

@@ -41,20 +55,23 @@ AI coding assistants / IDEs:

 - **[Cline](./integrations/cline.mdx)** in Visual Studio Code

-  CodeGate supports Ollama, Anthropic, OpenAI-compatible APIs, and LM Studio
-  with Cline
+  CodeGate supports Ollama, Anthropic, OpenAI and compatible APIs, OpenRouter,
+  and LM Studio with Cline

 - **[Continue](./integrations/continue.mdx)** with Visual Studio Code and
   JetBrains IDEs

   CodeGate supports the following AI model providers with Continue:

   - Local / self-managed: Ollama, llama.cpp, vLLM
-  - Hosted: Anthropic, OpenAI and OpenAI-compatible APIs like OpenRouter
+  - Hosted: Anthropic, OpenAI and compatible APIs, and OpenRouter

 - **[GitHub Copilot](./integrations/copilot.mdx)** with Visual Studio Code
   (JetBrains coming soon!)

+- **[Kodu / Claude Coder](./integrations/kodu.mdx)** in Visual Studio Code with
+  OpenAI-compatible APIs
+
 - **[Open Interpreter](./integrations/open-interpreter.mdx)** with
   OpenAI-compatible APIs

‎docs/integrations/aider.mdx

+3

@@ -17,6 +17,9 @@ CodeGate works with the following AI model providers through aider:
 - Hosted:
   - [OpenAI](https://openai.com/api/) and OpenAI-compatible APIs

+You can also configure [CodeGate muxing](../features/muxing.md) to select your
+provider and model using [workspaces](../features/workspaces.mdx).
+
 :::note

 This guide assumes you have already installed aider using their

‎docs/integrations/cline.mdx

+37 -4

@@ -18,7 +18,11 @@ CodeGate works with the following AI model providers through Cline:
   - [LM Studio](https://lmstudio.ai/)
 - Hosted:
   - [Anthropic](https://www.anthropic.com/api)
-  - [OpenAI](https://openai.com/api/) and OpenAI-compatible APIs
+  - [OpenAI](https://openai.com/api/) and compatible APIs
+  - [OpenRouter](https://openrouter.ai/)
+
+You can also configure [CodeGate muxing](../features/muxing.md) to select your
+provider and model using [workspaces](../features/workspaces.mdx).

 ## Install the Cline extension

@@ -42,10 +46,36 @@ in the VS Code documentation.

 import ClineProviders from '../partials/_cline-providers.mdx';

+:::note
+
+Cline has two modes: Plan and Act. Each mode can be uniquely configured with a
+different provider and model, so you need to configure both.
+
+:::
+
 To configure Cline to send requests through CodeGate:

-1. Open the Cline extension sidebar from the VS Code Activity Bar and open its
-   settings using the gear icon.
+1. Open the Cline extension sidebar from the VS Code Activity Bar. Note your
+   current mode, Plan or Act.
+
+   <ThemedImage
+     alt='Cline mode - plan'
+     sources={{
+       light: useBaseUrl('/img/integrations/cline-mode-plan-light.webp'),
+       dark: useBaseUrl('/img/integrations/cline-mode-plan-dark.webp'),
+     }}
+     width={'400px'}
+   />
+   <ThemedImage
+     alt='Cline mode - act'
+     sources={{
+       light: useBaseUrl('/img/integrations/cline-mode-act-light.webp'),
+       dark: useBaseUrl('/img/integrations/cline-mode-act-dark.webp'),
+     }}
+     width={'400px'}
+   />
+
+1. Open the Cline settings using the gear icon.

    <ThemedImage
      alt='Cline extension settings'

@@ -60,7 +90,10 @@ To configure Cline to send requests through CodeGate:

    <ClineProviders />

-1. Click **Done** to save the settings.
+1. Click **Done** to save the settings for your current mode.
+
+1. Switch your Cline mode from Act to Plan or vice-versa, open the settings, and
+   repeat the configuration for your desired provider & model.

 ## Verify configuration

‎docs/integrations/continue.mdx

+132 -86

@@ -18,12 +18,15 @@ CodeGate works with the following AI model providers through Continue:

 - Local / self-managed:
   - [Ollama](https://ollama.com/)
-  - [llama.cpp](https://github.com/ggerganov/llama.cpp)
   - [vLLM](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html)
+  - [llama.cpp](https://github.com/ggerganov/llama.cpp) (advanced)
 - Hosted:
-  - [OpenRouter](https://openrouter.ai/)
   - [Anthropic](https://www.anthropic.com/api)
   - [OpenAI](https://openai.com/api/)
+  - [OpenRouter](https://openrouter.ai/)
+
+You can also configure [CodeGate muxing](../features/muxing.md) to select your
+provider and model using [workspaces](../features/workspaces.mdx).

 ## Install the Continue plugin

@@ -92,8 +95,9 @@ To configure Continue to send requests through CodeGate:
    "apiBase": "http://127.0.0.1:8989/<provider>"
    ```

-   Replace `/<provider>` with one of: `/anthropic`, `/ollama`, `/openai`, or
-   `/vllm` to match your LLM provider.
+   Replace `/<provider>` with one of: `/v1/mux` (for CodeGate muxing),
+   `/anthropic`, `/ollama`, `/openai`, `/openrouter`, or `/vllm` to match your
+   LLM provider.

    If you used a different API port when launching the CodeGate container,
    replace `8989` with your custom port number.

@@ -111,49 +115,37 @@ provider. Replace the values in ALL_CAPS. The configuration syntax is the same
 for VS Code and JetBrains IDEs.

 <Tabs groupId="provider" queryString="provider">
-<TabItem value="ollama" label="Ollama" default>
-
-You need Ollama installed on your local system with the server running
-(`ollama serve`) to use this provider.
-
-CodeGate connects to `http://host.docker.internal:11434` by default. If you
-changed the default Ollama server port or want to connect to a remote Ollama
-instance, launch CodeGate with the `CODEGATE_OLLAMA_URL` environment variable
-set to the correct URL. See [Configure CodeGate](../how-to/configure.md).
-
-Replace `MODEL_NAME` with the names of model(s) you have installed locally using
-`ollama pull`. See Continue's
-[Ollama provider documentation](https://docs.continue.dev/customize/model-providers/ollama).
-
-We recommend the [Qwen2.5-Coder](https://ollama.com/library/qwen2.5-coder)
-series of models. Our minimum recommendation is:
-
-- `qwen2.5-coder:7b` for chat
-- `qwen2.5-coder:1.5b` for autocomplete
-
-These models balance performance and quality for typical systems with at least 4
-CPU cores and 16GB of RAM. If you have more compute resources available, our
-experimentation shows that larger models do yield better results.
+<TabItem value="mux" label="CodeGate muxing" default>
+
+First, configure your [provider(s)](../features/muxing.md#add-a-provider) and
+select a model for each of your
+[workspace(s)](../features/workspaces.mdx#manage-workspaces) in the CodeGate
+dashboard.
+
+Configure Continue as shown. Note: the `model` and `apiKey` settings are
+required by Continue, but their values are not used.

 ```json title="~/.continue/config.json"
 {
   "models": [
     {
-      "title": "CodeGate-Ollama",
-      "provider": "ollama",
-      "model": "MODEL_NAME",
-      "apiBase": "http://localhost:8989/ollama"
+      "title": "CodeGate-Mux",
+      "provider": "openai",
+      "model": "fake-value-not-used",
+      "apiKey": "fake-value-not-used",
+      "apiBase": "http://localhost:8989/v1/mux"
     }
   ],
   "modelRoles": {
-    "default": "CodeGate-Ollama",
-    "summarize": "CodeGate-Ollama"
+    "default": "CodeGate-Mux",
+    "summarize": "CodeGate-Mux"
   },
   "tabAutocompleteModel": {
-    "title": "CodeGate-Ollama-Autocomplete",
-    "provider": "ollama",
-    "model": "MODEL_NAME",
-    "apiBase": "http://localhost:8989/ollama"
+    "title": "CodeGate-Mux-Autocomplete",
+    "provider": "openai",
+    "model": "fake-value-not-used",
+    "apiKey": "fake-value-not-used",
+    "apiBase": "http://localhost:8989/v1/mux"
   }
 }
 ```

@@ -195,6 +187,54 @@ Replace `YOUR_API_KEY` with your
 }
 ```

+</TabItem>
+<TabItem value="ollama" label="Ollama">
+
+You need Ollama installed on your local system with the server running
+(`ollama serve`) to use this provider.
+
+CodeGate connects to `http://host.docker.internal:11434` by default. If you
+changed the default Ollama server port or want to connect to a remote Ollama
+instance, launch CodeGate with the `CODEGATE_OLLAMA_URL` environment variable
+set to the correct URL. See [Configure CodeGate](../how-to/configure.md).
+
+Replace `MODEL_NAME` with the names of model(s) you have installed locally using
+`ollama pull`. See Continue's
+[Ollama provider documentation](https://docs.continue.dev/customize/model-providers/ollama).
+
+We recommend the [Qwen2.5-Coder](https://ollama.com/library/qwen2.5-coder)
+series of models. Our minimum recommendation is:
+
+- `qwen2.5-coder:7b` for chat
+- `qwen2.5-coder:1.5b` for autocomplete
+
+These models balance performance and quality for typical systems with at least 4
+CPU cores and 16GB of RAM. If you have more compute resources available, our
+experimentation shows that larger models do yield better results.
+
+```json title="~/.continue/config.json"
+{
+  "models": [
+    {
+      "title": "CodeGate-Ollama",
+      "provider": "ollama",
+      "model": "MODEL_NAME",
+      "apiBase": "http://localhost:8989/ollama"
+    }
+  ],
+  "modelRoles": {
+    "default": "CodeGate-Ollama",
+    "summarize": "CodeGate-Ollama"
+  },
+  "tabAutocompleteModel": {
+    "title": "CodeGate-Ollama-Autocomplete",
+    "provider": "ollama",
+    "model": "MODEL_NAME",
+    "apiBase": "http://localhost:8989/ollama"
+  }
+}
+```
+
 </TabItem>
 <TabItem value="openai" label="OpenAI">

@@ -234,13 +274,26 @@ Replace `YOUR_API_KEY` with your
 </TabItem>
 <TabItem value="openrouter" label="OpenRouter">

-CodeGate's vLLM provider supports OpenRouter, a unified interface for hundreds
-of commercial and open source models. You need an
-[OpenRouter](https://openrouter.ai/) account to use this provider.
+OpenRouter is a unified interface for hundreds of commercial and open source
+models. You need an [OpenRouter](https://openrouter.ai/) account to use this
+provider.
+
+:::info Known issues
+
+**Auto-completion support**: currently, CodeGate's `/openrouter` endpoint does
+not work with Continue's `tabAutocompleteModel` setting for fill-in-the-middle
+(FIM). We are
+[working to resolve this issue](https://github.com/stacklok/codegate/issues/980).
+
+**DeepSeek models**: there is a bug in the current release version of Continue
+affecting DeepSeek models (ex: `deepseek/deepseek-r1`); you need to run the
+pre-release version of the Continue extension.
+
+:::

 Replace `MODEL_NAME` with one of the
 [available models](https://openrouter.ai/models), for example
-`qwen/qwen-2.5-coder-32b-instruct`.
+`anthropic/claude-3.5-sonnet`.

 Replace `YOUR_API_KEY` with your
 [OpenRouter API key](https://openrouter.ai/keys).

@@ -250,20 +303,56 @@ Replace `YOUR_API_KEY` with your
   "models": [
     {
       "title": "CodeGate-OpenRouter",
-      "provider": "vllm",
+      "provider": "openrouter",
       "model": "MODEL_NAME",
       "apiKey": "YOUR_API_KEY",
-      "apiBase": "http://localhost:8989/vllm"
+      "apiBase": "http://localhost:8989/openrouter"
     }
   ],
   "modelRoles": {
     "default": "CodeGate-OpenRouter",
     "summarize": "CodeGate-OpenRouter"
+  }
+}
+```
+
+</TabItem>
+<TabItem value="vllm" label="vLLM">
+
+You need a
+[vLLM server](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html)
+running locally or access to a remote server to use this provider.
+
+CodeGate connects to `http://localhost:8000` by default. If you changed the
+default vLLM server port or want to connect to a remote vLLM instance, launch
+CodeGate with the `CODEGATE_VLLM_URL` environment variable set to the correct
+URL. See [Configure CodeGate](../how-to/configure.md).
+
+A vLLM server hosts a single model. Continue automatically selects the available
+model, so the `model` parameter is not required. See Continue's
+[vLLM provider guide](https://docs.continue.dev/customize/model-providers/more/vllm)
+for more information.
+
+If your server requires an API key, replace `YOUR_API_KEY` with the key.
+Otherwise, remove the `apiKey` parameter from both sections.
+
+```json title="~/.continue/config.json"
+{
+  "models": [
+    {
+      "title": "CodeGate-vLLM",
+      "provider": "vllm",
+      "apiKey": "YOUR_API_KEY",
+      "apiBase": "http://localhost:8989/vllm"
+    }
+  ],
+  "modelRoles": {
+    "default": "CodeGate-vLLM",
+    "summarize": "CodeGate-vLLM"
   },
   "tabAutocompleteModel": {
-    "title": "CodeGate-OpenRouter-Autocomplete",
+    "title": "CodeGate-vLLM-Autocomplete",
     "provider": "vllm",
-    "model": "MODEL_NAME",
     "apiKey": "YOUR_API_KEY",
     "apiBase": "http://localhost:8989/vllm"
   }

@@ -331,49 +420,6 @@ In the Continue config file, replace `MODEL_NAME` with the file name without the
 }
 ```

 _(The lines removed here were the old vLLM tab, identical to the vLLM section
 added above; it moved up in the tab list.)_

 </TabItem>
 </Tabs>

‎docs/integrations/copilot.mdx

+1 -1

@@ -235,7 +235,7 @@ Copilot chat and type `codegate version`. You should receive a response like
 "CodeGate version 0.1.13".

 <ThemedImage
-  alt='Verify Continue integration'
+  alt='Verify Copilot integration'
   sources={{
     light: useBaseUrl('/img/integrations/copilot-codegate-version-light.webp'),
     dark: useBaseUrl('/img/integrations/copilot-codegate-version-dark.webp'),

‎docs/integrations/kodu.mdx

+126 (new file)

---
title: Use CodeGate with Claude Coder by Kodu.ai
description: Configure the Kodu / Claude Coder extension for VS Code
sidebar_label: Kodu / Claude Coder
sidebar_position: 60
---

import useBaseUrl from '@docusaurus/useBaseUrl';
import ThemedImage from '@theme/ThemedImage';

[Claude Coder](https://www.kodu.ai/extension) by Kodu.ai is an AI coding agent
extension for Visual Studio Code that can help programmers of all skill levels
take their project from idea to execution.

CodeGate supports OpenAI-compatible APIs and OpenRouter with Claude Coder.

You can also configure [CodeGate muxing](../features/muxing.md) to select your
provider and model using [workspaces](../features/workspaces.mdx).

## Install the Claude Coder extension

The Claude Coder extension is available in the
[Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=kodu-ai.claude-dev-experimental).

Install the extension using the **Install** link on the Marketplace page or
search for "Claude Coder" or "Kodu" in the Extensions panel within VS Code.

You can also install from the CLI:

```bash
code --install-extension kodu-ai.claude-dev-experimental
```

If you need help, see
[Managing Extensions](https://code.visualstudio.com/docs/editor/extension-marketplace)
in the VS Code documentation.

## Configure Claude Coder to use CodeGate

import KoduProviders from '../partials/_kodu-providers.mdx';

1. Open the Claude Coder extension sidebar from the VS Code Activity Bar and
   open its settings using the gear icon.

1. On the **Preferences** tab, scroll down and click the link next to "Want to
   use a custom provider?"

   <ThemedImage
     alt='Claude Coder extension settings'
     sources={{
       light: useBaseUrl('/img/integrations/kodu-settings-light.webp'),
       dark: useBaseUrl('/img/integrations/kodu-settings-dark.webp'),
     }}
     width={'480px'}
   />

1. Select your provider and configure it as detailed here:

   <KoduProviders />

1. Click **Save Settings** and confirm that you want to apply the model.

## Verify configuration

To verify that you've successfully connected Claude Coder / Kodu to CodeGate,
start a new task in the Claude Coder sidebar and type `codegate version`. You
should receive a response like "CodeGate version: v0.1.15":

<ThemedImage
  alt='Claude Coder verification'
  sources={{
    light: useBaseUrl('/img/integrations/kodu-codegate-version-light.webp'),
    dark: useBaseUrl('/img/integrations/kodu-codegate-version-dark.webp'),
  }}
  width={'520px'}
/>

Start a new task and try asking CodeGate about a known malicious Python package:

```plain title="Claude Coder chat"
Tell me how to use the invokehttp package from PyPI
```

CodeGate responds with a warning and a link to the Stacklok Insight report about
this package:

```plain title="Claude Coder chat"
Warning: CodeGate detected one or more malicious, deprecated or archived packages.

• invokehttp: https://www.insight.stacklok.com/report/pypi/invokehttp

The `invokehttp` package from PyPI has been identified as malicious and should
not be used. Please avoid using this package and consider using a trusted
alternative such as `requests` for making HTTP requests in Python.

Here is an example of how to use the `requests` package:

...
```

## Next steps

Learn more about CodeGate's features and how to use them:

- [Access the dashboard](../how-to/dashboard.md)
- [CodeGate features](../features/index.mdx)

## Remove CodeGate

If you decide to stop using CodeGate, follow these steps to remove it and revert
your environment.

1. Remove the custom base URL from your Claude Coder provider settings.

1. Stop and remove the CodeGate container:

   ```bash
   docker stop codegate && docker rm codegate
   ```

1. If you launched CodeGate with a persistent volume, delete it to remove the
   CodeGate database and other files:

   ```bash
   docker volume rm codegate_volume
   ```

‎docs/integrations/open-interpreter.mdx

+56 -20

@@ -11,9 +11,12 @@ import TabItem from '@theme/TabItem';
 [Open Interpreter](https://github.com/openinterpreter/open-interpreter) lets
 LLMs run code locally through a ChatGPT-like interface in your terminal.

-CodeGate works with [OpenAI](https://openai.com/api/) and OpenAI-compatible APIs
+CodeGate works with [OpenAI](https://openai.com/api/) and compatible APIs
 through Open Interpreter.

+You can also configure [CodeGate muxing](../features/muxing.md) to select your
+provider and model using [workspaces](../features/workspaces.mdx).
+
 :::note

 This guide assumes you have already installed Open Interpreter using their

@@ -26,34 +29,67 @@
 To configure Open Interpreter to send requests through CodeGate, run
 `interpreter` with the
 [API base setting](https://docs.openinterpreter.com/settings/all-settings#api-base)
-set to CodeGate's local API port, `http://localhost:8989/openai`.
+set to CodeGate's local API port, `http://localhost:8989/<provider>`.

-By default, CodeGate connects to the [OpenAI API](https://openai.com/api/). To
-use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
-[configuration parameter](../how-to/configure.md#config-parameters) when you run
-CodeGate.
-
-<Tabs>
-<TabItem value="current" label="Open Interpreter v0.4.x" default>
-```bash
-interpreter --api_base http://localhost:8989/openai --api_key YOUR_API_KEY --model MODEL_NAME
-```
-
-</TabItem>
-<TabItem value="dev" label="v1.0 dev branch">
-If you are running Open Interpreter's v1.0
-[development branch](https://github.com/OpenInterpreter/open-interpreter/tree/development):
-
-```bash
-interpreter --api-base http://localhost:8989/openai --api-key YOUR_API_KEY --model MODEL_NAME
-```
-
-</TabItem>
-</Tabs>
+<Tabs groupId="provider" queryString="provider">
+<TabItem value="mux" label="CodeGate muxing" default>
+First, configure your [provider(s)](../features/muxing.md#add-a-provider) and
+select a model for each of your
+[workspace(s)](../features/workspaces.mdx#manage-workspaces) in the CodeGate
+dashboard.
+
+When you run `interpreter`, the API key parameter is required but the value is
+not used. The `--model` setting must start with `openai/` but the actual model
+is determined by your CodeGate workspace.
+
+<Tabs groupId="interpreter-version">
+<TabItem value="current" label="Open Interpreter v0.4.x" default>
+```bash
+interpreter --api_base http://localhost:8989/v1/mux --api_key fake-value-not-used --model openai/fake-value-not-used
+```
+
+</TabItem>
+<TabItem value="dev" label="v1.0 dev branch">
+If you are running Open Interpreter's v1.0
+[development branch](https://github.com/OpenInterpreter/open-interpreter/tree/development):
+
+```bash
+interpreter --api-base http://localhost:8989/v1/mux --api-key fake-value-not-used --model openai/fake-value-not-used
+```
+
+</TabItem>
+</Tabs>
+</TabItem>
+<TabItem value="openai" label="OpenAI">
+You need an [OpenAI API](https://openai.com/api/) account to use this provider.
+To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
+[configuration parameter](../how-to/configure.md) when you launch CodeGate.
+
+<Tabs groupId="interpreter-version">
+<TabItem value="current" label="Open Interpreter v0.4.x" default>
+```bash
+interpreter --api_base http://localhost:8989/openai --api_key YOUR_API_KEY --model MODEL_NAME
+```
+
+</TabItem>
+<TabItem value="dev" label="v1.0 dev branch">
+If you are running Open Interpreter's v1.0
+[development branch](https://github.com/OpenInterpreter/open-interpreter/tree/development):
+
+```bash
+interpreter --api-base http://localhost:8989/openai --api-key YOUR_API_KEY --model MODEL_NAME
+```
+
+</TabItem>
+</Tabs>

 Replace `YOUR_API_KEY` with your OpenAI API key, and `MODEL_NAME` with your
 desired model, like `openai/gpt-4o-mini`.

+</TabItem>
+</Tabs>
+
 :::info

 The `--model` parameter value must start with `openai/` for CodeGate to properly

‎docs/partials/_aider-providers.mdx

+62 -44

@@ -5,56 +5,24 @@ import TabItem from '@theme/TabItem';

 import LocalModelRecommendation from './_local-model-recommendation.md';

-<Tabs groupId="aider-provider" queryString="provider">
-<TabItem value="openai" label="OpenAI" default>
-
-You need an [OpenAI API](https://openai.com/api/) account to use this provider.
-To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
-[configuration parameter](../how-to/configure.md#config-parameters).
-
-Before you run aider, set environment variables for your API key and to set the
-API base URL to CodeGate's API port. Alternately, use one of aider's other
-[supported configuration methods](https://aider.chat/docs/config/api-keys.html)
-to set the corresponding values.
-
-<Tabs groupId="os">
-<TabItem value="macos" label="macOS / Linux" default>
-
-```bash
-export OPENAI_API_KEY=<YOUR_API_KEY>
-export OPENAI_API_BASE=http://localhost:8989/openai
-```
-
-:::note
-
-To persist these variables, add them to your shell profile (e.g., `~/.bashrc` or
-`~/.zshrc`).
-
-:::
-
-</TabItem>
-<TabItem value="windows" label="Windows">
-
-```bash
-setx OPENAI_API_KEY <YOUR_API_KEY>
-setx OPENAI_API_BASE http://localhost:8989/openai
-```
-
-:::note
-
-Restart your shell after running `setx`.
-
-:::
-
-</TabItem>
-</Tabs>
-
-Replace `<YOUR_API_KEY>` with your
-[OpenAI API key](https://platform.openai.com/api-keys).
-
-Then run `aider` as normal. For more information, see the
-[aider docs for connecting to OpenAI](https://aider.chat/docs/llms/openai.html).
+<Tabs groupId="provider" queryString="provider">
+<TabItem value="mux" label="CodeGate muxing" default>
+First, configure your [provider(s)](../features/muxing.md#add-a-provider) and
+select a model for each of your
+[workspace(s)](../features/workspaces.mdx#manage-workspaces) in the CodeGate
+dashboard.
+
+Run aider with the OpenAI base URL set to `http://localhost:8989/v1/mux`. You
+can do this with the `OPENAI_API_BASE` environment variable or on the command
+line as shown below.
+
+The `--openai-api-key` parameter is required but the value is not used. The
+`--model` setting must start with `openai/` but the actual model is determined
+by your CodeGate workspace.
+
+```bash
+aider --openai-api-base http://localhost:8989/v1/mux --openai-api-key fake-value-not-used --model openai/fake-value-not-used
+```

 </TabItem>
 <TabItem value="ollama" label="Ollama">

@@ -116,5 +84,55 @@ locally using `ollama pull`.
 For more information, see the
 [aider docs for connecting to Ollama](https://aider.chat/docs/llms/ollama.html).

+</TabItem>
+<TabItem value="openai" label="OpenAI">
+
+You need an [OpenAI API](https://openai.com/api/) account to use this provider.
+To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
+[configuration parameter](../how-to/configure.md#config-parameters).
+
+Before you run aider, set environment variables for your API key and the API
+base URL, pointing aider to CodeGate's API port. Alternately, use one of
+aider's other
+[supported configuration methods](https://aider.chat/docs/config/api-keys.html)
+to set the corresponding values.
+
+<Tabs groupId="os">
+<TabItem value="macos" label="macOS / Linux" default>
+
+```bash
+export OPENAI_API_KEY=<YOUR_API_KEY>
+export OPENAI_API_BASE=http://localhost:8989/openai
+```
+
+:::note
+
+To persist these variables, add them to your shell profile (e.g., `~/.bashrc` or
+`~/.zshrc`).
+
+:::
+
+</TabItem>
+<TabItem value="windows" label="Windows">
+
+```bash
+setx OPENAI_API_KEY <YOUR_API_KEY>
+setx OPENAI_API_BASE http://localhost:8989/openai
+```
+
+:::note
+
+Restart your shell after running `setx`.
+
+:::
+
+</TabItem>
+</Tabs>
+
+Replace `<YOUR_API_KEY>` with your
+[OpenAI API key](https://platform.openai.com/api-keys).
+
+Then run `aider` as normal. For more information, see the
+[aider docs for connecting to OpenAI](https://aider.chat/docs/llms/openai.html).
+
 </TabItem>
 </Tabs>

‎docs/partials/_cline-providers.mdx

+81 -39

@@ -7,8 +7,28 @@ import ThemedImage from '@theme/ThemedImage';

 import LocalModelRecommendation from './_local-model-recommendation.md';

-<Tabs groupId="cline-provider" queryString="provider">
-<TabItem value="anthropic" label="Anthropic" default>
+<Tabs groupId="provider" queryString="provider">
+<TabItem value="mux" label="CodeGate muxing" default>
+First, configure your [provider(s)](../features/muxing.md#add-a-provider) and
+select a model for each of your
+[workspace(s)](../features/workspaces.mdx#manage-workspaces) in the CodeGate
+dashboard.
+
+In the Cline settings, choose **OpenAI Compatible** as your provider. Set the
+**Base URL** to `http://localhost:8989/v1/mux`. Enter anything you want into the
+API key and model ID fields; these are required but not used, since the actual
+provider and model are determined by your CodeGate workspace.
+
+<ThemedImage
+  alt='Cline settings for muxing'
+  sources={{
+    light: useBaseUrl('/img/integrations/cline-provider-mux-light.webp'),
+    dark: useBaseUrl('/img/integrations/cline-provider-mux-dark.webp'),
+  }}
+  width={'540px'}
+/>
+
+</TabItem>
+<TabItem value="anthropic" label="Anthropic">

 You need an [Anthropic API](https://www.anthropic.com/api) account to use this
 provider.

@@ -30,24 +50,47 @@ To enable CodeGate, enable **Use custom base URL** and enter
 />

 </TabItem>
-<TabItem value="openai" label="OpenAI">
+<TabItem value="lmstudio" label="LM Studio">

-You need an [OpenAI API](https://openai.com/api/) account to use this provider.
-To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
-[configuration parameter](../how-to/configure.md) when you launch CodeGate.
+You need LM Studio installed on your local system with a server running from LM
+Studio's **Developer** tab to use this provider. See the
+[LM Studio docs](https://lmstudio.ai/docs/api/server) for more information.

-In the Cline settings, choose **OpenAI Compatible** as your provider, enter your
-OpenAI API key, and set your preferred model (example: `gpt-4o-mini`).
+Cline uses large prompts, so you will likely need to increase the context length
+for the model you've loaded in LM Studio. In the Developer tab, select the model
+you'll use with CodeGate, open the **Load** tab on the right and increase the
+**Context Length** to _at least_ 18k (18,432) tokens, then reload the model.

-To enable CodeGate, set the **Base URL** to `https://localhost:8989/openai`.
+<ThemedImage
+  alt='LM Studio dev server'
+  sources={{
+    light: useBaseUrl('/img/integrations/lmstudio-server-light.webp'),
+    dark: useBaseUrl('/img/integrations/lmstudio-server-dark.webp'),
+  }}
+  width={'800px'}
+/>
+
+CodeGate connects to `http://host.docker.internal:1234` by default. If you
+changed the default LM Studio server port, launch CodeGate with the
+`CODEGATE_LM_STUDIO_URL` environment variable set to the correct URL. See
+[Configure CodeGate](/how-to/configure.md).
+
+In the Cline settings, choose LM Studio as your provider and set the **Base
+URL** to `http://localhost:8989/lm_studio`.
+
+Set the **Model ID** to `lm_studio/<MODEL_NAME>`, where `<MODEL_NAME>` is the
+name of the model you're serving through LM Studio (shown in the Developer tab),
+for example `lm_studio/qwen2.5-coder-7b-instruct`.
+
+<LocalModelRecommendation />

 <ThemedImage
-  alt='Cline settings for OpenAI'
+  alt='Cline settings for LM Studio'
   sources={{
-    light: useBaseUrl('/img/integrations/cline-provider-openai-light.webp'),
-    dark: useBaseUrl('/img/integrations/cline-provider-openai-dark.webp'),
+    light: useBaseUrl('/img/integrations/cline-provider-lmstudio-light.webp'),
+    dark: useBaseUrl('/img/integrations/cline-provider-lmstudio-dark.webp'),
   }}
-  width={'540px'}
+  width={'635px'}
 />

 </TabItem>

@@ -79,47 +122,46 @@ locally using `ollama pull`.
 />

 </TabItem>
-<TabItem value="lmstudio" label="LM Studio">
+<TabItem value="openai" label="OpenAI">

-You need LM Studio installed on your local system with a server running from LM
-Studio's **Developer** tab to use this provider. See the
-[LM Studio docs](https://lmstudio.ai/docs/api/server) for more information.
+You need an [OpenAI API](https://openai.com/api/) account to use this provider.
+To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
+[configuration parameter](../how-to/configure.md) when you launch CodeGate.

-Cline uses large prompts, so you will likely need to increase the context length
-for the model you've loaded in LM Studio. In the Developer tab, select the model
-you'll use with CodeGate, open the **Load** tab on the right and increase the
-**Context Length** to _at least_ 18k (18,432) tokens, then reload the model.
+In the Cline settings, choose **OpenAI Compatible** as your provider, enter your
+OpenAI API key, and set your preferred model (example: `gpt-4o-mini`).
+
+To enable CodeGate, set the **Base URL** to `http://localhost:8989/openai`.

 <ThemedImage
-  alt='LM Studio dev server'
+  alt='Cline settings for OpenAI'
   sources={{
-    light: useBaseUrl('/img/integrations/lmstudio-server-light.webp'),
-    dark: useBaseUrl('/img/integrations/lmstudio-server-dark.webp'),
+    light: useBaseUrl('/img/integrations/cline-provider-openai-light.webp'),
+    dark: useBaseUrl('/img/integrations/cline-provider-openai-dark.webp'),
   }}
-  width={'800px'}
+  width={'540px'}
 />

-CodeGate connects to `http://host.docker.internal:1234` by default. If you
-changed the default LM Studio server port, launch CodeGate with the
-`CODEGATE_LM_STUDIO_URL` environment variable set to the correct URL. See
-[Configure CodeGate](/how-to/configure.md).
-
-In the Cline settings, choose LM Studio as your provider and set the **Base
-URL** to `http://localhost:8989/lm_studio`.
-
-Set the **Model ID** to `lm_studio/<MODEL_NAME>`, where `<MODEL_NAME>` is the
-name of the model you're serving through LM Studio (shown in the Developer tab),
-for example `lm_studio/qwen2.5-coder-7b-instruct`.
-
-<LocalModelRecommendation />
+</TabItem>
+<TabItem value="openrouter" label="OpenRouter">
+
+You need an [OpenRouter](https://openrouter.ai/) account to use this provider.
+
+In the Cline settings, choose **OpenAI Compatible** as your provider (NOT
+OpenRouter), enter your
+[OpenRouter API key](https://openrouter.ai/settings/keys), and set your
+[preferred model](https://openrouter.ai/models) (example:
+`anthropic/claude-3.5-sonnet`).
+
+To enable CodeGate, set the **Base URL** to `http://localhost:8989/openrouter`.

 <ThemedImage
-  alt='Cline settings for LM Studio'
+  alt='Cline settings for OpenRouter'
   sources={{
-    light: useBaseUrl('/img/integrations/cline-provider-lmstudio-light.webp'),
-    dark: useBaseUrl('/img/integrations/cline-provider-lmstudio-dark.webp'),
+    light: useBaseUrl('/img/integrations/cline-provider-openrouter-light.webp'),
+    dark: useBaseUrl('/img/integrations/cline-provider-openrouter-dark.webp'),
   }}
-  width={'635px'}
+  width={'540px'}
 />

 </TabItem>

‎docs/partials/_kodu-providers.mdx

+45 (new file)

{/* This content is pulled out as an include because Prettier can't handle the
indentation needed to get this to appear in the right spot under a list item. */}

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

<Tabs groupId="provider" queryString="provider">
<TabItem value="mux" label="CodeGate muxing" default>
First, configure your [provider(s)](../features/muxing.md#add-a-provider) and
select a model for each of your
[workspace(s)](../features/workspaces.mdx#manage-workspaces) in the CodeGate
dashboard.

Under **Provider Settings**, select **OpenAI Compatible**. Set the **Base URL**
to `http://localhost:8989/v1/mux`.

Enter anything you want into the Model ID and API key fields; these are not
used, since the actual provider and model are determined by your CodeGate
workspace.

</TabItem>
<TabItem value="openai" label="OpenAI">

You need an [OpenAI API](https://openai.com/api/) account to use this provider.
To use a different OpenAI-compatible endpoint, set the `CODEGATE_OPENAI_URL`
[configuration parameter](../how-to/configure.md) when you launch CodeGate.

Under **Provider Settings**, select **OpenAI Compatible**. Set the **Base URL**
to `http://localhost:8989/openai`.

Enter the **Model ID** and your
[OpenAI API key](https://platform.openai.com/api-keys). A reasoning model like
`o1-mini` or `o3-mini` is recommended.

</TabItem>
<TabItem value="openrouter" label="OpenRouter">

You need an [OpenRouter](https://openrouter.ai/) account to use this provider.

Under **Provider Settings**, select **OpenAI Compatible**. Set the **Base URL**
to `http://localhost:8989/openrouter`.

Enter your [preferred model](https://openrouter.ai/models) for the **Model ID**
(example: `anthropic/claude-3.5-sonnet`) and add your
[OpenRouter API key](https://openrouter.ai/keys).

</TabItem>
</Tabs>

‎eslint.config.mjs

+14

@@ -10,10 +10,13 @@
 { ignores: ['.docusaurus/', 'build/', 'node_modules/'] },
 { files: ['**/*.{js,mjs,cjs,ts,jsx,tsx}'] },
 { languageOptions: { globals: globals.node } },
+
 pluginJs.configs.recommended,
 ...tseslint.configs.recommended,
 pluginReact.configs.flat.recommended,
 eslintConfigPrettier,
+
+// Configs for .mdx files
 {
   ...mdx.flat,
   processor: mdx.createRemarkProcessor({

@@ -23,8 +26,19 @@
   rules: {
     ...mdx.flat.rules,
     'react/no-unescaped-entities': 'off',
+    'react/jsx-no-undef': ['error', { allowGlobals: true }],
+  },
+  languageOptions: {
+    ...mdx.flat.languageOptions,
+    globals: {
+      ...mdx.flat.languageOptions.globals,
+      // Add global components from src/theme/MDXComponents.tsx here
+      Columns: 'readonly',
+      Column: 'readonly',
+    },
   },
 },
+
 {
   settings: {
     react: {
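
With the new `languageOptions.globals` entries in place, globally registered MDX components no longer trip `react/jsx-no-undef`. To check the docs after a change like this, you can run ESLint from the repo root; this assumes the standard ESLint CLI, and the project may also expose an npm script for it:

```bash
# Lint all files matched by the flat config, including .mdx docs.
npx eslint .
```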

‎src/components/Column/index.tsx

+21 (new file)

import React, { ReactNode, CSSProperties } from 'react';
// Import the clsx library for conditional classes.
import clsx from 'clsx';

interface ColumnProps {
  children: ReactNode;
  className?: string;
  style?: CSSProperties;
}

// Define the Column component as a function with children, className, and
// style as properties. See https://infima.dev/docs/ to learn more.
// The style prop only affects the element inside the column, but we could have
// also made the same distinction as for the classes.
export default function Column({ children, className, style }: ColumnProps) {
  return (
    <div className={clsx('col', className)} style={style}>
      {children}
    </div>
  );
}

‎src/components/Columns/index.tsx

+24 (new file)

import React, { ReactNode, CSSProperties } from 'react';
// Import the clsx library for conditional classes.
import clsx from 'clsx';

interface ColumnsProps {
  children: ReactNode;
  className?: string;
  style?: CSSProperties;
}

// Define the Columns component as a function with children, className, and
// style as properties. className lets you pass either your own custom classes
// or the native Infima classes (https://infima.dev/docs/layout/grid).
// style lets you pass custom styles directly, which can be an alternative to a
// styles.module.css file in certain cases.
export default function Columns({ children, className, style }: ColumnsProps) {
  return (
    // This wrapper holds the row of columns; children come from the dedicated
    // Column component so columns can be added as needed.
    <div className='container center'>
      <div className={clsx('row', className)} style={style}>
        {children}
      </div>
    </div>
  );
}

‎src/theme/MDXComponents.tsx

+18 (new file)

/*
Extra components to load into the global scope.
See https://docusaurus.io/docs/markdown-features/react#mdx-component-scope

To avoid linting errors, add these to the `languageOptions.globals` section
for MDX files in `eslint.config.mjs`.
*/

import MDXComponents from '@theme-original/MDXComponents';
import Columns from '@site/src/components/Columns';
import Column from '@site/src/components/Column';

export default {
  // Reusing the default mapping
  ...MDXComponents,
  Columns,
  Column,
};
Plus 12 binary files (images, 16.9 KB) not shown.
