
Commit 7cc382d
feat: added o4-mini support (#221)
1 parent 8d63cb6

File tree: 2 files changed (+8, −2 lines)

examples/core.py (2 additions, 2 deletions)

@@ -113,7 +113,7 @@ async def async_stream():
     return latencies

 def build_chat_request(model: str, chat_input: str, is_stream: bool, max_tokens: int=1000):
-    if model.startswith(('o1', 'o3')):
+    if model.startswith(('o1', 'o3', 'o4')):
         chat_request = {
             "chat_input": chat_input,
             "model": model,
@@ -156,7 +156,7 @@ def multiple_provider_runs(provider:str, model:str, num_runs:int, api_key:str, *
 def run_chat_all_providers():
     # OpenAI
     multiple_provider_runs(provider="openai", model="gpt-4o-mini", api_key=os.environ["OPENAI_API_KEY"], num_runs=1)
-    multiple_provider_runs(provider="openai", model="o3-mini", api_key=os.environ["OPENAI_API_KEY"], num_runs=1)
+    multiple_provider_runs(provider="openai", model="o4-mini", api_key=os.environ["OPENAI_API_KEY"], num_runs=1)
     #multiple_provider_runs(provider="openai", model="o1-preview", api_key=os.environ["OPENAI_API_KEY"], num_runs=1)

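The prefix check above routes o-series models (`o1`, `o3`, and now `o4`) down a different request-building branch. A minimal sketch of that pattern, assuming (as the truncated diff suggests but does not fully show) that reasoning models take a `max_completion_tokens` parameter while other models take the classic `max_tokens`; the `parameters` key layout here is illustrative, not taken verbatim from the repo:

```python
def build_chat_request(model: str, chat_input: str, is_stream: bool,
                       max_tokens: int = 1000) -> dict:
    """Build a chat request dict, branching on the model family."""
    if model.startswith(('o1', 'o3', 'o4')):
        # Reasoning models: the output cap is passed as max_completion_tokens.
        return {
            "chat_input": chat_input,
            "model": model,
            "is_stream": is_stream,
            "parameters": {"max_completion_tokens": max_tokens},
        }
    # All other models keep the traditional max_tokens parameter.
    return {
        "chat_input": chat_input,
        "model": model,
        "is_stream": is_stream,
        "parameters": {"max_tokens": max_tokens},
    }
```

Because the branch keys off a string prefix, `o4-mini` is covered automatically once `'o4'` is added to the tuple, with no per-variant changes.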
libs/core/llmstudio_core/config.yaml (6 additions, 0 deletions)

@@ -238,6 +238,12 @@ providers:
       input_token_cost: 0.0000011
       cached_token_cost: 0.00000055
       output_token_cost: 0.0000044
+    o4-mini:
+      mode: chat
+      max_completion_tokens: 200000
+      input_token_cost: 0.0000011
+      cached_token_cost: 0.000000275
+      output_token_cost: 0.0000044
     gpt-4o-mini:
       mode: chat
       max_tokens: 128000

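The per-token cost fields in the config exist so a call's cost can be computed from its token counts. A hypothetical helper showing the arithmetic (`estimate_cost` is illustrative and not part of the repo; the defaults mirror the o4-mini prices in the YAML above):

```python
def estimate_cost(input_tokens: int, cached_tokens: int, output_tokens: int,
                  input_token_cost: float = 0.0000011,
                  cached_token_cost: float = 0.000000275,
                  output_token_cost: float = 0.0000044) -> float:
    """Estimate USD cost of one call: tokens of each kind times their per-token price."""
    return (input_tokens * input_token_cost
            + cached_tokens * cached_token_cost
            + output_tokens * output_token_cost)
```

At these rates, 1,000 uncached input tokens cost $0.0011, and cached input tokens are billed at a quarter of the uncached price.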