gpt-3.5-turbo-16k #104

Open. Wants to merge 2 commits into main.

README.md: 5 changes (4 additions & 1 deletion)
@@ -39,6 +39,9 @@ cd app && npm install && npx parcel watch src/index.html --no-cache
cd server && pip3 install -r requirements.txt && cd .. && python3 -m server.app
```

After starting the server, `models.json` is copied to `~/.config/openplayground/models.json`, and that copy is used instead of the bundled `models.json`. This allows you to add your own models to the playground and version them in your dotfiles.
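A minimal sketch of that copy-on-first-run behavior, using an illustrative helper name (`ensure_user_models`) and paths rather than openplayground's actual implementation:

```python
# Sketch only: names here are illustrative, not the project's real code.
import shutil
from pathlib import Path

BUNDLED_MODELS = Path("server/models.json")                       # bundled defaults in the repo
USER_MODELS = Path.home() / ".config/openplayground/models.json"  # user copy described above

def ensure_user_models() -> Path:
    """Copy the bundled models.json into ~/.config/openplayground on first run."""
    if not USER_MODELS.exists():
        USER_MODELS.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy(BUNDLED_MODELS, USER_MODELS)
    return USER_MODELS  # the server then reads this copy, not the bundled file
```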


## Docker

```sh
@@ -55,7 +58,7 @@ First volume is optional. It's used to store API keys, models settings.
- Measure and display time to first token
- Setup automatic builds with GitHub Actions
- The default parameters for each model are configured in the `server/models.json` file. If you find better default parameters for a model, please submit a pull request!
- Someone can help us make a homebrew package, and a dockerfile
- Someone can help us make a homebrew package
- Easier way to install open source models directly from openplayground, with `openplayground install <model>` or in the UI.
- Find and fix bugs
- ChatGPT UI, with turn-by-turn, markdown rendering, chatgpt plugin support, etc.
server/lib/inference/__init__.py: 2 changes (1 addition & 1 deletion)
@@ -326,7 +326,7 @@ def __openai_text_generation__(self, provider_details: ProviderDetails, inference_request: InferenceRequest):

 def openai_text_generation(self, provider_details: ProviderDetails, inference_request: InferenceRequest):
     # TODO: Add a meta field to the inference so we know when a model is chat vs text
-    if inference_request.model_name in ["gpt-3.5-turbo", "gpt-4"]:
+    if inference_request.model_name in ["gpt-3.5-turbo", "gpt-3.5-turbo-16k", "gpt-4"]:
         self.__error_handler__(self.__openai_chat_generation__, provider_details, inference_request)
     else:
         self.__error_handler__(self.__openai_text_generation__, provider_details, inference_request)
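For context, the one-line change above extends a hard-coded list of chat-style models, which are routed to the chat generation path rather than the text-completion path. A minimal standalone sketch of that routing decision (`CHAT_MODELS` and `route_generation` are illustrative names, not part of the codebase):

```python
# Illustrative only: the real dispatch is the method shown in the diff above.
CHAT_MODELS = {"gpt-3.5-turbo", "gpt-3.5-turbo-16k", "gpt-4"}

def route_generation(model_name: str) -> str:
    """Return which OpenAI generation path a model name should use."""
    return "chat" if model_name in CHAT_MODELS else "completions"

assert route_generation("gpt-3.5-turbo-16k") == "chat"
assert route_generation("text-davinci-003") == "completions"
```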
server/models.json: 48 changes (48 additions & 0 deletions)
@@ -243,6 +243,54 @@
    }
  }
},
"gpt-3.5-turbo-16k": {
  "enabled": false,
  "status": "ready",
  "capabilities": [
    "logprobs"
  ],
  "parameters": {
    "temperature": {
      "value": 0.5,
      "range": [
        0.1,
        1
      ]
    },
    "maximumLength": {
      "value": 200,
      "range": [
        50,
        16384
      ]
    },
    "topP": {
      "value": 1,
      "range": [
        0.1,
        1
      ]
    },
    "presencePenalty": {
      "value": 0,
      "range": [
        0,
        1
      ]
    },
    "frequencyPenalty": {
      "value": 0,
      "range": [
        0,
        1
      ]
    },
    "stopSequences": {
      "value": [],
      "range": []
    }
  }
},
"gpt-4": {
  "enabled": false,
  "status": "ready",
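The `models.json` entry above only declares UI defaults and permitted parameter ranges. A hypothetical sketch of reading those values back from the user copy at `~/.config/openplayground/models.json` (the helper and its search-by-name approach are assumptions, since the file's enclosing structure is not visible in this diff):

```python
# Hypothetical reader: the nesting above the model entries is not shown in the
# diff, so this searches for the entry by key instead of hard-coding a path.
import json
from pathlib import Path
from typing import Optional

def find_model_entry(node: dict, name: str) -> Optional[dict]:
    """Depth-first search for a model entry keyed by its name."""
    if isinstance(node.get(name), dict):
        return node[name]
    for value in node.values():
        if isinstance(value, dict):
            found = find_model_entry(value, name)
            if found is not None:
                return found
    return None

config = json.loads((Path.home() / ".config/openplayground/models.json").read_text())
entry = find_model_entry(config, "gpt-3.5-turbo-16k")
if entry is not None:
    low, high = entry["parameters"]["maximumLength"]["range"]
    print(f"maximumLength may range from {low} to {high} tokens")  # 50 to 16384
```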