This repository is a specialized fork of the OpenAI Realtime Console. It includes the original console features along with custom modifications designed to interact specifically with the Reachy2 emotions project.
- Installation: follow the original installation instructions below.
- Running the application:
  - After installation, launch the console with `npm run dev`.
  - Open the address printed in the terminal in your web browser, e.g. `http://127.0.0.1:3000/`.
  - Click "Start session".
  - Make sure the Reachy2 Emotions module is running in server mode so the robot reacts appropriately.
  - Start talking into your microphone.
  - The console sends the emotion names detected in your speech to a Flask server.
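The last step above can be sketched as follows. The endpoint URL (`http://localhost:5000/emotion`) and the JSON payload shape are assumptions for illustration, not the repository's actual API:

```javascript
// Hypothetical sketch of forwarding a detected emotion name to the Flask
// server. Endpoint URL and payload shape are assumed, not taken from the repo.
const EMOTION_SERVER_URL = "http://localhost:5000/emotion"; // assumed endpoint

// Build the JSON body for one detected emotion.
function buildEmotionPayload(emotion) {
  return JSON.stringify({ emotion });
}

// Send the emotion. `fetchFn` is injectable so the logic can be exercised
// without a running server; it defaults to the global fetch.
async function sendEmotion(emotion, fetchFn = fetch) {
  return fetchFn(EMOTION_SERVER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildEmotionPayload(emotion),
  });
}
```

Injecting `fetchFn` keeps the request-building logic testable without any network access.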
After clicking "Start session", talk into your microphone, then stop the session. A small audio player should appear in the bottom right of the screen, letting you hear what was recorded. Use it to check that your microphone setup is working properly.
- Modify the main prompt by editing `client/components/ToolPanel.jsx`. Any changes made to this file affect the behavior and emotion detection logic.
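As an illustration of the kind of content that lives in a ToolPanel-style component, here is a hedged sketch of a `session.update` event that sets the session instructions and registers a tool for emotions. The prompt text and the `play_emotion` tool name are invented for the example; the actual values in `ToolPanel.jsx` will differ:

```javascript
// Hypothetical session.update event sent over the Realtime API data channel.
// The instructions text and "play_emotion" tool name are illustrative only.
function buildSessionUpdate(instructions) {
  return {
    type: "session.update",
    session: {
      instructions, // the main prompt controlling behavior
      tools: [
        {
          type: "function",
          name: "play_emotion", // assumed tool name
          description: "Play a named emotion on the robot.",
          parameters: {
            type: "object",
            properties: {
              emotion: { type: "string", description: "Emotion name to play." },
            },
            required: ["emotion"],
          },
        },
      ],
    },
  };
}
```

Editing the `instructions` string here is the usual way to change what the model is asked to detect.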
This is an example application showing how to use the OpenAI Realtime API with WebRTC.
Before you begin, you'll need an OpenAI API key - create one in the OpenAI dashboard. Create a `.env`
file from the example file and set your API key in it:
cp .env.example .env
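Assuming the server reads the key from `process.env` (e.g. via dotenv), a minimal startup guard might look like the sketch below; the helper name is hypothetical:

```javascript
// Hypothetical guard that fails fast when the key from .env is missing.
// The actual server.js may handle this differently.
function getApiKey(env = process.env) {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    throw new Error("OPENAI_API_KEY is not set; check your .env file.");
  }
  return key;
}
```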
Running this application locally requires Node.js to be installed. Install dependencies for the application with:
npm install
Start the application server with:
npm run dev
This should start the console application on http://localhost:3000.
Note: the `server.js` file uses `@fastify/vite` to build and serve the React frontend contained in the `/client` folder. You can find the configuration in the `vite.config.js` file.
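For orientation, a `vite.config.js` for a React client served this way often looks roughly like the sketch below; the exact options in this repository may differ:

```javascript
// Hypothetical sketch of a vite.config.js for a React frontend built by Vite
// and served through @fastify/vite -- the real configuration may differ.
import react from "@vitejs/plugin-react";

export default {
  root: "./client", // assumed location of the React frontend
  plugins: [react()], // standard React plugin for Vite
};
```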
The previous version of this application that used WebSockets on the client (not recommended in client-side browsers) can be found here.
MIT