Implementation of OpenAI's Text-To-Speech in Unity - synthesize any text and play it via any AudioSource.
This project integrates OpenAI's Text-to-Speech API into any Unity project, letting you synthesize text into spoken audio and play it via any AudioSource component within seconds.
It does not use any third-party libraries, making it lightweight and easy to use across platforms.
- Added a new custom editor script for easily setting up TTS within your Unity project (see demo videos below)
Once you've installed this project, setting up OpenAI's TTS within Unity takes seconds.
Integrate this project (either in your existing project or a fresh one), then open up the `TTS Setup` Prefab and click through the steps, as shown in this demo:
OpenAI.TTS.Setup.Demo.mp4
I've also added a quick UI example scene, so you can tinker around with some TTS settings easily:
OpenAI.TTS.for.Unity.Demo.mp4
- Download the latest release `.unitypackage`.
- Import it into your own project, e.g. via `Assets > Import Package`.
- Either open the `OpenAI-TTS-Example` scene, or open up the `TTS Setup` Prefab and click through the installation steps.
- Optional: Change the `TTSManager` Prefab settings to your liking (useful if you want to have different entities with predefined voices, speeds, etc.).
- Reference the `TTSManager` and call `TTSManager.SynthesizeAndPlay` via script (see the sketch after the steps below).
- Open up the `TTS Setup` Prefab.
- Add your OpenAI API key first, then click through the installation steps.
- Optional: Change the `TTSManager` Prefab settings to your liking (useful if you want to have different entities with predefined voices, speeds, etc.).
- Reference a `TTSManager` and call `TTSManager.SynthesizeAndPlay` via script (see the sketch below).
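As a minimal sketch of that last step, the script below references a `TTSManager` in the scene and synthesizes a line of text on startup. The single-string `SynthesizeAndPlay` call is an assumption: the available overloads (e.g. optional voice or speed parameters) may differ depending on the version you imported, so check the `TTSManager` component for the exact signatures.

```csharp
using UnityEngine;

// Minimal usage sketch: trigger OpenAI TTS from any MonoBehaviour.
public class TTSExample : MonoBehaviour
{
    // Assign the TTSManager (e.g. the Prefab instance in your scene) via the Inspector.
    [SerializeField] private TTSManager ttsManager;

    private void Start()
    {
        // Assumed overload: synthesizes the given text and plays the result
        // on the AudioSource configured on the TTSManager.
        ttsManager.SynthesizeAndPlay("Hello from Unity!");
    }
}
```

If you want different entities with predefined voices or speeds, keep one configured `TTSManager` Prefab instance per entity and reference the one you need from your script.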
This project is a prototype and serves as a basic example of integrating OpenAI's TTS API with Unity. Feel free to create a PR 😊