Update OpenAI TTS config to allow a custom BaseURL allowing for any TTS engine with a compatible API #2466
Pull Request Type
Relevant Issues
resolves #xxx
What is in this change?
This adds an input field for a custom BaseURL for the OpenAI TTS endpoint when OpenAI is selected as the TTS provider.
Since the OpenAI API has become a de facto industry standard, many tools mirror it for ease of integration.
This change therefore opens up a much richer choice of TTS providers.
I tested it using AllTalk_TTS's OpenAI-compatible speech endpoint and it works wonderfully.
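For illustration, a compatible endpoint might be configured roughly like this. Note the variable names below are hypothetical placeholders, not confirmed from this PR's diff; check the repo's `.env.example` for the actual keys:

```shell
# Hypothetical .env entries (actual variable names may differ; see .env.example)
TTS_PROVIDER="openai"
TTS_OPEN_AI_KEY="sk-xxxx"
# Point the client at any server mirroring OpenAI's /v1/audio/speech API,
# e.g. a local AllTalk_TTS instance instead of https://api.openai.com/v1
TTS_OPEN_AI_BASE_URL="http://localhost:7851/v1"
```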
Developer Validations
- [x] I ran `yarn lint` from the root of the repo & committed changes
- [x] Relevant documentation has been updated
  - I couldn't find any relevant docs, but the option is noted in the env comments
- [x] I have tested my code functionality
- [x] Docker build succeeds locally
In order to debug this, I needed to get the frontend/server/collector all running independently, since the Docker build takes a VERY long time on the production-build 3/3 chown step. There has got to be a way to speed that up. Also, when running the servers independently, the config isn't saved to any .env file, so I had to set it up again after every restart; I don't know what I missed there. I was also running it under WSL Ubuntu, which was its own pain. My dev machine is an MBP, but only my PC can run the models :/