OpenAI has begun alpha testing its upcoming "advanced voice mode" by inviting a small group of ChatGPT Plus users to try it, with new capabilities that include sensing emotion in a user's voice.

OpenAI continues to make strides in the artificial intelligence space, now launching alpha testing of its highly anticipated advanced voice mode – a feature that, among other things, can translate your speech into another language on the fly.
The Microsoft-backed artificial intelligence company announced the rollout last night via an X post, stating that it will "continue to add more people on a rolling basis and plan for everyone on Plus to have access in the fall." That means we should see a full launch by the end of the year.
OpenAI begins launch of advanced voice mode
The advanced voice mode feature has been tested across 45+ languages by over 100 external testers. To protect people's privacy, the company has trained the model to speak only in one of four preset voices and has "built systems to block outputs that differ from those voices." It has also implemented guardrails to block requests for violent or copyrighted content.
The new voice mode will allow users to speak to ChatGPT in real time, with near-instant responses from the tool. Users will also be able to interrupt ChatGPT mid-response – something that has long been a sticking point for AI assistants.
The audio-driven feature had been delayed from June to July, with the company saying it needed more time to meet its own launch standards. OpenAI continues to work on new generative AI products, aiming to maintain its edge over the competition as the AI boom continues.