OpenAI has consolidated several engineering, product and research teams to overhaul its audio models in preparation for an audio-focused personal device expected to launch in approximately one year, TechCrunch reported.
The restructuring has taken place over the past two months. The company's new audio model, scheduled for release in early 2026, will reportedly sound more natural, handle interruptions during conversation, and speak while users are talking, a capability current models lack.
OpenAI is said to be developing a family of devices that could include glasses or smart speakers without screens.
The development follows a broader industry movement toward audio-based interfaces. Meta recently introduced a feature for its Ray-Ban smart glasses using a five-microphone array to amplify conversations in noisy environments. Google began testing Audio Overviews in June, which convert search results into spoken summaries. Tesla is integrating xAI's chatbot Grok into its vehicles as a voice assistant for navigation and climate control.
Several startups have also released audio-focused devices, including the Humane AI Pin, a screenless wearable, and the Friend AI pendant, which records audio. Sandbar and a startup led by Pebble founder Eric Migicovsky are both developing AI rings expected in 2026.
Former Apple design chief Jony Ive joined OpenAI's hardware efforts through the company's acquisition of his firm io for $6.5 billion in May. According to The Information, Ive has prioritized reducing device dependence and views audio-first design as an opportunity to address concerns about past consumer technology.
Smart speakers with voice assistants are currently used in more than one-third of U.S. households.