Software improvements may be the key to the great sound quality of the new Apple HomePod.
The new HomePod looks much like the original model Apple pulled from the market, and early reviews say it sounds even better than its predecessor. Experts credit the improvement to computational audio: algorithms that harness processing power to improve sound quality.
Prakash Khanduri, an audio architect at Ambiq, told Lifewire in an email interview that computational audio plays a big part in giving users the best sound quality. “All mobile and wearable devices use computational audio to improve the sound quality when there is a lot of background noise, for example. The devices improve audio quality by using machine learning and traditional digital signal processing algorithms in dedicated processors.”
Better Sound Through Tech
Technology reviewers who have already heard music from the new HomePod have mostly praised it. Apple says in a press release that the HomePod’s sound comes from “advanced computational audio for a groundbreaking listening experience, including support for immersive Spatial Audio tracks.”
The company also boasts that the HomePod “has a custom-engineered high-excursion woofer, a powerful motor that drives the diaphragm a remarkable 20mm, a built-in bass-EQ mic, and a beamforming array of five tweeters around the base that work together to create a powerful sound. When the S7 chip is combined with software and system-sensing technology, it offers even more advanced computational audio that makes the most of the full potential of its acoustic system for a groundbreaking listening experience.”
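Apple doesn’t publish the DSP behind its tweeter array, but the textbook technique for steering an array of drivers or microphones is delay-and-sum beamforming. The sketch below is a deliberately simplified, hypothetical receive-side example in Python/NumPy (not Apple’s implementation): each channel of a linear array is delayed so that sound arriving from a chosen angle lines up before the channels are summed.

```python
import numpy as np

def delay_and_sum(signals, positions, angle, fs, c=343.0):
    """Toy delay-and-sum beamformer for a linear array.

    signals:   (n_mics, n_samples) array, one row per element
    positions: element positions along the array axis, in meters
    angle:     steering angle in radians (0 = broadside)
    fs:        sample rate in Hz; c is the speed of sound in m/s
    """
    n_mics, n_samples = signals.shape
    out = np.zeros(n_samples)
    for sig, x in zip(signals, positions):
        delay = x * np.sin(angle) / c        # arrival-time offset, seconds
        shift = int(round(delay * fs))       # nearest whole-sample delay
        out += np.roll(sig, shift)           # circular shift: fine for a demo
    return out / n_mics                      # average the aligned channels
```

Signals arriving from the steered direction add coherently, while sound from other directions partially cancels, which is how an array of small elements produces a directional beam.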
The Power of Computation
Khanduri said that computational audio draws on many algorithms, including audio coding, speech recognition, echo cancellation, active noise cancellation, acoustic noise cancellation, dynamic range compression, and automatic gain control. These algorithms adapt to the listening environment and the customer’s needs, tuning themselves to deliver the best possible user experience.
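The implementations in real products are proprietary, but one of the algorithms Khanduri lists, dynamic range compression, can be sketched in a few lines. This hypothetical hard-knee compressor attenuates samples whose level exceeds a threshold, so loud passages rise more gently while quiet ones pass through unchanged.

```python
import numpy as np

def compress(x, threshold_db=-20.0, ratio=4.0):
    """Hard-knee compressor: above the threshold, level rises at
    1/ratio the input rate; below it, the signal is untouched."""
    eps = 1e-12                                   # avoid log10(0)
    level_db = 20 * np.log10(np.abs(x) + eps)     # per-sample level in dB
    over = np.maximum(level_db - threshold_db, 0) # dB above threshold
    gain_db = -over * (1 - 1 / ratio)             # reduce the excess
    return x * 10 ** (gain_db / 20)
```

A full-scale sample (0 dB) sits 20 dB over the threshold, so a 4:1 ratio cuts it by 15 dB, while a quiet sample at -40 dB passes through exactly as it came in.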
“Audio is very personal and depends a lot on the user’s needs and the form factors they have available,” Khanduri said. “The machine learning algorithms are made to figure out the environment, the sounds, and the user’s needs, and then pull the desired features from the mixed signal.”
In an email interview, Daniel Davis, president of the audio company BEACN, said that computational audio is now everywhere. “Whether you’re listening to your favourite song or talking to a friend on your cell phone, at some point along the way the sound is turned into a digital signal, processed and improved, and then turned back into an analogue audio signal that we can hear,” he said.
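The digitize-process-reconstruct loop Davis describes can be illustrated with the quantization step at its heart. The sketch below (illustrative only, not any vendor’s pipeline) rounds a floating-point signal to 16-bit integer levels and maps it back, the round trip every digital audio chain performs between the analogue world and DSP.

```python
import numpy as np

def quantize(x, bits=16):
    """Round a signal in [-1, 1] to `bits`-bit integer levels,
    then map back to floats: the ADC/DAC round trip in miniature."""
    levels = 2 ** (bits - 1)                       # 32768 for 16-bit audio
    q = np.round(x * levels)                       # nearest integer code
    q = np.clip(q, -levels, levels - 1)            # valid signed range
    return q / levels                              # back to [-1, 1) floats
```

At 16 bits the worst-case round-trip error is one part in 32,768, which is why the digitize-and-reconstruct cycle is inaudible in a well-designed chain.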
James Abbott, the Music Industry & Technologies Chair at Syracuse University, told Lifewire in an email that people now carry portable devices with as much processing power as professional systems had just a decade ago. He said that smart audio devices in the home can listen to the room and adjust their sound processing so the audio fits its acoustics, improving the sound quality possible from tiny speaker drivers.
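The room adaptation Abbott describes boils down to measuring the speaker’s in-room frequency response and computing per-band correction gains. The sketch below is a hypothetical simplification (the band layout, target, and boost limit are assumptions, not any product’s tuning): it compares a measured response to a target and clips the correction so a small driver isn’t overdriven.

```python
import numpy as np

def room_eq_gains(measured_db, target_db, max_boost=6.0):
    """Per-band EQ correction in dB: the gap between target and
    measured response, limited to +/- max_boost to protect drivers."""
    measured = np.asarray(measured_db, dtype=float)
    target = np.asarray(target_db, dtype=float)
    correction = target - measured            # positive = boost needed
    return np.clip(correction, -max_boost, max_boost)
```

A band that measures 3 dB quiet gets a 3 dB boost; a 10 dB room-mode peak gets cut, but only as far as the limit allows.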
“Our phones can process music made in immersive audio formats and play it through our earbuds while keeping the 360-degree sound localization of the immersive format,” he said. “Voice assistants can understand what you’re saying even if your favourite music is playing loudly in the background. The algorithms are smart enough to pick out your voice from almost any level of background noise.”
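Picking a voice out of loud background music is a machine-learning problem in modern assistants, but the classical starting point is spectral subtraction: estimate the noise spectrum, then subtract its magnitude from the noisy signal while keeping the noisy phase. A single-frame, purely illustrative sketch:

```python
import numpy as np

def spectral_subtract(noisy, noise_estimate):
    """One-frame spectral subtraction: remove the estimated noise
    magnitude from the noisy spectrum, keep the noisy phase."""
    spectrum = np.fft.rfft(noisy)
    noise_mag = np.abs(np.fft.rfft(noise_estimate))
    clean_mag = np.maximum(np.abs(spectrum) - noise_mag, 0.0)  # no negatives
    phase = np.exp(1j * np.angle(spectrum))
    return np.fft.irfft(clean_mag * phase, n=len(noisy))
```

Real systems run this frame by frame with smoothed noise estimates, and modern devices replace the simple subtraction with learned masks, but the separate-then-resynthesize structure is the same.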