What if you want to mute your co-workers’ chatter in the office, but you need to know when your boss is talking? New AI headphones might be the answer.
Google, Microsoft, Facebook: many Silicon Valley giants want employees back in the office, and most of those offices are open-plan.
While open-plan offices are chosen specifically to foster collaboration, close proximity to co-workers often provokes the opposite reaction: most people either dislike open offices or outright despise them.
According to Harvard Business Review, when firms switched to open offices, face-to-face interactions fell by 70%.
In an article published by Scientific American, 358 respondents from 18 companies complained about noise, distraction, and soullessness in open-plan offices. The lack of privacy can hurt productivity and employee well-being, leading many to seek refuge in the cocoon of their headphones.
Companies are rushing to install sound-absorbing materials or set aside quiet spaces so that employees working from the office are not distracted. However, AI might offer another solution.
Researchers from the University of Washington have created a prototype of AI-powered headphones. These noise-canceling headphones are unique because they create a “sound bubble” with a three- to six-foot radius that filters out unwanted sounds and lets through only the important ones.
A small computer on the headphones uses a neural network to track when sounds reach each microphone. The system suppresses noises originating outside the bubble while amplifying and replaying the sounds within it.
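The key cue here is timing: a sound from a nearby talker reaches the headphone’s microphones at slightly different moments than the same sound would from across the room. The sketch below only illustrates that cue with a classical cross-correlation estimate between two microphone channels; the actual system uses a neural network, and all names and values here are hypothetical.

```python
# Illustrative sketch: estimating the time difference of arrival (TDOA)
# between two headphone microphones via cross-correlation.
# This is NOT the researchers' method, just a simple demo of the timing cue.
import numpy as np

def estimate_tdoa(mic_a: np.ndarray, mic_b: np.ndarray, sample_rate: int) -> float:
    """Return how much later (in seconds) a sound arrives at mic_b than at mic_a."""
    # Full cross-correlation of the two channels; its peak reveals the sample offset.
    corr = np.correlate(mic_b, mic_a, mode="full")
    lag = np.argmax(corr) - (len(mic_a) - 1)
    return lag / sample_rate

# Example: a click that reaches mic_b 10 samples later than mic_a.
fs = 48_000
click = np.zeros(1024)
click[100] = 1.0
delayed = np.roll(click, 10)
print(estimate_tdoa(click, delayed, fs))  # ~10 / 48000 ≈ 0.0002 s
```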
The AI model reduces the volume of voices and sounds outside the bubble, even when those distant sounds are louder than the ones inside it. In a restaurant, for example, you would still hear the conversation at your table while being shielded from the background noise around you.
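In code, the bubble boils down to a distance-dependent gain: sources judged to be inside the radius are kept, everything else is ducked regardless of loudness. The snippet below is a minimal, hypothetical sketch of that idea, not the researchers’ actual pipeline, and the radius and gain values are placeholders.

```python
# Minimal sketch of the "sound bubble" idea: gate each separated source by its
# estimated distance. All constants and function names are illustrative.
import numpy as np

BUBBLE_RADIUS_M = 1.5        # roughly within the 3- to 6-foot bubble described above
OUTSIDE_ATTENUATION = 0.05   # strongly suppress distant sources
INSIDE_GAIN = 1.2            # mild boost for nearby sources

def apply_sound_bubble(sources: list[tuple[np.ndarray, float]]) -> np.ndarray:
    """Mix separated source signals, gating each by its estimated distance in meters."""
    mix = np.zeros_like(sources[0][0])
    for signal, distance_m in sources:
        gain = INSIDE_GAIN if distance_m <= BUBBLE_RADIUS_M else OUTSIDE_ATTENUATION
        mix += gain * signal
    return mix

# Example: a nearby voice at 1.0 m survives; louder restaurant chatter at 4.0 m is ducked.
t = np.linspace(0, 1, 16_000)
near_voice = 0.3 * np.sin(2 * np.pi * 220 * t)
far_chatter = 0.9 * np.sin(2 * np.pi * 500 * t)
output = apply_sound_bubble([(near_voice, 1.0), (far_chatter, 4.0)])
```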
To train the system to create sound bubbles, the researchers needed a real-world dataset of sounds at different distances, which didn’t exist.
They placed headphones on a mannequin head that rotated on a robotic platform while a moving speaker played sounds. They collected data from the mannequin and human users in 22 indoor spaces, including offices and homes.
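One way to picture the resulting dataset is as a collection of multi-microphone recordings, each labelled with the measured distance to the speaker and the room it was captured in. The structure below is purely illustrative; the field names, shapes, and labels are assumptions, not the published format.

```python
# Hypothetical shape of one distance-labelled training example,
# loosely based on the data collection described above.
from dataclasses import dataclass
import numpy as np

@dataclass
class BubbleTrainingExample:
    mic_recordings: np.ndarray   # shape: (num_microphones, num_samples)
    source_distance_m: float     # measured speaker-to-mannequin distance
    room_id: str                 # one of the recorded indoor spaces
    inside_bubble: bool          # label: within the target radius or not

example = BubbleTrainingExample(
    mic_recordings=np.zeros((6, 48_000)),  # e.g. six mics, one second at 48 kHz
    source_distance_m=2.4,
    room_id="office_07",
    inside_bubble=False,
)
```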
“Humans aren’t great at perceiving distances through sound, particularly when there are multiple sound sources around them,” said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering.
“Our abilities to focus on the people in our vicinity can be limited in places like loud restaurants, so creating sound bubbles on a hearable has not been possible so far. Our AI system can actually learn the distance for each sound source in a room and process this in real-time, within 8 milliseconds, on the hearing device itself.”
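To get a feel for how tight an 8-millisecond budget is: at a typical 48 kHz sampling rate, each processing window is only 384 samples long, and the model has to finish one chunk before the next arrives. The sketch below illustrates that chunked, streaming style of processing; `suppress_outside_bubble` is a hypothetical stand-in for the on-device neural network.

```python
# Sketch of real-time, chunked processing under an 8 ms window.
# The model itself is represented by a pass-through placeholder.
import numpy as np

SAMPLE_RATE = 48_000
CHUNK_MS = 8
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_MS // 1000  # 384 samples per chunk

def suppress_outside_bubble(chunk: np.ndarray) -> np.ndarray:
    # Placeholder for the on-device model; here it simply passes audio through.
    return chunk

def stream(audio: np.ndarray):
    """Yield processed audio in 8 ms chunks, as a real-time hearable would."""
    for start in range(0, len(audio) - CHUNK_SAMPLES + 1, CHUNK_SAMPLES):
        yield suppress_outside_bubble(audio[start:start + CHUNK_SAMPLES])

# One second of test audio processed chunk by chunk.
processed = np.concatenate(list(stream(np.random.randn(SAMPLE_RATE))))
```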
The code for the proof-of-concept device is openly available for others to build upon, while the researchers are launching a startup to bring this technology to market. The study was published in Nature Electronics on November 14th.