
Deep neural networks show promise as models of human hearing

Computational models that mimic the structure and function of the human auditory system could help researchers design better hearing aids, cochlear implants, and brain-machine interfaces. A new study from MIT has found that modern computational models derived from machine learning are moving closer to this goal.

Models of hearing

Deep neural networks are computational models that consist of many layers of information-processing units, which can be trained on huge volumes of data to perform specific tasks. This type of model has become widely used in many applications, and neuroscientists have begun to explore the possibility that these systems can also be used to describe how the human brain performs certain tasks.
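As a rough, hypothetical sketch (not the architecture used in the study), such a network can be written in a few lines of Python: a stack of layers, each applying a learned linear transform followed by a nonlinearity, with every layer producing its own activation pattern for a given input.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied by each information-processing unit
    return np.maximum(0.0, x)

# Hypothetical layer sizes; the study's models are far larger
layer_sizes = [256, 128, 64, 32]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(audio_features):
    """Propagate an input through the layers, keeping each layer's
    activation pattern (the model's representation of the input)."""
    activations = []
    h = audio_features
    for w in weights:
        h = relu(h @ w)
        activations.append(h)
    return activations

# A 256-dimensional feature vector standing in for one audio clip
acts = forward(rng.standard_normal(256))
print([a.shape for a in acts])  # one activation vector per layer
```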

When a neural network performs a task, its processing units generate activation patterns in response to each audio input the network receives. Those model representations of the input can be compared to the activation patterns seen in fMRI brain scans of people listening to the same input.
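The article does not spell out the comparison method, but one standard approach is representational similarity analysis: build a stimulus-by-stimulus dissimilarity matrix from the model activations, another from the fMRI voxel responses, and correlate the two. The sketch below uses random placeholder data purely to illustrate the idea.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stimuli = 20

# Placeholder data: layer activations and voxel responses per sound
model_acts = rng.standard_normal((n_stimuli, 64))
fmri_resp = rng.standard_normal((n_stimuli, 500))

def rdm(responses):
    """Representational dissimilarity matrix: 1 minus the Pearson
    correlation between response patterns for every pair of stimuli."""
    return 1.0 - np.corrcoef(responses)

def upper_triangle(m):
    # Only the unique pairwise entries matter for the comparison
    return m[np.triu_indices_from(m, k=1)]

# Model-brain similarity: correlate the two dissimilarity structures
score = np.corrcoef(upper_triangle(rdm(model_acts)),
                    upper_triangle(rdm(fmri_resp)))[0, 1]
print(f"model-brain similarity: {score:.3f}")
```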

Hierarchical processing

The new study also supports the idea that the human auditory cortex has some degree of hierarchical organization, in which processing is divided into stages that support distinct computational functions. As in the 2018 study, the researchers found that representations generated in earlier stages of the model most closely resemble those seen in the primary auditory cortex, while representations generated in later model stages more closely resemble those generated in brain regions beyond the primary cortex.
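One way to picture how such a hierarchy claim can be tested (again a hedged sketch with random stand-in data, not the study's actual analysis): score every model stage against each brain region and check which stage matches best. Under a hierarchical correspondence, early stages should win for primary auditory cortex and later stages for regions beyond it.

```python
import numpy as np

rng = np.random.default_rng(2)
n_stimuli, n_layers = 20, 6

# Hypothetical per-layer activations and two brain regions' responses
layer_acts = [rng.standard_normal((n_stimuli, 64)) for _ in range(n_layers)]
regions = {"primary auditory cortex": rng.standard_normal((n_stimuli, 300)),
           "non-primary cortex": rng.standard_normal((n_stimuli, 300))}

def rdm_vec(responses):
    """Flattened upper triangle of the stimulus dissimilarity matrix."""
    d = 1.0 - np.corrcoef(responses)
    return d[np.triu_indices_from(d, k=1)]

for name, resp in regions.items():
    scores = [np.corrcoef(rdm_vec(acts), rdm_vec(resp))[0, 1]
              for acts in layer_acts]
    # With real data, the best-matching stage would be early for primary
    # cortex and late for downstream regions if the hierarchy holds
    print(f"{name}: best-matching model stage = {int(np.argmax(scores))}")
```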

Additionally, the researchers found that models that had been trained on different tasks were better at replicating different aspects of audition. For example, models trained on a speech-related task more closely resembled speech-selective areas.

“The study suggests that models that are derived from machine learning are a step in the right direction, and it gives us some clues as to what tends to make them better models of the brain.”

Brendon Peterson

Artificial Intelligence Expert


