Computational models that mimic the structure and function of the human auditory system could help researchers design better hearing aids, cochlear implants, and brain-machine interfaces. A new study from MIT has found that modern computational models derived from machine learning are moving closer to this goal.
Deep neural networks are computational models that consist of many layers of information-processing units and can be trained on huge volumes of data to perform specific tasks. This type of model has become widely used in many applications, and neuroscientists have begun to explore the possibility that these systems can also be used to describe how the human brain performs certain tasks.
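To make the idea of "many layers of information-processing units" concrete, here is a minimal sketch of a feed-forward network in NumPy. The layer sizes, weights, and ReLU units are arbitrary illustrative assumptions, not any model from the study; the point is only that each layer transforms the previous layer's output and produces its own activation pattern.

```python
import numpy as np

rng = np.random.default_rng(42)
layer_sizes = [64, 32, 16, 8]  # input feature size, then three processing layers

def forward(x, weights):
    """Pass an input vector through each layer, collecting activation patterns."""
    activations = []
    for W in weights:
        x = np.maximum(0, W @ x)  # each unit applies a weighted sum + ReLU
        activations.append(x)     # the layer's activation pattern for this input
    return activations

# Random stand-in weights, one matrix per layer
weights = [rng.normal(scale=0.1, size=(n_out, n_in))
           for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

acts = forward(rng.normal(size=64), weights)  # activations at every stage
```

In a trained model these weights would be learned from data; here they are random, but the layer-by-layer flow of activation patterns is the same.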
When a neural network is performing a task, its processing units generate activation patterns in response to each audio input it receives. Those model representations of the input can be compared to the activation patterns seen in fMRI brain scans of people listening to the same input.
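One common way to make such a comparison is to ask how well a model layer's activations linearly predict fMRI voxel responses to the same stimuli. The sketch below uses synthetic data, closed-form ridge regression, and a Pearson-correlation score; all names, shapes, and parameters are illustrative assumptions, not the study's actual analysis pipeline.

```python
import numpy as np

def layer_brain_similarity(layer_acts, voxel_resps, alpha=1.0, n_train=80):
    """Score how well model activations predict brain responses.

    layer_acts:  (n_stimuli, n_units)  activation pattern per sound
    voxel_resps: (n_stimuli, n_voxels) fMRI response per sound
    Returns the mean held-out Pearson correlation across voxels.
    """
    X_tr, X_te = layer_acts[:n_train], layer_acts[n_train:]
    Y_tr, Y_te = voxel_resps[:n_train], voxel_resps[n_train:]

    # Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y
    d = X_tr.shape[1]
    W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(d), X_tr.T @ Y_tr)
    pred = X_te @ W

    # Pearson correlation between predicted and measured responses, per voxel
    p = pred - pred.mean(axis=0)
    y = Y_te - Y_te.mean(axis=0)
    r = (p * y).sum(axis=0) / (
        np.linalg.norm(p, axis=0) * np.linalg.norm(y, axis=0) + 1e-9)
    return r.mean()

# Synthetic demo: fake activations for 100 sounds, and voxel responses that
# are a noisy linear function of them, so the score should be high.
rng = np.random.default_rng(0)
acts = rng.normal(size=(100, 50))
resp = acts @ rng.normal(size=(50, 20)) + 0.1 * rng.normal(size=(100, 20))
score = layer_brain_similarity(acts, resp)
```

Running this per layer and per brain region is what lets researchers ask which model stages best match which parts of the auditory cortex.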
The new study also supports the idea that the human auditory cortex has some degree of hierarchical organization, in which processing is divided into stages that support distinct computational functions. As in the 2018 study, the researchers found that representations generated in earlier stages of the model most closely resemble those seen in the primary auditory cortex, while representations generated in later model stages more closely resemble those generated in brain regions beyond the primary cortex.
Additionally, the researchers found that models that had been trained on different tasks were better at replicating different aspects of audition. For example, models trained on a speech-related task more closely resembled speech-selective areas.
“The study suggests that models that are derived from machine learning are a step in the right direction, and it gives us some clues as to what tends to make them better models of the brain.”