Patel “is really a scholar of the science of music and the brain, with a particular interest in music and language,” said NCCAM director Dr. Josephine Briggs, introducing the lecture. “That’s of great interest to NCCAM, as we are interested in the therapeutic potential of music.”
Advanced Tools, Better Insights
Development of neuroimaging tools—fMRI, PET, MEG and ERP, for example—that can examine the brain at multiple temporal and spatial scales gives “us windows onto the brain that we never had before,” said Patel.
Magnetoencephalography (MEG) maps brain activity via magnetic fields made by naturally occurring electrical currents. Event-related potentials (ERPs) are brain response measurements captured via electroencephalography.
“One of the things we’ve learned quickly with these techniques is that music processing is widely distributed in the brain,” he said. “There is no single music center in the brain.”
Patel contrasted brain images of musicians and non-musicians. He showed an fMRI of a non-musician’s brain listening to harmonic music sequences. Brain areas activated by the music included not only the expected auditory regions, but also frontal, temporal and parietal locations.
“This is just one aspect of music—the harmonic,” Patel said. “If I could overlay activations for rhythm, memory, emotion…we’d see wide swaths of the brain activated. This raises the idea that music has the potential to interact with many brain functions that are also distributed in different regions of the cortex such as language, memory and attention.”
Patel discussed transient effects, or “what happens when the music is on,” as well as lasting effects—what happens in the weeks, months or even years afterward. His talk focused specifically on musical training’s effect on other cognitive domains.
Why does the brain respond so strongly to music? Patel cited the “Multiple Mechanisms Theory,” which suggests that music activates six different pathways to emotion in the brain.
Acknowledging music’s remarkable influence over our emotions and the significant research already existing in that area, he said, “I’ll be treating emotion as an enabler of other biological effects. It’s a powerful internal reward that drives other brain systems.”
To illustrate music’s transient effects, Patel talked about a 2011 randomized, double-blind, placebo-controlled study of people undergoing total hip joint replacement under light sedation (spinal anesthesia).
One group listened to instrumental music; the other heard sounds of ocean waves. Results? Music listeners required about 15 percent less anesthetic to reach target sedation levels and had about 20 percent lower levels of the stress hormone cortisol during surgery.
What neural mechanisms might explain these effects? Study authors offered three possibilities, working alone or in combination:
- Activation of the mesolimbic dopaminergic system. Release of the natural chemical dopamine has been linked to pleasure/reward areas in the brain.
- Downregulation of the central nucleus of the amygdala. The amygdala is associated with brain regions that respond to pain.
- Engagement of cognitive/attention resources.
Music, Language—Housemates in the Brain?
Patel also talked about music and neural plasticity. In slides of children’s brains, he pointed out evidence of enhanced brain development—“more gray matter in certain areas,” as well as a larger corpus callosum—in kids learning to play a musical instrument.
“We know now that musical training drives structural and functional changes in the brain,” he said, citing 2012 research by Herholz & Zatorre in Neuron.
Patel said that what we already know about these neighboring regions of the brain and how they interact suggests it’s possible that music training can change the way the brain handles language and speech.
Linguistic and non-linguistic auditory domains in the brain overlap, he pointed out. Language, like music, involves processing complex hierarchical sound sequences.
So looking at potential practical applications, might training in music prime pre-literate brains for reading?
“That may seem far-fetched,” Patel said, “but there’s a growing body of evidence that there are links between early rhythmic skills, musical training and phonological awareness, [which is] the ability to understand that words are made up of individual sounds that have to be segmented out and that can be manipulated. That’s a key step in learning how to read.”
Current research suggests that auditory processing is not a one-way street, he said. Scientists in recent years have moved away from the idea that hearing is a direct process—from cochlea to cortex; research now suggests a more complex auditory process that involves cognition. In other words, sound traffic flows in several lanes, in many directions across the brain.
“It’s a very dynamic system that tunes itself,” Patel said.
Patel recently proposed the OPERA hypothesis to explain why music training might enhance neural encoding for speech. OPERA stands for:
- Overlap in the brain networks shared by music and language
- higher Precision demands in music than in speech
- positive Emotion
- extensive Repetition
- focused Attention
“The key novel idea is that there is an asymmetry, that music is demanding more of the nervous system than speech does in terms of certain auditory processing,” he explained. “Since speech and music share certain brain networks for auditory processing, speech benefits because of the plastic changes in those networks.”
Patel said he’s now expanding on the OPERA framework and developing studies (see ‘SIMPHONY’ sidebar) with colleagues to further document the links between music and language.
Following his lecture, Patel addressed questions about people who start and later quit music training and whether learning to play music might help you learn a foreign language.
The entire lecture is archived online at http://videocast.nih.gov/launch.asp?17947.
Hear a ‘SIMPHONY’?
Dr. Aniruddh Patel of Tufts University says the time is ripe for more music cognition research. One example he cited is SIMPHONY (Studying the Influence Music Practice Has On Neurodevelopment in Youth). He and colleagues began the study in July 2012 to look at how music training affects development of language, executive function and attention.
SIMPHONY involves three groups of children ages 5 to 8: a music-training group, an active control group taking martial arts and a passive control group with no extracurricular activity. The youngsters will be tested once a year for 5 years; they’ll undergo 6 hours of behavioral/cognitive testing, MRIs and EEGs.
Collaborating to get SIMPHONY started were Dr. Terry Jernigan and Dr. John Iversen at the University of California, San Diego, and Dalouge Smith of the San Diego Youth Symphony. The project is now being led by Iversen.
The study was based on PLING (Pediatric Longitudinal Imaging, Neurocognition and Genetics), an NIH-funded study started in 2011 by scientists at UCSD to examine children’s brain development using MRI. NICHD awarded the PLING grant; Jernigan is principal investigator.