Say what? A cartoon depicting two conversations occurring on different "wavelengths" at a cocktail party. The brains of listeners are able to ignore the nearby voices of others while homing in on just one speaker. (Image: Zion-Golumbic et al., Neuron)

Solving the 'Cocktail Party Problem'

Many smartphones claim to filter out background noise, but they've got nothing on the human brain. We can tune in to just one speaker at a noisy cocktail party with little difficulty, an ability that has been a scientific mystery since the early 1950s. Now, researchers argue that the competing chatter of other partygoers is filtered out in the brain before it reaches regions involved in higher cognitive functions, such as language and attention control, and their experiments are the first to demonstrate this process directly.

The scientists didn't do anything as social as attend a noisy party. Instead, Charles Schroeder, a psychiatrist at the Columbia University College of Physicians and Surgeons in New York City, and colleagues recorded the brain activity of six people with intractable epilepsy who required brain surgery. To identify the part of the brain responsible for their seizures, the patients underwent 1 to 4 weeks of monitoring with electrocorticography (ECoG), a technique that provides precise neural recordings via electrodes placed directly on the surface of the brain. Schroeder and his team ran their experiments during this monitoring period, using the ECoG recordings.

The researchers showed the patients two videos simultaneously, each of a person telling a 9- to 12-second story, and asked them to concentrate on just one speaker. To determine which neural recordings corresponded to the "ignored" and "attended" speech, the team used a mathematical model to reconstruct speech patterns from the brain's electrical activity, then matched the reconstructions against the original speech of the ignored and attended speakers.
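
The article doesn't spell out the model, but stimulus reconstruction of this kind is often implemented as a regularized linear decoder: map the multi-electrode recordings back to a speech envelope, then ask which speaker's envelope the reconstruction correlates with best. Below is a minimal sketch on synthetic data; the ridge decoder, the toy signal model, and every variable name are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two speech envelopes, and simulated multi-electrode activity
# that reflects the attended envelope more strongly than the ignored one.
n_samples, n_electrodes = 1000, 16
attended = rng.standard_normal(n_samples)
ignored = rng.standard_normal(n_samples)
weights = rng.standard_normal(n_electrodes)
neural = (np.outer(attended, weights)
          + 0.3 * np.outer(ignored, weights)
          + 0.5 * rng.standard_normal((n_samples, n_electrodes)))

# Ridge-regression decoder trained to reconstruct the attended envelope
# (in practice the decoder would be fit and tested on separate data).
lam = 1.0
XtX = neural.T @ neural + lam * np.eye(n_electrodes)
decoder = np.linalg.solve(XtX, neural.T @ attended)
reconstruction = neural @ decoder

# The matching step: correlate the reconstruction with each original envelope.
r_att = np.corrcoef(reconstruction, attended)[0, 1]
r_ign = np.corrcoef(reconstruction, ignored)[0, 1]
print(f"correlation with attended: {r_att:.2f}, with ignored: {r_ign:.2f}")
```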

The patients' brains registered both the attended and the ignored speech, though with some preference for the attended speech, the researchers report online today in Neuron. Because the team recorded from several regions of each patient's brain, they could see that regions associated with "higher-order" abilities, such as the inferior frontal cortex, which is involved in language, represented only the attended speech. Moreover, this representation of the attended speech improved as the speaker's story unfolded. The findings support a continuous model of attention, called the "selective entrainment hypothesis," in which the brain locks onto a particular voice and becomes increasingly selective for it.
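
To make "increasingly selective" concrete, one could correlate a reconstructed envelope with each speaker in successive time windows and watch the gap between attended and ignored widen over the story. The sketch below simulates that ramp with synthetic data; the linear gain ramp, the window length, and all names are assumptions for illustration, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1200
attended = rng.standard_normal(n)
ignored = rng.standard_normal(n)

# Simulate a reconstruction whose fidelity to the attended stream ramps up
# over the course of the story, as the entrainment account predicts.
gain = np.linspace(0.2, 1.0, n)
reconstruction = (gain * attended + (1 - gain) * ignored
                  + 0.5 * rng.standard_normal(n))

# Correlate against each speaker in successive windows; the attended-ignored
# gap should grow from the first window to the last.
win = 200
for start in range(0, n - win + 1, win):
    seg = slice(start, start + win)
    r_att = np.corrcoef(reconstruction[seg], attended[seg])[0, 1]
    r_ign = np.corrcoef(reconstruction[seg], ignored[seg])[0, 1]
    print(f"samples {start:4d}-{start + win:4d}:  "
          f"attended r = {r_att:+.2f}   ignored r = {r_ign:+.2f}")
```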

The research supports the selective entrainment hypothesis, agrees Jason Bohland, director of Boston University's Quantitative Neuroscience Laboratory, but it "doesn't necessarily tell us how that happens. That's a really hard question, and is still left very much up in the air."

Though a less invasive technology than ECoG would be needed, Bohland and Schroeder agree that this line of research could yield useful clinical markers for people with certain social disorders. People with attention deficit disorder, for example, may struggle to track specific voices or to filter out unwanted neural representations of sounds, and those difficulties should show up in their brain activity.

Schroeder says the study is part of a new wave of research that aims to "approximate a map of the total brain circuit that's involved in [complex] things like speech and music perception, which people consider—rightly or wrongly—to be uniquely human."
