A Baby Step for Computer Learning
23 July 2007
Infants still have a chubby leg up on the best supercomputers when it comes to picking up languages, but now researchers have created a program that can teach itself to distinguish vowel sounds like those in "train" and "bed." The software, the first to figure out vowel categories from human sounds, lends credence to the notion that language is more learned than innate.
Psychologists have long been baffled by how children catch on to language's nuances. For decades, researchers debated whether language is acquired through experience or whether humans are wired for it. The dispute motivated computer scientists to explore the question electronically, using so-called neural networks, computer programs in which layers of virtual neurons send messages back and forth. Just as the brain strengthens its own neural connections by repetition, the neural network learns by reinforcing the same output from similar inputs. For example, if the computer has just sorted "play" into the "long a" vowel category, it's more likely "hay" would end up in the same category on a future iteration. But mothers don't conveniently sort out vowel sounds, like "ay," "ee," "eye," "oh," and "ewe," into neat categories for their infants. Somehow, the child figures out how many vowel categories are supposed to exist while sorting through sounds.
James "Jay" McClelland, a cognitive neuroscientist at Stanford University in Palo Alto, California, wanted his computer to do the same. So he and colleagues recorded 30 mothers reading aloud to their infants and fed the audio clips into a computer that was able to categorize the sounds by their duration and quality, or the particular resonances in the vocal tract. To keep things simple, the team only used four vowel sounds, the ones in "beet," "bait," "bit," and "bet."
Instead of telling the neural network ahead of time how many categories to expect, the researchers had it continually guess how many categories there ought to be while analyzing thousands of sound clips. The program quickly began clustering the sounds into only a few vowel categories. It could lump the vowel sounds from the average mother into four categories more than 80% of the time, McClelland's team reports online this week in the Proceedings of the National Academy of Sciences. And that, says McClelland, bolsters the idea that much can be learned with little assumed.
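The distinctive move here is letting the number of categories emerge from the data rather than fixing it in advance. The PNAS model is more sophisticated than anything shown below, but that core idea can be sketched with a simple online "leader" clustering rule. Everything in this sketch — the two-dimensional feature space standing in for duration and vowel quality, the distance threshold, the synthetic data — is a hypothetical illustration, not the authors' algorithm:

```python
import math
import random

def leader_cluster(points, threshold):
    """Online 'leader' clustering: each point joins the nearest existing
    cluster if that cluster's centre is within `threshold`, otherwise it
    founds a new cluster. The number of clusters is discovered, never
    specified in advance."""
    centres, counts = [], []
    for x, y in points:
        best, best_d = None, threshold
        for i, (cx, cy) in enumerate(centres):
            d = math.hypot(x - cx, y - cy)
            if d < best_d:
                best, best_d = i, d
        if best is None:
            centres.append((x, y))        # a brand-new category
            counts.append(1)
        else:                             # fold the point into the running mean
            n = counts[best] + 1
            cx, cy = centres[best]
            centres[best] = (cx + (x - cx) / n, cy + (y - cy) / n)
            counts[best] = n
    return list(zip(centres, counts))

# Synthetic "vowel tokens": four tight clouds in a 2-D feature space,
# loosely analogous to the duration/quality features in the study.
random.seed(0)
true_centres = [(1, 1), (1, 4), (4, 1), (4, 4)]
tokens = [(cx + random.gauss(0, 0.2), cy + random.gauss(0, 0.2))
          for cx, cy in true_centres for _ in range(50)]
random.shuffle(tokens)

clusters = leader_cluster(tokens, threshold=1.2)
print(f"discovered {len(clusters)} categories from {len(tokens)} tokens")
```

A threshold set too wide would merge distinct vowels into one category, and one set too narrow would splinter each vowel into several — roughly the trade-off the real network has to resolve statistically from the data itself.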
Experts are giving the research a thumbs-up. "It's an important first step in the right direction," says cognitive robotics expert Bart de Boer of the University of Groningen, the Netherlands, especially because the neural network is biologically plausible in how it reinforces behavior. Infants use more than sound to learn language, and McClelland would like to see a more powerful neural network that could "lip-read" by accompanying sounds with a picture of the mouth. But if a human brain learns anything like the way a computer does, the findings mean that language acquisition isn't as hard-wired as once thought.