A Baby Step for Computer Learning
23 July 2007
Infants still have a chubby leg up on the best supercomputers when it comes to picking up languages, but now researchers have created a program that can teach itself to distinguish vowel sounds like those in "train" and "bed." The software, the first to figure out vowel categories from human sounds, lends credence to the notion that language is more learned than innate.
Psychologists have long been baffled by how children catch on to language's nuances. For decades, researchers debated whether language is acquired through experience or whether humans are wired for it. The dispute motivated computer scientists to explore the question electronically, using so-called neural networks, computer programs in which layers of virtual neurons send messages back and forth. Just as the brain strengthens its own neural connections by repetition, the neural network learns by reinforcing the same output from similar inputs. For example, if the computer has just sorted "play" into the "long a" vowel category, it is more likely to place "hay" in the same category on a future iteration. But mothers don't conveniently sort out vowel sounds, like "ay," "ee," "eye," "oh," and "ewe," into neat categories for their infants. Somehow, the child figures out how many vowel categories are supposed to exist while sorting through sounds.
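The article doesn't describe the model's internals, but the reinforcement idea it sketches — similar inputs increasingly winning the same output — resembles simple competitive learning. Here is a minimal illustrative sketch (not the study's actual network): two output units compete for each input, and the winner's weights are nudged toward that input, so a "play"-like sound trains a unit that a "hay"-like sound then also activates. The feature vectors and learning rate are invented for illustration.

```python
import random

random.seed(1)

# Two "output units", each with a weight vector over two input features.
weights = [[random.random(), random.random()] for _ in range(2)]

def respond(x):
    """Return the index of the unit whose weights best match the input."""
    return max(range(len(weights)),
               key=lambda i: sum(w * v for w, v in zip(weights[i], x)))

def reinforce(x, rate=0.1):
    """Nudge the winning unit's weights toward the input, so similar
    inputs become more likely to win the same unit next time."""
    i = respond(x)
    weights[i] = [w + rate * (v - w) for w, v in zip(weights[i], x)]
    return i

# Repeatedly presenting a "long a"-like input strengthens one unit...
a_like = [1.0, 0.1]          # invented feature vector for "play"
unit = reinforce(a_like)
for _ in range(20):
    reinforce(a_like)

# ...so a similar input ("hay") now maps to the same unit.
hay_like = [0.9, 0.2]        # invented feature vector for "hay"
print(respond(hay_like) == unit)
```

The update rule moves only the winning unit, which is one simple way to get the "reinforcing the same output from similar inputs" behavior the article describes.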
James "Jay" McClelland, a cognitive neuroscientist at Stanford University in Palo Alto, California, wanted his computer to do the same. So he and colleagues recorded 30 mothers reading aloud to their infants and fed the audio clips into a computer that was able to categorize the sounds by their duration and quality, or the particular resonances in the vocal tract. To keep things simple, the team used only four vowel sounds, the ones in "beet," "bait," "bit," and "bet."
Instead of telling the neural network ahead of time how many categories to expect, the researchers had it constantly guess how many there ought to be while analyzing thousands of sound clips. The program quickly began clustering the sounds into only a few vowel categories. It could lump the vowel sounds from the average mother into four categories more than 80% of the time, McClelland's team reports online this week in the Proceedings of the National Academy of Sciences. And that, says McClelland, bolsters the idea that much can be learned with little assumed.
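The core trick — deciding on the fly whether a new sound fits an existing category or warrants a brand-new one — can be sketched with a simple threshold-based incremental clusterer. This is an illustrative stand-in, not the paper's model: the (duration, formant-frequency) features, cluster centers, and threshold are all invented assumptions.

```python
import math
import random

def incremental_cluster(points, threshold):
    """Assign each point to the nearest cluster centroid, opening a new
    cluster whenever no centroid lies within `threshold` distance.
    Returns the discovered centroids and each point's cluster index."""
    centroids = []   # running means, one per discovered category
    counts = []      # points absorbed by each centroid
    labels = []
    for x, y in points:
        # Find the nearest existing centroid.
        best, best_d = None, float("inf")
        for i, (cx, cy) in enumerate(centroids):
            d = math.hypot(x - cx, y - cy)
            if d < best_d:
                best, best_d = i, d
        if best is None or best_d > threshold:
            centroids.append((x, y))          # open a new category
            counts.append(1)
            labels.append(len(centroids) - 1)
        else:
            # Fold the point into the running mean of its category.
            n = counts[best] + 1
            cx, cy = centroids[best]
            centroids[best] = (cx + (x - cx) / n, cy + (y - cy) / n)
            counts[best] = n
            labels.append(best)
    return centroids, labels

# Toy "vowel tokens": (duration in ms, formant frequency in Hz),
# drawn around two invented category centers.
random.seed(0)
tokens = [(120 + random.gauss(0, 5), 300 + random.gauss(0, 10)) for _ in range(50)]
tokens += [(90 + random.gauss(0, 5), 600 + random.gauss(0, 10)) for _ in range(50)]
random.shuffle(tokens)

centroids, labels = incremental_cluster(tokens, threshold=100.0)
print(len(centroids))  # the clusterer discovers the category count itself
```

The point of the sketch is that the number of categories is an output of the procedure, not an input, which is what distinguished the study's network from earlier approaches that were told how many vowels to expect.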
Experts are giving the research a thumbs-up. "It's an important first step in the right direction," says cognitive robotics expert Bart de Boer of the University of Groningen, the Netherlands, especially because the neural network is biologically plausible in how it reinforces behavior. Infants use more than sound to learn language, and McClelland would like to see a more powerful neural network that could "lip-read" by accompanying sounds with a picture of the mouth. But if a human brain learns anything like the way a computer does, the findings mean that language acquisition isn't as hard-wired as once thought.