If you've got a deep, dark secret, you can rest assured that today's brain imaging technology is a long way from being able to read your mind. But scientists are working on it. In tomorrow's issue of Nature, a team of neuroscientists describes a computer model that can identify a photograph a person has just seen by decoding patterns of neural activity collected by a functional magnetic resonance imaging (fMRI) scanner.
Researchers have had some previous success at using fMRI to determine what people have seen. But these studies either involved simple patterns (ScienceNOW, 25 April 2005) or focused on sorting objects into categories, such as houses and faces (Science, 28 September 2001). In the new study, neuroscientist Jack Gallant and colleagues at the University of California, Berkeley, attempted a more difficult feat: using activity in the brain's visual cortex to identify which of a large set of photographs a subject had just seen--even a photograph the subject had never laid eyes on before.
In the first phase of the study, two subjects--co-authors Kendrick Kay and Thomas Naselaris--each viewed 1750 photographs of a wide variety of objects and scenes while an fMRI scanner monitored responses in their visual cortex. Based on the fMRI data, the researchers divided the visual cortex into small cubes, or voxels, and created a mathematical model to characterize how each of these volumes responds to various visual features. For example, one voxel might be most active whenever a photograph contained closely spaced vertical lines in the center. By combining the models for hundreds of voxels, the researchers hoped to predict how the visual cortex would respond to any given image.
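The core idea can be sketched in a few lines: treat each voxel's response as a weighted combination of an image's visual features, and fit those weights from the training photographs. This is a simplified, hypothetical illustration (the study used Gabor-wavelet-style features and more sophisticated fitting; the dimensions and data here are simulated):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated dimensions: training images, image features, cortical voxels.
n_images, n_features, n_voxels = 1750, 50, 200

# Feature descriptions of the training photographs (assumed given).
X = rng.normal(size=(n_images, n_features))

# Simulated "true" voxel tuning and the measured fMRI responses.
true_weights = rng.normal(size=(n_features, n_voxels))
Y = X @ true_weights + 0.1 * rng.normal(size=(n_images, n_voxels))

# Fit one linear model per voxel by least squares; the recovered
# weights characterize which visual features drive each voxel.
weights, *_ = np.linalg.lstsq(X, Y, rcond=None)

def predict_response(image_features, weights):
    """Predict the whole-cortex activity pattern for a new image."""
    return image_features @ weights
```

With the fitted weights in hand, `predict_response` gives a predicted activity pattern for any image, seen or unseen, which is what makes the identification test in the next phase possible.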
To test these predictions, Kay and Naselaris went back into the scanner and viewed 120 photos they'd never seen before. Then the researchers compared the measured visual cortex activity to the activity predicted by the model for each photo. The model matched the pattern of brain activity with the correct photo 110 out of 120 times for Naselaris and 86 out of 120 for Kay. (Blind guessing would yield just one correct match, on average.) When Naselaris viewed a set of 1000 novel photos, the model still identified the correct image 82% of the time--an impressive feat given that the larger set contained more images with features in common.
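The identification step can be sketched as a simple matching procedure: compare the measured activity pattern against the model's predicted pattern for each of the 120 candidate photos and pick the best match. This is a hypothetical illustration with simulated patterns (the comparison metric here, correlation, is an assumption for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n_photos, n_voxels = 120, 200

# Model-predicted activity patterns for the 120 candidate photos.
predicted = rng.normal(size=(n_photos, n_voxels))

# Measured pattern: the subject viewed photo 42; add scanner noise.
viewed = 42
measured = predicted[viewed] + 0.5 * rng.normal(size=n_voxels)

def identify(measured, predicted):
    """Return the index of the candidate photo whose predicted
    pattern correlates best with the measured pattern."""
    corrs = [np.corrcoef(measured, p)[0, 1] for p in predicted]
    return int(np.argmax(corrs))

# Chance level: guessing among 120 photos succeeds 1/120 of the time,
# so 120 guesses yield one correct match on average.
```

Calling `identify(measured, predicted)` recovers the viewed photo as long as the noise doesn't swamp the signal, which is why the larger 1000-photo set, with more similar-looking candidates, is the harder test.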
The model is an improvement on past efforts because it incorporates hard-won findings from previous studies about how the visual system works, says Brian Wandell, a neuroscientist at Stanford University in Palo Alto, California. "It uses our knowledge of the brain in a way that's more profound than some of the [other] experiments."
That doesn't mean, however, that mind-reading brain scanners are just around the corner. Gallant notes that the new model can only identify photographs from a known set--as yet, no computer model can use fMRI data to reconstruct what someone has actually seen. One day, it may be possible to reconstruct the visual content of dreams or memories, Gallant says, but that's still far in the future. In other words, you've still got time to clean up your act.