Sophisticated mathematical modeling methods and a “CatCam” that captures feline-centric video of a forest are two elements of a new effort to explain how the brain’s visual circuitry processes real scenes. The new model of the neural responses of a major visual-processing brain region promises to significantly advance understanding of vision.
Valerio Mante and colleagues describe the model and its properties in the May 22, 2008, issue of Neuron, a Cell Press journal.
The researchers sought to develop the new model because, until now, studies of the visual system have relied largely on simple stimuli such as dots, bars, and gratings.
“Such simple, artificial stimuli present overwhelming advantages in terms of experimental control: their simple visual features can be tailored to isolate and study the function of one or few of the several mechanisms shaping the responses of visual neurons,” wrote the researchers. “Ultimately, however, we need to understand how neurons respond not only to these simple stimuli but also to image sequences that are arbitrarily complex, including those encountered in natural vision. The visual system evolved while viewing complex scenes, and its function may be uniquely adapted to the structure of natural images,” they wrote.
Specifically, the researchers sought to model the neuronal responses of the lateral geniculate nucleus (LGN) in the thalamus, a brain region that processes raw visual signals received from the retina.
To gather data for the model, they first recorded from LGN neurons in anesthetized cats while the cats were presented with drifting gratings of different sizes, locations, and spatial and temporal frequencies. They also varied the luminance and contrast of the stimuli.
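For readers unfamiliar with the stimulus: a drifting sinusoidal grating is fully specified by its mean luminance, contrast, and spatial and temporal frequencies. The sketch below shows the standard construction in Python; the parameter names and values are illustrative, not taken from the study.

```python
import numpy as np

def drifting_grating(x, t, mean_lum=50.0, contrast=0.5,
                     spatial_freq=0.5, temporal_freq=4.0):
    """Luminance of a one-dimensional drifting sinusoidal grating.

    x : spatial position (degrees of visual angle)
    t : time (seconds)
    mean_lum : mean luminance (cd/m^2)
    contrast : Michelson contrast, between 0 and 1
    spatial_freq : cycles per degree
    temporal_freq : drift rate in cycles per second (Hz)
    """
    return mean_lum * (1.0 + contrast *
                       np.cos(2 * np.pi * (spatial_freq * x - temporal_freq * t)))

# Example: sample the grating on a small space-time grid.
x = np.linspace(0, 4, 64)     # 4 degrees of visual field
t = np.linspace(0, 1, 100)    # 1 second of stimulation
stimulus = drifting_grating(x[None, :], t[:, None])  # shape: (time, space)
```

Varying mean_lum and contrast independently of the frequencies is what lets experimenters probe luminance and contrast adaptation with otherwise identical stimuli.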
From these data, they created a mathematical model that aimed to describe how these neurons respond and adapt to such changing stimuli. Their ultimate goal was a model that would describe the neural response not just to the gratings, but also to the complexities of natural scenes.
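The paper's full model has several interacting stages, but the general architecture it builds on, a linear receptive field whose output is divisively rescaled according to local luminance and local contrast before being rectified into a firing rate, can be sketched roughly as follows. Everything here, the filter shapes, constants, and function names, is an illustrative assumption rather than the authors' actual implementation.

```python
import numpy as np

def predict_lgn_rate(stimulus, dt=0.01,
                     lum_half=20.0, con_half=0.2, gain=100.0):
    """Illustrative LGN-style response model (not the paper's exact model).

    stimulus : 2-D array, time x space, in luminance units
    Returns a predicted firing rate over time.
    """
    # 1. Luminance gain control: divide out the local mean luminance,
    #    turning raw luminance into a contrast-like signal.
    local_lum = stimulus.mean(axis=1, keepdims=True)
    contrast_signal = (stimulus - local_lum) / (local_lum + lum_half)

    # 2. Linear receptive field: center-surround in space
    #    (difference of Gaussians), biphasic in time.
    space = np.linspace(-1, 1, stimulus.shape[1])
    dog = np.exp(-(space / 0.2) ** 2) - 0.5 * np.exp(-(space / 0.6) ** 2)
    spatial_out = contrast_signal @ dog

    time = np.arange(0, 0.2, dt)
    biphasic = (time / 0.05) * np.exp(-time / 0.05) \
             - 0.7 * (time / 0.08) * np.exp(-time / 0.08)
    linear_out = np.convolve(spatial_out, biphasic, mode='full')[:len(spatial_out)]

    # 3. Contrast gain control: divide by the prevailing contrast level.
    local_con = np.sqrt(np.mean(contrast_signal ** 2))
    controlled = linear_out / (local_con + con_half)

    # 4. Rectification: firing rates cannot go negative.
    return gain * np.maximum(controlled, 0.0)

# e.g., with the grating from the sketch above:
# rate = predict_lgn_rate(stimulus)
```

In models of this family, the two divisive steps are what allow a single set of parameters to remain accurate as the stimulus moves between bright and dim, or high- and low-contrast, regimes.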
To test their model, they presented cats with two kinds of natural scenes while recording from LGN neurons. One was video recorded from a “CatCam” mounted on the head of a cat as it roamed through a forest. The other consisted of short sequences from Tarzan, the animated Disney movie.
The researchers found that their model predicted “much of the responses to complex, rapidly changing stimuli… Specifically, the model captures how these responses are affected by changes in luminance and contrast level, overcoming many of the shortcomings of simpler models,” they wrote.
“Even though our model does not capture the operation of all known nonlinear mechanisms, it promises to be a useful tool to understand the computations performed by the early visual system,” they wrote.
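As a footnote on what it means for a model to “capture” responses: a common yardstick in this literature (illustrative here, not necessarily the paper's metric) is the fraction of variance in the recorded firing rate that the prediction accounts for.

```python
import numpy as np

def variance_explained(predicted, measured):
    """Fraction of the variance in the measured firing rate
    accounted for by the model prediction (1.0 = perfect)."""
    residual = measured - predicted
    return 1.0 - residual.var() / measured.var()
```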
Mante and colleagues have provided “a long-needed bridge between the two stimulus worlds,” wrote Garrett Stanley of the Georgia Institute of Technology, in a preview of the paper in the same issue of Neuron.
“By creating an encoding model from a set of experiments involving sinusoidal gratings at different mean luminances and contrasts, and subsequently demonstrating that this model predicts the neuronal response to an entirely different class of visual stimuli based on the visual scene alone, Mante, et al. have made this problem general and provided a powerful description of the encoding properties of the pathway,” wrote Stanley.
The researchers include Valerio Mante, Vincent Bonin, and Matteo Carandini, of the Smith-Kettlewell Eye Research Institute, San Francisco, CA.
Note: This story has been adapted from a news release issued by Cell Press.