Reconstructing a video from the retinal activity. Left: two example stimulus frames displayed to the rat retina. Middle and right: Reconstructions obtained with two different methods (sparse linear decoding in the middle and nonlinear decoding on the right). Green circles denote true disc positions. (Credit: Botella-Soler et al.)
Using artificial intelligence techniques, researchers successfully took signals from the retinas of rats and reconstructed movies of what they saw, a new study finds.
The retina is the layer of cells at the back of the eyeball that helps the eye sense light. The brain then decodes electrical signals transmitted from the retina’s neurons to reconstruct what the eye saw.
Eyeing the Eye
Much remains uncertain about what the brain actually does to decode retinal signals. This kind of research could help inform the design of retinal implants that aim to help people see again, said study senior author Gašper Tkačik, a neuroscientist at the Institute of Science and Technology Austria in Klosterneuburg.
The researchers’ ultimate aim was to help figure out what goes on between photons hitting our eyeballs and an image appearing in our brains. Understanding how the retina translates tiny packages of light into signals the brain can understand is a big piece of the puzzle.
To date, most efforts to study this decoding have depended on signals from fewer than 50 retinal neurons at once. In addition, that kind of work relied on signals from retinas exposed to relatively simple visual cues, such as “a single dot or bar moving at a certain speed in the same direction each time, very simple things far from the kinds of images retinas actually process,” Tkačik said.
Instead, Tkačik and his colleagues examined signals from patches of about 100 retinal neurons from rats generated in response to short movies of small disks moving in a complex random pattern. “We decided to bite the bullet and see how far we could push it,” Tkačik said. “We designed a much more complicated stimulus that we view as a bridge to the real thing.”
Findings in Translation
The scientists then experimented with a variety of decoding techniques on these retinal signals to see which ones did best at reconstructing the movies frame by frame, pixel by pixel. The simplest decoder formed each image as a weighted sum of the signals it received, what’s called a linear decoder. Though it was much more basic than other methods available, they found it was still able to produce accurate reconstructions of the movies.
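The idea of a linear decoder can be sketched in a few lines of code. The setup below is purely illustrative and not from the study: the data are synthetic, the shapes are toy values, and each pixel of a reconstructed frame is simply a weighted sum of binned spike counts, with the weights fit by least squares.

```python
import numpy as np

# Hypothetical toy shapes: T time bins, N = 100 neurons, P pixels per frame.
rng = np.random.default_rng(0)
T, N, P = 500, 100, 64
spikes = rng.poisson(1.0, size=(T, N)).astype(float)  # binned spike counts
frames = rng.random((T, P))                           # stimulus pixel values

# Linear decoder: each pixel is a weighted sum of the spike counts,
# plus a constant bias term. Fit the weights W by least squares.
X = np.hstack([spikes, np.ones((T, 1))])
W, *_ = np.linalg.lstsq(X, frames, rcond=None)

# Reconstruct the frames from retinal activity alone.
recon = X @ W
mse = np.mean((recon - frames) ** 2)
```

In practice a decoder like this would be fit on a training portion of the recording and evaluated on held-out frames; the sketch skips that split for brevity.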
The researchers also tested more sophisticated nonlinear decoders, “which describes a whole zoo of techniques,” Tkačik said. One nonlinear decoder they used involved a kind of machine-learning technique often employed to recognize typed or handwritten text. Another nonlinear decoder the scientists tested was a neural network, a kind of artificial intelligence system loosely based on how our brains operate. Computerized “neurons” are fed data and cooperate to solve a problem, such as reconstructing an image.
The neural net then repeatedly adjusted the connections between its neurons and determined if those new patterns of connections were better at solving the problem. Over time, the neural net discovered which patterns were best at computing solutions and adopted them as its defaults, mimicking the process of learning in the human brain.
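That adjust-and-check cycle can be sketched as a minimal training loop. Everything below is an illustrative toy, not the study's actual network: synthetic data, a single hidden layer, and plain gradient descent on the reconstruction error.

```python
import numpy as np

# Hypothetical toy setup: decode P pixels from N neurons' spike counts.
rng = np.random.default_rng(1)
T, N, H, P = 400, 100, 32, 64
spikes = rng.poisson(1.0, size=(T, N)).astype(float)
frames = rng.random((T, P))

# Random initial "connections" between layers of computerized neurons.
W1 = rng.normal(0, 0.1, (N, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, P)); b2 = np.zeros(P)

losses, lr = [], 1e-2
for step in range(200):
    # Forward pass: hidden neurons respond, then produce pixel estimates.
    h = np.tanh(spikes @ W1 + b1)
    out = h @ W2 + b2
    err = out - frames                  # how wrong the reconstruction is
    losses.append(np.mean(err ** 2))

    # Backward pass: nudge every connection to reduce the error.
    gW2 = h.T @ err / T;     gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)    # tanh derivative
    gW1 = spikes.T @ dh / T; gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

After repeated adjustments the reconstruction error falls, which is the "learning" the article describes; real decoders use far more data, deeper networks, and regularization.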
The researchers found these two very different nonlinear decoders reconstructed the videos more accurately than the linear decoder, with 10 to 30 percent lower error levels. “We were surprised — we had thought that maybe one would work while the other wouldn’t, but both performed almost equally well,” Tkačik said.
Tkačik and his colleagues suggested that both nonlinear decoders succeeded because they analyzed each neuron's signal in the context of that neuron's previous signals, which helped them ignore erroneous signals the retina might spontaneously generate. In contrast, the linear decoder might essentially “hallucinate” based on these faulty signals and construct an image of something that wasn’t really there.
All in all, this work could lead to a better understanding of how the brain decodes signals and what different types of retinal neurons actually do, the researchers said. “We want to know how the brain constructs visual reality from the retina,” Tkačik said.
In the future, the scientists want to try interpreting even more complex videos. “We’re far from getting a computational idea of how even a simple piece of neural tissue like the retina works, much less the brain, but understanding the retina might be within reach in the not too distant future,” Tkačik said.
The scientists detailed their findings online May 10 in the journal PLOS Computational Biology.