Your Brain Is an Energy-Efficient ‘Prediction Machine’
How our brain, a three-pound mass of tissue encased within a bony skull, creates perceptions from sensations is a long-standing mystery. Abundant evidence and decades of sustained research suggest that the brain cannot simply be assembling sensory information, as though it were putting together a jigsaw puzzle, to perceive its surroundings. The brain manages to construct a coherent scene from the light entering our eyes even when that incoming information is noisy and ambiguous, which mere piece-by-piece assembly could not achieve.
Consequently, many neuroscientists are pivoting to a view of the brain as a “prediction machine.” Through predictive processing, the brain uses its prior knowledge of the world to make inferences or generate hypotheses about the causes of incoming sensory information. Those hypotheses—and not the sensory inputs themselves—give rise to perceptions in our mind’s eye. The more ambiguous the input, the greater the reliance on prior knowledge.
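This trade-off between prior knowledge and sensory evidence is often formalized as Bayesian inference: each source of information is weighted by its reliability (its precision), so noisier input pulls the final estimate less. The sketch below is a minimal toy illustration of that principle, not a model from any of the labs mentioned here; the Gaussian assumption and the numbers are illustrative.

```python
def posterior_mean(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior belief with a noisy observation.

    The result is a precision-weighted average: whichever source
    has higher variance (more ambiguity) gets less weight.
    """
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    w_prior = prior_precision / (prior_precision + obs_precision)
    return w_prior * prior_mean + (1.0 - w_prior) * obs

# Prior belief: an object sits at position 0. The senses report position 10.
print(posterior_mean(0.0, 1.0, 10.0, 1.0))    # clear input: estimate 5.0
print(posterior_mean(0.0, 1.0, 10.0, 100.0))  # ambiguous input: ~0.1, near the prior
```

With a sharp observation the estimate lands halfway between prior and input; as the observation grows noisier, the estimate collapses toward the prior, matching the claim that ambiguity increases reliance on prior knowledge.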
“The beauty of the predictive processing framework [is] that it has a really large—sometimes critics might say too large—capacity to explain a lot of different phenomena in many different systems,” said Floris de Lange, a neuroscientist at the Predictive Brain Lab of Radboud University in the Netherlands.
However, the growing neuroscientific evidence for this idea has been mainly circumstantial and is open to alternative explanations. “If you look into cognitive neuroscience and neuro-imaging in humans, [there’s] a lot of evidence—but super-implicit, indirect evidence,” said Tim Kietzmann of Radboud University, whose research lies in the interdisciplinary area of machine learning and neuroscience.
So researchers are turning to computational models to understand and test the idea of the predictive brain. Computational neuroscientists have built artificial neural networks, with designs inspired by the behavior of biological neurons, that learn to make predictions about incoming information. These models show some uncanny abilities that seem to mimic those of real brains. Some experiments with these models even hint that brains had to evolve as prediction machines to satisfy energy constraints.
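The core training loop in such models is simple in spirit: the network predicts its next input, compares the prediction against what actually arrives, and adjusts its weights to shrink the prediction error. The following is a deliberately minimal sketch of that idea using a single linear predictor on a toy sensory stream; real predictive-coding networks are hierarchical and far richer, and none of the details below come from the studies discussed in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sensory stream": a noisy sine wave sampled over time.
t = np.arange(200)
signal = np.sin(0.1 * t) + 0.05 * rng.standard_normal(len(t))

# A minimal prediction machine: a linear model that predicts the next
# sample from the previous k samples, trained by gradient descent
# on the prediction error (the LMS rule).
k = 5
w = np.zeros(k)
lr = 0.05
for epoch in range(5):
    for step in range(k, len(signal)):
        context = signal[step - k:step]
        prediction = w @ context
        error = signal[step] - prediction  # the "prediction error" signal
        w += lr * error * context          # update weights to shrink future errors

# After training, prediction errors are much smaller than the raw signal.
recent_errors = [abs(signal[s] - w @ signal[s - k:s]) for s in range(150, 200)]
print(np.mean(recent_errors))
```

Once trained, the model carries an implicit internal expectation of how the stream evolves, and only the residual errors, the surprises, drive further learning. That economy is one reason prediction is attractive under energy constraints: a well-predicted input requires little new signaling.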
And as computational models proliferate, neuroscientists studying live animals are also becoming more convinced that brains learn to infer the causes of sensory inputs. While the exact details of how the brain does this remain hazy, the broad brushstrokes are becoming clearer.
Unconscious Inferences in Perception
Predictive processing may seem at first like a counterintuitively complex mechanism for perception, but there is a long history of scientists turning to it because other explanations seemed wanting. Even a thousand years ago, the Muslim Arab astronomer and mathematician Hasan Ibn Al-Haytham highlighted a form of it in his Book of Optics to explain various aspects of vision. The idea gathered force in the 1860s, when the German physicist and physician Hermann von Helmholtz argued that the brain infers the external causes of its incoming sensory inputs rather than constructing its perceptions “bottom up” from those inputs.
Helmholtz expounded this concept of “unconscious inference” to explain bi-stable or multi-stable perception, in which an image can be perceived in more than one way. This occurs, for example, with the well-known ambiguous image that we can perceive as a duck or a rabbit: Our perception keeps flipping between the two animal images. In such cases, Helmholtz asserted that the perception must be an outcome of an unconscious process of top-down inferences about the causes of sensory data since the image that forms on the retina doesn’t change.
During the 20th century, cognitive psychologists continued to build the case that perception was a process of active construction that drew on both bottom-up sensory and top-down conceptual inputs. The effort culminated in an influential 1980 paper, “Perceptions as Hypotheses,” by the late Richard Langton Gregory, which argued that perceptual illusions are essentially the brain’s erroneous guesses about the causes of sensory impressions. Meanwhile, computer vision scientists stumbled in their efforts to use bottom-up reconstruction to enable computers to see without an internal “generative” model for reference.