Researchers used brain MRIs to build an AI that thinks like a human brain — and it is more resilient than standard deep learning
A team of researchers in Beijing has built an artificial neural network modelled directly on the primate brain’s visual system — and the result is an AI that makes decisions more like a human and holds up far better under stress than conventional deep learning models.
The work, published this month in the Proceedings of the National Academy of Sciences, introduces what the authors call a Primate-Informed Neural Network, or PINN. Rather than stacking generic computational layers as most deep learning architectures do, the PINN replicates the specific wiring of the dorsal visual pathway — the chain of brain regions that processes motion and spatial information in primates, running from the lateral geniculate nucleus (LGN) through primary visual cortex (V1), motion area MT, and into the lateral intraparietal area (LIP), where decisions are formed.
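To make the pathway idea concrete, here is a minimal sketch of a feedforward chain with one stage per brain area. All stage sizes and weights here are made up for illustration; the actual PINN uses biophysically detailed modules, not random linear maps.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stage sizes -- the real model's populations are far richer.
stages = ["LGN", "V1", "MT", "LIP"]
sizes = {"input": 64, "LGN": 64, "V1": 128, "MT": 64, "LIP": 2}

# One random weight matrix per stage, scaled to keep activity bounded.
weights = {}
prev = "input"
for s in stages:
    weights[s] = rng.standard_normal((sizes[s], sizes[prev])) / np.sqrt(sizes[prev])
    prev = s

def forward(x):
    """Pass a stimulus through the LGN -> V1 -> MT -> LIP chain."""
    for s in stages:
        x = np.maximum(weights[s] @ x, 0.0)  # rectified output of each stage
    return x  # LIP activity: one unit per choice (e.g. left / right)

out = forward(rng.standard_normal(64))
```

The point of fixing the chain LGN → V1 → MT → LIP, rather than stacking generic layers, is that each stage can then be given the response properties its biological counterpart actually has.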
Building a brain, not just imitating one
The distinction matters. Most so-called “brain-inspired” AI borrows loosely from neuroscience — the neuron as a threshold unit, layers as cortical areas — but discards the actual dynamics. The PINN keeps them. Each simulated neuron and synapse follows equations that capture the real timing, excitation, and inhibition found in primate cortex. The model was fed random-dot motion stimuli, the same kind used in classic primate electrophysiology experiments, and asked to decide whether dots were moving left or right.
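The "real dynamics" point can be illustrated with the simplest possible case: a single firing-rate unit whose activity relaxes toward its input with a membrane-like time constant, rather than updating instantaneously as a standard deep-network unit does. This toy equation and its parameters are a generic neuroscience textbook form, not the paper's actual model.

```python
import numpy as np

def simulate_rate_unit(drive, tau=0.02, dt=0.001, gain=1.0):
    """Leaky firing-rate unit: tau * dr/dt = -r + relu(gain * drive).

    drive: array of input values, one per time step of length dt (seconds).
    Returns the firing rate at every step.
    """
    r = 0.0
    rates = []
    for x in drive:
        r += (dt / tau) * (-r + max(gain * x, 0.0))
        rates.append(r)
    return np.array(rates)

# Under constant drive the rate rises gradually and settles at gain * drive,
# with a timescale set by tau -- activity has inertia, unlike a ReLU layer.
rates = simulate_rate_unit(np.full(500, 2.0))
```

It is exactly this kind of temporal inertia, plus explicit excitation and inhibition, that lets the model reproduce the timing of real neural responses.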
Without any large-scale training regime, the PINN reproduced the decision-making behaviour seen in both monkeys and humans: the characteristic speed-accuracy trade-off, the gradual accumulation of evidence, the neural “ramping” activity in LIP that precedes a choice. It matched patterns from real primate recordings that purely data-driven networks struggle to replicate even after extensive training.
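The gradual accumulation of evidence and the speed-accuracy trade-off are classically captured by a drift-diffusion process: noisy evidence integrates over time until it hits a decision bound, and raising the bound buys accuracy at the cost of time. A minimal simulation (standard textbook model, not the paper's LIP circuit):

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_trial(drift, threshold, dt=0.002, noise=1.0):
    """Accumulate noisy evidence until it crosses +threshold or -threshold."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (x > 0), t  # (choice was correct, reaction time in seconds)

def run(threshold, n=500, drift=1.0):
    results = [ddm_trial(drift, threshold) for _ in range(n)]
    acc = float(np.mean([c for c, _ in results]))
    rt = float(np.mean([t for _, t in results]))
    return acc, rt

# A higher bound gives slower but more accurate decisions --
# the speed-accuracy trade-off the PINN reproduces.
acc_low, rt_low = run(threshold=0.5)
acc_high, rt_high = run(threshold=1.5)
```

The LIP "ramping" activity described in the article is the neural signature of exactly this integration: firing rates climb toward a threshold as evidence accumulates.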
Tougher under pressure
The researchers then stress-tested both the PINN and a comparable convolutional neural network (CNN) they call MotionNet, running four types of perturbation: adding noise, selectively damaging units, corrupting weights, and degrading the input stimulus. The PINN degraded far more gracefully in every condition. The CNN collapsed quickly once perturbations exceeded a threshold; the PINN continued functioning — much as a real brain continues to function after partial injury.
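One of the four perturbations, weight corruption, is easy to sketch as a harness: corrupt a model's parameters with noise of increasing scale and record how accuracy falls off. The toy linear "model" below is a stand-in invented for illustration; the study applied this kind of sweep to the full PINN and MotionNet.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a trained network: a linear classifier whose weights
# w_true perfectly separate the synthetic data by construction.
X = rng.standard_normal((1000, 10))
w_true = rng.standard_normal(10)
y = (X @ w_true > 0).astype(int)

def accuracy(w):
    return float(np.mean((X @ w > 0).astype(int) == y))

def degradation_curve(w, noise_scales, trials=20):
    """Mean accuracy after corrupting weights with Gaussian noise at each scale."""
    return [float(np.mean([accuracy(w + s * rng.standard_normal(w.shape))
                           for _ in range(trials)]))
            for s in noise_scales]

curve = degradation_curve(w_true, [0.0, 0.5, 1.0, 2.0])
```

The article's claim is about the *shape* of such curves: the CNN's accuracy fell off a cliff past a perturbation threshold, while the PINN's declined gradually.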
The team traced this robustness to the model’s energy landscape. By mapping out how the LIP module’s activity settles into stable states during decision-making, they showed the PINN operates in a regime with deeper, more separated attractor basins than the CNN — meaning perturbations are less likely to knock it into the wrong decision state. This kind of landscape analysis is borrowed directly from theoretical neuroscience and would be difficult to apply to a standard deep network.
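The attractor-basin intuition can be demonstrated with the simplest energy landscape there is: a double well, where each well is a decision state and the barrier between them is what noise must overcome to flip the decision. Deeper wells resist flipping. The parameters below are arbitrary illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def flip_probability(depth, noise=0.6, steps=2000, dt=0.01, runs=200):
    """Noisy gradient descent on the double-well energy E(x) = depth*(x^2-1)^2.

    Each run starts in the right-hand well (x = 1); we count the fraction of
    runs that noise has knocked into the left-hand well by the end.
    """
    flips = 0
    for _ in range(runs):
        x = 1.0
        for _ in range(steps):
            grad = 4 * depth * x * (x * x - 1)  # dE/dx
            x += -grad * dt + noise * np.sqrt(dt) * rng.standard_normal()
        flips += x < 0
    return flips / runs

# Same noise, different basin depth: deep attractors hold their state.
p_shallow = flip_probability(depth=0.5)
p_deep = flip_probability(depth=3.0)
```

This is the mechanism behind the robustness result: the PINN's decision states sit in deeper, better-separated basins, so the same perturbation that flips the CNN's output leaves the PINN's choice intact.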
Using MRI to tune the model
The most novel contribution may be the fine-tuning strategy. The researchers recruited human volunteers, measured their white-matter tract properties and functional connectivity using MRI, and then had them perform the same motion-perception task. They found specific structural MRI metrics — fractional anisotropy of fibre tracts connecting the relevant brain areas, resting-state connectivity between LIP and other regions — that predicted individual differences in perceptual performance.
Those correlations were then mapped onto the PINN’s parameters. Instead of searching the full parameter space during optimisation, the model was nudged toward configurations that matched the brain-behaviour relationships measured in real people. The result was better task performance, improved adaptability to novel stimuli, and a drastically smaller search space — all while keeping the model biologically plausible.
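One way to picture "mapping brain-behaviour correlations onto parameters" is as a constraint on the search: candidate parameter settings are kept only if the behaviour they predict is consistent with the relationship measured in people. Everything below — the linear relation, the two parameters, the toy performance function — is invented to illustrate the shape of the idea, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# HYPOTHETICAL measured relation: suppose MRI showed task performance
# scaling with a connectivity parameter g as perf ~ 0.8*g + 0.1.
def measured_relation(g):
    return 0.8 * g + 0.1

def predicted_perf(g, w):
    """Toy stand-in for running the model with connectivity g, readout gain w."""
    return np.tanh(g * w)

def constrained_search(n=5000, tol=0.05):
    """Random search that rejects candidates violating the measured relation."""
    kept = []
    for _ in range(n):
        g = rng.uniform(0.0, 2.0)
        w = rng.uniform(0.0, 2.0)
        if abs(predicted_perf(g, w) - measured_relation(g)) < tol:
            kept.append((g, w))
    return kept

kept = constrained_search()
fraction = len(kept) / 5000  # surviving slice of the parameter space
```

The payoff described in the article follows directly: only a small slice of parameter space survives the constraint, so optimisation has far less ground to cover, and every surviving configuration is brain-consistent by construction.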
The approach is an unusually direct pipeline from neuroimaging data to AI design. Most brain-inspired AI work treats neuroscience as motivation for an architecture choice made years ago; here the human brain data actively constrain the optimisation itself.
What it means for AI
The authors are careful not to oversell. The PINN is a specialised model for a well-defined perceptual task, not a general-purpose system. Its advantage over CNNs is clearest in low-data and high-noise regimes — exactly the conditions where deep learning tends to fail and where the brain tends to excel.
But the framework they describe — build from biological dynamics, validate against electrophysiology, fine-tune with neuroimaging — could in principle extend to other brain circuits and other cognitive tasks. The code for both the PINN and MotionNet is publicly available on GitHub.
The work came out of Qiyuan Laboratory and Beijing Normal University, supported by the National Natural Science Foundation of China.
Source: Su et al., “Primate-informed neural network for visual decision-making,” PNAS, January 9, 2026.