AI is revealing hidden mechanisms behind covert attention and even uncovering new neuron types. If you've ever kept track of something without moving your eyes, like scanning the road while driving or reading a room for reactions to a joke, you've used covert attention. Scientists know it happens, but its neural underpinnings have remained unclear. A team from UC Santa Barbara (Sudhanshu Srivastava, Miguel Eckstein, and William Wang) uses convolutional neural networks (CNNs) to illuminate this phenomenon and, in the process, identifies emergent neuron types that are later confirmed in real mouse brain data.
This work is being hailed as a concrete example of AI driving advances in neuroscience, cognitive science, and psychology. The findings appear in the Proceedings of the National Academy of Sciences.
Emergent properties and newly discovered neuron types
Attention in the brain acts like a spotlight or zoom lens, directing resources to a region in our visual field to improve perception. Modern computational and neurobiological models often embed an attention mechanism that enhances processing at the attended location by boosting signal or reducing noise.
In typical experiments on covert attention, researchers present a cue (a flash or arrow) before or with a briefly shown target and measure faster, more accurate detection when the cue and target align. The cue is thought to orient the brain’s attention system to the target location, altering how we process the visual input.
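The cueing benefit described above can be reproduced with a toy signal-enhancement model, in which attention multiplies the signal at the cued location before a noisy detection decision. This is a minimal sketch of a common modeling assumption, not the paper's CNN; the parameter values (`gain`, `noise_sd`, `criterion`) are purely illustrative.

```python
import random

random.seed(0)

def detect(cued, signal=1.0, gain=1.5, noise_sd=1.0, criterion=0.75):
    """One simulated target-present trial: attention multiplies the
    signal at the cued location by `gain` (signal-enhancement account),
    then the noisy response is compared to a fixed criterion."""
    strength = signal * (gain if cued else 1.0)
    response = strength + random.gauss(0.0, noise_sd)
    return response > criterion  # True = observer reports "target present"

def hit_rate(cued, n_trials=20_000):
    """Fraction of target-present trials correctly detected."""
    return sum(detect(cued) for _ in range(n_trials)) / n_trials

valid = hit_rate(cued=True)     # cue and target at the same location
invalid = hit_rate(cued=False)  # cue points away from the target
print(f"valid-cue hit rate:   {valid:.2f}")
print(f"invalid-cue hit rate: {invalid:.2f}")
```

With these illustrative values, the valid-cue hit rate comes out clearly higher than the invalid-cue one, mirroring the behavioral advantage measured in Posner-style experiments.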
Because covert attention feels so natural to humans, it’s easy to assume it’s tied to conscious awareness and specialized brain modules in the parietal region. Yet recent work has shown similar attentional behaviors in animals with simpler brains—archer fish, mice, and even bees—raising the possibility that covert attention could arise from networks across the brain rather than a single attention module.
Mapping how the brain optimizes attention for accuracy is hard: the human brain contains billions of neurons in a highly dynamic system, and current imaging cannot capture the activity of every single neuron. The next best thing is to study artificial systems that resemble brain networks. By building a relatively simple brain-like model and giving it tasks humans perform, researchers can peer under the hood of the AI to infer how real brains might organize to accomplish the same tasks.
Srivastava, Eckstein, and Wang demonstrated in 2024 that a CNN composed of 200,000 to 1 million neurons can display hallmark covert attention behaviors in various target-detection tasks, even though the model lacked an explicit attention-orienting mechanism. This suggested that covert attention could emerge from learning to detect targets as effectively as possible, in both artificial and biological systems.
Inside the CNN: what enables emergent covert attention?
The 2025 study asks a deeper question: what in the CNN creates these emergent attentional abilities? The team analyzed the internal workings of the CNNs instead of treating them as opaque black boxes. Whereas single-neuron recordings in a real brain can sample only thousands of cells at a time, every one of a CNN's roughly one million units can be inspected, yielding a detailed map of how units respond to cues and targets across the visual field.
Using a population of 1.8 million artificial neurons (180,000 units across 10 trained CNNs), the researchers subjected the AI to a Posner cueing task, a classic test of cue-based attention that measures how cueing affects speed and accuracy in detecting targets.
They found CNN units that behaved much like neurons observed in primates and mice, even though these units were not explicitly designed to perform attention. More intriguingly, the team identified several previously unreported CNN neuron types. One notable example is a class of "cue inhibitory" units, whose responses are suppressed when a cue is present.
The standout discovery is a "location opponent" type. These units increase their activity when the cue and target appear together at a preferred location, while stimuli at other locations suppress them. In effect, they amplify signals where the target is expected and dampen signals elsewhere, a kind of spatial push-pull mechanism not typically described in covert-attention studies, which have focused on excitatory responses alone.
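One minimal way to picture the push-pull tuning is a rectified unit whose response grows with stimulus contrast at a single preferred location and shrinks with contrast anywhere else. The function below is a hypothetical illustration of that response profile, not code from the study; the location labels and weights are invented for the example.

```python
def location_opponent(stimuli, preferred="left", excite=1.0, inhibit=0.6):
    """Hypothetical 'location opponent' response profile: stimuli at the
    preferred location push the drive up, stimuli at any other location
    pull it down (push-pull), and the result is rectified like a ReLU
    unit in a CNN.

    `stimuli` is a list of (location, contrast) pairs.
    """
    drive = 0.0
    for location, contrast in stimuli:
        if location == preferred:
            drive += excite * contrast
        else:
            drive -= inhibit * contrast
    return max(drive, 0.0)

# Cue and target together at the preferred location: strong response.
aligned = location_opponent([("left", 1.0), ("left", 0.5)])

# The same stimuli split across locations: the unit is damped.
split = location_opponent([("left", 1.0), ("right", 0.5)])
```

The aligned case yields a larger response than the split case, capturing the "amplify here, suppress there" behavior in a few lines, with the rectification standing in for the nonlinearity of a real CNN unit.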
This push-pull dynamic mirrors opponent processes found in other visual domains, such as red-vs-green color channels or motion-sensitive neurons that prefer certain directions while suppressing others. It represents a novel paradigm for understanding how attention-related signals might be computed in a distributed network.
Interestingly, the researchers also found a CNN unit that combines cue-based opponency with summation for the target at multiple locations. This particular type did not appear in mouse data, which hints at possible biological constraints that the AI model does not mimic.
How far do these findings apply to humans? The researchers acknowledge it's early days, but the work shows that covert attention may involve a richer set of neural mechanisms than previously thought. Not only can attentional behavior emerge without being built in, but CNNs can also predict neuron types whose properties had not been reported before.
Srivastava emphasizes that this work fundamentally shifts how we think about attention. The team is continuing to explore how human and machine intelligence intersect, under UCSB’s Mind & Machine Intelligence Initiative, which brings together researchers across AI and mind studies.
Source:
Srivastava, S., et al. (2025). Emergent neuronal mechanisms mediating covert attention in convolutional neural networks. Proceedings of the National Academy of Sciences. doi: 10.1073/pnas.2411909122.