You've probably heard it by now: AI images are getting so good that we can't tell them apart from real photos anymore. But that's not quite the whole story.
Researchers at Vanderbilt University just published findings that flip the script. It turns out some people are genuinely better at spotting AI-generated faces than others—and it has almost nothing to do with how smart you are, how much you know about AI, or even how good you normally are at recognizing faces.
The predictor is something much more specific: object recognition ability. The skill that lets you spot the subtle differences between visually similar things—a talent that also helps radiologists find tumors in X-rays or musicians read sheet music—translates directly to catching AI fakes.
What Actually Works
Isabel Gauthier and her team at Vanderbilt created the first formal test specifically designed to measure how well people can distinguish real faces from AI-generated ones. They expected intelligence, familiarity with AI tools, or strong face recognition skills to matter most. None of those things did.
Instead, the data pointed to something quieter: participants with stronger object recognition abilities consistently identified AI faces more accurately. And when researchers tested them again later, those same people performed just as well. This is a stable trait—not something that fluctuates with mood or recent news about AI.
The finding is particularly interesting because object recognition ability shows up across wildly different domains. The same visual skill that helps someone detect lung nodules in medical imaging, identify cancerous blood cells under a microscope, or read musical notation also helps them catch AI-generated faces. This suggests we're looking at a genuinely general visual capacity—something deeper than expertise in any one area.
"We were shocked to see how intelligence or even technology training did not help," Gauthier said. "These results highlight a visual ability that has very general applications. It's a stable trait that helps people meet new perceptual challenges, including those created by AI."
The Distribution Problem
Here's what matters most about this research: it reframes the entire conversation about AI deception. The media narrative tends toward panic—AI is so realistic now that we're all fooled. But that's not what the data shows.
What you actually have is a distribution. Some people genuinely can't tell the difference between real and AI-generated faces. Others spot them reliably. Most fall somewhere in the middle. It's not that everyone is equally vulnerable or equally skilled—the vulnerability is real for some people, but it's not universal.
Gauthier emphasized this point directly: "There is this general message we hear in the media that AI images are so realistic that we can't tell the difference, and I think that's misleading. As AI becomes ever present in our reality, I think it's useful to know that some people are better at this than others."
This matters because it suggests the solution isn't to give everyone the same AI literacy training or the same skepticism tips. If object recognition ability is the key factor, then the real question becomes: can this skill be trained? And if so, how?
The research doesn't answer that yet. But it does identify what we're actually looking for—not intelligence or tech knowledge, but a specific visual capacity that some people have more of than others. That's a genuine insight into how human perception works in a world where synthetic images are becoming routine.
What comes next is understanding whether object recognition ability can be developed, and whether training people on this specific skill might be more effective than generic AI awareness campaigns.