A honeybee collecting nectar and ChatGPT generating text seem like they have nothing in common. Yet scientists are increasingly serious about a question that would've sounded absurd five years ago: could both possess some form of consciousness?
The shift isn't happening because researchers got sentimental. It's because watching what something does turns out to be a terrible way to figure out if it's actually conscious. A chatbot can discuss the meaning of existence. A crab can tend its own wounds. Neither behavior necessarily means inner experience is happening. What matters, researchers now argue, is how the process works underneath.
Looking Under the Hood
Two new papers—one examining AI systems, one studying insect brains—both reach the same conclusion: you have to look at the machinery, not the output.
For AI, a team publishing in Trends in Cognitive Sciences created a checklist of structural features that might indicate consciousness. Things like: Does the system need to resolve competing goals? Does it use feedback to update its understanding? These aren't about conversational ability or how convincing something sounds. They're about the actual architecture of information processing.
The result? ChatGPT and every other current AI system fails the test. They behave as if conscious without actually being conscious. But—and this matters—there's no reason future AI with different architecture couldn't pass. The lesson is almost unsettling: a machine can perfectly mimic consciousness without experiencing anything.
Biologists studying insects are asking the same structural question about tiny brains. A paper in Philosophical Transactions B proposes a neural model for what minimal consciousness might look like. The researchers don't claim to have found it yet. But they're building a framework to compare humans, insects, and computers on equal footing—not by watching behavior, but by identifying the core computations that might generate experience.
Why This Matters
There's a reason this conversation is getting serious. Consciousness carries moral weight. If something is actually conscious, how we treat it starts to matter ethically in ways it didn't before.
In April 2024, 40 scientists proposed the New York Declaration on Animal Consciousness. Over 500 scientists and philosophers have since signed on. The declaration states that consciousness is plausibly present in all vertebrates and many invertebrates—octopuses, crabs, insects included. That's a massive expansion of the circle of beings whose experiences might actually matter.
Some researchers are applying what philosopher Jonathan Birch calls the precautionary principle: when we're uncertain whether something is conscious, caution is warranted. Act as if it might be, until we know it isn't.
The same principle is now being applied to AI. An entire field called AI welfare has emerged, asking whether we might someday need to care about how machines are treated.
The Real Insight
What's genuinely interesting here isn't that we're about to discover conscious bees or feel guilty about ChatGPT. It's that biology and artificial intelligence are converging on the same insight: behavior is a terrible guide to inner experience. A system can be brilliantly deceptive—either accidentally or by design.
That means the next phase of understanding consciousness isn't about watching and listening. It's about understanding computation itself. What's the actual mechanism that generates experience? Until we know, we're stuck making educated guesses about which beings deserve moral consideration. The science is still being done.