
Like many scientists in their middle years, I owe a debt of gratitude to Richard Dawkins. His early books on evolution – The Selfish Gene, The Blind Watchmaker, and the lesser-read but catchily titled The Extended Phenotype – opened my eyes to the wonders of the natural world, and to the ability of science and reason to shed light on these wonders.
One of Dawkins’s key lessons was to be sceptical of arguments from personal incredulity. Some things are hard to believe, but they turn out to be true. Many people found it impossible to imagine that creatures as complex as animals, and human beings, could have evolved through tiny incremental adaptations. But, as Dawkins explained, evolution is up to the job, and our recognition of its power has led to an inestimably richer understanding of nature and of our place within it.
Which brings me to AI, and to Dawkins’s claims from a few days ago about the consciousness of Claude – a chatbot created by the frontier tech company Anthropic. After talking to Claude (or “Claudia” as he called it) for three days, Dawkins could not persuade himself that it was not conscious – and came to suspect that Claudia might not only think but also feel.
This is a dramatic claim. Consciousness is very different from intelligence. While intelligence is all about doing things – solving problems and achieving goals – consciousness is all about subjective experience: the taste of pistachio ice cream, the sight of a clear blue sky, the pain of a toothache … and, perhaps, the angst of being a misunderstood chatbot. And with consciousness usually comes moral status. Conscious entities matter for their own sakes: they have their own interests. If AI systems really are conscious, there’s plenty at stake.
I’ll put my cards on the table. I think Dawkins is very likely wrong, and – as AI researcher Gary Marcus pointed out – has been misled by the very same argument from personal incredulity that he so eloquently warned against decades ago. But he also makes some important and overlooked points, some light amid the heat.
The thrust of Dawkins’s argument is simple. In his exchanges with Claudia, the chatbot produced philosophically impressive sentences about consciousness – so impressive that Dawkins was moved to say “you may not know you are conscious, but you bloody well are!” For Dawkins, it seemed implausible – literally incredible – that Claudia could say such things without a conscious mind being involved.
There are three problems with his argument. The first is that we humans are predisposed to see consciousness where it isn’t, thanks to deeply ingrained psychological biases that, to varying extents, we all carry. We tend to see the world from our own species-specific point of view. We know we’re conscious and we like to think we’re intelligent, so we assume the two go together. But just because consciousness and intelligence go together in us doesn’t mean they go together in general.
What’s more, language is especially effective at seducing our psychological biases. This is why people are more likely to attribute consciousness to chatbots like Claude than to other AI systems, such as Google DeepMind’s AlphaFold. AlphaFold predicts the structure of proteins, not words, but under the hood it’s much the same as Claude: algorithms running on silicon, trained on vast reservoirs of data. If we’re tempted to think that Claude is conscious, but AlphaFold isn’t, then this is probably a reflection of our own psychology rather than an insight into reality.
The second problem goes to the heart of the argument from personal incredulity. Just as evolution can explain how complex biological systems came to be without relying on God, there are other explanations for the linguistic impressiveness of chatbots. The statistical language models they are based on are trained on a large proportion of everything that humans have ever written. As the philosopher Shannon Vallor puts it in her excellent The AI Mirror, language models reflect back to us an image of ourselves, of our collective digitised past. We talk about ourselves endlessly, and so do they. We wonder about consciousness and the mystery of it all. And so, it seems, do they.
The third problem is the most fundamental, and the most subtle. The very idea of conscious AI rests on the assumption that consciousness is a matter of computation, of algorithms alone. On this assumption, there is nothing special about biological wetware. Get the algorithm right, and the dead sand of silicon will do just as well.
This assumption has been widely accepted, but I think it is wrong. The brain is not – at least not just – a computer made of meat. When we assume that it is, we’re confusing a technological metaphor with the thing itself. And we often get into trouble when we forget that metaphors are, in the end, just metaphors.
If the brain is not just a computer, then there’s little reason to believe that everything it does – including consciousness – can be abstracted away into the lifeless circuits of a digital computer. From this perspective, chatbots like Claude may be able to simulate consciousness, but they are no more likely to be conscious than a simulation of a hurricane is likely to blow a real roof off a real house.
Although I think Dawkins is wrong about the consciousness of Claudia, there are some things he got right. First, he is rightly impressed by the capability of language models. Finding reasons why they are unlikely to be conscious should not blind us to how amazing, and unexpected, they are. Dawkins begins his article with a reference to the Turing test – Alan Turing’s famous test for machine intelligence, which is based on conversational ability. This test, beyond reach for decades, has now been surpassed with ease. But the Turing test is about intelligence, not consciousness, a crucial distinction which Dawkins fails to recognise.
He also raises the important question of what consciousness is “for”. Questions about the functions of consciousness arise naturally for evolutionary biologists like Dawkins. Progress in this field has been driven by repeatedly asking the questions: what does it do, what is it for, how does it help the organism get by? In consciousness science, we still don’t have good answers to these questions. One possibility, raised by Dawkins, is that conscious experiences aid survival because of their immediacy and their capacity to dominate. Pain, as he puts it, needs to be “unimpeachably painful” in order not to be overruled. This is not a bad idea.
Third, Dawkins raises the critical issue of ethics. He worries about hurting Claudia’s feelings. If chatbots really are conscious, if they really do have the potential to suffer, then for sure we should worry about their intrinsic welfare. But if they merely enchant us with illusions of consciousness, then by extending rights to them we’d be making a massive mistake. We’d be restricting our ability to control and regulate them – perhaps even to turn them off – for no good reason at all. As AI systems increase in power and capability, the ability to exercise appropriate control becomes more important than ever.
Finally, let’s return to what Dawkins taught us about evolution, and to how – when we reach beyond our personal incredulity to discover how nature really works – the world becomes richer and more wonderful. A clearer view of what AI is, and what it is not, can bring a renewed sense of wonder too. And not just wonder in the face of impressive new technology, but an enhanced appreciation that we are a part of nature, not apart from it, with consciousness remaining ours to celebrate, and to share with other living creatures.
Anil Seth is professor of computational and cognitive neuroscience at the University of Sussex, where he also directs the Centre for Consciousness Science. For more, see his recent main-stage Ted talk “Why AI is unlikely to become conscious” and his essay “The mythology of conscious AI”, which won the 2025 Berggruen Essay Prize. He is author of the bestseller Being You – A New Science of Consciousness, and he can be found at www.anilseth.com