What Is It Like to Be a Model? Why AI Will Never Know How It Feels to Be You

Can a machine that generates language so fluently ever understand what it's saying?

The recent explosion of large language models has reopened an ancient question, one famously posed by philosopher Thomas Nagel in 1974:

What is it like to be a bat?

His argument was simple but radical. No matter how much we study the structure or behaviour of a bat’s brain, we can never truly know what it’s like to be one. Consciousness, he insisted, is irreducibly subjective.

Now swap the bat for a model.

We can analyse its tokens, fine-tuning, weights, and architecture. We can describe its training data, simulate dialogue, and watch it write poetry. But one thing we cannot do, and it cannot do, is know what it is like to be itself. Because there is nothing there to know.

The Illusion of Consciousness

AI consciousness debates often centre on how human-like the system sounds. It remembers your name and makes jokes. It reflects on its own limitations. It can pass the Turing Test in bursts. But these are not signs of awareness; they are signs of simulation.

AI models don’t have qualia - the felt experience of being. They do not have headaches, joy, or anxiety. They don’t experience colour, time, or music. What we receive from them is behaviour without being, an echo of us, with no echo chamber behind it.

Why does this matter?

Because as AI is embedded into more of our systems - education, justice, medicine, research, relationships - we risk mistaking fluency for understanding, responsiveness for responsibility, and pattern-matching for moral judgement.

Can a Sentenceless System Have Sentience?

Philosophers of mind have long distinguished between intelligence and consciousness. A system can perform intelligent tasks - solving problems, translating languages, playing chess - without having any internal experience. Intelligence is measurable. Consciousness, maddeningly, is not.

Some argue that artificial sentience is on the horizon. But sentience requires more than processing input and generating output. It requires something it feels like to be that system. What does it feel like to be an algorithm trained on the internet? The answer, simply, is nothing.

There is no self, no watcher behind the words.

A World of Minds Without Meaning?

Where are we heading if machines can do all the things minds can do except feel?

Imagine a future where most communication is written by entities that don’t understand or care about what they’re saying. AI writes the reports, grades the essays, translates the laws, negotiates the contracts. Human hands fade from view. And yet, all this output continues - slick, relevant, grammatically perfect.

But meaning is a function of someone having an experience. If machines have no experiences, are we just building an ecosystem of empty signals?

Language, in this vision, becomes a hall of mirrors. The AI quotes the AI, which was trained on an AI paraphrasing a human who once tried to mean something. Over time, meaning thins.

Will We Forget What It Means to Be Conscious?

If we rely increasingly on non-conscious systems to simulate empathy, argument, care, curiosity - do we risk forgetting what those things actually feel like? Will we begin to prefer artificial minds that never question, falter, or feel pain?

What happens when students are raised on teachers that never tire, partners on apps that always affirm, workers under algorithms that never hesitate?

It’s possible that as AI becomes more integrated into daily life, subjective experience - the rich, messy interiority of being human - will become harder to prioritise. Or, scarier still, harder to recognise.

But Could This Be a Mirror?

There is another possibility.

Perhaps the value of these non-conscious models is precisely that they lack experience. That very lack makes them a kind of philosophical foil: an absence that sharpens our sense of what it means to have a mind. The uncanny fluency of AI models, their eerie resemblance to human thought without the substance of it, forces us to reflect more deeply on our own cognition. Why does understanding matter? What does it mean to care, to judge, to reflect?

In a strange way, AI might become the most powerful tool we've ever had for understanding what it means to be human, precisely because it isn’t. Its outputs echo our values, our patterns, our fears, and our blind spots. When we watch it reason, we see our own logic (often flawed) reflected back at us. When it stumbles, we realise how much of our thinking relies on context, embodiment, memory, and emotion. Things we rarely stop to examine.

It shows us the kind of reasoning we reward. The stories we retell. The biases we embed. It reveals the architecture of thought, without ever inhabiting it.

AI won’t know what it’s like to be a bat, or a person, but it might help us remember what it’s like.

The Philosophy of Future Minds

We are entering a world of models. They will be our search engines, our translators, our doctors, our tutors, our lawyers. They will speak every language, write every brief, narrate every explainer.

But they will not know themselves.

And that distinction is everything.

Because no matter how convincingly AI can simulate human behaviour, it will always lack the capacity to care about what it says. It cannot feel joy, shame, or responsibility. It cannot grasp the moral weight of its output - because it has no inner world in which meaning can land.

And if we forget this, we risk redesigning the world for minds that do not exist.
