That Rat Wasn’t Real - And Neither Is a Lot of What We’re Reading Lately
In July, a scientific journal issued an embarrassing retraction after someone finally noticed something strange. The paper included an illustration of a lab rat… that didn’t actually resemble a rat.
Not even close.
It had glossy, Pixar-style fur. Human-like hands. Eyes too big for its head. The kind of thing you’d expect in an animation, not a neuroscience paper on drug delivery.
Turns out, the authors had used an AI image generator to create the figure. And somehow, through peer review, editorial checks, and publication, no one noticed it wasn’t real.
It went viral for a few days. People laughed. But it wasn’t just a funny mistake. It was a warning.
If a fake rat can pass peer review, what else is slipping through?
Generative AI is now everywhere in academia. It’s not just helping to clean up grammar or summarise articles—it’s increasingly being used to shape ideas, generate arguments, and, in this case, produce scientific images. Sometimes it's helpful. But too often, it’s completely detached from the reality it's meant to represent.
That rat wasn’t the first hallucination to make it through review. It was just the most obvious.
This particular example wasn't even subtle. As science integrity expert Elisabeth Bik put it, it was “a sad example” of how generative AI can quietly erode the trustworthiness of research. And Victoria Samanidou asked the obvious question on behalf of many stunned academics: how on earth did this get through peer review?
It’s a good question. Because the problem wasn’t just the rat—it was the system that failed to notice it. A system overwhelmed by volume, stretched thin by time, and increasingly reliant on things that sound right rather than things that are right.
Why this hits harder in qualitative research
In qualitative work, this shift is even more dangerous because our job is to interpret mess, not smooth it over. Generative tools might help surface patterns, but they also risk removing the friction that leads to actual insight.
When an AI model proposes themes, paraphrases quotes, or "summarises" participant narratives, it’s inserting distance. And when that distance is mistaken for clarity, we end up with research that reads well but thinks poorly.
It might not look as obviously wrong as a cartoon rat, but the loss is just as real. A paper that glosses over participant nuance or rewords everything into neutral corporate-speak is a failure of care.
What we can do
Remember what tools are for, and what they’re not.
Stay close to the data
Don’t let summaries do your seeing for you. Go back to the actual words. Listen again.
Interrogate ease
If something feels too clean, it probably is. That “perfect” theme? Where did it come from?
Make your process transparent
Good research can be followed. If you can’t explain how you got there, it might not be insight yet.
Protect interpretation as an act of thinking
Don’t outsource meaning. The researcher matters: for their perspective, their struggle, their care.
Yes, the image was ridiculous. But the real issue isn’t that AI tools hallucinate; it’s that we’re starting to treat that hallucination as harmless. Useful, even.
And in doing so, we risk turning analysis into performance. Clean sentences, persuasive themes, logical arguments. Without any real encounter with the rawness of data.
Insight doesn’t come from prompts. It comes from attention. It comes from discomfort. It comes from the long, slow work of noticing.
If we start skipping that, if we stop asking whether the thing we’re producing is even ours anymore, we’re not saving time.
We’re removing the purpose from our own work.