OpenAI’s CEO Is Surprised That People Trust ChatGPT

Why Sam Altman’s cautionary words deserve more attention from researchers, educators, and everyday users.


In the debut episode of OpenAI’s official podcast, Sam Altman - the man behind the world’s most widely used large language model - expressed something unexpected. Concern.

Not about the technology’s limits, or the regulatory landscape, or the lawsuits piling up. His concern was about us.

“People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don’t trust that much,” Altman said.

Despite countless reminders that generative AI is prone to hallucination, misinformation, and outright factual errors, ChatGPT continues to be used as a go-to tool for research, writing, parenting, education, and healthcare queries… if you can believe it. Altman himself admitted he used the chatbot when his son was born, but he stopped short of recommending it as a reliable source.

“It’s not super reliable,” he said. “We need to be honest about that.”

That he turned to ChatGPT during the early months of his son’s life wasn’t entirely surprising. Except, perhaps, to him.

“I mean, clearly, people have been able to take care of babies without ChatGPT for a long time,” he said. “I don’t know how I would’ve done that.”

It’s a seemingly candid moment. The comment reads less like a plug for his product and more like a confession of digital-age parenthood. In many ways, Altman’s reliance on ChatGPT reflects the human instinct to reach for any form of reassurance when confronted with uncertainty. Especially the kind that wakes you up at 3am.

But it also complicates his broader point. We probably trust these tools more than we should.

Like millions of other parents before him, Altman was essentially doing what humans have done since the birth of the search engine. Typing in late-night questions and hoping the internet has answers. The difference is the interface. Instead of clicking through a bottomless pit of search results or combing through parenting forums and Facebook groups (which are often just as speculative and anecdotal), Altman spoke to a chatbot. A conversational tool that presents itself with coherence and confidence - even when it’s wrong.

This is precisely what he finds concerning. While ChatGPT may feel more helpful, more immediate, or less judgemental than online strangers or outdated blogs, its tendency to "hallucinate" makes it just as fallible as those sources, and arguably more dangerous when we forget that it has no true knowledge of the world.

And that’s the paradox Altman seems to be grappling with. The tool is easy, often useful, occasionally brilliant - but also, as he put it, not super reliable. And yet we use it anyway, not because we blindly trust it, but because we’re human, and sometimes we just want an answer. Any answer.

For Researchers and Educators

Altman’s ambivalence points to a wider tension, one that scholars, teachers, and students are increasingly being forced to navigate. How do we integrate generative AI into academic work without diluting the standards that make research trustworthy?

It’s easy to frame the issue as one of “trust,” but in practice, it’s about something messier: time pressure, intellectual fatigue, and the desire to feel competent in a sea of uncertainty. The temptation to ask ChatGPT to draft a summary, outline a paper, or interpret a theory isn’t just about convenience - it’s about needing to get something done. And when the output is coherent, even compelling, it can be difficult to remember that it’s often built on no more than probability and stylistic mimicry.

This especially matters in education and research - spaces that are supposed to be built on evidence, transparency, and reasoned argument.

Language Is Not Knowledge

Perhaps the real challenge is that ChatGPT doesn’t sound like a tool. It sounds like a person. It responds in full paragraphs. It references peer-reviewed material (whether or not that material is real). It hedges and qualifies, as a good academic should. But that rhetorical polish masks a deeper emptiness. The model does not understand what it says. It cannot judge the strength of an argument. And it has no stake in being right.

For students just learning how to engage critically with sources, or for researchers juggling dozens of competing demands, it’s not hard to see how this distinction gets blurred. But as Altman himself suggests, it’s a mistake to let the fluency of the interface override the fallibility of the system.

A Teachable Moment

Altman’s unease is also a gift to educators. It gives us a way to talk about digital literacy that goes beyond plagiarism or prompting. It lets us ask students to examine how we know what we know, and what we consider a trustworthy source.

Rather than banning these tools outright, or embracing them without critique, the most productive path is to integrate them into the classroom as objects of study. Ask students to test the limits of ChatGPT. Compare its outputs with peer-reviewed material. Analyse its rhetorical strategies. Discuss what it means for a machine to sound credible.

Not blind trust. Not blanket scepticism. Just the kind of healthy scrutiny academia has always encouraged.
