AI Can’t Handle Ambiguity - And That’s a Big Problem for Your Research

AI seems impressive. Drop in a dataset, and it spits out themes. Ask it for a summary, and it delivers polished text in seconds. It feels like magic until you realise it’s not actually analysing anything.

What AI is really doing is ranking words by frequency and predicting the next most likely word. That’s it. There is no understanding, no interpretation, no deeper meaning - just word statistics masquerading as insight.

And if you think that is enough for real thematic analysis, you are in trouble.

AI models have no grasp of context, intent, or nuance, which is why word ambiguity breaks them. The same word can mean vastly different things in different contexts, but AI has no way of knowing which meaning is correct.


How AI Misinterprets Ambiguous Words

Ambiguity is everywhere in language. Humans instinctively disambiguate words based on context, but AI struggles to do this accurately.

Take the word "bank" - does it refer to a financial institution or the side of a river? The meaning depends on the surrounding words and the overall topic.

Now imagine an AI model processing qualitative data:

“I had to go to the bank today.”
“I sat on the bank and watched the sunset.”

An AI model trained on word frequency might assume that both sentences refer to finance simply because "bank" is more commonly associated with money. It cannot reliably differentiate between multiple meanings of the same word, which leads to errors in analysis.
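
To see why, here is a minimal Python sketch - not any particular AI product, just a plain bag-of-words count - showing how frequency counting collapses both senses of "bank" into a single tally:

    from collections import Counter

    # Two sentences in which "bank" means very different things.
    sentences = [
        "I had to go to the bank today.",
        "I sat on the bank and watched the sunset.",
    ]

    # A pure bag-of-words count strips away the surrounding context,
    # so the financial and riverside senses become the same token.
    tokens = [word.strip(".").lower() for s in sentences for word in s.split()]
    counts = Counter(tokens)

    print(counts["bank"])  # 2 - one number, two unrelated meanings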

This gets even worse in qualitative research, where nuance is key.

  • If respondents in a study mention "stress", do they mean mental stress, mechanical stress, or financial strain?

  • If "growth" appears frequently, does it refer to personal development, economic expansion, or bacterial growth in a lab experiment?

  • If "support" is common in responses, does it mean emotional support, technical support, or financial aid?

AI does not ask these questions; it just counts the words and guesses. That’s not analysis.


Why AI Fails at Nuanced Thematic Analysis

Because AI is built on word probabilities, it assumes that the most statistically common meaning of a word is the correct one. That might work for general language tasks, but in research, misinterpreting a concept means drawing the wrong conclusions entirely.

Here’s why:

1. AI Cannot See Conceptual Relationships

AI treats words as individual tokens rather than recognising them as part of a bigger conceptual structure. If an LLM processes interview transcripts where people discuss "support", it will highlight "support" as a frequent word but will not tell you what kind of support people are referring to.

A proper thematic analysis tool should group related words together into meaningful concepts, not just highlight them as frequent terms.
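
As a toy illustration - a sketch of the general idea, not a description of any real tool's internals - looking at the words that co-occur with "support" is what separates the kinds of support that a bare frequency count lumps together:

    from collections import Counter

    # Hypothetical survey responses that all mention "support".
    responses = [
        "The emotional support from my manager kept me going.",
        "Technical support never answers the phone.",
        "More financial support for childcare would make a real difference.",
    ]

    # Instead of only counting how often "support" appears, record the
    # words that occur alongside it in each response.
    context = Counter()
    for response in responses:
        words = [w.strip(".,").lower() for w in response.split()]
        if "support" in words:
            context.update(w for w in words if w != "support")

    # The co-occurring words (emotional, technical, financial, ...) are what
    # distinguish the kinds of support - a raw frequency count discards them.
    print(context.most_common(10))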

2. AI Misses Contextual Meaning

An AI model trained on word frequency will not understand that "pressure" could mean:

  • Workplace stress in an employee survey

  • Social pressure in a behavioural study

  • Atmospheric pressure in a physics discussion

It sees pressure = pressure. But in research, context defines meaning - and AI does not have context awareness.

3. AI Cannot Distinguish Between Literal and Figurative Language

If someone in an interview says "I feel like I’m drowning at work," AI might group this with "water"-related terms instead of recognising it as a metaphor for stress.

This is a huge flaw for qualitative analysis, where people use language expressively, emotionally, and symbolically.


If You Want a Tool That Turns Words into Real Insights, Use Leximancer

If all you need is a word-ranking machine, AI can do that. But if you need to turn words into real, data-driven insights, you need a tool that understands how concepts emerge, not just which words appear most often.

How Leximancer Handles Ambiguity Correctly

  • It builds a thesaurus from your dataset. Instead of relying on pre-trained language models, Leximancer learns only from the text you provide. It automatically groups similar words together and assigns them to relevant concepts.

  • It maps relationships between concepts. Instead of treating words in isolation, Leximancer detects how words co-occur and what they actually mean in context.

  • It identifies true themes, not just frequent words. If "stress" appears frequently, Leximancer will show whether it is linked to workload, deadlines, burnout, or emotional wellbeing, rather than just listing "stress" as a keyword.

Example:

If a researcher analyses interviews about mental health in the workplace, a word-frequency-based AI model might just show:

  • "Stress" (50 mentions)

  • "Support" (30 mentions)

  • "Pressure" (20 mentions)

This tells you nothing about what the data actually means.

Leximancer, however, would show:

  • "Stress" as a concept, linked to "workload," "management," and "burnout"

  • "Support" as a concept, linked to "colleagues," "policies," and "wellbeing programmes"

  • "Pressure" appearing in two different clusters—one relating to work demands and another to social expectations

This is what real thematic analysis looks like—words grouped and defined by meaning, not just frequency.
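
For intuition only, here is a toy co-occurrence sketch - emphatically not Leximancer's actual algorithm - showing how tracking the words that recur alongside "pressure" starts to separate a work-demands context from a social-expectations one:

    from collections import Counter

    # Hypothetical interview excerpts that all mention "pressure".
    excerpts = [
        "The pressure of constant deadlines and a heavy workload is exhausting.",
        "Deadlines and workload create pressure every single week.",
        "There is real pressure from family to look successful.",
        "Social media adds pressure to appear successful to friends and family.",
    ]

    # A small stopword list so the recurring context words stand out.
    STOPWORDS = {"the", "of", "is", "and", "to", "a", "from", "every", "there"}

    # Count how often each non-trivial word appears alongside "pressure".
    co_occurrence = Counter()
    for excerpt in excerpts:
        words = {w.strip(".,").lower() for w in excerpt.split()}
        if "pressure" in words:
            co_occurrence.update(words - {"pressure"} - STOPWORDS)

    # Words that recur with "pressure" split into two rough contexts:
    # work demands (deadlines, workload) and social expectations (family, successful).
    print([word for word, n in co_occurrence.most_common() if n > 1])

Even this crude sketch shows the principle: it is the surrounding words, not the raw count, that tell you which "pressure" you are looking at.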


If You Care About Nuance, AI Isn’t Enough

AI might be impressive, but if it just ranks words and predicts text, it isn’t actually analysing anything. In qualitative research, where meaning is complex and layered, a tool that fails at recognising nuance is a tool that leads to flawed conclusions.

If you want real qualitative insights, you need more than a word counter in disguise. You need a tool that understands concepts, context, and relationships because in research, it’s not just about the words - it’s about what they actually mean.
