Why Qual Papers Get Rejected

The truth is, rejection usually isn’t about the topic - it’s about the execution.

If you’re working in qualitative research, here are some of the most common reasons journal reviewers hit reject - and what you can do to stay on the right side of the decision.

1. Your Themes Are Vague or Feel Obvious

This is one of the most frequent bits of feedback editors give: “The findings are descriptive, not analytical.”

If your themes are broad, generic, or sound like something that could’ve come from a dinner party discussion - think “communication,” “identity,” “power” - you’re in trouble.

Instead, themes need to feel earned. They should emerge from the data with enough texture that readers say, “Ah, I hadn’t thought of it quite that way.” A concept map can help you see what your data is actually saying - not just what you expected it to say.

At Leximancer, we’ve seen researchers use thematic mapping to sharpen their analysis and avoid defaulting to familiar buzzwords. By surfacing co-occurrence patterns and clusters, you can refine vague labels into real insights.

2. The Analysis Path Isn’t Clear

If a reviewer can’t follow how you got from raw data to final conclusions, they can’t trust your findings. And if they can’t trust your findings, you won’t make it past peer review.

We get it - qualitative work is inherently interpretive. But interpretive doesn’t mean opaque.

What you can do is make your process transparent. If you’re coding manually, show how codes were developed, refined, and grouped. If you're using software, explain how themes emerged, and how you engaged with them.

This is where Leximancer helps you build credibility. The software doesn’t just hand you a list of themes. It shows why those concepts are present, and how they're connected across the dataset. That kind of traceability can make the difference between reviewer confusion and reviewer confidence.

3. It’s Been Done Before (and Better)

This one’s hard to hear. But originality is everything.

Too many submissions restate what’s already known, especially when using well-worn frameworks or methods. And while replication has value, you need to show why your version matters.

You should position your work in the literature carefully. Where are the gaps? What questions are still unanswered? How does your analysis challenge, extend, or nuance existing findings?

Even the way you approach your data can set you apart. We’ve seen researchers bring fresh perspectives to familiar topics - mapping unexpected connections or tracking how concept emphasis shifts across demographics or time periods. Those subtle innovations can become the contribution.

4. Too Much Description, Not Enough Interpretation

Another common editorial sigh: “The paper lacks theoretical depth.”

If your findings section reads like a list of quotes and observations with minimal commentary, reviewers will see it as underdeveloped.

Analysis means stepping back and making meaning. What does this quote tell us? How does it relate to the broader concept? What tensions or contradictions does it surface?

Conceptual maps can give you a bird’s-eye view of the data, which helps you move beyond isolated quotes and start theorising. Leximancer doesn’t replace your thinking - it gives it structure. And in publishing, structure is everything.

Avoiding the Trap

Rejection isn’t failure - it’s feedback. And understanding the patterns behind it can help you build better research and tell a clearer story.

Here’s the key takeaway. It’s not about analysing faster (although that’s icing on the cake). It’s about analysing better - with rigour, transparency, and a clear contribution.

And if that’s what you’re aiming for, we’re here to help.


Interested in how Leximancer can support your next publication?
Explore our thematic mapping tools and discover how to clarify your findings without losing your voice.

Book a demo and see for yourself
