The Evolution of Literature Reviews: Automated Analysis and the Effect on Academic Synthesis

In the 1980s, the average time to complete a literature review for a doctoral thesis was estimated at 6 to 12 months, depending on how many journal subscriptions your library could afford and whether the microfiche machine was working.

Today? Researchers can analyse thousands of papers in under an hour. Not by reading faster, but by using tools that read differently.

Welcome to the era of automated analysis, where a machine’s eyes help you see patterns you didn’t know to look for.

Literature Reviews Weren’t Designed for This Much Literature

The sheer volume of academic output is staggering. According to a 2022 estimate, more than 3 million scholarly articles are published every year. That’s more than 8,200 a day.

You’re not lazy if you can’t read them all. You’re just human.

Automated analysis tools were born out of this reality. They don’t replace the literature review; they scale it. They offer a way to navigate the flood rather than get swept away by it.


Traditionally, literature reviews have relied heavily on manual sorting, thematic coding, and synthesis: a labour-intensive process that inevitably filters what we notice through our own mental models. But automated tools are now helping researchers move beyond anecdotal evidence and subjective impressions to systematically detect trends, gaps, and novel associations in the literature.

Here’s how the new generation of automated tools is reshaping the process:

1. Systematic Review Platforms

Examples: Covidence, Rayyan
These platforms streamline the most tedious part of the review process: screening titles and abstracts against inclusion/exclusion criteria. Built-in features for collaboration, blinding, and PRISMA compliance make them indispensable for large-scale systematic reviews.

Impact: Time savings and consistency. You can now screen 1,000 articles in a fraction of the time it used to take, without compromising on transparency or rigour.
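
To make the screening step concrete, here is a minimal Python sketch of the kind of first-pass keyword triage a platform might automate. It is not Covidence’s or Rayyan’s actual interface or algorithm; the inclusion/exclusion terms and records are invented for illustration, and anything ambiguous is left for a human screener.

```python
# Hypothetical first-pass screen of title/abstract records against
# invented inclusion/exclusion terms (not Covidence/Rayyan internals).
INCLUDE_TERMS = {"systematic review", "meta-analysis", "synthesis"}
EXCLUDE_TERMS = {"retracted", "protocol only"}

def screen(record: dict) -> str:
    """Return 'exclude', 'include', or 'maybe' for one record."""
    text = f"{record['title']} {record['abstract']}".lower()
    if any(term in text for term in EXCLUDE_TERMS):
        return "exclude"
    if any(term in text for term in INCLUDE_TERMS):
        return "include"
    return "maybe"  # flag for a human screener rather than deciding automatically

records = [
    {"title": "A systematic review of remote learning", "abstract": "..."},
    {"title": "A single-classroom case study", "abstract": "Retracted by the authors."},
]
for r in records:
    print(screen(r), "-", r["title"])
```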

2. Bibliometric Tools

Examples: VOSviewer, CiteSpace
These tools go beyond keyword searches. They map co-authorship networks, identify key hubs in citation networks, and uncover how research fields evolve over time.

Impact: By visualising intellectual lineages and revealing who’s citing whom, bibliometric tools expose invisible academic structures—like which scholars or journals act as bridges between subfields.
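
For a flavour of what this mapping involves, here is a small Python sketch using the networkx library (not VOSviewer or CiteSpace themselves). It builds a co-authorship graph from a few invented paper records and ranks authors by betweenness centrality, one common way to spot the bridge figures mentioned above.

```python
# Toy co-authorship network from invented records (not VOSviewer/CiteSpace).
from itertools import combinations
import networkx as nx

papers = [
    {"authors": ["Ng", "Osei", "Park"]},
    {"authors": ["Park", "Silva"]},
    {"authors": ["Silva", "Tan"]},
]

G = nx.Graph()
for paper in papers:
    for a, b in combinations(paper["authors"], 2):
        # edge weight counts how many papers each pair has co-authored
        weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)

# Authors with high betweenness sit on many shortest paths between others,
# i.e. they act as bridges between otherwise separate clusters.
for author, score in sorted(nx.betweenness_centrality(G).items(),
                            key=lambda kv: -kv[1]):
    print(f"{author}: {score:.2f}")
```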

3. Thematic Analysis Tools

Examples: Leximancer, NVivo
This is where the transformation gets genuinely exciting. While systematic review tools help you sort and bibliometric tools map who’s saying what, thematic tools like Leximancer help you understand what is being said, and how ideas are connected.

Leximancer, for instance, automatically identifies key concepts in your literature corpus, maps co-occurrence patterns, and clusters related themes. Unlike manual coding, it doesn’t rely on pre-existing categories, which reduces the risk of confirmation bias.

Impact: These tools allow you to surface emergent themes, unexpected relationships, and blind spots that may otherwise go unnoticed. They’re especially valuable in interdisciplinary or exploratory reviews.
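
As a rough illustration of the underlying idea (emphatically not Leximancer’s actual algorithm), the Python sketch below counts how often a handful of invented concepts co-occur across made-up abstracts. Frequently co-occurring pairs suggest themes; pairs that never co-occur can hint at gaps worth investigating.

```python
# Minimal concept co-occurrence count over invented abstracts.
from collections import Counter
from itertools import combinations

abstracts = [
    "remote learning and student engagement during lockdown",
    "teacher workload and student engagement in remote learning",
    "qualitative coding of teacher workload interviews",
]
concepts = ["remote learning", "student engagement",
            "teacher workload", "qualitative coding"]

cooccurrence = Counter()
for text in abstracts:
    present = [c for c in concepts if c in text]
    for pair in combinations(sorted(present), 2):
        cooccurrence[pair] += 1

for pair, count in cooccurrence.most_common():
    print(count, pair)
```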


What You Gain from Automation (Besides Time)

Saving time is great. But the real benefit is perspective.

-> Seeing the Forest and the Trees

Automated review methods offer scale.

Whether you’re dealing with a corpus of 80 articles or 8,000, the ability to visualise how concepts relate (over time, across journals, within disciplines) can illuminate structural patterns in the literature. These include (a small sketch follows the list):

  • The emergence or decline of certain research topics

  • Fragmentation (or consolidation) within a field

  • Surprising conceptual overlaps between seemingly unrelated areas
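
As a small sketch of the first point, the snippet below counts how often a single (hypothetical) topic appears per publication year in a handful of invented records; this is the simplest possible way to watch a theme emerge or fade.

```python
# Hypothetical topic-over-time count; records and topic are invented.
from collections import Counter

records = [
    {"year": 2018, "title": "Mobile learning in higher education"},
    {"year": 2021, "title": "Remote learning under lockdown"},
    {"year": 2021, "title": "Remote learning and assessment"},
    {"year": 2023, "title": "Remote learning fatigue"},
]

topic = "remote learning"
mentions_per_year = Counter(
    r["year"] for r in records if topic in r["title"].lower()
)
for year in sorted(mentions_per_year):
    print(year, mentions_per_year[year])
```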

Suddenly, your lit review becomes more than a checkbox: it becomes a story. One with structure, tension, unresolved questions, and maybe even a plot twist or two.

-> Discovering the Gaps

One of the most powerful affordances of automated tools is their ability to illuminate absence. In a field dominated by certain voices or paradigms, they can highlight neglected themes or underrepresented topics. This doesn’t mean the machine ‘knows’ what’s missing; it means it exposes what we might be conditioned not to look for.

For emerging scholars especially, this is transformative. Rather than simply replicating established frames, they can identify blind spots in the literature where new questions can grow.


No, You’re Not Being Replaced

Automation doesn’t mean disengagement. It doesn’t mean delegating thinking to a machine. Rather, it’s about creating room for better thinking, freeing up cognitive energy for synthesis, argumentation, and critique.

Think of automated analysis not as a shortcut, but as scaffolding. It provides an initial sketch from which the human researcher can build nuance and depth. You still bring the judgement, the theoretical lens, the critical eye. But now you’re not starting from scratch.

Where This Is Going

The literature review is no longer just a gateway to a research paper; it is becoming a research output in its own right. With the rise of meta-research, interdisciplinary synthesis, and open-access databases, our ability to map and understand knowledge itself is more important than ever.

Automated analysis doesn’t just help us keep up; it helps us ask better questions, see the bigger picture, and, ultimately, produce more insightful, credible research.

So, before your next lit review, ask yourself:
What patterns are you missing?
And what tools could help you find them?
