Scaling the Strain – How AI Is Supporting Peer Review at Every Stage

As manuscript submissions rise and reviewer pools stagnate, publishers are turning to artificial intelligence to keep the scholarly publishing engine running. In the opening presentation of the webinar, Fabio Di Bello from the University of Chieti outlined a comprehensive vision of how AI tools are reshaping each step of the peer review process, not as replacements for human judgment, but as accelerators of quality and efficiency.

AI in Peer Review: From Burden to Breakthrough

Peer review is under strain. Between 2016 and 2022, indexed publications in Scopus and Web of Science grew by 50%, but the reviewer pool has not kept pace. According to Nature, just 20% of researchers perform the majority of peer reviews, leading to delays and burnout. As Fabio Di Bello emphasized, AI enters this picture not to displace humans, but to relieve the pressure.

Fabio framed his talk around three pillars of the peer review process: manuscript screening, reviewer selection, and content evaluation. Each is now infused with AI support.

  1. Manuscript Screening: The First Line of Defense

AI acts as an assistant editor at the gate, helping triage incoming submissions quickly and fairly. Fabio highlighted several use cases:

  • Scope Alignment: AI tools use natural language processing (NLP) to assess whether a manuscript fits a journal’s aims and scope (illustrated in the sketch below).
  • Formatting Compliance: Tools like Penelope.ai can instantly verify adherence to author guidelines, flagging missing ethical statements or incorrect citation formats.
  • Language Quality Checks: Especially for non-native submissions, AI tools suggest clearer phrasing and improve readability before peer review begins.
  • Plagiarism and Integrity Checks: Services like iThenticate scan for duplicate text, while newer AI models can even detect fabricated citations, manipulated images, and suspicious authorship patterns.

These tools surface red flags early, preventing flawed or unethical submissions from wasting editorial and reviewer time.
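
To make the scope-alignment idea concrete, here is a minimal Python sketch of how an embedding-based scope check might work. The sentence-transformers model, the example texts, and the 0.35 threshold are illustrative assumptions on our part, not tools or settings named in the webinar.

```python
# Minimal sketch of an embedding-based scope check (illustrative only).
# Assumes the sentence-transformers package; the model and threshold
# are arbitrary choices, not systems discussed in the talk.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

journal_scope = (
    "Clinical and translational research in cardiology, including "
    "randomized trials, cohort studies, and systematic reviews."
)
abstract = (
    "We evaluate a deep learning model for detecting arrhythmias "
    "in long-term ECG recordings from a multicenter cohort."
)

# Encode both texts and compare them with cosine similarity.
scope_vec = model.encode(journal_scope, convert_to_tensor=True)
abs_vec = model.encode(abstract, convert_to_tensor=True)
score = util.cos_sim(scope_vec, abs_vec).item()

# A hypothetical triage threshold: below it, flag for editor attention.
if score < 0.35:
    print(f"Possible scope mismatch (similarity={score:.2f})")
else:
    print(f"Likely in scope (similarity={score:.2f})")
```

In a real workflow, a publisher would calibrate that threshold against historical editor decisions rather than picking it by hand.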

  2. Reviewer Selection: Speed, Fairness, and Inclusion

Finding the right reviewer has long been a pain point. AI changes that.

  • Expertise Matching: Platforms like Prophy tap into databases of 170+ million publications to find domain experts based on semantic similarity, not just keyword overlap (see the sketch below).
  • Conflict of Interest Detection: AI maps co-authorships and institutional affiliations to flag potential biases.
  • Load Balancing and DEI: Algorithms can surface early-career reviewers and global scholars who might be overlooked by manual selection, promoting diversity and easing reviewer fatigue.

Fabio cited Springer Nature data suggesting AI-based tools can reduce reviewer-finding time by over 70%.
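
As a rough illustration of how semantic matching and conflict-of-interest screening can combine, the sketch below ranks hypothetical reviewer profiles against a manuscript and drops any candidate who shares a co-author with the submitting team. All names, profiles, and data are invented; this does not describe Prophy’s internal implementation.

```python
# Minimal sketch: rank hypothetical reviewer profiles by semantic
# similarity to a manuscript, skipping conflicts of interest.
# Every name, profile, and relationship here is invented.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

manuscript = "Graph neural networks for predicting protein folding stability."
manuscript_authors = {"Dr. X", "Dr. Y"}

candidates = [
    {"name": "Dr. A", "profile": "protein structure prediction with deep learning",
     "coauthors": {"Dr. X"}},  # shared co-author -> conflict of interest
    {"name": "Dr. B", "profile": "geometric deep learning on molecular graphs",
     "coauthors": set()},
    {"name": "Dr. C", "profile": "medieval European manuscript conservation",
     "coauthors": set()},
]

ms_vec = model.encode(manuscript, convert_to_tensor=True)
ranked = []
for c in candidates:
    # Conflict-of-interest filter: drop anyone who has co-authored
    # with a member of the submitting team.
    if c["coauthors"] & manuscript_authors:
        continue
    sim = util.cos_sim(ms_vec, model.encode(c["profile"], convert_to_tensor=True)).item()
    ranked.append((sim, c["name"]))

for sim, name in sorted(ranked, reverse=True):
    print(f"{name}: {sim:.2f}")
```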

  3. Content Evaluation: AI as Pre-Reviewer

While AI can’t yet replace the nuanced critique of a human reviewer, it excels at structural and technical checks:

  • Statistical Review: Tools like StatReviewer flag missing p-values, questionable sample sizes, and misused statistical tests (see the sketch below).
  • Reproducibility Scoring: AI checks for blinding, randomization, and methodological transparency, helping uphold scientific rigor.
  • Citation Integrity: Scite.ai assesses whether cited papers support or contradict an author’s claims and flags citation manipulation.
  • Image Analysis: AI can detect duplicated or altered images, a growing issue in biomedical publishing.

These systems serve as tireless assistants, freeing human reviewers to focus on scientific novelty and interpretation.
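
For a flavor of what an automated statistics screen can catch, this toy sketch scans a results passage for two common reporting red flags. Tools like StatReviewer apply far more sophisticated checks; the regular expressions here are simplifying assumptions.

```python
# Toy illustration of an automated statistics screen: scan a
# results passage for common reporting red flags. The patterns
# are illustrative, not how any commercial tool works.
import re

text = (
    "Groups differed significantly (p = 0.000). "
    "A t-test was used to compare outcomes."
)

flags = []
# "p = 0.000" is a rounding artifact; an exactly zero p-value is impossible.
if re.search(r"p\s*[=<]\s*0\.000\b", text):
    flags.append("Implausible p-value reported as 0.000")
# A statistical test mentioned without any sample size (n = ...) reported.
if re.search(r"\b(t-test|ANOVA|chi-square)\b", text, re.I) and \
        not re.search(r"\bn\s*=\s*\d+", text, re.I):
    flags.append("Statistical test reported without a sample size")

for f in flags:
    print("FLAG:", f)
```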

Large Language Models (LLMs): Helpers, Not Judges

Fabio acknowledged the growing use of generative AI tools like ChatGPT and Gemini by reviewers themselves:

  • Some use them to draft review summaries or rephrase comments.
  • Others rely on them for structuring feedback or clarifying methodological critique.

A 2024 study found that 17% of peer reviews at top AI conferences contained signs of LLM use, underscoring their quiet pervasiveness.

Still, Fabio stressed caution: LLMs can hallucinate, fabricate citations, and generate confident-sounding nonsense. Reviewers should use these tools transparently and only for writing support, not for scientific judgment.

The Benefits and Boundaries of AI in Peer Review

Fabio closed with a summary of what AI offers:

  • Speed: Faster screening and reviewer assignment.
  • Efficiency: Automated metadata checks and formatting.
  • Consistency: Standardized decision-support across submissions.
  • Scalability: Handling surging volumes without expanding staff.
  • Equity: Broadening the reviewer pool and reducing gatekeeping.

But he also flagged key caveats:

  • Transparency: Editors must know what AI flagged and why.
  • Oversight: No AI decision should be final without human review.
  • Community Buy-In: Publishers must earn the trust of researchers and train editors to use these tools responsibly.

Ultimately, Fabio predicted a hybrid future: human judgment augmented by increasingly intelligent machines. The goal? A peer review process that is not only faster and fairer, but more rigorous and transparent.

From Theory to Infrastructure

Fabio Di Bello’s presentation offered a structured map of where AI fits into peer review today. But it also underscored an emerging challenge: AI systems work best when they are well-integrated into publishing platforms and editorial workflows. As the ecosystem evolves, interoperability, transparency, and community alignment will become just as critical as technical capability.

In the next post, we’ll turn to the field: How do these tools actually perform in practice? Lucia Steele from AboutScience offers a candid case study of experimentation, skepticism, and surprising insights.

AI Meets Peer Review – Insights from the Frontlines

The scholarly publishing landscape is being reshaped by artificial intelligence, and peer review is squarely in its path. At HighWire’s recent Best Practice webinar, “Understanding How AI Tools Are Used in Peer Review: Practical Insights for Editors and Publishers,” an international panel explored the current and future role of AI in the peer review process. This three-part blog series distills the insights of our expert speakers:

  • Fabio Di Bello (University of Chieti) offered a structured deep dive into how AI is enhancing core editorial workflows.
  • Lucia Steele (AboutScience) shared findings from a practical experiment comparing AI-generated peer review reports to human ones.
  • Sven Fund (Reviewer Credits) zoomed out to analyze how AI fits into the broader system, from reviewer behavior to sustainability.

Each post aims to capture not only what’s happening now, but what may lie ahead as we grapple with the ethical, operational, and scientific implications of AI in peer review.

– By Tony Alves
