The Future of Peer Review: Incentives, Preprints, and the Human Factor

At the 10th Peer Review Congress in Chicago, September 2025, three themes emerged that point to where peer review may be heading: how to motivate and sustain reviewers, how preprints are reshaping the publishing ecosystem, and why, even in an age of AI, the human element remains essential.

Rethinking Incentives for Reviewers

Peer review depends on the voluntary labor of scholars. But as submissions rise and reviewer fatigue grows, the question of incentives has become urgent.

  • A quasi-randomized trial at Critical Care Medicine (715 invitations, Sept 2023–Mar 2024) tested $250 reviewer payments. The results: reviewers offered payment were 36% more likely to complete their review (completion rates rose from 42% to 50%) and returned reports slightly faster (median 11 days vs 12). However, quality scores were unchanged. Scaling the program would have cost the journal $150,000 annually. The conclusion: payments may improve timeliness but raise sustainability and equity concerns.
  • A BMJ survey of 183 patient and public reviewers found that 48% said they would be more likely to accept if paid £50, while 32% preferred a one-year journal subscription. But nearly one-third said money or perks would not influence them. Respondents were split: some welcomed payment as fair recognition, others feared it would compromise altruism or attract low-quality engagement.

The debate showed that incentives are not one-size-fits-all. While money helps in some contexts, recognition also matters. At the Congress, several speakers emphasized the growing value of ORCID-linked reviewer credit, badges, and certificates, which give reviewers visibility and academic credit without introducing financial conflicts.

Preprints: From Disruption to Integration

If incentives can influence the supply of reviewers, preprints are reshaping the demand side: how manuscripts flow through the system.

In his Douglas Altman Lecture, Malcolm MacLeod posed the provocative question: “Does the journal article have a future?” His critique was blunt: journals publish too much low-quality research, and peer review often fails to guarantee quality. Meanwhile, preprints provide rapid dissemination, undermining journals’ historic monopoly. MacLeod argued that unless journals add demonstrable value through transparency, reproducibility checks, and meaningful peer review, their place in the ecosystem will be questioned.

Other presentations reinforced the point:

  • A study of open peer review histories (n=40,844 reports) showed that when reviews were published alongside papers, they were longer (22 vs 17 sentences), contained more actionable suggestions, and provided better training material for early-career researchers. In other words, transparency improved both the quality of reviews and their value to the community.
  • BMJ’s randomized trial of an open-science checklist tested whether embedding reproducibility prompts during peer review improved outcomes. Early results suggested modest gains, but many authors ignored the checklist unless editors enforced compliance.
  • A UK survey on reproducibility practices revealed cultural barriers to open science: researchers cited lack of training, institutional support, and fears of exposing mistakes or being scooped as reasons for not sharing protocols or data.

The take-home: preprints are here to stay, and journals must integrate them rather than resist them. That means linking submissions to preprint versions, embracing open review where possible, and embedding reproducibility frameworks directly into peer review.

The Human Factor in an AI Age

Perhaps the most striking tension at the Congress was between the enthusiasm for AI and the insistence on keeping humans in the loop.

  • Zak Kohane’s plenary described NEJM’s experiment with an “AI Fast Track,” where GPT-5 and Gemini Pro acted as reviewers. The models flagged trial design flaws and statistical anomalies that some human reviewers missed. Yet Kohane cautioned that AI cannot judge context or take responsibility, roles that remain squarely human.
  • A study comparing AI-generated reviews with human ones found that large language models could produce feedback that editors rated as constructive and comprehensive. But the presenters insisted on safeguards: transparency, editorial oversight, and exclusion of AI-only reviews from decision-making.
  • A presentation showed that AI cut outcome-switching detection time in clinical trials from 27 minutes to just 2, but human reviewers were still needed to determine whether the deviations were legitimate.

These studies illustrated a crucial point: even when process innovations or AI tools add value, it is still human expertise that determines whether peer review feedback is rigorous and actionable. The human factor — judgment, accountability, and fairness — remains at the center of peer review’s credibility.

Across sessions, the consensus was that peer review’s future is hybrid. AI can handle repetitive tasks such as checklist compliance, fraud detection, and outcome comparisons, but judgment, accountability, and trust remain human responsibilities.

This aligns with the Drummond Rennie Lecture by Ana Marušić, which emphasized that authorship and peer review must remain anchored in contributor accountability. Even as AI tools proliferate, systems like CRediT contributor roles and ORCID verification are essential to ensure that responsibility is transparent and traceable.

Looking Ahead: Toward a Sustainable, Transparent Future

Taken together, the Congress painted a picture of a peer review system in transition:

  • Incentives will likely become more flexible, blending payments in some cases with recognition systems that link reviewer contributions to academic credit.
  • Preprints will continue to grow, forcing journals to articulate their added value — through transparency, integrity checks, and reproducibility measures.
  • AI will increasingly support, but not replace, reviewers and editors — speeding processes while humans retain the role of decision-makers and guarantors of integrity.

The human factor is central to each of these shifts. Reviewers need recognition. Authors and editors need transparency and reproducibility. Readers and funders need assurance that research is trustworthy. AI may reshape workflows, but people remain accountable for the scientific record.

Closing Thoughts

At the 10th Peer Review Congress, the mood was both anxious and optimistic. Fraud and misconduct are growing in scale, reviewers are fatigued, and AI poses new challenges. Yet the community is also building tools, policies, and collaborations that could make peer review more robust, transparent, and inclusive than ever before. The future of peer review will not be defined by whether we pay reviewers, publish preprints, or adopt AI. It will be defined by whether we can align these innovations with the human values of accountability, fairness, and trust. Preprints may change how science is shared, AI may change how it’s checked, but people still decide what is true.

– By Tony Alves
