Every four years, the International Congress on Peer Review and Scientific Publication brings together researchers, editors, funders, and publishers to examine how science is communicated, and how that process can be improved. The 10th Congress, held in Chicago in September 2025, made it clear: artificial intelligence (AI) has arrived inside the workflow, misconduct remains rampant, and journals are re-thinking their value in an open-science era.
In this post, I’ll highlight the six themes from the Congress that interested me most. I’ll expand on each in follow-up posts this week, which also happens to be Peer Review Week! Because the theme of this year’s Peer Review Week is “Rethinking Peer Review in the AI Era”, much of my focus will be on AI.
1. AI in the Peer Review Process
AI is no longer hypothetical in publishing: it’s already being used, often invisibly.
- A JAMA Network analysis of 82,829 submissions showed that author use of AI more than doubled from 2023 to 2025 (1.6% → 4.2%). Most disclosures were for language editing (50%), but some cited statistical help (12%) or drafting text (8%). ChatGPT was the most frequently named tool (63%).
- AACR data from 46,500 abstracts and 29,500 reviews showed almost no AI use before late 2022, but a steep rise after ChatGPT’s release. By 2024, nearly a quarter of abstracts carried detectable AI signatures.
- Journals themselves are experimenting: NEJM’s “AI Fast Track” pilot, presented by Zak Kohane, used GPT-5 and Gemini Pro to review clinical trial submissions. These models flagged methodological flaws that human reviewers missed.
The message: AI is already part of peer review, from writing to reviewing to editorial screening. The challenge is making its use transparent and accountable.
2. Reviewers’ Use of AI and What They Need
The most sensitive AI topic was not authors’ use of AI but reviewers’.
So what’s the problem? Lack of transparency. If reviewers use AI to improve clarity, summarize methods, or check compliance with CONSORT guidelines, should they have to declare it? Editors at the Congress suggested yes, and that journals should consider offering in-platform AI helpers so reviewers don’t need to resort to undisclosed outside tools. This would especially benefit early-career researchers and those for whom English is not a first language, the groups already shown to rely more heavily on AI.
3. Research Integrity: What’s Broken, and What Works
The Congress made clear that AI is only part of the story. The deeper crisis is research integrity.
- Paper mills are operating at industrial scale. The pro network alone produced 1,517 papers across 380 journals, involving 4,500+ authors in 46 countries. Springer identified 8,432 submissions tied to the mill and published nearly 80 despite detection.
- Fake reviewers and personas are being used to validate fraudulent papers. One operation created 26 false identities, half of which successfully became peer reviewers, citing and endorsing each other’s fabricated work.
- Image manipulation remains endemic: an analysis of 8,002 retracted papers found image problems (especially gel blots) to be the leading cause. In materials science, a review of 11,000+ SEM-based papers found that 21% had misidentified the instrument used, a red flag for paper-mill activity.
- Authorship manipulation is also common: Taylor & Francis reported that 81% of requests to add new authors after submission were denied, and these requests were strongly correlated with other red-flag behaviors.
But there are solutions: PLOS presented its multi-layered integrity screening, combining STM Integrity Hub duplicate checks, image forensics, and study-type audits. The result? Desk rejections rose from 13% in 2021 to 40% in 2025, conserving reviewer capacity and filtering out bad science earlier.
4. Keeping Humans in the Loop
Despite enthusiasm for AI, speakers consistently emphasized: AI cannot take responsibility for research integrity.
The future is hybrid: AI as a screener and summarizer, humans as interpreters and accountable decision-makers.
5. Peer Review Incentives and Timing
Peer review runs on goodwill, but that goodwill is under strain.
- A trial at Critical Care Medicine tested $250 reviewer payments: completion rates rose from 42% to 50%, and turnaround dropped slightly (12 → 11 days). Quality scores, however, did not improve. Scaling the program would cost $150,000 per year for that journal alone.
- A BMJ survey of 183 patient and public reviewers found that 48% would be more likely to accept if paid £50, while 32% preferred a one-year journal subscription. But a third said payment wouldn’t affect their decision, and some worried it would “attract the wrong motives.”
The evidence suggests money helps a bit with speed, but recognition matters too. ORCID-linked reviewer credit, certificates, and visible acknowledgment are scalable and may be more sustainable.
6. Preprints and Open Science
Finally, the Congress wrestled with the role of preprints in open science.
- Malcolm MacLeod’s Altman Lecture challenged journals to ask what value they add when preprints can disseminate research faster. He argued that journals publish “too much low-quality research” and that their criticisms of preprints ring hollow given peer review’s inconsistency.
- Studies of open peer review showed published reports were longer (22 vs 17 sentences), contained more detailed suggestions, and provided valuable training material for early-career researchers.
- BMJ’s randomized trial of an open-science checklist is testing whether embedding reproducibility prompts in peer review improves quality.
- A UK survey confirmed cultural barriers: researchers often avoid sharing protocols and data out of fear of mistakes being exposed or being “scooped.”
Preprints and open-science checklists can help make research more transparent. Journals that embrace them can position themselves as adding rigor and reproducibility on top of dissemination.
Closing Thoughts
The 10th Peer Review Congress revealed both a deep anxiety and a cautious optimism. AI is creeping into every part of the workflow, often faster than policies can keep up. Integrity threats are multiplying, but new screening tools are proving effective. Reviewers are strained, but creative incentive models are being tested. And preprints are forcing journals to define their true value in an open-science ecosystem.
In this post I’ve sketched the landscape across six themes, which also speak to the Peer Review Week theme: AI in peer review, AI used by reviewers, research integrity, humans in the loop, incentives, and preprints. In the coming days, I’ll dive deeper into these areas in three dedicated posts:
- AI and Peer Review: Opportunity and Risk
- Integrity at Scale: Fighting Paper Mills, Fake Reviewers, and Fraud
- The Future of Peer Review: Incentives, Preprints, and the Human Factor
The message from Chicago was unmistakable: peer review is changing fast. In this series I will show how the industry is examining and reacting to those changes, and what those who think deeply about peer review recommend for dealing with them.
– By Tony Alves