This spring HighWire’s Senior Vice President and head of Product Management, Tony Alves, is attending several industry events, including the National Academy of Sciences’ Journal Summit, STM’s Research Integrity Masterclass, and the annual meetings of both the Council of Science Editors and the Society for Scholarly Publishing. For the next several weeks Tony will provide useful summaries of some of the most important sessions, highlighting insights and hot takes from those meetings.
I attended the STM Research Integrity Masterclass in April, in Washington DC, as a member of the program committee. One of the topics covered in the meeting that I would like to share here relates to the recent phenomenon of mass retractions. In the past few years there have been several high-profile mass retractions of hundreds of research papers, which can be seen as both worrying and encouraging. It is worrying that so many potentially false papers have entered the research ecosystem. It is encouraging that publishers are taking this very seriously and are taking measures to detect and identify these papers.
Research integrity is one of the most urgent issues facing scholarly publishers. I’ve noted before, in my reporting from the 2022 Peer Review Congress and in posts on my involvement in STM’s Research Integrity Hub, that the public’s trust in science is in decline. Some of that can be blamed on the politicization of science, and some on misrepresented studies that often make provocative claims. These perception issues are compounded by the forces that drive researchers to create and submit fake science, thus infecting the scientific record with false claims. The drivers that compel researchers to take shortcuts or falsify findings are most often financial or linked to career advancement. There are multiple threats to the integrity of the scientific record: paper mills that sell authorship on fake research papers, manipulation of images and data to enhance or alter actual findings, citation rings where authors cite each other’s papers even when they are not relevant, and fraudulent peer reviewers who provide positive reviews in exchange for cash or other favors.
All scholarly publishers face this problem, but it is the largest publishers who most often need to look at retracting large numbers of published articles, sometimes all at once. Investigating and issuing retractions, even of a single article, can be an onerous process, as discussed in HighWire’s Best Practice webinar “The Effects of Retractions on the Scholarly Record”. There are unique challenges for publishers who are investigating and retracting hundreds of papers at once, requiring special processes and cross-organization collaboration. Several publishers shared their strategies and provided advice at the STM meeting; however, because the meeting was held under the Chatham House Rule, I will describe two specific experiences only in general terms and will not use names.
Recently, a large commercial publisher discovered coordinated fraud across multiple, unrelated special issues. While examining reviewer reports from these unrelated papers, the publisher found that multiple reviews contained the same text. Following a deeper examination, the publisher suspected that the guest editors for those special issues were working with the reviewers in order to get the papers published. This investigation is ongoing, and it serves as a good model for how to handle a large number of retractions without getting into the intricacies of author intention, and without having to depend on actions from institutions, which tend to move slowly.
The publisher set up a cross-functional team of integrity experts, legal counsel, analytics experts, and members of the editorial team. Using the COPE Retraction Guidelines, this team conducted both manual and automated investigations to identify possibly compromised articles. This experience has resulted in a standard methodology for investigating possible fraud that combines a manual scoring scheme with data analytics, and that can be modified depending on what kind of check is being undertaken. This standardized approach is important for detecting and identifying possible fraud in a large corpus. Among the fraudulent practices this publisher discovered are peer-review manipulation, selling of authorship, mismatches between methods and conclusions, irrelevant references, and tortured phrasing.
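The publisher did not share the details of its scoring scheme, but the general idea of combining weighted fraud signals into a triage score can be sketched as follows. The signal names, weights, and threshold below are hypothetical, chosen only to illustrate the approach; they are not the publisher’s actual methodology.

```python
# Hypothetical signal weights for an integrity triage score.
# These names and numbers are illustrative, not a real publisher's scheme.
SIGNAL_WEIGHTS = {
    "duplicated_review_text": 3,      # identical text across reviewer reports
    "methods_conclusion_mismatch": 3, # conclusions unsupported by methods
    "tortured_phrasing": 2,           # telltale paraphrased jargon
    "irrelevant_references": 2,       # citations unrelated to the topic
}

def integrity_score(flags: set) -> int:
    """Sum the weights of all signals flagged for a single article."""
    return sum(w for signal, w in SIGNAL_WEIGHTS.items() if signal in flags)

def triage(articles: dict, threshold: int = 4) -> list:
    """Return article IDs whose combined score meets the escalation
    threshold, i.e. candidates for manual investigation."""
    return sorted(article_id for article_id, flags in articles.items()
                  if integrity_score(flags) >= threshold)
```

The point of a scheme like this is consistency across a large corpus: every article is scored the same way, and only those above a threshold consume scarce manual-review time.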
In order to efficiently investigate and handle mass retractions, the focus and blame are placed on manipulation of the editorial process, rather than on the authors’ intent. Although the corresponding author is informed of the investigation, the retraction notice emphasizes manipulation and irregularities in the publishing process. This focus on the editorial process is more straightforward for the publisher, and it keeps the author and the author’s institution out of the spotlight.
This approach to mass retractions has some drawbacks. It is a public relations challenge because it calls into question the effectiveness of the peer review process; there is always a chance that a legitimate article will get caught up in the dragnet; and too much publicity around suspect cases might alert the perpetrators and undermine the investigation. There needs to be a balance, because publicizing mass fraud and discussing how to combat it helps other publishers by making them aware of the signals.
Another large commercial publisher discussed their experience combating paper mills and issuing retractions on a similarly massive scale. Retracting a paper can be a long process with multiple interactions with authors, funders, and institutions. However, with paper mills and the researchers who use them, the traditional approach often results in misdirection, delay, and frustration. There is general agreement that the COPE Retraction Guidelines are useful and important; however, they are often more suitable for individual cases of misconduct than for the scale of misconduct that is taking place with paper mills.
In reaction to a recent discovery of widespread paper mill infiltration, the publisher built a “retraction factory” to handle the retractions. In accordance with COPE guidelines, they ensured that authors were contacted at least twice ahead of any retraction and were given a chance to comment. The articles to be retracted were handled in batches, and the process followed a strict schedule. In week 1, a templated notice was sent to all authors on the submission, and responses as well as bounced emails were monitored. In week 2, reminders were sent to all non-bounced authors, while alternative emails were sought for the others. In week 3, draft retraction notices, including author responses if received, were sent to all contactable authors. For bounced authors, the first notice was sent to their alternative email in week 3, and a reminder was sent in week 4. In week 5, finalized retraction notices were released to production for publication.
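The batch schedule above is regular enough to model in a few lines of code. The sketch below is a minimal illustration of that five-week timeline for a single author; the `Author` class and action strings are my own hypothetical constructs, not the publisher’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Author:
    """One author on a submission slated for retraction (hypothetical model)."""
    name: str
    email: str
    bounced: bool = False       # did the week-1 notice bounce?
    alt_email: str = None       # alternative address found in week 2, if any

def schedule_actions(author: Author) -> dict:
    """Map week number -> action, following the batch schedule described
    above: notice, reminder, draft notice, then release to production."""
    actions = {1: f"send templated notice to {author.email}"}
    if not author.bounced:
        actions[2] = f"send reminder to {author.email}"
        actions[3] = "send draft retraction notice (with author comments, if any)"
    else:
        actions[2] = "seek alternative email address"
        if author.alt_email:
            actions[3] = f"send first notice to {author.alt_email}"
            actions[4] = f"send reminder to {author.alt_email}"
    actions[5] = "release finalized retraction notice to production"
    return actions
```

Note how the schedule converges: whether or not the first email bounced, every article in the batch reaches production in week 5, which is what makes the process predictable at scale while still documenting reasonable efforts to reach each author.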
The aim was to have a streamlined process while still recording any author’s objection. The publisher needed to demonstrate that reasonable efforts were made to inform the authors, while minimizing both reputational risk and legal risk. The publisher reported that the vast majority of authors never responded; of the few who did, most disagreed with the retraction, very few denied any responsibility, and some even agreed with it. The low response rate is disappointing, though it may also suggest that many authors were unwilling to dispute the retraction. Overall, the publisher viewed this activity as a success. There has been no adverse legal action, the scientific community has reacted positively, the effort has helped to clean up the scientific literature, and they have expanded their watch list of suspicious entities.
Fortunately, the need for mass retractions remains uncommon. Many in scholarly publishing are increasingly aware of the issue, thanks to the efforts of organizations like the Committee on Publication Ethics (COPE), which produces useful guidelines, and the National Information Standards Organization (NISO), with its Communication of Retractions, Removals, and Expressions of Concern (CREC) working group. In addition, there are more opportunities to catch fraud at the beginning of the workflow, with plagiarism and image manipulation software, with help from STM’s Integrity Hub, and through industry-wide collaboration to identify simultaneous submissions and paper mill activity at the point of manuscript submission.
– By Tony Alves