A quick rundown from the 9th Peer Review Congress, Part 1: Author and contributor misconduct

Happy Peer Review Week 2022! This year’s Peer Review Week theme is “Research Integrity: Creating and supporting trust in research.” In celebration of this year’s theme, I will recap select sessions from the International Congress on Peer Review and Scientific Publication (PRC), which took place in Chicago, Illinois, from September 7th to 9th. This conference generally takes place every four years and features presentations and posters covering academic research on peer review and scholarly publishing practices.

As niche as this may sound, I find it to be one of the most diverse publishing conferences that I attend. That may be because the presenters are discussing many different academic aspects of peer review, and they are diving deep into research projects that they have been working on for years. Their research covers the entire gamut of the scientific publishing process, from authorship and the behavior of authors, to editorial and reviewer bias, to assessment of the effectiveness of policies and processes. The researchers examine the use of guidelines, the preferences of funders, concordance and discordance between preprints and published papers, problems with systematic reviews, the reporting of clinical trials, policies around retractions, how to detect fraud and other ethical issues; the list goes on and on!

Much of the research presented at the PRC has a direct bearing on improving trust in the publishing process, as well as on finding ways to root out fraud. In fact, the opening lecture, Bias, Spin and Problems with Transparency of Research, by Isabelle Boutron, addressed this head on by defining spin as “intentional or unintentional reporting that fails to faithfully reflect the findings and could affect the impressions of the results”. Boutron said that peer reviewers have trouble detecting spin, and that unrealistic expectations of the peer review process should be scaled back. To paraphrase PRC founder Drummond Rennie, everyone brings prejudice, misunderstanding and knowledge gaps to the process, which means there need to be mechanisms to correct for these biases. Some solutions recommended by Boutron include changing the format of scholarly communication to make it easier to interrogate and correct, making peer review more flexible and adaptable to the study format, and assisting reviewers with useful AI tools.

The first morning of the PRC was all about author and contributor misconduct. Trust in research starts with trust in researchers themselves. The first plenary session, Prevalence of Honorary Authorship According to Different Authorship Recommendations and Contributor Role Taxonomy (CRediT) Statements, presented by Nicola Di Girolamo, looked at self-reported CRediT statements and measured those statements against two different sets of authorship guidelines to determine authorship authenticity. One set of guidelines was developed by the International Committee of Medical Journal Editors (ICMJE); the other was a looser adaptation of the ICMJE guidelines. The researchers examined articles published by the Public Library of Science (PLOS) from July 2017 to October 2021 to see if any of the authors might actually be considered “honorary authors”, individuals who did not meet the minimum qualifications to be considered authors, or “supply authors”, individuals who provided supplies or funding but did not contribute to the design or interpretation of the research. The study found that approximately one third of the authors did not qualify for full authorship based on ICMJE guidelines. The percentage dropped considerably when the looser guidelines were applied, but it was still significant. The good news is that the percentages consistently decreased from 2017 to 2021 in both cases. The study recommends that stricter, more consistent criteria for authorship be developed and adopted across all of science to avoid the confusion that likely leads to honorary and supply authorship. I can’t help but wonder if the drop in honorary and supply authorship is a result of a more consistent and better understanding of the CRediT taxonomy by researchers and publishers. Di Girolamo’s study can be found here: https://osf.io/ezpxs/
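
Out of curiosity, here is a rough sketch of my own of how such a comparison might be expressed in code: self-reported CRediT roles checked against a simplified reading of the first two ICMJE criteria. The role-to-criterion mapping below is my own assumption for illustration, not the mapping used in the study.

    # Illustrative sketch (not the study's actual code): flag authors whose
    # self-reported CRediT roles do not map onto a simplified reading of the
    # first two ICMJE authorship criteria. The role groupings are assumptions.

    # Roles that plausibly satisfy ICMJE criterion 1 (substantial contribution
    # to conception/design or to data acquisition, analysis, or interpretation)
    CRITERION_1_ROLES = {
        "Conceptualization", "Methodology", "Formal analysis",
        "Investigation", "Data curation", "Software", "Validation",
    }

    # Roles that plausibly satisfy ICMJE criterion 2 (drafting or revising the work)
    CRITERION_2_ROLES = {
        "Writing - original draft", "Writing - review & editing",
    }

    # Roles that, on their own, suggest a "supply author"
    SUPPLY_ONLY_ROLES = {"Resources", "Funding acquisition"}


    def classify_author(credit_roles):
        """Return a rough label for an author based on self-reported CRediT roles."""
        meets_1 = bool(credit_roles & CRITERION_1_ROLES)
        meets_2 = bool(credit_roles & CRITERION_2_ROLES)
        if meets_1 and meets_2:
            return "likely meets ICMJE criteria 1-2"
        if credit_roles and credit_roles <= SUPPLY_ONLY_ROLES:
            return "possible supply author"
        return "possible honorary author"


    if __name__ == "__main__":
        print(classify_author({"Conceptualization", "Writing - original draft"}))
        print(classify_author({"Funding acquisition"}))
        print(classify_author({"Visualization"}))

Note that CRediT statements cannot really capture ICMJE criteria 3 and 4 (final approval and accountability), which is one reason studies like this have to approximate.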

The second plenary session also addressed author misconduct, in the form of image misuse. Daniel Evanko presented Use of Artificial Intelligence-based Tool for Detecting Image Duplication Prior to Manuscript Acceptance. This study described the use of a commercial AI tool called Proofig to detect image reuse in manuscripts submitted to the American Association for Cancer Research (AACR). Image reuse is a common problem and is not only hard to detect but also time-intensive to check manually. In a comparison of the time required to screen for image reuse, the average analysis time for manual checking was 8 minutes, while the AI-supported process took 4.4 minutes. Effectiveness was also improved with AI: manual checking detected 5 duplications in 3 manuscripts, while AI-supported screening caught 11 duplications in 8 manuscripts. One finding that I found encouraging is that most duplications were unintentional, and authors were often able to supply the correct image. It seems that many labs use image and data repositories but do not practice good data hygiene. A discussion of AACR’s use of the AI tool Proofig can be found here: https://www.nature.com/articles/d41586-021-03807-6
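
Proofig is a commercial tool and its methods are proprietary, but the general idea of automated duplicate-image screening can be illustrated with a small sketch of my own using perceptual hashing, assuming the open-source Pillow and ImageHash libraries. This is a toy example, not how Proofig or AACR actually screen images.

    # Toy illustration of automated duplicate-image screening using perceptual
    # hashing (not Proofig's method). Requires: pip install Pillow ImageHash
    from itertools import combinations
    from pathlib import Path

    from PIL import Image
    import imagehash


    def find_near_duplicates(image_dir, max_distance=5):
        """Flag pairs of figure files whose perceptual hashes are nearly identical."""
        hashes = {}
        for path in Path(image_dir).glob("*.png"):
            hashes[path.name] = imagehash.phash(Image.open(path))
        suspects = []
        for (a, ha), (b, hb) in combinations(hashes.items(), 2):
            if ha - hb <= max_distance:  # Hamming distance between the two hashes
                suspects.append((a, b, ha - hb))
        return suspects


    if __name__ == "__main__":
        for a, b, dist in find_near_duplicates("manuscript_figures"):
            print(f"Possible duplication: {a} vs {b} (distance {dist})")

A simple hash comparison like this only catches near-identical images; the appeal of commercial tools is that they also handle rotation, cropping and partial reuse, which is where manual checking really struggles.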

The third plenary session of the morning targeted paper mills, organizations that create fake papers and sell authorship on those papers to desperate researchers who need to bolster their CVs. This is a topic that I have been examining for a few years while taking part in STM’s Simultaneous Submission working group, a project managed by STM’s new STM Solutions division. The presentation, Publication and Collaboration Anomalies in Academic Papers Originating from a Russian-Based Paper Mill, given by Anna Abalkina, examined the work of a Russian paper mill by looking at the commercial offers that organization made to researchers, and then finding published papers that matched those offers. The study found that 451 papers were published in 159 journals, and that more than 6,000 authorship slots were sold to around 800 different scholars from at least 39 countries. Because the paper mill usually targeted different journals each time, it would be hard to detect these fraudulent papers using simple similarity checking. Abalkina provided some tips for editors who might be suspicious of fraud: check to see if the authors have collaborated in the past, check to see if they have common research interests or common affiliations, and if the paper’s authors specialize in different disciplines, or don’t specialize in the topic of the paper at all, investigate! This is a great use case for the work that STM Solutions is doing with the development of its Integrity Hub, a collaboration between publishers to detect image and paper duplication and paper mill activity. Abalkina’s study can be found here: https://arxiv.org/abs/2112.13322
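
To make those editorial checks concrete, here is a rough sketch of my own (not code from Abalkina’s study) of how the red flags might be expressed programmatically. The author records, the prior-collaboration lookup and the thresholds are all hypothetical; the real signal comes from editorial judgment and shared infrastructure like the Integrity Hub.

    # Hypothetical sketch of the red-flag checks Abalkina suggests for editors;
    # author records and the prior-collaboration lookup are placeholder inputs.
    from dataclasses import dataclass


    @dataclass
    class Author:
        name: str
        affiliation: str
        discipline: str


    def paper_mill_red_flags(authors, paper_topic, have_coauthored_before):
        """Return a list of warning signs worth a closer editorial look."""
        flags = []
        if not have_coauthored_before:
            flags.append("authors have no prior collaborations together")
        if len({a.affiliation for a in authors}) == len(authors) > 2:
            flags.append("no two authors share an affiliation")
        if len({a.discipline for a in authors}) > 2:
            flags.append("authors specialize in unrelated disciplines")
        if not any(a.discipline.lower() in paper_topic.lower() for a in authors):
            flags.append("no author specializes in the paper's topic")
        return flags


    if __name__ == "__main__":
        team = [
            Author("A. One", "University X", "cardiology"),
            Author("B. Two", "Institute Y", "computer science"),
            Author("C. Three", "College Z", "linguistics"),
        ]
        for flag in paper_mill_red_flags(team, "soil chemistry", have_coauthored_before=False):
            print("Red flag:", flag)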

The final paper, Effect of Alerting Authors of Systematic Reviews and Guidelines That Research They Cited Had Been Retracted, presented by Alison Avenell, addressed a systemic problem that plagues research: the proliferation of retracted science through citations. This is particularly problematic when it happens in systematic reviews of clinical trials, since diagnosis and treatment guidelines are often based on those reviews. The researchers found 27 retracted reports of clinical trials and then performed literature searches to find systematic reviews that cited those trials. They found 88 systematic reviews that fit their criteria, and after evaluating the impact of the retracted articles, the researchers found that 44% of the reviews would be significantly affected if the retracted trials were removed. The study also evaluated authors’ and editors’ reactions and actions after they were notified multiple times of the retracted trials. The results here were disappointing, as very few corrections were made by the authors or the journals, and the notifications were rarely even acknowledged. This study shows the need for more openness and more proactive effort around surfacing retracted science. In fact, there is an important initiative headed by the National Information Standards Organization (NISO) called Communication of Retractions, Removals, and Expressions of Concern, or CORREC. The CORREC working group is creating a recommended practice that “describes the involved parties, along with their responsibilities, actions, notifications, and the metadata necessary to communicate retracted research across stakeholders’ systems”. It will also address “the dissemination of retraction information (metadata & display) to support a consistent, timely transmission of that information to the reader (machine or human), directly or through citing publications, addressing requirements both of the retracted publication and of the retraction notice or expression of concern”. Avenell’s study can be found here: https://www.tandfonline.com/doi/full/10.1080/08989621.2022.2082290
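
The mechanics of that kind of check are easy to sketch: given a review’s reference list and a set of known-retracted DOIs drawn from a retraction database, cross-check them. The DOIs below are placeholders and this is just an illustration of the idea, not the researchers’ method; the hard part, as the study shows, is getting anyone to act on the result.

    # Minimal sketch of cross-checking a reference list against known retractions.
    # The retracted-DOI set would in practice come from a retraction database;
    # the DOIs shown are placeholders, not real records.

    def find_retracted_citations(cited_dois, retracted_dois):
        """Return cited DOIs that appear in the set of known-retracted DOIs."""
        def normalize(doi):
            return doi.strip().lower()
        retracted = {normalize(d) for d in retracted_dois}
        return [doi for doi in cited_dois if normalize(doi) in retracted]


    if __name__ == "__main__":
        review_references = ["10.1000/example.001", "10.1000/example.002"]
        known_retractions = ["10.1000/EXAMPLE.002"]
        for doi in find_retracted_citations(review_references, known_retractions):
            print("Cited work has been retracted:", doi)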

Tomorrow I will provide a summary of sessions from the PRC that discuss peer reviewer conduct, the effectiveness of peer review, and measures being taken to improve reviewer performance. A dependable and transparent peer review process is integral to trust in research, and I will highlight studies that address this topic. 

By Tony Alves
