Exploring AI’s Transformative Role in Scholarly Publishing at ALPSP 2024

AI was one of the most prominent topics discussed at the ALPSP Annual Conference and Awards 2024, held at the Hilton Manchester Deansgate from September 11–13, reflecting its growing influence in shaping the future of research dissemination and integrity. From its ability to enhance content discovery to the ethical concerns it raises, AI’s role in scholarly publishing is complex and evolving. Below, we review three of the most notable sessions from ALPSP 2024 that addressed AI’s potential, challenges, and future in the industry.

Keynote: Scholarly Publishing in the Era of Open Access and Generative AI

Jake Okechukwu Effoduh, Assistant Professor at the Lincoln Alexander School of Law, opened the conference with a keynote that explored how generative AI is transforming the landscape of scholarly publishing. Effoduh’s insights provided a sobering yet forward-thinking reflection on the opportunities and challenges posed by AI, particularly in relation to Open Access (OA).

Effoduh emphasized that while AI offers numerous benefits for content generation and workflow optimization, it also presents ethical challenges, chief among them the exploitation of Open Access content. Generative models such as ChatGPT are trained heavily on publicly available OA material, often without crediting or compensating the original authors. This raises significant concerns, particularly for researchers in the Global South, where access to resources and funding is already limited.

Effoduh posed a critical question: does Open Access actually exacerbate inequalities? He suggested that although OA was designed to democratize access to knowledge, it may inadvertently disadvantage researchers from less affluent regions, with the wealthier Global North benefiting from free access to knowledge while contributing little to its creation. Generative AI tools like ChatGPT and Google’s Bard draw on these OA databases to generate outputs, often with little concern for the ethical implications of scraping scholarly content.

Effoduh also highlighted AI’s current limitations—generative AI doesn’t create new knowledge but instead synthesizes existing content, potentially reinforcing biases present in its source material. This can lead to incomplete or flawed research outputs, which may perpetuate misinformation if not carefully monitored. Effoduh called on the academic publishing community to build trust between researchers, technologists, and publishers, ensuring that AI is used responsibly to preserve the integrity of scholarly work.

Panel Discussion: The Role of Human Editors in an AI World

This engaging panel discussion addressed the increasing integration of AI in scholarly publishing and how it is reshaping editorial roles. Moderated by James Butcher from Journalology, the panel featured insights from Jessamy Bagenal (Senior Executive Editor at The Lancet), Emily Chenette (Editor-in-Chief at PLOS ONE), and Ian Mulvany (Chief Technology Officer at BMJ).

The panel discussed the ways AI is already affecting researchers and how publishers and editors can support them in navigating this evolving landscape. Ian Mulvany noted that AI is increasingly used to streamline routine tasks, but questioned whether researchers are receiving enough guidance on how to properly use these tools. Emily Chenette pointed out that there is a lack of standardization across the industry regarding AI use, making it difficult for authors to understand when and how AI should be incorporated into their research processes. Both panelists emphasized the need for transparency in AI usage, but acknowledged that the community lacks clear, unified guidelines, leading some authors to use AI aggressively without sufficient oversight.

A key topic of discussion was how AI is challenging traditional concepts of authorship. Jessamy Bagenal highlighted that while AI tools, such as large language models (LLMs), primarily serve as efficiency boosters, they do not engage in human-like thought or idea generation. The Lancet asks authors to declare if AI tools were used and examines the prompts and outputs to ensure clarity in AI’s contribution. Mulvany expanded on this, explaining that while AI can generate text, the intellectual ownership of ideas remains with the author. He noted that future AI advancements, such as embedding references to validate generated content, could help address current limitations, like AI’s tendency to produce false information or “hallucinations.”

The panel debated whether it is acceptable to use AI-generated content in scholarly research. Ian Mulvany proposed that if AI-generated content results in truthful and accurate assertions, it might be acceptable, shifting the focus from how the content is produced to the reliability of the information itself. This sparked a broader discussion on how AI’s role in generating content could redefine acceptable practices in research publication.

Jessamy Bagenal noted that AI is becoming increasingly essential in editorial workflows, particularly in fraud detection and improving production efficiency. However, Mulvany argued that AI’s potential extends beyond making current tasks faster—it could fundamentally transform how the scholarly corpus is curated, analyzed, and disseminated. He emphasized that the industry has not yet fully realized AI’s broader applications, which could revolutionize the way research content is managed and accessed.

The discussion moved toward how AI might assist in peer review and quality control. Emily Chenette remarked that PLOS ONE has seen a decline in submission quality, increasing the burden of quality control on editors. AI tools could help manage this growing volume, offering suggestions or identifying weaknesses in submitted papers. Mulvany mentioned that AI tools like ChatGPT could assist in peer review by generating helpful comments, potentially lightening the workload of human reviewers.
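
The panel did not describe any concrete implementation, but as a hypothetical sketch of the kind of assistive drafting Mulvany alluded to, a workflow might prompt a general-purpose LLM for reviewer-style observations that a human reviewer then vets (the model name, prompt, and file name below are illustrative assumptions):

```python
from openai import OpenAI

# Hypothetical sketch only: drafting reviewer-style comments for human vetting.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment.
client = OpenAI()

manuscript_excerpt = open("submission_excerpt.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice, not one named in the session
    messages=[
        {
            "role": "system",
            "content": (
                "You assist a human peer reviewer. Flag unclear methods, "
                "unsupported claims, and missing statistics. Do not invent citations."
            ),
        },
        {"role": "user", "content": manuscript_excerpt},
    ],
)

# Draft comments only; a human reviewer decides what, if anything, to keep.
print(response.choices[0].message.content)
```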

The issue of bias in AI systems was another focal point of the session. Jessamy Bagenal raised concerns that much of the medical literature has a Western-centric, white bias, and while some believe it is easier to remove bias from algorithms than from humans, the panel agreed that bias remains a significant challenge. Mulvany added that while bias can be confronted once it is recognized, AI tools that generate health information based on non-authoritative sources could lead to dangerous misinterpretations. This is particularly problematic in medical publishing, where accurate, unbiased information is critical.

Rather than simplifying information, Ian Mulvany suggested that AI should be used to enrich scholarly content. He provided an example of transforming academic articles into more engaging formats, such as podcasts, to enhance accessibility and trust. However, he cautioned against reducing complex research to overly simplistic summaries, which he likened to “low-resolution maps” that omit important details. Instead, AI should be leveraged to amplify the depth and engagement of academic content.

The panelists agreed that while AI offers promising tools for improving efficiency in editorial workflows, human editors will remain crucial in ensuring the accuracy, integrity, and trustworthiness of scholarly content. AI can assist with routine tasks, but human editors are essential for addressing biases, verifying the truthfulness of AI-generated content, and maintaining ethical standards in publishing. The community still needs clear guidelines and transparent practices around AI’s role, yet there is also room for AI to make scholarly content more accessible and engaging without sacrificing its depth or complexity. As AI continues to evolve, the publishing industry must balance innovation with the core principles of research integrity and trust.

Session: An Update on the European Accessibility Act (EAA)

Emphasizing the role of AI in accessibility, Simon Holt, Publisher and Co-chair of Elsevier Enabled, provided an update on how the European Accessibility Act (EAA) is impacting scholarly publishers. With a focus on making digital content—from books to journals—accessible to all, Holt discussed the growing importance of AI in achieving these goals. As the industry grapples with new regulations, AI is emerging as both a powerful tool and a challenge, particularly when it comes to ensuring content meets stringent accessibility requirements.

Accessibility is crucial not just for compliance, but to ensure equitable access to information for all readers. According to the United Nations, 15% of the world’s working-age population lives with some form of disability, ranging from visual impairments to cognitive limitations, photosensitivity, and motor disabilities. Scholarly publishers must make their content accessible to:

  • Readers without vision, including those who rely on screen readers
  • Individuals with limited physical strength or reach, impacting how they interact with digital platforms
  • Neurodivergent readers or those with photosensitive seizures
  • People with limited cognition, hearing, or vocal capability

But accessibility is not limited to those with disabilities. AI-driven features like closed captions and text-to-speech tools also benefit individuals who speak English as a second language, or those who consume media predominantly on mobile devices—something that 75% of people report doing. AI’s role in making content universally accessible has become a central focus as publishers work toward full EAA compliance.

AI is already playing a significant role in helping publishers meet accessibility standards, but it also presents challenges. As Holt explained, AI excels at transcription and basic tasks, such as generating alt-text for simple images or converting spoken content into captions. These capabilities allow AI to automate many of the tedious aspects of accessibility work, making it possible to scale efforts across a publisher’s entire catalogue, including both frontlist and backlist titles.

However, AI struggles when faced with more complex content. For instance, AI can generate basic alt-text for photographs but falters when dealing with complex graphs, charts, or detailed medical images—common features in academic publishing. Similarly, while AI can transcribe videos into captions with relative accuracy, it struggles to summarize content or interpret nuanced academic material. In these cases, human oversight remains critical, and Elsevier’s Responsible Use of AI policy ensures that there will always be a “human in the loop” to validate AI-generated outputs.

Holt emphasized that while AI is a valuable tool, its limitations mean that it cannot be relied upon for every aspect of accessibility. The industry must approach AI with caution, ensuring that it is used responsibly to enhance, not undermine, the integrity of scholarly content.

The European Accessibility Act (EAA) mandates that by 2025, all digital content—including eBooks, websites, and eCommerce platforms—must be fully accessible to users with disabilities. Non-compliance could result in hefty fines, with Germany alone expecting fines of up to €100,000 per instance. Holt discussed how AI will be pivotal in ensuring that publishers can meet these requirements, particularly as they grapple with making backlist content accessible.
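
Holt did not go into tooling at this level, but a plausible first step toward EAA compliance, sketched below under assumed file names, is an automated audit that flags images lacking alt-text in a publisher’s XHTML content:

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

# Hypothetical audit sketch: find <img> elements with missing or empty alt text
# in a single XHTML chapter file (the file name is an illustrative assumption).
html = open("chapter1.xhtml", encoding="utf-8").read()
soup = BeautifulSoup(html, "html.parser")

missing = [img.get("src", "?") for img in soup.find_all("img") if not img.get("alt")]

if missing:
    print(f"{len(missing)} image(s) missing alt-text:")
    for src in missing:
        print(" -", src)
else:
    print("All images carry alt-text.")
```

Run across a catalogue, a report like this helps prioritize which titles need remediation first.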

Despite the concerns mentioned above, one area where AI can make a significant impact is the generation of alt-text, the descriptions that make images and graphs accessible to visually impaired users. Holt explained that authors typically lack the technical writing expertise needed to create high-quality alt-text for complex academic images, making this a labor-intensive process. To address this, Elsevier plans to have AI-powered suppliers generate the initial alt-text, which authors can then review during the proofing stage. This workflow allows AI to handle the heavy lifting while ensuring that human experts maintain control over the final output.

However, AI’s ability to handle complex alt-text remains limited. For instance, while AI can describe a photograph or simple chart, it struggles to accurately describe intricate medical imagery, such as MRI scans or histopathology slides. This is where human intervention is essential to ensure that the alt-text is accurate and meaningful.
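
Elsevier’s supplier pipeline was not described in technical detail, but a minimal sketch of the draft-then-review pattern Holt outlined might use an open-source image-captioning model (BLIP here, an illustrative choice) to produce a first draft that the author corrects at proofing:

```python
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

# Illustrative sketch: generate DRAFT alt-text with an open-source captioning
# model; the output must be reviewed by a human before publication.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("figure_2.png").convert("RGB")  # file name is an assumption
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
draft_alt_text = processor.decode(output_ids[0], skip_special_tokens=True)

print("DRAFT alt-text (requires human review):", draft_alt_text)
```

For photographs and simple charts a draft like this is often usable with light edits; for MRI scans or histopathology slides it will typically fail, which is exactly where the human-in-the-loop step carries the load.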

One of the biggest challenges for publishers is ensuring that backlist content—titles published before accessibility standards were common practice—meets EAA requirements. Holt explained that Elsevier is taking a prioritized approach to backlist accessibility, focusing on high-use titles first. Here, AI can play a critical role in identifying and processing large volumes of content, speeding up the process of making backlist titles accessible.

For lower-use titles, Holt mentioned that Elsevier is exploring an “on request” model, where readers can request that specific backlist titles be made accessible. AI could assist in this process by generating the basic accessibility features for requested titles, reducing the time and cost associated with manual updates.

AI’s transcription capabilities are particularly valuable for making audio and video content accessible. Holt pointed out that AI is highly effective at transcribing spoken content into captions, a feature that benefits not only users with hearing impairments but also those who are consuming content in noisy environments or in a language that isn’t their native tongue.

However, AI still struggles to summarize complex academic discussions, which limits its usefulness for condensing the longer lectures and in-depth discussions common in scholarly work. Human editors must step in to ensure that captions and summaries are accurate and coherent, especially for content-heavy academic material.
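
The session did not name specific transcription tools, but as an illustration of the captioning capability Holt described, the open-source Whisper model produces timestamped segments that map naturally onto caption cues (the tool choice and file name are assumptions):

```python
import whisper  # pip install openai-whisper

# Illustrative sketch: transcribe a recorded lecture and print rough caption cues.
model = whisper.load_model("base")
result = model.transcribe("lecture_recording.mp4")

for segment in result["segments"]:
    start, end = segment["start"], segment["end"]
    print(f"[{start:7.1f}s -> {end:7.1f}s] {segment['text'].strip()}")
```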

As publishers move toward full compliance with the European Accessibility Act, Holt emphasized that this is not a “one and done” process. Accessibility standards will continue to evolve, and AI will need to adapt alongside them. Holt warned that even after achieving full compliance, publishers will likely face additional terms and conditions during contract renewals, RFPs, and audits, which will require continuous updates to accessibility features and documentation.

Moreover, AI will need to evolve to meet the growing demands of accessibility legislation. While AI can handle transcription and basic alt-text generation today, future advancements in natural language processing and machine learning could enable AI to take on more complex tasks, such as summarizing dense academic papers or interpreting intricate visual data. For now, however, AI must be seen as a tool that complements human expertise rather than replacing it.

The role of AI in accessibility is both promising and challenging. While AI offers powerful tools to streamline the creation of accessible content, it is not a panacea. Human oversight is essential to ensure that AI-generated outputs are accurate, nuanced, and compliant with accessibility standards. As the European Accessibility Act deadline approaches, publishers must rely on AI to meet the demands of making vast amounts of digital content accessible, but they must also be prepared to integrate human expertise where AI falls short.

In the coming years, AI’s role in accessibility will continue to grow, and publishers like Elsevier are leading the charge in figuring out how to use AI responsibly. By combining AI’s efficiency with human judgment, the scholarly publishing industry can ensure that all readers, regardless of their abilities, have equal access to knowledge.

AI’s Future in Scholarly Publishing: Key Takeaways

The keynote and discussions at ALPSP 2024 highlighted the rapid advancement of AI-driven tools in scholarly publishing. However, these tools frequently rely on freely accessible content without compensating the original creators, raising serious concerns about intellectual property (IP) and copyright, particularly regarding how generative AI models utilize publicly available data. This issue is especially critical in regions like the Global South, where researchers already face challenges accessing publishing platforms, and the misuse of their content exacerbates these inequities.

Key Points Raised:

  • Fair use vs. exploitation: Generative AI frequently scrapes vast amounts of Open Access (OA) content, often without compensating the original creators. This sparks a debate about balancing innovation and fair use. How can this issue be addressed to foster AI innovation without exploiting creators?
  • Licensing AI-generated content: There is an urgent need for publishers to implement clear policies regarding AI in content creation to ensure that authorship rights and IP are respected. New licensing models, such as Responsible AI Licenses (RAIL), could help manage AI’s ethical use of scholarly data, ensuring that content creators are properly acknowledged and compensated.

The conference stressed the importance of developing frameworks that strike a balance between innovation and fair compensation. Without clear licensing agreements and regulations, AI’s usage of vast amounts of open data risks becoming exploitative rather than beneficial to the scholarly community.

Ethical and Practical Concerns:

The discussions at ALPSP 2024 underscored AI’s growing role in scholarly publishing. While AI presents numerous opportunities to improve workflows, enhance content discovery, and expand research accessibility, it simultaneously raises concerns around bias, content integrity, and intellectual property rights. Human editors and researchers are essential in ensuring that AI is deployed responsibly and that the quality and integrity of scholarly content are maintained.

As AI technologies evolve, the scholarly publishing community must collaborate to develop ethical frameworks and best practices for integrating AI into academic workflows. Maintaining a balance between technological innovation and human oversight is critical to maximizing AI’s potential while preserving the values of equity, trust, and integrity in academic research. AI’s influence on scholarly publishing is still emerging, and it will undoubtedly be transformative in the coming years.

– by Tony Alves
