The STM Frankfurt Conference 2024 highlighted the transformative role of artificial intelligence in scholarly publishing, addressing both its potential and its challenges. Experts from across the industry gathered to discuss how AI can enhance content quality, streamline workflows, and drive innovation. At the same time, they explored the ethical and policy considerations necessary to preserve the integrity of scholarly communication.
This blog series builds on the insights shared during the conference. Following an in-depth look at Linda S. Bishai’s keynote, “Our Future with AI – Getting It Right,” each post shows how her key concepts—such as the Ethical, Legal, and Societal Implications (ELSI) framework—resonate with the views of other panelists.
This series offers an exploration of the industry’s collective efforts to embrace AI while ensuring trust and quality remain central to research communication.
Key Themes from Linda S. Bishai’s Keynote, “Our Future with AI – Getting It Right”
- The Need for an Ethical, Legal, and Societal Implications (ELSI) Framework: Bishai highlighted the risks of an unregulated AI future, cautioning against dystopian outcomes. She advocated for the ELSI framework, which DARPA employs to guide ethical and reliable AI development. The ELSI approach ensures that AI remains human-centered and addresses the societal impact of its use.
- Multidisciplinary Approach to AI Development: The ELSI framework brings together diverse experts, including ethicists, social scientists, and technologists, to consider the broader consequences of AI. This multidisciplinary method helps confront complex ethical questions at every stage of AI’s lifecycle, from conception to decommissioning.
- Autonomy Standards: In military contexts, where AI might make critical life-and-death decisions, Bishai emphasized the need for dynamic ethical guidelines. These guidelines, she noted, apply beyond military settings: AI should respect societal norms, cultural differences, and legal constraints.
- Lessons from Ethical Failures: Bishai cited Google Glass as an example of ethical oversight failure. Despite its innovation, the product faced backlash over privacy violations and physical discomfort. Its failure underscored the importance of asking whether a technology should be built, not just whether it can be.
- Benchmarks for Ethical AI: Bishai argued for systems that promote ethical behavior rather than facilitate misuse. She pointed to the U.S. Department of Defense’s efforts to embed ethical commitments in procurement, contrasting this with the private sector’s lack of transparency, where privacy policies obscure ethical principles.
- Challenges in Private Sector AI Development: In the private sector, the push for rapid innovation often overlooks ethical considerations. Bishai stressed that ELSI-driven research can help private companies plan more responsibly, ensuring long-term societal benefits while addressing potential harms.
- AI’s Role in Society: Bishai posed critical questions about AI’s purpose, advocating for its use as a tool to augment human capabilities, not replace them. She stressed the importance of understanding AI’s limitations, including its inherent biases, to ensure it complements human efforts responsibly.
Bishai’s keynote called for embedding ethical considerations at every stage of AI development. This approach allows society to harness AI’s potential while mitigating risks, ensuring that AI serves humanity responsibly and ethically.
PANEL DISCUSSION
Executive Panel: AI in Scholarly Publishing
In a forward-looking panel moderated by Caroline Sutton, CEO of STM, industry leaders examined the impact of artificial intelligence on scholarly publishing. Panelists Sarah Tegen (ACS), Daniel Ebneter (Karger), Aaron Wood (APA), and Frederick Fenter (Frontiers) offered nuanced insights into AI’s role in enhancing research, emphasizing stewardship, transparency, and accountability. Their discussion expanded upon Bishai’s keynote themes listed above, particularly the need for ethical frameworks and human oversight to navigate the evolving AI landscape.
Harnessing AI as a Scholarly Tool
Frederick Fenter set the stage by underscoring AI’s dual role as a powerful tool and a potential ethical minefield. He emphasized the importance of transparency in determining AI’s role within the scientific process, particularly in writing and accountability. This mirrors Bishai’s focus on the ELSI framework and its call for the thoughtful application of AI technologies in science.
Sarah Tegen highlighted the educational imperative, encouraging researchers to critically assess why they use AI. She emphasized reflective practices, aligning with Bishai’s argument for fostering AI literacy across all levels of academic research to ensure responsible use.
Daniel Ebneter discussed “human recoil,” a phenomenon where rapid technological advances outpace societal comfort. He advocated for establishing early guardrails, a sentiment that echoes Bishai’s call for proactive measures in AI governance to avoid future pitfalls.
Aaron Wood offered a more optimistic view, emphasizing the publishing industry’s readiness to tackle AI’s challenges. He pointed to existing legal structures, like copyright and privacy policies, as a foundation for developing responsible AI systems. This aligns with Bishai’s advocacy for a regulatory environment that fosters ethical AI integration.
Stewardship and Accountability
Stewardship emerged as a central theme, with all panelists agreeing that publishers bear a significant responsibility in preserving the integrity of scholarly records. Frederick Fenter stressed the need for robust systems to ensure accurate attribution and maintain rigorous standards, even as AI-generated content becomes more prevalent.
Sarah Tegen proposed leveraging publishers’ influence to negotiate ethical uses of content by large language models (LLMs). She highlighted the potential for trade groups to unify efforts, aligning with Bishai’s view on collective action for ethical AI practices.
Daniel Ebneter focused on the risks of integrating public and private content in LLMs, calling for innovation in publishing platforms to address these challenges. His perspective reinforced the idea of publishers as active custodians of research quality, a point Bishai consistently underscores.
Transparency and Ethical Use
Aaron Wood emphasized that transparency must guide AI’s development and deployment. He pointed to emerging tools like retrieval-augmented generation (RAG) to ensure that AI respects proper content attribution—a direct response to Bishai’s concerns about opaque AI systems.
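To make the attribution idea behind RAG concrete: the system retrieves passages from a known corpus and carries each passage’s source into the prompt, so any generated answer can be traced back to attributed content. The sketch below is illustrative only — the toy corpus, placeholder source labels, overlap-based scoring, and prompt format are assumptions for demonstration, not any publisher’s actual system.

```python
# Illustrative sketch of source attribution in retrieval-augmented
# generation (RAG). Passages retain their source labels through
# retrieval and prompt construction, so output can cite its origins.

def retrieve(query, corpus, k=1):
    """Rank passages by simple word overlap with the query (toy scoring)."""
    q = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, passages):
    """Prepend retrieved passages, each labelled with its source,
    so a model's answer can be traced to attributed content."""
    context = "\n".join(f'[{p["source"]}] {p["text"]}' for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}"

# Placeholder corpus with hypothetical source labels.
corpus = [
    {"source": "Journal A, 2023", "text": "Peer review safeguards research integrity."},
    {"source": "Journal B, 2024", "text": "Language models can summarize research articles."},
]

query = "Can models summarize research articles?"
prompt = build_prompt(query, retrieve(query, corpus))
```

In a production pipeline the overlap scoring would be replaced by dense vector search, but the principle is the same: attribution survives because the source metadata travels with every retrieved passage.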
The panel also discussed the need for collaborative dialogue between publishers and AI developers. Daniel Ebneter suggested that rather than adopting a defensive stance, publishers should position themselves as partners in improving AI systems. This approach aligns with Bishai’s advocacy for fostering open conversations to address AI’s ethical and societal impacts.
Looking Ahead
As the session concluded, Caroline Sutton posed a critical question: What does the next phase of AI in scholarly publishing look like? Aaron Wood highlighted emerging frameworks, such as AutoGenIt, which offer customizable workflows for AI integration. He stressed the importance of ongoing innovation to ensure AI tools enhance, rather than replace, human judgment.
Frederick Fenter predicted that LLMs would become the dominant medium for information distribution, making proper attribution and peer review integration even more crucial. Sarah Tegen raised concerns about losing the serendipity of discovery in AI-driven systems, questioning how to balance efficiency with the unexpected insights that often drive scientific breakthroughs.
The panel echoed a shared commitment to stewarding the ethical integration of AI in scholarly publishing. Building on Bishai’s foundational themes, the discussion underscored the need for human oversight, collaborative governance, and innovative frameworks to ensure AI serves as a force for advancing trusted research. The message was clear: while AI offers transformative potential, its success hinges on responsible, transparent, and accountable implementation.