The STM Frankfurt Conference 2024 highlighted the transformative role of artificial intelligence in scholarly publishing, addressing both its potential and its challenges. Experts from across the industry gathered to discuss how AI can enhance content quality, streamline workflows, and drive innovation. At the same time, they explored the ethical and policy considerations necessary to preserve the integrity of scholarly communication.
This blog series builds on the insights shared during the conference. Following an in-depth look at Linda S. Bishai’s keynote, “Our Future with AI – Getting it Right,” each post shows how her key concepts—such as the Ethical, Legal, and Societal Implications (ELSI) framework—resonate with the views of other panelists.
This series offers an exploration of the industry’s collective efforts to embrace AI while ensuring trust and quality remain central to research communication.
Key Themes from Linda S. Bishai’s Keynote, “Our Future with AI – Getting It Right”
- The Need for an Ethical, Legal, and Societal Implications (ELSI) Framework: Bishai highlighted the risks of an unregulated AI future, cautioning against dystopian outcomes. She advocated for the ELSI framework, which DARPA employs to guide ethical and reliable AI development. The ELSI approach ensures that AI remains human-centered and addresses the societal impact of its use.
- Multidisciplinary Approach to AI Development: The ELSI framework brings together diverse experts, including ethicists, social scientists, and technologists, to consider the broader consequences of AI. This multidisciplinary method helps confront complex ethical questions at every stage of AI’s lifecycle, from conception to decommissioning.
- Autonomy Standards: In military contexts, where AI might make critical life-and-death decisions, Bishai emphasized dynamic ethical guidelines. These guidelines are relevant in a non-military context as well. AI should respect societal norms, cultural differences, and legal strictures.
- Lessons from Ethical Failures: Bishai cited Google Glass as an example of a failure of ethical oversight. Despite its innovation, the product faced backlash over privacy violations and physical discomfort. Its failure underscored the importance of asking whether a technology should be built, not merely whether it can be.
- Benchmarks for Ethical AI: Bishai argued for systems that promote ethical behavior rather than facilitate misuse. She pointed to the U.S. Department of Defense’s efforts to embed ethical commitments in procurement, contrasting this with the private sector’s lack of transparency, where privacy policies obscure ethical principles.
- Challenges in Private Sector AI Development: In the private sector, the push for rapid innovation often overlooks ethical considerations. Bishai stressed that ELSI-driven research can help private companies plan more responsibly, ensuring long-term societal benefits while addressing potential harms.
- AI’s Role in Society: Bishai posed critical questions about AI’s purpose, advocating for its use as a tool to augment human capabilities, not replace them. She stressed the importance of understanding AI’s limitations, including its inherent biases, to ensure it complements human efforts responsibly.
Bishai’s keynote called for embedding ethical considerations at every stage of AI development. This approach allows society to harness AI’s potential while mitigating risks, ensuring that AI serves humanity responsibly and ethically.
PANEL DISCUSSION
Looking at Quality, Quantity, and Openness Through the Lens of AI
The panel discussion, moderated by Chris Graf, Research Integrity Director at Springer Nature Group, brought together diverse perspectives on the evolving role of AI in scholarly publishing. Panelists Linda S. Bishai, Adam Day (Clear Skies), Pascal Hetzscholdt (Wiley), and Chloe Chadwick (Oxford Internet Institute) explored the intersection of technology, ethics, and human oversight, building on the themes from Bishai’s keynote described above.
AI’s Role in Research and Publishing
Chloe Chadwick opened the discussion by examining how AI is integrated into various stages of research. She emphasized that AI tools streamline processes like data analysis and manuscript preparation, allowing researchers to focus on substantive work. However, Chadwick echoed Bishai’s concern about transparency, particularly on preprint servers where scrutiny of AI’s role is limited. This aligns with Bishai’s advocacy for the Ethical, Legal, and Societal Implications (ELSI) framework, highlighting the need for ethical foresight across all platforms.
Adam Day offered a complementary viewpoint, describing AI as a force that could shift the industry’s focus from quantity and access back toward quality. Like Bishai, he stressed that human oversight is indispensable. He proposed that researchers engage in “conversations” with AI models to extract insights, ensuring these tools enhance, rather than replace, human judgment.
Challenges in AI Data and Content Moderation
Pascal Hetzscholdt provided insight into the risks associated with AI training data. Drawing parallels to Bishai’s examples of ethical failures, he cautioned against the “black box” nature of AI systems, where the source and quality of data are often unclear. Hetzscholdt argued for robust content moderation practices, ensuring AI-generated outputs are both trustworthy and replicable. This reinforces Bishai’s call for transparency and rigorous oversight to mitigate risks inherent in opaque AI models.
The Redefinition of Quality
Bishai revisited her keynote themes by exploring how AI is reshaping societal perceptions of quality. She noted a decline in critical thinking skills among students, which she attributed in part to over-reliance on digital tools. Her proposed solution—teaching AI literacy from an early age—resonated with the panel’s broader discussion of ethical education. By understanding AI’s limitations, users can better navigate the digital landscape and discern quality content.
A Path Forward
As the discussion concluded, the panelists agreed on a central tenet: while AI holds immense potential, its success depends on responsible integration into human-led systems. Bishai’s ELSI framework provided a foundational lens through which the panel explored these issues, emphasizing the importance of trust, transparency, and ethical design in AI-driven scholarly publishing.
This panel, like the broader conference, underscored a critical message: the future of AI in research hinges not just on technological innovation but on our collective commitment to advancing it responsibly.