The STM Frankfurt Conference 2024 highlighted the transformative role of artificial intelligence in scholarly publishing, addressing both its potential and its challenges. Experts from across the industry gathered to discuss how AI can enhance content quality, streamline workflows, and drive innovation. At the same time, they explored the ethical and policy considerations necessary to preserve the integrity of scholarly communication.
This blog series builds on the insights shared during the conference. Following an in-depth look at Linda S. Bishai’s keynote, “Our Future with AI – Getting it Right,” each post shows how her key concepts—such as the Ethical, Legal, and Societal Implications (ELSI) framework—resonate with the views of other panelists.
This series offers an exploration of the industry’s collective efforts to embrace AI while ensuring trust and quality remain central to research communication.
Key Themes from Linda S. Bishai’s Keynote, “Our Future with AI – Getting It Right”
- The Need for an Ethical, Legal, and Societal Implications (ELSI) Framework: Bishai highlighted the risks of an unregulated AI future, cautioning against dystopian outcomes. She advocated for the ELSI framework, which DARPA employs to guide ethical and reliable AI development. The ELSI approach ensures that AI remains human-centered and addresses the societal impact of its use.
- Multidisciplinary Approach to AI Development: The ELSI framework brings together diverse experts, including ethicists, social scientists, and technologists, to consider the broader consequences of AI. This multidisciplinary method helps confront complex ethical questions at every stage of AI’s lifecycle, from conception to decommissioning.
- Autonomy Standards: In military contexts, where AI might make critical life-and-death decisions, Bishai emphasized dynamic ethical guidelines. These guidelines are relevant in a non-military context as well. AI should respect societal norms, cultural differences, and legal strictures.
- Lessons from Ethical Failures: Bishai cited Google Glass as an example of ethical oversight failure. Despite its innovation, it faced backlash over privacy violations and physical discomfort. The product's failure underscored the importance of asking whether a technology should be built, not merely whether it can be.
- Benchmarks for Ethical AI: Bishai argued for systems that promote ethical behavior rather than facilitate misuse. She pointed to the U.S. Department of Defense’s efforts to embed ethical commitments in procurement, contrasting this with the private sector’s lack of transparency, where privacy policies obscure ethical principles.
- Challenges in Private Sector AI Development: In the private sector, the push for rapid innovation often overlooks ethical considerations. Bishai stressed that ELSI-driven research can help private companies plan more responsibly, ensuring long-term societal benefits while addressing potential harms.
- AI’s Role in Society: Bishai posed critical questions about AI’s purpose, advocating for its use as a tool to augment human capabilities, not replace them. She stressed the importance of understanding AI’s limitations, including its inherent biases, to ensure it complements human efforts responsibly.
Bishai’s keynote called for embedding ethical considerations at every stage of AI development. This approach allows society to harness AI’s potential while mitigating risks, ensuring that AI serves humanity responsibly and ethically.
PANEL DISCUSSION
Listening and Responding: Academic Publishers Talk AI Readiness
In a panel moderated by Roger C. Schonfeld, Vice President of Organizational Strategy at Ithaka, key players in scholarly publishing discussed how their organizations are preparing to meet the opportunities and challenges posed by artificial intelligence. Panelists Steven Heffner (IEEE), Miriam Maus (IOPP), Priya Madina (Taylor & Francis), and Marie Soulière (Frontiers) shared their perspectives on leveraging AI while safeguarding the integrity of scholarly publishing. Echoing themes from Linda S. Bishai’s keynote, the conversation underscored the importance of ethical stewardship, transparency, and human oversight in integrating AI into the academic publishing process.
Unlocking AI’s Potential
Priya Madina opened by highlighting AI’s capacity to level the playing field in global access to scholarly content. She emphasized that while experimentation with AI is crucial for innovation, it should not come at the expense of content integrity. This aligns with Bishai’s advocacy for an Ethical, Legal, and Societal Implications (ELSI) framework to ensure AI is used responsibly.
Steven Heffner shared how IEEE is leveraging AI to enhance efficiency, particularly in maintaining research integrity. However, he framed these advancements as just the beginning, suggesting that AI’s true potential lies in addressing complex global issues like climate change through multidisciplinary collaboration. This perspective resonates with Bishai’s vision of AI as a tool for solving grand challenges, provided it is guided by human ethics and oversight.
Miriam Maus presented a more cautious approach, detailing how IOP Publishing carefully integrates AI through external partnerships. For IOP, AI’s role is to streamline peer review while leaving final editorial decisions to humans. Similarly, Bishai has stressed that AI should support human judgment rather than supplant it, particularly in contexts with high ethical stakes.
Marie Soulière from Frontiers discussed her organization’s long-standing use of AI, which has evolved from research integrity checks to enhancing customer relationship management. She highlighted Frontiers’ commitment to rigorous testing before deploying new AI tools—a practice that aligns with Bishai’s emphasis on accountability and trust in AI systems.
The Necessity of Human Oversight
A recurring theme was the indispensable role of human oversight in the AI-driven publishing process. Marie Soulière pointed to the Committee on Publication Ethics (COPE) guidelines, which stress that AI cannot make final publication decisions. Instead, AI tools at Frontiers are subjected to extensive testing under human supervision before they are fully adopted. This reflects Bishai’s view that AI should be introduced cautiously and ethically, with continuous human involvement to mitigate risks.
Miriam Maus echoed this sentiment, emphasizing the need for the publishing industry to pause and thoroughly assess AI’s long-term implications. She highlighted the reputational risks of over-reliance on AI, advocating for a slower, more deliberate integration process to ensure ethical compliance—a call that mirrors Bishai’s argument for measured, ethical AI deployment.
Ethical and Legal Considerations
Priya Madina stressed the importance of understanding the legal frameworks surrounding AI, particularly as international regulations evolve. She argued for greater AI literacy among publishers and researchers to navigate these complexities effectively. This complements Bishai’s call for transparency and clear ethical benchmarks in AI use, ensuring that all stakeholders are informed and accountable.
Steven Heffner raised concerns about the indiscriminate application of AI across the publishing process, particularly in peer review. He argued that AI should not be used as a stopgap for systemic issues like excessive submissions, urging the industry to address these root problems directly. This aligns with Bishai’s broader advocacy for thoughtful, purpose-driven AI implementation that supports human values.
Stewardship of the Scholarly Record
The panelists agreed that publishers have a vital role as stewards of the scholarly record. Steven Heffner described the current state of AI in publishing as still maturing, with much work needed to establish clear legal and ethical boundaries. He underscored the importance of content provenance, ensuring that AI systems are used to build knowledge responsibly.
Miriam Maus and Marie Soulière both highlighted the need for transparency in how AI companies utilize publishers’ content, emphasizing that clear attribution and ethical licensing are critical. Soulière stressed listening to authors’ concerns about how their work is integrated into AI systems, ensuring their intellectual property is respected.
This panel reinforced the importance of balancing innovation with ethical responsibility in AI adoption. Building on Bishai’s foundational themes, the discussion highlighted the critical need for transparency, human oversight, and stewardship in the scholarly publishing industry’s AI journey. As AI continues to reshape the landscape, publishers must navigate its complexities with care, ensuring that trust and integrity remain at the forefront of academic research.