Insights on AI Readiness from Scholarly Communities

The STM Frankfurt Conference 2024 highlighted the transformative role of artificial intelligence in scholarly publishing, addressing both its potential and its challenges. Experts from across the industry gathered to discuss how AI can enhance content quality, streamline workflows, and drive innovation. At the same time, they explored the ethical and policy considerations necessary to preserve the integrity of scholarly communication.

This blog series builds on the insights shared during the conference. Following an in-depth look at Linda S. Bishai’s keynote, “Our Future with AI – Getting it Right,” each post shows how her key concepts—such as the Ethical, Legal, and Societal Implications (ELSI) framework—resonate with the views of other panelists.

This series offers an exploration of the industry’s collective efforts to embrace AI while ensuring trust and quality remain central to research communication.

Key Themes from Linda S. Bishai’s Keynote, “Our Future with AI – Getting It Right”

  1. The Need for an Ethical, Legal, and Societal Implications (ELSI) Framework: Bishai highlighted the risks of an unregulated AI future, cautioning against dystopian outcomes. She advocated for the ELSI framework, which DARPA employs to guide ethical and reliable AI development. The ELSI approach ensures that AI remains human-centered and addresses the societal impact of its use.
  2. Multidisciplinary Approach to AI Development: The ELSI framework brings together diverse experts, including ethicists, social scientists, and technologists, to consider the broader consequences of AI. This multidisciplinary method helps confront complex ethical questions at every stage of AI’s lifecycle, from conception to decommissioning.
  3. Autonomy Standards: In military contexts, where AI might make critical life-and-death decisions, Bishai emphasized dynamic ethical guidelines. These guidelines are relevant in a non-military context as well. AI should respect societal norms, cultural differences, and legal strictures.
  4. Lessons from Ethical Failures: Bishai cited Google Glass as an example of a failure of ethical oversight. Despite its innovation, the product faced backlash over privacy violations and physical discomfort. Its failure underscored the importance of asking whether a technology should be built, not just whether it can be.
  5. Benchmarks for Ethical AI: Bishai argued for systems that promote ethical behavior rather than facilitate misuse. She pointed to the U.S. Department of Defense’s efforts to embed ethical commitments in procurement, contrasting this with the private sector’s lack of transparency, where privacy policies obscure ethical principles.
  6. Challenges in Private Sector AI Development: In the private sector, the push for rapid innovation often overlooks ethical considerations. Bishai stressed that ELSI-driven research can help private companies plan more responsibly, ensuring long-term societal benefits while addressing potential harms.
  7. AI’s Role in Society: Bishai posed critical questions about AI’s purpose, advocating for its use as a tool to augment human capabilities, not replace them. She stressed the importance of understanding AI’s limitations, including its inherent biases, to ensure it complements human efforts responsibly.

Bishai’s keynote called for embedding ethical considerations at every stage of AI development. This approach allows society to harness AI’s potential while mitigating risks, ensuring that AI serves humanity responsibly and ethically.

PANEL DISCUSSION

Community Insights on AI Readiness

The panel titled “Community Insights on AI Readiness” explored the complexities of AI integration in scholarly research. Moderated by Heather Staines, the discussion brought together Lisa Janicke Hinchliffe (University of Illinois at Urbana-Champaign), Marc Ratkovic (University of Mannheim), Gracian Chimwaza (ITOCA), and Anne Taylor (Wellcome Trust), each offering diverse viewpoints on the opportunities and challenges AI presents in advancing trusted research. This session built on Bishai’s foundational ideas listed above, particularly the critical importance of transparency, ethics, and community-driven approaches to AI adoption.

AI Usage Across Global Regions

Gracian Chimwaza highlighted the uneven landscape of AI readiness, particularly in Africa. He pointed out that while many educators and researchers are beginning to adopt AI tools, the digital divide remains a significant obstacle. However, Chimwaza echoed Bishai’s assertion that the ethical use of AI is a universal challenge, transcending regional boundaries. Both stressed that AI must serve as an equalizer, reducing disparities rather than exacerbating them.

Anne Taylor shared her experiences at the Wellcome Trust, where AI’s role in evaluating research proposals is carefully scrutinized. She emphasized the importance of identifying when and how AI has been applied, ensuring that its use aligns with ethical standards—a principle directly linked to Bishai’s ELSI framework. Like Bishai, Taylor sees the potential for AI to enhance decision-making processes but insists on human oversight to maintain accountability.

The Role of AI in Academic Output

Marc Ratkovic provided insights into how AI is transforming research workflows, particularly for early-career researchers. He noted that while AI can help non-native English speakers and streamline document preparation, overreliance on these tools may hinder the development of critical thinking skills. This concern mirrors Bishai’s warning about the broader societal impact of AI on education and cognitive abilities. Both argue for a balanced approach, where AI aids but does not overshadow the human element in knowledge creation.

Lisa Janicke Hinchliffe contextualized AI within the broader digital transformation of academia. She identified the current “AI era” as a pivotal moment, much like the rise of the Internet and Web 2.0. Hinchliffe argued that AI should not merely replicate existing processes but create new pathways for knowledge generation. This aligns with Bishai’s call for AI literacy and the need to critically evaluate the tools researchers choose to adopt.

Balancing Ethical Use and Innovation

The panelists universally agreed on the importance of transparency in AI’s role within scholarly publishing. Anne Taylor cautioned against viewing AI as a replacement for human judgment, particularly in peer review. Similarly, Bishai has consistently argued that AI systems should support ethical behavior rather than automate critical decisions without oversight.

Marc Ratkovic expanded on this by urging institutions to maintain a clear rationale for AI use. Whether enhancing productivity or improving research quality, these tools must be wielded thoughtfully. Ratkovic’s perspective reinforces Bishai’s emphasis on setting and adhering to ethical benchmarks in AI development and application.
The panel emphasized the need for ongoing dialogue among researchers, publishers, and funders. While AI holds transformative potential, its implementation must be guided by principles of responsibility and transparency. Building on Bishai’s keynote themes, this session underscored the collective responsibility of the global research community to ensure that AI serves as a force for good, advancing both the quality and integrity of scholarly work.