STM Frankfurt Conference 2024: Advancing Trusted Research in the AI Era

As artificial intelligence continues to transform industries worldwide, scholarly publishing faces unique opportunities and challenges. The STM Frankfurt Conference 2024 brought together experts to explore the critical role of AI in shaping the future of trusted research. Ranging from AI’s capacity to enhance content quality and efficiency to the ethical and policy questions it raises, the discussions at this year’s meeting highlighted the industry’s dual responsibility: harnessing AI’s potential while safeguarding the integrity of scholarly communication.

The conference reflected on the industry’s past evolution—from print to digital—and its impact on accessibility, infrastructure, and trust. This historical perspective underscored the next frontier: adapting to AI. Unlike the unverified, user-generated content that populates much of the internet, STM members produce journals, books, and databases that form the backbone of reliable AI applications. However, AI also presents new challenges, such as altering revenue models and raising questions about data ethics, rights management, and open access.

Introducing Our Blog Series

In this blog series, I will dig into the insights shared at STM Frankfurt 2024. Each post will explore the key themes discussed, starting with an analysis of Linda S. Bishai’s keynote, “Our Future with AI – Getting it Right.” Bishai’s compelling address outlined the critical need for an Ethical, Legal, and Societal Implications (ELSI) approach to AI development, offering a framework for building responsible and trustworthy AI technologies.

Following this first post, subsequent entries will examine how Bishai’s concepts intersect with the perspectives of other conference panelists. Topics will include:

  • The Role of AI in Balancing Quality, Quantity, and Openness
  • Insights on AI Readiness from Scholarly Communities
  • Publishers’ Approaches to AI Adoption
  • Industry Leaders’ Views on AI’s Strategic Role in Scholarly Publishing

This series aims to provide a comprehensive overview of the ongoing conversations around AI in scholarly publishing, highlighting both the opportunities for innovation and the critical importance of maintaining trust and quality in research.

Keynote: “Our Future with AI – Getting it Right”

Linda S. Bishai, a Research Staff Member at the Institute for Defense Analyses, delivered a keynote titled “Our Future with AI – Getting it Right”. In her address, Bishai explored the rapid development of artificial intelligence technologies and the critical ethical, legal, and societal issues that accompany them. She warned of the dangers of an unregulated AI future, invoking dystopian scenarios like those depicted in Blade Runner. Bishai emphasized that the key to avoiding such outcomes lies in the widespread adoption of an Ethical, Legal, and Societal Implications (ELSI) framework, a methodology currently employed by the Defense Advanced Research Projects Agency (DARPA), which has demonstrated its value in producing robust and reliable technologies.

The ELSI Approach: A Multidisciplinary Lens

Bishai underscored the importance of incorporating diverse perspectives, especially from the humanities—such as politics, culture, and history—into AI development. The ELSI framework facilitates a holistic approach by engaging experts across various fields, including military ethicists, social scientists, legal professionals, technologists, and humanities scholars. This multidisciplinary approach ensures that the far-reaching implications of AI, including second- and third-order effects, are thoroughly considered.

Bishai highlighted that ELSI-driven research is designed to confront challenging questions at every stage of AI development—from conceptualization and team formation to implementation and eventual decommissioning. These difficult questions are integral to ensuring that AI technologies remain human-centered and socially beneficial.

Autonomy Standards and Military Applications

In military settings, where AI might make life-and-death decisions, the ELSI framework is vital. Bishai explained that the development of autonomy standards in warfare requires careful consideration of societal norms, rules of war, and cultural and religious contexts. These ethical guidelines are dynamic and must adapt to future scenarios and emerging technologies.

Her team, which includes military commanders, JAGs (military legal advisors), and even critics advocating for bans on autonomous weapons, provides a broad spectrum of ethical viewpoints. This diversity ensures that AI systems are designed with comprehensive ethical considerations in mind, reducing the risk of misuse in critical situations.

Ethical Failures and Lessons Learned

Bishai illustrated the importance of ethical foresight with the example of Google Glass. Launched in 2013, the innovative product failed due to what Bishai called “human recoil.” The device raised significant privacy concerns, allowing users to record others in public without consent. Additionally, physical issues like eye strain and headaches contributed to its downfall. The failure of Google Glass stemmed from a narrow focus on technological feasibility—whether it could be built—without asking whether it should be built.

This example reinforced Bishai’s core message: AI technologies should not be developed purely for innovation’s sake. Developers must carefully consider their impact on human behavior, privacy, health, and well-being.

Benchmarks and Ethical Design

Instead of merely questioning whether an AI system is ethical, Bishai argued that we should evaluate whether the system enables ethical behavior. Does it encourage users to act ethically, or does it facilitate misuse? Establishing clear ethical benchmarks is essential, but equally important is understanding the rationale behind these benchmarks.

She pointed out that the U.S. Department of Defense (DoD) integrates ethical commitments into its procurement processes, guided by international human rights treaties. However, in the private sector, there is often a lack of transparency regarding ethical standards. Privacy policies frequently obscure the ethical principles underlying AI systems, making it difficult for users to discern their alignment with societal values.

Challenges in Private Sector AI Development

Bishai acknowledged that much of AI development occurs in the private sector, where the pressure for rapid results can overshadow ethical considerations. This pressure creates tension between achieving quick innovation and getting it right. However, she noted that robust ELSI research can help mitigate these issues by compelling developers to carefully map out decision pathways and consider potential outcomes.

Key Considerations for AI’s Role in Society

Finally, Bishai posed critical questions about the purpose of AI in society. She emphasized that AI should enhance human capabilities, not seek to replace them entirely. While AI systems, including generative AI, can offer significant assistance in areas where humans struggle, they also reflect human flaws and biases. Recognizing these limitations is crucial to understanding when AI should serve as a tool to augment human efforts rather than replicate human judgment.

Bishai’s keynote was a call to action: by embedding ethical considerations at every stage of AI development, society can harness the potential of these powerful technologies while safeguarding against their risks.

Audience Q&A

Question 1: How should governments handle the tension between copyright and AI development? Will they mandate the ELSI approach in AI development?

Answer: Bishai acknowledged that many AI products have already been developed without incorporating the ELSI approach, making it difficult to retroactively apply ethical guidelines. In sectors like the military, ELSI is more established due to the significant risks associated with weapons and defense technologies, but this level of rigor is lacking in commercial AI development.

Bishai highlighted the challenges of applying ELSI to products that are already on the market, especially when the AI systems are governed by “agree or don’t use” policies. These policies force users to accept terms without fully understanding the ethical ramifications of how the technology operates. Bishai suggested that instead of focusing solely on these immediate hurdles, the goal should be to mitigate potential long-term harms. Governments and regulatory bodies need to remain vigilant, identifying unintended consequences as they arise, and swiftly addressing them by calling out unethical practices and pushing for corrective actions.

She emphasized that AI development is not static. Human rights often improve when mistakes are made visible and corrected, and this iterative process can guide the responsible evolution of AI. Bishai asserted that policymakers must remain adaptable, ready to revise and improve regulations as AI technology continues to evolve.

Question 2: We’re hearing from clients that private companies developing large language models (LLMs) are competing to sell content they’ve collected without permission. How should we respond?

Answer: Bishai recommended a proactive approach to addressing the unethical use of data in AI training. Rather than focusing solely on companies that develop these models, she suggested engaging with the companies that are purchasing the AI-generated data. She posed the critical question: Have you considered the risks? Bishai encouraged asking these companies whether they’ve thought about the potential consequences of using data without proper authorization, including the risk of legal challenges.

She stressed that companies should conduct due diligence and consider the long-term implications of using AI-generated content, particularly when it comes to intellectual property and the potential harm to their reputation. Bishai advised treating this as a research project—gathering evidence of misuse, conducting thorough investigations, and exposing unethical practices. The goal is not to shut down AI development but to ensure it is done with ethical foresight. Bishai emphasized that a critical part of this process is encouraging transparency and accountability, prompting companies to think about the lasting impact of their actions.

Question 3: There’s often tension between scientists and political stakeholders when it comes to AI development. How do you approach making recommendations?

Answer: Bishai clarified that her role is not to make explicit recommendations but to force developers and policymakers to think critically about the choices they are making. Rather than giving direct advice, she encourages AI developers to ask themselves difficult questions. This process of inquiry helps reveal underlying assumptions and challenges developers to explore alternatives that they may not have considered.

For instance, if a developer claims that a certain data set or algorithm is the best choice, Bishai’s response would be to ask why that choice is being made and whether other possibilities have been explored. This kind of questioning pushes developers to dig deeper and examine their decisions from multiple angles, including ethical and societal perspectives. Bishai emphasized the importance of leaving a “trail of decisions” throughout the AI development process, which allows stakeholders to reflect on and understand why certain paths were chosen and, if necessary, to reassess and correct those choices later.

By asking “why” at every stage, developers can avoid taking the obvious path without considering alternatives, ensuring that AI systems are built with a more comprehensive understanding of the implications.
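To make the “trail of decisions” idea more concrete, here is a minimal, hypothetical sketch (not drawn from the keynote; all field names and the example entry are invented for illustration) of how a development team might record each design choice together with its rationale, the alternatives considered, and any ELSI concerns raised:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class DecisionRecord:
    """One entry in a project's 'trail of decisions'."""
    question: str                # the "why" being asked, e.g. "Why this dataset?"
    choice: str                  # what was decided
    rationale: str               # why it was decided
    alternatives_considered: List[str] = field(default_factory=list)
    elsi_concerns: List[str] = field(default_factory=list)  # ethical, legal, societal flags raised
    decided_by: str = "unspecified"
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


# Hypothetical usage: log a dataset choice so it can be revisited and questioned later.
trail: List[DecisionRecord] = []
trail.append(DecisionRecord(
    question="Why this training dataset?",
    choice="Licensed scholarly corpus only",
    rationale="Provenance and rights are documented, reducing legal and bias risk",
    alternatives_considered=["Web-scraped corpus", "Mixed licensed and unlicensed corpus"],
    elsi_concerns=["copyright", "author consent", "representativeness of the corpus"],
    decided_by="model development team",
))

for record in trail:
    print(f"{record.timestamp} | {record.question} -> {record.choice} ({record.rationale})")
```

The point is not this particular schema but the habit it encodes: every significant choice is written down with its reasoning, so stakeholders can later examine, challenge, and if necessary reverse it.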

Question 4: It’s easy to highlight stories of AI gone wrong. Can you provide an example of a success story where AI was developed using the ELSI approach?

Answer: Bishai shared a positive example of AI being developed with human-centered ethical considerations. She described a project that involved building an automated system into a robot designed to assist patients with dementia. The robot was tasked with ensuring patients took their medication. However, rather than simply administering the medication automatically, the system was designed to give patients a choice—allowing them to decide when or whether to take the medication.

The robot would then inform the doctor if the patient didn’t take their medication, thereby preserving the patient’s agency while ensuring the doctor had the information needed to provide proper care. This system was built with the explicit goal of enhancing the ethical dimensions of caregiving, allowing patients to retain a sense of control over their treatment. The system provided doctors with real-time updates, ensuring that medical professionals remained in the loop while maintaining the dignity and autonomy of the patient.
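Bishai did not describe the system in technical detail, but as a rough, hypothetical sketch (every function, message, and identifier below is invented for illustration), the human-in-the-loop pattern she outlined might look something like this: the system offers the patient a choice, never administers the dose on its own, and escalates information to the clinician rather than overriding the patient:

```python
from enum import Enum


class PatientResponse(Enum):
    TAKEN = "taken"
    DECLINED = "declined"
    NO_ANSWER = "no_answer"


def prompt_patient(medication: str) -> PatientResponse:
    """Ask the patient whether they want to take the medication now.
    A real system would use a spoken or visual interaction; this stub reads from the console."""
    answer = input(f"It is time for your {medication}. Take it now? [y/n, Enter to skip] ").strip().lower()
    if answer == "y":
        return PatientResponse.TAKEN
    if answer == "n":
        return PatientResponse.DECLINED
    return PatientResponse.NO_ANSWER


def notify_doctor(patient_id: str, medication: str, response: PatientResponse) -> None:
    """Keep the clinician informed without overriding the patient's choice."""
    print(f"[clinician log] patient={patient_id} medication={medication} response={response.value}")


def medication_round(patient_id: str, medication: str) -> None:
    response = prompt_patient(medication)
    # The system never forces the dose; it records the outcome and reports it instead.
    if response is not PatientResponse.TAKEN:
        notify_doctor(patient_id, medication, response)


if __name__ == "__main__":
    medication_round(patient_id="demo-patient", medication="evening dose")
```

The design choice worth noticing is where the automation stops: the robot informs rather than decides, which is precisely the preservation of patient agency that Bishai credited for the project’s success.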

Bishai contrasted this example with other AI systems that remove human oversight entirely, noting that systems designed to work with humans—rather than replace them—can lead to better outcomes. She highlighted that this kind of approach should serve as a model for future AI development, where technology supports and enhances human decision-making, rather than bypassing it.

Question 5: How do you consider the different constituents and stakeholders involved in AI development, particularly when there are conflicting interests?

Answer: Bishai explained that stakeholder analysis is critical in AI development, as the different constituents often have competing interests and priorities. In military applications, for example, AI systems must adhere to the law of armed conflict, where ethical questions about the right to take life come into play. Bishai emphasized the need for AI developers to carefully consider the long-term societal implications of their systems, particularly in contexts where life and death decisions are involved.

Civilian harm mitigation is one of the core concerns in military AI, but Bishai acknowledged that this principle can extend to other areas, such as healthcare and education, where the stakes may not involve life or death but still have significant ethical ramifications. In her view, a successful AI system must balance the needs of all stakeholders—military personnel, civilians, policymakers, and end-users—ensuring that the technology operates ethically across all domains.

She concluded by reiterating that AI development should never be an isolated, one-dimensional process. By continuously engaging with diverse stakeholders, developers can create systems that respect a broader set of ethical and societal values.
