Oct 4, 2023 Article

“Middleware” and Modalities for the International Governance of AI

The rapid rise and broad adoption of generative AI technologies underscore the urgent need for comprehensive governance that spans every stage of an AI system’s lifecycle. When AI technologies are deployed in the structures and institutions of society with care, caution, and consistency, they have the potential to foster collective progress and elevate capabilities. Deployed rashly or without safeguards, however, they pose substantial risks: they can destabilize societies, jeopardize public and individual safety, amplify existing inequalities, and undermine international relations. The expansive reach and influence of these technologies make it urgent to envision new international governance modalities and safeguards.

Historically, the AI research and development sector has resisted government oversight in favor of self-regulation, while governments have lagged in even grappling with the need for oversight. This approach is inadequate. With a handful of corporations controlling AI technologies that permeate every facet of our lives, and with challenges mounting across many domains, a power imbalance is emerging. The need for rigorous, ethical global oversight and decisive regulatory action has become undeniable.

Many states and regional groups are either implementing restrictions or contemplating them, especially as investments in AI-powered national language model technologies and downstream applications grow. However, some have not begun any formal deliberations, while others are voicing concerns about lagging in this rapidly evolving field due to their technological inexperience and limited engagement capability. This has resulted in a fragmented governance landscape marked by a disparate proliferation of models and a patchwork of rules that reflect differing cultural norms and strategic goals. Notably, only a handful of laws have been enacted that specifically address AI.

To some degree, competition in the regulatory landscape can be beneficial: centralized technology governance can stifle innovation and agility. However, too much fragmentation can allow unethical practices to become established through “forum shopping,” in which companies pivot towards jurisdictions with more lenient regulations. Addressing this risk requires global collaboration.

It is clear that AI needs a customized international governance framework that draws on models developed in organizations such as the IPCC, which assesses the scale and impacts of climate change; the IAEA, which enhances the contribution of atomic energy to peace, health, and prosperity while ensuring that it is not used to further any military purpose; and CERN, which advances research in fundamental physics. Such an approach, bringing together political and technical functions, would serve as a bridge between technologists and policymakers, balancing promotion and control to address the gaps left by current mechanisms and approaches. Such a framework should also promote cooperation and dialogue among stakeholders and engage the public in meaningful and informed discussion.

The ultimate objective should be binding global regulations, premised on instruments for monitoring, reporting, verification, and, where necessary, enforcement, ideally supported by a treaty. Immediate steps toward this framework, however, should begin now. These intermediary actions can be likened to “middleware” in computer science: software that enhances interoperability among diverse devices and systems. Governance “middleware” can not only connect and align existing efforts but also pave the way to more enforceable measures in the future.
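
To make the computing metaphor concrete: in software, middleware sits between systems that were not designed to talk to each other and translates between them without replacing either one. A minimal Python sketch, using entirely hypothetical registry formats, illustrates the idea:

```python
from dataclasses import dataclass

# Hypothetical, incompatible reporting formats from two jurisdictions.
def registry_a_report(system_id: str) -> dict:
    return {"id": system_id, "risk_tier": "high"}

def registry_b_report(system_id: str) -> dict:
    return {"system": system_id, "riskLevel": 3}

@dataclass
class NormalizedReport:
    system_id: str
    risk_level: int  # shared 1-3 scale

RISK_TIERS = {"low": 1, "medium": 2, "high": 3}

def adapt(raw: dict) -> NormalizedReport:
    """Translate either source format into one shared schema,
    leaving both upstream systems untouched."""
    if "risk_tier" in raw:
        return NormalizedReport(raw["id"], RISK_TIERS[raw["risk_tier"]])
    return NormalizedReport(raw["system"], raw["riskLevel"])

# Both reports now become comparable without replacing either registry.
print(adapt(registry_a_report("model-x")), adapt(registry_b_report("model-y")))
```

Governance middleware would play the analogous role among regulatory regimes: aligning and connecting what already exists rather than waiting for a single global system.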

“Middleware” entities and providers could be established under the auspices of existing organizations, created anew, or formed as a combination that brings in other initiatives with demonstrated global legitimacy and technical proficiency. The activities proposed below need not all be performed by one entity; the functions can be distributed.

In seeking to achieve global AI governance, we face two pressing risks: the potential failure of well-intentioned but overly ambitious efforts, and proposals whose thrust goes no further than stating admirable objectives. The areas and modalities proposed below aim to overcome political divergence, organizational red tape, deterministic tech narratives, and efforts to preserve industry self-regulation. They are intended as tangible proposals designed to help navigate the complexities of global technological governance and overcome fragmentation. While this paper does not delve into the specifics, some of the proposed areas for further consideration might require concessions regarding both emergent and existing intellectual property, and national security imperatives will play a significant role in how these issues are addressed. Highlighting the collective benefits of demonstrating robust technology governance for international stability will therefore be crucial for the acceptance of any proposals.

It is important to note that the list is not ranked by importance; each item is presented as a potential approach and area for further exploration. Some might be immediately relevant, while others may not be. The activities suggested for middleware governance entities are based on the Framework for the International Governance of AI, a collaborative effort between the Artificial Intelligence & Equality Initiative (AIEI) at Carnegie Council for Ethics in International Affairs (CCEIA) and the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA). The modalities proposed below are structured to reflect elements suitable for a functional middleware approach that can facilitate and expedite the transition to a formal international agreement, provided states concur on a feasible path forward.

AI Impact Hub: A global hub, developed in close collaboration with the technical communities relevant to its intended objectives, could monitor high-impact AI systems, their uses, and edge cases worldwide. Serving as a registry, it could promote collaboration by documenting both the training data and the subsequent outcomes and impacts of these systems.

Assessment of Acceptance Levels: Evaluating and publicizing which normative frameworks, regulations, best practices, and standards in AI governance have garnered the most global acceptance and demonstrated impact can enhance their adoption. This is especially pertinent when considering access to generative AI technologies and models, transparency of training data, compute resources, environmental impact, declarations regarding product maturity, and the use of data commons.

Data Lineage and Provenance Registry: Capturing the derivation history of a data product, from its original sources onward, is essential to establish reliable data lineage and maintain robust data provenance practices. Relying on watermarking as a standalone measure is insufficient, as it primarily addresses ownership and copyright. Ensuring end-to-end data traceability and integrity requires an approach that combines watermarking with detailed lineage tracking.
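
As a rough illustration of what machine-readable lineage tracking could look like, the following sketch (all field names are assumptions, not a proposed standard) makes each lineage entry commit to its predecessor by hash, so a data product’s derivation history cannot be silently rewritten:

```python
import hashlib
import json
from datetime import datetime, timezone

def record_lineage_step(chain: list, actor: str, action: str, inputs: list) -> list:
    """Append a tamper-evident lineage entry; each entry hashes its predecessor."""
    prev_hash = chain[-1]["entry_hash"] if chain else "genesis"
    entry = {
        "actor": actor,        # who transformed the data
        "action": action,      # e.g. "collected", "cleaned", "merged"
        "inputs": inputs,      # source datasets or prior artifacts
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    # Hash the entry itself; altering any field later breaks the chain downstream.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return chain + [entry]

# Usage: a two-step derivation history for an illustrative dataset.
chain = record_lineage_step([], "lab-a", "collected", ["survey-2023"])
chain = record_lineage_step(chain, "lab-b", "cleaned", [chain[0]["entry_hash"]])
```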

Interoperable Data Sharing Frameworks: For effective global oversight, crafting and adopting international standards and frameworks that harmonize data governance practices across regions is paramount. This becomes especially crucial considering the vast volume of private data used in the creation of proprietary technologies. By establishing these harmonized frameworks, we can promote enhanced interoperability and deepen public trust in worldwide data-sharing initiatives. This collaborative effort demands the combined actions of governments, organizations, and industry stakeholders to ensure that data exchanges are transparent, properly disclosed, and secure for all parties involved.

Global “red-teaming”: Independent, multidisciplinary expert rosters, often referred to as “red teams,” could be established, with membership refreshed every two years to address conflicts of interest and with participants required to declare any involvement in AI-related development. These teams would be tasked with scrutinizing the positive and negative implications of AI across various sectors, developing scientific and engineering tools for assessing the safety and impact of AI systems across their lifespan and value chain, and anticipating future developments.

Technology Alignment Council: A standing consultative body of global technology firms, meeting quarterly to share real-time governance insights, could foster the cohesive and synergistic advancement of interoperable and traceable AI technologies. This approach emphasizes continuous dialogue and collaboration among tech giants and other stakeholders, helps prevent undue concentration of power, and champions transparency, interoperability, and traceability in the pursuit of responsible AI development.

Global Best Practices: Policy templates and model safeguard systems could be devised to recognize the universal-use characteristics of AI technology. These systems should balance the encouragement of beneficial AI applications with strong controls to counter adversarial uses and reduce harmful outcomes. Such a strategy may offer a more even playing field, especially for nations and entities striving to keep pace with rapid technological progress and shifting regulatory environments. Achieving agreement on this balance is core to making any meaningful progress in AI (and other emerging tech) governance.

Annual Report: An annual report that compiles and synthesizes research on emerging AI trends, investments, and product launches, and that evaluates relevant governance frameworks, could be game-changing. It would also cover global drivers, potential threats, threat actors, and opportunities. By proposing potential scenarios, providing actionable recommendations for governments and international organizations, and including a detailed timeline to prevent delays in follow-up actions, it would substantially strengthen informed decision-making and elevate public discourse.

Global Incident Database: A global database, crafted for both anonymous and identified reporting of significant AI-related incidents and building upon existing local efforts, could lower the barriers to reporting and heighten the incentives to do so. This prospective tool should be managed by a middleware entity with demonstrated technical capabilities to assess claims. It could catalyze cross-border collaboration and ensure consistent threat analysis. This database would provide a secure platform to discuss and evaluate emerging threats, enhancing global preparedness and serving as a measure to bolster confidence.
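
A minimal sketch of what a shared incident record might contain, assuming illustrative fields rather than any agreed schema; note that the reporter field is optional, which is one way to support both anonymous and identified reporting:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Severity(Enum):
    NEAR_MISS = "near miss"
    HARM = "harm"
    SYSTEMIC = "systemic"

@dataclass
class IncidentReport:
    """One entry in a cross-border incident database (illustrative fields)."""
    system_description: str          # what kind of AI system was involved
    jurisdiction: str                # where the incident occurred
    severity: Severity
    summary: str
    reporter: Optional[str] = None   # None permits anonymous reporting
    corroborating_links: list = field(default_factory=list)

# Filed anonymously: the reporter field is simply omitted.
report = IncidentReport(
    system_description="medical triage chatbot",
    jurisdiction="XX",
    severity=Severity.HARM,
    summary="Chatbot gave unsafe dosage advice.",
)
```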

Technology Literacy: A middleware entity or entities should collaborate with educational institutions, leveraging freely accessible Creative Commons-licensed standards, materials, and platforms to promote AI literacy, in line with UNESCO’s guidance initiative. Such initiatives emphasize the importance of investing in education, empowering individuals to navigate the AI landscape effectively and tap into its potential for both personal and societal betterment. Educational curricula should be adapted accordingly, with incentives and requirements introduced for companies to explain how their technology operates, how the systems are constructed, by whom, using which resources, and for what purpose. This ensures that people, especially children, organically perceive these systems as advanced yet flawed computational tools and engage with them accordingly.

Technology Passports: A “technology passport” system could streamline assessments across jurisdictions, allowing stakeholders to scrutinize a technology’s outcomes through its journey. Such a passport would be an evolving document in which AI systems accumulate “stamps,” signifying that expert review had verified the system’s adherence to predetermined criteria. Stamps would be given at key moments in the system’s evolutionary journey and value chain, such as when it integrates new data, connects to other systems, is decommissioned, or is applied to a new purpose.
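
One way to picture the passport is as a simple append-only record of verified reviews. The sketch below is illustrative only; the field names and review events are assumptions based on the examples above:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Stamp:
    event: str       # e.g. "new training data", "new deployment context"
    reviewer: str    # expert body that performed the review
    criteria: str    # predetermined criteria the review verified
    issued: date

@dataclass
class TechnologyPassport:
    system_name: str
    stamps: list = field(default_factory=list)

    def add_stamp(self, stamp: Stamp) -> None:
        """Record a verified review at a key moment in the system's lifecycle."""
        self.stamps.append(stamp)

    def is_reviewed_for(self, event: str) -> bool:
        """A regulator could check whether a given lifecycle event was reviewed."""
        return any(s.event == event for s in self.stamps)

# Usage: stamp the passport when the system integrates new data.
passport = TechnologyPassport("example-model")
passport.add_stamp(
    Stamp("new training data", "accredited-lab", "data audit v1", date(2023, 10, 4))
)
```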

International Standards Repository: Establishing a comprehensive repository, potentially managed by a middleware entity within an academic environment, that centralizes references to existing AI standards, elucidates disparities, evaluates the necessity for updates in light of the advancing technology landscape, illustrates practical applications, and tackles ethical concerns and conflicts of interest. This repository would bolster transparency, guide decision-making, and promote collaboration in safeguarding against ethically misguided applications, all while preventing standards influenced by conflicts of interest from supplanting robust governance. It would allow interested parties to navigate available resources and determine the best approaches for use cases. Importantly, this repository would be made freely accessible, even though some of the standards referenced may still be proprietary to the issuing standards organizations.

Open-source Sandboxes: Creating frameworks for open-source sandboxes or testbeds to enable developers and implementing agents to ethically and transparently test, validate, verify, and provide technical oversight for AI capabilities, whether they are human-machine or machine-machine interactions. These frameworks for sandboxes will be designed with open-source and reproducible solution architectures in mind. This open-source approach not only holds significant potential for applications within the field of AI but also addresses the unprecedented convergence of technologies that AI enables. If successful, many of the proposed components within these open-source sandboxes could serve as models for governing yet-to-be-anticipated fields of scientific discovery and technological innovation.

Technological Tools: Ongoing development of software and hardware tools, including cryptographic methods and security protocols, forms the foundation for creating strong, secure, and reliable systems to protect against cyber threats, data mishandling, unscrupulous data mining, and related vulnerabilities. This development can also enhance process efficiency, potentially resulting in substantial time and cost savings. Furthermore, to cultivate a culture of security that extends beyond technical approaches, it is important to encourage the sharing of knowledge and expertise among a wide array of global stakeholders. This collaborative effort can foster continuous refinement of tools that are free, accessible, and adapt to the ever-changing technological landscape.
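
As a small, self-contained example of the kind of building block meant here, the sketch below computes a tamper-evident digest of a released artifact and a keyed attestation over it, using only Python’s standard library. A real deployment would more likely use public-key signatures issued by an accredited body; the HMAC here is only to keep the sketch runnable:

```python
import hashlib
import hmac

def artifact_digest(path: str) -> str:
    """SHA-256 digest of a released artifact (model weights, dataset, report)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

def attest(digest: str, key: bytes) -> str:
    """A reviewing body attests to a digest with a keyed MAC (illustrative)."""
    return hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()

def verify(digest: str, key: bytes, attestation: str) -> bool:
    """Anyone holding the key can confirm the artifact was not altered."""
    return hmac.compare_digest(attest(digest, key), attestation)
```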

Collaborative Policy Forums: A dedicated assembly, akin to the conferences of states parties often associated with international treaties, could function as a forum for reaching agreements on the prohibition or restriction of AI technologies and applications that pose undesirable risks to international stability and security. Such gatherings could strengthen technology and AI governance even in the absence of a formal treaty. This would prove particularly valuable where AI models and their downstream applications may conflict with established normative frameworks, arms control instruments, or principles related to politics, society, and human rights.

Declaration Portal: Establishing a "declaration portal," drawing inspiration from the arms control and non-proliferation regime, requiring state and corporate actors to disclose their AI developments, approaches, and deployments, would encourage transparency and adherence to globally agreed norms and standards, promote international technical cooperation and knowledge exchange, and serve as a confidence-building measure.

Global Certification: Developing global certification programs aimed at integrating universally agreed-upon ethical principles, including the handling of tension points and trade-offs, into AI processes. Ideally, these programs would be run in collaboration with credible professional technical organizations, leveraging their demonstrated expertise so that certification goes beyond theoretical concepts and provides practical ways of addressing ethical considerations already defined in existing normative instruments covering political, social, environmental, and human rights.

***

A global technology and AI governance framework needs to be both flexible and agile. Proactively fostering dialogue and confidence-building measures can make the implementation of a global framework more reliable and facilitate prompt responses to emerging issues. Governing a swiftly evolving technology thus requires a comprehensive and federated approach, covering each technology’s journey from inception to obsolescence.

While corporate input is valuable for shaping any technology-related framework, maintaining an open-source, community-driven, and independent approach is essential for transparency. Any framework should clearly define accountability, specifying what is being developed, by whom, under which guidance, by which standards, and for what intended purpose. In doing so, it can prompt companies to both showcase their dedication to transparent, safe, and accountable AI deployment and foster broader stakeholder collaboration.

There are still numerous unresolved questions concerning how to navigate different regulatory landscapes, mitigate geopolitical tensions, and balance various corporate interests. For instance, how will the proposed mechanisms be implemented and monitored? How can we safeguard the political autonomy, corporate independence, technical integrity, and reliability of the individuals, institutions, and middleware entities influencing the development and use of these technologies? When political and technical disagreements arise, who will resolve them? Is there a role, for example, for the International Court of Justice to address legal disputes under international law and provide guidance on AI-related questions with transnational implications? Or is there a need to establish a separate judicial settlement body to address potential claims of harmful uses with global implications?

If we can find answers to these and other questions that will undoubtedly materialize as our use of these technologies evolves, a globally accepted framework for AI governance could serve as a steppingstone for governing future scientific and technological advancements, extending beyond AI.

About this proposal

This proposal builds on a collaborative effort between the Artificial Intelligence & Equality Initiative (AIEI) at Carnegie Council for Ethics in International Affairs (CCEIA) and the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA). It benefits from the expertise and experience of the many brilliant people working in the field of AI and governance.

Established in 2020, the AIEI is a vibrant and results-driven community of practice dedicated to scrutinizing the impacts of AI on societal equality. With a committed goal to foster ethical integration and empowerment in AI advancements, it champions the development and deployment of AI technologies that are just, inclusive, and firmly rooted in pragmatic and responsible principles. The initiative brings together a globally representative Advisory Board, with members from over 20 nations across every continent. These advisors are luminaries in their respective fields, hailing from academia, governmental bodies, multinational institutions, NGOs, and the business sector, blending technological insights with geopolitical expertise.

ADDITIONAL RESOURCES:

AI Red Team/Hack the Future: Redefining Red Teaming, July 2023

Association for Computing Machinery Statement on Generative AI, September 2023

Council of Europe's Work in progress, July 2023

Credo.ai: The Hacker Mindset: 4 Lessons for AI from DEF CON 31, August 2023

IEEE Statement on Generative AI, June 2023

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this article are those of the author and do not necessarily reflect the position of Carnegie Council.
