Jul 5, 2023 Article

A Framework for the International Governance of AI


Promoting the benefits of innovative technologies requires addressing potential societal disruptions and ensuring public safety and security. The rapid deployment of generative artificial intelligence (AI) applications underscores the urgency of establishing robust governance mechanisms for effective ethical and legal oversight. This concept note proposes the immediate creation of a global AI observatory supported by cooperative consultative mechanisms to identify and disseminate best practices, standards, and tools for the comprehensive international governance of AI systems.


This initiative directs attention to practical ways to put in place a governance framework that builds on existing resources and can have an immediate effect. Such a framework would enable the constructive use of AI and related technologies while helping to prevent immature uses or misuses that cause societal disruption or pose threats to public safety and international stability.

From Principles to Practice

Numerous codes of conduct or lists of principles for the responsible use of AI already exist. Those issued by UNESCO and the OECD/G20 are the two most widely endorsed. In recent years, various institutions have been working to turn these principles into practice through domain-specific standards. A few States and regions have made proposals and even enacted constraints upon specific uses of AI. For example, the European Commission released a comprehensive legal framework (the EU AI Act) aiming to ensure safe, transparent, traceable, non-discriminatory, and environmentally sound AI systems overseen by humans. The Beijing Artificial Intelligence Principles were followed by new regulations that the Cyberspace Administration of China placed upon corporations and applications. Various initiatives at the federal and state level in the United States further emphasize the need for a legislative framework. The UN Secretary-General also recently proposed a High-Level Panel to consider IAEA-like oversight of AI.

Proposed Framework

Governance of AI is difficult because it impacts nearly every facet of modern life. Challenges range from interoperability to ensuring applications contribute to—and do not undermine—the realization of the SDGs. These challenges change throughout the lifecycle of a system and as technologies evolve.

A global governance framework must build upon the work of respected existing institutions and new initiatives fulfilling key tasks, such as monitoring, verification, and enforcement of compliance. Only a truly agile and flexible approach to governance can provide continuous oversight for evolving technologies that have broad applications, with differing timelines for realization and deployment, and a plethora of standards and practices with differing purposes.

Given political divergences around issues of technology policy and governance, creating a new global body will take time. Nevertheless, specific functions can and should be attended to immediately. For example, a global observatory for AI could be managed by an existing neutral intermediary capable of working in a distributed manner with other nonprofit technical bodies and agencies qualified in matters related to AI research and its impact on society.

To establish an effective international AI governance structure, five symbiotic components are necessary:

1. A neutral technical organization to sort through which legal frameworks, best practices, and standards have risen to the highest level of global acceptance. Ongoing reassessments will be necessary as the technologies and regulatory paradigms evolve.

2. A Global AI Observatory (GAIO) tasked with standardized reporting, at both general and domain-specific levels, on the characteristics, functions, and features of AI and related systems released and deployed. These efforts will enable assessment of AI systems’ compliance with agreed-upon standards. Reports should be updated in as close to real time as possible to facilitate the coordination of early responses before significant harm occurs. Existing observatories, such as the OECD’s, do not represent all countries and stakeholders, nor do they provide oversight, enable sufficient depth of analysis, or fulfill all the tasks proposed below.

  • GAIO would orchestrate global debate and cooperation by convening experts and an inclusive range of relevant stakeholders as needed.
  • GAIO would publish an annual report on the state of AI which analyzes key issues, patterns, standardization efforts, and investments that have arisen during the previous year, and the choices governments, elected leaders, and international organizations need to consider. This would involve strategic foresight and scenarios focused primarily on technologies likely to go live in the succeeding two to three years. These reports will encourage the broadest possible agreement on the purposes and applicable norms of AI platforms and specific systems.
  • GAIO would develop and continuously update four registries. Together, a registry of adverse incidents and a registry of new, emerging, and (where possible) anticipated applications will help governments and international regulators attend to potential harms before new systems are deployed.
  • The third registry will track the history of AI systems, including information on testing, verification, updates, and the experience of States that have deployed them. This will help the many countries that lack the resources to evaluate such systems. A fourth registry will maintain a global repository for data, code, and model provenance.

3. A normative governance capability with limited enforcement powers to promote compliance with global standards for the ethical and responsible use of AI and related technologies. This could involve creating a “technology passport” system to ease assessments across jurisdictions and regulatory landscapes. Support from existing international actors, such as the UN, would provide legitimacy and a mandate for this capability. It could be developed within the UN ecosystem through collaboration between the ITU, UNESCO, and the OHCHR, supported by global technical organizations such as IEEE.

4. A conformity assessment and process certification toolbox to promote responsible behavior and assist with confidence-building measures and transparency efforts. Such assessments should not be performed by the companies that develop AI systems or the tools used to assess those systems.

5. Ongoing development of technological tools (“regulation in a box”), whether embedded in software or hardware or both, is necessary for transparency, accountability, validation, and the auditing of safety protocols, and to address issues related to the preservation of human, social, and political rights in all digital goods—each of which is a critical element of confidence-building measures. Developed with other actors in the digital space, these tools should be continuously audited for erroneous activity and adapted by the scientific and technical community. They must be accessible to all parties at no cost. Assistance from the corporate community in providing and developing tools and insight on technical feasibility is essential, as will be their suggestions regarding norms. However, regulatory capture by those with the most to gain financially is unacceptable. Corporations should play no final role in setting norms, enforcing them, or deciding to whom the tools are made available.

We are fully aware that this skeleton framework raises countless questions as to how such governance mechanisms would be implemented and managed, how their neutrality and trustworthiness can be established and maintained, and how political and technical disagreements will be resolved and potentially harmful consequences remediated. However, it is offered to stimulate deeper reflection on what we have learned from promoting and governing existing technologies, what is needed, and next steps forward.

Emerging and Converging Technologies

This framework has significant potential applications beyond AI. If effective, many of the proposed components could serve as models for the governance of other fields of scientific discovery and technological innovation, including those not yet anticipated. While generative AI makes international governance urgent, many other existing, emerging, and anticipated technologies will also require oversight. These fields are amplifying each other’s development and converging in ways difficult to predict.

This proposal, developed by Carnegie Council for Ethics in International Affairs (CCEIA) in collaboration with IEEE, draws on ideas and concepts discussed in two June 2023 multi-disciplinary expert workshops organized by Carnegie Council’s AI & Equality Initiative and IEEE SA and hosted by UNESCO in Paris and ITU in Geneva. Participation in those workshops, however, does not imply endorsement of this framework or any specific ideas within the framework.

Workshop Participants (in alphabetical order):

Doaa Abu Elyounes, Phillippa Biggs, Karine Caunes, Raja Chatila, Sean Cleary, Nicolas Davis, Cristian de Francia, Meeri Haataja, Peggy Hicks, Konstantinos Karachalios, Anja Kaspersen, Gary Marcus, Doreen Bogdan-Martin, Preetam Maloor, Michael Møller, Corinne Momal-Vanian, Geoff Mulgan, Gabriela Ramos, Nanjira Sambuli, Reinhard Scholl, Clare Stark, Sofia Vallecorsa, Wendell Wallach, Frederic Werner.

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this article are those of the authors and do not necessarily reflect the position of Carnegie Council.
