Large Language Models (LLMs) & Diplomacy

Robust multilateral action relies on states' ability to intake, process, and generate vast amounts of written text: resolutions, memoranda, communiqués, chairpersons' reports, transcripts, and much more. Processing and generating such extensive corpora is a significant undertaking, and one that many parties find tedious and repetitive. It therefore stands to reason that some parties already feel, or soon will feel, a strong impulse to use large language models to support their work in this arena.

What will be the ethical trade-offs of this practice?


Recommendations for Managing the Trade-offs

The Carnegie Ethics Accelerator convened experts from diplomatic organizations, academia, and civil society to analyze these trade-offs in depth and develop a series of proposals for managing them.

How can diplomatic entities navigate the concerns about transparency, inequality, and skill fade that adopting LLMs entails?


Ethics Accelerator Convenings on AI in Diplomacy

Workshop 1

November 8, 2023
Carnegie Council
New York, NY

Workshop 2

March 19, 2024
Carnegie Council
New York, NY

Forecasting Scenarios from the Use of AI in Diplomacy

If diplomatic entities adopt LLMs for translation, research, and prediction tasks, what kinds of outcomes can we expect? What key technological, economic, societal, and cultural factors influence whether LLM use is effective?

Three Carnegie Ethics Accelerator participants evaluate the drivers and likelihood of six distinct AI scenarios.
