- Soft Law Approaches to AI Governance
In the latest Artificial Intelligence & Equality Initiative (AIEI) webinar, Carnegie Council Senior Fellow Anja Kaspersen speaks with Arizona State's Gary Marchant and Carlos Ignacio Gutierrez about their work on characterizing soft law programs for the governance of AI. Soft law is defined as any program that sets substantive expectations, but is not directly enforceable by government. What is the role of these programs in managing applications and methods of AI?
- AI & Equality Initiative: Think Before You Code
ThinkTech is an independent nonprofit association, started by and for students, young technologists, and professionals working to shape the impact of artificial intelligence and other digital technologies on individuals and society. Under the slogan "Think before you code," it serves as a platform to create guidance for the responsible development of technology. In this podcast, Senior Fellow Anja Kaspersen speaks with ThinkTech's Lukas D. Pöhler, Eva Charlotte Mayer, and Agnes Gierulski about their projects.
- Ethics, Equality, & AI in the Caribbean
Artificial intelligence (AI) will affect the socio-economic development of nations across the globe. Caribbean countries are particularly susceptible because they tend to have labor-intensive economies and are therefore at risk of significant economic and social disruption from automation and artificial intelligence. Three experts in this space--Cordel Green, Stacey Russell, and Erica Simmons--discuss these issues and much more.
- Africa, Artificial Intelligence, & Ethics
Artificial intelligence is impacting and will impact Africa as profoundly as any continent on Earth. While some African nations struggle with limited access to the Internet, others are leaping into the digital economy with Smart Cities. Access for all, digital literacy, and capacity-building remain challenges. How can AI and ethics improve prospects for all of Africa?
- A Human Rights Approach to U.S. Cybersecurity Strategy
"Cybersecurity policies need to take seriously the human rights of individual users by putting people's empowerment and well-being at their center," writes Stonehill College's Anwar Mhajne. As the Biden administration looks to defend human rights through its foreign policy, it's important to find a multilateral approach that encompasses the cyber realm and "puts individual rights at its center."
- The Doorstep: TikTok & the Normalization of Protests Around the World, with Dr. Tia C. M. Tyree
Howard University's Professor Tia Tyree joins "Doorstep" hosts Tatiana Serafin and Nick Gvosdev to discuss social media and youth activism in 2021 and beyond. The digital native generation is taking its online activism offline more swiftly and easily than ever with TikTok as the platform of choice. What responsibilities do tech giants and governments have to support this mobilization? How will global societies be reshaped as Gen Z takes power?
- AI & Equality Initiative: Algorithmic Bias & the Ethical Implications
In this AI & Equality Initiative podcast, Senior Fellow Anja Kaspersen speaks with three researchers working with the University of Melbourne's Centre for AI and Digital Ethics about bias in data and algorithms. How can these types of biases have adverse effects on health and employment? What are some legal and ethical tools that can be used to confront these challenges?
- Facing a Pandemic in the Dark: An Update on Cox's Bazar & COVID-19, with Razia Sultana
Three weeks ago, Razia Sultana, a Rohingya lawyer and activist, wrote an article for the Carnegie Council website about how over 1 million Rohingya refugees, living in unsanitary conditions and with no Internet access in makeshift camps in Cox's Bazar, Bangladesh, are dealing with the COVID-19 pandemic. In this Q&A, she gives an update on the situation.
- The Risks and Rewards of Big Data, Algorithms, and Machine Learning, with danah boyd
How do we analyze vast swaths of data and who decides what to collect? For example, big data may help us cure cancer, but the choice of data collected for police work or hiring may have built-in biases, explains danah boyd. "All the technology is trying to do is say, 'What can we find of good qualities in the past and try to amplify them in the future?' It's always trying to amplify the past. So when the past is flawed, it will amplify that."