- Responsible AI & the COVID-19 Pandemic, with Rumman Chowdhury
How can we use artificial intelligence ethically during a crisis? How do we balance privacy with security and public health? Rumman Chowdhury, global lead for responsible AI at Accenture, discusses surveillance, supply chains, pseudoscience, Netflix, and much more as the world adjusts to social distancing.
- The Future of Artificial Intelligence, with Stuart J. Russell
UC Berkeley's Professor Stuart J. Russell discusses the near- and far-future of artificial intelligence, including self-driving cars, killer robots, governance, and why he's worried that AI might destroy the world. How can scientists reconfigure AI systems so that humans will always be in control? How can we govern this emerging technology across borders? What can be done if autonomous weapons are deployed in 2020?
- Killer Robots, Ethics, & Governance, with Peter Asaro
Peter Asaro, co-founder of the International Committee for Robot Arms Control, has a simple solution for stopping the future proliferation of killer robots, or lethal autonomous weapons: "Ban them." What are the ethical and logistical risks of this technology? How would it change the nature of warfare? And with the U.S. and other nations currently developing killer robots, what is the state of governance?
- Behind AI Decision-Making, with Francesca Rossi
With artificial intelligence embedded into social media, credit card transactions, GPS, and much more, how can we train it to act in an ethical, fair, and unbiased manner? What are the theories and philosophies behind AI systems? IBM Research's Francesca Rossi discusses her work helping to ensure that the technology is "as beneficial as possible for the widest part of the population."
- Carnegie New Leaders Podcast: Designing an Ethical Algorithm, with Michael Kearns
How can algorithms be made more "ethical"? How can we design AI to protect against racial and gender biases when it comes to loan applications or policing? UPenn's Professor Michael Kearns, co-author of "The Ethical Algorithm," and Geoff Schaefer, who works on AI issues at Booz Allen Hamilton, discuss these issues and much more.
- AI in the Arctic: Future Opportunities & Ethical Concerns, with Fritz Allhoff
How can artificial intelligence improve food security, medicine, and infrastructure in Arctic communities? What are some logistical, ethical, and governance challenges? Western Michigan's Professor Fritz Allhoff details the future of technology in this extreme environment, which is being made more accessible by climate change. Plus, he shares his thoughts on some open philosophical questions surrounding AI.
- The Ethical Algorithm, with Michael Kearns
Over the course of a generation, algorithms have gone from mathematical abstractions to powerful mediators of daily life. They have made our lives more efficient, yet are increasingly encroaching on our basic rights. UPenn's Professor Michael Kearns shares some ideas on how to better embed human principles into machine code without halting the advance of data-driven scientific exploration.
- Making AI Work, Ethically & Responsibly, with Heather M. Roff
Heather M. Roff, senior research analyst at the Johns Hopkins Applied Physics Lab, thinks some researchers are having the wrong conversations about AI. Instead of wondering whether AI will ever be a moral agent, we should be focused on how to program the technology to be "morally safe, right, correct, justifiable." What are some practical uses for AI today? How can it be used responsibly in the military realm?
- AI & Human Rights: The Practical & Philosophical Dimensions, with Mathias Risse
Mathias Risse, director of Harvard Kennedy School's Carr Center for Human Rights Policy, discusses the many connections between artificial intelligence and human rights. From practical applications in the criminal justice system to unanswered philosophical questions about the nature of consciousness, how should we talk about the ethics of this ever-changing technology?
- Eyes in the Sky: The Secret Rise of Gorgon Stare and How It Will Watch Us All, with Arthur Holland Michel
Arthur Holland Michel, founder of the Center for the Study of the Drone, traces the development of the Pentagon's Gorgon Stare, one of the most powerful surveillance technologies ever created. When fused with big-data analysis techniques, this network can be used to watch everything simultaneously, and perhaps even predict attacks before they happen. Can we capitalize on its great promise while avoiding its potential perils?
- Global Ethics Weekly: AI Governance & Ethics, with Wendell Wallach
Wendell Wallach, consultant, ethicist, and scholar at the Yale Interdisciplinary Center for Bioethics, discusses some of the current issues in artificial intelligence (AI), including his push for international governance of the technology. He and host Alex Woodson also speak about Trump's recent executive order, universal basic income, and some of the ethical issues in China concerning AI, including the Social Credit System.
- Control and Responsible Innovation of Artificial Intelligence
Artificial Intelligence's potential for doing good and creating benefits is almost boundless, but equally there is a potential for doing great harm. This panel discusses the findings of a comprehensive three-year project at The Hastings Center, which encompassed safety procedures, engineering approaches, and legal and ethical oversight.
- Army of None: Autonomous Weapons and the Future of War, with Paul Scharre
"What happens when a predator drone has as much autonomy as a self-driving car, moving to something that is able to do all of the combat functions all by itself, that it can go out, find the enemy, and attack the enemy without asking for permission?" asks military and technology expert Paul Scharre. The technology's not there yet, but it will be very soon, raising a host of ethical, legal, military, and security challenges.
- The Risks and Rewards of Big Data, Algorithms, and Machine Learning, with danah boyd
How do we analyze vast swaths of data and who decides what to collect? For example, big data may help us cure cancer, but the choice of data collected for police work or hiring may have built-in biases, explains danah boyd. "All the technology is trying to do is say, 'What can we find of good qualities in the past and try to amplify them in the future?' It's always trying to amplify the past. So when the past is flawed, it will amplify that."
- The Driver in the Driverless Car with Vivek Wadhwa
What are the social and ethical implications of new technologies such as widespread automation and gene editing? These innovations are no longer in the realm of science fiction, says entrepreneur and technology writer Vivek Wadhwa. They are coming closer and closer. We need to educate people about them and then come together and have probing and honest discussions on what is good and what is bad.
- Megatech: Technology in 2050
In this insightful interview, "Economist" executive editor Daniel Franklin discusses driverless cars, gene-editing, artificial intelligence, and much more. Are we entering an "accelerando" stage of technological change? And what are the ethical implications?
- Homo Deus: A Brief History of Tomorrow
Soon, humankind may be able to replace natural selection with intelligent design and to create the first inorganic lifeforms, says Yuval Noah Harari. If so, this will be the greatest revolution since life began. But what are the dangers, and are they avoidable?
- Artificial Intelligence: What Everyone Needs to Know
We're asking the wrong questions about artificial intelligence, says AI expert Jerry Kaplan. Machines are not going to take over the world. They don't have emotions or creativity. They are just able to process large amounts of data and draw logical conclusions. These new technologies will bring tremendous advances, along with new ethical and practical issues.
- The Pros, Cons, and Ethical Dilemmas of Artificial Intelligence
From driverless cars to lethal autonomous weapons, artificial intelligence will soon confront societies with new and complex ethical challenges. What's more, by 2034, 47 percent of U.S. jobs, 69 percent of Chinese jobs, and 75 percent of Indian jobs could be done by machines. How should societies cope, and what role should global governance play?
- Interview with Robert Sparrow on Autonomous Weapon Systems and Respect in Warfare
Professor Sparrow works on ethical issues raised by new technologies. Here he discusses Autonomous Weapon Systems (AWS), often referred to as "killer robots." Unlike drones, which are remotely operated by humans, with AWS the robot itself determines who should live or die. What are the ethical arguments for and against these killing machines?