“AI Governance”: A Black Gen Z-er’s Two Cents on The Conversation

Jun 30, 2021

The emerging position is for governments to sit at the apex of the governance regime when it comes to AI. This is signaled by initiatives such as the recently announced Artificial Intelligence Act proposed by the European Commission (EC). However, a truly sustainable framework in the digital age requires an omni-stakeholder approach to questions of AI governance and policy. It is critically important that a voice be given to the most vulnerable in such a process. It is also critical that the interests of "big tech" be aligned with human interests, particularly in ensuring that AI does not undermine human rights. This would lead to a better society: one built on trust and receptive to innovation, and one that advances the welfare of both technology companies and the society they serve. Below are three recommendations for how this might be advanced.

Firstly, to align the interests of tech companies with the desire for a fair and just society, we must utilize the foundational principle of demand and supply. We saw an example of this in the aftermath of the murder of George Floyd, when 21st century "cancel culture" swept in demands that the human rights of black and brown people be valued, backed by the threat of severe economic punishment if those demands were not met. Major corporations awakened to how risky it was to be unresponsive to matters of social justice and ineffective public policy. That convergence of people's awakening to the deprivation of their liberties with the economic cost of injustice produced a rapid supply of corporate attention to governance through the lenses of accountability, responsive authority, and inclusivity. It is still early days, but there is a sense that these changes will become a permanent centerpiece of corporate strategy as well as of future social activism. It is therefore imperative that people recognize how their human rights, such as the right to privacy, are being threatened more than ever in the digital space, and that they respond with the same level of outrage they would direct at violations of privacy in the physical world, such as "peeping toms" and stalking.

Secondly, the users of digital technology must take responsibility for how their own biases feed the inequalities that they complain about. Our biases are reflected in our searches, the terms we use, and the clicks we make. These choices train algorithms to perceive what we value and to perpetuate our biases in ways that work against our own interests in everyday use of the technology.

Thirdly, and in furtherance of the other two points, digital literacy must be a priority in the formal education system and form a part of life-long learning. People must be capable of enjoying the benefits of AI-driven technologies while able to detect and mitigate harms such as algorithmic bias and manipulation. Digital literacy must also be seen as a pillar of inclusive AI governance. Otherwise, the involvement of uninformed and vulnerable people at the governance table would be mere tokenism, of no greater value than if they were not involved in governance at all.

Pia-Milan Green is a research fellow for Carnegie Council’s Artificial Intelligence and Equality Initiative.
