At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation are needed to ensure those interventions are effective? Currently, there are few answers to these questions, and the frameworks presently governing AI are not capable of ensuring accountability.
As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful
accountability and oversight – including basic safeguards of responsibility, liability, and due
process – is an increasingly urgent concern.
Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central
problem and addresses the following key issues:
- The growing accountability gap in AI, which favors those who create and deploy these technologies at the expense of those most affected
- The use of AI to maximize and amplify surveillance, especially in conjunction with facial and affect recognition, increasing the potential for centralized control and oppression
- Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures
- Unregulated and unmonitored forms of AI experimentation on human populations
- The limits of technological solutions to problems of fairness, bias, and discrimination
Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. We offer practical pathways informed by research so that policymakers, the public, and technologists can better
understand and mitigate risks. Given that the AI Now Institute's location and regional expertise are concentrated in the U.S., this report focuses primarily on the U.S. context, which is also where several of the world's largest AI companies are based.
The following report develops these themes in detail, reflecting on the latest academic research, and outlines seven strategies for moving forward:
- Expanding AI fairness research beyond a focus on mathematical parity and statistical fairness toward issues of justice
- Studying and tracking the full stack of infrastructure needed to create AI, including accounting for material supply chains
- Accounting for the many forms of labor required to create and maintain AI systems
- Committing to deeper interdisciplinarity in AI
- Analyzing race, gender, and power in AI
- Developing new policy interventions and strategic litigation
- Building coalitions between researchers, civil society, and organizers within the technology sector
These approaches are designed to positively recast the AI field and address the growing power imbalance that currently favors those who develop and profit from AI systems at the expense of the populations most likely to be harmed.
The report provides 10 practical recommendations that can help create accountability frameworks capable of governing AI:
1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.
2. Facial recognition and affect recognition need stringent regulation to protect the public interest.
3. The AI industry urgently needs new approaches to governance.
4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector.
5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers.
6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services.
7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces.
8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.”
9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues.
10. University AI programs should expand beyond computer science and engineering disciplines.