As we speak—and browse, and post photos, and move about—artificial intelligence is transforming the fabric of our lives. It is making life easier, better informed, healthier, more convenient. It also threatens to crimp our freedoms, worsen social disparities, and give inordinate powers to unseen forces.
Both AI’s virtues and risks have been on vivid display during this moment of global turmoil, forcing a deeper conversation around its responsible use and, more importantly, the rules and regulations needed to harness its power for good.
This is a vastly complex subject, with no easy conclusions. With no roadmap, however, we risk creating more problems instead of solving meaningful ones.
Last fall The Rockefeller Foundation convened a unique group of thinkers and doers at its Bellagio Center in Italy to weigh one of the great challenges of our time: How to harness the powers of machine learning for social good and minimize its harms. The resulting AI + 1 report includes diverse perspectives from top technologists, philosophers, economists, and artists at a critical moment during the current Covid-19 pandemic.
The report’s authors present a mix of skepticism and hope centered on three themes:
AI is more than a technology. It reflects the values embedded in its systems, suggesting that any ethical lapses simply mirror our own deficiencies. And yet there is hope: AI can also inspire us, augment us, and push us to think more deeply.
AI’s goals need to be society’s goals. To apply AI responsibly is to use it in support of human ends, rather than the market-driven, profit-making ones that dominate its use today.
We need a new rule-making system to guide AI’s responsible development. Self-regulation simply isn’t enough. Cross-sector oversight must start with transparency and access to meaningful information, as well as the ability to expose harm.
AI itself is a slippery force, hard to pin down and define, much less regulate. We describe it using imprecise metaphors and deepen our understanding of it through nuanced conversation. This collection of essays provokes the kind of thoughtful consideration that will help us wrestle with AI’s complexity, develop a common language, build bridges between sectors and communities, and create practical solutions. We hope you will join us.
Foreword | Rajiv Shah
An open invitation to shape our integrated future | Zia Khan
We have already let the genie out of the bottle | Tim O’Reilly
Humanity and AI: cooperation, conflict, co-evolution | Andrew Zolli
AI’s invisible hand: why democratic institutions need more access to information for accountability | Marietje Schaake
Taking care of business | Hilary Mason + Jake Porway
Making AI work for humans | Amir Baradaran + Katarzyna Szymielewicz + Richard Whitt
Data projects’ secret to success is not in the algorithm | Claudia Juech
Inclusive AI needs inclusive data standards | Tim Davies
Unlocking AI’s potential for good requires new roles and public–private partnership models | Stefaan Verhulst
Making sense of the unknown | Nils Gilman + Maya Indira Ganesh
Complete machine autonomy? It’s just a fantasy | Maya Indira Ganesh