AI EXPLAINED: Non-technical Guide for Policymakers
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last unless we learn how to avoid the risks.” – Stephen Hawking
Google’s AI bested doctors in detecting breast cancer. Microsoft has created a tool to find pedophiles online. An AI-powered app is helping China’s remote villages fight poverty. Leveraging AI to fight wildfires. This secretive company might end privacy as we know it. Top AI researchers fighting deepfakes.
Those are some of the many articles that have attracted our attention since the beginning of this year. Artificial Intelligence (AI) is no longer a futuristic vision, but something that is here today, transforming the way we work and live. There are already significant AI implementations in medicine, healthcare, pharmaceuticals, finance, transportation, security and criminal justice. AI in healthcare is already enabling better diagnosis and treatment predictions and freeing medical staff from administrative burdens, saving doctors and nurses 17–20 percent of their time and potentially creating $150 billion in annual savings in US healthcare. AI could contribute up to $15.7 trillion to the global economy, and boost local economies’ GDP by up to 26% by 2030. AI could contribute to all 17 UN Sustainable Development Goals, helping hundreds of millions of people in both developing and developed countries.
At the same time, AI created with good intentions can produce unintended negative consequences. Bias built into AI can yield discriminatory recruiting algorithms or chatbots that turn racist. While AI will create many jobs, many will be lost as well, which will require effective support systems to help workers transition to new positions. And although AI was initially seen as a potential remedy for the climate crisis, a growing number of academics, researchers and practitioners are raising alarms about the computing power and energy that AI-enabled technologies require, deepening the fear that the technology will worsen the climate crisis instead.
Current AI developments raise both fear and hope, and with them some important policy, regulatory and ethical questions:
How should we enable and promote data access and at the same time protect privacy?
How do we protect against biases in algorithms?
How do we introduce and practically implement fairness, accountability, and transparency in the creation and deployment of AI?
How do we strengthen trust in AI?
How do we avoid pitfalls while benefiting from AI promises?
All of these are important questions, to be answered in a timely and informed way, and many of the necessary discussions are already happening at the highest levels of government. Earlier this month the White House announced 10 principles that federal agencies should consider when formulating laws and rules for the use of AI, with the main message of limiting regulatory “overreach”. The announcement follows an agenda set by the White House in February last year, when President Trump issued an executive order launching the American AI Initiative and directing federal agencies to channel more of their current investment into AI-related applications. The OECD AI Principles, the first intergovernmental standards for AI, were also adopted last year, introducing principles and recommendations for governments. Governments around the world are shaping AI regulations to allow development and interaction that are both effective and safe. Stopping the fast pace of AI development would hardly be beneficial, and in many fields is not even possible.
The development of AI needs to be supported by an appropriate policy framework and regulatory oversight to enable fast and sustainable progress. Informed policymakers are the crucial element in creating this framework and facilitating the best possible AI implementation.
Why write a non-technical guide for policymakers? Our work has shown that the AI discourse can often be intimidating, the science hard to understand, and the ideas difficult to grasp. Yet it is undisputed that many areas of our lives are already being augmented and changed by AI, and that AI already has, and will continue to have, major implications for our society. This is why there is an urgent need to better comprehend this field and its implications. How AI is created and implemented has to be clear, explainable, and understandable.
Our aim was to create the Policymakers Guide to AI with a human-centered approach, explaining AI basics in an engaging way to policymakers and all interested individuals who lack expertise in this field. Our goal is to demystify what AI is and demonstrate how it is already altering our lives and the societies we live in. We present the state of the art of AI and its applications across different industries and fields, with examples from medicine, the automotive industry and education. We discuss its cross-cutting challenges and explain its transformative power. The Guide offers explanations and additional resources — videos, articles, papers, and tutorials — to help policymakers prepare for current and future AI developments and impacts. It serves as an open resource, welcoming all comments and suggestions that make it better, and inviting a continuing dialogue in explaining AI and keeping up with its developments.
“As AI develops and its applications grow, there is a great opportunity, but also a great responsibility to make sure it contributes to public good and benefit to all, with fairness, reliability, security, and where appropriate transparency and privacy are ensured.” Branka Panic, AI for Peace Founding Director
Bridging the gap between artificial intelligence and policymaking

AI is already changing how we live our lives, but we as humans also influence how AI is built and implemented. Policy is crucial in shaping this relationship and making it positive both for future AI developments and for the future of humanity.
No policy professionals yet have many years of experience in building this relationship. There are experts who have done versions of it — tech policy experts who shaped early internet governance, or experts who considered privacy issues raised by new technologies — but nobody has previously had the chance to tackle challenges like the ones posed by AI today. This field is new to all of us, bringing new kinds of questions and issues that evolve every single day, faster than anything we have seen before.
This calls for a new generation of AI policy experts able to tackle the problems of this new era. Although AI has become part of the public narrative and is recognized as an urgent topic, there is still a dire need for more people to think about it from different perspectives across sectors. There is an urgent need to develop professionals who are “bilingual” in both artificial intelligence and policy. Some important programs have been piloted to bridge this gap — Aspen Tech Policy Hub, TechCongress, and Open Philanthropy’s AI Policy Careers, approaches we warmly welcome and recommend — but there are still not enough AI experts who want to get involved in public policy.
This is why we are also approaching social scientists, engaging them as expert participants in the AI policy dialogue. They are crucial in helping us as a society understand the developments of AI, distinguish hype from real impact, and grasp what AI can and cannot do, so that public policy goals can be advanced accordingly.
Hopefully, this Guide will bring us closer to bridging the existing gap between AI and policymaking. It aims to foster a better understanding of AI and to help ensure that the outcomes driven by AI technologies work to the benefit, and not the detriment, of humanity.
This piece was supported by the CITRIS Policy Lab. The CITRIS Policy Lab, headquartered at CITRIS and the Banatao Institute at UC Berkeley, supports interdisciplinary research, education, and thought leadership to address core questions regarding the role of formal and informal regulation in promoting innovation and amplifying its positive effects on society.