AI for Peace

Peace in the Age of Artificial Intelligence — The Promise and Perils

Where did the last decade leave us, and what should we expect from the new one?

On January 17–18, 2020, the World Peace Conference took place in Ontario, California. The conference gathered hundreds of experts from the United States and around the world to discuss major issues, challenges, and solutions for creating sustainable peace in their regions, countries, communities and homes. From Nobel Peace Prize winners and nominees, survivors of Hiroshima and Nagasaki, and survivors of the Holocaust, to civil society leaders and change-makers, all of them shared their inspirational stories and called for action.

AI for Peace Founder and Executive Director, Branka Panic, was invited to speak at the conference on the topic “Peace in the Age of Artificial Intelligence”. This article summarizes her presentation on the main perils and promise of AI for creating lasting peace.


The beginning of a new decade

2020 started with the troubling news of the drone strike that killed Iranian General Qassem Soleimani, followed by Iranian counterattacks on US troops at Iraqi military bases. That morning, Twitter was burning hot with people wondering about WW3 and calling for #nowarwithiran. This was a serious reminder of the fragility of peace and of the constant need to be aware that another war is possible. This is how we started the new decade, and when we look back at how we left 2019, we have even more reasons to be worried.


The world in the 2010s — war, violence, forced migration, weather disasters

The world is experiencing another protracted crisis, with a plethora of challenges to human security and planetary health. The Global Peace Index 2019 shows that, although global peacefulness improved for the first time in five years, the world remains less peaceful than a decade ago. According to Save the Children, 420 million children, nearly one in five, live in conflict-affected areas, more than at any time in the previous 20 years. War, violence, persecution, famine and natural disasters drove worldwide forced displacement to another new high in 2018. UNHCR’s 2019 Global Trends Report shows that nearly 70.8 million people were displaced at the end of 2018, 2.3 million more than just a year earlier. Humanitarian needs in 2020 will be the highest in decades, with more than 167 million people needing urgent aid, and more than half of them needing emergency food assistance. Extreme weather disasters, droughts, wildfires and climate change are only increasing people’s vulnerability to humanitarian crises.


All of this tells us that peace is in urgent need of allies today. In searching for those allies, we look to artificial intelligence. We are analyzing how and where it is currently applied, and what its future impacts could be for ending war and creating lasting peace. As peacebuilders, we see AI as a promise, but we are also aware of its many perils.


AI in war and the military

The previous decade saw numerous applications of AI in war. AI is immediately associated with the debate about autonomous weapons, or so-called “killer robots”. Over the past decade, artificial intelligence advanced rapidly and made possible the development of fully autonomous weapons systems that can select, attack, kill and wound human targets without effective human control. The concept of fully autonomous weapons systems is highly controversial. While the US government considers their deployment a national imperative, the movement against them is growing bigger and stronger. In March 2019, UN Secretary-General António Guterres convened a meeting of AI experts to push for restrictions on the development of lethal autonomous weapons systems.


“Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law”, he said.


At the same time, AI and robotics researchers published an open letter advocating against autonomous weapons and an AI arms race, arguing that AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people. “Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits”, says the letter. Some of the strongest voices for banning the development and use of fully autonomous weapons come from human rights organizations, including Human Rights Watch and Amnesty International. They emphasize the implications of autonomous weapons for international law, particularly international human rights law and standards. This movement culminated in the Campaign to Stop Killer Robots, with signatories such as Stephen Hawking and Elon Musk, supported by 4,500 AI experts, 28 countries, more than 110 NGOs, 21 Nobel Peace Laureates and many more.


“Maybe robotics and AI are inevitable, but applying them to kill human beings on their own is not inevitable, unless you do nothing, and we refuse to do nothing,” said Jody Williams, 1997 Nobel Peace Prize Laureate and founding coordinator of the International Campaign to Ban Landmines.


Taking AI from war to peace

The fast spread of technology, the availability of “big data” and lower costs of processing and storage have allowed AI to find wider application not only in war but in many other fields, including peacebuilding. For some time now, the most widespread technology used in peacebuilding has been high-resolution satellite, drone and aerial imagery, which enables us to see whether a building has been damaged or destroyed and to act early for those in urgent need. “Social listening”, the monitoring of digital conversations, has become an important and valuable way to understand citizens’ voices, needs and grievances, and to help improve the functioning of and access to services. This series of tools has allowed peacebuilders to communicate with more people in more ways, collect better information and sustain relationships with local stakeholders and populations.
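To make “social listening” a little more concrete, here is a minimal sketch of the idea, assuming a simple keyword-based approach. The messages and keyword lexicon below are invented for illustration; a real system would ingest social media streams and use trained language models rather than keyword matching.

```python
# Minimal illustration of "social listening": scanning public messages for
# grievance-related keywords and tracking how often each topic appears.
# Messages and keywords here are hypothetical examples, not real data.
from collections import Counter

GRIEVANCE_KEYWORDS = {"water", "electricity", "checkpoint", "aid", "violence"}

messages = [
    "No water in the east district for three days now",
    "The aid convoy never reached our village",
    "Electricity is back, thank you to the repair crews",
    "Violence near the checkpoint again last night",
]

def keyword_hits(text: str) -> Counter:
    """Count grievance keywords mentioned in a single message."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return Counter(words & GRIEVANCE_KEYWORDS)

totals = Counter()
for msg in messages:
    totals.update(keyword_hits(msg))

# A spike in one topic (e.g. "water") can flag an emerging grievance
# that peacebuilders may want to investigate and respond to early.
for topic, count in totals.most_common():
    print(f"{topic}: {count} mention(s)")
```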


Going a step further, AI offers the possibility of analyzing collected data on political, social, institutional and economic variables, together with satellite and drone imagery, enabling us to recognize potential conflicts early on and making early warning and response efforts more efficient than ever before. Predicting where growing political, national, ethnic or religious tensions might erupt into open conflict is extremely difficult. But researchers believe that AI could help analyze vast amounts of information from potential conflict zones to predict where peacebuilding operations should be focused. Researchers from the Peace Research Institute Oslo (PRIO) piloted ViEWS, a political Violence Early-Warning System that produces monthly forecasts 36 months ahead, at the country and sub-national level, for state-based conflict, non-state conflict and one-sided violence in Africa. Both the data and the computational power are available today to create more accurate predictions and to scale them. The Embedded Networks Lab and Warwick University combine machine learning and traditional modeling to predict the size of a conflict, where it might take place and how soon, and to inform those who should take timely action, such as United Nations peacekeeping forces.
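As a rough illustration of how such an early-warning model can be framed, here is a minimal sketch of a classifier that estimates the probability of conflict in a coming month from country-level indicators. The synthetic data, feature names and model choice are illustrative assumptions, not the actual ViEWS or Warwick pipelines.

```python
# Minimal sketch of a conflict early-warning model: estimate the probability
# of conflict next month from country-level indicators. The synthetic data
# and feature set are illustrative assumptions, not a real pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy features per country-month: [recent_conflict_events, gdp_growth,
# displacement_rate]. Label: 1 if conflict broke out the following month.
X = rng.normal(size=(500, 3))
y = (X[:, 0] * 1.5 - X[:, 1] + rng.normal(size=500) > 1).astype(int)

model = LogisticRegression().fit(X, y)

# Score a hypothetical country-month: many recent events, a shrinking
# economy and rising displacement yield a risk probability that analysts
# can use to prioritize where peacebuilding attention is most needed.
new_obs = np.array([[2.0, -1.0, 1.5]])
risk = model.predict_proba(new_obs)[0, 1]
print(f"Estimated conflict risk next month: {risk:.0%}")
```

In practice, such systems combine many more indicators, careful validation against historical conflict data, and expert judgment before any forecast informs action.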


Ethical framework and innovative global governance

The same technologies can be used for good or, in the wrong hands, used as weapons. Any technology can be hacked, weaponized and used in ways we did not intend. This is why we need to make sure that, even with good intentions, we are not creating unintended consequences. We need a framework for the ethical and safe deployment of artificial intelligence in conflict and peacekeeping settings. We need to define what constitutes ethical use of AI, and we need to embed those ethical standards in innovative global governance systems based on international law. We need those ethical standards to become part of the mandatory curricula for AI engineers already during their education. We also need social scientists, activists and policymakers knowledgeable about AI, able to ask the right questions at the right time. That way we can proactively detect unintended uses and consequences of AI and act in time. We need to look for negative externalities, be aware of them, and design safeguards against them.


There is a lot to fear in the possible negative impacts and potential misuse of AI: drone swarms equipped with powerful weaponry that can serve as sophisticated weapons of mass destruction, facial recognition used for surveillance and violations of human rights, and AI-enabled deepfakes of videos, images and text. But there is also a lot to hope for. Studies have already demonstrated AI’s great potential to make a real difference across a range of social domains, but realizing that potential in peacebuilding requires decisive action by governments, international humanitarian and development actors, tech companies, nonprofits and citizens themselves.


We at AI for Peace believe that the coming years will be immensely influenced by AI development and deployment, and we are on a mission to make that impact peaceful and beneficial for all humanity. Our imperative is to make sure AI technology is used only for good and only for peace, and this is the result we aim to witness when we look back in 2030 to see how the previous decade unfolded.


Ontario, CA, 18 January 2020
