
I recently had the opportunity to participate in the 2024 Sudikoff Interdisciplinary Seminar on Genocide Prevention, which focused on the role of Generative AI (GAI) in mass atrocities and atrocity prevention. A white paper on this topic, Generative AI, Mass Atrocities, and Atrocity Prevention, authored by Brittan Heller, Lecturer at Stanford Law School, for the United States Holocaust Memorial Museum, highlights and explores the dual nature of GAI in this context. On the one hand, GAI technologies have the potential to enhance early warning systems, counter misinformation, and improve crisis response efforts, providing powerful tools for identifying and mitigating risks in volatile regions. On the other hand, these same technologies can also fuel the spread of misinformation, incite violence, and amplify exclusionary ideologies, all of which can destabilize societies and escalate conflict.
Striking a balance between the potential benefits and risks of GAI requires careful consideration as we integrate these tools into efforts to prevent mass atrocities, and the author skillfully highlighted these nuances and complexities. The organizers also encouraged participants to explore the topic further, including examining GAI cases beyond the Western context. In response to this call, I am writing this brief (or not so brief) reflection to shed light on GAI applications in crisis and conflict settings, as well as in regions with histories of discrimination, exclusion, and violence—environments that face significant risks for future atrocities.
Personal Reflections: A Commitment Born of Painful History
As someone deeply involved in conflict and post-conflict settings through my work, studies, and lived experience, I find this issue extremely important. My personal history also deeply connects me to these challenges. My grandfather was captured during World War II and sent to a concentration camp as part of a genocidal attempt against Slavic people in Europe, while the Jewish side of my family also endured the horrors of the Holocaust. This painful history drives my commitment to work on understanding more recent risks of mass atrocities, including those posed by the misuse of AI and other emerging technologies.
This year marks the 30th anniversary of the Rwandan genocide, a stark reminder of the deadly consequences of manipulative information. In 1994, a Hutu-led government, in collaboration with a government-aligned radio station, incited violence that led to the murder of over one million people. The lessons from this recent history of Rwanda remind us of the deep impact information manipulation can have in fueling atrocities. At that time, the information ecosystem was much simpler, with access to news largely limited to printed newspapers, radio, and TV broadcasts. Today, however, the information landscape is far more complex, with digital platforms and algorithms playing a pivotal role in shaping how information is disseminated and consumed (my co-author Paige Arthur and I write extensively about this in our recent book “AI for Peace,” in the chapter “AI and hate speech: the role of algorithms in inciting violence and fighting against it”).
Lessons from Early Deepfake Incidents
I first began writing about the dangers of AI-generated deepfakes back in 2019, long before the current generative AI revolution that has significantly amplified these risks. One of the earliest and most telling examples occurred in late 2018 in Gabon, during a period of political uncertainty. President Ali Bongo had been absent from public view due to illness, sparking widespread speculation about his health and ability to govern. In an attempt to quell these concerns and stabilize the situation, the government released a video of the President delivering a New Year's message. However, the video quickly raised suspicions of being a deepfake—a digitally manipulated video that made it appear as if the President was speaking, despite his prolonged absence. The Gabon video fueled instability, raising questions about the government’s credibility and contributing to an attempted military coup.
At the time, deepfake technology was a growing concern but not as sophisticated or widespread as today’s generative AI tools. With more advanced, accessible, and realistic generative AI now available, the threat has grown exponentially. These tools are cheaper, easier to use, and capable of creating high-quality content in real time, making it harder to distinguish fact from fiction and enabling disinformation campaigns to spread faster and further. The Gabon incident highlights the growing risk of AI-driven instability, underscoring the need for safeguards against its misuse.
The Dangers of Manipulated Media in Conflict – Ukraine and Gaza
One of the more recent examples occurred in 2022, during the early days of the war in Ukraine, when a deepfake video of President Volodymyr Zelenskyy spread on social media. The manipulated video showed Zelenskyy urging his soldiers to surrender to Russia, and it was posted on a Ukrainian news website by hackers before being quickly debunked and removed. Despite its swift removal, the incident highlighted how deepfakes can exploit chaos in conflict zones, spreading disinformation and undermining trust.
Since then, numerous examples of GAI use in the Russia-Ukraine war have emerged, including fabricated videos falsely portraying then-Ukrainian Commander-in-Chief General Valerii Zaluzhnyi accusing President Zelenskyy of killing his aide and warning about his own potential premature death (for more on these videos, see the Atlantic Council’s Digital Forensic Research Lab reporting). Another AI-enabled deception took the form of a leaked Zoom call of ex-president Poroshenko targeting foreign fighters in Ukraine. Yet another video, flagged as a deepfake by Ukraine’s Center for Countering Disinformation, was used in an attempt to oust the current government. With the rise of generative AI, such threats have become more powerful, endangering both security and public confidence. This war has proved to be a testing ground for GAI and other forms of AI-enabled misinformation and disinformation; space does not allow me to go into further examples, but you can find many more here, here and here.
Any research on mass atrocities must address the atrocities unfolding in Gaza, highlighted by the recent Amnesty International report. The report examines civilian killings, destruction of infrastructure, forced displacement, denial of humanitarian aid, and power supply restrictions, and concludes that Israel has committed genocide against Palestinians in Gaza. Tools like “Lavender” and “Gospel” played a significant role in the Gaza conflict, using AI to rapidly process vast amounts of data and identify targets for military strikes. “Lavender” focused on marking individuals, while “Gospel” targeted buildings believed to house militants. Another automated system, “Where’s Daddy,” tracked individuals and signaled when they entered their family homes, directing bombings to those locations. These AI systems have been linked to the deaths of thousands of Palestinians, including many women and children, especially in the initial weeks of the conflict, as their outputs guided Israeli airstrikes.
AI-generated deceptive images and videos had a relatively smaller but still significant impact in this war. Not only did these fabrications create confusion, but they also cast doubt on authentic war images, fueling suspicion at a time of heightened division. Activists on both sides have employed AI-generated disinformation to influence public opinion or create the impression of broader support for their cause. Examples include AI-generated billboards in Tel Aviv supporting the Israel Defense Forces (IDF), fake images shared by Israeli accounts of people cheering for the IDF, AI-created condemnations of Hamas by Israeli influencers, and AI images purporting to show Palestinian bombing victims in Gaza. Another AI-generated image showed a fake “tent city,” seemingly constructed for Israeli refugees, and was viewed 250,000 times. You can see more examples of GAI content in this war, documented by the Anti-Defamation League here and by Access Now here.
Lessons from 'Shallow Fakes' and Social Media Algorithms: From Ethiopia to Myanmar
As I already pointed out, the threat of spreading misinformation and disinformation existed long before the rise of generative AI, as demonstrated by earlier examples of so-called "shallow fakes." For instance, during the Tigray conflict in Ethiopia, a fabricated Twitter account posing as a UN diplomat, "George Bolton," used an AI-generated profile picture to share messages that supported Ethiopia’s leader while criticizing Tigray’s leadership. This synthetic persona aimed to create the illusion of credible international support. Similarly, a widely shared image of two women praying beside a destroyed church in Tigray was later found to be an old photo from Eritrea, disproving the claim of destruction linked to the conflict.
These examples underscore how misleading content, even without AI involvement, can manipulate narratives and sway public opinion in conflict zones. However, while tools like reverse image search can often debunk fabricated visuals, the rise of AI-generated content complicates this process. Current detection tools cannot reliably distinguish truth from fabrication (for more on this, see the fascinating work of Professor Hany Farid at UC Berkeley). This raises a crucial question: once such content spreads online, does it even matter whether it is AI-generated? And how does this contribute to the erosion of our information ecosystem, where people begin to doubt the authenticity of all content, including genuine atrocities shared online? The troubling effects of this widespread skepticism were visible during the recent floods in Spain, where real images were dismissed as fake because the scale of destruction appeared too unrealistic to believe. Another growing problem is the "liar's dividend," where public figures exploit the prevalence of fakes to dismiss authentic evidence and evade accountability.
Equally important is the role of social media algorithms and accountability of big tech companies, a topic that demands deeper exploration, particularly in light of Facebook's role in the Rohingya genocide in Myanmar. When Facebook entered Myanmar, it swiftly became the dominant platform in the country’s digital landscape, effectively serving as the internet for a vast majority of its connected population. However, the platform's rapid growth outpaced its ability to moderate content effectively in a culturally and linguistically nuanced way. This lack of oversight allowed Facebook feeds to be flooded with hate speech and inflammatory content, much of it deliberately spread by the military to incite division and target the Rohingya Muslim minority. The unchecked spread of disinformation and calls for violence on the platform became a catalyst for the escalation of tensions, culminating in the 2017 genocide, where Facebook’s role in amplifying hate was widely criticized as a significant contributing factor. This case underscores how social media platforms can be misused to amplify hate speech and serves as a cautionary tale for emerging technologies and companies behind the GAI tools. These lessons should serve as a minimum guide to more careful development and deployment of new AI tools to prevent similar misuse.
In conclusion, the ongoing and escalating use of generative AI tools in current conflict zones offers a sobering reminder of the dual-edged nature of these technologies. From fabricated videos that incite fear and instability to algorithm-driven systems that escalate violence, these examples—unfortunately unfolding in real time—underscore the urgent need for systematic analysis. By mapping these instances comprehensively and selecting key case studies for deeper examination, we can begin to understand the true impact of these tools. Only with this knowledge can we shape effective policy recommendations and bolster prevention efforts to mitigate the risks generative AI poses in fragile and conflict-prone contexts.
………………
Some additional questions and suggestions – the role of tech companies and ethics
To fully grasp the potential risks of GAI, it is crucial to first examine the historical use of algorithms in conflicts and their role in atrocities before the introduction of GAI. This context provides valuable insights into how GAI differs from earlier algorithmic tools, especially given the rapid advancements over the past two years since tools like ChatGPT or Midjourney became widely accessible.
Other guidelines are already being developed. The important thing is to understand that all these tools have different consequences in fragile contexts, which is why we need careful application of “do no harm” and “conflict sensitivity” approaches, adding extra layers of protection in crisis and fragile settings, as well as in all settings already showing risk factors for mass atrocities (see more about these ethical approaches to AI in our AI for Peace Ethics workstream).
(The workshop offered several great resources; if you are interested in exploring this topic further, don’t miss them:
“Audiovisual Generative AI and Conflict Resolution: Trends, Threats and Mitigation Strategies”, WITNESS, September 2024.
“Making Tech Work for Global Criminal Justice”, Just Security, December 2024.
“Early Warning in Atrocity Scenarios Must Account for the Effects of Technology, Good or Bad”, Just Security, November 2024.
B-Tech Project, OHCHR, Business and Human Rights, accessed December 2024.)