
Advancing AI for Effective Crisis Early Warning Early Action – Insights from PRIO AI Days

Writer: AI for Peace

The year 2024 was marked by unprecedented turmoil, with new conflicts erupting, existing crises intensifying, and climate-driven disasters multiplying. Political violence events increased by 25% compared to 2023, and over the past five years, conflict levels have nearly doubled. By mid-year, nearly 123 million people had been forcibly displaced by conflict and persecution. Crises have left millions, including vulnerable children, in urgent need of support, a stark reminder of the devastating impact of crises in places like Palestine, Ukraine, the Democratic Republic of the Congo, Myanmar, Sudan, Syria, Haiti, Yemen, Afghanistan, and more. In total, nearly 300 million people required humanitarian assistance and protection in 2024 due to conflicts, climate emergencies, and other factors.


This escalating need underscores the critical importance of systems that can anticipate crises, potentially save lives, and reduce suffering. Early Warning and Early Action (EWEA) systems have proven essential for addressing the interconnected challenges of natural disasters, armed conflicts, and climate change. During a session at the PRIO AI Days 2024, experts from organizations such as VIEWS, UNHCR, the Danish Refugee Council, the OSCE, the U.S. State Department, and ACLED, to name only a few, explored the promise and pitfalls of using AI in EWEA. Their discussions provided valuable insights into how these systems can be improved, while navigating the technological, political, and ethical constraints inherent in their implementation.


The Promise of AI in Early Warning

Artificial intelligence is transforming how we predict and respond to crises, leveraging machine learning to analyze complex data and provide actionable insights. Projects and tools like the VIEWS forecast of expected conflict fatalities, the ACLED Conflict Alert System, UNHCR’s risk models for displacement, the U.S. State Department’s Instability Monitoring and Analysis Platform (IMAP), the Holocaust Memorial Museum's Early Warning Project, and Conflict Forecast for evaluating worldwide armed conflict risk exemplify this potential. Similarly, platforms like the Danish Refugee Council’s DEEP connect 8,000 humanitarians with the tools and data they need for timely and effective crisis responses.

 

I encourage you to explore these tools further, but this introduction focuses on three projects and organizations that have been at the forefront of this field for years. Future editions of this newsletter will cover additional tools and organizations; in this edition, I also share insights from broader discussions with experts and stakeholders at PRIO AI Days. This is part of an ongoing effort to ‘demystify’ data science and make these tools more accessible to policymakers. If you are already familiar with these three examples, feel free to skip ahead to ‘Challenges in Deployment and Toward an Ideal Model,’ where I comment on key takeaways from discussions with PRIO colleagues and other partners who participated in the event.

 

 

Introducing Three Key Projects

Simon P. von der Maase, Senior Researcher at PRIO, presented VIEWS (Violence & Impacts Early-Warning System), a forecasting tool that predicts conflict-related fatalities up to 36 months into the future. Data on violence and protests is sourced from UCDP and ACLED, along with information on population, socio-economic factors, migration, and climate—though data quality varies across these categories. VIEWS leverages multiple ML models, each with distinct strengths, weaknesses, and features, enabling his team to examine which variables yield the best predictions. These models include shadow models (used for testing rather than predictions) and baseline models (which assume no change from the previous month). All models the team deems useful go into production and are incorporated into ensemble predictions. After validation, these forecasts are made available via an API and a user-friendly dashboard on the VIEWS website, allowing users to access predictions directly.

“Jointly led by Uppsala University and Peace Research Institute Oslo, the VIEWS consortium unites a suite of state-of-the-art research projects dedicated to exploring novel methodologies to forecast violent conflicts and their impacts on society and human development.” (viewsforecasting.org)
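To make the roles of these model types concrete, here is a minimal, hypothetical Python sketch of a "no change" baseline and a weighted ensemble of member forecasts. The function names, weights, and numbers are illustrative assumptions for this newsletter, not VIEWS's actual implementation.

```python
# Minimal sketch of two model roles described above: a "no change" baseline
# and an ensemble averaging member forecasts. Illustrative only, not VIEWS code.
import numpy as np

def baseline_no_change(last_month: np.ndarray, horizon: int = 36) -> np.ndarray:
    """Baseline model: assume each location's fatalities stay at last month's level."""
    return np.tile(last_month[:, None], (1, horizon))

def ensemble_forecast(members: list, weights: list) -> np.ndarray:
    """Weighted average of the member models deemed useful enough for production."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.stack(members), axes=1)

# Hypothetical example: three locations, two member models.
last_month = np.array([12.0, 0.0, 340.0])    # fatalities per location last month
members = [
    baseline_no_change(last_month),          # "no change" baseline
    baseline_no_change(last_month) * 0.9,    # stand-in for a learned model
]
forecast = ensemble_forecast(members, weights=[0.3, 0.7])
print(forecast.shape)  # (3, 36): locations x months ahead
```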


Katayoun Kishi, Head of Data Science at ACLED, introduced the ACLED Conflict Alert System (CAST), which provides monthly forecasts of organized armed conflict six months into the future, covering battles, explosions, remote violence, and violence targeting civilians. A general dashboard on the website, linked to an API, offers users easy access to these forecasts. The system emphasizes transparency by explaining its predictions—starting with baseline estimates that use only historical data and showing how various predictors contribute to final outcomes. A separate tab on the dashboard focuses on accuracy, comparing predictions with actual events and highlighting discrepancies in the number of events. While the model's global accuracy can be challenging to evaluate due to varying performance across countries, predictions are typically within 5% of actual events. Efforts are underway to further prioritize accuracy, adding a qualitative dimension to the system. ACLED experts worldwide are creating their own forecasts, which are tested against the model's predictions (in cases where the model underperforms, researchers often achieve greater accuracy). These insights contribute to a larger initiative examining "conflict signatures," aiming to identify unique patterns in conflicts—such as the actors involved, geographic spread, and evolution over time—to better predict future developments.

 

“The ACLED Conflict Alert System (CAST) is a conflict forecasting tool that predicts political violence events up to six months in the future for every country in the world. Updated predictions are released each month for the following six months, alongside accuracy metrics for previous forecasts.” (acleddata.com/conflict-alert-system/)
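To illustrate the kind of accuracy check described above, here is a small, hypothetical Python sketch that compares one month's forecasts with observed event counts and flags countries whose error exceeds the 5% band mentioned earlier. The country names and figures are invented; CAST's real accuracy metrics live on the dashboard linked above.

```python
# Hypothetical sketch of a forecast-vs-actual accuracy check, in the spirit of
# the CAST accuracy tab. Countries and counts are invented for illustration.
predicted = {"CountryA": 120, "CountryB": 45, "CountryC": 8}
observed  = {"CountryA": 114, "CountryB": 47, "CountryC": 20}

for country, pred in predicted.items():
    obs = observed[country]
    diff = pred - obs                       # positive = model overestimated
    pct = abs(diff) / max(obs, 1) * 100     # percent error vs. actual events
    flag = "OK" if pct <= 5 else "REVIEW"   # the 5% band mentioned above
    print(f"{country}: predicted {pred}, observed {obs}, error {pct:.1f}% [{flag}]")
```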

 

Geraldine Henningsen, Data Scientist at UNHCR, introduced the United Nations High Commissioner for Refugees (UNHCR) as an organization in the early stages of developing comprehensive Early Warning and Early Action (EWEA) systems, though it has already implemented tools for anticipatory action. Significant efforts are focused on crisis preparedness and timely response, with the Department of Emergency and Disaster Preparedness continuously monitoring indicators to detect early warning signals. Since 2021, Henningsen and her team have been advancing AI-based systems, building on prior work in population statistics. They have developed two key tools: a monthly risk index estimating displacement risk at the national level and a subnational model supported by the CRAF’d initiative, which analyzes displacement with a focus on climate change. While these projects represent critical progress, Henningsen emphasized that they do not yet form a fully integrated EWEA system. Such a system, supported by Luxembourg’s government, is under development to create a comprehensive framework incorporating risk flagging within decision-making processes.

These systems can offer early warnings that allow policymakers to act before a disaster spirals out of control, mitigating harm and potentially saving lives. However, the effectiveness of these tools depends on their integration into decision-making frameworks and their ability to balance predictive accuracy with practical usability.



Challenges in Deployment and Toward an Ideal Model

Despite its potential, AI in Early Warning and Early Action (EWEA) faces significant challenges, with transparency being one of the major hurdles. Many machine learning models function as "black boxes," generating predictions without revealing the underlying factors driving those results. This lack of clarity can hinder trust and actionable decision-making by policymakers.


The ACLED Conflict Alert System seeks to address this issue by prioritizing explainability alongside transparency. The system breaks down its predictions, starting with baseline estimates that exclude additional predictors and rely solely on historical data (see the earlier paragraph where CAST is introduced). It then illustrates how various predictors contribute to the final outcomes. To enhance accountability, ACLED includes a dedicated accuracy tab, comparing predictions with actual outcomes and highlighting any discrepancies, such as the number of events the model over- or underestimated. This approach aims to foster trust by making the model's processes and performance more comprehensible and verifiable.
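The additive breakdown described here can be pictured with a short, hypothetical sketch: start from a history-only baseline and show how each predictor shifts the final number. The predictors and magnitudes below are invented for illustration and are not ACLED's actual feature set.

```python
# Hypothetical sketch of an additive, explainable forecast breakdown:
# history-only baseline plus per-predictor contributions. Numbers are invented.
baseline = 100.0  # expected events from historical data alone

contributions = {
    "recent_protest_activity": +18.0,
    "election_proximity":      +7.0,
    "ceasefire_in_effect":     -12.0,
}

forecast = baseline + sum(contributions.values())
print(f"baseline: {baseline:.0f} events")
for predictor, delta in contributions.items():
    print(f"  {predictor:<24} {delta:+.0f}")
print(f"final forecast: {forecast:.0f} events")
```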

“Years are going to pass, these models are going to be used and these models are going to affect our models, because policy makers are acting on our predictions, so we have to start worrying about that and (start) looking forward”, Alexandra Malaga, Data Scientist, EconAI and Conflict Forecast

 

Alexandra Malaga, Data Scientist at EconAI and Conflict Forecast, highlighted a key challenge in deploying predictive models: ensuring their accuracy is maintained as policymakers act on their forecasts. When decisions are made based on these predictions, interventions can reduce the predicted risks. However, if the effects of these interventions are not incorporated into the models, the models may mistakenly "learn" that risks were overestimated, eventually rendering them obsolete. To address this, developing a policy intervention dataset is essential. Such a dataset would document how specific actions influence outcomes, enabling models to adjust predictions based on varying policy scenarios. This approach ensures that models remain adaptive and effective, even as interventions reshape the dynamics of risk.
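To make the idea of a policy intervention dataset more tangible, here is a hypothetical sketch of what such records might look like. The schema, field names, and values are assumptions for illustration, not an existing standard; the point is that recording the intervention alongside the forecast lets future retraining distinguish an averted risk from an overestimated one.

```python
# Hypothetical sketch of a policy-intervention dataset: record which action was
# taken in response to a forecast, so retraining can tell "risk was averted"
# apart from "risk was overestimated". Schema and values are illustrative.
import csv
from dataclasses import dataclass, asdict

@dataclass
class InterventionRecord:
    country: str
    month: str              # forecast month, e.g. "2024-07"
    predicted_risk: float   # model output that decision-makers saw
    intervention: str       # e.g. "mediation", "aid_prepositioning", "none"
    observed_outcome: float # realized conflict intensity afterwards

records = [
    InterventionRecord("CountryA", "2024-07", 0.82, "mediation", 0.31),
    InterventionRecord("CountryB", "2024-07", 0.78, "none", 0.74),
]

# Persist alongside training data; at retraining time, the intervention field
# becomes a feature (or a filter) instead of silently biasing labels downward.
with open("interventions.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(records[0]).keys()))
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
```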

 

Another challenge lies in balancing prediction accuracy with ethical considerations. Geraldine Henningsen of UNHCR emphasized the need for careful deployment of even the most advanced models to avoid unintended consequences. For instance, publicly sharing predictions of mass displacement could prompt neighboring countries to preemptively close their borders, worsening the crisis for vulnerable populations. This concern is a key reason why UNHCR has chosen not to publish its data publicly. EconAI faces a similar challenge, highlighting the issue of "self-fulfilling prophecies," where the predictions themselves could potentially influence outcomes. One possible solution is to share data and predictions exclusively with trusted actors who have the ability to intervene and prevent conflict, thus mitigating the risk of reinforcing negative predictions.

 

Drawing from the insights of various speakers, some essential elements emerge for designing an ideal EWEA model. Such a model would be transparent, adaptable, and tailored to specific contexts. It should provide clear insights into the drivers of crises, striking a balance between the sophistication of machine learning and the interpretability of traditional statistical methods. Importantly, it must address the varied demands of different scenarios, whether responding to acute emergencies like earthquakes or managing protracted crises involving displacement and conflict.

 

At its core, an effective system must be people-centered. It should adhere to the “do no harm” principle, ensuring predictive technologies do not inadvertently harm vulnerable populations. Furthermore, it must navigate the complexities of political contexts, equipping policymakers with actionable tools while avoiding the risk of exacerbating tensions or triggering unintended consequences.

 


Bridging the Gap between Data Science and Decision-Making

Now, even with a "perfect model"—a concept interpreted differently across the organizations represented at this event—a significant challenge remains in bridging the gap between data scientists and decision-makers. As one speaker aptly noted, explaining to policymakers that "this variable is the most important for predicting genocide" is not the same as saying "this variable contributed the most signal to the model's predictions." The distinction between correlation and causation is still a lesson that must be emphasized, underscoring the need for effective communication and understanding between technical experts and policymakers.

A frequently asked question for data scientists is how a specific model fits into a decision-making workflow—a question that ultimately only decision-makers can answer. Addressing this challenge requires more targeted conversations with end-users to ensure the models are effectively integrated into their processes. However, these discussions must be conducted in a way that is efficient and minimally disruptive to existing workflows.


It is also crucial to recognize the wide range of current and potential stakeholders for predictive tools, each with diverse needs. Users include experts, journalists, small non-profits, academics, governments, and international organizations, meaning no single tool can address all their requirements. For instance, while the U.S. State Department is highly advanced and may not need external models, it could benefit from external data to enhance its own systems. On the other hand, smaller NGOs or journalists may need quick, straightforward answers about conflicts or the probability of conflict in specific countries or localities. Understanding and addressing these varying needs is essential to ensure predictive tools deliver value across different contexts and different users.



The good news, as noted by Ashleigh Landau, Research Associate at the Early Warning Project, Simon-Skjodt Center for the Prevention of Genocide, is that there is significant engagement from policymakers eager to understand the details behind predictions—such as where the data comes from, how the model works, and what factors drive predictions or risks in specific countries. However, as Gray Barrett from the U.S. State Department emphasized, AI products ultimately must also be accessible to non-experts: “I don’t need people to have master’s degrees in data science to use our products—nor should they.” Bridging this “cultural divide” is crucial for ensuring that AI tools are not only understood but also trusted and acted upon. Yet, this divide remains one of the toughest obstacles to overcome.



A Call to Action

“If you look at UCDP and ACLED, we have better data on where people are dying than where people are…the population data is way worse and it’s crazy to know more where people die than where they are,” Simon P. von der Maase

A key call to action that emerged was the significant gap in funding. Transitioning from research papers or pilot projects to full-scale infrastructure requires substantial investment, and donors need to recognize this. Research funding often supports the development of new science but typically does not cover the costs of building, maintaining, or providing the computational resources necessary for these projects. Furthermore, there is a notable gap in certain types of data sets, and experts recommend building more accessible data sets that can be integrated into existing models, rather than investing in new models. The Complex Risk Analytics Fund (CRAF’d) is already a great step in that direction, as the first multi-stakeholder initiative “to finance, connect and reimagine data to save lives”. As a multi-donor initiative with pooled funding, it also helps reduce bias and mitigates the potential influence of any single donor government on the direction or outcomes of a supported project or organization.

 

“CRAF’d is driven by the belief that data, analytics, and AI can help global partners to better anticipate, prevent, and respond to complex risks. Key contributors to CRAF’d include Germany, the United States of America, the United Kingdom, the Netherlands, and the European Commission.” (crafd.io)

 

Ultimately, the stakes are too high to overlook the potential of AI in early warning and early action. However, as emphasized by experts at PRIO AI Days, technology alone cannot address these challenges. The success of EWEA systems relies on collaboration—among data scientists, policymakers, peacebuilders and humanitarian actors—and a shared commitment to ethical and effective action. While the path forward is not without its obstacles, the tools and knowledge are already in place, as all of the exceptional experts demonstrated at this event. By prioritizing transparency, accuracy, trust, and explainability, we can leverage the power of data and machine learning to build a safer, more resilient world.

…….

 

To learn more about PRIO AI Days and to access event recordings, visit the PRIO website: https://www.prio.org/events/series/1

 

 

 

 
 


