Digital tools and AI are part of modern peacemaking practice, yet the language used is not always clear. This guide defines core concepts used at CMI and across the field. It explains what these terms mean in practice, with real examples of how technology supports dialogue and analysis.
What is “digital peacemaking”?
Digital peacemaking is the strategic use of digital tools to support peace processes, dialogue, and mediation, while actively working to reduce the risks digital tools can create, such as misinformation, privacy threats, or the spread of harmful content.
What does “AI for peacemaking” mean?
AI for peacemaking means that peace practitioners use artificial intelligence in a range of ways that support peace efforts. AI can help peacemaking actors understand people’s needs, spot rising tensions early, and prepare better conversations or options, without replacing human judgement.
CMI has experience in responsibly using AI to enhance conflict analysis, broaden inclusion, strengthen participation, and improve decision-making.
Read more:
Principles for the responsible use of artificial intelligence in peacemaking
In 2026, CMI articulated its commitments for the ethical and effective use and development of AI in peacemaking. The 10 principles are intended to provide the guardrails needed to responsibly develop, pilot, and scale AI applications that serve peace.
How can inclusion be improved with digital technology?
By using digital technologies, more people can take part in peace efforts, even if they have low literacy, speak different languages, or cannot travel safely. In CMI’s work in Yemen, for instance, participants were able to send voice notes on WhatsApp in Yemeni Arabic, which OpenAI transcription and translation helped turn into usable input for analysis.
In CMI’s South Caucasus work, digital consultations were used to gather input from Armenian and Azerbaijani stakeholders ahead of in-person dialogue sessions. This combines the accessibility of remote participation with the depth of face-to-face exchange.
Digital technology can therefore help peacemakers bypass physical and linguistic barriers, but challenges exist, such as unequal access to devices and connectivity, limited digital literacy, and safety or surveillance risks. To make digital participation genuinely inclusive, tools should be paired with practical measures that reduce gaps and protect participants.
What are digital dialogues?
Digital dialogues are live, guided online conversations where many people answer questions in real time, react to what others say, and add their own ideas. The results are summarised to show where people agree and where they don’t.
For example, the United Nations has used Remesh AI to run live dialogues with up to about 1,000 participants at a time in Yemen, Libya, and Sudan (including women’s groups and youth). CMI, meanwhile, has used Remesh AI in Sudan for dialogues with women’s groups and youth.
What are digital consultations?
Digital consultations are structured processes in which participants share their views remotely – often via messaging platforms like WhatsApp – in response to guided questions over a set period. Unlike live digital dialogues, consultations are typically asynchronous: participants contribute at their own pace, which can increase accessibility for those in insecure or hard-to-reach contexts. AI tools can then be used to analyse and synthesise the responses. CMI has used this approach in Yemen, the South Caucasus, Guinea-Bissau, and Sudan, gathering input from participants who might otherwise be excluded by geography, security constraints, or language barriers.
What is “PeaceTech”?
While there is no single definition, PeaceTech broadly refers to technologies used to support peacebuilding, peacemaking, peacekeeping, and peace-enforcement efforts. It encompasses a wide range of tools, from simple communication technologies such as radios and social media to more complex systems like satellite imagery, AI, and early warning platforms. The aim of PeaceTech is to advance different forms of peace across local, national, and international contexts. Digital peacemaking can be seen as a subset of PeaceTech, focused specifically on supporting peace processes, dialogue, and mediation through digital tools and methodologies.
What does “AI sensemaking” mean?
AI sensemaking means turning a large volume of qualitative data into usable insights for mediators and peace practitioners. This is sometimes referred to as ‘processing multilingual qualitative data at scale’, but what it really means is using AI tools to analyse highly detailed and considered responses from participants.
For example, in CMI’s South Caucasus work, the Talk to the City AI tool clustered responses by key themes and highlighted which ideas had the strongest support, so mediators could quickly see what mattered most. Similarly, in Yemen, AI sensemaking helped identify key topics such as political visions, obstacles to implementation, and opportunities for youth involvement.
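As a rough illustration of what such sensemaking involves, the core step can be sketched in a few lines. This is a deliberately minimal sketch, not the Talk to the City pipeline: the theme keywords and responses below are invented, and real tools use language models rather than keyword matching.

```python
from collections import defaultdict

# Hypothetical theme keywords a practitioner might define
# (or that an AI model might propose from the data itself).
THEMES = {
    "political visions": {"vision", "state", "constitution"},
    "obstacles": {"obstacle", "corruption", "funding"},
    "youth involvement": {"youth", "young", "students"},
}

def cluster_by_theme(responses):
    """Assign each response to every theme whose keywords it mentions."""
    clusters = defaultdict(list)
    for text in responses:
        words = set(text.lower().split())
        for theme, keywords in THEMES.items():
            if words & keywords:
                clusters[theme].append(text)
    return clusters

responses = [
    "Youth need a real say in the constitution",
    "Corruption is the main obstacle to any agreement",
    "Young people want jobs and political participation",
]
clusters = cluster_by_theme(responses)
# Rank themes by support, strongest first, so mediators
# can quickly see what mattered most to participants.
ranking = sorted(clusters, key=lambda t: len(clusters[t]), reverse=True)
```

In practice the clustering is done with language models that understand meaning rather than surface keywords, but the output is the same in kind: responses grouped by theme, with an indication of how much support each theme has.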
Read more:
Case study: how CMI used AI to amplify youth voices in Yemen
CMI harnessed new technology, including artificial intelligence, to help in the collection and analysis of youth opinions in Yemen. The results are a blueprint for areas of the world where traditional engagement is restricted or impossible.
How can AI help us understand conflict by analysing media?
AI can quickly process large volumes of news and social media to spot which stories are spreading, who is spreading them, and when views are becoming more angry or divisive. This can help relevant actors or authorities act before conflict escalates.
For example, Build Up’s Phoenix tool maps online conversations – sometimes known as “social media listening” – so peacemakers can see what is driving tensions. CMI is building an AI tool to track emerging narratives across both traditional and social media.
What does “listening at scale” mean?
“Listening at scale” is a term sometimes used in the peacebuilding field to describe efforts to broaden participation by gathering input from hundreds or thousands of people and identifying shared concerns, areas of consensus, and points of division.
For example, the peacebuilding organisation Build Up has used the Pol.is platform in Kenya, Sudan, Guinea-Bissau, and Burkina Faso to collect large volumes of public input and visualise where views converge or diverge.
Pol.is allows participants to submit short statements and vote on others’ contributions, using algorithms to cluster patterns of agreement and highlight positions that gain cross-group support.
CMI has similarly used WhatsApp-based consultations combined with AI sensemaking to gather and analyse input from participants across multiple countries, including Yemen, the South Caucasus, Guinea-Bissau, and Sudan.
AI sensemaking can support listening at scale by helping analyse large volumes of qualitative input to detect emerging themes, visualise patterns, and synthesise insights more efficiently.
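The core of a Pol.is-style analysis can be sketched as follows. This is a toy illustration with invented statements and votes, not Pol.is's actual clustering algorithm: each participant votes agree (+1), disagree (-1), or pass (0) on short statements, and statements supported by a majority of every group surface as candidate common ground.

```python
# votes[group] holds one ballot per participant; each ballot lists
# that participant's vote on each statement: +1 agree, -1 disagree, 0 pass.
votes = {
    "group_a": [[1, 1, -1], [1, 0, -1]],
    "group_b": [[1, -1, 1], [1, -1, 0]],
}
statements = ["Ceasefire monitoring", "External mediation", "Local committees"]

def agreement_rate(group, s):
    """Share of a group's participants who agreed with statement s."""
    ballots = votes[group]
    return sum(1 for b in ballots if b[s] == 1) / len(ballots)

# Statements a majority of *both* groups agree with: cross-group support.
common_ground = [
    statements[s]
    for s in range(len(statements))
    if all(agreement_rate(g, s) > 0.5 for g in votes)
]
```

Pol.is additionally clusters participants into opinion groups from their voting patterns, but the basic idea is the same: simple votes, aggregated at scale, make convergence and divergence visible.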
How can AI serve as an “early-warning system” for peacemakers?
“AI for early warning”, as it is often called, works like a smoke alarm for conflict: it watches for patterns that often precede violence and raises a flag early.
For example, VIEWS – an open-source platform powered by AI – provides monthly forecasts of the likelihood and intensity of armed conflict worldwide, while Ushahidi maps incoming reports from communities (texts, forms, messages) so responders can quickly spot trouble areas and act preventively.
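The “smoke alarm” idea can be illustrated with a deliberately simple sketch. The numbers are invented and the rule is crude; real platforms such as VIEWS use far richer statistical and machine-learning forecasts. But the underlying logic is the same: compare recent activity against a baseline and raise a flag when it spikes.

```python
def raises_alarm(weekly_incidents, window=4, threshold=2.0):
    """Flag when the latest week exceeds `threshold` times the
    average of the preceding `window` weeks -- a crude anomaly check."""
    if len(weekly_incidents) <= window:
        return False  # not enough history to form a baseline
    baseline = sum(weekly_incidents[-window - 1:-1]) / window
    return weekly_incidents[-1] > threshold * max(baseline, 1)

calm = [3, 4, 2, 3, 4]    # steady background level: no alarm
spike = [3, 4, 2, 3, 12]  # sudden jump above baseline: alarm
```

A real early-warning system would combine many such signals (media narratives, displacement data, incident reports) and weigh them with trained models, but each signal is, at heart, a comparison against what “normal” looks like.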
What are AI agents and how can they help support peace?
AI agents are task-doers: you give them a goal, and they can work through it step by step, using tools like your computer and the internet. Human oversight remains essential, with users defining the boundaries of what agents can do and reviewing their outputs.
In peace work, agents can automate rudimentary internal tasks such as sorting reports, summarising notes, or translating messages. This frees up teams to focus on relationships and decisions.
What is a mediator assistant and what can it do?
A mediator assistant in the context of peacemaking and conflict resolution is a support tool that helps a mediator prepare and stay organised. It can summarise points made by different sides, track key narratives and actors, spot patterns over time, translate, and suggest more neutral wording. It can also learn from past cases to propose options. But it should support, not replace, human mediators, because peacemaking depends on human trust and judgement.
What are future-oriented dialogues?
Future-oriented dialogues use structured foresight methods – such as scenario building or the Three Horizons Framework – to help participants explore how current trends might unfold and what actions could shape a more peaceful future. Rather than focusing solely on present grievances, these formats encourage participants to think beyond existing positions and identify shared aspirations.
Digital tools and AI can support this by generating scenarios, visualising possible futures, or synthesising participants’ contributions into shared narratives. CMI has applied future-oriented dialogue formats in several contexts to complement traditional mediation approaches.
What does ‘responsible AI’ mean in peacemaking?
Responsible AI in peacemaking means using artificial intelligence in ways that are transparent, accountable, and aligned with the principles of conflict sensitivity and do-no-harm. This includes being clear about how AI tools process data, ensuring meaningful human oversight over AI-generated outputs, protecting the privacy and safety of participants, and recognising the limitations of AI, particularly in sensitive political contexts where errors or biases can have serious consequences. CMI has developed a set of AI principles to guide how it integrates AI into its peacemaking work.

