Published on Wednesday, 29th of October 2025

AI in peacebuilding and mediation: what does responsible use look like?

CMI Digital Peacemaking specialist, Michele Giovanardi, reflects on a workshop by CMI and partners at Geneva Peace Week. The workshop asked: what is responsible use of AI in peacemaking, what are the risks and what are the opportunities?

Artificial intelligence is dramatically reshaping how peacebuilding and mediation actors understand and engage with conflict dynamics: from early warning and dialogue facilitation to inclusion and analysis.

During Geneva Peace Week 2025, CMI co-hosted a workshop which brought together practitioners, policymakers, and researchers to reflect on how AI can be safely and effectively used for peace.

The workshop, ‘A code of conduct for the responsible use of AI in peacebuilding’, drew experts from international organisations, civil society and academia to explore the opportunities and risks of AI in peacemaking.

What opportunities can AI realistically offer for peacebuilding and mediation?

AI can support peacebuilders and mediators at multiple levels – from grassroots to governmental decision-making – by enhancing three critical areas: conflict analysis, listening, and decision-making.

Better conflict analysis. AI tools can process large volumes of data from diverse sources, including news reports, social media, and intelligence briefings. This can strengthen situational awareness and support a preventive approach to crisis diplomacy. By identifying early signs of escalation and mapping key stakeholders in complex contexts, AI enables more proactive engagement before conflicts intensify.

Better listening. AI is already being used to broaden inclusion in peace processes. For example: chatbots that collect citizen input, translation systems that bridge linguistic divides, and analytical models that identify the most urgent or divisive issues that emerge from consultations. In sum, AI helps ensure that more voices, and particularly those of minorities and marginalised groups, are heard in shaping peace efforts. Examples such as CMI’s youth consultations in Yemen demonstrate how AI can foster inclusion and support the sustainability of peace processes.

Better decision-making. As a daily assistant to peacebuilders and mediators, AI can draw from historical and organisational data to help simulate and test interventions. Some initiatives are exploring conflict forecasting and behavioural modelling through “digital twins”, or social replicas of stakeholder groups. Additionally, agentic AI can streamline internal organisational processes, freeing practitioners to dedicate more time to meaningful human interaction.

Where are the boundaries and what pitfalls must we avoid?

We must ensure that AI doesn’t cause more harm than the problems it aims to solve. Responsible adoption is essential, and this is a shared responsibility. Peacebuilding and mediation organisations must establish clear guidelines and codes of conduct for AI use.

But policymakers and technology companies also play a crucial role, as they have greater leverage in setting boundaries through what some call “prosocial design”. These are features that actively discourage violence and promote constructive online interactions by design.

CMI’s work is grounded in key principles that we are now codifying and validating across the peacebuilding field: local ownership, because peace must be built by those who live it; transparency, because trust is the foundation of dialogue; inclusive design, ensuring technology reflects the diversity of those it serves; and complementarity, combining digital innovation with traditional face-to-face engagement. Above all, we follow the principle of “Do No Harm”: making certain that AI tools do not endanger lives, escalate tensions, or contribute to surveillance, repression, or manipulation.

Successful AI-supported peace interventions require careful process design. Using AI “for the sake of it” is counterproductive. AI can only support peacebuilding when grounded in thorough process analysis that considers digital literacy levels, connectivity infrastructure, potential biases, and risks of misuse. AI and digital tools should always serve as complementary features: enhancers of human capacity, not replacements for it. This is especially critical in peacebuilding, where we deal with sensitive issues involving deep emotions, trauma, and grievances that require genuine human presence and empathy, and in mediation, where trust is essential and cannot be replicated by machines.

Balancing risks with benefits: what practices can mitigate the downsides?

There are significant risks in AI’s application to peacebuilding, including: the potential for bias in AI systems; over-reliance on algorithmic outputs; and the creation of information echo chambers that can reinforce divisions rather than bridge them.

Workshop participants noted that while AI can process and analyse data at unprecedented speed, it fundamentally lacks the empathy and contextual understanding that peacebuilding and mediation require. To mitigate these risks, maintaining a “human touch” is essential, through active critical thinking and verification of AI-generated insights. Practitioners must critically engage in interpreting and validating algorithmic outputs rather than just accepting them.

Organisations also need to be transparent about how AI tools inform their analysis and decision-making processes, to ensure accountability and build trust with affected communities. Practitioners must also build their AI and digital literacy skills to ensure that technology complements rather than replaces human judgement. Without adequately understanding AI’s capabilities and limitations, practitioners risk misusing tools or overlooking critical contextual factors. AI must operate within clearly agreed ethical frameworks that prioritise safety, inclusivity, and context-sensitivity.

At the same time, participants recognised AI’s potential to strengthen peacebuilding and mediation when applied responsibly. AI can support education, provide deeper analytical insights about conflict dynamics, and help identify common ground in negotiations. It can serve as a neutral enabler that enhances collaboration and understanding.

Following the workshop, CMI and organising partners – the Agency for Peacebuilding and the CyberPeace Institute – will continue to develop practical tools to support responsible AI adoption in the field. This comprises three resources: 1) a comprehensive guide on responsible use of AI in peacebuilding and mediation; 2) an improved digital risk mitigation course for mediators; and 3) a new free introductory course on responsible AI use in peacemaking.

These digital learning offerings complement CMI’s ongoing in-person capacity-building programmes, which remain available to organisations seeking support with digital transformation and AI integration in their work.

Michele Giovanardi is Programme Officer for Digital Peacemaking at CMI – Martti Ahtisaari Peace Foundation and a doctoral candidate at the United Nations University for Peace (Costa Rica), where he conducts research on artificial intelligence and human-machine interaction in peacemaking. He was previously a researcher at the Global PeaceTech Hub of the Florence School of Transnational Governance, European University Institute (EUI), where he also worked in communication, digital education, and learning technologies.