Principles for the responsible use of artificial intelligence in peacemaking

CMI recognises that artificial intelligence (AI) can support efforts toward inclusive, just, and lasting peace by assisting mediators and conflict-affected communities in analysing complex information and patterns. When developed, tested, and applied with appropriate safeguards, AI applications can meaningfully strengthen peace processes and help broaden participation.

At the same time, AI outputs are shaped by existing data and technical systems that reflect biases, power dynamics, and political agendas, and must therefore be used critically rather than treated as neutral or authoritative sources of understanding.

AI is rapidly transforming the political, economic, and social landscapes in which peace processes unfold. In conflict-affected contexts, AI is already shaping narratives and understandings, influencing trust and relationships between actors, and affecting the flow of information. Without proactive guidance and deliberate development, AI can exacerbate inequalities, entrench bias, and further erode confidence among conflict stakeholders.

While governments, companies, and civil society invest extensively in AI for security, commerce, and administration, comparatively limited attention and resources are directed toward applications that foster dialogue, bridge divides, and strengthen conflict prevention. This gap highlights the need for deliberate investment in innovation, experimentation, and learning around AI for peace, supported by clear principles that enable responsible action.

We call for partnerships with governments, the private sector, and research institutions to place AI in the service of peace. This includes principled use supported by ethical safeguards and trust-building measures, alongside targeted innovation in applications that directly support dialogue and peacemaking. Together, these approaches help ensure that AI strengthens, rather than undermines, relationships among conflict stakeholders.

Grounded in decades of experience in mediation and dialogue, the following principles articulate our commitments for the ethical and effective use and development of AI in peacemaking. These principles are intended not to limit innovation, but to provide the guardrails needed to responsibly develop, pilot, and scale AI applications that serve peace. These commitments apply both to how we deploy AI in our dialogue and mediation work and to how we integrate AI internally across our own systems, workflows, and management practices, recognising that credible external use depends on responsible internal transformation.

1. People at the Centre

AI should complement and enhance, but never replace, in-person engagement, human interaction, and trust-building. We use AI only where it demonstrably adds value to human-led processes, recognising that sustained human presence and relationships remain central to effective peacemaking.

2. Inclusive Participation and Representation

Inclusion is a foundational consideration in our use of AI. We seek to support broader and more meaningful participation, with particular attention to groups that are often marginalised, including women, youth, minorities, displaced populations, and those whose perspectives are underrepresented in digital and written sources. We remain attentive to systemic biases, data gaps, and power asymmetries, and approach AI use cautiously to reduce the risk of reinforcing existing inequalities.

3. Sustainability and Proportional Use of AI

We approach AI investment and use with a sustainability lens, prioritising proportionate, resource-efficient applications that offer clear long-term value for peace efforts. We remain attentive to environmental, institutional, and contextual costs, and avoid deploying AI where simpler or less resource-intensive approaches are more appropriate.

4. Human Judgment and Decision-Making

AI does not replace human judgment or political decision-making in peace processes. It is used strictly to support, not determine, human-led dialogue, analysis, and decision-making. Responsibility remains with human actors at all times.

5. Risk Awareness and Safeguards

AI use in peacemaking must not endanger lives, escalate tensions, or enable surveillance, repression, or manipulation. Accordingly, we apply rigorous data protection and safety standards to mitigate these risks and protect trust, confidentiality, personal security, and ethical integrity in AI-supported peace interventions.

6. Critical Use and Interpretation of AI Outputs

AI outputs are shaped by the data, assumptions, and design choices on which they are based and must not be treated as neutral, complete, or authoritative. We interpret AI-generated insights critically and contextually, using them as one input among many rather than as the sole basis for conclusions or decisions in peace processes.

7. Agency and Informed Consent

We seek to ensure informed consent and clear communication about how AI tools function, their limitations, and how data are used. Our approach emphasises understanding and agency, enabling stakeholders to make informed choices about whether and how AI is used and to retain control over their dialogue processes.

8. Contextual Sensitivity

Every peace process is shaped by distinct historical, social, and political dynamics. We design and adapt AI tools with attention to local norms, conflict dynamics, and political realities. This supports AI use that is appropriate and responsive to the particular circumstances of each peace process.

9. Collaboration and Shared Responsibility

Advancing the responsible and innovative use of AI in peacemaking requires collaboration beyond individual organisations. We commit to sharing lessons from practice, contributing to collective learning, and engaging with governments, research institutions, the private sector, and civil society to strengthen ethical and context-sensitive approaches to AI in peacemaking.

10. Ethical Learning and Adaptation in the Use of Technology

Responsible use of AI in our work requires ongoing reflection and openness to change, including within our own internal systems and practices. We approach AI with humility, learn from experience, and remain prepared to adapt or discontinue its use when it no longer serves peace outcomes.

Contact

For more information about artificial intelligence at CMI: Michele Giovanardi, Programme Officer, Digital Peacemaking, michele.giovanardi@cmi.fi