Principles for the responsible use of artificial intelligence in peacemaking
CMI recognises that artificial intelligence (AI) can support efforts toward inclusive, just, and lasting peace by assisting mediators and conflict-affected communities in analysing complex information and identifying patterns. When developed, tested, and applied with appropriate safeguards, AI applications can meaningfully strengthen peace processes and help broaden participation.
At the same time, AI outputs are shaped by existing data and technical systems that reflect biases, power dynamics, and political agendas, and must therefore be used critically rather than treated as neutral or authoritative sources of understanding.
AI is rapidly transforming the political, economic, and social landscapes in which peace processes unfold. In conflict-affected contexts, AI is already shaping narratives and understandings, influencing trust and relationships between actors, and affecting the flow of information. Without proactive guidance and deliberate development, AI can exacerbate inequalities, entrench bias, and further erode confidence among conflict stakeholders.
While governments, companies, and civil society invest extensively in AI for security, commerce, and administration, comparatively limited attention and resources are directed toward applications that foster dialogue, bridge divides, and strengthen conflict prevention. This gap highlights the need for deliberate investment in innovation, experimentation, and learning around AI for peace, supported by clear principles that enable responsible action.
We call for partnerships with governments, the private sector, and research institutions to place AI in the service of peace. This includes principled use supported by ethical safeguards and trust-building measures, alongside targeted innovation in applications that directly support dialogue and peacemaking. Together, these approaches help ensure that AI strengthens, rather than undermines, relationships among conflict stakeholders.
Grounded in decades of experience in mediation and dialogue, the following principles articulate our commitments for the ethical and effective use and development of AI in peacemaking. These principles are intended not to limit innovation, but to provide the guardrails needed to responsibly develop, pilot, and scale AI applications that serve peace. These commitments apply both to how we deploy AI in our dialogue and mediation work and to how we integrate AI internally across our own systems, workflows, and management practices, recognising that credible external use depends on responsible internal transformation.