CMI has launched a set of principles for the responsible use of artificial intelligence in peacemaking. The principles express the organisation’s commitment to using AI only where it demonstrably adds value to human-led peacemaking.
Artificial intelligence is increasingly shaping the political, social, and security fabric in which peace processes unfold. It already influences how conflicts evolve and how peace efforts are designed and delivered, from information flows and public narratives to early warning and mediation support. The question is no longer whether AI will affect peace and conflict, but how it will do so, and who will shape its use.
Against this backdrop, CMI has launched its principles for the responsible use of artificial intelligence in peacemaking. “The principles articulate CMI’s commitment to using AI in ways that strengthen peace efforts grounded in human judgement, trust-building, and ethical responsibility,” says Michele Giovanardi, CMI Digital Peacemaking Officer.
“They are not a call to slow down or halt the use of AI, but to invest more, innovate more, and collaborate more, while ensuring that innovation is guided by clear guardrails.”
Read more:
Principles for the responsible use of artificial intelligence in peacemaking
CMI has articulated its commitments for the ethical and effective use and development of AI in peacemaking. The 10 principles are intended to provide the guardrails needed to responsibly develop, pilot, and scale AI applications that serve peace.
Innovation with responsibility
Many existing AI principles focus primarily on risk mitigation, but CMI’s experience is that safeguards without innovation are not enough. If mediators and peacebuilders do not actively engage with AI, develop meaningful applications, and shape how technologies are used, the peacemaking field risks falling behind as AI is deployed elsewhere with less regard for peace outcomes, says Giovanardi.
“Responsible AI for peace is a balancing act,” he adds. “We need strong safeguards to mitigate risks, but we also need to invest in innovation and new applications. If we only focus on what AI should not do, we miss the opportunity to explore what it can do to support dialogue, prevention, and trust-building.”
“The principles articulate CMI’s commitment to using AI in ways that strengthen peace efforts grounded in human judgement, trust-building, and ethical responsibility.”
CMI’s approach is rooted in practice, with AI used only where it demonstrably adds value to human-led peacemaking. Examples include strengthening listening and sensemaking among diverse stakeholders; supporting preparation for mediation; assisting with analysis; expanding media monitoring to better understand conflict dynamics; and enabling more inclusive dialogue processes at scale.
At the same time, CMI recognises the pitfalls: AI outputs are shaped by existing data, power dynamics, and political contexts, and must therefore be interpreted critically rather than treated as neutral or authoritative.
Giovanardi adds that “CMI sees AI as an enabler of mediation, dialogue, de-escalation, and preventive engagement, not as a substitute for political processes or human relationships.”
He also notes: “There is a tendency to see AI mainly through a defence or security lens. But lasting security comes from a combination of defence and peace diplomacy. Investing in AI for peace is not an alternative to security investments; it is a necessary complement.”
A call for partnership and shared learning
For CMI, the principles are a call for partnership and conversation. They are an invitation to governments, the private sector, civil society, and research institutions to work together to advance AI applications that genuinely support peace efforts.
The principles also serve to foster dialogue with peer organisations and partners across regions, including in Europe, Africa, the Middle East, Asia, and beyond. CMI intends to convene conversations that bring together technology actors, policymakers, and peace practitioners to explore how AI can be applied responsibly in different political and cultural contexts.
Janne Taalas, CMI’s CEO, emphasises that CMI wants to be at the cutting edge of developing and applying AI for peace. “We have innovative pilots, highly capable colleagues, and strong organisational motivation. The next step is to scale this work through strong partnerships. We have the ingredients; now we need collective effort.”
Read more:
An introductory guide to the terminology of digital peacemaking
Digital tools and AI are part of modern peacemaking practice, yet the language used is not always clear. CMI’s introductory “digital peacemaking” guide is a succinct overview of everyday terminology with examples from real peace processes.
Leading by doing: from principles to practice
The principles are the result of a participatory process that included engagement with the broader peace and policy community, including discussions at international forums such as Geneva Peace Week.
They are also closely linked to CMI’s strategic investment in digital peacemaking. Across its work, CMI is developing and testing AI-supported applications that enhance listening and sensemaking; strengthen the analysis of in-person dialogues; expand media monitoring; assist mediators; and improve monitoring and evaluation processes.
These efforts are guided by careful contextual analysis and process design to determine where AI adds value and where it may introduce risk.
The principles also align with CMI’s internal digital transformation efforts. Credible external deployment of AI must be in step with responsible internal adoption. CMI is therefore integrating AI into its own systems, workflows, and management practices in ways that improve analysis, reduce unnecessary administrative burdens, and free up resources for meaningful human engagement.
In addition, the principles connect to CMI’s forthcoming free online course, Responsible AI for Peace, developed in partnership with New York University and planned for launch later this year. The course will translate these commitments into practical learning for peacebuilders, policymakers, and practitioners.
Join the conversation
With the launch of its principles for the responsible use of artificial intelligence in peacemaking, CMI invites others to engage openly and critically:
- What principles guide your use of AI, if any?
- How are you currently using AI for peace, mediation, or conflict prevention?
- Where do you see opportunities to do better, together?
CMI welcomes collaboration, discussion, and joint experimentation. ‘PeaceTech’ is a global challenge and opportunity, and shaping it responsibly will require dialogue across sectors, regions, and disciplines.
Contact:
For more information about artificial intelligence at CMI, contact Michele Giovanardi, Programme Officer, Digital Peacemaking, michele.giovanardi@cmi.fi.