A newsletter on Responsible AI and Emerging Tech for Humanitarians
AI is already shaping how humanitarians respond to crises - but who decides how it is used, and who ensures it upholds humanitarian values? Without strong governance, we risk deploying tools that reinforce bias, mishandle sensitive data, and make decisions without accountability - all of which can profoundly affect the people we aim to serve.
This month, we explore the critical need for AI governance at three levels:
- Within humanitarian organisations, where internal policies, ethical decision-making processes, and risk management frameworks are essential to ensure AI aligns with humanitarian principles and protects affected communities.
- Across the humanitarian sector, where we need sector-specific governance approaches that reflect our values and mandates - including greater transparency, shared standards, and collaboration between humanitarian actors to responsibly guide the development and use of AI.
- In the wider world of global AI regulation, where governments, industry, and multilateral bodies are shaping legislation, standards, and norms. While some of these efforts may not directly guide humanitarian use of AI, they shape the legal and ethical context we operate in - and offer valuable lessons we shouldn't have to learn the hard way.
AI is here to stay - but getting governance right is the difference between empowering communities and deepening vulnerabilities. Let’s explore what responsible AI governance looks like and why it matters.
🎙️ Podcast Feature – Behind the scenes of humanitarian AI
A conversation hosted by Brent Phillips with Eugenia Olliaro (Data Lead, UNICEF), Stefaan Verhulst (Co-founder of NYU’s GovLab), Meeri Haataja (CEO of Saidot), Agata Ferretti (IBM) and Aparna Bhushan (Data Governance and Digital Policy Advisor).
This episode brings together experts from humanitarian, academic, and tech sectors to explore how AI governance can be made more inclusive, accountable, and context-aware to ensure ethical and effective use in humanitarian settings.
🎧 Listen now on SoundCloud or tune into the series via iTunes or Spotify.

🌍 Hot off the press: The SAFE AI Project
AI has the potential to change humanitarian action for the better. But there is a real risk that an overstretched humanitarian sector, under pressure to reduce costs, will accelerate towards unsafe uses of AI, with serious unintended consequences for vulnerable populations.
In response, CDAC Network, The Alan Turing Institute and Humanitarian AI Advisory have joined forces to launch the SAFE AI project: Standards and Assurance Framework for Ethical Artificial Intelligence. Funded by UK FCDO, this project focuses on creating clear governance guidelines, building tools to verify AI trustworthiness, ensuring affected communities have a meaningful voice, and working directly with humanitarian organisations to build solutions that address their actual needs.
🔍Learn more about SAFE AI

🧭 Spotlight: UNESCO’s Global AI Ethics and Governance Observatory
As AI continues to impact humanitarian action, we must ensure AI systems are ethical, transparent, and accountable. For this purpose, UNESCO’s Global AI Ethics and Governance Observatory provides a crucial resource for policymakers, humanitarian organisations, and civil society to navigate the challenges posed by AI.
The platform offers Country Profiles that provide a snapshot of a country’s readiness to adopt AI ethically and responsibly, including country-specific insights, regulatory developments, and best practices.
Whether you are developing AI policies, advocating for responsible governance, or seeking guidance on risk mitigation, this observatory is a go-to hub for ensuring AI serves humanity, not just efficiency.
💡 Editor’s Choice
Articles, training courses, videos, and podcasts we think you'll find interesting.
- GiveDirectly’s Responsible AI/ML Framework offers clear, actionable guardrails to help organisations adopt ethical AI and manage risks in humanitarian settings, with practical case studies that bring principles like transparency, inclusion, and consent to life.
- ICRC's Policy on AI: A framework for the ethical and responsible use of AI in humanitarian settings, ensuring alignment with core humanitarian principles and minimising harm to affected populations.
- UNICEF's Policy Guidance on AI for Children: Recommendations for developing AI policies that safeguard children’s rights.
- NetHope has developed a Humanitarian AI Code of Conduct, guiding nonprofit organisations in the ethical development and deployment of artificial intelligence, and a Data Governance Toolkit, a guide to implementing data governance in nonprofits.
- ISO 42001 Standard: A global standard to help organisations implement responsible AI governance.
- EU's AI Act: The European Union’s legal safeguards that impact humanitarian AI, requiring compliance and fostering trustworthy and risk-aware AI applications.
- NIST's AI Standards: In the US, NIST develops technical AI standards that can help humanitarians assess AI performance, reliability, and governance, ensuring AI solutions are both effective and responsible.
- OECD AI Principles: The first intergovernmental AI principles promoting trustworthy AI that upholds human rights, fairness, and democratic values, offering a foundation for humanitarians to align AI efforts with ethical best practices.

🛠️ Case Study: Mercy Corps' Approach to Ethical Generative AI
Mercy Corps’ approach to AI governance demonstrates how organisations can balance innovation with responsibility in humanitarian settings. Recognising the inevitability of AI adoption, Mercy Corps built two in-house generative AI chatbots as a safe and ethical alternative to publicly available AI tools such as ChatGPT. The chatbots comply with the organisation’s own data protection policies and reduce risks such as bias and misinformation by drawing solely on information stored in Mercy Corps’ internal databases.
Governance is embedded through their Ethical AI workstream, which integrates emerging AI regulations with humanitarian principles and conducts ethical AI assessments to guide responsible use. Extensive user testing, adversarial "red teaming," and transparency about the documents the chatbot uses ensure that chatbot outputs remain accountable and verifiable.
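For readers curious how such a "closed" chatbot can be wired up, here is a minimal illustrative sketch - hypothetical code, not Mercy Corps' actual implementation. The idea it shows is the one described above: answers are composed only from an internal document store, the model never reaches out to the open web, and every reply cites its source documents so outputs stay verifiable. All names (`Document`, `INTERNAL_STORE`, `retrieve`, `answer`) are invented for this example.

```python
# Hypothetical sketch of an internally grounded chatbot (not Mercy Corps' code).
# Replies are built only from documents in an internal store, and every reply
# lists the source documents so outputs can be verified by staff.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str  # internal reference, e.g. a policy number
    text: str    # document contents

# Stand-in for the organisation's internal database.
INTERNAL_STORE = [
    Document("POL-001", "Staff must not upload beneficiary data to public AI tools."),
    Document("POL-002", "All AI outputs used in programmes must be reviewed by a human."),
]

def retrieve(query: str, store: list[Document], k: int = 2) -> list[Document]:
    """Naive keyword retrieval; a production system would use vector search."""
    terms = set(query.lower().split())
    scored = [(sum(t in doc.text.lower() for t in terms), doc) for doc in store]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def answer(query: str) -> str:
    """Compose a grounded reply; refuse rather than guess when nothing matches."""
    hits = retrieve(query, INTERNAL_STORE)
    if not hits:
        return "No relevant internal guidance found - please consult the data team."
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in hits)
    # In production, this context (and only this context) would be passed to the
    # language model as its source material; here we return it directly.
    return f"Based on internal guidance:\n{context}\nSources: " + ", ".join(d.doc_id for d in hits)

print(answer("Can I put beneficiary data into a public AI tool?"))
```

The key design choice, refusing to answer when no internal source matches, is what makes transparency and red teaming (described above) meaningful: every output can be traced back to a named document or flagged as unanswerable.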
This case highlights how humanitarian organisations can govern AI internally - developing AI policies, risk management frameworks, and technical oversight to ensure AI aligns with humanitarian values.
📖 Read full case study
🎧 Podcast Spotlight: Foundational Impact with Suzy Madigan
Whose voices shape AI governance? In this episode of Foundational Impact, Suzy Madigan, Responsible AI Lead at CARE International, discusses the urgent need for greater representation of Global South civil society in AI decision-making (the tech chat starts at minute 12:45). She highlights how these voices are often sidelined in key discussions, despite being directly affected by AI-driven solutions. Suzy emphasises that INGOs have a responsibility to amplify partner voices from the Global South and advocate for more inclusive AI governance, so that AI development reflects diverse perspectives. As AI reshapes humanitarian action, inclusive governance is more urgent than ever.

📅 Upcoming Opportunities
Funding
The European Commission (EC) has launched a funding opportunity, the Ethical and Effective use of AI Systems in Education and Training Programme. With total funding of EUR 13,000,000, the programme aims to support projects that promote the responsible integration of generative AI in education. Submission deadline: 26th May 2025.
Innovation Norway's Humanitarian Innovation Programme has opened its 2025 call for proposals, inviting humanitarian UN agencies, INGOs with a presence in Norway, IFRC, and ICRC to apply for funding (up to NOK 10 million per project) to develop and scale innovative solutions that enhance humanitarian action. Submission deadline: 13th June 2025.
Events
The 4th European Humanitarian Forum 2025, co-hosted by the European Commission and Poland, will take place on 19th-20th May 2025 at The Square in Brussels, focusing on pressing humanitarian issues.
Third Sector: The Conference, set for 18th-19th June 2025 at the Barbican Centre in London, will bring together leading voices in fundraising, finance, tech, and charity leadership to address the sector's challenges, including digital and AI.
The AI for Good Global Summit 2025, scheduled for 8th-11th July in Geneva, Switzerland, aims to identify trustworthy AI applications, build skills and standards, and advance governance for sustainable development.
“Prioritising community engagement and getting the governance of AI right is imperative. Setting guidelines and standards for the use of AI in humanitarian settings will ensure AI lives up to all its promises, big or small.”
- Helen McElhinney, Executive Director, CDAC Network
Disclaimer: The views expressed in the articles featured in this newsletter are solely those of the individual authors and do not reflect the official stance of the editorial team, any affiliated organisations or donors.