A newsletter on Responsible AI and Emerging Tech for Humanitarians
At the heart of humanitarian action are the four core principles: humanity, neutrality, impartiality, and independence. These principles guide organisations in delivering aid solely based on need, without discrimination, and free from external influence.
As artificial intelligence (AI) becomes increasingly embedded in humanitarian decision-making, we have to ensure it aligns with these principles to avoid bias, inequity, misuse and distrust. AI-driven solutions must be developed to be neutral and impartial, ensuring they do not reinforce discrimination or prioritise certain groups over others.
To uphold the principle of humanity, we need to design AI solutions so that they remain understandable and accountable, with safeguards that protect people’s rights, allow for human oversight, and ensure technology adapts to local contexts rather than imposing external systems. AI should always strengthen, rather than replace, human capacity to serve those in need.
This month, we explore how organisations can embed humanitarian principles into AI development and use, and how we can ensure that these powerful tools remain aligned with humanitarian values rather than undermining them.
Behind the scenes of humanitarian AI (Podcast Feature)
A conversation hosted by Brent Phillips with Aleks Berditchevskaia (Nesta), Mark DuBois (independent consultant) and Olivier Mills (Baobab Tech).
While humanitarian efforts are grounded in core principles, these values are often missing from humanitarian AI discussions. In this episode, we explore how to ensure that AI development aligns with the principles at the heart of our work.
🎧 Listen now on SoundCloud or tune into the series via iTunes or Spotify.

The clock is ticking to build minimum guardrails into Humanitarian AI
In this article, Sarah W. Spencer, UKHIH and Elrha AI consultant, and Helen McElhinney, Executive Director of the CDAC Network discuss AI solutions and their quiet risks when woven into the fabric of humanitarian response.
Without safeguards, AI solutions may fundamentally undermine commitments to localisation, accountability, and the centrality of crisis-affected populations. To ensure that AI solutions align with humanitarian values, Sarah and Helen advocate for shared standards or human-centred guidelines that define safe and responsible AI use, and they emphasise the importance of community engagement in the design process. After all, this is not about resisting innovation but about anchoring it in the voices, understanding, and rights of those the technology is meant to serve.

Case study: fAIr Mapping - Empowering local communities with AI to speed up response
The fAIr mapping tool, developed by the Humanitarian OpenStreetMap Team (HOT), uses locally trained AI models to assist community mappers in identifying and mapping features such as buildings and roads, significantly speeding up disaster mapping and therefore the humanitarian response. The models are trained on small sets of satellite and drone imagery collected by local communities, producing maps that accurately reflect the unique features of a region and avoiding the biases often introduced by models built on data from unrelated contexts.
Maps generated by AI are then validated by local mappers, whose feedback improves the model – combining AI-driven efficiency gains with human oversight. The fAIr mapping tool is a strong example of how AI can be developed and deployed in alignment with humanitarian principles. By prioritising community involvement, transparency, and local relevance, HOT ensures the tool upholds the core values of impartiality and humanity. Centring community knowledge and participation in this way ensures that the tool serves genuine local needs rather than external agendas.
Additionally, fAIr's open-source design reflects a commitment to neutrality and independence, making its methods and data publicly accessible and ensuring decision-making processes remain transparent and accountable.
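For technically minded readers, the predict–validate–retrain loop described above can be summarised in a few lines of code. The sketch below is purely illustrative: it assumes nothing about HOT's actual codebase or the fAIr API, and every class, function, and identifier in it is hypothetical.

```python
"""Minimal sketch of a human-in-the-loop mapping cycle.

Purely illustrative: not fAIr's actual code. All names are hypothetical.
"""
from dataclasses import dataclass, field


@dataclass
class Tile:
    image_id: str                                    # a locally collected image tile
    predicted: list = field(default_factory=list)    # features proposed by the model
    validated: list = field(default_factory=list)    # features confirmed by mappers


class LocalModel:
    """Stand-in for a model trained on community-collected imagery."""

    def __init__(self):
        self.training_examples = []

    def train(self, examples):
        # In practice: fine-tune on the small, locally collected image set.
        self.training_examples.extend(examples)

    def predict(self, tile):
        # In practice: run building/road detection on the tile.
        tile.predicted = [f"building@{tile.image_id}"]
        return tile


def mapping_round(model, tiles, validate):
    """One iteration: predict, have local mappers validate, then retrain."""
    for tile in tiles:
        model.predict(tile)
        tile.validated = validate(tile)  # the human-oversight step
    # Validated corrections flow back into the model as new training data.
    model.train([(t.image_id, t.validated) for t in tiles])


if __name__ == "__main__":
    model = LocalModel()
    tiles = [Tile("region_tile_001"), Tile("region_tile_002")]
    # Trivial validator that accepts every prediction; real mappers would
    # correct, add, or reject features in a mapping editor.
    mapping_round(model, tiles, validate=lambda t: t.predicted)
    print(f"Model now holds {len(model.training_examples)} validated examples.")
```

The point the case study makes is visible in the loop itself: no model update happens without mapper-confirmed data, which is what keeps local knowledge, rather than an external dataset, in control of the output.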

Spotlight: Guiding AI with humanity: Inside the ICRC’s AI policy
The AI policy of the International Committee of the Red Cross (ICRC) establishes a framework for the ethical and responsible use of AI in humanitarian settings and explicitly aligns AI development and deployment with the core humanitarian principles: humanity, impartiality, neutrality, and independence. It prioritises a community-centred approach, which helps the organisation ensure that AI solutions are designed to protect the dignity and safety of affected populations.
AI solutions are adopted only when necessary and appropriate, and guided by clear objectives. Potential risks to populations, users, and the organisation are thoroughly assessed, and solutions undergo rigorous testing before deployment.
The ICRC commits to avoiding AI systems that could be manipulated for political purposes, used to discriminate, or cause harm. At the same time, it acknowledges the policy’s limitations as being ‘aspirational and general in nature’. For any humanitarian actor considering AI adoption, this policy could serve as a practical guide to integrate fundamental humanitarian principles into AI governance.
📖 Read ICRC’s AI Policy
Editor's Choice
Curated articles, tools, and events on AI and humanitarian innovation.
- “Refugee protection in the artificial intelligence era” (Chatham House, 2022) examines the growing use of AI in asylum and immigration systems, the risks to human rights and the need for tailored safeguards to ensure AI solutions uphold legal standards in refugee protection.
- "AI Risks and Realignments in Humanitarian Crisis Contexts" (UCL, 2024) serves as a practical guide for humanitarians to navigate AI risks and identifies key dichotomies such as transparency versus security and human oversight versus automation while highlighting the need for balanced, context-sensitive AI approaches.
- "Artificial Intelligence in Gender-Based Violence in Emergency Programming" (UNICEF, 2024) assesses how AI can enhance GBV interventions through tools like predictive systems and language models, while warning of risks such as data privacy breaches and algorithmic biases, advocating for survivor-centred AI integration
Upcoming Opportunities

The 2025 Humanitarian Networks and Partnerships Weeks (HNPW) takes place from 17 to 28 March 2025 (first week remote, second week in-person in Geneva).
The forum will include a virtual session ‘Where do we go from here? Closing the gap in digital inclusion for vulnerable, marginalized, and excluded groups’, delivered by CLEAR Global and UKHIH, on digital inclusion and its links to gendered digital risks, language inclusion, and other factors of vulnerability. Speakers will share tools, services, and approaches that support digital inclusion.
Date & Time: March 19, 2025, 14:00 - 15:30 Geneva Time (Remote Session). Registration Link: Zoom Meeting / Passcode: 367585
Other relevant sessions at the HNPW:
- Navigating the Future: The Impact of AI and emerging technologies on Security Risk Management (view abstract and register), 18 Mar 25, 16:00-17:30, remote
- Artificial Intelligence: Potential Futures for Humanitarian Learning and Development (view abstract and register), 20 Mar 25, 12:00-13:30, remote
- Making IATI Data AI Ready - A Partnership Approach (view abstract and register), 21 Mar 25, 17:00-19:00, remote
- GeoAI in Action: Practical Tools for Humanitarian Impact (view abstract and register), 24 Mar 25, 10:00-11:30, hybrid (at the venue: 24 Mar 25, 11:00-12:30 UTC+1)
- GANNET: Revolutionizing Humanitarian Decision-Making with AI-Driven Insights (view abstract and register), 24 Mar 25, 15:00-16:30, hybrid (at the venue: 24 Mar 25, 16:00-17:30 UTC+1)
- The good, the bad and the pragmatic: Navigating the AI landscape responsibly (view abstract and register), 25 Mar 25, 10:00-11:30, hybrid (at the venue: 25 Mar 25, 11:00-12:30 UTC+1)
- AI for humanitarians - an introductory workshop (view abstract and register), 26 Mar 25, 09:00-10:30 UTC+1, face-to-face
- Humanitarian Workforce & AI-Capacity & Capabilities? (view abstract and register), 26 Mar 25, 15:00-16:30, hybrid (at the venue: 26 Mar 25, 16:00-17:30 UTC+1)
- Scaling AI effectively: lessons from business, donors, and humanitarian agencies (view abstract and register), 26 Mar 25, 15:00-16:30, hybrid (at the venue: 26 Mar 25, 16:00-17:30 UTC+1)
- AI-enabling Geographic Information Systems (view abstract and register), 27 Mar 25, 16:00-17:30 UTC+1, face-to-face
Another training opportunity is the new free online course from the Global Campus of Human Rights, ‘A Human Rights Based Approach to AI’, which explores the profound impact AI can have on human rights, both positive and negative, and the new challenges it presents for individuals, communities, and society as a whole. Course dates: 17 February – 23 March 2025 (self-paced); free enrolment until 9 March 2025.
Continuing our focus on training, the European Commission (EC) has launched a funding opportunity: the Ethical and Effective use of AI Systems in Education and Training programme. With total funding of EUR 13,000,000, the programme aims to support projects that promote the responsible integration of generative AI in education; the submission deadline is 26 May 2025.
“The time has come for us to formulate a vision of where we want AI to take us as a society and as humanity.” – Ursula von der Leyen, President of the European Commission, at the Paris AI Summit
Disclaimer: The views expressed in the articles featured in this newsletter are solely those of the individual authors and do not reflect the official stance of the editorial team, any affiliated organisations or donors.