A newsletter on Responsible AI and Emerging Tech for Humanitarians
As more humanitarian actors adopt and integrate AI solutions into their work, transparency is not optional—it is essential. Understanding which AI tools are being used, how they make decisions, and how they operate in crisis-affected contexts is critical for building trust, ensuring ethical use, and safeguarding vulnerable populations.
Transparency in the humanitarian use of AI can be explored across three layers:
1) Transparency within the AI Model: Understanding how AI systems make decisions, including their limitations and biases.
2) Transparency within the Humanitarian Sector: Sharing insights on who is using AI, what works, and what doesn’t, fostering collaboration and avoiding duplication.
3) Transparency with Users: Informing and engaging those directly or indirectly impacted by AI decisions to ensure clarity and understanding.
Explainability, interpretability, and accountability are essential elements that apply across all three layers, guiding the responsible deployment and use of AI.
By embracing transparency, humanitarian organisations can harness the potential of AI responsibly and make a positive impact while upholding the core principles of Humanity, Neutrality, Impartiality and Independence. Enhancing transparency also aligns with international AI standards from the IEC, ISO, NIST, OECD, and UNESCO, promoting ethical and accountable practices globally.
Let’s explore the latest updates, insights, and tools in responsible AI!
Transparency in action
A conversation hosted by Brent Phillips with Michael Hind (IBM Research), Shadrock Roberts (Mercy Corps), Scott Turnbull (Data Friendly Space) and Liam Nicoll (International Rescue Committee), with inputs from Sarah W. Spencer, an AI consultant supporting the UKHIH.
This episode of Humanitarian AI Today goes beyond abstract definitions of transparency to explore real-world applications. Learn how humanitarian organisations are using AI tools and addressing gaps in data and accessibility.
🎧 Listen to the podcast or tune into the series via iTunes or Spotify.

We have launched a new AI Directory!
Now it’s your turn to help build it!
We’re thrilled to announce the launch of our Directory of AI-enabled Humanitarian Projects! This open-access resource consolidates information on current and past projects that deploy AI for humanitarian impact. Our aim is to give these existing initiatives visibility, making it easier to:
- Share lessons and build on existing innovations.
- Connect humanitarian organisations with tech providers.
- Identify safe and reliable AI tools for the sector.
This initiative was driven by the growing use of AI to enhance reach, impact, and efficiency in the humanitarian sector, and by the lack of publicly shared information, which hinders collaboration and informed decision-making.
But this is just the beginning. We need your input—we invite humanitarian organisations and tech companies to join us by sharing details about your AI initiatives. Transparency about what you are doing, what works, and what you are learning is essential for advancing the sector as a whole.
Editor's Choice
Articles, training courses, videos, and podcasts we think you'll find interesting.
- The Humanitarian AI Code of Conduct, launched this summer by NetHope (2024), was developed by and for humanitarian organisations and offers tailored principles for implementing AI technologies responsibly and ethically.
- AI Challenges: Biases and lack of transparency of algorithms, CIVICUS (2024) interviews the Humanitarian OpenStreetMap Team, which highlights the need for open-source solutions and community involvement to enhance humanitarian efforts.
- The clock is ticking to build guardrails into humanitarian AI, Helen M. and Sarah S. (2024) address the need for the sector to establish clear guidelines for AI use, ensuring transparency and adherence to humanitarian principles.
- Not child’s play (2023): R. Radu (Oxford) and E. Olliaro (UNICEF) advocate for improved transparency of children-related data in AI models, the ability for AI systems to "unlearn" in vulnerable contexts, and the creation of a public register of vetted AI solutions.

Over the coming months, we’ll be sharing a series of case studies from NetHope that delve into the inner workings of AI applications and how these technologies are being applied in real-world scenarios.
Case study: How DEEP enhances data transparency and collaboration
DEEP is another example of transparency in action: an open-source platform that empowers humanitarians to manage and analyse vast amounts of unstructured data efficiently.
Using Natural Language Processing (NLP) and generative AI, the platform extracts, organises, and synthesises information from diverse sources like PDFs, reports, and websites. This enables the creation of situational analyses, needs assessments, and actionable insights at speed.
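To make this concrete, here is a minimal, generic sketch in Python of the kind of pipeline described above: extracting text from a PDF report and producing a rough extractive summary. This is an illustration only, not DEEP's actual code; the file name and the simple frequency-based summariser are assumptions standing in for the platform's NLP and generative AI components.

```python
# A generic illustration of the kind of pipeline described above: pull raw
# text out of a PDF report, then build a crude frequency-based extractive
# summary. This is NOT DEEP's actual code; the file name is a placeholder.
import re
from collections import Counter

from pypdf import PdfReader  # pip install pypdf


def extract_text(path: str) -> str:
    """Extract raw text from every page of a PDF report."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def summarise(text: str, n_sentences: int = 3) -> str:
    """Rank sentences by word frequency and keep the top few - a simple
    extractive stand-in for the platform's NLP and generative AI steps."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    freq = Counter(re.findall(r"[a-z]{4,}", text.lower()))
    ranked = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z]{4,}", s.lower())),
        reverse=True,
    )
    return " ".join(ranked[:n_sentences])


# "situation_report.pdf" is a hypothetical input file.
print(summarise(extract_text("situation_report.pdf")))
```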
DEEP fosters collaboration across organisations by sharing validated data and remains transparent by allowing any organisation to inspect its processes and algorithms. This makes it a vital tool for enhancing trust, efficiency, and impact in crisis response.
Spotlight on Explainable AI (xAI)
What is xAI? Explainable AI (xAI) refers to methods and techniques that make the decision-making processes of AI systems transparent and understandable to humans. By addressing the "black box" problem, xAI enables users to comprehend, trust, and effectively interact with these systems.
How does it work? xAI focuses on three main techniques: prediction accuracy (ensuring AI decisions align with real-world outcomes), traceability (tracking decision-making steps within AI models), and decision understanding (helping users comprehend AI decisions). Together, these approaches enable effective human-AI collaboration.
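For readers curious what this looks like in practice, below is a minimal sketch of one widely used explainability technique, permutation importance, using Python and scikit-learn. The model, synthetic dataset, and feature names are hypothetical illustrations, not drawn from any project featured in this issue.

```python
# A minimal sketch of "decision understanding" via permutation importance,
# one widely used xAI technique. The dataset is synthetic and the feature
# names are hypothetical; no real project's model is shown here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical inputs a needs-assessment model might use.
feature_names = ["displacement_rate", "food_insecurity", "access_score", "shelter_gap"]

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy: a large
# drop means the model's predictions genuinely depend on that input.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: importance {score:.3f}")
```

Reporting importances like these alongside a model's outputs is one concrete way to give partner organisations and affected communities visibility into what drives an AI-assisted decision.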
Why does it matter? In the humanitarian sector, xAI is essential for upholding values like neutrality, impartiality and independence, especially in conflict-sensitive environments. If AI-driven decisions are not explainable, humanitarian organisations risk losing trust and may inadvertently cause harm to the communities they aim to serve, as accountability and ethical action cannot be assured.

Upcoming events
- Cash Bytes: Data and Digital Developments in CVA, Online, 11th December, 2024 – This free webinar, hosted by the CALP Network, focuses on the latest technological advancements within Cash and Voucher Assistance (CVA).
- Global Nonprofit Leaders Summit, Washington, USA, 25th–27th March, 2025 – A free in-person event where nonprofit leaders will explore AI innovation and skill development to increase impact.
- ACM FAccT Conference, Athens, Greece, 2025 (exact date TBC) – A computer science conference that brings together practitioners interested in fairness, accountability, and transparency in socio-technical systems.
Funding Opportunity!
The Box Impact Fund offers six $25k grants to nonprofits for digital transformation projects in crisis response. The application deadline is 9th December 2024.

Call for Proposals
The UKHIH is seeking proposals to deliver the Inclusive Solutions for Humanitarian Technology initiative. The deadline is Friday 29th November.
Get involved! We want to hear from you: tell us which AI topics interest you most. Share your thoughts.
“AI is the opportunity of our lifetimes and it is the challenge of our lifetimes to figure out how to use this technology.” – Justin Spelhaug, Vice President of Technology for Social Impact, Microsoft
Disclaimer: The views expressed in the articles featured in this newsletter are solely those of the individual authors and do not reflect the official stance of the editorial team, any affiliated organisations or donors.