A newsletter on Responsible AI and Emerging Tech for Humanitarians
Evidence is the foundation for scaling innovations responsibly, particularly in the humanitarian sector where lives and livelihoods are at stake. Traditionally, humanitarian innovations are expected to follow an innovation cycle, meeting specific thresholds of evidence at every stage before being scaled.
However, the rapid adoption of artificial intelligence (AI) technologies has often outpaced this traditionally cautious approach. Recent conversations highlight a concerning trend: many AI applications are deployed without sufficient evidence of their effectiveness, feasibility, or ethical implications. For instance, discussions at Wilton Park have underscored the need for robust evaluation and assurance approaches to ensure AI tools are safely developed and implemented in humanitarian contexts.
Furthermore, experts caution against the premature deployment of AI systems without adequate oversight, warning that such practices can lead to unintended consequences, including harm to vulnerable populations.
This month, we explore the role of evidence in humanitarian AI: What does good evidence look like? Why does it matter? And how can we ensure AI innovations meet these thresholds before they are deployed at scale? Let’s delve into the challenges and opportunities of building an evidence-driven future for AI in the humanitarian sector.
Behind the scenes of humanitarian AI (Podcast Feature)
A conversation hosted by Brent Phillips, with Zineb Bhaby (Data Solutions Lead for the Norwegian Refugee Council), Zita Lengyel-Wang (Matching Manager, Tech to the Rescue), Maria Kett (UCL), Thomas Byrnes (Humanitarian and Social Protection Consultant) and Tigmanshu Bhatnagar (UCL)
This month’s podcast explores the challenges, the opportunities, and the evidence on the acceptability, feasibility, effectiveness, value for money, impact, and transferability of AI solutions across contexts.
🎧 Listen now on SoundCloud or tune into the series via iTunes or Spotify.

Evidence and ethical design in humanitarian technology
Our recent landscape review of emerging technologies in the humanitarian sector highlights significant ethical challenges faced by organisations when designing and deploying technology. Emerging evidence from the review reveals a troubling trend: while technologies are already widely used, ethical design is often neglected, and communities are frequently excluded from the design process. As a result, localisation agendas, which are meant to empower affected populations, instead risk reinforcing global power imbalances by centring external actors such as donors and global technology companies.
In this reflection on the review, Shruti Viswanathan explores the risks of digital harm, from data misuse to the political dynamics of technology development. She emphasises the urgent need for humanitarian organisations to address these power structures. By prioritising community-led action and ethical, inclusive frameworks, the sector can better align technology with its core humanitarian principles.

Case study: How AI is helping humanitarian actors predict and prepare for crises
In 2024, forced displacement exceeded 120 million globally, driven by conflicts, natural disasters, and climate change. In response, the Danish Refugee Council (DRC) developed four AI-powered forecast models that analyse data from over 120 indicators to predict displacement, enabling humanitarian actors to better prepare for and respond to crises.
Rigorous testing, applying the models to historical data and real-time scenarios in various regions, demonstrated their effectiveness in forecasting displacement at both national and sub-district levels. Currently, the models cover 26 countries at the national level and five countries at the district level, with plans to expand to 12 additional countries.
A key learning was that internal data alone was insufficient for model development. Reflecting on this, Alexander Kjærum, Global Advisor at DRC, noted, “If we had the opportunity to start over, we would prioritize a more structured approach to data collection from the beginning.” Another challenge was helping end users understand and trust the models, which often appeared as a "black box."
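To make the approach more concrete, here is a minimal illustrative sketch of how a displacement forecast of this kind can be built and backtested on historical indicator data. This is not DRC's actual code: the indicator names, the synthetic data, and the choice of a gradient-boosting model are all assumptions for illustration only.

```python
# Illustrative sketch only: a toy displacement-forecast model trained on
# hypothetical indicator data. DRC's actual models, features, and data
# pipeline are not reproduced here.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)

# Hypothetical monthly data: each row is one country-month with a handful
# of the (120+) indicators a real model might draw on, e.g. conflict
# events, food prices, rainfall anomaly, prior displacement.
n_months = 120
X = rng.normal(size=(n_months, 4))  # stand-in indicator values
true_weights = np.array([3.0, 1.5, -2.0, 0.5])
# Displaced people (thousands), simulated for the sketch:
y = 50 + X @ true_weights + rng.normal(scale=2.0, size=n_months)

# Backtest on historical data: train on the first eight years, evaluate
# on the most recent two, mimicking "apply the model to historical data".
split = 96
model = GradientBoostingRegressor(random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
print(f"Backtest MAE: {mean_absolute_error(y[split:], pred):.2f} thousand")

# Feature importances offer one partial answer to the "black box"
# concern: they show which indicators drive the forecast.
for name, imp in zip(["conflict", "food_price", "rainfall", "prior_disp"],
                     model.feature_importances_):
    print(f"{name:>10}: {imp:.2f}")
```

Backtesting against a held-out historical period, as sketched above, is what lets a team report forecast error before a model is used operationally, and simple diagnostics such as feature importances can help end users see why a model predicts what it does.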

Spotlight: How an evidence-driven process can make AI and tech initiatives better
When introducing AI or tech into humanitarian settings, success depends on more than a great idea: the solution has to work in the real world. The Humanitarian Innovation Guide, by the Humanitarian Innovation Fund, provides a framework to test and refine solutions during the pilot phase, ensuring they are effective, ethical, and scalable.
This process starts with small-scale pilots where evidence is gathered on what works and what doesn’t. Feedback from users and affected communities plays a crucial role, helping identify improvements early on. For AI, this could mean addressing issues like algorithmic bias or ensuring a tool integrates seamlessly with existing systems.
By the time a solution is ready to scale, it’s not just bigger but better: tested, refined, and trusted. This approach minimises risks, builds stakeholder confidence, and ensures tech initiatives deliver real, measurable impact where it’s needed most. In short, starting small and learning big is the key to scaling smart.
📖 Visit the interactive Humanitarian Innovation Guide
Editor's Choice
Curated articles, tools, and events on AI and humanitarian innovation.
- Building Trust with AI (S. Spencer, 2024) highlights that while mapping AI projects in the humanitarian sector enhances transparency, a significant gap remains in evidence about their successes, limitations, and failures.
- More Humanitarian Organizations Will Harness AI’s Potential (Wired, 2024) contends that while initiatives such as Signpost, AprendAI, and Google's Flood Hub show significant promise, further evidence and more robust evaluation frameworks are essential to fully assess their effectiveness and efficiency.
- The ‘Standards of Evidence Model’ by Nesta introduces a structured framework designed to assess the effectiveness of social innovations.
- Launched in April 2024, the World Food Programme's (WFP) AI Sandbox provides a dedicated cloud platform and expert support to assess the feasibility, effectiveness, and potential impact of internal AI projects before broader implementation.
- In How AI is Transforming Humanitarian Aid, T. Byrnes (2024) sets out real-world examples of how AI is already being used to strengthen market assessments and monitoring, predict and mitigate crises, and optimise humanitarian programmes at IFRC, the World Food Programme, the Food and Agriculture Organization, and UNHCR.
Upcoming Opportunities

Google.org has launched a $30 million open call for organisations, including nonprofits, to apply for funding and participate in a six-month Generative AI Accelerator program, aiming to develop AI-driven solutions in areas such as community resilience – Deadline: 10 February 2025.
Excitement is building as many prepare to head to Paris for the Artificial Intelligence (AI) Action Summit on 10-11 February 2025, where key themes such as the future of global AI governance will take centre stage.
Join the 2025 Humanitarian Networks and Partnerships Weeks, 17-28 March 2025 (first week remote, second week in-person in Geneva), which provide a unique forum for humanitarian networks and partnerships to meet and address key humanitarian areas such as participatory action and accountability to affected communities, with technology, data, and innovation as key enablers.
Alternatively, join the AI Standards Hub's Global Summit, 17-18 March 2025, in London and online, to help shape inclusive AI standards that ensure ethical, effective, and transparent AI applications for humanitarian action.
“We see a new generation of aid workers who are accustomed to using AI in their academic and personal lives. For example, in a Tokyo University study, 97% of students reported using GPT for their coursework.” – Thomas Byrnes, Humanitarian and Social Protection Consultant
Disclaimer: The views expressed in the articles featured in this newsletter are solely those of the individual authors and do not reflect the official stance of the editorial team, any affiliated organisations or donors.