Child Safeguarding Platform for Global Humanitarian Charity

Children in Uganda

Overview

I led delivery of an AI-powered image screening system that automatically detects inappropriate content in photos submitted through the child sponsorship programme of a global humanitarian charity, one active in over 100 countries with annual revenues exceeding $3bn.

The challenge: Millions of images flow through the charity's sponsorship channels annually. Manual review couldn't scale, creating safeguarding risks for children and exposing reviewers to distressing content.

The solution: A multimodal AI pipeline with human-in-the-loop review that flags images based on the model's reasoning against the charity's safeguarding rulebook, protecting 500k+ sponsored children whilst eliminating most manual review work.

The Problem

The charity struggled with:

  • Volume mismatch - millions of images, limited review capacity
  • Safeguarding gaps - review delays left children exposed
  • Human cost - reviewers experienced emotional strain from viewing disturbing content
  • Trust and reputation - inappropriate images reaching sponsors could cause serious reputational damage and erode donor trust
  • Recall needs - had to catch harmful content reliably, even if that meant tolerating more false positives

What We Built

A multimodal AI pipeline that evaluates images against the charity's safeguarding rulebook, with human-in-the-loop review for flagged content.

How it works:

  1. AI analyses images using Amazon Nova's multimodal capabilities
  2. Model generates reasoning explaining how content relates to safeguarding rules
  3. System flags images requiring human review based on this reasoning
  4. Human reviewers handle edge cases only
  5. Decisions flow back into the charity's existing processes with no disruption to the sponsorship workflow
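The flow above can be sketched in Python. Everything here is illustrative, not the charity's actual implementation: the rulebook excerpt, the JSON response schema, the model ID, and the function names are all assumptions. The Bedrock `converse` call follows the real Amazon Nova API shape but needs AWS credentials to run, so the routing logic is kept separate and testable offline.

```python
import json

# Hypothetical rulebook excerpt and response schema -- illustrative only.
RULEBOOK_PROMPT = (
    "You are a safeguarding reviewer. Assess the image against these rules:\n"
    "1. No nudity or partial undress\n"
    "2. No identifiable location details\n"
    "3. No signs of distress or injury\n"
    'Respond as JSON: {"verdict": "pass" or "flag", '
    '"rule_ids": [...], "reasoning": "..."}'
)


def decide(model_reply: str) -> dict:
    """Turn the model's JSON reply into a routing decision.

    Fails closed: anything that is not an explicit "pass" -- including
    unparseable output -- is routed to a human reviewer, since missing a
    harmful image is worse than one extra review.
    """
    try:
        verdict = json.loads(model_reply)
    except json.JSONDecodeError:
        return {"route": "human_review", "reason": "unparseable model output"}
    if verdict.get("verdict") == "pass":
        return {"route": "auto_approve", "reason": verdict.get("reasoning", "")}
    return {"route": "human_review", "reason": verdict.get("reasoning", "")}


def screen_image(image_bytes: bytes,
                 model_id: str = "amazon.nova-lite-v1:0") -> dict:
    """Send one image plus the rulebook prompt to Amazon Nova via Bedrock.

    The model_id is an assumption -- check availability in your region.
    """
    import boto3  # deferred so decide() stays testable without AWS

    client = boto3.client("bedrock-runtime")
    resp = client.converse(
        modelId=model_id,
        messages=[{
            "role": "user",
            "content": [
                {"text": RULEBOOK_PROMPT},
                {"image": {"format": "jpeg", "source": {"bytes": image_bytes}}},
            ],
        }],
    )
    return decide(resp["output"]["message"]["content"][0]["text"])
```

The fail-closed default in `decide` is the important design choice: when the model's reasoning cannot be trusted or parsed, the image goes to a human rather than being approved.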

Impact

  • 1.9m+ images screened
  • ~95% recall rate
  • 500k+ children protected
  • Eliminated most manual review
  • Reduced staff emotional burden and operational costs

My Role

I owned the product from discovery through launch:

Discovery - interviewed stakeholders across safeguarding, operations and executive teams to understand constraints and define success metrics focused on child safety

Execution - managed the roadmap and backlog, and worked with AI engineers to evaluate prompt pipelines, prioritising safety over convenience

Integration - embedded the system into the charity's workflows with minimal disruption, and facilitated reviewer training on the new platform

Technologies

Amazon Nova (multimodal AI), AWS backend, React frontend, human-in-the-loop decision systems

Why It Mattered

This project required balancing technical delivery with ethical responsibility. Every design choice prioritised child protection - optimising for recall rather than typical accuracy metrics, preserving human judgement for complex cases, and building resilience into a system where failures have real consequences.
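To make the recall-versus-accuracy trade-off concrete, here is a minimal sketch of threshold selection on a labelled validation set. It assumes a numeric risk score per image, which is my simplification for illustration: the production system described above routed on the model's textual reasoning, not a score.

```python
def pick_threshold(scores, labels, min_recall=0.95):
    """Choose the highest flagging threshold that still meets a recall target.

    scores: hypothetical per-image risk scores in [0, 1]
    labels: 1 = truly harmful, 0 = benign
    A lower threshold flags more images: recall rises, but so does the
    false-positive load on human reviewers -- the deliberate trade-off here.
    """
    positives = sum(labels)
    # Try thresholds from strictest (fewest flags) downwards.
    for t in sorted(set(scores), reverse=True):
        caught = sum(1 for s, y in zip(scores, labels) if y and s >= t)
        if positives and caught / positives >= min_recall:
            return t
    return 0.0  # no threshold meets the target: flag everything
```

Optimising a typical accuracy metric would instead balance false positives and false negatives symmetrically; here a missed harmful image costs far more than an unnecessary review, so the threshold is pushed down until the recall target is met.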

The result: technology that protects the charity's reputation and strengthens trust with donors and sponsors, whilst serving both the children it safeguards and the staff who review content.


Cover photo by Brian Wegener on Unsplash