Trust and Disinformation in the Age of AI: A Humanitarian’s Guide to LLMs
Written with Katherine Leonard
October 2023
Internal report - not released publicly
AI-generated summary

Emerging AI systems are transforming how information is produced, shared, and trusted - especially in humanitarian contexts where accuracy and credibility are essential. The rapid rise of large language models has made it easier than ever to generate human‑like text, but these same strengths create new vulnerabilities. Humanitarian practitioners now face a landscape where misinformation can spread unintentionally through well‑meaning use of AI tools, and where disinformation campaigns can be more sophisticated, multilingual, and scalable than anything previously seen.

The report examines how these models work, why they so often produce persuasive but inaccurate content, and how dataset limitations, hallucinations, and built‑in biases can erode trust. It highlights the ease with which malicious actors can circumvent safety guardrails, automate harmful narratives, and exploit AI to fabricate evidence, mimic institutional voices, or amplify conspiracy theories. These capabilities fundamentally reshape the threat environment for humanitarian work, increasing risks for personnel and potentially undermining core principles such as neutrality and impartiality.

At the same time, the analysis stresses that these tools hold meaningful potential. When used responsibly, AI can help humanitarians communicate complex issues more clearly, expand access to trustworthy information, and support monitoring, analysis, and fact‑checking efforts. Because models excel at pattern recognition and at generating accessible explanations, they can serve as powerful allies in understanding and countering harmful information flows.

The report concludes with practical recommendations for organizations and field teams: develop clear policies on data use and disclosure, keep a human in the loop for all AI‑generated outputs, invest in digital literacy and MDH training for local offices, and strengthen interdisciplinary collaboration.
Ultimately, the humanitarian sector must balance the risks and opportunities of AI, taking proactive steps to protect trust while leveraging these technologies to strengthen response, communication, and community engagement.