Article Series

Browse our collection of multi-part article series on various topics

AI Fairness 101 - Real-World Incidents (7 parts)

  1. When an Algorithm Broke Thousands of Families: The Netherlands Child Welfare Scandal

    How a design-phase failure in the Dutch childcare fraud algorithm created one of the worst AI governance disasters in Europe — and what the Global South must learn from it.

  2. Access Denied: How India's Digital 'Cure-All' Became a Real-World Fairness Crisis

    How Aadhaar’s promise of digital inclusion turned into one of the largest algorithmic exclusion crises in the world.

  3. The Golden Touch of Ruin: How Michigan’s MiDAS Algorithm Falsely Accused 40,000 People of Fraud

    A deep dive into Michigan’s MiDAS unemployment fraud algorithm — and how design-phase failures, automation bias, and the removal of human oversight turned efficiency into injustice.

  4. The COMPAS Algorithm Scandal: When AI Decides Who Goes to Jail ⚖️

    As AI enters courts and welfare systems worldwide, the COMPAS debate reveals a critical lesson: fairness depends on context, and exporting models without reform risks scaling inequality.

  5. The Optum Healthcare Algorithm Bias Against Black Patients (2019)

    A 2019 case study of the Optum healthcare algorithm, showing how proxy bias produced racial disparities and left Black patients under-served.

  6. When Algorithms Decide Who Recovers: The UnitedHealth nH Predict Case

    In 2023, a lawsuit revealed how UnitedHealth used an AI system to determine when elderly patients should stop receiving care. The nH Predict case highlights how cost-driven algorithms can override clinical judgment and introduce systemic bias into healthcare decisions. It raises critical questions for policymakers, especially in the Global South, about the risks of scaling AI without adequate oversight.

  7. Algorithmic Gender Bias — Lessons from the Amazon AI Hiring Failure

    Amazon built an AI to find the best candidates. It ended up filtering out women. Amazon’s hiring tool is a clear example of how gender bias can be embedded and amplified through algorithms. In the Global South, the risks are even higher.