Author: Core Team

19 posts by this author

Core Team
The Core Team is a small, focused group of practitioners and researchers committed to advancing responsible, inclusive, and context-aware AI from the Global South. We work at the intersection of technology, policy, and real-world impact, with experience spanning AI governance, risk, data systems, and public interest applications. Our work is driven by a simple belief: AI frameworks built in data-rich, institution-heavy environments cannot be transplanted wholesale into developing contexts. The Global South requires its own evidence, its own guardrails, and its own voice in shaping how AI is designed, deployed, and governed.

Today, the Core Team operates as a compact nucleus. Over time, it will expand to include researchers, technologists, and policy advocates across the Global South, collaborating across countries and sectors to build shared knowledge, practical tools, and grounded policy proposals that reflect local realities while engaging globally.
How AI Bias Locked Out Millions of Job Seekers (A Case Study on Mobley v. Workday)

The Mobley v. Workday lawsuit represents a landmark shift in legal accountability, establishing that AI software vendors can be held liable as agents of employers for discriminatory hiring practices that exclude qualified candidates. The case highlights how black-box algorithms can systematically penalize individuals based on race, age, and disability through biased training data and the use of seemingly neutral proxies. This legal evolution signals a broader mandate for Accountability by Design, requiring employers and developers to ensure transparency and human oversight in automated recruiting systems.

Core Team
AI Hiring Bias Exposed: How SiriusXM’s Algorithm Rejected Qualified Candidates

This article examines the landmark Harper v. Sirius XM Radio, LLC lawsuit, highlighting how automated hiring systems can institutionalize racial discrimination through proxy variables such as zip codes and educational institutions. By analyzing the technical and systemic failures of the iCIMS implementation, it offers a critical roadmap for corporate AI governance to prevent qualified talent from becoming algorithmically invisible in an era of increasing regulatory scrutiny.

Core Team
AI Hiring Gone Wrong: How Eightfold’s Social Media Profiling Sparked a Fairness and Consent Crisis

A 2026 lawsuit against Eightfold AI reveals how job applicants may have been secretly scored using social media and other online data without their consent or knowledge. The case exposes how AI hiring systems can replicate bias, exclude candidates with thin digital footprints, and create significant legal and fairness risks. What happens when invisible algorithms decide who gets a chance?

Core Team
When Algorithms Decide Who Recovers: The UnitedHealth nH Predict Case

In 2023, a lawsuit revealed how UnitedHealth used an AI system to determine when elderly patients should stop receiving care. The nH Predict case highlights how cost-driven algorithms can override clinical judgment and introduce systemic bias into healthcare decisions. The case raises critical questions for policymakers, especially in the Global South, about the risks of scaling AI without adequate oversight.

Core Team
The Global South AI Labor Index: A Framework for Monitoring AI’s Workforce Impact

Artificial intelligence is beginning to reshape labor markets worldwide, yet most current studies measure its impact using indicators designed for advanced economies. In the Global South, workforce disruption is more likely to appear through rising informality, wage compression, underemployment, and shrinking entry-level opportunities rather than immediate job losses. This policy brief introduces the Global South AI Labor Index and an accompanying AI Labor Risk Dashboard to help governments detect early signals of AI-driven workforce transformation. Together, these tools provide a practical monitoring framework for managing the labor impacts of AI in developing economies.

Core Team