
Amsterdam's Welfare AI Experiment Highlights Challenges in Fairness

Amsterdam’s ambitious AI project, Smart Check, aimed to detect welfare fraud fairly but ultimately failed to eliminate bias. Despite rigorous testing and ethical safeguards, the system wrongly flagged applicants and exhibited biases of its own. The experiment underscores the complexity of creating truly fair AI in social services and the ongoing debate over responsible AI use.

Published June 11, 2025 at 06:13 AM EDT in Artificial Intelligence (AI)

Amsterdam embarked on a high-stakes experiment to use artificial intelligence for detecting welfare fraud with a system called Smart Check. The city aimed to break from a decade-long pattern of discriminatory algorithms by designing a fair and transparent AI model to evaluate welfare applications. However, despite extensive efforts, the project revealed fundamental challenges in achieving fairness in algorithmic decision-making.

Smart Check was designed to identify potentially fraudulent welfare applications by analyzing 15 characteristics while deliberately excluding sensitive demographic factors like gender, nationality, and age to avoid bias. The city employed an explainable boosting machine algorithm to enhance transparency and conducted rigorous bias testing, consulting experts and stakeholders throughout development.
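To make the design concrete, here is a minimal sketch of what training an explainable boosting machine on non-sensitive application features could look like, using the open-source InterpretML library. The CSV path, feature names, and label column are hypothetical stand-ins, not Smart Check's actual 15 characteristics or data.

```python
# Minimal sketch: training a glass-box explainable boosting machine (EBM)
# on non-sensitive welfare-application features. Column names and the CSV
# path are hypothetical; they are not Amsterdam's real inputs.
import pandas as pd
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Historical applications; the label marks cases later judged "investigation-worthy".
df = pd.read_csv("applications.csv")

# Deliberately exclude sensitive demographics (gender, nationality, age).
features = [
    "months_since_last_benefit",
    "num_prior_applications",
    "declared_assets_eur",
    "household_size",
    "missing_documents_count",
]
X, y = df[features], df["flagged_for_investigation"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# The EBM is additive and interpretable: each feature's contribution to a
# score can be inspected and audited after training.
ebm = ExplainableBoostingClassifier(random_state=42)
ebm.fit(X_train, y_train)

print("Held-out accuracy:", ebm.score(X_test, y_test))

# Global explanation shows per-feature shape functions reviewers can examine.
show(ebm.explain_global())
```

Even with a glass-box model like this, being able to see how a score is produced does not by itself guarantee the score is fair, which is the tension the pilot ran into.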

Yet, the pilot revealed that Smart Check still exhibited bias, wrongly flagging applicants based on nationality and gender, sometimes inverting the bias seen in human caseworkers. Attempts to mitigate bias through data reweighting improved fairness on some metrics but introduced new complexities, such as intersectional biases that were difficult to detect or correct.
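The data reweighting described above is often implemented with a scheme along the lines of Kamiran and Calders' reweighing, where each (group, label) cell receives a weight that makes group membership statistically independent of the label in the training data. The sketch below is an illustrative version of that idea, not the city's actual procedure; the column names are hypothetical.

```python
# Illustrative sketch of group/label reweighing (Kamiran & Calders style):
# weight each training example so that a sensitive attribute (e.g. a
# nationality group) becomes independent of the outcome label in the
# weighted data. Column names are hypothetical, not Amsterdam's pipeline.
import pandas as pd

def reweigh(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    n = len(df)
    weights = pd.Series(1.0, index=df.index)
    for g in df[group_col].unique():
        for y in df[label_col].unique():
            in_cell = (df[group_col] == g) & (df[label_col] == y)
            n_cell = in_cell.sum()
            if n_cell == 0:
                continue
            # Expected cell size under independence, divided by observed size.
            expected = (df[group_col] == g).sum() * (df[label_col] == y).sum() / n
            weights[in_cell] = expected / n_cell
    return weights

# Usage: pass the weights to any model that accepts sample_weight, e.g.
# w = reweigh(df, group_col="nationality_group", label_col="flagged_for_investigation")
# model.fit(X_train, y_train, sample_weight=w.loc[X_train.index])
```

Reweighing one attribute at a time is also why intersectional biases can slip through: the marginal distributions balance out while specific joint subgroups, such as a particular nationality-and-gender combination, remain skewed.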

The project faced opposition from welfare recipients’ advocates and the city’s Participation Council, who feared the AI would exacerbate harm to vulnerable groups. Despite the city’s commitment to responsible AI principles, the system’s inability to outperform human judgment or eliminate bias led to the pilot’s termination in late 2023.

Amsterdam’s experiment highlights the broader dilemma facing governments worldwide: can AI systems tasked with critical social decisions ever be truly fair? The legacy of historic biases embedded in training data, the complexity of defining fairness mathematically, and the ethical concerns of profiling citizens remain formidable obstacles.
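The difficulty of defining fairness mathematically can be made concrete with a toy example: two reasonable criteria, demographic parity (equal flag rates across groups) and false positive rate parity (equal rates of wrongly flagging honest applicants), can disagree on the very same predictions whenever base rates differ between groups. The numbers below are invented purely to show that tension.

```python
# Toy illustration (invented numbers): the same predictions can satisfy one
# fairness definition while violating another when group base rates differ.
import numpy as np

def flag_rate(pred):
    return pred.mean()

def false_positive_rate(pred, truth):
    honest = truth == 0
    return pred[honest].mean()

# Group A: 10% of applications are actually fraudulent; Group B: 30%.
truth_a = np.array([1] * 10 + [0] * 90)
truth_b = np.array([1] * 30 + [0] * 70)

# A classifier that flags exactly the fraudulent cases in each group.
pred_a, pred_b = truth_a.copy(), truth_b.copy()

# Flag rates: 0.10 vs 0.30 -> demographic parity is violated.
print("Flag rate A:", flag_rate(pred_a), "B:", flag_rate(pred_b))

# False positive rates: 0.0 vs 0.0 -> FPR parity is satisfied.
print("FPR A:", false_positive_rate(pred_a, truth_a),
      "B:", false_positive_rate(pred_b, truth_b))
```

When the underlying rates differ between groups, no flagging rule can satisfy both criteria at once except in degenerate cases, so choosing which definition of fairness to enforce is itself a policy decision rather than a purely technical one.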

The city’s experience also underscores the importance of involving affected communities in AI governance and recognizing that fairness is not merely a technical problem but a political and ethical one. While AI can assist in improving public services, it cannot replace the nuanced judgment and trust built through human interaction.

Ultimately, Amsterdam’s Smart Check serves as a cautionary tale about the limits of responsible AI frameworks when applied to complex social welfare systems. It invites policymakers and technologists to rethink how fairness is defined, measured, and implemented, emphasizing transparency, accountability, and the voices of those most impacted.

As AI continues to shape public sector decision-making, Amsterdam’s experience reminds us that technology alone cannot solve systemic social issues. Instead, it calls for a balanced approach that combines ethical AI development with human oversight and meaningful community engagement to build trust and fairness in welfare systems.

QuarkyByte offers deep insights into ethical AI development and deployment, helping governments and organizations navigate fairness challenges. Explore how our AI auditing frameworks and bias mitigation strategies can improve public sector AI projects and protect citizen rights.