The “Childcare Benefit Scandal” has rocked politics in the Netherlands. For years, tens of thousands of families were falsely accused of welfare fraud in a scandal involving algorithmic discrimination and institutional racism. The Dutch parliament has labeled this an “unprecedented injustice,” and the entire government resigned over the scandal in early 2021.
What had gone so terribly wrong?
Under the means-tested ‘childcare benefit,’ parents pay for private daycare and claim their costs back from the government. Tens of thousands of parents, accused of having claimed too much, were ordered to pay back huge debts, sometimes reaching into six figures, that they supposedly owed. They had their benefits suspended; thousands were pushed (further) into poverty and debt; and enormous psychological harm followed. Couples separated; some died by suicide.
The overwhelming majority of the victims of this scandal were from immigrant backgrounds.
An algorithmic tool had been introduced within the childcare benefits system to calculate the probability of a recipient committing fraud. It disproportionately flagged parents with non-Dutch nationality as ‘high-risk.’ The design and implementation of this system revealed clear xenophobic assumptions that non-citizens are more likely to commit fraud.
The root causes of the Childcare Benefit Scandal therefore lie in the underlying political climate. Though the Netherlands has long been regarded as a “progressive beacon to the world” with a well-developed welfare state, it has shifted decidedly to the right in recent decades. Since 2010, when significant electoral gains for far-right politician Geert Wilders led his party to tacitly support a new coalition, the government has taken an increasingly tough approach to benefit fraud. It began strictly enforcing anti-welfare-fraud measures which, reflecting growing xenophobic sentiment, focused especially on immigrants and communities of color.
The tax authority was enlisted into this welfare fraud crackdown. Responsible for administering childcare benefits since 2005, it had suffered years of cutbacks and long lacked the capacity to undertake rigorous checks before making payments. Throughout the late 2000s, the authority often made mistakes and claimed erroneous overpayments back from parents. But in 2013, as the government accelerated its anti-welfare-fraud agenda, the authority was granted a larger operating budget on the condition that it reduce fraud. Shortly afterwards, it introduced the algorithmic tool into the childcare benefit program.
This was not the first time algorithmic systems had been used to try to predict welfare fraud in the Netherlands. For years, many public bodies could use the ‘System Risk Indication’ (SyRI) to exchange data about individuals’ employment, benefits, and taxes, among other categories, and to analyze this pooled data with an algorithmic system that flagged ‘high-risk individuals’ for further investigation. Like the childcare benefit algorithmic system, SyRI was targeted and discriminatory: it was deployed exclusively in neighborhoods with high numbers of low-income households, some with disproportionately many residents from immigrant backgrounds.
And the Netherlands is not unique: governments everywhere are introducing similar systems into welfare programs. From Chile to the United States, the same racial profiling and “poverty profiling” are occurring as these systems are deployed.
But the Netherlands seems to be an especially promising place to hold the government to account for such developments. In February 2020, the Hague District Court held that SyRI violated the human right to private and family life. Referring to an amicus intervention by the former United Nations Special Rapporteur on extreme poverty and human rights, to which our project contributed, the court noted that there is “a risk that SyRI inadvertently creates links based on bias, such as a lower socio-economic status or an immigration background.” The legislation was struck down and SyRI was stopped. This case was hailed as “pioneering,” the first in the world to halt a digital welfare system on human rights grounds.
The Childcare Benefit Scandal, too, seems at first glance to represent a model of political accountability for harmful digital welfare systems. Cumulative political condemnation apparently succeeded: members of Parliament and investigative journalists mounted pressure, a parliamentary committee’s stinging report entitled “Unprecedented Injustice” concluded that “fundamental principles of the rule of law were violated,” and revelations that the tax authority obstructed investigations and that the government misled parliament intensified the scandal. As a result, the entire cabinet resigned and the tax authority acknowledged institutional racism.
These victories nonetheless ring hollow. The government—which, despite the collective resignation, is composed of the same political parties and led by the same Prime Minister—is doubling down on the very same approach. A new law, dubbed “Super-SyRI,” is pending in parliament. The proposed law will enable data-sharing among government agencies and private parties to “prevent and combat serious crime” through algorithmic analysis of even larger pools of data than SyRI used. Rights groups say the law will lead to “arbitrary data surveillance,” will allow profiling similar to that in the Childcare Benefit Scandal, and is even more far-reaching than SyRI.
As was clear from a panel we hosted this summer, a growing coalition stands ready to challenge “Super-SyRI.” Further, following intense political pressure, the government has announced that it will soon require public bodies to disclose the algorithmic systems they use in a mandatory “algorithm register” and to undertake human rights impact assessments before deploying such systems. These are important wins.
But the SyRI case, the Childcare Benefit Scandal, and the proposed “Super-SyRI” system each demonstrate that solely focusing on the algorithmic systems at hand will be insufficient. A broader reckoning with the underlying political drivers is also required. These systems were introduced in a political climate in which individuals from low-income and immigrant backgrounds are viewed as “suspect” and in which the government adopts algorithmic systems to detect welfare fraud while tax fraud by wealthier groups is not subjected to similar digitally mediated scrutiny. Because this context remains unchanged, it is unsurprising that low-income and immigrant groups are still subjected to similarly harmful digital systems and that more far-reaching versions are being introduced.
Digital rights groups are doing fantastic work advocating for protection from such systems. But these cautionary tales from the Netherlands highlight that all human rights organizations have a stake in these concerning developments, as they proliferate around the world.