Denmark’s AI-driven welfare system risks discrimination, over-surveillance

Welfare recipients in Denmark risk being excessively monitored and discriminated against by artificial intelligence (AI) algorithms designed to detect benefits fraud, according to a new report from Amnesty International.

The human rights organization warned that the growing reliance on mass surveillance in the country's social benefits system undermines the very purpose of those programs, which are meant to support vulnerable individuals.

Hellen Mukiri-Smith, an AI researcher and co-author of the report, said the algorithms used by the Danish welfare agency Udbetaling Danmark have produced a system that targets rather than supports citizens. “Mass surveillance has created a social benefits system that risks targeting, rather than supporting, the very people it was meant to protect,” she said.

Amnesty International analyzed four of the 60 algorithms used by Udbetaling Danmark, which draw on personal data from public databases. The algorithms are designed to detect fraud across a range of welfare benefits, including pensions, parental leave, sick leave, and student grants. The data used includes sensitive information such as residence, travel history, citizenship, income, family relationships, and place of birth.

One particular algorithm, dubbed the “Model Abroad,” scrutinizes recipients’ nationality to determine if they are fraudulently collecting benefits while living abroad. The report points out that this model disproportionately impacts individuals based on their citizenship, which Amnesty argues violates their right to non-discrimination.

David Nolan, another author of the report, noted that 90 percent of cases flagged by the “Model Abroad” algorithm ultimately turn out not to be fraudulent, emphasizing its potential to cause harm. “We argue this does directly violate their right to non-discrimination because of the use of citizenship,” Nolan told AFP.
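The discrimination mechanism the report describes can be illustrated with a deliberately simplified sketch. Nothing below is drawn from Udbetaling Danmark's actual system: the scoring rule, weights, threshold, and field names are all invented assumptions, meant only to show how a risk score that includes citizenship can flag two recipients with identical behaviour differently.

```python
# Hypothetical sketch of how a citizenship feature can skew fraud flagging.
# This is NOT Udbetaling Danmark's "Model Abroad"; all weights, thresholds,
# and field names here are invented for illustration.

FLAG_THRESHOLD = 0.5  # assumed cutoff above which a case is flagged for review

def risk_score(recipient: dict) -> float:
    """Toy risk score in which citizenship alone can cross the threshold."""
    score = 0.0
    if recipient["citizenship"] != "DK":   # assumed penalty for non-Danish citizenship
        score += 0.5
    if recipient["months_abroad"] > 6:     # assumed penalty for long stays abroad
        score += 0.4
    return score

# Two recipients with identical behaviour, differing only in citizenship.
recipients = [
    {"name": "A", "citizenship": "DK", "months_abroad": 2},
    {"name": "B", "citizenship": "SE", "months_abroad": 2},
]

for r in recipients:
    flagged = risk_score(r) >= FLAG_THRESHOLD
    print(f"{r['name']} ({r['citizenship']}): flagged={flagged}")
# A is not flagged; B is flagged purely because of citizenship.
```

Combined with the 90 percent false-positive figure Nolan cites, a rule of this shape would concentrate the burden of wrongful investigation on non-citizens.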

Amnesty International has called on the Danish government to take immediate action to address these concerns. Specifically, the organization urges authorities to increase transparency around the use of these algorithms, allow for audits of the systems, and stop using sensitive data related to nationality or citizenship. “Feeding personal data into an algorithmic model needs to be done with much greater care,” Nolan added.

The report also highlights a potential side effect of the reliance on digital systems in welfare services: the exclusion of marginalized groups. Amnesty warns that vulnerable populations, such as the elderly and certain foreign nationals, may be disproportionately affected by digital barriers that could prevent them from accessing benefits they are entitled to.

The report is the latest in a series of critiques of algorithmic use in social services in Western countries.

In October, 15 organizations, including Amnesty, filed a legal complaint against the French welfare agency CNAF for using similar algorithms to detect undue payments.
