Azra Hoosen | ah@radioislam.co.za
30 November 2024 | 13:15 CAT
The Danish welfare authority, Udbetaling Danmark (UDK), faces allegations of discrimination against people with disabilities, low-income groups, migrants, refugees, and marginalised racial communities through its use of artificial intelligence (AI) tools for social benefits fraud detection. This is according to Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State, a new report released by Amnesty International.
UDK denies the accusation, arguing that its practices are legally grounded and do not constitute social scoring. However, it has yet to provide a detailed rebuttal of Amnesty’s claims.
Speaking to Radio Islam International, Hajira Maryam, Media Manager at Amnesty Tech, explained that a two-and-a-half-year investigation revealed how the use of automation, artificial intelligence, and advanced machine learning tools risks discriminating against people with disabilities, low-income individuals, migrants, refugees, and marginalised racial groups.
“The authorities are using fraud detection algorithms, which are paired with extensive mass surveillance practices, and this has led people to unwillingly or unknowingly give up their rights to privacy. Across society as a whole, this has created an atmosphere of fear,” she said.
Algorithms and Mass Surveillance
UDK, along with Arbejdsmarkedets Tillægspension (ATP), administers social benefit systems in Denmark. ATP, in partnership with private corporations such as NNIT, has developed over 60 algorithmic models to detect fraud. Amnesty’s findings suggest these algorithms rely on vast amounts of personal data, including residency status, citizenship, family relationships, and even travel history. “Private corporations play a big role in developing these tools,” said Maryam.
She highlighted that UDK’s fraud detection algorithms are connected to approximately nine databases. “These algorithms hold information about a person’s life as a whole, including citizenship, which has been confirmed by the UDK authority. We argue that the use of citizenship as a direct parameter in these algorithms directly discriminates against people, especially those from racialised groups,” she added.
She noted that Amnesty International was granted partial access to only four of the 60 algorithmic models. These include Really Single, which assesses relationship status based on “unusual” living arrangements, and Model Abroad, which flags individuals with ties to non-EEA countries for further investigation. Amnesty claims these models disproportionately affect people with disabilities, older individuals in unconventional relationships, and migrants in multi-generational households.
“Overall the picture is quite grim,” said Maryam.
Psychological Toll and Discrimination
This report builds on Amnesty’s ongoing research into public sector digitalisation in countries such as the Netherlands, India, and Serbia, highlighting the global risks associated with algorithmic decision-making.
“Psychologically, it takes a toll on people who are already marginalised in society, and overall it is quite unfair. This is a pattern when these systems are deployed, especially in social security and welfare services: government officials use AI as a cost-effective measure, claiming it will make everything efficient, but when people are wrongly flagged they go through extensive bureaucratic procedures, and many of them get very scared,” she said.
According to Maryam, there is a war on welfare. “The narrative is that the welfare state is reserved for a few, the dominant group, the ‘Danes’, rather than ‘non-Westerners’ who have citizenship. The research is contesting that entire narrative,” she said.
She emphasised a global trend of deploying AI in welfare distribution without fully understanding the human rights harms it can cause. “Technology is not introduced in a vacuum; it is introduced in a context, and it perpetuates the harms that pre-exist in those contexts against communities who are already discriminated against. These systems have a pattern of using proxy data, which is also extremely dangerous and does discriminate against people. We need binding regulations, and we need more transparency,” she said.
Calls for Reform
Amnesty calls for an immediate suspension of these practices, greater transparency, and compliance with EU regulations. The organisation also urges the European Commission to clarify which AI practices fall under the prohibition of social scoring.
Denmark, under EU and international human rights laws, has an obligation to safeguard rights such as privacy, equality, and non-discrimination. Amnesty asserts these principles must remain central to welfare administration.
LISTEN to the full interview with Muallimah Annisa Essack and Hajira Maryam, Media Manager at Amnesty Tech, here.