Governments around the world are using their citizens' personal data, from a child's travel history to machine-made guesses about who someone sleeps with, and combining it into "fraud risk scores". How these algorithms work is largely hidden from the public, yet they are already transforming once well-functioning societies into surveillance cultures defined by distrust.
The study quoted in the article is largely based on an investigation in the Dutch city of Rotterdam, which evidently uses these algorithms. What the article does not mention, though, is that in 2020 a Dutch court ruled that a government system using artificial intelligence to identify potential welfare fraudsters was illegal: