Governments increasingly use data analysis to make decisions that affect citizens. But how transparent are these practices? In a study summarised here, Nicholas Diakopoulos had students file freedom of information requests to obtain, among other things, the algorithms behind government decision-making. Most requests were denied, for a variety of reasons. Some states claimed algorithms aren’t “documents” covered by FOI legislation; others said they were copyrighted.
The article reminded me of the risk profiles Dutch municipal welfare agencies use to decide whom to subject to rigorous checks, including highly intrusive home searches. As early as 2006, I was involved in a survey by Dutch trade union FNV which found that two in five municipalities used risk profiles for that purpose:
This has the advantage that, for a large group of people, unnecessary routine checks can be dispensed with. However, there’s virtually no debate about which criteria can be used without causing unacceptable unequal treatment. Is it OK to select people because they’ve worked in the catering industry, or because they’ve been self-employed? Or because of their nationality?
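To make concrete what such a risk profile amounts to in practice, here is a minimal sketch of a rule-based scoring approach. The criteria, weights, and threshold are entirely hypothetical (loosely echoing the examples in the survey quote above); actual municipal profiles are not public, which is precisely the problem.

```python
# Hypothetical sketch of a rule-based risk profile: each criterion adds
# points to a score, and recipients above a threshold are flagged for
# extra checks. All criteria and weights here are invented for illustration.

def risk_score(recipient: dict) -> int:
    """Sum the (hypothetical) risk points for one welfare recipient."""
    score = 0
    if recipient.get("worked_in_catering"):
        score += 2  # example criterion from the survey quote
    if recipient.get("formerly_self_employed"):
        score += 2  # example criterion from the survey quote
    if recipient.get("recent_address_change"):
        score += 1  # invented additional criterion
    return score

def select_for_checks(recipients: list[dict], threshold: int = 3) -> list[str]:
    """Return the ids of recipients whose score meets the threshold."""
    return [r["id"] for r in recipients if risk_score(r) >= threshold]

recipients = [
    {"id": "A", "worked_in_catering": True, "formerly_self_employed": True},
    {"id": "B", "recent_address_change": True},
]
print(select_for_checks(recipients))  # prints ['A']
```

Even in this toy version, the policy questions are visible in the code: which attributes appear at all, and how heavily each is weighted. Those are exactly the choices an FOI request would try to surface.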
When the government uses algorithms to exert control over citizens (or when it outsources that task, for that matter), there should be accountability. So would it be possible to obtain such algorithms through an FOI request?
I found one decision that suggests that algorithms aren’t a priori excluded from FOI requests, at least in the eyes of the Utrecht municipality (I used Open State’s FOI search engine to find it). But welfare recipients’ organisation Bijstandsbond informed me that an FOI request had been filed in the past to obtain the risk profiles used by the Amsterdam municipal welfare agency. That request was denied.