I disagree with many clever minds when it comes to algorithms. Take for instance the following sentence: “The opacity of the algorithms’ power means that it isn’t easy to determine when algorithmic governance stops serving the common good and instead becomes the servant of the powers that be.” A pretty common claim. I am fine with it, except when it blames “the opacity”. A regrettable misunderstanding is at play, which paralyzes some people’s imagination. I think there are issues with algorithms, and I would like to provide a standpoint from which everyone can be critical, mobilize their political imagination, and step into the debate. My point is dead simple: we do not need to understand how algorithms think as long as we acknowledge that they have agency.
Algorithms, complexity and I go back a long way, but here, like anyone else, I am simply concerned with algorithms impacting my life. They might be hidden and their influence indirect, but their effects are nevertheless real. I am writing this post in reaction to an article written by two Danish thinkers, Jacob Mchangama and Hin-Yan Liu, titled The Welfare State Is Committing Suicide by Artificial Intelligence. It is a short read, and all my quotes come from it. The authors reflect on the recent use of “algorithms to identify children at risk of abuse” in the Danish welfare system. Their main point is that “democratic infrastructures” and “judicial procedures” cannot keep algorithmic power in check, because we “will be largely unable to understand and explain why the algorithm” took its decision, which makes it “impossible for courts to hold [it] accountable.” They locate the source of the problem in the opacity of algorithms, which they say allows them to “take a toll on privacy, family life, and free speech, as individuals will be unsure when their personal actions may come under the radar of the government.” I agree that the situation requires scrutiny from the public, but beyond that, I will not waste your time with my opinion. I just want to explain why I disagree that opacity prevents us from regulating algorithms. The following quote exposes this precise point.
“Consider the Danish case: the civil servants working to detect child abuse and social fraud will be largely unable to understand and explain why the algorithm identified a family for early intervention or individual for control. As deep learning progresses, algorithmic processes will only become more incomprehensible to human beings, who will be relegated to merely relying on the outcomes of these processes, without having meaningful access to the data or its processing that these algorithmic systems rely upon to produce specific outcomes. But in the absence of government actors making clear and reasoned decisions, it will be impossible for courts to hold them accountable for their actions.”
Indeed, algorithms are political beings. Insofar as they take decisions, they produce an effect, hence they have agency. And it is fair to expect them to become “more incomprehensible to human beings.” But concluding that this kind of opacity prevents us from regulating them is misunderstanding what it means to comprehend an algorithm. Contrary to what the authors believe, we have many ways to evaluate an algorithm from its outcomes. We can know it in depth and make many reliable predictions just by analyzing its outputs. This is not free, it comes at a cost on top of developing the algorithm itself, but it does not require understanding how it works, how it thinks. This is sometimes called post-hoc interpretability, to emphasize that the interpretation does not rely on the internal mechanics of the algorithm. This is typically the case with deep learning, where the algorithm is trained in a way that is “incomprehensible to human beings.” This is nothing special, just new to those who thought we had a divine right to understand everything in this world. As for us who feel the constant pain of being too stupid for what the world has to offer, we are used to having our capabilities exceeded, and we find workarounds to keep going – when we can. Complexity is a name we sometimes use to talk about that. Post-hoc understanding is a workaround we use to keep going with algorithms that are too complex.
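To make this concrete, here is a minimal sketch of what a post-hoc evaluation can look like. Everything in it is hypothetical – the `model` object, the case records, the grouping – and it sketches the general technique, not the Danish system. The point is that nothing below requires looking inside the model:

```python
# A minimal sketch of post-hoc evaluation: we treat the trained model as
# a black box and study only its outputs. All names here (model, cases,
# "group") are hypothetical placeholders, not any real system's interface.
from collections import defaultdict

def flag_rates_by_group(model, cases):
    """Rate at which the black box flags cases, broken down by group.

    `model` only needs a predict(features) -> bool method; we never
    inspect its weights or internal logic.
    """
    flagged = defaultdict(int)
    totals = defaultdict(int)
    for case in cases:
        group = case["group"]  # e.g. a neighborhood or a population
        totals[group] += 1
        if model.predict(case["features"]):
            flagged[group] += 1
    return {g: flagged[g] / totals[g] for g in totals}

# A large gap between groups is a finding in itself, obtained without
# ever understanding how the algorithm "thinks".
```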
To me this whole story feels like there is not much to write about, but I know that is false because so many people feel threatened by opacity. It may come from a misplaced confidence in our ability to contain and master all the things we produce, despite the accumulated evidence that this is not the case, culminating in our inability to keep our own habitat, the surface of our planet, in a state that suits our needs. Common misconceptions about what does or does not act are blinding us, for instance when we think that human beings have a power to act that the surface of our planet is lacking – but it is giving us hot feedback! Algorithms are in the same situation. Once we acknowledge that they act by themselves (in the sense that they are opaque to us) and consider them accordingly, ways to regulate them in a democratic setting naturally appear. They do because we are surprisingly skilled at post-hoc interpretation, something we use every day without even thinking about it. Except we don’t usually do it for artificial things, only for other human beings.
Regulating the agency of human beings is the point of all politics, even though we barely know how the human mind works. The questions that seem to bother us about algorithms sound surprisingly empty when asked about persons. Let us call our algorithm Donald. What if civil servants working to detect fraud were largely unable to understand and explain why Donald identified a family for early intervention? Well, this would be an issue, but not much more than an incompetent employee. Our societies have invented many ways to deal with such things. We might stick with Donald until someone complains, and then fire him. Or we might evaluate his work against a series of indicators and check whether he does his job. We might hire different Donalds and conduct an independent audit. We might ask people to vote. None of these solutions involves looking inside his brain. And we would certainly not conclude that in the absence of clear and reasoned decisions, it is impossible to hold Donald accountable for his actions.
Understanding an algorithm does not even dispense us from regulating it. Let us assume that black people are overrepresented in Donald’s targets, and a journalist claims Donald is racist. Are you surprised that Donald could be racist? People are constantly surprised that algorithms can be. Should we assume that Donald is fair? Of course not. What makes him racist, the way he thinks or the way he acts? Imagine that we can look into Donald’s mind and we find a sound rationale, where race is not a factor in the decision but geographical location is, and it turns out that mostly black people live in the targeted locations. Does that make Donald less racist? Algorithms do not dispense us from dealing with such political questions, and our solutions as a society are not so different for algorithms than for people. Even the fact that entire classes of algorithms might be flawed is not a particular problem. #BlackLivesMatter is scrutinizing an entire class of human beings.
Algorithms are problematic, but their problems do not arise from their opacity. They arise from our democratic institutions not acknowledging their agency. We saw the American Congress question Mark Zuckerberg, but it should have questioned Facebook’s algorithms first. Algorithms are not so mute: they can show where responsibility flows. Of course Congress did not have the expertise to question algorithms, but it was also powerless because it had no practical means to scrutinize them. Why would we leave beings with such powers out of any jurisdiction? We cannot just let their owners have the exclusivity of their scrutiny. That would be an incredibly naïve mistake for a democracy, a mistake we would never make if their agency were more obvious.
I drew a number of conclusions for myself. I share them below as half-baked suggestions that might make up in fertility what they lack in robustness.
Scrutiny. We do not leave children unattended, and we must not leave algorithms unattended. Who is in charge of watching a given algorithm? Our democratic infrastructures could ensure that this question always has an answer. No algorithm should be left out of jurisdiction, so no algorithm should be left out of scrutiny.
Accountability. Justice succeeds in dealing with the accountability of human beings, which is a difficult question. We can do the same for algorithms if we acknowledge their agency. As with human beings, accountability naturally circulates to others – algorithms, persons… As with human beings, sometimes no one is guilty. Algorithms can be evil, but they can also make honest mistakes, and sometimes both at once. And they have their own disorders.
Disposability. We can dispose of algorithms and we can proliferate them at a low cost, with or without variations. This makes a huge difference from persons and opens additional opportunities to regulate them. In many situations we use a single algorithm that has been declared the best fit for the task. This might be a consequence of an ideological quest for objective efficiency, but it is not very farsighted. Why not employ a swarm of variants so that we have a chance to observe which performs better? It also multiplies scrutiny, because we have more chances to distinguish contingent effects from essential ones.
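Here is a rough sketch of what that swarm could look like, assuming the algorithm is a machine-learned classifier. The model choice (a random forest) and all the names are arbitrary, purely for illustration:

```python
# Sketch of the "swarm of variants" idea: train several variants of the
# same algorithm, here differing only by random seed, then compare their
# decisions on the same cases.
from sklearn.ensemble import RandomForestClassifier

def train_swarm(X_train, y_train, n_variants=10):
    """Train n_variants copies of the same algorithm with different seeds."""
    swarm = []
    for seed in range(n_variants):
        variant = RandomForestClassifier(random_state=seed)
        variant.fit(X_train, y_train)
        swarm.append(variant)
    return swarm

def agreement(swarm, X):
    """Fraction of variants flagging each case.

    Values near 1.0 suggest essential effects of the task itself;
    values near 0.5 suggest contingent effects of how one particular
    variant happened to be trained.
    """
    votes = sum(variant.predict(X) for variant in swarm)
    return votes / len(swarm)
```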
Understandability. Though understanding algorithms is generally considered difficult, post-hoc understanding can be much simpler. It is an evaluation of the effects produced by the algorithm, and it can be described in a simpler language. In the case of the Danish algorithm, it might be phrased in terms of over-/under-representation of different populations. This information is important anyway. Because it is easier to share, it can also spark the interest of the public and gather more eyes to watch the algorithm.
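That simpler language can be as small as a representation ratio: each group’s share among flagged families divided by its share of the population. The counts below are invented for illustration; only the form of the statement matters:

```python
# Invented counts, purely for illustration. A ratio of 1.0 means a group
# is flagged in proportion to its population share.
population = {"group_a": 800, "group_b": 200}
flagged    = {"group_a":  40, "group_b":  60}

total_pop = sum(population.values())
total_flagged = sum(flagged.values())

for group in population:
    share_of_population = population[group] / total_pop
    share_of_flagged = flagged[group] / total_flagged
    print(f"{group}: representation ratio "
          f"{share_of_flagged / share_of_population:.2f}")
# group_a gets 0.50 (underrepresented), group_b gets 3.00 (overrepresented).
```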
It is a political fight. Since opacity is not blocking us, we do not have to wait for a better understanding of deep learning. The situation will only get worse over the long term anyway. Regulating algorithms is a political issue, and technology is not holding us back. Culture might be, though, insofar as the modernist vision of the world tends to be blind to the agency of algorithms, which impairs our imagination on the matter. One clarification: though political, this fight obviously has to take place (in part) on scientific ground, in the academic arena. Algorithm scrutiny starts in the papers that describe them, and I have a lot to say on that topic, but it will be for another time!
Many thanks, Mathieu, for provoking this debate! I am currently finishing an ethnography of algorithms in the French public sector, so I am full of insights that I will very, very soon have to transform into a meaningful empirical PhD chapter :)
As with all provocations, I think there is something very true and exciting in what you’re proposing, but also something a bit misleading. I am talking about your bold sentence: “we do not need to understand how algorithms think as long as we acknowledge that they have agency.”
I read several things into it:
1. Forget opacity: the core political issue with algorithmic power potentially lies elsewhere.
(For sure, there are many other ways to scrutinize an algorithm, and they have been underestimated, as you point out with post-hoc interpretability.)
2. We need to acknowledge the agency of algorithms, and for that, opacity is not always necessary.
(Yeah, I am with you!!!)
3. If we know enough about the agency of an algorithm, we do not necessarily need to open its so-called “black box” – whatever that means – in order to regulate it.
(And here I do not follow you. We might still find interesting clues in uncovering an algorithm, no? More and more, my position (and yours?) is that there is no such thing as an opaque algorithm. As Joshua Kroll pointed out, inscrutability is a fallacy; see here: https://www.researchgate.net/publication/328292009_The_fallacy_of_inscrutability.)
Moreover, in some countries like France, there are already legal constraints in place to scrutinize algorithms intervening in the “general interest” (in a public service mission). This law emerged because of the APB/Parcoursup controversy.
Among other things, there is a new individual right for French citizens, a right to explanation, in article R311-3-1-2 of the “Code des relations entre le public et les administrations”:
https://www.legifrance.gouv.fr/affichCodeArticle.do?cidTexte=LEGITEXT000031366350&idArticle=LEGIARTI000034195881
But these new rights need to be used by citizens! And public bodies are not (always) in a capacity to respond to the requests made by citizens.
To me, here is the problem with the Danish thinkers’ article: their arguments are more targeted at securing privacy than at defining steps toward better accountability. Look, for example, at when they say: “But the potential for mission creep is abundantly clear. Udbetaling Danmark is a case in point: The agency’s powers and its access to data have been steadily expanded over the years. […] Danish citizens have not been asked to give specific consent to the massive data processing already underway.”