Getting our imagination back about the regulation of algorithms

I disagree with many clever minds when it comes to algorithms. Take for instance the following sentence: “The opacity of the algorithms’ power means that it isn’t easy to determine when algorithmic governance stops serving the common good and instead becomes the servant of the powers that be.” A pretty common claim. I am fine with it, except for the part that blames “the opacity”. A regrettable misunderstanding is at play there, one that paralyzes some people’s imagination. I think there are issues with algorithms, and I would like to provide a standpoint from which everyone can be critical, mobilize their political imagination, and step into the debate. My point is dead simple: we do not need to understand how algorithms think as long as we acknowledge that they have agency.

Algorithms, complexity and I have a long history, but here, like anyone else, I am simply concerned with algorithms impacting my life. They might be hidden and have an indirect influence, but their effects are nevertheless real. I am writing this post in reaction to an article written by two Danish thinkers, Jacob Mchangama and Hin-Yan Liu, titled The Welfare State Is Committing Suicide by Artificial Intelligence. It is a short read, and all my quotes come from it. The authors reflect on the recent use of “algorithms to identify children at risk of abuse” in the Danish welfare system. Their main point is that “democratic infrastructures” and “judicial procedures” cannot keep algorithmic power in check, because we “will be largely unable to understand and explain why the algorithm” took its decision, which makes it “impossible for courts to hold [it] accountable.” They locate the source of the problem in the opacity of algorithms, which, they say, allows them to “take a toll on privacy, family life, and free speech, as individuals will be unsure when their personal actions may come under the radar of the government.” I agree that the situation requires scrutiny from the public, but beyond that I will not waste your time with my opinion. I just want to explain why I disagree that opacity prevents us from regulating algorithms. The following quote exposes this precise point.

“Consider the Danish case: the civil servants working to detect child abuse and social fraud will be largely unable to understand and explain why the algorithm identified a family for early intervention or individual for control. As deep learning progresses, algorithmic processes will only become more incomprehensible to human beings, who will be relegated to merely relying on the outcomes of these processes, without having meaningful access to the data or its processing that these algorithmic systems rely upon to produce specific outcomes. But in the absence of government actors making clear and reasoned decisions, it will be impossible for courts to hold them accountable for their actions.”

Indeed, algorithms are political beings. Insofar as they take decisions, they produce an effect, hence they have agency. And it is fair to expect them to become “more incomprehensible to human beings.” But concluding that this kind of opacity prevents us from regulating them is misunderstanding what it means to comprehend an algorithm. Contrary to what the authors believe, we have many ways to evaluate an algorithm from its outcomes. We can know it in depth and make many reliable predictions just by analyzing its outputs. This is not free, it comes at a cost on top of developing the algorithm itself, but it does not require understanding how it works, how it thinks. This is sometimes called post-hoc interpretability, to emphasize that the interpretation does not rely on the internal mechanics of the algorithm. This is typically the case with deep learning, where the algorithm is trained in a way that is “incomprehensible to human beings.” This is nothing special, just new to those who thought we had a divine right to understand everything in this world. As for us who feel the constant pain of being too stupid for what the world has to offer, we are used to having our capabilities exceeded and we find workarounds to keep going – when we can. Complexity is a name we sometimes use to talk about that. Post-hoc understanding is a workaround we use to keep going with algorithms that are too complex.

To me this whole story feels like there is not much to write about, but I know that is not true, because so many people feel threatened by opacity. It may come from a misplaced confidence in our ability to contain and master all the things we produce, despite the accumulated evidence that we cannot, culminating in our inability to keep our own habitat, the surface of our planet, in a state that suits our needs. Common misconceptions about what does or does not act are blinding us, for instance when we think that human beings have a power to act that the surface of our planet is lacking – but it is giving us hot feedback! Algorithms are in the same situation. Once we acknowledge that they act by themselves (in the sense that they are opaque to us) and consider them accordingly, ways to regulate them in a democratic setting naturally appear. They do because we are surprisingly skilled at post-hoc interpretation, something we use every day without even thinking about it. Except we don’t usually do it for artificial things, only for other human beings.

Regulating the agency of human beings is the point of all politics, even though we barely know how the human mind works. The questions that seem to bother us about algorithms sound surprisingly empty when asked about persons. Let us call our algorithm Donald. What if civil servants working to detect fraud were largely unable to understand and explain why Donald identified a family for early intervention? Well, this would be an issue, but not much more than an incompetent employee. Our societies have invented many ways to deal with such things. We might stick with Donald until someone complains and then fire him. Or we might evaluate his work against a series of indicators and check that he does his job. We might hire different Donalds and conduct an independent audit. We might ask people to vote. None of these solutions involve looking inside his brain. And we would certainly not conclude that, in the absence of clear and reasoned decisions, it is impossible to hold Donald accountable for his actions.

Understanding an algorithm does not even dispense us from regulating it. Let us assume that black people are overrepresented in Donald’s targets, and a journalist claims Donald is racist. Are you surprised that Donald could be racist? People are constantly surprised that algorithms can be. Should we assume that Donald is fair? Of course not. What makes him racist, the way he thinks or the way he acts? Imagine that we can look into Donald’s mind and we find a sound rationale, where race is not a factor in the decision but geographical location is, and it turns out that mostly black people live in the targeted locations. Does that make Donald less racist? Algorithms do not dispense us from dealing with such political questions, and our solutions as a society are not so different for algorithms than for people. Even the fact that entire classes of algorithms might be flawed is not a particular problem: #BlackLivesMatter is scrutinizing an entire class of human beings.

Algorithms are problematic, but their problems do not arise from their opacity. They arise from our democratic institutions not acknowledging their agency. We saw the American Congress question Mark Zuckerberg, but it should have questioned Facebook’s algorithms first. Algorithms are not so mute; they can designate where responsibility flows. Of course Congress did not have the expertise to question algorithms, but it was also powerless because it had no practical means to scrutinize them. Why would we leave beings with such powers out of any jurisdiction? We cannot just let their owners have the exclusivity of their scrutiny. That would be an incredibly naïve mistake for a democracy, a mistake we would never make if their agency were more obvious.

I drew a number of conclusions for myself. I share them below as loosely considered suggestions that might have in fertility what they lack in robustness.

Scrutiny. We do not leave children unattended. We must not leave algorithms unattended. Who is in charge of watching a given algorithm? Our democratic infrastructures could ensure that this question always has an answer. No algorithm should be left out of jurisdiction, so no algorithm should be left out of scrutiny.

Accountability. Justice succeeds in dealing with the accountability of human beings, which is a difficult question. We can do it for algorithms if we acknowledge their agency. As with human beings, accountability naturally circulates to others – algorithms, persons… As with human beings, sometimes no one is guilty. Algorithms can be evil, but they can also make honest mistakes, and sometimes both at once. And they have their own disorders.

Disposability. We can dispose of algorithms and we can proliferate them at a low cost, with or without variations. This is a major difference from persons, and it opens additional opportunities to regulate them. In many situations we use a single algorithm that has been declared the best fit for the task. This might be a consequence of an ideological quest for objective efficiency, but it is not very farsighted. Why not employ a swarm of variants so that we have a chance to observe which performs better? It also multiplies scrutiny, because we have more chances to distinguish contingent effects from essential ones.

Understandability. Though understanding algorithms is generally considered difficult, post-hoc understanding can be much simpler. It is an evaluation of the effects produced by the algorithm, and can be described in a simpler language. In the case of the Danish algorithm, it might be written in terms of over-/under-representation of different populations. This information is important anyway. Because it is easier to share, it can also spark the interest of the public and gather more eyes to watch the algorithm.
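To make this concrete, here is a toy sketch of the kind of simple, shareable indicator I have in mind; the group names and numbers are entirely made up for the example, and the function is purely illustrative.

```python
# Toy post-hoc check: compare how often a group appears among the algorithm's
# targets versus in the overall population. All names and numbers are made up.

def representation_ratio(targets: list, population: list, group: str) -> float:
    """>1 means the group is over-represented among targets, <1 under-represented."""
    share_in_targets = targets.count(group) / len(targets)
    share_in_population = population.count(group) / len(population)
    return share_in_targets / share_in_population

# Hypothetical example:
# population = ["A"] * 900 + ["B"] * 100        # 10% of people belong to group B
# targets    = ["A"] * 60  + ["B"] * 40         # 40% of the algorithm's targets do
# representation_ratio(targets, population, "B")  # -> 4.0: group B is over-represented
```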

It is a political fight. Since opacity is not blocking us, we do not have to wait for a better understanding of deep learning. The situation will only get worse over the long term anyway. Regulating algorithms is a political issue, and technology is not holding us back. Culture might be, though, insofar as the modernist vision of the world tends to be blind to the agency of algorithms, which impairs our imagination on the matter. Also, for clarification: though political, this fight obviously has to take place (in part) on scientific ground, in the academic arena. Algorithm scrutiny starts in the papers describing them, and I have a lot to say on that topic, but that will be for another time!

Exploring relations between Pareto and Network Science on Wikipedia with Hyphe

I am currently looking into the power law: where it comes from and what role it plays in network science (scale-free networks are often characterized by a power-law distribution of node degree). I used the web crawler Hyphe to investigate Wikipedia pages on that topic, and Gephi to analyze the links (I know these tools well). You will find here a report of that small experiment, unfolding the method and discussing it a bit.

In a nutshell, I obtained a network of Wikipedia pages where we see two main clusters, one about Pareto and statistics, the other about network science. We see a bridge between the two, and as expected the power law is part of it. This validated my implicit hypothesis, and I learned a few additional things. Here it is (you may want to open it full screen and zoom to read the labels).

Wikipedia pages about Pareto, the power law and network science.

Note: this visualization has features that you may not have seen before, such as the node halos that clarify their links. It does not come straight out of Gephi; I used a JavaScript tool to produce it. It is a prototype and I will talk about it in another post.

Let’s start with the elephant in the room: did I learn anything non-trivial from this image? Yes. Nothing big, but useful things in a research context. The image above is the entry point I present to give you a quick idea of what I write about. My findings did not come out of just a quick read of that network. They came out of the whole process, and I provide details below. Now that you are warned against this common misunderstanding, here is what I obtained from this work:

  • My hypothesis about the power law bridging certain statistical concepts and network science is confirmed. No big surprise, but it is a way to establish it.
  • I get oriented in these concepts and I now have a good idea of my next steps. In particular I know which concepts I must prioritize to investigate the relations between the two knowledge areas.
  • I have a well-described and argued set of pages (the “Pareto-to-network-science” corpus) that I can repurpose later in a scientific context, because the process behind it is transparent, reproducible and open to criticism.
  • For the same reason I have a set of pages defined as bridging my two domains, that I can repurpose later (the “bridge” corpus).
  • I also have a better idea of what the two sides are, and in particular the fact that they are asymmetric. I did not expect that (though I should have).
  • I had other surprises, and I value them highly because it is a not-so-common occasion to have a clue about my own biases:
    • I did not expect the “de Solla Price” bridge
    • I did not expect two sub clusters in network science
  • I can show an image that summarizes the situation, which might come in handy in a number of situations. Like this post.

In the next sections I will present the protocol I used to get that network, and my analysis. This is more or less what I would write in a paper. In addition, however, I will also describe my exploration, which took place before the final protocol and is usually not shared in a paper.

Protocol

1. Starting lists

We start with two manually curated lists of pages related to the two topics we are studying. The two lists have the same number of pages, arbitrarily set to 10. Here are the lists:

Pareto and the power law:

  1. https://en.wikipedia.org/wiki/80-20_law
  2. https://en.wikipedia.org/wiki/Long_tail
  3. https://en.wikipedia.org/wiki/Pareto_distribution
  4. https://en.wikipedia.org/wiki/Pareto_index
  5. https://en.wikipedia.org/wiki/Pareto_principle
  6. https://en.wikipedia.org/wiki/Power-law
  7. https://en.wikipedia.org/wiki/Power_law
  8. https://en.wikipedia.org/wiki/Vilfredo_Pareto
  9. https://en.wikipedia.org/wiki/Zipf%27s_law
  10. https://en.wikipedia.org/wiki/Zipf%E2%80%93Mandelbrot_law

Network science:

  1. https://en.wikipedia.org/wiki/Albert-L%C3%A1szl%C3%B3_Barab%C3%A1si
  2. https://en.wikipedia.org/wiki/Complex_network
  3. https://en.wikipedia.org/wiki/Duncan_J._Watts
  4. https://en.wikipedia.org/wiki/Network_science
  5. https://en.wikipedia.org/wiki/Preferential_attachment
  6. https://en.wikipedia.org/wiki/Scale-free_network
  7. https://en.wikipedia.org/wiki/Scale-free_networks
  8. https://en.wikipedia.org/wiki/Small-world_network
  9. https://en.wikipedia.org/wiki/Small-world_phenomenon
  10. https://en.wikipedia.org/wiki/Small_world_network

At this point there is no crawl or corpus, but since we have seen the final result already, let’s visualize where the starting lists will end up in the final corpus. It will make the analysis easier to understand.

In indigo on the left, the “Pareto Power Law” starting pages.
In red on the right, the “Network Science” starting pages.

2. Crawl

Using the web crawler Hyphe, we define all Wikipedia pages as distinct web entities and we crawl these 20 pages. We obtain a list of 1874 web entities cited by them, most of which are other Wikipedia pages.

3. Corpus cleaning

We filter out all the web entities cited by only 3 or fewer of the starting pages, we remove any web entity that is not a Wikipedia page or is a tool page (categories, help, lists of links…), and we crawl the remaining pages to obtain the hyperlinks between them. At this stage we have 201 Wikipedia pages and the hyperlinks between them.

A quick look at the most linked pages in this corpus shows that many of them are not related to our topics. These “high layer” pages are very generic and are cited by our two topics simply because they are cited by many Wikipedia pages in general. We use a simple criterion to rule them out: we remove any page that does not cite back at least one of the 20 starting pages. This simple procedure removes half of the pages we had.

Our final corpus consists of 106 Wikipedia pages (and the hyperlinks between them) characterized as:

  • Being cited by at least 4 of the starting pages
  • Citing back one or more of those starting pages
  • Not being a “tool” page (categories, lists of links…)
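As a minimal sketch (not the actual Hyphe workflow), these three filters could be expressed as follows, assuming the crawl result were available as a plain dictionary mapping each page URL to the set of URLs it cites; the is_tool_page heuristic is a rough, hypothetical stand-in.

```python
# Hypothetical data structure: citations[page] = set of pages that `page` links to.
# Hyphe exposes this information differently (through its API); this is only a sketch.

TOOL_MARKERS = ("Category:", "Help:", "Special:", "Template:", "List_of")

def is_tool_page(url: str) -> bool:
    """Rough heuristic for Wikipedia 'tool' pages (categories, help, lists of links...)."""
    title = url.rsplit("/", 1)[-1]
    return title.startswith(TOOL_MARKERS)

def build_corpus(citations: dict, starting_pages: list) -> set:
    """Keep pages cited by >= 4 starting pages, citing back >= 1 starting page, not tool pages."""
    candidates = set().union(*citations.values()) - set(starting_pages)
    corpus = set(starting_pages)  # assumption: the 20 starting pages stay in the corpus
    for page in candidates:
        if is_tool_page(page):
            continue
        cited_by_start = sum(page in citations.get(s, set()) for s in starting_pages)
        cites_back = bool(citations.get(page, set()) & set(starting_pages))
        if cited_by_start >= 4 and cites_back:
            corpus.add(page)
    return corpus
```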

4. Identifying the two topics

We started with two lists of pages corresponding to two different (but related) topics. We assume that once extended to our final corpus, these two topics are still present and distinguishable. Just looking at the resulting network gives a strong clue that it is indeed the case. However we do not have to rely on a visual interpretation.

We define an extended version of each of the starting sets. For each list, the extended version contains all pages that cite or are cited by at least 5 pages of the starting list. In other words, the extended set contains pages that have a link (citing or being cited) with 50% of the starting pages of that list. Note that this procedure allows some pages to be on both sides, or on neither, but as the table below shows, it is a minority of cases (less than 10%).
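Under the same assumed link dictionary as in the sketch above, the extension rule could look like this; again an illustration rather than the actual implementation, and the variable names in the usage comments are hypothetical.

```python
def extended_set(citations: dict, starting_list: list, corpus: set, threshold: int = 5) -> set:
    """Pages of the corpus linked (citing or being cited) to at least `threshold` starting pages."""
    extended = set()
    for page in corpus:
        linked_starts = sum(
            (s in citations.get(page, set())) or (page in citations.get(s, set()))
            for s in starting_list
        )
        if linked_starts >= threshold:
            extended.add(page)
    return extended

# ppl_extended = extended_set(citations, ppl_starting_pages, corpus)  # "Pareto Power Law"
# ns_extended  = extended_set(citations, ns_starting_pages, corpus)   # "Network Science"
```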

Visualizing the “Pareto Power Law” extended set, which we will call “PPL” for brevity, shows that it largely overlaps with the visual cluster on the left, but not totally. A few pages have not been captured by our procedure, while a page on the right has been. That page is “Scale-free network”. Note: I do not attribute the power of being true to visual clusters as opposed to our selection metric, or vice versa. I just observe that they generally agree while having a few crucial disagreements.

In darker grey, the “Pareto Power Law” (PPL) extended set

We will also shorten “Network Science” to “NS”. Visualizing its extended set shows that, despite being bigger, it was well captured by our selection procedure. No node of the visual cluster was missed, but a node clearly placed on the left side has been caught: “Power law”.

In darker grey, the “Network Science” (NS) extended set

If you are familiar with Gephi and its epistemic culture, you might wonder why I did not use modularity clustering to delineate the clusters. I will discuss this point later and remain focused on describing the protocol.

5. Identifying the bridge(s)

First of all, we must note that two pages belong to both sets, which in itself can be seen as a strong form of bridging. These two pages are “Scale-free network” and “Power law”.

We then identify bridges by looking at nodes that have connections with at least 10% of each set in a given direction (citing or being cited). This way we distinguish between 4 types of bridge: cited by one extended set and citing the other (in both directions), cited by both, or citing both. Each page can have multiple bridging roles.
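Continuing the same sketch, the four bridge types could be checked as below; the helper names are mine and only illustrate the rule stated above.

```python
def cites_share(page: str, target_set: set, citations: dict) -> float:
    """Share of `target_set` pages that `page` links to."""
    return sum(t in citations.get(page, set()) for t in target_set) / len(target_set)

def cited_share(page: str, source_set: set, citations: dict) -> float:
    """Share of `source_set` pages that link to `page`."""
    return sum(page in citations.get(s, set()) for s in source_set) / len(source_set)

def bridging_roles(page: str, ppl: set, ns: set, citations: dict, ratio: float = 0.1) -> list:
    """List the bridging roles played by `page` between the PPL and NS extended sets."""
    roles = []
    if cited_share(page, ppl, citations) >= ratio and cites_share(page, ns, citations) >= ratio:
        roles.append("cited by PPL, citing NS")
    if cited_share(page, ns, citations) >= ratio and cites_share(page, ppl, citations) >= ratio:
        roles.append("cited by NS, citing PPL")
    if cited_share(page, ppl, citations) >= ratio and cited_share(page, ns, citations) >= ratio:
        roles.append("cited by both")
    if cites_share(page, ppl, citations) >= ratio and cites_share(page, ns, citations) >= ratio:
        roles.append("citing both")
    return roles
```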

If we just look at the number of different bridging roles played by each page, we get the following distribution:

  • 5 bridging roles: Scale-free network
  • 4 bridging roles: Power law
  • 3 bridging roles: none
  • 2 bridging roles: Complex network, Degree distribution, Preferential attachment, Random graph, Scale-free networks, Small-world network, Social network, Sociology, Watts and Strogatz model
  • 1 bridging role: Computer science, Social networks

6. Visualizing results

In Gephi I used a force-driven placement algorithm, Force Atlas 2, to assign the node positions you have seen above. I used the LinLog mode as it emphasizes the clusters, and its drawback (slow convergence) is not really a problem on such a small network. Once the layout seemed to have converged, and only then, I activated the “no overlap” feature to improve readability. As I expected to use this “base map” within a text, I chose to rotate it so that it spreads horizontally.

Analysis

Let’s look at how the links are distributed as a function of our sets. A simple way to do this is to look at density, but this metric is biased by the sizes of the clusters and by the general density of the network. To remove these biases, we normalize the density the same way we would for modularity. Like modularity, the normalized densities range from -0.5 to 1, and the higher the value, the more links there are compared to what we would expect in that network.
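For reference, here is one plausible way to compute such normalized densities with networkx, borrowing the per-group modularity term for the internal case and a two-group analogue for the external case. Treat it as an illustration of the idea rather than the exact computation behind the numbers below; the graph is treated as undirected for simplicity.

```python
# Sketch: normalized densities in the spirit of modularity, assuming the
# per-group modularity term Q_S = e_S/m - (d_S/2m)^2 for the internal case
# and its two-group analogue for the external case. Illustrative only.
import networkx as nx

def internal_normalized_density(G: nx.Graph, S: set) -> float:
    m = G.number_of_edges()
    e_S = G.subgraph(S).number_of_edges()        # links inside the set
    d_S = sum(d for _, d in G.degree(S))         # total degree of the set
    return e_S / m - (d_S / (2 * m)) ** 2

def external_normalized_density(G: nx.Graph, S: set) -> float:
    m = G.number_of_edges()
    T = set(G.nodes()) - set(S)
    e_ST = sum(1 for u, v in G.edges() if (u in S) != (v in S))   # links across the boundary
    d_S = sum(d for _, d in G.degree(S))
    d_T = sum(d for _, d in G.degree(T))
    return e_ST / m - 2 * (d_S / (2 * m)) * (d_T / (2 * m))
```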

The PPL set has an internal normalized density of 0.027, versus an external normalized density of -0.014. Normalized densities are generally low; the important fact here is how the internal density dominates the external one: there are many more links inside the PPL set than between the PPL set and the rest. The PPL set is a cluster in that sense.

Similarly, the NS set has an internal normalized density of 0.117 and an external one of -0.018. NS is also a cluster, and an even better defined one.

We see this in the visualization, but the conclusion does not rely on the visual representation. We have two well-defined topological clusters, with different densities and sizes.

The group of pages defined as “Pareto Power Law” is about statistical laws, probabilities, and important figures such as Pareto and Zipf. We suspect that it is part of a much bigger group of pages about statistics, but possibly because the power law is a central concept of that field, our strategy might not have been able to capture that whole group. This set is smaller (30 pages) and less dense (0.027 normalized density) than the “Network Science” set (72 pages and 0.117 normalized density). As a conceptual space it is narrower than network science. We hypothesize that it is just the fringe of a larger conceptual space about statistics, and that it is possibly not so well defined as a subtopic (a different protocol could test this).

The group of pages defined as “Network Science” is larger, better defined, and more interconnected. It is well groomed as a conceptual space, with specific concepts (“Preferential attachment”, “Small-world network”…) intertwined with a body of much more generic concepts (“Internet”, “Social network”…). I am confident that the sub-cluster we identify visually (at the bottom of the cluster), corresponding to the topic of social networks, would be confirmed as such by the same kind of density analysis.

The connections between the two clusters are multiple. Looking at the direction of links in the different kinds of bridges, we see that there are many more pages where links come from NS and go to PPL than the contrary. This indicates that NS cites the concepts of PPL more than the other way around.

Two pages have a more important bridging role: “Scale-free network” and “Power law”. This is not a big surprise, but I am happy to have established the key role of these two concepts in the circulation of ideas from statistics to network science. The rest of this investigation will rely on a more qualitative approach.

The other bridges we have identified are a priority for my investigation. More generally, now that the corpus has been scrutinized and we know it captures the areas it was intended to, it would be a good idea to read all 106 pages. A possible follow-up could be a text analysis of these pages, and/or of their Wikipedia edit history.

Why not use modularity clustering and betweenness centrality?

Because I did not need to, and it was easier to explain my protocol that way.

This is the alternative protocol: curate a corpus manually so that it captures the two topics. Run modularity clustering in Gephi to find clusters. Run betweenness centrality to find the bridges.

The problem with that protocol is that it depends on abstract concepts for quite a simple thing. Modularity clustering is hard to explain. The visual clustering in the visualization, which is known to be coherent with modularity clustering, is hard to explain. Betweenness centrality is hard to explain. We can explain how it works, but not what it does.

Betweenness centrality counts shortest paths: a node with a high score lies on many shortest paths between other nodes. It means that if you remove such a node, you break many shortcuts and make distances longer in the network. This is how it works, but not what it does. From my point of view, what betweenness centrality does is capture both “intuitive bridges” and centers. Centers are nodes that are well connected, connected to other well-connected nodes, and that you will also find with other metrics such as closeness centrality or just the degree. The “intuitive bridges” are the other ones, left over by other metrics, which are often “in between” clusters. How it works does not tell you what it does, and the justification of the method is obscured.
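For what it is worth, computing it is trivial; the difficulty lies entirely in interpreting and justifying the scores. A minimal sketch with networkx, loading the exploration file mentioned in the notes below (any graph would do):

```python
import networkx as nx

# Computing betweenness centrality is easy; explaining what the scores mean is not.
G = nx.read_gexf("Pareto Law exploration.gexf")   # the exploration graph; any graph would do
bc = nx.betweenness_centrality(G)                 # based on shortest paths passing through each node
top_nodes = sorted(bc, key=bc.get, reverse=True)[:10]
print(top_nodes)                                  # mixes centers and "intuitive bridges"
```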

Sometimes there is no other way. But in this experiment, I designed a strategy that does not rely on these hard-to-explain elements and defines our corpus and our bridges in a simpler way. It is just about who cites whom, and it still works. But of course I knew it would work beforehand, because I had explored the domain. It was not a bet: the game was rigged from the start.

Exploration

Now that I have shared a more finalized product, I will open my kitchen and expose my methodological notes. They are only slightly edited to be readable for you. They were not written to be published in extenso, so you will have to pardon the style.

I just started with the following three pages:

  • https://en.wikipedia.org/wiki/Power_law
  • https://en.wikipedia.org/wiki/Pareto_distribution
  • https://en.wikipedia.org/wiki/Pareto_principle

First round of exploration: among the Wikipedia pages cited by the 3 entry points, if we set aside special pages related to scientific content (DOI, ISBN…) or specific to Wikipedia practice (Main page, Help…), we get 3 other pages: Long Tail, Normal Distribution and Zipf Law. We extend the corpus in that direction.

Second round of exploration: pages cited by 3+ of the above 6. We decide to eliminate list pages (e.g. Category: Statistical Law). We find tens of pages about the various statistical distributions. To avoid drifting towards statistics in general, we rule them out, except for the Log-normal distribution, because of the controversy about interpreting real data as power-law or log-normal. We just get Benford’s Law, Log-normal distribution, Exponential distribution, Generalized Pareto distribution and Zipf-Mandelbrot law.

At this point the corpus is hugely skewed towards statistics. We want to expand it towards networks and towards management or political science, where Pareto and Juran were influential. We search for specific terms in Hyphe’s Prospect to add sufficiently cited pages:

  • “Pareto” gives Vilfredo Pareto, Pareto efficiency and Pareto index
  • “Law” gives Power-law and 80-20 law (aliases of pages we had)
  • “Network” gives Scale-Free Network and Social Network
  • “Watts” and “Barabasi” add Bianconi-Barabasi model, Barabasi-Albert model, Watts and Strogatz model, Albert-Laszlo Barabasi, Duncan J Watts
  • “Small-World” brings Small-world network, Small-world experiment, Small-world phenomenon and Small world network (alias).

The following round of exploration allows us to find the links to “network science”. We focus on pages cited by 6+ of the above. A lot of graph theory appears and, as previously, we try to stay focused on complex / scale-free / small-world networks and their specificities. By this means we add 25+ pages, mostly related to network science.

The obtained network is pretty clear: the power law bridges statistics, and in particular Pareto’s law, with network science (see file “Pareto Law exploration.gexf”).

After this exploration we can design a more understandable and more straightforward protocol. We will start with two lists of pages, one on Pareto and one on network science, crawl both, and see what comes out and how it bridges.

Reticular

🎉 Welcome here, this is the first blog post! – Mathieu 

I investigate our relation with tools in a context of social science and humanities, with a focus on networks and complexity.

Following the data deluge and the democratization of digital tools, networks became part of the scholar’s toolbox for studying various phenomena. Multiple factors made this possible: a new object was invented (the “complex network”), graph theory successfully tackled hard challenges (e.g. Google’s PageRank and the “relevance” problem), and data visualization became accessible to non-specialists. As a co-founder of Gephi, a popular open source software for network analysis, I had the chance to witness the adoption of networks by a number of scholars. Now gradually transitioning from the role of engineer to that of researcher, I aim at retracing the trajectory of the network as a technology and intellectual object in the social sciences.

Despite being mainly epistemological by nature, this research agenda spreads across multiple fields. The algorithms used by digital devices play a crucial role (e.g. assigning positions to the nodes of a network) and I discuss some of them from the double perspective of the designer (computer scientist) and the user (social science scholar). Information design (data visualization) is another key topic that I approach sometimes as a computer scientist and sometimes as a practitioner, as the techniques and standard references are often imported from industry (see for instance the “Information is Beautiful Awards”). Although sociology is not my original field, I follow applications of digital instruments in two distinct areas: media studies (following trends in web mining and platform studies) and qualitative sociology (notably controversy mapping). Finally, I often adopt the perspective of science and technology studies to reflect on the role of these tools in the scholar’s practice, occasionally taking baby steps into the history of science.

This research notebook is intended for users of digital technologies in the social sciences and humanities who want to reflect on their effects on our practice and thought.