Inside the giant network map I made for Le Monde

20 minutes reading time

I made a giant network map for Le Monde. It was published on the 1st of April 2022, and no, it was not an April fools' joke! Here I talk about the craft I put into it.

Géopolitique de la twittosphère, Le Monde 2022-04-01

It was a collaboration. The data came from Linkfluence (Guilhem Fouétillou). They were gathered and processed by Linkage (Pierre Latouche, Carlos Ocanto, Stéphane Petiot and Charles Bouveyron). The data were editorialized by the journalists from Le Monde who wrote the related papers (notably Nicolas Chapuis and Matthieu Goar).

The visualization was adapted into different formats online. A simple scrollytelling presents the four papers of the series (paywall), where a larger, zoomable version can also be found. You can download the largest map just below. It is licensed CC-BY-SA.

Download the visualization in large format

What if?

My goal is to make visible the decisions baked into these images. I will not show the process the way it looked to me. Instead, I will use a “what if?” style: what if I had made different decisions? You will see that the map could have ended up very differently.

I will use a smaller version of the map, readable at the size of this blog, as a reference. The decision points I mention below had different answers depending on the situation. This is the reference map for this post:

The reference map (the decisions I ended up making)

Layout algorithm. There are multiple algorithms that place the nodes in the picture, and none is “the best”. I used Force Atlas 2 with LinLog, because I find it convenient and I like its result. Here is what it would look like with OpenOrd. I don’t like that result for two reasons: nodes overlap (it’s not readable), and clusters are artificially separated (that’s how this algorithm works). Anyway, if I were to go with that, I would have to adjust all the other decisions. There are dependencies between the design decisions.

Another layout: OpenOrd

The settings also matter. Here is what Force Atlas 2 gives you with default settings. Not too different, but the nodes overlap, the clusters are less defined, and the minor nodes all around drifted farther away, which caused a framing issue.

Another layout: Force Atlas 2 with default settings.
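To make this concrete, here is a minimal sketch of how such a layout can be computed with Graphology’s ForceAtlas2 implementation in LinLog mode. It is not the actual code of the map, and the settings values are purely illustrative; whether you run the layout in Gephi or in code, the decisions are the same.

```js
// Minimal sketch (not the actual Le Monde pipeline): ForceAtlas2 with LinLog
// via graphology. Settings values are illustrative, not the ones I used.
import Graph from "graphology";
import forceAtlas2 from "graphology-layout-forceatlas2";

const graph = new Graph();
graph.addNode("a", { x: Math.random(), y: Math.random() });
graph.addNode("b", { x: Math.random(), y: Math.random() });
graph.addEdge("a", "b");

forceAtlas2.assign(graph, {
  iterations: 1000,          // more iterations = a more converged layout
  settings: {
    linLogMode: true,        // the LinLog energy model tightens the clusters
    gravity: 1,              // keeps loosely connected parts from drifting away
    scalingRatio: 10,        // spreads the layout; tune it to avoid node overlap
    barnesHutOptimize: true  // approximation needed for tens of thousands of nodes
  }
});
```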

Orientation. Obvious but crucial: I intentionally put the political left on the left, and the political right on the right. I also put the government at the top and the antisystem at the bottom. That was a choice aimed at sticking to expectations. But the layout has no orientation by default.

Same layout, but with a different orientation. Equally valid, but…

Node size. In this map, node size reflects how much each node is cited (within the corpus). Of course, being cited is harder than citing. Everyone can retweet a lot. So being cited is a better indication of notoriety or influence than citing. Yet, the ability to cite means something, albeit something else. It’s a proxy for activism. I made a choice here, but see below what it would have looked like with size as a function of citing (i.e., mentioning and retweeting). As you can see, activism comes from the sides of the map. The borders cite the center. Also note that there are many nodes that cite a lot, while only a few nodes are cited a lot.

If node size represents how many accounts of the corpus one retweets or mentions
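For the curious, here is a minimal sketch of that sizing decision, not the actual pipeline. Size is driven by in-degree (being cited), with a square root so that node area, rather than radius, grows with the citations; swapping in-degree for out-degree gives the “activism” variant shown above. The size bounds are illustrative.

```js
// Minimal sketch: sizing nodes by how much they are cited (in-degree) in a
// graphology graph. Swap inDegree for outDegree to get the "activism" variant.
function assignSizes(graph, { minSize = 2, maxSize = 40 } = {}) {
  let maxDegree = 1;
  graph.forEachNode((node) => {
    maxDegree = Math.max(maxDegree, graph.inDegree(node));
  });
  graph.forEachNode((node) => {
    // Square root so that node *area*, not radius, grows with citations
    const t = Math.sqrt(graph.inDegree(node) / maxDegree);
    graph.setNodeAttribute(node, "size", minSize + t * (maxSize - minSize));
  });
}
```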

Underlying heat map. I use a heat map for different things. It summarizes where the nodes gather. Let me show it to you, as it comes up later on. Black means low, white means high. I kept the rest of the map on top of it for context, but of course I just use the heat (i.e., height) information. It’s simply computed from the node positions.

The heat map used as a background: black in the back, white in the front.
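The computation itself is nothing fancy. Here is a minimal sketch (not the actual code): each node splats a Gaussian kernel onto a grid, and the bandwidth is what balances general shapes against local details.

```js
// Minimal sketch (not the actual code): a heat map computed from node positions
// by splatting a Gaussian kernel around each node.
function computeHeatMap(nodes, width, height, bandwidth = 30) {
  const heat = new Float64Array(width * height);
  const reach = Math.ceil(bandwidth * 3); // beyond 3 sigmas the kernel is negligible
  for (const { x, y } of nodes) {
    const x0 = Math.max(0, Math.floor(x - reach));
    const x1 = Math.min(width, Math.ceil(x + reach));
    const y0 = Math.max(0, Math.floor(y - reach));
    const y1 = Math.min(height, Math.ceil(y + reach));
    for (let py = y0; py < y1; py++) {
      for (let px = x0; px < x1; px++) {
        const d2 = (px - x) ** 2 + (py - y) ** 2;
        heat[py * width + px] += Math.exp(-d2 / (2 * bandwidth * bandwidth));
      }
    }
  }
  return heat; // low = "sea", high = "land"; reused later for hill shading and labels
}
```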

Hill shading. I use a classic cartography technique called hill shading: drawing the shading produced by elevation. Of course, elevation is fake here; it’s just derived from the heat map (i.e., node density). Without it, the map looks like this.

No hill shading (and no hypsometric gradient, see below).

How different does it look to you? In my view, hill shading plays two roles. It emphasizes high-density areas, which have a specific meaning (community/cluster), and it evokes traditional (geographical) maps. It helps readability, and it makes the image more familiar.

Hypsometric gradient. In addition to the hill shading, cartographers often use a hypsometric gradient: the background color depends on the elevation. I do the same to evoke classic cartography, with blue where there are no nodes (the sea) and a light color where there are nodes (land). I find the analogy with continents and islands useful. The gradient is instrumental to this; the metaphor disappears if I remove it.

Just the hill shading, without the hypsometric gradient.
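Here is a minimal sketch of such a gradient. The colors and thresholds are illustrative, not the exact palette of the published map.

```js
// Minimal sketch: a hypsometric tint mapping normalized heat (elevation) to a
// background color, from "sea" blue to "land" tones. Colors are illustrative.
import { scaleLinear } from "d3-scale";

const seaLevel = 0.05; // fraction of the maximum heat below which we draw "sea"

const hypsometric = scaleLinear()
  .domain([0, seaLevel, 0.3, 1])
  .range(["#a8c8e0", "#dce8f0", "#f2ead9", "#ffffff"]) // sea → shore → land → peaks
  .clamp(true);

// Usage: color of a pixel given its normalized heat value h in [0, 1]
// const color = hypsometric(h);
```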

Hill shading settings. Hill shading takes two attributes: the height of the sun above the horizon (elevation), and its clockwise angle from north (azimuth). We are used to having the light come from the top-left, so that is what I used. Here is a different choice:

Hill shading with light from the bottom-right
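For reference, here is a minimal sketch of the hill shading computation itself: the classic Lambertian formula applied to the heat map used as a height field. It is not the actual code, and the azimuth may need flipping depending on whether the y axis points up or down.

```js
// Minimal sketch: classic (Lambertian) hill shading over the heat map used as
// a height field. Azimuth 315° and altitude 45° give the usual top-left light.
function hillShade(heat, width, height, azimuthDeg = 315, altitudeDeg = 45) {
  const az = (azimuthDeg * Math.PI) / 180;
  const alt = (altitudeDeg * Math.PI) / 180;
  // Light direction vector
  const lx = Math.cos(alt) * Math.sin(az);
  const ly = Math.cos(alt) * Math.cos(az);
  const lz = Math.sin(alt);
  const shade = new Float64Array(width * height);
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      // Finite-difference slopes of the height field
      const dzdx = (heat[y * width + x + 1] - heat[y * width + x - 1]) / 2;
      const dzdy = (heat[(y + 1) * width + x] - heat[(y - 1) * width + x]) / 2;
      // Dot product of the surface normal (-dzdx, -dzdy, 1) with the light
      const len = Math.sqrt(dzdx * dzdx + dzdy * dzdy + 1);
      shade[y * width + x] = Math.max(0, (-dzdx * lx - dzdy * ly + lz) / len);
    }
  }
  return shade; // 0 = dark slope, 1 = fully lit
}
```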

Inspiration, and impact craters. For this map, I drew inspiration from old maps of the moon by NASA. You’ll see the reference right away. Here is an example.

NASA map of the Moon (sample).

From there, a happy accident. Around highly connected nodes, the smaller nodes are repelled relatively far away, which, with hill shading, resembles an impact crater on the map. The crater evokes importance, which is coherent with the meaning of highly connected nodes. So I kept that, even though it was not intentional. I owned it, but a posteriori. It’s very visible when you just have hill shading and its hypsometric gradient (what I’d call the “basemap”, the background of the map).

Hill shading and hypsometric gradient only. Big nodes make impact craters. That’s an accident, but I like it.

Heat map settings. Now, this whole elevation thing depends on how the heat map is computed. There are different settings at play. For every map, I seek a balance between general shapes and local details. Check different settings:

Different heat map settings. Botero style?

Display edges. Or not. We had a discussion with Le Monde about displaying edges. Pros: it makes it clear that it’s a network; it adds information. Cons: it adds clutter, and it is useful only at high resolutions. It does not make sense at the detail level of the map I use as a reference, but here is what I suggested for a high-resolution, work-in-progress version of the map (detail). It was pretty subtle. Yet we ruled it out as unnecessary.

With edges (work in progress; detail)

Color

Node colors. We naturally associate color with political parties. As we have seen, the positions already match our expectations. Yet color is instrumental to an intuitive reading. If all the nodes had the same color, we would see something like that:

If nodes had no colors.

A common approach is to use a community detection algorithm to get groups, and color them. Le Monde has a set of colors for political orientations. It turns out that the communities are recognizable, and we can apply those colors to them. It looks like this.

Colors from modularity clustering. Good as an approximation, horrible in the details. Node shadows not displayed (see below).
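For reference, this is roughly what that approach amounts to in code: a minimal sketch with Graphology’s Louvain implementation. This is not what we ended up using for affiliations, as explained below, and the palette and attribute names are illustrative.

```js
// Minimal sketch: coloring nodes from modularity clustering (Louvain).
// The mapping from community id to a political color would still be manual.
import louvain from "graphology-communities-louvain";

function colorByCommunity(graph, palette) {
  // Writes a community id on every node (attribute named "community" by default)
  louvain.assign(graph);
  graph.forEachNode((node, attrs) => {
    const color = palette[attrs.community % palette.length];
    graph.setNodeAttribute(node, "color", color);
  });
}
```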

However, political affiliation is a touchy matter. We already expected some people to be mad at the map and the articles, because Twitter can be notoriously toxic. We knew that any affiliation mistake would be used as a way to discredit the journalistic work. And community detection makes A LOT of affiliation mistakes, like Emmanuel Macron painted as an antisystem (!). Not only because it is approximate; not only because society is more complex than what an algorithm can grasp; but also because modularity clustering tries to put everyone in a box. Not everyone has a political color, and this is very visible in this network, where many people have a critical relation to the candidates, and have commented on multiple ones, in a non-activist way. Not everyone is partisan.

The journalists from Le Monde, and notably Nicolas Chapuis and Matthieu Goar, have manually investigated the most cited accounts in the network. They retrieved the political color manually, looking at what people declare in their Twitter description. It does not mean that it is “the truth”. For instance, a number of political activists self-identify as journalists. Nevertheless, it is more respectful to people, and the positions in the map suffice to challenge self-declared affiliations. “Follow the actors” is a classic guideline of controversy mapping, by the way.

Manually retrieving political color is time-consuming, and we only did it for approximately 1,000 accounts, i.e. 3% of the accounts. Which leads to a visualization issue: as you can see below, there are not enough colored nodes to make the different areas visible. The big dots have a color, but the many small ones remain gray. As a result, you will see the areas if you pay attention, but the big picture does not jump out at you.

Curated node colors (without node shadows).

Edges usually mitigate that kind of problem, because they occupy so much space that they become a kind of background. Here is, for instance, what the network looks like in Gephi, if we display the edges and color them as a mix of the source node and the target node. There is a bit more color. Yet the many grey nodes (those without a political color) keep obscuring the political colors. And we do not display edges anyway.

A screenshot of the network in Gephi, with edges.

Node color shadows. I have developed a trick to emphasize node colors: “node shadows” (for lack of a better name). Basically, I paint the background with the color of the nodes. I just make sure that it is very smooth, so that it is not too intrusive or busy, and that the colors do not mix too much, as blue and yellow do not equal green in this context (a right-winger plus an antisystem account does not give you an ecologist). Yet I now run into another problem: the grey dots keep the shadows of colored nodes contained. That is precisely because the colors do not mix. Which is a good thing in general, but not here. Here the shadows work a little, but not much.

With node shadows, but not tuned.

A good design is always specific: I had to tune the process so that the shadows of uncolored nodes are not taken into account, letting the color of the relevant ones spread far enough. This finally gives the highlight we need to convey the big picture right away.

The reference map, where I had to tune the node shadows.
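Since I get asked about it, here is a minimal sketch of the general idea behind the node shadows. It is not the actual implementation, which involves more tuning; the radius and threshold are illustrative.

```js
// Minimal sketch of the general idea (not the actual implementation).
// Each politically colored node spreads a smooth weight around itself; per
// pixel, only the dominant color is kept, so that colors never mix (blue plus
// yellow must not turn green). Uncolored (grey) nodes are simply skipped, so
// they cannot contain the spread of their colored neighbors.
function nodeShadows(nodes, width, height, colorKeys, radius = 120) {
  const weights = Object.fromEntries(
    colorKeys.map((key) => [key, new Float64Array(width * height)])
  );
  const reach = Math.ceil(radius * 2);
  for (const node of nodes) {
    if (!node.politicalColor) continue; // skip uncolored nodes
    const buffer = weights[node.politicalColor];
    const x0 = Math.max(0, Math.floor(node.x - reach));
    const x1 = Math.min(width, Math.ceil(node.x + reach));
    const y0 = Math.max(0, Math.floor(node.y - reach));
    const y1 = Math.min(height, Math.ceil(node.y + reach));
    for (let py = y0; py < y1; py++) {
      for (let px = x0; px < x1; px++) {
        const d2 = (px - node.x) ** 2 + (py - node.y) ** 2;
        buffer[py * width + px] += node.size * Math.exp(-d2 / (2 * radius * radius));
      }
    }
  }
  // Per pixel, the dominant color wins, or none if every weight is negligible
  const dominant = new Array(width * height).fill(null);
  for (let i = 0; i < width * height; i++) {
    let best = 0.001;
    for (const key of colorKeys) {
      if (weights[key][i] > best) {
        best = weights[key][i];
        dominant[i] = key;
      }
    }
  }
  return dominant; // painted very lightly under the basemap layers
}
```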

Labels

Label colors. As we have seen, colors have a political meaning. I had to compromise. For instance, you may have noticed that the Zemmour brown (on the right) is pretty dark, while the antisystem yellow (at the bottom) is pretty light. This creates visual distortion, but I deemed it acceptable. The labels, however, also have to be readable. The yellow, in particular, creates a readability issue, as you can see below.

With labels the exact same color as nodes.

My solution was to constrain each label color within an acceptable range of lightness. I did not do it manually; I used the HCL color space (hue, chroma, lightness).
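Concretely, it is almost a one-liner with D3; here is a minimal sketch, with illustrative lightness bounds and an illustrative yellow.

```js
// Minimal sketch: clamping the lightness of a label color in HCL space so
// that every label stays readable regardless of its political color.
import { hcl } from "d3-color";

function labelColor(nodeColor, minL = 25, maxL = 55) {
  const c = hcl(nodeColor);                   // hue, chroma, lightness
  c.l = Math.min(maxL, Math.max(minL, c.l));  // constrain lightness only
  return c.formatHex();                       // hue and chroma are preserved
}

// e.g. labelColor("#f5d547") darkens an (illustrative) light yellow enough to be legible
```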

Fusion modes. It took some tricks to make the labels readable and natural at the same time. Notably, labels need a border. Otherwise, they conflict with the nodes and are not legible.

Labels without any border. It creates readability issues.

For simple maps, the border could simply be the same color as the background. But this map is too sophisticated already. What even is the background? Any color I pick will conflict somewhere. The result is acceptable, but it draws too much attention to certain places, and creates imbalance.

Label borders using a background color: quite visible.

Instead, I use fusion modes on a combination of layers so that the labels get a border of node-free space but still blend with the hill shading, the hypsometric gradient, the node shadows and so on. The result just feels natural, but it is, in fact, the most complex solution.

Fusion modes used to blend the label borders naturally.
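I will not reproduce the actual layer stack here, but a minimal sketch of the kind of compositing involved looks like this, assuming an HTML canvas pipeline; the blend mode and opacity are illustrative, not my exact choices.

```js
// Minimal sketch of the kind of layer compositing involved (not the actual
// pipeline): the label halo is drawn on its own layer, then blended onto the
// basemap with a fusion mode, so it brightens the area around the label
// instead of painting a flat background color over it.
function compositeLabels(ctx, basemapLayer, haloLayer, labelLayer) {
  ctx.drawImage(basemapLayer, 0, 0);        // hillshade + hypsometric tint + node shadows

  ctx.globalAlpha = 0.8;
  ctx.globalCompositeOperation = "lighten"; // the halo blends with what is below
  ctx.drawImage(haloLayer, 0, 0);           // soft, node-free space around labels

  ctx.globalCompositeOperation = "source-over";
  ctx.globalAlpha = 1;
  ctx.drawImage(labelLayer, 0, 0);          // the label strokes themselves
}
```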

Constant label thickness. I use another trick to ensure visual homogeneity: I give the labels an approximately constant visual weight. The large labels use a thin font, while the small ones use a thick one. This is not very apparent in our case because, as we will see, I also made the labels of the candidates bolder. Yet without this trick, a large discrepancy between label sizes, something that happens with large maps, makes small labels too thin (less readable) and large labels too thick (too emphasized).

If font weight were set the same regardless of font size. I have exaggerated the label size range.
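The trick itself is trivial; here is a minimal sketch, with illustrative numbers.

```js
// Minimal sketch: keeping label "ink" roughly constant by making font weight
// decrease as font size increases. The bounds and weights are illustrative.
function fontWeightForSize(fontSize, { minSize = 8, maxSize = 48 } = {}) {
  // t = 0 for the smallest labels, 1 for the largest
  const t = Math.min(1, Math.max(0, (fontSize - minSize) / (maxSize - minSize)));
  // Small labels get a heavy weight, large labels a light one
  return Math.round(700 - t * (700 - 300)); // CSS-like weights from 700 down to 300
}
```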

How many labels? There is a limit to how many labels you can draw, because there is not enough space. There are about 33,000 labels to display, and that is just impossible. Of course it depends on the display size, so bigger or zoomable maps can afford more labels. Regardless, labels also bring visual clutter. We had a conversation about limiting the number of labels. In the end, we decided to show quite a lot of them. A map with fewer labels, like the one below, reads more like a summary.

With only 25 labels.

Curved labels. A visually strong choice. Label paths follow the heat map gradient, but are constrained to not curve too much (it depends on the font size, by the way). My goal was to emphasize the isotropic nature of the visualization: contrary to a scatter plot, the space is the same in every direction. There are no axes. I find that horizontal labels put too much emphasis on the X axis, and suggest a type of reading that is not appropriate.

Classic, horizontal labels.

That being said, if the wiggly labels are too distracting, a compromise could be to allow various orientations but not the curvature (see below). I picked the most organic looking option, but that is not set in stone.

Isotropic but straight labels.
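For the curious, the orientation of a label can be derived from the heat map with simple finite differences. Here is a minimal sketch, not the actual code, which also bends the path along the field and caps the curvature depending on the font size.

```js
// Minimal sketch: the gradient of the heat map at a label's anchor point,
// from which the label's baseline orientation is derived.
function heatGradientAngle(heat, width, x, y) {
  const dzdx = heat[y * width + (x + 1)] - heat[y * width + (x - 1)];
  const dzdy = heat[(y + 1) * width + x] - heat[(y - 1) * width + x];
  return Math.atan2(dzdy, dzdx); // radians; add Math.PI / 2 to follow contour lines instead
}
```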

Forcing the labels of candidates. The journalists asked why some candidates were not visible. Indeed, initially, the candidates were not special nodes. If they were not visible enough, their label could be omitted (see below).

No special case for candidates. Some of them are not visible.

I added custom code to ensure that they would be displayed. Le Monde also asked me to make them more visible, which I did by making them bolder. Then a problem arose: the labels of some candidates conflicted (see below). To fix that, I had to specify the orientation of some candidates’ labels so that they do not conflict.

Forcing the display of some labels creates conflicts, which I had to resolve by creating exceptions in the code.

Hide some nodes. Finally, for one of the papers, I had to display only certain actors (those who mentioned a given conspiracy). One might think that it suffices to hide part of the nodes, but we need to keep the basemap. So we need to hide nodes for certain things, and not for others. Once again, this requires tinkering with the code, because it is quite specific.

Display only a selection of nodes, but keep the base map.

A final word

I am aware that I did not explain how this is done in practice. That is for another time. But you can picture a reaaally long Javascript file that uses a lot of Graphology and D3, and that I need to use carefully, because bad settings lead to various issues like out-of-memory errors, horrible glitches, and code that unnecessarily runs for hours.

I also did not explain the basics of visual network analysis. You can find that in my PhD thesis, though.

I can nevertheless tell what the map does to its audience. It tells: “there is an order to that chaos”. It tells that the political space of Twitter is, somehow, organized. How? For that, you have to read the related articles in Le Monde (that’s the point). And the map also does something else: it draws you into the data. It encourages you to take a look at the labels, and explore it by yourself. I know that this kind of visual literacy is not widespread. But we have to start somewhere…

Rendering glitches during debugging

Of the words we choose to write science, and community detection

11 minutes read + 6 min for the bonus

This is just a few anecdotes and remarks feeding into data science and network stuff. Anecdotes first.

This morning I was listening to French radio. Camille Kouchner was interviewed about her book that contributed to the French “me too” movement, one year ago. A book about incest she witnessed as a kid. She was asked: how does one learn to keep silent? She answered that we do not learn silence, only to talk. In her family, kids were taught not to speak unless they had the right words. “I felt like I lacked the right words, so I shut up.”

Finding the right words is an endless topic. One of my favorite books is The Order of Things by Michel Foucault. The English title is certainly appropriate, but the French one is stunningly more ambitious: Les Mots et les Choses. The words and the things. What would not fit under such a large umbrella? A nice takeaway from that book: even the process of knowing is not universal. Thinking is situated. No surprise, then, that words can betray us.

Mid-December, Sabine Hossenfelder published this video: Does Superdeterminism save Quantum Mechanics? Or does it kill free will and destroy science? Sabine is a pretty famous physicist and YouTuber. Her videos generally aim at educating a non-physicist audience about quantum physics. This time, her point comes from her own research, and defends a controversial interpretation of quantum physics: superdeterminism. Before she even gets to what it means, she has to explain that the concept is not as dramatic as it sounds: it is not more deterministic than determinism; it is just regular determinism. Why is it called “super-”, then? She did not pick that word; she is just stuck with it, and has no other choice than to fight against its implicit meaning as a prelude to making her case. The word is working for someone else.

This reminds me of the big bang. The expression. Obviously.

English astronomer Fred Hoyle is credited with coining the term “Big Bang” during a talk for a March 1949 BBC Radio broadcast, saying: “These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past.” … It is popularly reported that Hoyle … intended this to be pejorative … but Hoyle explicitly denied this and said it was just a striking image … .

Wikipedia article on the Big Bang

The striking image of the expression “big bang” has probably worked for many masters. It might be more accurate to say that its work was captured by various people in different situations. It was put to work by Fred Hoyle, and at the same time, it worked on its own. It had many masters, and it was also its own master. So does “superdeterminism”. The term comes from John Bell, who put it to work as a straw man. For him, superdeterminism was more than determinism, and as such, it was a ridiculous and dangerous idea. In his mouth, the word was demeaning. But not anymore. Now it is widely understood as “misleading” (Wikipedia), because there is no “super” in superdeterminism, although determinism itself is still controversial. Sabine now leverages the word differently, as a pointer to John Bell’s beliefs, which she specifically challenges. Superdeterminism might change allegiance after all.

Note: more content about this video at the end of this piece.

Words work both ways. Like with double agents, one can never tell who they really work for. It is unknowable. We have reached the end of the “work for” metaphor, as we admit that words perform in uncontrollable ways. I learned from Bruno Latour’s PhD writing courses that writing is largely about keeping the unintended meanings of whatever you write in check. That is why the reader is always right: I had to come to terms with the illusion that words meant what I thought they meant. Even if I am “right”, it is still my problem if I am misunderstood. But of course I cannot really know what my words mean to others until someone else reads them and provides me with feedback. The process is iterative. As Latour often said, writing is rewriting.

Inigo Montoya finally tells Vizzini, that guy from Princess Bride who says “inconceivable” all the time, that it means something else.

A few days ago, Petter Holme, a well-known network scientist, posted on his blog about How the names of measures influence their interpretations. “Methods and their names have complex and sometimes detrimental relationships … [especially] when the names have a clear and relevant meaning in the vernacular.” Efficiency, centrality, complexity, accessibility, universality… These network concepts are misleading, because their definition is much narrower than what their mundane name suggests. Seemingly relatable, but with very precise and technical definitions. Can we do better, as researchers? As Petter notes, “if the dullness of your measure’s name makes it instantly forgotten, it will not serve the purpose of science, no matter how smart it is.” Method names are shaped by scientific cultures.

Just before the Christmas break, I co-organized a data sprint for the ADD project. Such an event is transdisciplinary by design. Not every participant is a data scientist, and the names of the methods we used created various problems. For example, we used topic modeling. We tinkered with the LDA technique. To many participants, that name meant nothing. Unpacking the acronym to “Latent Dirichlet Allocation” does not help, but it leads to a narrative about how it works. Unfortunately, how it works does not tell much about what it performs, how it compares to other techniques, or what its purpose is. Worse, if a participant were to search for it in Google, they may find the wrong LDA, another technique known as Linear Discriminant Analysis. We also tinkered with hSBM, aka hierarchical stochastic block model. Same story. How does a non-data-scientist receive such acronyms? As black boxes whose functioning is knowable in principle, but inaccessible in practice. Participants felt like they were borrowing sophisticated tools from another field. The names played against reappropriating the techniques.

But there is a much more friendly name at hand: “topic modeling”. It is relatable. It tells what the technique performs, as opposed to how it works. LDA and hSBM both give you topics, and from there we can build an intuitive understanding of what they do. Except that we run into another problem. The “topics” given by either LDA or hSBM do not look like topics to participants. The word “topic” is relatable but misleading. Participants were flexible with the meaning of “topic”. Our research questions involved topics (e.g., the Danish e-identification system), themes (e.g., digital citizenship), ontologies (e.g., how different actors conceive trust), matters of concern (e.g., the lack of understanding of algorithms). For us, a topic, a theme, an ontology, a matter of concern, a discourse, a subject… are different things. They have different epistemic functions. What role do the “topics” of topic modeling play? None of those. They are just bags of words found by a statistical technique that does not ensure they serve any given rhetorical purpose. They might, but that must be tested. Yes, LDA and hSBM have methodological commitments that specify what they mean by “topic”, but those commitments do not relate to the epistemic cultures of other fields. “Topic” is a false friend, it translates badly.

Did we employ the expression “topic modeling” for better or for worse? I sincerely wondered, but ultimately, using the existing terminology is inevitable. Retrospectively I turn to a more relevant question: how should we leverage the expression “topic modeling” in productive ways? This is about building on top of a shared understanding of “topic” and recontextualizing it usefully. I very much like the idea of repurposing data and algorithmic techniques: overtly admitting that we use them in unintended ways, which entails finding alternatives to built-in validation methods (post-hoc interpretability).

Words do not just perform by misleading. They include and exclude people, they seduce and discourage minds, they show or hide methodological commitments, they smuggle implicit assumptions, they launder specious arguments… “Topic modeling” engaged participants while “LDA” and “hSBM” alienated them. But “topic modeling” also laundered the idea that reducing the concept of “topic” was necessary to its quantification. It put the participants in the false dilemma of either accepting the computer science version of that concept, or dropping the modeling altogether. With a different naming convention, other options would appear more clearly, such as modeling for what it is, without calling the bags of words “topics”. We already work with borrowed data, so we can deal with borrowed methods. That is not the problem. Most social science and humanities scholars are used to detecting and clearing built-in assumptions anyway. But words can make the task easier or more difficult. Names shape the public of methods, and play an important cultural role.

In my previous post I accounted for the controversiality of Tiago Peixoto’s promotion of inferential approaches to community detection. Tiago’s argument stems from the following alternative. Either we “articulate precisely what constitutes community structure”, or we do not. If we do, then our approach must use inferential methods. Else, our approach is what he calls “descriptive” and we may use inferential or non-inferential methods. An interesting part of his work is his endeavor to disqualify modularity clustering, a popular community detection method, essentially because it is a non-inferential method posturing as inferential. In that sense, it misleads its users, and can be “considered harmful”.

In practice, I have tinkered with three community detection algorithms. Modularity maximization with the Louvain technique, with the Leiden technique, and Tiago’s inferential method. Sometimes (dare I say often?), the Louvain technique gives me more usable results. [Audience boos]

I had the chance to talk about it with Tiago. My observation is that sometimes his technique, and to some extent the Leiden technique, gives me too many small clusters, or on the contrary too big ones. He interprets this observation as an effect of his technique being more accurate. In fact, one can precisely trick the Louvain technique into missing clusters because it assumes somewhat homogeneous cluster sizes. I agree with the argument; yet it does not make the Louvain technique less useful to me. In fact, I am just not doing what it seems I am doing. I am not trying to detect communities. Did I pretend to? It is complicated.

See, on the one hand I have heard myself call clusters found by modularity maximization “communities”. On the other hand, that name can generally not be taken at face value. From a sociological standpoint, the idea that an individual belongs to exactly one community, no less and no more, is ridiculous. If we were to operationalize the sociological idea of community, we would upfront assume that communities overlap, that belonging to one is gradual (not a yes/no), and probably not universal (not every actor agrees on who belongs to what). In fact, we should not even take for granted that a “community structure” necessarily consists of communities. Besides, we routinely use community detection in contexts where the idea of community has no literal meaning (e.g., in a network of keywords). In practice, we often repurpose community detection for something else. This is nothing new. But I admit that we still somehow pretend that we detect communities, because that is the name of the method, and because it states the scientific ground of our practice. But that grounding is not necessarily legit. That is a problem. Unfortunately, that problem is not the one solved by inferential methods. It belongs to what Tiago calls “descriptive approaches”.

The reason why modularity maximization works better for me has nothing to do with inferential methods. In fact, there could very well be a superior inferential method to craft; it just would not exactly aim at detecting communities. Its task must be different. But what? Can we even delineate how descriptive approaches repurpose community detection?

Let me freely admit my doubts about it. It paints my practice as somewhat inglorious, but never mind. The reality check is well needed. As you know, our actual practices do not resemble the convenient fictions we write in papers to contextualize our findings. Our actual practices are messier, more exploratory. We notably search for questions, although we pretend to seek answers. What I do most often is to use community detection to build an intersubjective statement about my network. I give myself some bricks to build a description. I do not care whether the found bag of nodes is a community. The description depends less on the bricks than on their assemblage. The bags are just contingent tools. However, I do care that they agree with the layout, else we cannot “see” the node groups properly. Or more exactly, different people would not agree that they see the same thing (intersubjectivity). And it also matters that there are neither too many nor too few bags of nodes, for the practical purpose of describing. Most of the time, that purpose is to craft a working hypothesis. Most of the time, that hypothesis will be thrown away and you will never hear of it. Even so, it has led to other hypotheses, and so on, until we find a version of a research question to which a reasonably solid answer can be found.

This point deserves a piece on its own, a visual one. That is for another time and I must conclude, so back to the matter of words. The expression “community detection” has done a lot of work for many people. It was relatable, which is a blessing to everyone and a curse to some. For Peixoto, the expression helped justify the superiority of inferential methods, precisely because they state explicitly what constitutes a community structure. For SSH scholars who occasionally analyze networks, it offered a methodological justification for a useful technique; but I do not take for granted that this justification was appropriate. Like Sabine Hossenfelder with “superdeterminism”, network analysts are stuck with the expression “community detection”, and should probably pick a fight with it to regain some methodological agency. Like Sabine, they might turn the allegiance of the expression, and have it work for them. Understanding it helps me write about it.


Bonus on Sabine Hossenfelder’s argument on superdeterminism

I encourage you to watch Sabine Hossenfelder’s video. I find it fascinating for multiple reasons. Of course it is about the wave-function-collapse and spooky-action-at-a-distance stuff many of us love to hear about. But my account will not go into these details, because I am primarily interested in how she fights the uphill battle against the doxa. The way she argues shows something fantastic: that quantum theory is deeply shaped by what physicists consider weird or not. And it is no accident that it shows here: it has been Sabine’s underlying argument across many videos (see also this one about mathematical beauty shaping particle physics).

We need context to understand. Everyone agrees that there is something weird about quantum mechanics. Not just “weird” as in counterintuitive, although it certainly is. “Weird” as in suspicious, deranging, uncomfortable. And that weirdness is a guide to physicists, insofar as they desire to get rid of it. That is how it shapes physics.

It seems that we cannot get rid of that weirdness, because theories that fix one kind of weirdness always create another kind. But it also means that we can move weirdness around. Maybe, certain kinds of weirdness are less weird than others, and that would be some progress. This is what Sabine’s argument is about. It offers to trade one weirdness for another.

Sabine’s argument goes against a popular story. The tale goes like this. Einstein was an absolute genius, but he made a mistake. He refused to accept the weirdness of quantum mechanics. He was uncomfortable with what he called “spooky action at a distance”. So he concluded that quantum physics was incomplete, that the weirdness had to come from our limited understanding, and that we would find a non-spooky theory in the future. Surely, reality could not possibly be that weird. Then John Bell took up the challenge, and pitted his convictions against Einstein’s. He designed an experiment that many thought impossible. He found a way to test whether the wave function had hidden variables, as Einstein thought, or whether it was probabilistic in nature. And the empirical results would ultimately prove Einstein wrong (sadly, after his death). John Bell proved that the spookiness was real. Albert Einstein, the demigod of physics, could be defeated after all; but only thanks to the help of the most formidable weapon: the inconceivable weirdness of quantum reality. This is anyway how the popular tale goes. But as Sabine argues, it omits a detail that changes everything.

The formidable weapon wielded by John Bell to prove Einstein wrong came with a clause (it assumes statistical independence). That clause was so obviously true to John that he took it for granted, and his feat was so formidable that the mention of the clause disappeared from the tale, and was forgotten. Everyone considered the demigod vanquished forever. But his power was lingering, and Sabine realized it. She remained unsettled by the spookiness, and came to understand that the matter was not settled. She inquired, found the forgotten clause, and understood that in his hubris, John had not finished his task: there was a loophole in his victory. Disclaimer: I took some artistic license!

I narrated the story in the style of a tale to cut short the theoretical details, but also to expose Sabine’s main difficulty: people are not willing to accept that their heroes have actually failed, and that their problems are not solved after all. But at the same time, the weirdness is still there, so it is not as if the story had such a good ending. Regardless, we have long coped with the weirdness left by John Bell. We have endorsed his legacy, the idea that quantum reality is ontologically weird. We found some comfort despite the spookiness. But Sabine could not cope; she uncovered the loophole, and now she wants to trade the spookiness we know for the uncertainties of the forgotten clause. She calls for the unmaking of John’s victory, and the restoration of Einstein’s original intuition. And to do that, she must convince us that John Bell was wrong to take the clause of statistical independence for granted.

This is where it becomes about words. As she says, “all the alleged strangeness of quantum mechanics has its origin in nomenclature.” The strangeness she talks about is what Einstein called spookiness. Part of her argument is that we love the spookiness because of the tale, that we hold onto it for cultural reasons, not primarily for scientific reasons. Another part is that the spookiness is not as real as many believe, because the clause of statistical independence was not fulfilled. That is precisely the argument about superdeterminism. Sabine must explain why the strength of the argument derives only from the interplay of its rhetoric and the cultural norms of physics.

In short, John Bell considered that if the clause of statistical independence was not fulfilled, then reality would be deterministic to such an extreme extent that many of the things we take for granted, like free will, would be impossible. That was so uncomfortable that he discarded that possibility. As Sabine shows, many physicists endorsed the idea that challenging free will was unacceptable. I frame it this way to highlight the symmetry between Bell’s and Einstein’s arguments: both rejected a theoretical pathway because they found it too uncomfortable, too weird. But as Sabine remarks, the justifications given by John Bell and other influential physicists are not grounded in the epistemic standards of physics; they instead invoke the metaphysical reach of that discomfort: it would undermine free will, destroy science… Sabine makes it clear that there is no argument internal to physics for ruling out superdeterminism. Some influential physicists just promulgated the disqualification loudly enough.

Do not miss, here, that Sabine picked that battle when her YouTube channel was strong enough. She has also published papers, so she fights on multiple fronts. I imagine she also has some institutional reach. But YouTube clearly places her in a different arena where her chances are much better. If only because the exercise is difficult, and many of the physicists she criticizes cannot and will not show up to respond.

Sabine also de facto argues that superdeterminism, i.e. breaking the statistical independence clause, is not that weird. But it seems pretty weird to me, and I can see why John Bell thought it was more dramatic than simple determinism. In Sabine’s own words, it implies that “what a quantum particle does depends on the measurement setting.” In short, what particles do depends on something that will happen in the future. For instance, in the double slit experiment, “the particle’s path depends on what measurement will take place. Because the particles must have known already, when they got on the way, whether to pick one of the two slits, or go through both.” Doesn’t this sound much crazier than determinism? It blatantly violates my own intuition. But like a paradox, the dissonance only exists in certain ways of thinking about it. Sabine explains why, in practice, it boils down to just determinism. I find it convincing and, like her, I don’t buy the half-baked waffle about free will. Regardless, I can also understand why John Bell took statistical independence for granted, and employed the “superdeterminism” expression to make his point. I do find it weird, but not more than spooky action at a distance.

I’ve learned to cope with the spookiness of John Bell’s experiment, while the strangeness of dependency on future events is new to me. I wonder how it goes for Sabine’s YouTube audience, and for the community of physicists. Do they find something wrong with Sabine’s argument? Do they get convinced to exchange this new flavor of weirdness for the old one? Either way, I find it fascinating to observe the shaping of experimental programmes by the level of ontological comfort of influential physicists with different theories. Because all Sabine asks in the end is that funding bodies consider testing superdeterminism empirically, even though the physicists in charge think it tastes like the wrong flavor of spookiness.

A Twitter controversy about community detection: empirical material

30 min read

Note: I know it’s awkward, but I will call everyone by their last name. It puts everyone on an equal footing.

When Tiago Peixoto promoted his latest work over Twitter, it sparked an interesting discussion. I call it a controversy because the participants may not agree on what the points of contention are. It is unresolved. The debate is notably about modularity maximization, a popular community detection technique. Is it obsolete? Peixoto has undertaken to disqualify it in favor of a Bayesian inference approach. But although everyone seems to agree on the general superiority of inferential methods, a question remains up for debate: can modularity maximization still be useful in some situations, or does it deserve to be completely disqualified?

It turns out that I was in the company of Peixoto at the same time, as he participated in the Gephi coding retreat I organized at the TANT Lab in Copenhagen. He presented this work to us, i.e. his inferential approach to community detection, and I implemented a limited version of his algorithm in Gephi. Our discussions shared similarities with those on Twitter. It seems to me that Peixoto’s argument for the complete disqualification of modularity is incomplete; but conversely, if there is a clear motive for not disqualifying it, what is it?

I will not engage directly with the question in this blog post. Instead, I will delineate the debate in the style of a draft for a controversy mapping. I will just account for who says what in these Twitter discussions. I am basically gathering empirical material, selecting and organizing quotes, and bringing enough context to make the discussion clear. I will also comment a bit, but I will not get to articulating a full argument before a further blog post.

I have three interests in this matter. First, I find it interesting to document the debates around the disqualification of an algorithmic technique. There is nothing wrong with some methods becoming obsolete in science, and the fact that these discussions are partially made public on Twitter is an opportunity to peek into that process. Second, I want to understand for myself what is at stake, and articulate my position if I have one. And third, I am under the impression that Peixoto’s argument could be mistaken for methodological imperialism from the perspective of the users of modularity maximization, and I would like to see if this is a concern for other researchers as well. I hope that these notes can be useful to the STS crowd, to Gephi users in digital methods, and to network scientists.

Overview of the material

The starting point of the controversy is Peixoto’s work and his effort to publicize it. It comes in three layers. First, there is a preprint released on arXiv on 30 November 2021, titled Descriptive vs. inferential community detection: pitfalls, myths and half-truths. Second, there is a series of three blog posts derived from the paper, followed by two posts containing additional explanations. Here they are, in chronological order:

  1. Descriptive vs. inferential community detection, 2 December 2021
  2. Inferring, explaining, and compressing, 3 December 2021
  3. Modularity maximization considered harmful, 6 December 2021
  4. No free lunch in community detection?, 7 December 2021
  5. Do we need to believe in generative models?, 8 December 2021

The third layer is made of Peixoto’s tweets publicizing his blog posts, and the discussions that ensued. It also comes as a list of tweets:

  1. A series of 2 tweets about the release of the preprint, 2 December 2021
  2. A series of 7 tweets about the first blog post, 2 December 2021
  3. A series of 5 tweets about the second blog post, 3 December 2021
  4. A series of 11 tweets about the third blog post, 6 December 2021

From there, as a fourth layer if you will, there are the Twitter reactions to these tweets. I will not list them, because there are too many and you can read them from the links just above, but I will account for the most important ones below. But before I get to that, to understand them, we must have an idea of the content in question.

Summary of Peixoto’s argument

Let’s look into the content of his argument. I will primarily use the blog posts, because they are more accessible than the paper, while the Twitter threads are less easily readable.

The first blog post stems from the premise that “community detection methods can be divided into two main categories: ‘descriptive’ and ‘inferential.'” Peixoto argues a number of things. These types of methods have different properties. For instance, descriptive methods “do not articulate precisely what constitutes community structure” while inferential methods “start with an explicit definition of what constitutes community structure”. For context (not from the blog post), the modularity clustering in Gephi belongs to the “descriptive” type, and Peixoto’s own algorithms to the “inferential” type. He presents inferential methods as “state of the art”, and argues that even though “descriptive clustering approaches arise naturally” in a number of practical situations, they “carry no explanatory power”, contrary to inferential methods. The communities obtained from descriptive methods “can be seen and described, but they cannot explain”.

In this argument, the power to “explain” boils down to being able to predict when nodes are connected by looking at which communities they belong to, for a given (explicit) generative process. If, for a given model, the communities do not predict the edges, then the communities do not explain the network. And descriptive methods do not allow this prediction (they do not even state a generative process to begin with).

The blog post ends by proposing a test for determining whether you can afford not using inferential methods (that Peixoto calls a “litmus test”). There is a question you must ask yourself, and if the answer is “no” you can do as you want but if it is “yes” you must use inferential methods (argues Peixoto).

The second blog post explains how the principle of Bayesian inference is applied to community detection. It goes as follows. Imagine a certain process that generates networks according to certain rules. The nodes are a given, and the process decides the links. The rules involve groups of nodes: the probability that two nodes get linked depends on their groups. If you know the rules, you can call it a model: it generates networks that are different but have a family resemblance, because the rules are the same. The networks come from the same model. So the model has group-based rules and nodes, and it gives you links. Bayesian inference is about playing a guessing game: if you have the nodes, the links, and the model (the rules), can you guess which node was in which group? Often, there are many possible valid guesses, but some are more likely than others. The inference measures that likelihood.

The post describes the maths and how to interpret the equations, and makes a connection with information theory. According to this connection, the likelihood of a partition in the guessing game can be measured through a quantity known as “description length” that “quantifies the amount of information in bits necessary to encode the parameters of the model”. Peixoto argues that “there is a formal equivalence between inferring the communities of a network and compressing it” and that this protects the method against overfitting, thanks to a theorem known as “Shannon’s source coding theorem”.
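To fix ideas, and in my notation rather than a quote from the post, the two quantities are related as follows (this is the standard Bayesian formulation; the specific priors are Peixoto’s):

$$ P(b \mid A) = \frac{P(A \mid b)\,P(b)}{P(A)}, \qquad \Sigma(A, b) = -\log_2 P(A \mid b) - \log_2 P(b) $$

where A is the observed network, b a partition of the nodes into groups, and Σ the description length: maximizing the posterior P(b | A) amounts to minimizing Σ, i.e. to compressing the network.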

This blog post also reflects on the literature about community detection, and argues, using the test presented in the first blog post, that the most-used benchmarks in the literature were in fact not suitable for descriptive methods. Then follows the argument that inferential methods are easier to benchmark because they state the model they aim to fit, and finally the argument that inferential methods are more comparable.

The post ends by contending that “every descriptive method can be mapped to an inferential one, according to some implicit model.” In other words, descriptive methods are inferential methods that do not state their model, which makes them inherently worse. The statement draws on a mathematical argument whose key is that “there is no such thing as a ‘model-free’ community detection method.”

The third blog post argues that the modularity maximization method is flawed, and one of the most problematic. By the way, let me acknowledge here that Gephi has probably contributed to popularizing the method (it’s not mentioned in the paper but I talked about it with Peixoto). The argument basically follows the rationale of the first two blog posts but applied to this specific case, so I do not repeat it here. Its key is that the modularity maximization method “does not take into account the deviation from the null model in a statistically consistent manner”. It provides examples where modularity clustering finds communities in a sparse random network, and where it fails to detect obvious communities that have very different sizes. It ends with a table of “some of the main problems with modularity and how they are solved with inferential approaches.”
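For reference, and in standard notation rather than a quote from the post, the quantity maximized by modularity clustering is:

$$ Q = \frac{1}{2m}\sum_{ij}\left(A_{ij} - \frac{k_i k_j}{2m}\right)\delta(c_i, c_j) $$

where A_ij is the adjacency matrix, k_i the degree of node i, m the number of edges, and c_i the community assigned to node i. The term k_i k_j / 2m is the null model (the expected number of edges between i and j in a random graph with the same degrees), and it is precisely its handling that Peixoto deems statistically inconsistent.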

What is made an issue of?

Now that we have an idea of Peixoto’s argument, let’s see what is problematic to different people. I will not be exhaustive. I will pick some elements of the discussion and justify why I find them relevant.

The first objection came from Renaud Lambiotte, who notably contributed to the modularity clustering method that we implemented in Gephi (known as “Louvain”). He reacted to the first series of tweets, about the preprint itself.

Lambiotte first expresses his disagreement, in a pretty civil way, and with Peixoto they endeavor to locate together, and publicly, where the disagreement lies:
RL: “… we will have to agree to disagree :-)”
TP: “I’d be happy to know with what you disagree …”
RL: “Well this one for instance …” (a paper follows)
TP: “What do you disagree with here? And why?”
RL: “Or that one …” (another paper follows)
TP: “If you don’t say why you disagree, it’s hard to respond.”

There is not enough for me to retrace what was controversial, but I want to highlight first how unlike the rest of Twitter this disagreement is: rational, civil, sourced… even though it ultimately fails to resolve, presumably due to the limitations of the Twitter format. I also wonder whether Lambiotte was especially sensitive to the straightforward attack on modularity clustering, a method he had contributed to popularize. We will see him again later on.

Objections to the tweets about the first blog post

When Peixoto presented his argument to me, I had a similar reaction to @rayohauno. Peixoto’s response on Twitter is that he does not make the “assumption” that random fluctuations show no features, but instead draws on the following demarcation: “a) features that arise out of randomness vs b) those that have an underlying cause.” He follows up with a picture, I will return to that shortly.

I want to make a remark first. In Peixoto’s argument, there is an inherent difference between features arising from randomness and features arising from an underlying cause. It’s not that they don’t look the same; it’s that even when they look the same, something makes a difference. Peixoto answered my objection by telling me that even though a monkey typewriting at random has a small chance of writing some Shakespeare, there are statistical means to distinguish randomness from non-randomness. I still wonder: if you only get the result, i.e. the features, how exactly do you make the difference? Exactly this point is picked up by @rayohauno as a follow-up, but Peixoto stopped interacting with him.

Let’s get back to the picture. Peixoto chose to illustrate his point with two analogies that happen to disqualify what he calls the descriptive approach, and frame the inferential approach as the objective truth. We find it in the first blog post and the paper, and I borrow it below:

By Tiago Peixoto, from this blog post.

I assume that everyone is familiar with the image on the top-left, a picture of the surface of planet Mars. The apparent face became famous in popular culture, as an element of evidence of life on Mars. A higher-definition picture of the same location later revealed that there was, in fact, no face. At a higher degree of precision, the features disappeared, and in this case the accepted truth is that there is indeed no face here. Even though Peixoto argues that we do see the face (we can all agree on that), the debunked face is now conspiracy-grade apophenia, in which very few people actually believe. I argue that regardless of the intent, it tends to ridicule the left side of the analogy. It contributes to the disqualification of modularity maximization on a rhetorical level.

Below is the second example, tweeted by Peixoto in response to @rayohauno’s objection. Once again, our intuition turns out conveniently aligned with the fact that the face we see is obviously spurious. I also find a ridiculing factor in the example, although I acknowledge that it is culturally situated. Yet it is worth considering that these analogies bear some controversiality because they disqualify by other means (rhetorical) than making a point (but also by supporting a point).


Aaron Clauset also commented on Peixoto’s tweet. Clauset is a prominent network scientist who was central to a controversy about scale-freeness I previously wrote about, and I then identified him as an experimentalist: a researcher who builds up from experimental results towards theory. Let’s see his reaction here.

The exchange gradually involves other Twitter users, but the interactions between Peixoto and Clauset remain fairly linear and I collect them here as a straightforward dialogue.

AC: “There are many, many things to like about probabilistic generative models for community detection. But, alas, they are no panacea, because there is No Free Lunch in community detection (and, worse, No Ground Truth).” (link to this paper he wrote with @DanLarremore and @PiratePeel, also mentioned).
TP: “The claim is that statistical inference is more meaningful when the objective is to reach an inferential conclusion. Surprised this is controversial. Plus: the NFL is no panacea either. It covers mostly *uninformative* community structures.” (link to the preprint)
AC: “TBH, it’s your absolutism that makes it controversial. But you’re also using ‘descriptive’ in a non-standard way that I don’t think is epistemically helpful. In statistics, inferential models are almost always descriptive (in the standard sense); exception being causal inference.”
TP: “I don’t believe is non-standard: https://en.wikipedia.org/wiki/Descriptive_statistics But I’d happy to know of a better way to distinguish inferential from non-inferential analyses.
Re absolutism, I tried to be clear that it all depends on the question being asked. But we should be coherent with our aims.”
AC: “Model parameters are summary statistics, too, and so also descriptive statistics. Worse, in some exponential family models, any summary statistic can be an inferential model parameter. It’s messy! I think downstream utility is the only guiding light, which precludes absolutism.”
TP: “The relevant criterion here is if we are evoking a generative model or not. The terminology issue is a red herring; all these terms are overloaded. If there is a ‘only guiding light’ then you are being absolutist. Obvious contradiction.”
AC: “We’re in violent agreement on the wonderful properties of generative models! But the NFL theorem etc. have convinced me that non-generative models do have their uses, depending on downstream uses. I get that you feel the SBM is morally superior, but it rubs many the wrong way.”
TP: “I tried to be clear that I’m not favoring the SBM, or any particular model, but only the concept of defining a model explicitly. I’d be very curious to know your take on what I wrote on the NFL theorem: that it involves overwhelmingly incompressible problem instances.”
AC: “Your point about the NFL is an old one (in other literatures) and mathematically true. But it’s not useful because we don’t know the empirical distribution of problem instances. Hence only downstream uses can determine utility. We can agree that one such use is interpretability.”
TP: “If you agree that interpretability is a downstream utility that renders NFL inapplicable, then we are on the same page. Can you give an example (practical or conceptual) of a downstream utility for community detection that does not involve interpretability?”
From this point on, another interlocutor chips in and Clauset does not interact anymore.

In the third tweet of the sequence, Clauset refers to Peixoto’s “absolutism” as a source of controversiality. I have two remarks. First, this is a reason why one can argue that there is a controversy: some people identify it as such. This reason is not good enough on its own but, as we will see, the dispute also lasts. It’s a controversy in the sense that the actors agree on their disagreement, try to fix it the usual way (locating the misunderstanding), yet fail to reach closure (at least for now). The actors disagree on where their disagreement lies, which makes it more than a simple dispute. Second, there is a superficial reading of the exchange where the argument is that Peixoto’s claims are peremptory, oversold. And indeed, Clauset argues that it is a matter of form rather than content; as if he agreed with Peixoto’s point, but not with the way it was put. But I do not subscribe to that reading, because the exchange makes it clear that Clauset does have a scientific disagreement with Peixoto. What’s more, I do not see where Peixoto’s argument would be oversold: he does stand his (argumentative) ground. So what does this “absolutism” refer to?

I understand the argument on “absolutism” as follows: the superiority of Peixoto’s method is not controversial; what is controversial is the disqualification of other methods. I believe this is one of the hot spots of the controversy, probably the main one, because it is at the same time about form and content, about the perception of Peixoto’s claims and about these claims on scientific grounds. This deserves careful unpacking. Indeed, everyone would agree that some method A can be generally preferable to some method B while not being preferable in every situation. The superiority of a method depends on the context. The disqualification of other methods is not necessary to the promotion of Peixoto’s own. I think this raises multiple questions, which we will approach step by step.

A point of detail is the kinds of methods discussed. Peixoto demarcates between “inferential” and “descriptive”. Clauset challenges the term “descriptive” and Peixoto accepts the alternative “non-inferential”. But then the discussion moves the demarcation to “generative models” versus “non-generative models” and I don’t think that it corresponds exactly to “inferential” versus “non-inferential”, but I will overlook the nuance. Peixoto will re-use the “inferential/non-inferential” demarcation in his next Twitter thread.

Is there a context where non-inferential methods are preferable? Clauset repeatedly argues that yes, non-inferential methods are preferable in some situations: “the NFL theorem etc. have convinced me that non-generative models do have their uses, depending on downstream uses.” He later argues that “only downstream uses can determine utility” and that “one such use is interpretability.” In other words: when non-inferential methods provide more interpretable results, they are preferable to inferential methods. It seems to me that Peixoto generally argues that non-inferential methods are never preferable [EDIT: but his 1st post clearly states cases where they are useful, see his comment to this post], on the grounds that non-inferential methods are just inferential methods that do not disclose their model, which makes them strictly worse (we will see that below). But in this case, the debate moves to a very specific point (having to do with the nuance I overlooked) and I will not unpack it.

Back to the question of absolutism. One way Peixoto is deemed absolutist by Clauset might be that he argues that non-inferential methods are never preferable. But there is more to it, which we see when Clauset writes: “I get that you feel the SBM is morally superior, but it rubs many the wrong way.” For context: the SBM (stochastic block model) is a generative model used by Peixoto in his inference approach. Remark that Clauset makes it about: (1) Peixoto’s feelings; (2) moral superiority; and (3) shocking many people. I assume some amount of complicity here, because Clauset is “in violent agreement on the wonderful properties of generative models.” In short, I think he is saying that Peixoto is shooting at an ambulance: inferential methods are generally superior, but in some niche cases they are not, so disqualifying non-inferential methods is unnecessary and counterproductive; it is not needed to promote inferential methods, and it may spark opposition.
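For readers who have never met the SBM, here is a reminder (my formulation, not a quote from the preprint) of what “generative model” means in this context. In the simplest, non-degree-corrected version, one assumes a partition b of the nodes into groups and places each edge independently with a probability that depends only on the groups of its endpoints:

$$P(A \mid b, p) = \prod_{i<j} p_{b_i b_j}^{A_{ij}} \left(1 - p_{b_i b_j}\right)^{1 - A_{ij}}$$

where A is the adjacency matrix, b_i the group of node i, and p_{rs} the probability of an edge between groups r and s. Doing inference then means finding the partition b (and probabilities p) that best explain the observed A, instead of optimizing a quality score with no stated model.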

One could make the case that Clauset wants peace and Peixoto conflict. The third blog post in particular argues that the popular non-inferential method of modularity maximization is “considered harmful”. Peixoto shows a pretty clear agenda of replacing that method with an inferential one, for reasons that are clearly stated in the preprint, the blog posts, and the tweets. But I think that the contention is about what the battlefield is. Is it about a few specific techniques like modularity maximization, or about all non-inferential methods? It seems to me that Clauset does not want to disqualify non-inferential methods because they have legitimate uses, albeit possibly niche ones. That might be compatible with modularity maximization becoming obsolete. And Peixoto wants to disqualify modularity maximization because it is strictly worse than the inferential equivalent he proposes. And that might be compatible with other non-inferential methods being preferable in some circumstances.

I unpack these nuances because I believe that Peixoto is de facto disqualifying non-inferential methods, while arguing that he is not, or not entirely. More on that later. I think that he is unintentionally ambiguous, and that his argument could be received as an unfair disqualification of non-inferential methods, echoing the fears of methodological imperialism of computer science [natural sciences *comment on this edit at the end of the post] over other disciplines. And I think that the preprint essentially argues for a fair but limited disqualification of non-inferential methods, but that is still up for debate.

Objections to the tweets about the second blog post

Lambiotte interacts again with Peixoto, in a similar manner as the first time.

Let me summarize their discussion, only retaining certain parts:

TP: “… *Every* descriptive method, whether you want it or not, is equivalent to some *implicit* generative model!”
RL: “Either I do not understand, or I disagree with this comment. …”
TP: “I explained this in …” (explanations ensue, in multiple tweets)

From there, the discussion branches out into six different threads. Peixoto and Lambiotte are having multiple arguments at the same time, as Twitter allows. Although these threads somewhat interfere with one another, I kept them separate and added timestamps for context.

Branch 1
RL (6:10 PM): “Well, the Markov chain and the the graph are equivalent, so unsure about this argument.”
TP (6:18 PM): “A random walk is a stochastic process that has the adjacency matrix as a *parameter*. It does not model it.”
RL (6:29 PM): “I just meant that a Markov chain and the adjacency matrix are equivalent. It does not model it, finding structure in one is the same as finding structure in the other, isn’t it?”
TP (6:48 PM): “Here’s an example that maybe will make it clearer. …” (an example follows and the thread stops there)

Branch 2 (it branches out of a slightly different place)
TP (11:25 AM): “Whenever you *do* use it for inferential purposes w.r.t. the structure (i.e. your answer to the litmus test is ‘yes’), then what you are doing, whether you want it or not, is equivalent to inferring this hidden model.”
RL (6:13 PM): “I find it difficult to buy statements like ‘possible rules that were used to create them’, etc. The generation of a network is complex process, involving homophily, triadic closure, social influence, etc.”
TP (6:17 PM): “I don’t follow. You just listed several possible rules that can create networks. What don’t you buy? It’s OK to criticize the SBM (or any other model), but not the concept of statistical inference.”
RL (6:28 PM): “Even if you find communities via inference, I am unsure that the partitions are likely to have been responsible for the observed empirical network.”
TP (6:32 PM): “This is such a strange statement. The objective of inference is not to find ‘the truth’, but the most plausible explanation from a set of possibilities. You put statistical inference to a standard so much higher than you ever put Markov Stability or any other method.”

Branch 3
RL (6:14 PM): “A SBM, corrected of not, is thus clearly too unrealistic to help understand the formation of the network.”
TP (6:22 PM): “We can only talk about models in comparison to other models. Can you say that the SBM is more or less expressive that whatever lies behind Markov Stability? Clearly the SBM is more expressive than the model hidden behind modularity. In any case, I’m not advocating for the SBM in particular, but to the idea of using explicit models.”

Branch 4
RL (6:16 PM): “Of course, ‘inferential methods’ will do a good job to extract/explain communities from SBMs, but what would be the argument to use them on empirical data, or even on networks generated by other models (e.g. random geographic graphs, random hypergraphs, etc.)?”
TP (6:26 and 6:37 PM): “The argument for the SBM is the same as for histograms: they are not supposed to be right, only to approximate. This they do fairly well. But I’m not trying to argue for the SBM. Only to point out that if the objective is to do inference, there’s no option but to use a model.
The point I’m trying to make is that there is a *formal* equivalence between any community detection method and the statistical inference of *some* model. Therefore, it’s nonsensical to criticize the SBM (or any model) in favor of one that you don’t even know how it looks like.”

Branch 5 (6:17 PM) Just two tweets that I skip here.

Branch 6
RL (6:27 PM): “I do not mean to criticise statistical inference, but this is not the only viewpoint when analysing data, especially for complex data where one does know the model that generated them.”
TP (6:40, then 6:58 PM): “It is a misconception that statistical inference is only useful when we know the true model. If this were true, it would never be useful. If we set out to find communities in networks, we are not setting out to model every aspect of it. I’ve heard this argument before, but I suppose I don’t understand it. In the paper I outline two objectives: to describe or to infer. Clearly, if the objective is to infer, statistical inference is the only game in town. If the objective is to describe, you can do whatever. But if you are describing, you should refrain from making inferential statements (e.g. ‘these nodes had a higher probability of being connected’, etc). As soon as you do that, you fall back into inference. And if you are just describing, then it’s not a problem to overfit. So if you see a face on a piece of toast, or communities in random graphs, that’s OK. If that’s a disturbing result, then it’s only because your objective was inferential all along.”

Lambiotte’s objection targets the following point of Peixoto’s: descriptive methods are basically inferential methods that do not disclose the model they fit. Let me first clarify that Peixoto’s work also makes the point that descriptive methods are bad inferential methods, but this is not the point of contention here. Lambiotte’s objection is about whether or not non-inferential methods can be seen as just inferential methods with an implicit model. His second intervention highlights how problematic it is to fully know the processes that have determined a given empirical network (i.e., in real life). Peixoto at first interprets the objection as questioning the principle of Bayesian inference, but Lambiotte makes it clear that it is not. It is instead questioning whether disclosing the model is what matters most in empirical situations where one cannot know the underlying processes anyway. In response, Peixoto argues that statistical inference is always useful, even when we don’t know “the true model”, because stating the model is necessary to making “inferential” statements. I will return to this point. Finally, Peixoto makes the “face on a piece of toast” argument again, without the picture this time.

I have four highlights. First, once again, the dispute is respectful and looks like a collective inquiry for truth (for agreement). Both parties are open about the limits of their own understanding: “either I do not understand or I disagree”, “I don’t follow”, “I’ve heard this argument before, but I suppose I don’t understand it.” Disclosing the points of misunderstanding is instrumental to determining the depth of the disagreement: does it lie in the form or in the content? Does the trouble come from just a misreading or from the challenging of an actual scientific statement? And which one? Such inquiry is the goal of the whole exchange, as explicitly stated by Lambiotte in the beginning. This is the cultural norm of science, and not of Twitter in general. In this Twitter subspace, science prevails.

Second, remark that not only do Peixoto and Lambiotte acknowledge the difficulty of understanding each other, but their responses also seem to partially miss the other side’s point. They might agree to disagree, but they do not easily agree on where the disagreement lies, hence this collective effort to locate it. Lambiotte, ultimately, did not acknowledge that the contention was closed, and I do not assume it was resolved.

Third, I want to highlight that Peixoto paints statistical inference as something that it is not OK to criticize, and something that is strictly superior to what he calls “descriptive methods” in every situation. I think it reflects a common but not universal position in computer science [physics *comment on this edit at the end of the post].

Fourth, the face on a piece of toast again, once more used as ridicule: “So if you see a face on a piece of toast, or communities in random graphs, that’s OK.” Or is it just me? I think that finding communities in a random graph can be understood as an absurd statement, hence the contrast between “that’s OK” and the ridicule of the “face on a piece of toast”. But arguably, it depends on what you call a random graph: random according to which model? This time, Peixoto makes a connection to the test presented in the first blog post. I will reformulate it using Peirce’s semiotics, because it underlines the role of causality in meaning. In Peirce’s semiotics, smoke signifies fire because it indicates it, because it is caused by it. Peirce calls smoke an “index” or “indexical sign” of fire. From there, Peixoto’s “litmus test” can be restated as follows:

  • If you consider that whatever you see is real, i.e. if the face on the toast is a face because we see it as a face, then your approach is descriptive and you can use whatever method you want.
  • If you consider what you see as an index of something else, i.e. the shape on the toast is only a face if it was caused by an actual face, then your approach is inferential and you must use inferential methods.

In this analogy, seeing communities in a random graph is framed as something as ridiculous as seeing Jesus on a piece of toast. But this assumes a specific account of how our observations acquire meaning. What about other accounts? This open question is why the controversy resists closure.
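To make the “communities in random graphs” point concrete, here is a minimal Python sketch (my own illustration, not code from the preprint or the blog posts). It runs modularity maximization, via networkx’s greedy heuristic, on an Erdős–Rényi random graph, which by construction has no planted community structure, and it still reports several “communities” with a clearly positive modularity score.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# An Erdős–Rényi random graph: edges are placed independently at random,
# so there is no planted community structure to recover.
G = nx.gnp_random_graph(n=300, p=0.03, seed=42)

# A descriptive method (greedy modularity maximization) still partitions it.
communities = greedy_modularity_communities(G)
Q = modularity(G, communities)

print(f"Found {len(communities)} 'communities' with modularity Q = {Q:.2f}")
# A clearly positive Q on a structureless graph is the face on the toast:
# fine if your goal is description, misleading if your goal is inference.
```

Whether this output is a problem or not is exactly what the litmus test above is meant to decide.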

The tweets about the third blog post

These tweets received no objections, and that deserves a highlight. Indeed, the post is the most explicit call for disqualification, as its title shows: “Modularity maximization considered harmful.” Peixoto tweets it as “something tame and uncontroversial” and I don’t know whether this is humorous or not. Was he being sarcastic because he expected some pushback, or was he actually confident that the point was consensual? In any case, as we have seen, the superiority of inferential methods is commonly accepted, and these tweets were not debated on Twitter.

Non-inferential methods considered useful?

Since I commented that I found Peixoto ambiguous, I want to document what he says in his blog posts about the usefulness of non-inferential methods, one of which is modularity maximization. This helps delineate what exactly he aims to disqualify, and in which situations. I will highlight the key elements.

Note: I will entirely skip the argument about when and why inferential methods are more appropriate. Keep in mind that this narrow focus misrepresents his full argument.

First blog post

“Here we point out the major differences between [descriptive and inferential community detection methods] and discuss how to decide which is more appropriate, and also why one should in general favor the inferential varieties whenever the objective is derive interpretations from data. … descriptive clustering approaches are the method of choice in certain contexts.”
👉 Inferential methods are more appropriate in general, but non-inferential methods are more appropriate in certain contexts.

“A merely descriptive account of the image can be made … However, an inferential description of the same image would seek instead to explain what is being seen.”
👉 Inferential methods are superior because they can explain (because explaining is superior to describing).

“We emphasize that the communities found in fig. (b) are indeed really there from a descriptive point of view, and they can in fact be useful for a variety of tasks.”
“If the answer to [the litmus test] is “yes”, … a purely descriptive approach may be appropriate since considerations about generative processes are not relevant.”
👉 Non-inferential methods can be useful.

Second blog post

“Communities found [by the (non-inferential) Infomap method] could be useful for particular tasks, such as to identify groups of nodes that would be similarly affected by a diffusion process. This could be used, for example, to prevent or facilitate the diffusion by removing or adding edges between the identified groups.”
👉 The Infomap method can be useful.

“Behind every description there is an implicit generative model. From a purely mathematical perspective, there is actually no formal distinction between descriptive and inferential methods, because every descriptive method can be mapped to an inferential one, according to some implicit model. … The only difference to a direct inferential method is that in that case the modelling assumptions are made explicitly, inviting rather than preventing scrutiny.”
👉 Inferential methods are superior because they make their assumptions explicit.

(I skip a lot of other quotes that make the same point)

Third blog post

“Despite its widespread adoption, [modularity maximization] suffers from a variety of serious conceptual and practical flaws, which have been documented extensively … The most problematic one is that it purports to use an inferential criterion — a deviation from a null generative model — but is in fact merely descriptive.”
👉 Modularity maximization is inherently flawed because it pretends to be inferential while being non-inferential.
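As a reminder (my notation, not a quote from the post), modularity compares the number of edges observed inside groups with the number expected under a null model; the k_i k_j / 2m term below is the “deviation from a null generative model” that the quote refers to:

$$Q = \frac{1}{2m} \sum_{ij} \left[ A_{ij} - \frac{k_i k_j}{2m} \right] \delta(b_i, b_j)$$

where A is the adjacency matrix, k_i the degree of node i, m the total number of edges, and b_i the group of node i. Maximizing Q over partitions is the method under discussion.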

“In the table below we summarize some of the main problems with modularity and how they are solved with inferential approaches.
[There is a table with 5 entries]
Because of the above problems, the use of modularity maximization should be discouraged, since it is demonstrably not fit for purpose as an inferential method.”
👉 Modularity maximization should be disqualified as an inferential method.

“At a fundamental level, all of its shortcoming are shared with any descriptive method in the literature — to varied but always non-negligible degrees.”
👉 All non-inferential methods should be disqualified as inferential methods.

Is there ambiguity?

These points boil down to three arguments:

  1. Non-inferential methods can be useful
  2. Inferential methods are superior because they can explain
  3. Non-inferential methods are inherently flawed in inferential settings

None of that is ambiguous by itself, and the point on the usefulness of non-inferential methods is more than a concession in principle, since Peixoto provides examples. Peixoto also provides a pretty clear frame for the superiority of inferential methods: the problems identified by his “litmus test”, which we could call inferential settings.

If there is ambiguity, or potential for misunderstanding, it is, I think, in the cocktail of the three arguments. It sounds like a syllogism: non-inferential methods are useful, but inferential methods can do strictly more, so non-inferential methods are strictly less useful. I think that this argument is flawed, because the superiority of inferential methods is valid in a narrower space than the usefulness of non-inferential methods, so we cannot say that inferential methods are superior in general. Or at least, I don’t see how Peixoto’s paper supports this point.

A final comment on the possible misreading of this work as methodological imperialism. It comes from two points. First, the ambiguity I just mentioned, and in particular the fact that the disqualification of a popular method (non-inferential community detection) relies on the epistemic views of computer scientists [researchers from other fields *comment on this edit just below] (inferential settings). And second, the fact that the disqualification is also made through ridiculing analogies.

I do not have any simple answer to the first potential misunderstanding, but I have one for the second: analogies where there is less ridicule on the side of “descriptive” approaches. For instance a rainbow: we know that it does not exist, it moves as we try to reach it. But at the same time, it is not ridiculous to see it, because there is a much broader intersubjective agreement on its existence. Even though it only exists in our eyes.

Comment on the “computer science” edit

I initially painted Peixoto as a computer scientist in the blog post, but that was just wrong, if only because his approach is typical of physics: it comes from a nomothetic perspective (seeking universal laws), which underlies the argument on the superiority of inferential methods as well as the connection to information theory. You will see his reaction in the comments to this post. My bad!

There is more to it. When I used “computer scientist”, I had in mind something like “the algorithm designer as perceived by its users”. I am thinking for instance of the PhD students who use Gephi in media studies and digital methods, or in the digital humanities. I want to account for their perspective, because they need to be critical of their apparatus, but they lack much of the knowledge necessary to understand it in detail. It is simply not their field. And in that culture, there is a latent fear of methodological imperialism that we need to talk about.

Here is for instance what Johanna Drucker, a prominent figure in the humanities, writes (she is an expert of visualization):

Most, if not all, of the visualizations adopted by humanists, such as GIS mapping, graphs, and charts, were developed in other disciplines. These graphical tools are a kind of intellectual Trojan horse, a vehicle through which assumptions about what constitutes information swarm with potent force. These assumptions are cloaked in a rhetoric taken wholesale from the techniques of the empirical sciences that conceals their epistemological biases under a guise of familiarity.

Drucker, J. (2014). Graphesis: Visual Forms of Knowledge Production. Cambridge, MA: Harvard University Press.

More quotes of that kind in this post. It’s complicated, because such criticism is legitimate to raise, but that does not mean it is always valid. I do not want to dismiss it, because it is necessary. But I also want to say that there are many misunderstandings about who created these techniques and why, who implemented them, and the reasons why they circulated to distant fields like the digital humanities. This quote shows that Drucker puts all of this into the huge bag of the “empirical sciences”.

All of this just to say the following: the algorithmic techniques we find in various tools were developed in many fields, including the social sciences and humanities (e.g., Linton Freeman on centrality metrics), yet they might be received by the users of these tools as an emanation of the natural sciences, the empirical sciences, computer science, or whatever. The field they are perceived to come from is not necessarily the one they actually come from.

That being said, it was all the more wrong of me to characterize Peixoto’s work as computer science, because it carries the epistemic perspective of physics. I was sloppy, sorry. I actually do not know how community detection is perceived by its users in digital methods and the digital humanities. The more I think of it, the less I am convinced that it is perceived as emanating from computer science.

What is Web Cartography? by Franck Ghitalla

13 min. read

With Dominique Boullier and OpenEdition Press, we edited and published Franck Ghitalla’s posthumous book, Qu’est-ce que la Cartographie du Web?. The book is in French, and Franck Ghitalla is mostly known in France, but it is worth introducing his work to a larger, English-speaking audience (and if you speak French, even better! I have more resources for you at the end). His book is in open access online, which allows you to Google-translate it in-place. Check for instance my foreword, or browse the chapters from the summary.

Portrait of Franck Ghitalla
Franck Ghitalla the “champion of networks”, as introduced by Le Monde in 2012.

In short, Franck was a linguist who used to teach at a French engineering school (UTC). He tragically died in December 2018, while teaching his famous course, the one I had followed in 2005 as a student, which had started our collaboration and friendship. From a French perspective, his originality was to have imported early the concepts of network science, for instance scale-freeness, from Barabási’s bestseller Linked. That worked very well because his UTC students were eager to discover things and build stuff. The visions he inspired were immediately implemented into prototypes of web crawlers, digital libraries, network visualization tools, and more. That dynamic gave birth to Gephi. His students also founded startups like Linkfluence or Linkurious. Franck was renowned as an innovator. He was a very singular figure in French academia, which caused him some trouble. He narrates it in the book. For instance, he was accused of Americanizing young French minds, which I find hilarious because, from a US standpoint, Franck looks so irredeemably French. That is what I would like to briefly explain here, because it is important in a European context, where other researchers may find Franck’s perspective unique and interesting, and relate to it.

Qu’est-ce que la Cartographie du Web? is available in paperback version.

Franck’s starting point was knowledge. His interest in networks was driven by the idea that information has a geography. He saw networks (graphs) as a knowledge technique, in the spirit of Vannevar Bush, and the web as an empirical field to investigate, like the pioneers of network science (Albert-László Barabási and Réka Albert; Lada Adamic and Natalie Glance…). He was drawn to the web because it was datafied knowledge. This unique form of writing allowed him to compute visualizations revealing the geography of information. But it is worth noting that modeling was never his goal. Contrary to the network science movement in the USA, largely founded by statistical physicists, Franck did not trust the machine over the person, and was not willing to blindly delegate analysis to algorithms. He wanted to see for himself, to explore and make discoveries. Franck was interested in information visualization for its hermeneutic potential: he did not want to use algorithms to mechanize the work of interpretation, but to enrich it. He understood information cartography as a craftsmanship, a qualitative art enabled by quantitative computations (but not limited to them), and an enlightened form of reading (and writing). For Franck, information cartography was not an automated process of reduction, but an instrumented process of revelation. It was a hermeneutic practice.

Original illustration of the Memex from the Life reprint of “As We May Think” by Vannevar Bush. It inspired hypertext.

His endeavor to seek interpretive opportunities in datafied knowledge was built upon the idea that complex phenomena have something to show, something more than universal laws, something specific to each empirical case. In that, he was meeting a forgotten idea of Gabriel Tarde, also promoted by Bruno Latour: the idea that complex phenomena like society, culture, and knowledge consist of nothing more than what composes them, like individual interactions. The idea is rooted in the philosophy of Leibniz, who wanted to show that thinking the world does not require the idea of God. Leibniz conceptualized the monad as a way to give meaning to things without resorting to God’s will, the soul, Plato’s ideal forms, or any other avatars of divinity. Leibniz conceptualized a purely material world, and Tarde reused the concept of the monad for the same purpose, to state that even the most complex collective behaviors depend on nothing more than local interactions. Unfortunately, Tarde’s rival Émile Durkheim famously won the battle of ideas, and established sociology as a quantitative science. For Durkheim, on the contrary, complex collective phenomena are sui generis entities: they exist on their own, independently. They are something more than what composes them; they exist as something else. In this perspective, social entities like nations exist on a different level than individuals. They have an essence, something reminiscent of a soul. Durkheim supports discarding individual information by using statistics (i.e., reductionism) because it gets you closer to the essence of collective phenomena. In a Tardian perspective, that essence is an unnecessary assumption. So if we are to be radical empiricists and minimize our assumptions, then we must get rid of the essence (or hidden truth, or universal law) and find another role and justification for reductive methods. You can read the long and better version of this argument in The Whole is Always Smaller than its Parts (Latour et al., 2012).

For Tarde, society is entirely contained in individuals; it is entirely material. Yet, and that is the crucial point, this is something that we can only see with the right tools. And, as Tarde admitted, such tools did not exist in his time. He knew that his argument was quite speculative, which also explains why Durkheim seemed to be right. This is where the web and big data, in the mid-2000s, were relevant to people like Franck Ghitalla and Bruno Latour, because they believed that those could be the first real-world realizations of Tardian tools, allowing us to track collective phenomena down to their smallest components. This explains why modeling, and statistical reductionism in general, was not on Franck Ghitalla’s mind. He was confident that he would see collective phenomena in qualitative data sets, if they were large enough, and with the right tools. He trusted the topology of the web to offer an image of the collective interests of humans, an overview of knowledge. A distorted image, certainly, but still truer to our minds than the way libraries and encyclopedias were organized. He saw the web and other data sources as fields for phylogenetic investigations à la Foucault, and he did not presume what he would find. He had no grand theory; he only bothered with finding good empirical cases and interpreting them. He was a radical empiricist, and a practitioner. He did not conduct quantitative experiments to test hypotheses formulated by theorists, in contrast to the literature of network science (which he nevertheless loved). The only theoretical elements he proposed consisted of down-to-earth advice based on accumulated observations.

I remade a number of images for the book, and I made English versions. I feature them here (licensed as CC-BY, feel free to reuse) with a short version of the argument about them. This will give you a non-representative idea of the book’s content. Check my foreword for a more representative overview.

The web as layers

The web has been described as scale-free, although this is now controversial; I have written on that topic, and I will keep things simple here. The fact is that a few web pages concentrate most of the hyperlinks, while most pages have almost none. And this is also true at the level of websites. If we were to sort pages from the most linked to the least linked and plot the number of links, like in the figure below, we would see a power-law distribution, or at least a heavy-tailed distribution, which amounts to the same for our purposes. This type of distribution is sometimes called 80/20, because 20% of the pages would have 80% of the links; the numbers may vary, but the important thing is that the distribution is extremely skewed.

Let us call the top of the curve the “hubs” (the most connected pages) and the rest the “long tail” or “heavy tail”. This distribution has a curious property: the tail is itself a power-law distribution (figure below, bottom). That means that if you remove the head, the tail is still composed of its own head and its own tail. This is, in short, why it is called scale free: it looks the same at different scales (if you consider that zooming is focusing on the tail).

The power law distribution
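If you want to reproduce such a curve, here is a minimal Python sketch (my illustration, not material from the book). It uses a Barabási–Albert graph as a stand-in for the web graph, which is an assumption made only to obtain a heavy-tailed degree distribution:

```python
import networkx as nx
import matplotlib.pyplot as plt
from collections import Counter

# A preferential-attachment graph as a stand-in for the web graph (assumption).
G = nx.barabasi_albert_graph(n=10_000, m=2, seed=1)

# Count how many nodes have each degree (number of links).
degree_counts = Counter(d for _, d in G.degree())
degrees, counts = zip(*sorted(degree_counts.items()))

# On a log-log plot, a heavy-tailed distribution is roughly a straight line:
# a few hubs with many links, and a long tail of nodes with very few.
plt.loglog(degrees, counts, marker=".", linestyle="none")
plt.xlabel("number of links (degree)")
plt.ylabel("number of nodes")
plt.show()
```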

Franck had a clever idea that we refined over the years. We think of the web as a series of layers, and these layers correspond to the distribution of links. From top to bottom, we have: the high layer, with the most connected websites like Google and Wikipedia; the intermediate layer, with the moderately linked websites; and the deep layer, with mostly disconnected resources. In this model, the specificity of the intermediate layer is to contain aggregates of web documents. In short, communities; I will return to that. The deep layer also deserves an explanation. It is called deep because it is so poorly linked that it is hard to reach by following hyperlinks. It contains most of the information of the web, but each piece of information is rarely accessed. It notably consists of specialized databases and storage systems that are part of the web infrastructure, but with a logic of resource provision, not of curated hyperlinks. For instance, Wikimedia Commons. In other words, these are not resources that people link to intentionally. By contrast, the intermediate layer contains resources that we naturally see as documents, such as blog posts, articles, or tweets. We share them via hyperlinks, and that creates communities.

The three layers of the web

In terms of link distribution, this requires thinking of the power law not in two parts (the head and the tail) but in three: the hubs, the aggregation area, and the tail (figure below). The middle part is the intermediate layer, not as connected as the global hubs, but connected enough that a geography can emerge.

It is worth mentioning that, as the curve suggests, the demarcations are not clear-cut. Similarly, the layers are not clear-cut either. In reality, the layers form a continuum, but it is useful to think of them as different subspaces with different properties, for simplicity.

The intermediate layer is the middle part of the power law
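As a rough illustration of this three-part reading, one can split the nodes of a graph into layers by degree quantiles. The cut-offs below are arbitrary assumptions of mine, not values from the book:

```python
import networkx as nx
import numpy as np

# Same stand-in graph as above (assumption: BA model instead of a real web crawl).
G = nx.barabasi_albert_graph(n=10_000, m=2, seed=1)
degrees = dict(G.degree())
values = np.array(list(degrees.values()))

# Arbitrary degree cut-offs: roughly the top 1% of nodes form the "high layer",
# the next ~19% the "intermediate layer", and the rest the "deep layer".
high_cut = np.quantile(values, 0.99)
mid_cut = np.quantile(values, 0.80)

high = [n for n, d in degrees.items() if d >= high_cut]
intermediate = [n for n, d in degrees.items() if mid_cut <= d < high_cut]
deep = [n for n, d in degrees.items() if d < mid_cut]

print(len(high), len(intermediate), len(deep))
```

In real data the demarcations are fuzzy, as said above; the thresholds only serve to make the layers tangible.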

The direction of the hyperlinks is crucially important. Each layer has different properties, both in terms of how it is linked to itself, and to the other layers. The high layer is massively cited by every other layer (figure below, left). Most links converge to the top, so the web behaves like an ocean where the internet user tends to bubble to the surface. Importantly, the deep layer points directly to the high layer, so there is no need to travel through the middle to reach the surface. There are always shortcuts to the top.

The deep layer is the opposite, it is only accessible from the intermediate layer (figure below, center). It is as difficult to go deeper as it is easy to reach the top. The crawler or internet user has to travel through the intermediate layers, from sublayer to sublayer, to reach the depths.

The intermediate layer is basically a bridge between the surface and the depths (figure below, right). These links are asymmetrical, as they tend to be bottom-up, but the layer is nevertheless a passage point. More importantly, it consists of aggregates that are also linked to each other.

Each layer is a space with different topological properties.

The intermediate layer is composed of aggregates. Each aggregate is composed of a core and a periphery. The core is composed of hubs and authorities. The hubs cite a lot of the aggregate’s resources (portals), while the authorities are cited a lot from within the aggregate. The core is highly connected, while the periphery is less connected.
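The hubs/authorities vocabulary echoes Kleinberg’s HITS algorithm, which computes both scores from the direction of links. Here is a minimal sketch (my illustration, on a tiny hypothetical citation graph) to make the distinction concrete: a hub points to many good authorities, an authority is pointed to by many good hubs.

```python
import networkx as nx

# A tiny directed graph of hypothetical pages citing each other.
G = nx.DiGraph([
    ("portal", "paper_a"), ("portal", "paper_b"), ("portal", "paper_c"),
    ("blog", "paper_a"), ("blog", "paper_b"),
    ("paper_c", "paper_a"),
])

# HITS gives each node a hub score (it links to good authorities)
# and an authority score (it is linked to by good hubs).
hubs, authorities = nx.hits(G)

print("top hub:      ", max(hubs, key=hubs.get))                 # e.g. "portal"
print("top authority:", max(authorities, key=authorities.get))   # e.g. "paper_a"
```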

In fact, each aggregate replicates the whole structure of the web. Intuitively, the core is closer to the top, and the periphery to the bottom. There might also be sub-aggregates, on multiple levels. This structure is quite intuitive if we think of it as communities. We may see gamers as a community with its own hubs and authorities, but it certainly has sub-communities like FPS or RPG, and then possibly sub-sub-communities by sub-genre or game. And there might be bridges or overlaps between different communities (aggregates are not clear-cut either). The important point is that the distribution of hyperlinks is heterogeneous and creates a number of self-organized localities.

Each aggregate is not just a denser subspace, it is also thematically centered. This is why aggregates are communities: their resources share a common interest or practice. They are not purely topological. They are subspaces where content and structure correspond to each other.

The intermediate layer is the only one where we can empirically observe a geography of information, because the deep layer is not connected enough, and the high layer is too connected. In other words, the high layer is everywhere, while the deep layer is nowhere. Only the intermediate layer offers a “somewhere” that we can study.

Structure of an aggregate (a community)

Different aggregate structures are possible. It depends on the core (or center, in the figure below), and it depends on the less connected websites too. The more hyperconnected the center, the stronger the aggregation. But a community may also exist as links between the less connected actors, which we call here a lattice. Community activity does not have to go through relation brokers (the center). It might be hierarchical or, on the contrary, flat. But to be considered an aggregate, i.e. a local subspace, it needs either a lattice plus a weak core, or a strong core.

Different types of aggregates and non-aggregates

The layers also correspond to different practices of information diffusion. The high layer corresponds to a broadcast model, where actors, typically mainstream media, push information to less connected actors (the public). On the contrary, the aggregates correspond to a viral model where the information spreads in the community between actors of an equivalent level of connectivity and visibility (horizontally).

Different information diffusion models in different layers

These models are not mutually exclusive. As shown in the figure below (c), a hybrid scenario is also possible, where the information circulates virally first, then gets broadcast by a highly visible actor, then returns to a viral circulation in the intermediate layer.

Three scenarios of information diffusion

This hybrid scenario is in fact a pretty good example of fake news laundering. A lot of fake news circulates in low-visibility layers, as partisan bullshit or humour. It is pretty difficult for these contents to disseminate widely from these subspaces. High-visibility actors may however launder these contents and promote them to spaces where they could not otherwise circulate. This laundering has been relentlessly documented and is critical in providing reach to fake news. The fake news may be debunked and return to a viral circulation, but the laundering will have spread it to localities of the intermediate layer that it could not have reached through viral circulation alone.

I have no more original images to share, and I will leave you at that. More details in Franck’s book. I hope I have piqued your interest, and in that case, have a good read.

Links

An anecdote about the resistance of things

3 minutes read

I had to print a roll-up kakemono, and I ticked a cheap option (5€) to have an actual human check my file. I thought: stupid mistakes happen, better have someone double-check, for that cheap.

Later that day (today), I get an unfortunate email: my file is non-compliant. Due to a bad resolution, they say, and the printed result could be pixelated. But is it, though? I double-check. The kakemono is a composite image from different sources (see it at the end), and most of them are well above the required resolution (a modest 150 dpi). Not the background image, though; but it is blurry by nature and it does not matter. I can’t see anything else. There might be an option to tell them to proceed anyway, but I try a quick fix. I add a slight blur to the background image in case some pixels would be visible, then I export everything as a 300 dpi raster image, and import it back into the PDF format they require.

At that moment I thought that those 5€ had just cost me time and money, as the printed result would presumably have been exactly the same. I thought so because their feedback was not about the kind of mistake I had expected to make; in fact I had not made the feared mistake, since I knew precisely what I was doing all along. And I had better things to do. I was frustrated.

But now I think: what if that frustration is precisely the sign that it worked? I started to accept the idea that the background image might have been actually pixelated, and that my quick fix was indeed what I was supposed to do. I started to see things differently once the initial frustration was gone.

So was it a good deal or not, these five euros? Here is the interesting thing: it can only work by being annoying. If you have an idea of the kind of mistake you could make, you have already checked the file for it by the time you tick the option. No, you go for it as a protection against unexpected issues, even though you think your file is right. So every time it actually proves useful, it is bad news in the eyes of the user. Frustration is inherent to the success of the feature.

This happens because the feature is expressed as a resistance of the system. It resists your mistakes. The problem comes from you. Although it also works for you; you’re the client and the problem. You give the system the right to frustrate you. You pay for that. Because you know that you’re also that person who fucks up.

Now, you may not accept this resistance, you may disagree. You may think that it’s wrong, and you’re right. Your emotions, if you’re like me, will push you down that road. Remark that it depends on the feedback. If it told me: “check that the background image is not pixelated”, I might have accepted it better. But that’s because I have some degree of judgment about what I am doing.

Now picture a slightly different situation. You lack judgement about what you are doing. The system resists, but in the end you have no clue whether it is right or wrong. And it prevents you from doing what you are supposed to do. Would you think that it works? Would you feel it that way?

Tool makers don’t want their users to feel frustrated, hampered. Users make the success of a tool. A tool perceived to malfunction would presumably be abandoned by its users. Or would it? I will leave you at that: I think that scientific instruments are too docile, and that’s in part because their designers, for instance the computer scientists who publish algorithms, believe that it is not worth the pain of frustrating their potential users. If they fuck up, they fuck up; especially when they will never know.

The damned kakemono. More about its content later on…