11-minute read + 6 min for the bonus
This is just a few anecdotes and remarks feeding into data science and network stuff. Anecdotes first.
This morning, I listened to French radio. Camille Kouchner was interviewed about her book that contributed to the French “me too” movement a year ago. A book about incest she witnessed as a kid. She was asked: how does one learn to keep silent? She answered that we do not learn silence, only to talk. In her family, kids were taught not to speak unless they had the right words. “I felt like I lacked the right words, so I shut up.”
Finding the right words is an endless topic. One of my favorite books is The Order of Things by Michel Foucault. The English title is certainly appropriate, but the French one is stunningly more ambitious: Les Mots et les Choses. The words and the things. What would not fit under such a large umbrella? A nice takeaway from that book: even the process of knowing is not universal. Thinking is situated. No surprise, then, that words can betray us.
In mid-December, Sabine Hossenfelder published this video: Does Superdeterminism save Quantum Mechanics? Or does it kill free will and destroy science? Sabine is a pretty famous physicist and YouTuber. Her videos generally aim at educating a non-physicist audience about quantum physics. This time, her point comes from her own research, and defends a controversial interpretation of quantum physics: superdeterminism. Before she even gets to what it means, she has to explain that the concept is not as dramatic as it sounds: it is not more deterministic than determinism; it is just regular determinism. Why is it called “super-”, then? She did not pick that word; she is just stuck with it, and has no other choice than to fight against its implicit meaning as a prelude to making her case. The word is working for someone else.
This reminds me of the big bang. The expression. Obviously.
English astronomer Fred Hoyle is credited with coining the term “Big Bang” during a talk for a March 1949 BBC Radio broadcast, saying: “These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past.” … It is popularly reported that Hoyle … intended this to be pejorative … but Hoyle explicitly denied this and said it was just a striking image … .
Wikipedia article on the Big Bang
The striking image of the expression “big bang” has probably worked for many masters. It might be more accurate to say that its work was captured by various people in different situations. It was put to work by Fred Hoyle, and at the same time, it worked on its own. It had many masters, and it was also its own master. So it is with “superdeterminism”. The term comes from John Bell, who put it to work as a straw man. For him, superdeterminism was more than determinism, and as such, it was a ridiculous and dangerous idea. In his mouth, the word was demeaning. But not anymore. Now it is widely understood as “misleading” (Wikipedia), because there is no “super” in superdeterminism, although determinism itself is still controversial. Sabine now leverages the word differently, as a pointer to John Bell’s beliefs, which she specifically challenges. Superdeterminism might change allegiance after all.
Note: more content about this video at the end of this piece.
Words work both ways. As with double agents, one can never tell who they really work for. It is unknowable. We have reached the end of the “work for” metaphor, as we admit that words perform in uncontrollable ways. I learned from Bruno Latour’s PhD writing courses that writing is largely about keeping the unintended meanings of whatever you write in check. That is why the reader is always right: I had to come to terms with the illusion that words meant what I thought they meant. Even if I am “right”, being misunderstood is still my problem. But of course I cannot really know what my words mean to others until someone else reads them and provides me with feedback. The process is iterative. As Latour often said, writing is rewriting.

A few days ago, Petter Holme, a well-known network scientist, posted on his blog about How the names of measures influence their interpretations. “Methods and their names have complex and sometimes detrimental relationships … [especially] when the names have a clear and relevant meaning in the vernacular.” Efficiency, centrality, complexity, accessibility, universality… These network concepts are misleading, because their definition is much narrower than what their mundane name suggests. Seemingly relatable, but with very precise and technical definitions. Can we do better, as researchers? As Petter notes, “if the dullness of your measure’s name makes it instantly forgotten, it will not serve the purpose of science, no matter how smart it is.” Method names are shaped by scientific cultures.
Just before the Christmas break, I co-organized a data sprint for the ADD project. Such an event is transdisciplinary by design. Not every participant is a data scientist, and the names of the methods we used created various problems. For example, we used topic modeling. We tinkered with the LDA technique. To many participants, that name meant nothing. Unpacking the acronym to “Latent Dirichlet Allocation” does not help, but it leads to a narrative about how it works. Unfortunately, how it works does not tell much about what it performs, how it compares to other techniques, or what its purpose is. Worse, if a participant were to search for it in Google, they may find the wrong LDA, another technique known as Linear Discriminant Analysis. We also tinkered with hSBM, aka hierarchical stochastic block model. Same story. How does a non-data-scientist receive such acronyms? As black boxes whose functioning is knowable in principle, but inaccessible in practice. Participants felt like they were borrowing sophisticated tools from another field. The names played against reappropriating the techniques.
But there is a much more friendly name at hand: “topic modeling”. It is relatable. It tells what the technique performs, as opposed to how it works. LDA and hSBM both give you topics, and from there we can build an intuitive understanding of what they do. Except that we run into another problem. The “topics” given by either LDA or hSBM do not look like topics to participants. The word “topic” is relatable but misleading. Participants were flexible with the meaning of “topic”. Our research questions involved topics (e.g., the Danish e-identification system), themes (e.g., digital citizenship), ontologies (e.g., how different actors conceive trust), matters of concern (e.g., the lack of understanding of algorithms). For us, a topic, a theme, an ontology, a matter of concern, a discourse, a subject… are different things. They have different epistemic functions. What role do the “topics” of topic modeling play? None of those. They are just bags of words found by a statistical technique that does not ensure they serve any given rhetorical purpose. They might, but that must be tested. Yes, LDA and hSBM have methodological commitments that specify what they mean by “topic”, but those commitments do not relate to the epistemic cultures of other fields. “Topic” is a false friend; it translates badly.
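To make that concrete, here is a minimal sketch of what LDA actually returns. It is not our sprint pipeline; it assumes scikit-learn and uses a tiny corpus invented for the example. Each “topic” comes out as a ranked bag of words, with no guarantee that it matches what anyone means by a topic, a theme, or a matter of concern.

```python
# A minimal sketch (not our actual sprint pipeline) of what LDA "topics" are in
# practice: ranked bags of words. Assumes scikit-learn; the corpus is invented
# for the example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "citizens distrust the new e-identification login system",
    "the algorithm behind the login system remains opaque to citizens",
    "digital citizenship requires understanding how algorithms work",
]

# Turn the corpus into a document-term count matrix.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

# Fit LDA with an arbitrary, user-chosen number of "topics".
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Each "topic" is just a weight vector over the vocabulary: print its top words.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top_words = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"topic {i}: {top_words}")
```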
Did we employ the expression “topic modeling” for better or for worse? I sincerely wondered, but ultimately, using the existing terminology is inevitable. Retrospectively I turn to a more relevant question: how should we leverage the expression “topic modeling” in productive ways? This is about building on top of a shared understanding of “topic” and recontextualizing it usefully. I very much like the idea of repurposing data and algorithmic techniques: overtly admitting that we use them in unintended ways, which entails finding alternatives to built-in validation methods (post-hoc interpretability).
Words do not just perform by misleading. They include and exclude people, they seduce and discourage minds, they show or hide methodological commitments, they smuggle implicit assumptions, they launder specious arguments… “Topic modeling” engaged participants while “LDA” and “hSBM” alienated them. But “topic modeling” also laundered the idea that reducing the concept of “topic” was necessary to its quantification. It put the participants in the false dilemma of either accepting the computer science version of that concept, or dropping the modeling altogether. With a different naming convention, other options would appear more clearly, such as taking the modeling for what it is, without calling the bags of words “topics”. We already work with borrowed data, so we can deal with borrowed methods. That is not the problem. Most social science and humanities scholars are used to detecting and clearing built-in assumptions anyway. But words can make the task easier or more difficult. Names shape the public of methods, and play an important cultural role.
In my previous post I accounted for the controversy around Tiago Peixoto’s promotion of inferential approaches to community detection. Tiago’s argument stems from the following alternative. Either we “articulate precisely what constitutes community structure”, or we do not. If we do, then our approach must use inferential methods. Else, our approach is what he calls “descriptive” and we may use inferential or non-inferential methods. An interesting part of his work is his endeavor to disqualify modularity clustering, a popular community detection method, essentially because it is a non-inferential method posturing as inferential. In that sense, it misleads its users, and can be “considered harmful”.
In practice, I have tinkered with three community detection algorithms: modularity maximization with the Louvain technique, modularity maximization with the Leiden technique, and Tiago’s inferential method. Sometimes (dare I say often?), the Louvain technique gives me more usable results. [Audience boos]
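For the curious, here is roughly what running two of the three side by side looks like. This is a hedged sketch, not a benchmark: it assumes networkx (version 2.8 or later, for its Louvain implementation) and graph-tool (for the inferential stochastic block model), and uses the karate club graph as a stand-in network; the Leiden technique lives in the separate leidenalg package and is not shown.

```python
# A rough sketch of running two of the three methods side by side. Assumes
# networkx >= 2.8 and graph-tool; the karate club graph stands in for a real
# network. This is not a benchmark, just the calls.
import networkx as nx

G = nx.karate_club_graph()

# Modularity maximization with the Louvain technique (networkx implementation).
louvain_clusters = nx.community.louvain_communities(G, seed=0)
print(f"Louvain: {len(louvain_clusters)} clusters")

# Tiago Peixoto's inferential approach: fit a stochastic block model by
# minimizing its description length (graph-tool implementation).
import graph_tool.all as gt

g = gt.Graph(directed=False)
g.add_edge_list(G.edges())
state = gt.minimize_blockmodel_dl(g)
blocks = state.get_blocks()
n_blocks = len({blocks[v] for v in g.vertices()})
print(f"SBM inference: {n_blocks} blocks")
```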
I had the chance to talk about it with Tiago. My observation is that sometimes his technique, and to some extent the Leiden technique, gives me too many small clusters, or, on the contrary, clusters that are too big. He interprets this observation as an effect of his technique being more accurate. In fact, one can precisely trick the Louvain technique into missing clusters because it assumes somewhat homogeneous cluster sizes. I agree with the argument; yet it does not make the Louvain technique less useful to me. In fact, I am just not doing what it seems I am doing. I am not trying to detect communities. Did I pretend to? It is complicated.
See, on the one hand I have heard myself call clusters found by modularity maximization “communities”. On the other hand, that name can generally not be taken at face value. From a sociological standpoint, the idea that an individual belongs to exactly one community, no more and no less, is ridiculous. If we were to operationalize the sociological idea of community, we would assume upfront that communities overlap, that belonging to one is gradual (not a yes/no), and probably not universal (not every actor agrees on who belongs to what). In fact, we should not even take for granted that a “community structure” necessarily consists of communities. Besides, we routinely use community detection in contexts where the idea of community has no literal meaning (e.g., in a network of keywords). In practice, we often repurpose community detection for something else. This is nothing new. But I admit that we still somehow pretend that we detect communities, because that is the name of the method, and because it states the scientific ground of our practice. But that grounding is not necessarily legit. That is a problem. Unfortunately, that problem is not the one solved by inferential methods. It belongs to what Tiago calls “descriptive approaches”.
The reason why modularity maximization works better for me has nothing to do with inferential methods. In fact, there could very well be a superior inferential method to craft; it just would not exactly aim at detecting communities. Its task must be different. But what? Can we even delineate how descriptive approaches repurpose community detection?
Let me freely admit my doubts about it. It paints my practice as somewhat inglorious, but never mind. The reality check is much needed. As you know, our actual practices do not resemble the convenient fictions we write in papers to contextualize our findings. Our actual practices are messier, more exploratory. We notably search for questions, although we pretend to seek answers. What I do most often is to use community detection to build an intersubjective statement about my network. I give myself some bricks to build a description. I do not care whether the found bag of nodes is a community. The description depends less on the bricks than on their assemblage. The bags are just contingent tools. However, I do care that they agree with the layout, else we cannot “see” the node groups properly. Or more exactly, different people would not agree that they see the same thing (intersubjectivity). And it also matters that there are neither too many nor too few bags of nodes, for the practical purpose of describing. Most of the time, that purpose is to craft a working hypothesis. Most of the time, that hypothesis will be thrown away and you will never hear of it. Even so, it has led to other hypotheses, and so on, until we find a version of a research question to which a reasonably solid answer can be found.
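To illustrate what “agreeing with the layout” means in that practice, here is a minimal sketch, assuming networkx and matplotlib and using a stand-in network: color the nodes of a force-directed placement by their cluster and check by eye that the colored groups occupy distinct regions. This is not a formal validation, just the visual check that supports the intersubjective statement.

```python
# A minimal illustration of the visual check: lay the network out with a
# force-directed algorithm, color nodes by cluster, and see whether the
# colored groups occupy distinct regions. Assumes networkx and matplotlib;
# the karate club graph stands in for a real network.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.karate_club_graph()

clusters = nx.community.louvain_communities(G, seed=0)
color_of = {node: i for i, group in enumerate(clusters) for node in group}

pos = nx.spring_layout(G, seed=0)  # force-directed placement
nx.draw_networkx(
    G,
    pos,
    node_color=[color_of[n] for n in G.nodes()],
    cmap=plt.cm.Set2,
    with_labels=False,
    node_size=80,
)
plt.axis("off")
plt.show()
```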
This point deserves a piece on its own, a visual one. That is for another time and I must conclude, so back to the matter of words. The expression “community detection” has done a lot of work for many people. It was relatable, which is a blessing to everyone and a curse to some. For Peixoto, the expression helped justify the superiority of inferential methods, precisely because they state what constitutes a community structure explicitly. For SSH scholars who occasionally analyze networks, it offered a methodological justification for a useful technique; but I do not take for granted that this justification was appropriate. Like Sabine Hossenfelder with “superdeterminism”, network analysts are stuck with the expression “community detection”, and should probably pick a fight with it to regain some methodological agency. Like Sabine, they might turn the allegiance of the expression, and have it work for them. Understanding it helps me write about it.
Bonus on Sabine Hossenfelder’s argument on superdeterminism
I encourage you to watch Sabine Hossenfelder’s video. I find it fascinating for multiple reasons. Of course it is about the wave-function-collapse and spooky-action-at-a-distance stuff many of us love to hear about. But my account will not go into these details, because I am primarily interested in how she fights the uphill battle against the doxa. The way she argues shows something fantastic: that quantum theory is deeply shaped by what physicists consider weird or not. And it is no accident that it shows up here: it has been Sabine’s underlying argument across many videos (see also this one about mathematical beauty shaping particle physics).
We need context to understand. Everyone agrees that there is something weird about quantum mechanics. Not just “weird” as in counterintuitive, although it certainly is. “Weird” as in suspicious, deranging, uncomfortable. And that weirdness is a guide to physicists, insofar as they desire to get rid of it. That is how it shapes physics.
It seems that we cannot get rid of that weirdness, because theories that fix one kind of weirdness always create another kind. But it also means that we can move weirdness around. Maybe, certain kinds of weirdness are less weird than others, and that would be some progress. This is what Sabine’s argument is about. It offers to trade one weirdness for another.
Sabine’s argument goes against a popular story. The tale goes like this. Einstein was an absolute genius, but he made a mistake. He refused to accept the weirdness of quantum mechanics. He was uncomfortable with what he called “spooky action at a distance”. So he concluded that quantum physics was incomplete, that the weirdness had to come from our limited understanding, that we would find a non-spooky theory in the future. Surely, reality could not possibly be that weird. Then John Bell took up the challenge, and set his convictions against Einstein’s. He designed an experiment that many thought impossible. He found a way to test whether the wave function had hidden variables, as Einstein thought, or whether it was probabilistic in nature. And the empirical results would ultimately prove Einstein wrong (sadly, after his death). John Bell proved that the spookiness was real. Albert Einstein, the demigod of physics, could be defeated after all; but only with the help of the most formidable weapon: the inconceivable weirdness of quantum reality. This is anyway how the popular tale goes. But as Sabine argues, it omits a detail that changes everything.
The formidable weapon wielded by John Bell to prove Einstein wrong came with a clause (it assumes statistical independence). That clause was so obviously true to John that he took it for granted, and his feat was so formidable that the mention of the clause disappeared from the tale, and was forgotten. Everyone considered the demigod vanquished forever. But his power was lingering, and Sabine realized it. She remained unsettled by the spookiness, and came to understand that the matter was not settled. She inquired, found the forgotten clause, and understood that in his hubris, John had not finished his task: there was a loophole in his victory. Disclaimer: I took some artistic license!
I narrated the story in the style of a tale to cut short the theoretical details, but also to expose Sabine’s main difficulty: people are not willing to accept that their heroes have actually failed, and that their problems are not solved after all. But at the same time, the weirdness is still there, so it is not as if the story had such a good ending. Regardless, we have long coped with the weirdness left by John Bell. We have endorsed his legacy, the idea that quantum reality is ontologically weird. We found some comfort despite the spookiness. But Sabine could not cope, uncovered the loophole, and now she wants to trade the spookiness we know for the uncertainties of the forgotten clause. She reclaims the unmaking of John’s victory, and the restoration of Einstein’s original intuition. And to do that, she must convince us that John Bell was wrong to take the clause of statistical independence for granted.
This is where it becomes about words. As she says, “all the alleged strangeness of quantum mechanics has its origin in nomenclature.” The strangeness she talks about is what Einstein called spookiness. Part of her argument is that we love the spookiness because of the tale, that we hold onto it for cultural reasons, not primarily for scientific reasons. Another part is that the spookiness is not as real as many believe because the clause of statistical independence was not fulfilled. That is precisely the argument about superdeterminism. Sabine must explain how the strength of the argument against superdeterminism derives only from the interplay of its rhetoric and the cultural norms of physics.
In short, John Bell considered that if the clause of statistical independence was not fulfilled, then reality would be deterministic to such an extreme extent that many of the things we take for granted, like free will, would be impossible. That was so uncomfortable that he discarded that possibility. As Sabine shows, many physicists agreed that challenging free will was unacceptable. I frame it this way to highlight the symmetry between Bell’s and Einstein’s arguments: both rejected a theoretical pathway because they found it too uncomfortable, too weird. But as Sabine remarks, the justifications given by John Bell and other influential physicists are not grounded in the epistemic standards of physics; they instead invoke the metaphysical reach of that discomfort: it would undermine free will, destroy science… Sabine makes it clear that there is no argument internal to physics for ruling out superdeterminism. Some influential physicists just promulgated the disqualification loudly enough.
Do not miss, here, that Sabine picked that battle when her YouTube channel was strong enough. She has also published papers, so she fights on multiple fronts. I imagine she also has some institutional reach. But YouTube clearly places her in a different arena where her chances are much better, if only because the exercise is difficult and many of the physicists she criticizes cannot and will not show up to respond.
Sabine also de facto argues that superdeterminism, i.e. breaking the statistical independence clause, is not that weird. But it seems pretty weird to me, and I can see why John Bell thought it was more dramatic than simple determinism. In Sabine’s own words, it implies that “what a quantum particle does depends on the measurement setting.” In short, what particles do depends on something that will happen in the future. For instance, in the double slit experiment, “the particle’s path depends on what measurement will take place. Because the particles must have known already, when they got on the way, whether to pick one of the two slits, or go through both.” Doesn’t this sound much crazier than determinism? It blatantly violates my own intuition. But as with a paradox, the dissonance only exists in certain ways of thinking about it. Sabine explains why, in practice, it boils down to just determinism. I find it convincing and, like her, I don’t buy the half-baked waffle about free will. Regardless, I can also understand why John Bell took statistical independence for granted, and employed the “superdeterminism” expression to make his point. I do find it weird, but not more than spooky action at a distance.
I’ve learned to cope with the spookiness of John Bell’s experiment, while the strangeness of dependency on future events is new to me. I wonder how it goes for Sabine’s YouTube audience, and for the community of physicists. Do they find something wrong with Sabine’s argument? Do they get convinced to exchange this new flavor of weirdness for the old one? Either way, I find it fascinating to observe the shaping of experimental programmes by the level of ontological comfort of influential physicists with different theories. Because all Sabine asks in the end is that funding bodies consider testing superdeterminism empirically, even though the physicists in charge think it tastes of the wrong flavor of spookiness.
Comments

“However, I do care that they agree with the layout, else we cannot ‘see’ the node groups properly.”
Why hold the layout to such a high standard?
It’s a projection into 2D space of an object that most likely does not belong there. The same problems of inference vs. description, overfitting, etc., hold for network layouts just as they do for community detection.
What is the purpose of developing an inferential community detection method that articulates the notion of statistical evidence, and then judging it by comparison with a descriptive network layout? Why shouldn’t we use the communities found to judge the network layout instead?
A 2D force-directed layout can deceive just as much as a descriptive community detection method. In fact, the problem is potentially worse, since they are much more strongly misspecified.
I talk about this pitfall in Sec IV.F here: https://arxiv.org/abs/2112.00183
Network visualization is very important and useful, but the over-reliance on force-directed layouts is a big problem. There are other approaches that do not depend on this kind of layout at all. For example, chordal diagrams allow us to visually express what a statistical model like the SBM is telling us without introducing artefacts: https://skwd.at/rqzy3
We need more creativity when it comes to network visualization!
I value the layout because it is the knowledge support, the substrate of the inscription about which I want to build an intersubjective statement.
That the network does not belong to the 2D space does not matter for that task. We live with the reduction as we usually do.
Now, you ask why we should assess community detection as a function of descriptive analysis (layout). Let me clarify: we should NOT. I didn’t mean that, sorry if it was unclear. I assume here that community detection detects communities. If you do not use it for detecting communities, then whatever you do with it does not qualify as a benchmark for community detection.
Now, if community detection is repurposed for whatever other task T, then we may want to build *another* inferential method dedicated to task T. Then T can be used to judge that other method.
Shouldn’t we use community detection to judge the network layout? Well, we could. It depends on what you do with the layout. Do you use it to find communities? Then please do. It seems fit.
I don’t generally use the layout to find communities. Although it *may* look like it’s what I am doing, that is a misunderstanding. That being said, I am *also* interested in community detection sometimes. Then I know how to do it. I don’t need the layout for that, beyond basic monitoring needs.
I need not be naïve here: the documented alignment of community detection and force-driven layout creates a powerful pair of methods, at least for the need of building intersubjective statements. But it is the alignment that matters. Let me say it explicitly: any approach combining a layout and a layout-independent metric has the potential to provide the intersubjective statements we need. It works with matrices, for instance; but matrices are kind of the same thing as force-driven placement.
An example of something crucial we miss: the orientation of edges in directed graphs. For that purpose, matrices are clearly superior. For an intersubjective statement about asymmetries, I would go for another pair: a matrix and PageRank, or just in-degree, for instance. For statements about yet other things, I would go for yet other approaches.
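As a rough sketch of that pair, assuming networkx and matplotlib and using a random directed network as a stand-in: plot the adjacency matrix with nodes ordered by PageRank, so that asymmetries show up as an imbalance between the two triangles of the matrix.

```python
# A rough sketch of the "matrix + PageRank" pair: plot the adjacency matrix of
# a directed network with nodes ordered by PageRank, so asymmetries appear as
# an imbalance between the upper and lower triangles. Assumes networkx and
# matplotlib; the random graph stands in for a real directed network.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.gnp_random_graph(40, 0.08, seed=0, directed=True)

# Order the nodes by decreasing PageRank (the layout-independent metric).
pagerank = nx.pagerank(G)
order = sorted(G.nodes(), key=pagerank.get, reverse=True)

# The matrix is the visual support; the ordering carries the metric.
A = nx.to_numpy_array(G, nodelist=order)
plt.imshow(A, cmap="Greys", interpolation="nearest")
plt.xlabel("target (ordered by PageRank)")
plt.ylabel("source (ordered by PageRank)")
plt.show()
```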
I am not pushing people to use force-directed layouts; it happens by itself. I want to understand what they do, why such layouts are popular, what people do with them, what they get from them. I want to listen and observe people’s practices, and I am not in the business of giving lessons. I am proudly aware of how bad I am at judging other people’s practices. I do not pretend to know why people visualize networks, and I even see it as a valid scientific programme. Weirdly, it has not been done seriously at a large scale.