
A Twitter controversy about community detection: empirical material


Note: I know it’s awkward, but I will call everyone by their last name. It puts everyone on an equal footing.

When Tiago Peixoto promoted his latest work over Twitter, it sparked an interesting discussion. I call it a controversy because the participants may not agree on what the points of contention are. It is unresolved. The debate is notably about modularity maximization, a popular community detection technique. Is it obsolete? Peixoto has undertaken to disqualify it in favor of a Bayesian inference approach. But although everyone seems to agree on the general superiority of inferential methods, a question remains up for debate: can modularity maximization still be useful in some situations, or does it deserve to be completely disqualified?

It turns out that I was in the company of Peixoto at that very time, as he participated in the Gephi coding retreat I organized at the TANT Lab in Copenhagen. He presented this work to us, i.e. his inferential approach to community detection, and I implemented a limited version of his algorithm in Gephi. Our discussions shared similarities with those on Twitter. It seems to me that Peixoto’s argument for the complete disqualification of modularity is incomplete, but conversely, if there is a clear motive for not disqualifying it, what is it?

I will not engage directly with the question in this blog post. Instead, I will delineate the debate in the style of a draft for a controversy mapping. I will just account for who says what in these Twitter discussions. I am basically gathering empirical material, selecting and organizing quotes, and bringing enough context to make the discussion clear. I will also comment a bit, but I will not get to articulating a full argument before a further blog post.

I have three interests in this matter. First, I find it interesting to document the debates around the disqualification of an algorithmic technique. There is nothing wrong with some methods becoming obsolete in science, and the fact that these discussions are partially made public on Twitter is an opportunity to peek into that process. Second, I want to understand for myself what is at stake, and articulate my position if I have one. And third, I am under the impression that Peixoto’s argument could be mistaken for methodological imperialism from the perspective of the users of modularity maximization, and I would like to see if this is a concern for other researchers as well. I hope that these notes can be useful to the STS crowd, to Gephi users in digital methods, and to network scientists.

Overview of the material

The starting point of the controversy is Peixoto’s work and his effort to publicize it. It comes in three layers. First, there is a preprint released on arXiv on 30 November 2021, titled Descriptive vs. inferential community detection: pitfalls, myths and half-truths. Second, there is a series of three blog posts derived from the paper, followed by two posts containing additional explanations. Here they are, in chronological order:

  1. Descriptive vs. inferential community detection, 2 December 2021
  2. Inferring, explaining, and compressing, 3 December 2021
  3. Modularity maximization considered harmful, 6 December 2021
  4. No free lunch in community detection?, 7 December 2021
  5. Do we need to believe in generative models?, 8 December 2021

The third layer is made of Peixoto’s tweets publicizing his blog posts, and the discussions that ensued. It also comes as a list of tweets:

  1. A series of 2 tweets about the release of the preprint, 2 December 2021
  2. A series of 7 tweets about the first blog post, 2 December 2021
  3. A series of 5 tweets about the second blog post, 3 December 2021
  4. A series of 11 tweets about the third blog post, 6 December 2021

From there, as a fourth layer if you will, there are the Twitter reactions to these tweets. I will not list them, because there are too many and you can read them from the links just above, but I will account for the most important ones below. Before I get to that, though, we need an idea of the content in question in order to understand them.

Summary of Peixoto’s argument

Let’s look into the content of his argument. I will primarily use the blog posts, because they are more accessible than the paper, while the Twitter threads are less easily readable.

The first blog post stems from the premise that “community detection methods can be divided into two main categories: ‘descriptive’ and ‘inferential.'” Peixoto argues a number of things. These types of methods have different properties. For instance, descriptive methods “do not articulate precisely what constitutes community structure” while inferential methods “start with an explicit definition of what constitutes community structure”. For context (not from the blog post), the modularity clustering in Gephi belongs to the “descriptive” type, and Peixoto’s own algorithms to the “inferential” type. He presents inferential methods as “state of the art”, and argues that even though “descriptive clustering approaches arise naturally” in a number of practical situations, they “carry no explanatory power”, contrary to inferential methods. The communities obtained from descriptive methods “can be seen and described, but they cannot explain”.

In this argument, the power to “explain” boils down to being able to predict when nodes are connected by looking at which communities they belong to, for a given (explicit) generative process. If, for a given model, the communities do not predict the edges, then the communities do not explain the network. And descriptive methods do not allow this prediction (they do not even state a generative process to begin with).

The blog post ends by proposing a test for determining whether you can afford not to use inferential methods (Peixoto calls it a “litmus test”). You must ask yourself a question; if the answer is “no” you can do as you please, but if it is “yes” you must use inferential methods (so argues Peixoto).

The second blog post explains how the principle of Bayesian inference is applied to community detection. It goes as follows. Imagine a certain process that generates networks according to certain rules. The nodes are a given, and the process decides the links. The rules involve groups of nodes: the probability that two nodes get linked depends on their groups. If you know the rules, you can call it a model: it generates networks that are different but have a family resemblance, because the rules are the same. The networks come from the same model. So the model has group-based rules and nodes, and it gives you links. Bayesian inference is about playing a guessing game: if you have the nodes, the links, and the model (the rules), can you guess which node was in which group? Often, there are many possible valid guesses, but some are more likely than others. The inference measures that likelihood.
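
To make this generative picture concrete, here is a minimal sketch of my own (not Peixoto’s code, just an illustration of the principle): a two-group stochastic block model, where the group memberships and the group-to-group link probabilities are the “rules”, and the links are what the model outputs.

```python
import random

random.seed(42)

# The "rules": two groups, and link probabilities that depend on the groups.
groups = [0] * 50 + [1] * 50                  # which group each node belongs to
p = {(0, 0): 0.10, (1, 1): 0.10,              # same group: nodes link often
     (0, 1): 0.01, (1, 0): 0.01}              # different groups: nodes link rarely

# The model "decides the links": each pair of nodes is sampled independently.
n = len(groups)
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < p[(groups[i], groups[j])]]

# Each run yields a different network, but all runs share a family resemblance:
# on average, about ten times more links inside groups than between them.
print(len(edges), "edges sampled from this model")
```

The guessing game runs the other way: given the nodes, the links, and the form of the rules, infer which node was in which group (this is, roughly, what Peixoto’s graph-tool library does with functions like minimize_blockmodel_dl).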

The post describes the maths and how to interpret the equations, and makes a connection with information theory. According to this connection, the likelihood of a partition in the guessing game can be measured through a quantity known as “description length” that “quantifies the amount of information in bits necessary to encode the parameters of the model”. Peixoto argues that “there is a formal equivalence between inferring the communities of a network and compressing it” and that this protects the method against overfitting, thanks to a theorem known as “Shannon’s source coding theorem”.
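
The compression side of the argument leans on the fact that genuinely random data cannot be compressed (that is, loosely, Shannon’s source coding theorem), while structured data can. Here is a quick illustration of that general point, of my own, unrelated to networks and using only the Python standard library:

```python
import os
import zlib

random_bytes = os.urandom(10_000)            # random data: essentially incompressible
structured = b"to be or not to be, " * 500   # repetitive data: highly compressible

print(len(zlib.compress(random_bytes)))      # about 10,000 bytes, sometimes slightly more
print(len(zlib.compress(structured)))        # a small fraction of the original 10,000 bytes
```

In the same spirit, a partition that captures genuine structure lets you encode the network in fewer bits, while a partition fitted to noise does not; that is, as I understand it, how compression protects against overfitting.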

This blog post also reflects on the literature about community detection, and argues that, by the test presented in the first blog post, the most-used benchmarks in the literature are in fact not suited to descriptive methods. Then comes the argument that inferential methods are easier to benchmark because they state the model they aim to fit, and finally the argument that inferential methods are more comparable with one another.

The post ends by contending that “every descriptive method can be mapped to an inferential one, according to some implicit model.” In other words, descriptive methods are inferential methods that do not state their model, which makes them inherently worse. The statement draws on a mathematical argument whose key is that “there is no such thing as a ‘model-free’ community detection method.”

The third blog post argues that the modularity maximization method is flawed, and that it is one of the most problematic ones. By the way, let me acknowledge here that Gephi has probably contributed to popularizing the method (it is not mentioned in the paper, but I talked about it with Peixoto). The argument basically follows the rationale of the first two blog posts, but applied to this specific case, so I do not repeat it here. Its key is that the modularity maximization method “does not take into account the deviation from the null model in a statistically consistent manner”. It provides examples where modularity clustering finds communities in a sparse random network, and where it fails to detect obvious communities that have very different sizes. It ends with a table of “some of the main problems with modularity and how they are solved with inferential approaches.”
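
The first of these examples is easy to reproduce. Here is a small sketch of my own (assuming networkx is installed; it is not taken from Peixoto’s post): modularity maximization readily partitions a sparse Erdős–Rényi random graph, even though no communities were planted in it.

```python
import networkx as nx
from networkx.algorithms import community

# A sparse random graph: no community structure was put into it.
G = nx.gnp_random_graph(n=500, p=0.01, seed=1)

# Greedy modularity maximization nevertheless returns a partition...
parts = community.greedy_modularity_communities(G)

# ...with a modularity score well above zero.
print(len(parts), "communities found")
print(round(community.modularity(G, parts), 2), "modularity")
```

Whether this counts as a problem depends, precisely, on whether the result is read descriptively or inferentially, which is what the rest of the controversy is about.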

What is made an issue of?

Now that we have an idea of Peixoto’s argument, let’s see what is problematic to different people. I will not be exhaustive. I will pick some elements of the discussion and justify why I find them relevant.

The first objection came from Renaud Lambiotte, who notably contributed to the modularity clustering method that we implemented in Gephi (known as “Louvain”). He reacted to the first series of tweets, about the preprint itself.

Lambiotte first expresses his disagreement, in a pretty civil way, and with Peixoto they endeavor to locate together, and publicly, where the disagreement lies:
RL: “… we will have to agree to disagree :-)”
TP: “I’d be happy to know with what you disagree …”
RL: “Well this one for instance …” (a paper follows)
TP: “What do you disagree with here? And why?”
RL: “Or that one …” (another paper follows)
TP: “If you don’t say why you disagree, it’s hard to respond.”

There is not enough for me to retrace what was controversial, but I want to highlight first how unlike the rest of Twitter this disagreement is: rational, civil, sourced… even though it ultimately fails to resolve, presumably due to the limitations of the Twitter format. I also wonder whether Lambiotte was especially sensitive to the straightforward attack on modularity clustering, a method he had helped popularize. We will see him again later on.

Objections to the tweets about the first blog post

When Peixoto presented his argument to me, I had a reaction similar to @rayohauno’s. Peixoto’s response on Twitter is that he does not make the “assumption” that random fluctuations show no features, but instead draws on the following demarcation: “a) features that arise out of randomness vs b) those that have an underlying cause.” He follows up with a picture; I will return to that shortly.

I want to make a remark first. In Peixoto’s argument, there is an inherent difference between features arising from randomness and features arising from an underlying cause. It’s not that they don’t look the same, it’s that even when they look the same, something makes a difference. Peixoto answered my objection by telling me that even though a monkey typing at random has a small chance of writing some Shakespeare, there are statistical means to distinguish randomness from non-randomness. I still wonder: if you only get the result, i.e. the features, how exactly do you make the difference? This exact point is picked up by @rayohauno as a follow-up, but Peixoto stopped interacting with him.

Let’s get back to the picture. Peixoto chose to illustrate his point with two analogies that happen to disqualify what he calls the descriptive approach, and frame the inferential approach as the objective truth. We find it in the first blog post and the paper, and I borrow it below:

[Figure: the descriptive vs. inferential illustration, featuring the “face on Mars”. By Tiago Peixoto, from this blog post.]

I assume that everyone is familiar with the image on the top left, a picture of the surface of planet Mars. The apparent face became famous in popular culture as a purported piece of evidence of life on Mars. A higher-definition image of the same location later revealed that there was, in fact, no face: at a higher degree of precision, the features disappeared, and the accepted truth is that there is indeed no face there. Even though Peixoto argues that we do see the face (we can all agree on that), the debunked face is now conspiracy-grade apophenia, in which very few people actually believe. I argue that regardless of the intent, it tends to ridicule the left side of the analogy. It contributes to the disqualification of modularity maximization on a rhetorical level.

Below is the second example, tweeted by Peixoto in response to @rayohauno’s objection. Once again, our intuition turns out to be conveniently aligned with the fact that the face we see is obviously spurious. I also find a ridiculing factor in this example, although I acknowledge that it is culturally situated. Yet it is worth considering that these analogies carry some of the controversy, because they disqualify by another means (rhetorical) than making a point (while also supporting one).


Aaron Clauset also commented on Peixoto’s tweet. Clauset is a prominent network scientist who was central to a controversy about scale-freeness I previously wrote about, and I then identified him as an experimentalist: a researcher who builds up from experimental results towards theory. Let’s see his reaction here.

The exchange gradually involves other Twitter users, but the interactions between Peixoto and Clauset remain fairly linear and I collect them here as a straightforward dialogue.

AC: “There are many, many things to like about probabilistic generative models for community detection. But, alas, they are no panacea, because there is No Free Lunch in community detection (and, worse, No Ground Truth).” (link to this paper he wrote with @DanLarremore and @PiratePeel, also mentioned).
TP: “The claim is that statistical inference is more meaningful when the objective is to reach an inferential conclusion. Surprised this is controversial. Plus: the NFL is no panacea either. It covers mostly *uninformative* community structures.” (link to the preprint)
AC: “TBH, it’s your absolutism that makes it controversial. But you’re also using ‘descriptive’ in a non-standard way that I don’t think is epistemically helpful. In statistics, inferential models are almost always descriptive (in the standard sense); exception being causal inference.”
TP: “I don’t believe is non-standard: https://en.wikipedia.org/wiki/Descriptive_statistics But I’d happy to know of a better way to distinguish inferential from non-inferential analyses.
Re absolutism, I tried to be clear that it all depends on the question being asked. But we should be coherent with our aims.”
AC: “Model parameters are summary statistics, too, and so also descriptive statistics. Worse, in some exponential family models, any summary statistic can be an inferential model parameter. It’s messy! I think downstream utility is the only guiding light, which precludes absolutism.”
TP: “The relevant criterion here is if we are evoking a generative model or not. The terminology issue is a red herring; all these terms are overloaded. If there is a ‘only guiding light’ then you are being absolutist. Obvious contradiction.”
AC: “We’re in violent agreement on the wonderful properties of generative models! But the NFL theorem etc. have convinced me that non-generative models do have their uses, depending on downstream uses. I get that you feel the SBM is morally superior, but it rubs many the wrong way.”
TP: “I tried to be clear that I’m not favoring the SBM, or any particular model, but only the concept of defining a model explicitly. I’d be very curious to know your take on what I wrote on the NFL theorem: that it involves overwhelmingly incompressible problem instances.”
AC: “Your point about the NFL is an old one (in other literatures) and mathematically true. But it’s not useful because we don’t know the empirical distribution of problem instances. Hence only downstream uses can determine utility. We can agree that one such use is interpretability.”
TP: “If you agree that interpretability is a downstream utility that renders NFL inapplicable, then we are on the same page. Can you give an example (practical or conceptual) of a downstream utility for community detection that does not involve interpretability?”
From this point on, another interlocutor chips in and Clauset does not interact anymore.

In the third tweet of the sequence, Clauset refers to Peixoto’s “absolutism” as a source of controversy. I have two remarks. First, this is a reason why one can argue that there is a controversy: some people identify it as such. This reason is not good enough on its own, but as we will see, the dispute also lasts. It’s a controversy in the sense that the actors agree on their disagreement, try to fix it the usual way (locating the misunderstanding), yet fail to reach closure (at least for now). The actors disagree on where their disagreement lies, which makes it more than a simple dispute. Second, there is a superficial reading of the exchange in which the argument is that Peixoto’s claims are peremptory, oversold; in that reading, Clauset’s objection is about form rather than content, as if he agreed with Peixoto’s point but not with the way it was put. But I do not subscribe to that reading, because the exchange makes it clear that Clauset does have a scientific disagreement with Peixoto. What’s more, I do not see where Peixoto’s argument would be oversold: he does stand his (argumentative) ground. So what does this “absolutism” refer to?

I understand the argument about “absolutism” as follows: the superiority of Peixoto’s method is not what is controversial; what is controversial is the disqualification of other methods. I believe that this is one of the hot spots of the controversy, probably the main one, because it is at the same time about form and content, about the perception of Peixoto’s claims and about these claims on a scientific ground. This deserves careful unpacking. Indeed, everyone would agree that some method A could be generally preferable to some method B, while not being preferable in every situation. The superiority of a method depends on the context. The disqualification of other methods is not necessary to the promotion of Peixoto’s own. I think it raises multiple questions, which we will approach step by step.

A point of detail is the kinds of methods discussed. Peixoto demarcates between “inferential” and “descriptive”. Clauset challenges the term “descriptive” and Peixoto accepts the alternative “non-inferential”. But then the discussion moves the demarcation to “generative models” versus “non-generative models” and I don’t think that it corresponds exactly to “inferential” versus “non-inferential”, but I will overlook the nuance. Peixoto will re-use the “inferential/non-inferential” demarcation in his next Twitter thread.

Is there a context where non-inferential methods are preferable? Clauset repeatedly argues that yes, non-inferential methods are preferable in some situations: “the NFL theorem etc. have convinced me that non-generative models do have their uses, depending on downstream uses.” He later argues that “only downstream uses can determine utility” and that “one such use is interpretability.” In other words: when non-inferential methods provide more interpretable results, they are preferable to inferential methods. It seems to me that Peixoto generally argues that non-inferential methods are never preferable [EDIT: but his 1st post clearly states cases where they are useful, see his comment to this post], on the grounds that non-inferential methods are just inferential methods that do not disclose their model, which makes them strictly worse (we will see that below). But in this case, the debate moves to a very specific point (having to do with the nuance I overlooked) and I will not unpack it.

Back to the question of absolutism. One way Peixoto is deemed absolutist by Clauset might be that he argues that non-inferential methods are never preferable. But there is more to it, which we see when Clauset writes: “I get that you feel the SBM is morally superior, but it rubs many the wrong way.” For context: the SBM (stochastic block model) is a generative model used by Peixoto in his inference approach. Notice that Clauset makes it about: (1) Peixoto’s feelings; (2) moral superiority; and (3) rubbing many people the wrong way. I assume some degree of tacit complicity here, because Clauset is “in violent agreement on the wonderful properties of generative models.” In short, I think he says that Peixoto is shooting at an ambulance: inferential methods are generally superior, but in some niche cases they are not, so disqualifying non-inferential methods is unnecessary and counterproductive: it is not needed to promote inferential methods, and it may spark opposition.

One could make the case that Clauset wants peace and Peixoto conflict. The third blog post in particular argues that the popular non-inferential method of modularity maximization is “considered harmful”. Peixoto shows a pretty clear agenda of replacing that method by an inferential one, for reasons that are clearly stated in the preprint, the blog posts, and the tweets. But I think that the contention is about what the battlefield is. Is it about a few specified techniques like modularity maximization, or about all non-inferential methods? It seems to me that Clauset does not want to disqualify non-inferential methods because they have legit uses, albeit possibly niche. That might be compatible with the obsolescence of modularity maximization. And Peixoto wants to disqualify modularity maximization because it is strictly worse than the inferential equivalent he proposes. And that might be compatible with other non-inferential methods being preferable in some circumstances.

I unpack these nuances because I believe that Peixoto is de facto disqualifying non-inferential methods, while arguing that he is not, or not entirely. More on that later. I think that he is unintentionally ambiguous, and that his argument could be received as an unfair disqualification of non-inferential methods, echoing the fears of methodological imperialism of computer science [natural sciences *comment on this edit at the end of the post] over other disciplines. And I think that the preprint actually argues for a fair but limited disqualification of non-inferential methods, but that is still up for debate.

Objections to the tweets about the second blog post

Lambiotte interacts again with Peixoto, in a similar manner as the first time.

Let me summarize their discussion, only retaining certain parts:

TP: “… *Every* descriptive method, whether you want it or not, is equivalent to some *implicit* generative model!”
RL: “Either I do not understand, or I disagree with this comment. …”
TP: “I explained this in …” (explanations ensue, in multiple tweets)

From there, the discussion branches out into 6 different threads. Peixoto and Lambiotte are having multiple arguments at the same time, as Twitter allows. Although the threads somewhat interfere with one another, I kept them separate and added timestamps for context.

Branch 1
RL (6:10 PM): “Well, the Markov chain and the the graph are equivalent, so unsure about this argument.”
TP (6:18 PM): “A random walk is a stochastic process that has the adjacency matrix as a *parameter*. It does not model it.”
RL (6:29 PM): “I just meant that a Markov chain and the adjacency matrix are equivalent. It does not model it, finding structure in one is the same as finding structure in the other, isn’t it?”
TP (6:48 PM): “Here’s an example that maybe will make it clearer. …” (an example follows and the thread stops there)

Branch 2 (it branches out of a slightly different place)
TP (11:25 AM): “Whenever you *do* use it for inferential purposes w.r.t. the structure (i.e. your answer to the litmus test is ‘yes’), then what you are doing, whether you want it or not, is equivalent to inferring this hidden model.”
RL (6:13 PM): “I find it difficult to buy statements like ‘possible rules that were used to create them’, etc. The generation of a network is complex process, involving homophily, triadic closure, social influence, etc.”
TP (6:17 PM): “I don’t follow. You just listed several possible rules that can create networks. What don’t you buy? It’s OK to criticize the SBM (or any other model), but not the concept of statistical inference.”
RL (6:28 PM): “Even if you find communities via inference, I am unsure that the partitions are likely to have been responsible for the observed empirical network.”
TP (6:32 PM): “This is such a strange statement. The objective of inference is not to find ‘the truth’, but the most plausible explanation from a set of possibilities. You put statistical inference to a standard so much higher than you ever put Markov Stability or any other method.”

Branch 3
RL (6:14 PM): “A SBM, corrected of not, is thus clearly too unrealistic to help understand the formation of the network.”
TP (6:22 PM): “We can only talk about models in comparison to other models. Can you say that the SBM is more or less expressive that whatever lies behind Markov Stability? Clearly the SBM is more expressive than the model hidden behind modularity. In any case, I’m not advocating for the SBM in particular, but to the idea of using explicit models.”

Branch 4
RL (6:16 PM): “Of course, ‘inferential methods’ will do a good job to extract/explain communities from SBMs, but what would be the argument to use them on empirical data, or even on networks generated by other models (e.g. random geographic graphs, random hypergraphs, etc.)?”
TP (6:26 and 6:37 PM): “The argument for the SBM is the same as for histograms: they are not supposed to be right, only to approximate. This they do fairly well. But I’m not trying to argue for the SBM. Only to point out that if the objective is to do inference, there’s no option but to use a model.
The point I’m trying to make is that there is a *formal* equivalence between any community detection method and the statistical inference of *some* model. Therefore, it’s nonsensical to criticize the SBM (or any model) in favor of one that you don’t even know how it looks like.”

Branch 5 (6:17 PM) Just two tweets that I skip here.

Branch 6
RL (6:27 PM): “I do not mean to criticise statistical inference, but this is not the only viewpoint when analysing data, especially for complex data where one does know the model that generated them.”
TP (6:40, then 6:58 PM): “It is a misconception that statistical inference is only useful when we know the true model. If this were true, it would never be useful. If we set out to find communities in networks, we are not setting out to model every aspect of it. I’ve heard this argument before, but I suppose I don’t understand it. In the paper I outline two objectives: to describe or to infer. Clearly, if the objective is to infer, statistical inference is the only game in town. If the objective is to describe, you can do whatever. But if you are describing, you should refrain from making inferential statements (e.g. ‘these nodes had a higher probability of being connected’, etc). As soon as you do that, you fall back into inference. And if you are just describing, then it’s not a problem to overfit. So if you see a face on a piece of toast, or communities in random graphs, that’s OK. If that’s a disturbing result, then it’s only because your objective was inferential all along.”

Lambiotte’s objection targets the following point of Peixoto’s: descriptive methods are basically inferential methods that do not disclose the model they fit. Let me first clarify that Peixoto’s work also makes the point that descriptive methods are bad inferential methods, but this is not the point of contention here. Lambiotte’s objection is about whether or not non-inferential methods can be seen as just inferential methods with an implicit model. His second intervention highlights how problematic it is to fully know the processes that have determined a given empirical network (i.e., in real life). Peixoto at first interprets the objection as a questioning of the principle of Bayesian inference, but Lambiotte makes it clear that it is not. It instead questions whether disclosing the model is what matters most, in empirical situations where one cannot know the underlying processes anyway. In response, Peixoto argues that statistical inference is always useful, even when we don’t know “the true model”, because stating the model is necessary for making “inferential” statements. I will return to this point. Finally, Peixoto makes the “face on a piece of toast” argument again, without the picture this time.

I have four highlights. First, once again, the dispute is respectful and looks like a collective inquiry for truth (for agreement). Both parties are open about the limits of their own understanding: “either I do not understand or I disagree”, “I don’t follow”, “I’ve heard this argument before, but I suppose I don’t understand it.” Disclosing the points of misunderstanding is instrumental to determining the depth of the disagreement: does it lie in the form or in the content? Does the trouble come from just a misreading or from the challenging of an actual scientific statement? And which one? Such inquiry is the goal of the whole exchange, as explicitly stated by Lambiotte in the beginning. This is the cultural norm of science, and not of Twitter in general. In this Twitter subspace, science prevails.

Second, notice that not only do Peixoto and Lambiotte acknowledge the difficulty of understanding each other, but their responses also seem to partially miss the other side’s point. They might agree to disagree, but they do not easily agree on where the disagreement lies, hence this collective effort to locate it. Lambiotte ultimately did not acknowledge that the contention was closed, and I do not assume that it was resolved.

Third, I want to highlight that Peixoto paints statistical inference as something that it is not OK to criticize, and something that is strictly superior to what he calls “descriptive methods” in every situation. I think it reflects a common but not universal position in computer science [physics *comment on this edit at the end of the post].

Fourth, the face on a piece of toast, once again used to ridicule: “So if you see a face on a piece of toast, or communities in random graphs, that’s OK.” Or is it just me? I think that finding communities in a random graph can be understood as an absurd statement, hence the contrast between “that’s OK” and the ridicule of the “face on a piece of toast”. But arguably, it depends on what you call a random graph: given which model? This time, Peixoto makes a connection to the test presented in the first blog post. I will reformulate it using Peirce’s semiotics, because it underlines the role of causality in meaning. In Peirce’s semiotics, the smoke signifies the fire because it indicates it, because it is caused by it. Peirce calls smoke an “index” or “indexical sign” of fire. From there, Peixoto’s “litmus” test can be restated as follows:

  • If you consider that whatever you see is real, i.e. if the face on the toast is a face because we see it as a face, then your approach is descriptive and you can use whatever method you want.
  • If you consider what you see as an index of something else, i.e. the shape on the toast is only a face if it was caused by an actual face, then your approach is inferential and you must use inferential methods.

In this analogy, seeing communities in a random graph is framed as something as ridiculous as seeing Jesus on a piece of toast. But this assumes a specific way our observations have a meaning. What about other ways? This open question is why the controversy resists closure.

The tweets about the third blog post

These tweets received no objections, and that deserves a highlight. Indeed, the post is the most explicit call for disqualification, as its title shows: “Modularity maximization considered harmful.” Peixoto tweets it as “something tame and uncontroversial” and I don’t know whether this is humorous or not. Was he sarcastic about expecting some pushback, or was he actually confident that the point was consensual? In any case, as we have seen, the superiority of inferential methods is commonly accepted, and these tweets were not debated on Twitter.

Non-inferential methods considered useful?

Since I commented that I found Peixoto ambiguous, I want to document what he says in his blog posts about the usefulness of non-inferential methods, one of which is modularity maximization. This should help delineate what exactly he aims to disqualify, and in which situations. I will highlight the key elements.

Note: I will entirely skip the argument about when and why inferential methods are more appropriate. Bear in mind that this narrow focus misrepresents his full argument.

First blog post

“Here we point out the major differences between [descriptive and inferential community detection methods] and discuss how to decide which is more appropriate, and also why one should in general favor the inferential varieties whenever the objective is to derive interpretations from data. … descriptive clustering approaches are the method of choice in certain contexts.”
👉 Inferential methods are more appropriate in general, but non-inferential methods are more appropriate in certain contexts.

“A merely descriptive account of the image can be made … However, an inferential description of the same image would seek instead to explain what is being seen.”
👉 Inferential methods are superior because they can explain (because explaining is superior to describing).

“We emphasize that the communities found in fig. (b) are indeed really there from a descriptive point of view, and they can in fact be useful for a variety of tasks.”
“If the answer to [the litmus test] is “no”, … a purely descriptive approach may be appropriate since considerations about generative processes are not relevant.”
👉 Non-inferential methods can be useful.

Second blog post

“Communities found [by the (non-inferential) Infomap method] could be useful for particular tasks, such as to identify groups of nodes that would be similarly affected by a diffusion process. This could be used, for example, to prevent or facilitate the diffusion by removing or adding edges between the identified groups.”
👉 The Infomap method can be useful.

“Behind every description there is an implicit generative model. From a purely mathematical perspective, there is actually no formal distinction between descriptive and inferential methods, because every descriptive method can be mapped to an inferential one, according to some implicit model. … The only difference to a direct inferential method is that in that case the modelling assumptions are made explicitly, inviting rather than preventing scrutiny.”
👉 Inferential methods are superior because they make their assumptions explicit.

(I skip a lot of other quotes that make the same point)

Third blog post

“Despite its widespread adoption, [modularity maximization] suffers from a variety of serious conceptual and practical flaws, which have been documented extensively … The most problematic one is that it purports to use an inferential criterion — a deviation from a null generative model — but is in fact merely descriptive.”
👉 Modularity maximization is inherently flawed because it pretends to be inferential while being non-inferential.

“In the table below we summarize some of the main problems with modularity and how they are solved with inferential approaches.
[There is a table with 5 entries]
Because of the above problems, the use of modularity maximization should be discouraged, since it is demonstrably not fit for purpose as an inferential method.”
👉 Modularity maximization should be disqualified as an inferential method.

“At a fundamental level, all of its shortcomings are shared with any descriptive method in the literature — to varied but always non-negligible degrees.”
👉 All non-inferential methods should be disqualified as inferential methods.

Is there ambiguity?

These points boil down to three arguments:

  1. Non-inferential methods can be useful
  2. Inferential methods are superior because they can explain
  3. Non-inferential methods are inherently flawed in inferential settings

None of that is ambiguous by itself, and the point on the usefulness of non-inferential methods is more than a concession in principle, as Peixoto provides examples. Peixoto also offers a pretty clear frame for the superiority of inferential methods: the problems identified by his “litmus test”, which we could call inferential settings.

If there is ambiguity, or potential for misunderstanding, it is, I think, in the cocktail of the three arguments. It sounds like a syllogism: non-inferential methods are useful, but inferential methods can do strictly more, so non-inferential methods are strictly less useful. I think that this argument is flawed, because the superiority of inferential methods is valid in a narrower space than the usefulness of non-inferential methods, so we cannot say that inferential methods are superior in general. Or at least, I don’t see how Peixoto’s paper supports this point.

A final comment on a possible misreading of this work as methodological imperialism. It comes from two points. First, the ambiguity I just mentioned, and in particular the fact that the disqualification of a popular method (non-inferential community detection) relies on the epistemic views of computer scientists [researchers from other fields *comment on this edit just below] (inferential settings). And second, the fact that the disqualification is also made through ridiculing analogies.

I do not have any simple answer to the first potential misunderstanding, but I have one for the second: analogies where there is less ridicule on the side of “descriptive” approaches. For instance a rainbow: we know that it does not exist, that it moves as we try to reach it. But at the same time, it is not ridiculous to see it, because we are in a much broader intersubjective agreement on its existence. Even though it only exists in our eyes.

Comment on the “computer science” edit

I initially painted Peixoto as a computer scientist in the blog post, but that was just wrong, if only because his approach is typical of physics: it comes from a nomothetic perspective (seeking universal laws), which underlies the argument on the superiority of inferential methods as well as the connection to information theory. You will see his reaction in the comments to this post. My bad!

There is more to it. When I used “computer scientist”, I had in mind something like “the algorithm designer as perceived by their users”. I am thinking for instance of the PhD students who use Gephi in media studies and digital methods, or in the digital humanities. I want to account for their perspective, because they need to be critical of their apparatus, but they lack a lot of the knowledge necessary to understand it in detail. It is simply not their field. And in that culture, there is a latent fear of methodological imperialism that we need to talk about.

Here is, for instance, what Johanna Drucker, a prominent figure in the humanities and an expert in visualization, writes:

Most, if not all, of the visualizations adopted by humanists, such as GIS mapping, graphs, and charts, were developed in other disciplines. These graphical tools are a kind of intellectual Trojan horse, a vehicle through which assumptions about what constitutes information swarm with potent force. These assumptions are cloaked in a rhetoric taken wholesale from the techniques of the empirical sciences that conceals their epistemological biases under a guise of familiarity.

Drucker, J. (2014). Graphesis: Visual Forms of Knowledge Production. Cambridge, MA: Harvard University Press.

More quotes of that kind in this post. It’s complicated, because such criticism is legit, but it does not mean that it is necessarily valid. I do not want to dismiss it, because it is necessary. But I also want to say that there are many misunderstandings about who created these techniques and why, who implemented them, and the reasons why they circulated to distant fields like digital humanities. This quote shows that Drucker puts all of this into the huge bag of the “empirical sciences”.

All of this just to say the following: the algorithmic techniques we find in various tools were developed in many fields, including the social sciences and humanities (e.g., Linton Freeman on centrality metrics), yet they might be received by the users of these tools as an emanation of the natural sciences, the empirical sciences, computer science, or whatever. The fields they are perceived to come from are not necessarily those they actually come from.

That being said, it makes it even more wrong that I characterized Peixoto’s work as computer science, because it carries the epistemic perspective of physics. I was sloppy, sorry. I actually do not know how community detection is perceived by its users in digital methods and digital humanities. The more I think of it, the less I am convinced that it is perceived as emanating from computer science.


6 thoughts on “A Twitter controversy about community detection: empirical material”

  1. I can’t say how yours (or Aaron’s) perception of “absolutism” came to be, since I tried to avoid it explicitly. On twitter, I refuted it as explicitly as I could within the 280 character limit.

    My *guess* is that there are some who believe in a sort of absolute universal equity between all approaches, and that for any conceivable tool there is an equal number of optimal uses for it, and they push back at any hint of a normative statement (even if it’s only about consistency). In my opinion, this is a kind of absolutism as well, and misses the forest for the trees. (This is also true formally: it’s an inductive bias just like any other.)

    It’s very difficult to talk about information, compression, statistical evidence, etc, using only discursive language and intuition. The technical meaning of these terms are actually very different from everyday language. Furthermore, I don’t think that very important mathematical facts like Shannon’s source coding theorem (“randomness can’t be compressed”) are widely known or intuitively understood.

    Seeing structure in randomness (face on mars), employing Occam’s razor, etc, are the best intuitive examples I could come up with. But there could be even better examples that trigger our intuition even more efficiently. I’m open to ideas!

    A model is compressive if it simplifies the data, i.e. attaches to it a complete but simple (i.e. short) explanation. Describing a mountain as a face offers no simplification, on the contrary, you still need to explain where the face came from. The same thing with clustering a random graph. The problem with conveying this intuitively is that sometimes our intuition actually fails: we *want* to see the face, and this spurious explanation seems simple.

    I should also take the opportunity to clarify one question you had: Since any structure can occur out of randomness (e.g. monkeys typing Shakespeare), then if we observe such an “emergent” structured instance, wouldn’t we make a mistake in concluding that it was not random? The simple answer is: yes, we would! The central question here is: How likely are we to make such a mistake? This question is precisely what “statistical significance” means. In the case of monkeys, we would need 10²⁵ of them just to type “to be or not to be”. This means that if you see this string of text, an astronomically more plausible hypothesis is that it was not in fact typed by illiterate monkeys. Now, of course you would say: but this argument applies to any string, even “wkyhhstgpwjxhmanxuro”! Here’s the catch that separates this last string from Shakespeare: it’s incompressible! Mathematics (Shannon’s theorem) tells us that it is asymptotically *impossible* to describe it, either with a statistical model or with a computer program, in a manner that uses fewer than log2(27¹⁸) = 86 bits. There’s more: we can also prove that the vast majority of strings typed randomly by monkeys are also incompressible like this. This is not true for english text; we can compress it far more (e.g. using a language model), making it quite atypical. So, indeed when we are able to compress an observed string, we have very strong, quantifiable evidence that it could not have been generated randomly with any non-negligible plausibility. (This is also just Bayes’ theorem stated via information theory.) The same concepts apply for community structure, the face on mars (more loosely speaking, because the problem there is not stated as precisely), and statistical data analysis in general.

  2. “It seems to me that Peixoto generally argues that non-inferential methods are never preferable.”

    I’m honestly puzzled how this was misunderstood.

    I stated quite the opposite; both in my blog post and article. I gave clear examples when descriptive methods make perfect sense, and inferential ones do not.

    The two simple examples I gave are microchip design and task scheduling. The graph partitioning algorithms used for this task do not make inferential claims, and if they end up clustering a random graph, that’s irrelevant. The only important outcomes are if the microchip functions efficiently and the tasks finish quickly.

    The point about it being OK for descriptive methods to overfit was not sarcastic. In fact, the concept of “overfiting” only exists in an inferential context (where we are “fitting”). If Markov Stability, which tries to find node groups according to how a random walk might get trapped in them, ends up partitioning a random walk, it did nothing wrong: the random walk actually gets trapped in those groups. The mistake comes when we try to extract from this result an inferential conclusion about the network structure.

    Note that Aaron’s argument about “downstream utility” is in fact identical to mine: it all depends on what you are trying to do.

    My “superiority” argument is in fact almost tautological, and it’s all bout consistency: inferential approaches should be preferred when the objective is to make inferential statements.

    I do claim, however, that *very often* people use community detection with inferential aims. Hence my “litmus test” that tries to reveal if such an aim exist.

    With respect to modularity, let me quote its motivation from Newman’s 2006 PNAS paper, who contrasts it *explicitly* with graph partitioning in its inferential goals:

    “The problem is that simply counting edges is not a good way to quantify the intuitive concept of community structure. A good division of a network into communities is not merely one in which there are few edges between communities; it is one in which there are fewer than expected edges between communities. If the number of edges between two groups is only what one would expect on the basis of random chance, then few thoughtful observers would claim this constitutes evidence of meaningful community structure. On the other hand, if the number of edges between groups is significantly less than we expect by chance, or equivalent if the number within groups is significantly more, then it is reasonable to conclude that something interesting is going on.”

    Any statement about “random chance”, “expected number of edges”, etc, is inferential. The method simply fails at what it sets out to do!

    If I could, I would highlight the phrase above: “If the number of edges between two groups is only what one would expect on the basis of random chance, then few thoughtful observers would claim this constitutes evidence of meaningful community structure.”

    This underlies how community detection is *often* employed in practice. And once you set out to do this, then, yes, statistical inference is not optional!

    PS. I’m not a computer scientist. :-)

    1. My bad for calling you a computer scientist, it’s on me. Your work is firmly that of a physicist. I’ll fix that in the post.

      I also acknowledge here that your blog posts are clear about (1) that modularity clustering is useful in non-inferential settings, and (2) that there is a problem with its rationale, as it aims at a non-inferential answer to an inferential question, so to speak.

      So where do I get the impression that you sometimes argue that “non-inferential methods are never preferable”? It comes from the Twitter discussion, and notably the fact that you convoke the superiority of inference as a response to the mention that non-inferential methods can be useful. For instance in the thread with Aaron Clauset. But I give you that I might be seeing ghosts, as the Twitter format is so constrained that it shapes the discussion in weird ways.

      As you said, the vocabulary is overloaded. From a semi-distant perspective like mine, where I try to translate to an even more distant public (e.g., Gephi users in the humanities), a lot of arguments sound pretty different from what they actually are. For instance, what you call an “informative” community structure. It is not immediately clear to a distant audience that it has a precise meaning, and that it is not a general statement on the epistemic validity of community structures.

      Note to myself: make the list of these overloaded concepts. There is so much going on there.
