Close reading Wikipedia from Pareto to Network Science, part 1

This exploration searches for traces of the circulation of structuralist claims from statistics to network science, from Pareto to Barabási. Barabási’s claims are indeed reminiscent of the foundations of statistics, when Quetelet or Galton aimed at unveiling the laws of the social. Where does this resonance come from? I saw a possible bridge in the concept of the power law, for four reasons:

  1. It is a statistical distribution, comparable to the normal distribution and many others.
  2. It ties to network science insofar as it characterizes scale-free networks (through the distribution of node degrees).
  3. As a “statistical law” it carries a structuralist point, a claim to universality.
  4. It also carries a political meaning since it is rooted in Pareto’s study of inequalities.

Using Wikipedia as a documentary source, I had already observed that the power law and scale-free networks were bridging the domain of statistics and the domain of network science (see my previous blog post on the subject). To challenge my hypothesis and dig deeper, I engaged with the qualitative reading of the relevant Wikipedia pages. From that work I expect to get an overview of the conceptual landscape and to identify hot spots to be leveraged as landmarks for a further study of the academic literature.

Method

From my distant reading of the question, looking at the hyperlink network, I obtained a list of about 100 Wikipedia pages. This list was too long for the time I had, so I reduced it to the most cited pages in each of my three categories, “Pareto power law”, “Network science”, and the “bridge” between them. I quickly realized that important pages were not listed, so I added them. Of the 45 articles I read, 13 were added in a second pass. The distant reading (the network exploration) put me on the right track but, as expected, the close reading (the qualitative inquiry) was more accurate at delineating the question.

Note: we will not systematically explain the concepts, since it would make the text heavier for a minor benefit. The general argument is about the relations between the concepts, and should be understandable even without being familiar with them. If not, then of course we suggest reading the corresponding Wikipedia pages.

This is part 1: focusing on the power law

Just defining the concepts we need to understand the field requires an effort. Not because it is complicated, though it might also be that, but because each concept appears under different names, with different meanings, and the concepts do not always relate to one another consistently. We will focus first on bringing clarity to the main concepts. This part looks only into the power law and a group of closely related concepts that we may call its family – there is quite a lot to write about. The next part will look into other concepts, and then we will inquire further into specific kinds of claims and arguments.

As this is a post on my research blog, it tends to expose more than a typical paper would. I publish it early, in the spirit of open research, and following the open source software mantra “release early, release often”. So the text is quite long, but you can find the findings summarized below.

Findings

The power law is often referred to as the Pareto distribution, despite minor differences. The log-normal distribution is often presented as an alternative to the power law, but also sometimes as an equivalent. There is a controversy about which better fits empirical cases, and it has failed to reach closure for two decades. Both distributions have crucial similarities but come with widely different narratives. It turns out that the tail of a distribution is generally its relevant part, and it does not allow us to differentiate our two competitors. As a way out of the controversy, both can be framed as instances of the more general heavy-tailed distributions, which also questions the narratives invoked to interpret empirical data.

The power law and its family

  • Power law
  • Pareto distribution / principle
  • Yule–Simon distribution
  • Log-normal distribution
  • Heavy-tailed distribution
  • 80-20 rule
  • Zipf’s law
  • Long tail
  • Matthew effect

These concepts occupy a pretty narrow theoretical space. As we will see, they are close enough that they are sometimes invoked as if they were equivalent. If we had to pick one representative, in the context of Wikipedia it would be the power law, but the Pareto distribution is also very important, as we will see. We start by looking at those two; from there we give an overview of this space, clarify the relations between the different concepts, and analyze a few key observations.

power law

“In statistics, a power law is a functional relationship between two quantities, where a relative change in one quantity results in a proportional relative change in the other quantity, independent of the initial size of those quantities: one quantity varies as a power of another.”

Power law

In Wikipedia, the power law poses as the mother of all of its kind. Not only is it defined in a relatively simple and generic manner, independently from any other law or distribution, but it also pretends to subsume all the others. In its dedicated page, the section Examples opens with a typical claim about the pervasiveness of the power law, of a sort we will investigate further in a moment: “More than a hundred power-law distributions have been identified in physics (e.g. sandpile avalanches), biology (e.g. species extinction and body mass), and the social sciences (e.g. city sizes and income).” There follows a massive list of 50 cases organized in 7 sections, among which we find several familiar figures: “Pareto distribution and the Pareto principle also called the “80–20 rule””, “Zipf’s law”, “Yule–Simon distribution (discrete)”, and “The scale-free network model”.
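
The article’s lead does not spell out the equation, so here it is in the standard textbook form (my addition, not a quote), together with its signature on logarithmic scales:

```latex
f(x) = a\,x^{k}
\qquad\Longrightarrow\qquad
\log f(x) = \log a + k \,\log x
```

For the distributions discussed below the exponent is negative (often written p(x) ∝ x^{-α} with α > 1), which is what produces the straight, downward-sloping line on a log–log plot that the article invokes later.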

Other distributions also acknowledge the generality of the power law. For instance the Pareto distribution, in its own article, is presented as a culturally situated version of the power law.

Pareto distribution

“The Pareto distribution, named after the Italian civil engineer, economist, and sociologist Vilfredo Pareto, is a power-law probability distribution that is used in description of social, scientific, geophysical, actuarial, and many other types of observable phenomena.”

Pareto distribution

So what makes the Pareto distribution something more specific than the power law? The quote above, which is the first sentence of the article, states that the power law takes the “Pareto” name in a certain cultural context. However, a second point immediately follows, specifying that the power law takes the “Pareto” name for a specific value of one of its parameters, producing the specific “80-20” pattern.

“The Pareto distribution has colloquially become known and referred to as the Pareto principle, or “80-20 rule”, and is sometimes called the “Matthew principle”. This rule states that, for example, 80% of the wealth of a society is held by 20% of its population.”

Pareto distribution

Note that the “Pareto principle” and the “Matthew principle” have distinct dedicated articles, and that “80-20 Rule” redirects not to “Pareto distribution” but to “Pareto principle”. Duplicates are not uncommon in Wikipedia and they often make sense insofar as the different pages bring different perspectives on the same thing. Such duplicates sometimes coexist while ignoring each other, but that is not the case here, since the different concepts refer to each other explicitly and link to each other. However, not all variations of the power law state an equivalence with, or a relation to, all the others.
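
As a side note, the “80-20” pattern corresponds to one specific value of the Pareto shape parameter. Here is a minimal sketch of the standard computation, using the textbook Lorenz curve of the Pareto distribution (my own illustration, not anything quoted from Wikipedia):

```python
import math

# For a Pareto (Type I) distribution with shape alpha, the Lorenz curve is
# L(F) = 1 - (1 - F) ** (1 - 1/alpha), so the share held by the top 20% is
# 1 - L(0.8) = 0.2 ** (1 - 1/alpha). Solving "top 20% holds 80%" for alpha:
alpha = math.log(5) / math.log(4)        # = log_4(5) ≈ 1.161

top_20_share = 0.2 ** (1 - 1 / alpha)
print(f"alpha = {alpha:.3f}, share of the top 20% = {top_20_share:.3f}")  # ≈ 0.800
```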

Overview of the conceptual space

The following diagram exposes how each article positions itself vis-à-vis other concepts. Each arrow states how the Wikipedia article at the source of the arrow describes its relation with the concept at the target. The absence of a link means the absence of a position (all pairs have been systematically explored).

How in Wikipedia different concepts similar to the power law state their differences with each other

Download the source file as a CSV matrix
(RIGHT CLICK > SAVE AS or equivalent)

By exploring this landscape we find different types of relations: inclusion (i.e. one concept being more generic and the other more specific), equivalence (i.e. two different names for the same thing), and difference. If we omit the details to only look at the types of relations, if we also omit the missing links, and if we generalize the relations of inclusion, the network’s structure appears quite close to an ontology. This coherence shows a general consensus, in Wikipedia, about this otherwise tight conceptual space. The diagram below shows the relations of inclusion as boxes containing each other, which greatly simplifies the image. Five relations are not relations of inclusion, and are thus remarkable:

  • Difference between log-normal distribution and the power law
  • Equivalence between the log-normal and Pareto distributions
  • Equivalence between the Pareto distribution and the Matthew effect
  • Equivalence between the Pareto principle and the 80-20 rule
  • The long tail is a feature of the power law and its more specific versions
Relations between concepts close to the power law, focusing on inclusion, represented as boxes containing each other. Other relations are represented as arrows (is a feature of), equal signs (equivalence) and unequal signs (difference).

The general landscape seems quite classic, except for a paradoxical relation of equivalence between the Pareto distribution and the log-normal distribution, as in the following quote:

“The Pareto distribution and log-normal distribution are alternative distributions for describing the same types of quantities. One of the connections between the two is that they are both the distributions of the exponential of random variables distributed according to other common distributions, respectively the exponential distribution and normal distribution.”

Pareto distribution

This quote is key because it hints at the specific position of the Pareto distribution in this conceptual landscape. On the one hand, the central concept is the power law (PL), and not the Pareto distribution (PD). The PL is more generic (both articles agree on that), claims to subsume other laws (which the PD does not), and has the more open definition. It is also, strictly speaking, the proper statistical concept to refer to in most situations, for instance to characterize what the long tail is a feature of. But on the other hand, the PD is often mentioned along with, or even instead of, the PL. For instance in the following quote, the Zipf distribution uses both the PL and the PD as points of comparison, both times marking its key feature of being discrete (non-continuous).

“Zipf’s law […] can be approximated with a Zipfian distribution, one of a family of related discrete power law probability distributions. […] The Zipf distribution is sometimes called the discrete Pareto distribution because it is analogous to the continuous Pareto distribution in the same way that the discrete uniform distribution is analogous to the continuous uniform distribution.”

Zipf’s law

In this example we can see that the concept of power law is so vague that it raises issues when it is necessary to refer to it. Because of its extremely broad definition (“a functional relationship between two quantities, where […] one quantity varies as a power of another”), the power law plays the role of a group of notions (“family”) or a general principle. It does not have a precise equation and, strictly speaking, cannot be called a distribution. This role is better played by the Pareto distribution, which is the archetype of the power law. We hypothesize that it might actually be its prototype, insofar as the power law might have been forged as a generalization of the Pareto distribution. In any case, this relation leads to metonymies where the Pareto distribution is invoked in place of the power law when the point requires a precise distribution.
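
To make the contrast concrete: the Pareto (Type I) distribution has a precise density (standard textbook form, my addition), which is simply a power law with exponent α + 1 above a minimum value x_m:

```latex
p(x) = \frac{\alpha\, x_m^{\alpha}}{x^{\alpha+1}}, \qquad x \ge x_m,\ \alpha > 0
```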

The Pareto distribution and the power law are nevertheless in a paradoxical situation. Despite being largely equivalent, they seem to stand in opposite positions with respect to the log-normal distribution: the power law is mentioned as different from it, while the Pareto distribution is mentioned as equivalent to it. How could that be, if they are more or less the same thing? As we will see, this paradox is just an instance of a more general situation where the log-normal distribution is mentioned both as equivalent to and as different from the power law. This seems to be the particular form taken in Wikipedia by a controversy about the pervasiveness of these two notions. And as we will see, the Pareto distribution plays a role in it.

log-normal distribution

Taking a look at the concept map above, you can notice that the log-normal distribution only has inbound links, and no outbound links. It does not state any relation with any other concept (of our list), even though other concepts state relations with it. Contrary to the power law and similarly to the Pareto distribution, it is a distribution with a precise equation. It is related to the most central and important of all statistical distributions, the normal distribution: it is the distribution of a variable whose logarithm is normal. It also relates to the very important central limit theorem. Finally, it also takes the name of an important historical figure of statistics, Francis Galton.

“In probability theory, a log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. […] The distribution is occasionally referred to as the Galton distribution or Galton’s distribution, after Francis Galton. The log-normal distribution also has been associated with other names, such as McAlister, Gibrat and Cobb–Douglas. […] A log-normal process is the statistical realization of the multiplicative product of many independent random variables, each of which is positive. This is justified by considering the central limit theorem in the log domain.”

Log-normal distribution
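
The generative mechanism mentioned at the end of the quote is easy to reproduce. A minimal simulation sketch (my own toy example, with arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each value is the product of many small, independent, positive factors
# (e.g. successive percentage changes in a growth process).
factors = rng.uniform(0.9, 1.1, size=(100_000, 200))
values = factors.prod(axis=1)

# In the log domain the product becomes a sum, so the central limit theorem
# applies: log(values) is approximately normal, i.e. values are approximately
# log-normally distributed.
logs = np.log(values)
print(f"log domain: mean = {logs.mean():.3f}, std = {logs.std():.3f} (roughly Gaussian)")
```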

The log-normal distribution, like the power law, has a claim to pervasiveness. The section “Occurrence and applications” of its article lists 21 cases and starts this way:

“The log-normal distribution is important in the description of natural phenomena. This follows, because many natural growth processes are driven by the accumulation of many small percentage changes. These become additive on a log scale.”

Log-normal distribution

Log-normal/power law controversy

Despite their different origins and characterizations, the log-normal and power law distributions have important similarities, in particular when used to fit empirical data. A reader could not guess it by just reading the article on the log-normal distribution, which does not position itself relative to the power law and does not mention this similarity. It is however extensively discussed in the article about the power law:

“Few empirical distributions fit a power law for all their values, but rather follow a power law in the tail. […] all power laws with a particular scaling exponent are equivalent up to constant factors, since each is simply a scaled version of the others. This behavior is what produces the linear relationship when logarithms are taken of both f(x) and x, and the straight-line on the log–log plot is often called the signature of a power law. With real data, such straightness is a necessary, but not sufficient, condition for the data following a power-law relation. In fact, there are many ways to generate finite amounts of data that mimic this signature behavior, but, in their asymptotic limit, are not true power laws (e.g., if the generating process of some data follows a Log-normal distribution). Thus, accurately fitting and validating power-law models is an active area of research in statistics […] For example, log-normal distributions are often mistaken for power-law distributions”

Power law
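
The “signature” argument is easy to experience first hand. Here is a toy sketch (my own, not taken from either article) fitting a straight line to the log–log complementary CDF of a log-normal sample, the kind of plot where a log-normal is easily mistaken for a power law:

```python
import numpy as np

rng = np.random.default_rng(0)

# A log-normal sample with a large sigma can look almost straight on a
# log-log plot of its complementary CDF over several decades.
x = np.sort(rng.lognormal(mean=0.0, sigma=2.5, size=50_000))
ccdf = 1.0 - np.arange(len(x)) / len(x)          # empirical P(X > x), never zero here

log_x, log_ccdf = np.log10(x), np.log10(ccdf)
slope, intercept = np.polyfit(log_x, log_ccdf, 1)
print(f"fitted log-log slope: {slope:.2f}")
# A good-looking straight line is necessary but not sufficient evidence of a
# power law, which is exactly the point the Wikipedia article stresses.
```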

To understand what is happening here we need to take a few steps outside Wikipedia. There is indeed a controversy about which of the two distributions better fits empirical data, but unfortunately there is no dedicated Wikipedia page. I realized that a quick investigation of the academic literature was fruitful for understanding some Wikipedia statements. A complete study would certainly be worth it, but to keep us focused I chose to compromise. I dedicated a few hours to a quick-and-dirty mini literature review, whose findings are not robust but are sufficient to help us. In the following paragraphs we will see why a controversy emerged and how it leads to a specific kind of statement in Wikipedia.

On that topic, the Wikipedia article has for reference A Brief History of Generative Models for Power Law and Lognormal Distributions by M. Mitzenmacher, published in 2004. This paper, “intended to be accessible to a general audience”, provides an excellent overview of the question and summarizes that “lognormal and power law distributions connect quite naturally, and hence, it is not surprising that lognormal distributions have arisen as a possible alternative to power law distributions across many fields.” For a less academic and more dramatic take on the controversy, let us cite a 2005 post on the 3 Quarks Daily blog where an exchange of emails between C. Shalizi (champion of the log-normal) and A.-L. Barabási (defender of the power law) sets off the following comment: “This is actually a new trend — take someone’s claim that something is a power law/lognormal and then claim it is actually distributed the other way. Frankly, the two distributions are very close…” Wikipedia does not mention it, but it is important to acknowledge here that there has been a “trend” of claims of this type: “empirical data X, previously considered to follow a power law, are in fact better fit by a log-normal distribution” – or the other way around. At the core of the controversy lies this long-lasting dispute about which of the two distributions is a better fit for empirical data (M. Mitzenmacher could trace it back to the 1950s), but the dispute itself is not the controversy.

The log-normal controversy is a dispute about the dispute. At the level of the core dispute itself (power law versus log-normal distribution), the actors at least agree on their disagreement. But on the state of the debate, there is no consensus. In particular, multiple authors have successively tried to put an end to the dispute, without any success.

The most common attempt to reach closure states that (1) considering the kinship between the two models, and (2) the inherent gap between any model and the empirical data it fits, then (3) the dispute does not have enough relevance to justify the attention it gets. For convenience, let us call this point the pragmatist closure. In his 2004 paper, M. Mitzenmacher already argues for it and suggests that “from a more pragmatic point of view, it might be reasonable to use whichever distribution makes it easier to obtain results.” In 2009, A. Clauset, C. Shalizi and M. E. J. Newman conclude their paper Power-law distributions in empirical data this way: “In closing, we echo comments made by Ijiri and Simon more than thirty years ago and similar thoughts expressed more recently by Mitzenmacher. They argue that the characterization of empirical distributions is only a part of the challenge that faces us in explaining the causes and roles of power laws in the sciences. […] We hope that the methods given here will prove useful in all of these endeavors, and that these long-held hopes will at last be fulfilled.” Are their hopes fulfilled now? In a paper published in 2012 and titled Pareto or log-normal? A recursive-truncation approach to the distribution of (all) cities, G. Fazio and M. Modica once again attempted, without success, to reach a pragmatist closure, observing that “repeating [their analysis] confirms the difficulty of distinguishing a Pareto tail from the tail of a log-normal and, in turn, identifying the city size distribution as a false or a weak Pareto law”. Note in this citation how the key element is the tail of the distribution.

Other authors disagree with the pragmatist closure, presumably because they believe that the core dispute is actually relevant and deserves to be decided. Such authors have tried to reach closure by proposing better models. An often-cited 2000 paper by W. J. Reed titled The Pareto Law of Incomes – an Explanation and an Extension argues in favor of the “double Pareto” distribution (followed by other papers by the same author). As M. Mitzenmacher summarizes, “an appropriate double Pareto distribution can closely match the body of a lognormal distribution and the tail of a Pareto distribution” but obviously, this new model did not close the dispute. A more recent attempt at closure by better model can be found in a 2016 paper by J. Luckstead and S. Devadoss titled Pareto tails and lognormal body of US cities size distribution. “The purpose of [their] study is to propose a distribution […] to model lower and upper tails with Pareto and middle range with lognormal […]. We denote this distribution as Pareto-tails lognormal (PTLN).” M. Bee tried a similar move in 2015 with a paper whose title will suffice: Estimation of the lognormal-Pareto distribution using probability weighted moments and maximum likelihood. There is even a Wikipedia page on the Modified lognormal power-law distribution. Note that in these examples as well, the tail is the key element.

We hypothesize that the inability to reach closure is an issue for Wikipedia contributors, who cannot find firm ground on which to settle a statement about the core dispute. It is unclear whether the differences between the power law and the log-normal distribution are relevant, and it is unclear whether one of the two models, or any other model, is actually better. If a contributor, following a precautionary or neutrality principle, undertakes to represent the different positions of the controversy, she corners herself into both (1) mentioning the dispute and (2) contesting its relevance. This double movement, simultaneously acknowledging and undermining the dispute, is characteristic of certain statements or sets of statements we find in Wikipedia. For instance the Power law article states that “log-normal distributions are often mistaken for power-law distributions”, while the Pareto distribution article states that “The Pareto distribution and log-normal distribution are alternative distributions for describing the same types of quantities.” The article on Gibrat’s law (a law related to the log-normal distribution, even though it does not appear in the diagram presented before) offers another example of this double movement, once again involving distribution tails:

“While the city size distribution is often associated with Zipf’s law, this holds only in the upper tail, because empirically the tail of a log-normal distribution cannot be distinguished from Zipf’s law. A study […] finds that the entire distribution of cities, not just the largest ones, is log-normal. But this last claim that the lognormal distribution cannot be rejected has been shown to be the result of a statistics with little power: the uniformly most powerful unbiased test comparing the lognormal to the power law shows unambiguously that the largest 1000 cities are distinctly in the power law regime.”

Gibrat’s law

Distribution tails

As we have had multiple occasions to note, tails matter. There is indeed a whole science of distribution tails and, for clarity, it is worth noting that even though there is a notional use of “long tail”, there is also a specific statistical meaning. In the articles we have analyzed the notional use is common but not universal. The other important type of tail is the “heavy tail”, which appears to be the largest category, subsuming both the power law and the log-normal distribution.

“The term [long tail] is often used loosely, with no definition or arbitrary definition, but precise definitions are possible. […] In statistics, the term long-tailed distribution has a narrow technical meaning, and is a subtype of heavy-tailed distribution. […] Note that there is no sense of the “long tail” of a distribution, but only the property of a distribution being long-tailed.”

Long tail
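
For reference, the usual technical definition (textbook form, not quoted from these articles) is that a distribution is heavy-(right-)tailed when its tail decays more slowly than any exponential, a property shared by the Pareto, Zipf and log-normal distributions:

```latex
\lim_{x \to \infty} e^{\lambda x}\, \Pr(X > x) = \infty \quad \text{for every } \lambda > 0
```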

The multiple mentions of distribution tails we have seen so far come from the fact that the power law and the log-normal distribution have tails that are, in practice, indistinguishable. The tail is naturally central to the pragmatist closure of the log-normal/power law controversy, since it is where the dispute dissolves.

“Few empirical distributions fit a power law for all their values, but rather follow a power law in the tail.”

Power law

“While the city size distribution is often associated with Zipf’s law, this holds only in the upper tail, because empirically the tail of a log-normal distribution cannot be distinguished from Zipf’s law.”

Gibrat’s law

But the tail is also central to the closure by better model, since many of these models manage to fit empirical data by compositing different power laws. Since the log-normal/power law dispute only happens in the head (of the distributions), this strategy is very efficient. Indeed the head is smaller than the tail, and fitting a different law to the head greatly improves the model quality. The “double Pareto” distribution, for instance, takes its name from being the composite of two different Pareto distributions, separated by a single breaking point. We can find an illustration of such a “broken power law” in the article about Zipf’s law.

Example of a “broken power law” from the Zipf’s law page
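
In its generic form (standard textbook notation, not taken from the Wikipedia figure), a broken power law simply switches exponents at a break point x_b:

```latex
p(x) \propto
\begin{cases}
x^{-\alpha_1} & x < x_b \\[4pt]
x_b^{\,\alpha_2-\alpha_1}\, x^{-\alpha_2} & x \ge x_b
\end{cases}
```

The prefactor on the second branch only keeps the density continuous at the break point; the debate then moves to where to place x_b.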

Compositing power laws displaces the debate to discussing where and how to determine the breaking points. Since this approach keeps using power laws, it makes it possible to retain some of their features and strengthens the corresponding rationale. The log-normal distribution and the power law come with different rationales, and whichever better fits empirical data can claim its own explanation for the observed phenomenon. Unfortunately Wikipedia is not rich enough to investigate this aspect of the controversy, so we will just flag this sub-controversy for further inquiry in the academic literature. We can nevertheless mention this passage from the Rank-size distribution article:

“Most simply and commonly, a distribution may be split in two, termed the head and tail. If a distribution is broken into three pieces, the third (middle) piece has several terms, generically middle, also belly, torso, and body. These frequently have some adjectives added, most significantly long tail, also fat belly, chunky middle, etc. In more traditional terms, these may be called top-tier, mid-tier, and bottom-tier. […] Purely quantitatively, a conventional way of splitting a distribution into head and tail is to consider the head to be the first p portion of ranks, which account for 1-p of the overall population, as in the 80:20 Pareto principle, where the top 20% (head) comprises 80% of the overall population. The exact cutoff depends on the distribution – each distribution has a single such cutoff point—and for power laws can be computed from the Pareto index. […] The Yule–Simon distribution that results from preferential attachment (intuitively, “the rich get richer” and “success breeds success”) simulates a broken power law and has been shown to “very well capture” word frequency versus rank distributions. It originated from trying to explain the population verses rank in different species. It has also been show to fit city population versus rank better.”

Rank-size distribution

Finally, we must note that tail issues are also, and possibly mainly, about size. This is why the heavy-tailed distribution is the most generic category of our conceptual landscape. Vilfredo Pareto “was fascinated by problems of power and wealth”, as mentioned in his own article. The Pareto distribution indeed formalizes a deeply rooted inequality. “It is a social law”, he wrote. The existence of a long tail is enough to characterize this inequality. The log-normal distribution may have a different head, but it embeds as much inequality. And the inequality can be evaluated according to the thickness of the tail, hence the relevance of the general group of heavy-tailed distributions, which all embed this relation to inequality.

As a conclusion to this long section about the power law and its related concepts, and as a hint at the next part of our inquiry, network science, this quote from the article on complex networks shows that the heavy-tailed distribution matters more than the power law:

“Most of these reported “power laws” fail when challenged with rigorous statistical testing, but the more general idea of heavy-tailed degree distributions—which many of these networks do genuinely exhibit (before finite-size effects occur)—are very different from what one would expect if edges existed independently and at random”

Complex network

Getting back our imagination about the regulation of algorithms

I disagree with many clever minds when it comes to algorithms. Take for instance the following sentence: “The opacity of the algorithms’ power means that it isn’t easy to determine when algorithmic governance stops serving the common good and instead becomes the servant of the powers that be.” A pretty common claim. I am fine with it, except when it blames “the opacity”. A regrettable misunderstanding is at play, which paralyzes some people’s imagination. I think there are issues with algorithms, and I would like to provide a standpoint from which everyone can be critical, mobilize their political imagination, and step into the debate. My point is dead simple: we do not need to understand how algorithms think as long as we acknowledge that they have agency.

Algorithms, complexity and I have a long history, but here, like anyone else, I am simply concerned with algorithms impacting my life. They might be hidden and have an indirect influence; their effects are nevertheless real. I am writing this post in reaction to an article written by two Danish thinkers, Jacob Mchangama and Hin-Yan Liu, titled The Welfare State Is Committing Suicide by Artificial Intelligence. It is a short read, and all my quotes come from it. The authors reflect on the recent use of “algorithms to identify children at risk of abuse” in the Danish welfare system. Their main point is that “democratic infrastructures” and “judicial procedures” cannot keep algorithmic power in check, because we “will be largely unable to understand and explain why the algorithm” took its decision, which makes it “impossible for courts to hold [it] accountable.” They locate the source of the problem in the opacity of algorithms, which they say allows them to “take a toll on privacy, family life, and free speech, as individuals will be unsure when their personal actions may come under the radar of the government.” I agree that the situation requires scrutiny from the public but beyond that, I will not waste your time with my opinion. I just want to explain why I disagree that opacity prevents us from regulating algorithms. The following quote exposes this precise point.

“Consider the Danish case: the civil servants working to detect child abuse and social fraud will be largely unable to understand and explain why the algorithm identified a family for early intervention or individual for control. As deep learning progresses, algorithmic processes will only become more incomprehensible to human beings, who will be relegated to merely relying on the outcomes of these processes, without having meaningful access to the data or its processing that these algorithmic systems rely upon to produce specific outcomes. But in the absence of government actors making clear and reasoned decisions, it will be impossible for courts to hold them accountable for their actions.”

Indeed, algorithms are political beings. Insofar as they take decisions, they produce an effect, hence they have agency. And it is fair to expect them to become “more incomprehensible to human beings.” But concluding that this kind of opacity prevents us from regulating them is misunderstanding what it means to comprehend an algorithm. Contrary to what the authors believe, we have many ways to evaluate an algorithm from its outcomes. We can know it in depth and make many reliable predictions just by analyzing its outputs. This is not free, it comes at a cost in addition to developing the algorithm itself, but it does not require understanding how it works, how it thinks. This is sometimes called post-hoc interpretability, to emphasize that interpretation does not rely on the internal mechanics of the algorithm. This is typically the case with deep learning, where the algorithm is trained in a way that is “incomprehensible to human beings.” This is nothing special, just new to those who thought we had a divine right to understand everything in this world. As for us who feel the constant pain of being too stupid for what the world has to offer, we are used to having our capabilities exceeded and we find workarounds to keep going – when we can. Complexity is a name we sometimes use to talk about that. Post-hoc understanding is a workaround we use to keep going with algorithms that are too complex.

To me this whole story feels like there is not much to write about, but I know that is false because so many people feel threatened by opacity. It may come from misplaced confidence in our ability to contain and master all the things we produce, despite the accumulated evidence that this is not the case, culminating in our inability to keep our own habitat, the surface of our planet, in a state that suits our needs. Common misconceptions about what does or does not act are blinding us, for instance when we think that human beings have a power to act that the surface of our planet is lacking – but it provides a hot feedback! Algorithms are in the same situation. Once we acknowledge that they act by themselves (in the sense that they are opaque to us) and consider them accordingly, ways to regulate them in a democratic setting naturally appear. They do because we are surprisingly skilled at post-hoc interpretation, something we use every day without even thinking of it. Except we don’t usually do it for artificial things, only for other human beings.

Regulating the agency of human beings is the point of all politics, even though we barely know how the human mind works. The questions that seem to bother us about algorithms sound surprisingly empty when asked about persons. Let us call our algorithm Donald. What if civil servants working to detect fraud were largely unable to understand and explain why Donald identified a family for early intervention? Well, this would be an issue, but not much more than an incompetent employee. Our societies have invented many ways to deal with such things. We might stick with Donald until someone complains and then fire him. Or we might evaluate his work against a series of indicators and check whether he does his job. We might hire different Donalds and conduct an independent audit. We might ask people to vote. None of these solutions involve looking inside his brain. And we would certainly not conclude that in the absence of clear and reasoned decisions, it is impossible to hold Donald accountable for his actions.

Understanding an algorithm does not even dispense us from regulating it. Let us assume that black people are overrepresented in Donald’s targets, and a journalist claims Donald is racist. Are you surprised that Donald could be racist? People are constantly surprised that algorithms can be. Should we assume that Donald is fair? Of course not. What makes him racist, the way he thinks or the way he acts? Imagine that we can look into Donald’s mind and we find a sound rationale, where race is not a factor of the decision but geographical location is, and it turns out that mostly black people live in the targeted locations. Does it make Donald less racist? Algorithms do not dispense us from dealing with such political questions, and our solutions as a society are not so different for algorithms than for people. Even the fact that entire classes of algorithms might be flawed is not a particular problem. #BlackLivesMatter is scrutinizing an entire class of human beings.

Algorithms are problematic, but their problems do not arise from their opacity. They arise from our democratic institutions not acknowledging their agency. We saw the American Congress question Mark Zuckerberg, but it should have questioned Facebook’s algorithms first. Algorithms are not so mute, they can designate where responsibility flows. Of course Congress did not have the expertise to question algorithms, but it was also powerless because it had no practical means to scrutinize them. Why would we leave beings with such powers out of any jurisdiction? We cannot just let their owners have the exclusivity of their scrutiny. That would be an incredibly naïve mistake for a democracy, a mistake we would never make if their agency were more obvious.

I drew a number of conclusions for myself. I share them below as lightly considered suggestions that might have in fertility what they lack in robustness.

Scrutiny. We do not leave children unattended. We must not leave algorithms unattended. Who is in charge of watching a given algorithm? Our democratic infrastructures could ensure that this question always has an answer. No algorithm should be left out of jurisdiction, so no algorithm should be left out of scrutiny.

Accountability. Justice succeeds in dealing with the accountability of human beings, which is a difficult question. We can do it for algorithms if we acknowledge their agency. Like for human beings, accountability naturally circulates to others – algorithms, persons… Like for human beings, sometimes no one is guilty. Algorithms can be evil, but can also make honest mistakes, and sometimes both at once. And they have their own disorders.

Disposability. We can dispose of algorithms and we can proliferate them at a low cost, with or without variations. This makes a huge difference compared with persons and opens additional opportunities to regulate them. In many situations we use a single algorithm that has been declared the best fit for the task. This might be a consequence of an ideological quest for objective efficiency, but it is not very farsighted. Why not employ a swarm of variants so that we have a chance to observe which performs better? It also multiplies scrutiny, because we have better chances to distinguish contingent from essential effects.

Understandability. Though understanding algorithms is generally considered difficult, post-hoc understanding can be much simpler. It is an evaluation of the effects produced by the algorithm, and can be described in a simpler language. In the case of the Danish algorithm, it might be written in terms of over-/under-representation of different populations. This information is important anyway. Because it is easier to share, it can also spark the interest of the public and gather more eyes to watch the algorithm.

It is a political fight. Since opacity is not blocking us, we do not have to wait for a better understanding of deep learning. The situation will only get worse over the long term anyway. Regulating algorithms is a political issue, and technology is not holding us back. Culture might be, though, insofar as the modernist vision of the world tends to be blind to the agency of algorithms, which impairs our imagination on that matter. Also, for clarification: though political, this fight obviously has to take place (in part) on scientific ground, in the academic arena. Algorithm scrutiny starts in the papers describing them, and I have a lot to say on that topic, but that will be for another time!

Exploring relations between Pareto and Network Science on Wikipedia with Hyphe

I am currently looking into the power law: where it comes from and what role it plays in network science (scale-free networks are often characterized by a power-law distribution of node degrees). I used the web crawler Hyphe to investigate Wikipedia pages on that topic, and Gephi to analyze the links (I know these tools well). You will find here a report of that small experiment, unfolding the method and discussing it a bit.

In a nutshell, I obtained a network of Wikipedia pages where we see two main clusters, one about Pareto and statistics, the other one about network science. We see a bridge between the two, and as expected the power law is part of it. This validated my implicit hypothesis, and I learned a few additional things. Here it is (you may want to open it full screen and zoom in to read the labels).

Wikipedia pages about Pareto, the power law and network science.

Note: this visualization has features that you may not have seen before, such as the node halos that clarify their links. It is not straight out of Gephi; I used a JavaScript tool to produce it. It is a prototype and I will talk about it in another post.

Let’s start with the elephant in the room: did I learn anything non-trivial from this image? Yes. Nothing big, but useful things in a research context. The image above is the entry point I present so that you get a quick idea of what I write about. My findings did not come out of just a quick read of that network. They came out of the whole process, and I provide details below. Now that you are warned against this common misunderstanding, here is what I obtained from this work:

  • My hypothesis about the power law bridging certain statistical concepts and network science is confirmed. No big surprise, but it is a way to establish it.
  • I am now oriented among these concepts and have a good idea of my next steps. In particular, I know which concepts I must prioritize to investigate the relations between the two knowledge areas.
  • I have a well-described and argued set of pages (the “Pareto-to-network-science” corpus) that I can repurpose later in a scientific context, because the process behind it is transparent, reproducible and open to criticism.
  • For the same reason I have a set of pages defined as bridging my two domains, that I can repurpose later (the “bridge” corpus).
  • I also have a better idea of what the two sides are, and in particular the fact that they are asymmetric. I did not expect that (though I should have).
  • I had other surprises, and I value them highly because it is a not-so-common occasion to have a clue about my own biases:
    • I did not expect the “de Solla Price” bridge
    • I did not expect two sub clusters in network science
  • I can show an image that summarizes the situation, which might come in handy in a number of situations. Like this post.

In the next sections I will expose the protocol I used to get that network, and my analysis. This is more or less what I would write in a paper. In addition, however, I will describe my exploration, which took place before the final protocol and which is usually not shared in a paper.

Protocol

1. Starting lists

We start with two manually curated lists of pages related to the two topics we are studying. The two lists have the same number of pages, arbitrarily set to 10. Here are the lists:

Pareto and the power law:

  1. https://en.wikipedia.org/wiki/80-20_law
  2. https://en.wikipedia.org/wiki/Long_tail
  3. https://en.wikipedia.org/wiki/Pareto_distribution
  4. https://en.wikipedia.org/wiki/Pareto_index
  5. https://en.wikipedia.org/wiki/Pareto_principle
  6. https://en.wikipedia.org/wiki/Power-law
  7. https://en.wikipedia.org/wiki/Power_law
  8. https://en.wikipedia.org/wiki/Vilfredo_Pareto
  9. https://en.wikipedia.org/wiki/Zipf%27s_law
  10. https://en.wikipedia.org/wiki/Zipf%E2%80%93Mandelbrot_law

Network science:

  1. https://en.wikipedia.org/wiki/Albert-L%C3%A1szl%C3%B3_Barab%C3%A1si
  2. https://en.wikipedia.org/wiki/Complex_network
  3. https://en.wikipedia.org/wiki/Duncan_J._Watts
  4. https://en.wikipedia.org/wiki/Network_science
  5. https://en.wikipedia.org/wiki/Preferential_attachment
  6. https://en.wikipedia.org/wiki/Scale-free_network
  7. https://en.wikipedia.org/wiki/Scale-free_networks
  8. https://en.wikipedia.org/wiki/Small-world_network
  9. https://en.wikipedia.org/wiki/Small-world_phenomenon
  10. https://en.wikipedia.org/wiki/Small_world_network

At this point there is no crawl or corpus, but since we have seen the final result already, let’s visualize where the starting lists will end up in the final corpus. It will make the analysis easier to understand.

In indigo on the left, the “Pareto Power Law” starting pages.
In red on the right, the “Network Science” starting pages.

2. Crawl

Using the web crawler Hyphe, we define all Wikipedia pages as different web entities and we crawl these 20 pages. We obtain a list of 1874 web entities cited by these, most of which are other Wikipedia pages.

3. Corpus cleaning

We filter out all the web entities cited by only 3 or fewer of the starting pages, we remove any web entity that is not a Wikipedia page or is a tool page (categories, help, lists of links…), and we crawl the remaining entities to obtain the hyperlinks between them. At this stage we have 201 Wikipedia pages and the hyperlinks between them.

A quick look at the most linked pages in this corpus shows that many of them are not related to our topics. These “high layer” pages are very generic, and they are cited by our two topics just because they are generally cited by many Wikipedia pages. We use a simple criterion to rule them out: we remove any page that does not cite back at least one of the 20 starting pages. This simple procedure removes half of the pages we had.

Our final corpus consists of 106 Wikipedia pages (and the hyperlinks between them) characterized as:

  • Being cited by at least 4 of the starting pages
  • Citing back one or more of those starting pages
  • Not being a “tool” page (categories, lists of links…)

4. Identifying the two topics

We started with two lists of pages corresponding to two different (but related) topics. We assume that once extended to our final corpus, these two topics are still present and distinguishable. Just looking at the resulting network gives a strong clue that it is indeed the case. However we do not have to rely on a visual interpretation.

We define an extended version of each of the starting sets. For each list, the extended version contains all pages that are citing or cited by at least 5 pages of the starting list. In other terms, the extended set contains pages that have a link (citing or being cited) with 50% of the starting pages (of that list). Note that this procedure allows some pages to be on both sides, or on neither, but as the table below shows this is a minority of cases (less than 10%).
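
A minimal sketch of that selection rule, written over a hypothetical citation map (plain Python for clarity, not Hyphe’s actual API):

```python
from typing import Dict, Set

def extended_set(cites: Dict[str, Set[str]],
                 starting: Set[str],
                 threshold: int = 5) -> Set[str]:
    """Pages linked, in either direction, with at least `threshold` pages
    of the starting list. `cites` maps each page to the set of pages it cites."""
    extended = set()
    for page in cites:
        linked_starts = {s for s in starting
                         if s in cites.get(page, set())      # page cites s
                         or page in cites.get(s, set())}     # s cites page
        if len(linked_starts) >= threshold:
            extended.add(page)
    return extended
```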

Visualizing the “Pareto Power Law” extended set, which we will call “PPL” for brevity, shows that it largely overlaps with the visual cluster on the left, but not totally. A few pages have not been captured by our procedure, while a page on the right has been. That page is “Scale-free network”. Note: I do not attribute the power of being true to visual clusters as opposed to our selection metric, or vice versa. I just observe that they generally agree while having a few crucial disagreements.

In darker grey, the “Pareto Power Law” (PPL) extended set

We will also shorten “Network Science” to “NS”. Visualizing the extended set shows that despite being bigger, it was well captured by our selection procedure. No nodes from the visual cluster were missed, but a node clearly placed on the left side has been caught: “Power law”.

In darker grey, the “Network Science” (NS) extended set

If you are familiar with Gephi and its epistemic culture, you might wonder why I did not use modularity clustering to delineate the clusters. I will discuss this point later and remain focused on describing the protocol.

5. Identifying the bridge(s)

First of all, we must note that two pages belong to both sets, which in itself can be seen as a strong form of bridging. These two pages are “Scale-free network” and “Power law”.

We then identified bridges by looking at nodes that have connections with at least 10% of each extended set in a given direction (citing or being cited). This way we make the distinction between 4 types of bridge: cited by one extended set and citing the other (in both directions), cited by both, or citing both. Each page can play multiple bridging roles.
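
Here is a sketch of that rule under my reading of it (again plain Python over a hypothetical citation map, not the exact script used):

```python
from typing import Dict, List, Set

def bridging_roles(cites: Dict[str, Set[str]],
                   ppl: Set[str], ns: Set[str],
                   ratio: float = 0.10) -> Dict[str, List[str]]:
    """For each page, list the bridging roles it plays between the two extended
    sets. A page "cites" a set if it cites at least `ratio` of its pages, and
    "is cited by" a set if at least `ratio` of its pages cite it."""
    def cites_set(page: str, group: Set[str]) -> bool:
        return sum(p in cites.get(page, set()) for p in group) >= ratio * len(group)

    def cited_by_set(page: str, group: Set[str]) -> bool:
        return sum(page in cites.get(p, set()) for p in group) >= ratio * len(group)

    roles: Dict[str, List[str]] = {}
    for page in cites:
        r = []
        if cited_by_set(page, ppl) and cites_set(page, ns):
            r.append("cited by PPL, cites NS")
        if cited_by_set(page, ns) and cites_set(page, ppl):
            r.append("cited by NS, cites PPL")
        if cited_by_set(page, ppl) and cited_by_set(page, ns):
            r.append("cited by both")
        if cites_set(page, ppl) and cites_set(page, ns):
            r.append("cites both")
        if r:
            roles[page] = r
    return roles
```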

If we just look at the number of different bridging roles played by each page, we get the following distribution:

5 bridging roles
Scale-free network

4 bridging roles
Power law

3 bridging roles
none

2 bridging roles
Complex network
Degree distribution
Preferential attachment
Random graph
Scale-free networks
Small-world network
Social network
Sociology
Watts and Strogatz model

1 bridging role
Computer science
Social networks

6. Visualizing results

In Gephi I used a force-driven placement algorithm, Force Atlas 2, to assign the node positions you have seen above. I used the LinLog mode as it emphasizes the clusters, and its drawback (slow convergence) is not really a problem on such a small network. Once the layout appeared to have converged, and only then, I activated the “no overlap” feature to improve readability. As I expected to use this “base map” within a text, I chose to rotate it so that it spreads horizontally.

Analysis

Let’s look at how the links are distributed as a function of our sets. A simple way to do it is to look at density, but this metric is biased by the sizes of the clusters and the general density of the network. In order to remove these biases, we normalize the density the same way we would with modularity. Like modularity, the normalized densities vary from -0.5 to 1, and the higher the value, the more links there are compared to the number of links there could be in the context of that network.
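
For orientation, one standard modularity-style way to write the internal normalized density of a set c (the exact computation may differ in detail) compares the fraction of links inside c to the fraction expected from its nodes’ degrees:

```latex
Q_{\mathrm{in}}(c) = \frac{\ell_c}{m} - \left(\frac{d_c}{2m}\right)^{2}
```

where ℓ_c is the number of links inside the set, d_c the total degree of its nodes, and m the total number of links in the network; the external density can be normalized in the same spirit.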

The PPL set has an internal normalized density of 0.027, versus an external normalized density of -0.014. Normalized densities are generally low; the important fact here is how the internal density dominates the external density: there are many more links inside the PPL set than between the PPL set and the rest. The PPL set is a cluster in that sense.

Similarly, the NS set has an internal norm. density of 0.117 and external of -0.018. NS is also a cluster, and even better defined.

We can see it in the visualization, but the result does not rely on the visual representation. We have two well-defined topological clusters, with different densities and sizes.

The group of pages defined as “Pareto Power Law” is about statistical laws, probabilities, and important figures such as Pareto and Zipf. We suspect that it is part of a much bigger group of pages about statistics but, possibly because the power law is a central concept to that field, our strategy might not have been able to capture that whole group. This set is smaller (30 pages) and less dense (0.027 normalized density) than the “Network Science” set (72 pages and 0.117 normalized density). As a conceptual space it is narrower than network science. We hypothesize that it is just the fringe of a larger conceptual space about statistics, and it is possibly not so well defined as a subtopic (a different protocol could test this).

The group of pages defined as “Network Science” is larger, better defined, and more interconnected. It is well groomed as a conceptual space, with specific concepts (“Preferential attachment”, “Small-world network”…) intertwined with a body of much more generic concepts (“Internet”, “Social network”…). I am confident that the sub-cluster we identify visually (at the bottom of the cluster), corresponding to the topic of social networks, would be confirmed as such by the same kind of density analysis.

The connections between the two clusters are multiple. Looking at the direction of the links in the different kinds of bridges, we see that there are many more pages where links come from NS and go to PPL than the contrary. This indicates that NS cites the concepts of PPL more than the other way around.

Two pages have a more important bridging role: “Scale-free network” and “Power law”. It is not a big surprise, but I am happy to have established the key role of these two concepts in the circulation of concepts from statistics to network science. The rest of this investigation will rely on a more qualitative approach.

The other bridges that we have identified are a priority for my investigation. More generally, now that the corpus has been scrutinized and we know that it captures the areas it was intended to, it would be a good idea to read all of its roughly 100 pages. A possible follow-up could be a text analysis of these pages, and/or of their Wikipedia history.

Why not use modularity clustering and betweenness centrality?

Because I did not need to, and it was easier to explain my protocol that way.

This is the alternative protocol: curate a corpus manually so that it captures the two topics. Run modularity clustering in Gephi to find clusters. Run betweenness centrality to find the bridges.

The problem with that protocol is that it depends on abstract concepts for quite a simple thing. Modularity clustering is hard to explain. The visual clustering in the visualization, which is known to be coherent with modularity clustering, is hard to explain. Betweenness centrality is hard to explain. We can explain how it works but not what it does.

Betweenness centrality counts shortest paths: a node with a high score lies on many of the shortest paths between other nodes. It means that if you remove such a node, you break many shortcuts and make distances longer in the network. This is how it works, but not what it does. From my point of view, what betweenness centrality does is capture both “intuitive bridges” and centers. Centers are nodes that are well connected, connected to other well-connected nodes, and that you would also find with other metrics such as closeness centrality or simply the degree. The “intuitive bridges” are the other ones, left over by the other metrics, which are often “in between” clusters. How it works does not tell you what it does, and the justification of the method is obscured.
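
To illustrate the point, here is a toy example with networkx (using a built-in graph rather than my Wikipedia corpus):

```python
import networkx as nx

G = nx.karate_club_graph()                 # any small example graph
bc = nx.betweenness_centrality(G)          # fraction of shortest paths passing through each node

top = sorted(bc, key=bc.get, reverse=True)[:5]
print(top)
# The highest scores mix well-connected "centers" (which degree or closeness
# would also find) with nodes sitting between groups, the "intuitive bridges".
```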

Sometimes there is no other way. But in this experiment, I designed a strategy that does not rely on these hard-to-explain elements and defines our corpus and our bridges in a simpler way. It is just about who cites whom, and it still works. But of course I knew it would work beforehand, because I had explored the domain. It was not a bet, as the game was rigged from the start.

Exploration

Now that I have shared a more finalized product, I will open my kitchen and expose my methodological notes. They are only slightly edited to be readable by you. They were not written to be exposed in extenso, so you will have to pardon the style.

I just started with the following three pages:

  • https://en.wikipedia.org/wiki/Power_law
  • https://en.wikipedia.org/wiki/Pareto_distribution
  • https://en.wikipedia.org/wiki/Pareto_principle

First round of exploration: among the Wikipedia pages cited by the 3 entry points, if we set aside special pages related to scientific content (DOI, ISBN…) and pages specific to Wikipedia practice (Main page, Help…), we get 3 other pages: Long tail, Normal distribution and Zipf’s law. We extend the corpus in that direction.

Second round of exploration: pages cited by 3+ of the above 6. We decide to eliminate list pages (e.g. Category: Statistical law). We find tens of pages about the different statistical distributions. In order to avoid drifting towards statistics in general, we rule them out, except for the log-normal distribution because of the controversy about interpreting real data as power law or log-normal. We just get Benford’s law, Log-normal distribution, Exponential distribution, Generalized Pareto distribution and Zipf–Mandelbrot law.

At this point the corpus is hugely skewed towards statistics. We want to expand it towards networks and management or political science, where Pareto and Juran were influential. We search for specific terms in Hyphe’s Prospect to add sufficiently cited pages:

  • “Pareto” gives Vilfredo Pareto, Pareto efficiency and Pareto index
  • “Law” gives Power-law and 80-20 law (aliases of pages we had)
  • “Network” gives Scale-Free Network and Social Network
  • “Watts” and “Barabasi” add Bianconi-Barabasi model, Barabasi-Albert model, Watts and Strogatz model, Albert-Laszlo Barabasi, Duncan J Watts
  • “Small-World” brings Small-world network, Small-world experiment, Small-world phenomenon and Small world network (alias).

The following round of exploration allows us to find the links to “network science”. We focus on pages cited by 6+ of the above. A lot of graph theory appears and, as previously, we try to stay focused on complex / scale-free / small-world networks and their specificities. By this means we add 25+ pages mostly related to network science.

The obtained network is pretty clear: the power law bridges statistics, and in particular Pareto’s law, with network science (see the file “Pareto Law exploration.gexf”).

After this exploration we can design a more understandable and more straightforward protocol. We will start with two lists of pages, one on Pareto and one on network science, crawl both, and see what comes out and how it bridges.

Reticular

🎉 Welcome here, this is the first blog post! – Mathieu 

I investigate our relation to tools in the context of the social sciences and humanities, with a focus on networks and complexity.

Following the data deluge and the democratization of digital tools, networks became a part of the scholar’s toolbox for studying various phenomena. Multiple factors made this possible: a new object was invented (the “complex network”), graph theory successfully tackled hard challenges (e.g. Google’s PageRank and the “relevance” problem), and data visualization became accessible to non-specialists. As a co-founder of Gephi, a popular open source software for network analysis, I had the chance to witness the adoption of networks by a number of scholars. Now gradually transitioning from the role of engineer to that of researcher, I aim to retrace the trajectory of the network as a technology and an intellectual object in the social sciences.

Despite being mainly epistemological in nature, this research agenda spreads across multiple fields. The algorithms used by digital devices play a crucial role (e.g. assigning positions to the nodes of a network) and I discuss some of them from the double perspective of the designer (computer scientist) and the user (social science scholar). Information design (data visualization) is another key topic that I approach sometimes as a computer scientist and sometimes as a practitioner, as the techniques and standard references are often imported from industry (see for instance the “Information is Beautiful Awards”). Although sociology is not my original field, I follow applications of digital instruments in two distinct areas, media studies (following trends in web mining and platform studies) and qualitative sociology (notably controversy mapping). Finally, I often adopt the perspective of science and technology studies to reflect on the role of these tools in the scholar’s practice, occasionally taking baby steps into the history of science.

This research notebook is intended for users of digital technologies in the social sciences and humanities who want to reflect on their effects on our practice and thought.