
Close reading Wikipedia from Pareto to Network Science, part 3

This is part 3: Statistical law or distribution?

In this part we focus on a specific issue. As we have seen previously, there is a Wikipedia article for the Pareto distribution. There is also a page for Pareto’s law, which tells you that it refers either to the Pareto distribution or to the Pareto principle. In this case, like many others, a distribution is also called a law, but the two concepts are not exactly the same. Can we establish the nuance between what is called a distribution and what is called a law? This leads to two sub-questions: what does Wikipedia state on the matter? And what is the Wikipedia practice when it comes to naming a thing a law or a distribution? Note: here we only study this issue insofar as it relates to network science and the power law, since the corpus of analyzed articles is defined this way (see part 1 for the method).

As a post on my research blog, this piece exposes more than a typical paper would: it is honest about what is actually done, at the cost of including some not-so-relevant material. I also follow the open source software mantra “release early, release often”: the style might be raw.

Findings

There is confusion between law and distribution, and it is entirely the fault of the concept of statistical law. A distribution is just an equation. But law can refer to two clearly different things:

  • The statement that a given rule or distribution is empirically pervasive (with the power to describe) and/or mathematically proven (with the power to explain). Wikipedia explicitly states such a definition.
  • A given statistical distribution. This usage of law as a synonym for distribution is implicit but widely used in Wikipedia.

These two usages coexist in many articles, and we also find multiple ways in which the two concepts of law and distribution are presented as equivalent. The situation results in confusion about the empirical grounding of statistical laws. This confusion creates a fallacy where mathematical validity may seem to cause and justify empirical validity. The claims to pervasiveness in particular deserve some attention. For most laws, the pervasiveness of observations is a defining characteristic without which they would not be called laws in the first place. But pervasiveness is not established by the sole virtue of something being called “law”. We will investigate these questions in an upcoming part.

The overlapping definitions of distribution and law

Distribution has a precise definition, stated in its dedicated article. It is clear and operational, and is not challenged in any article of our corpus.

“In probability theory and statistics, a probability distribution is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment.”

Probability distribution
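To make the contrast concrete, here is a distribution written down as the mathematical function it is. This is the standard probability density of the Pareto distribution, reproduced from general knowledge rather than from any of the quoted articles (x_m is the minimum value and α the shape exponent):

$$ f(x) = \frac{\alpha \, x_m^{\alpha}}{x^{\alpha + 1}} \quad \text{for } x \ge x_m, \qquad f(x) = 0 \text{ otherwise} $$

Nothing in this equation says anything about where, or how often, it is observed in the real world; that is precisely what the concept of law adds on top of it.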

There are two relevant articles about laws, Empirical statistical law and Scientific law. Unsurprisingly, statistical laws are presented as a special case of scientific laws in the field of statistics. Remarkably, both articles acknowledge multiple meanings for the concept.

“The term law has diverse usage in many cases.”

Scientific law

Despite a reference definition that seems to frame the concept of law as a kind of empirical model (see citation below), the usage does not follow that definition because, as we will see, law can also mean distribution.

“The laws of science, also called scientific laws or scientific principles, are statements that describe or predict a range of natural phenomena. Each scientific law is a statement based on repeated experimental observations that describes some aspect of the Universe.”

Scientific law

If you asked a metaphorical Wikipedia whether a statistical law and a distribution are the same thing, you would not get a clear-cut answer. Not only is Wikipedia ambivalent, sometimes pretending they are the same thing, sometimes not, but the confusion is even official. Indeed Wikipedia acknowledges that in the academic literature, different definitions of statistical law coexist, some of which just mean distribution, some of which do not.

The article on empirical statistical laws provides explanations on those multiple layers of meaning.

“An empirical statistical law or (in popular terminology) a law of statistics represents a type of behaviour that has been found across a number of datasets and, indeed, across a range of types of data sets.”

Empirical statistical laws

In other words, a law can be
(1) a pervasive behavior
…except that:

“Many of these observances have been formulated and proved as statistical or probabilistic theorems and the term “law” has been carried over to these theorems.”

Empirical statistical laws

So a law can also be
(2) a pervasive behavior explained by a mathematical theorem
…but wait, there is more:

“There are other statistical and probabilistic theorems that also have “law” as a part of their names that have not obviously derived from empirical observations. However, both types of “law” may be considered instances of a scientific law in the field of statistics.”

Empirical statistical laws

At last, a law can also be
(3) a mathematical theorem

In a nutshell, it seems that there are two different things that can give birth to a law: empirical observation, and/or mathematical theory. Each is sufficient to justify the name, and neither is necessary. This is according to what Wikipedia explicitly states on the matter.

Wikipedia states that a statistical law can be three things

As we will see the situation is a bit more complicated, but as a starting point we can already clarify that:

  1. Multiple meanings of law coexist
  2. When a law is mentioned, we should elucidate whether it is for empirical reasons, for theoretical reasons, or for both.

With this picture in mind, aware of the lack of a clear divide between law and distribution, we can focus on the usage and justification of these concepts.

Statements articulating law and distribution

Now that we have an idea of how Wikipedia frames law and distribution, we look into the usage of these terms to see when they are differentiated or, on the contrary, presented as equivalent. In addition, we account for a specific kind of claim where the law is framed as empirically grounded.

Demarcation

In our Wikipedia corpus we found no statements about a law/distribution divide, but we do find demarcations between law and theory. As we have seen, a distribution is a mathematical entity, but it is a function and not a theorem. As it turns out, the only demarcations we find are internal to the notion of law, and leave the concept of distribution unaffected. For instance, from the article about scientific law:

“Laws differ from scientific theories in that they do not posit a mechanism or explanation of phenomena: they are merely distillations of the results of repeated observation. As such, a law is limited in applicability to circumstances resembling those already observed, and may be found false when extrapolated.”

Scientific law
A first type of demarcation is about the power to explain, and is more exclusive

The following picture, from the same article, is supposed to illustrate the point, though it makes an important difference:

Illustration from the article Scientific law

The picture shows the three cases that can be called law, but once again demarcates between meaning (3) on one side (on the left) and meanings (2) and (1) on the other (center and right respectively).

The picture makes an important difference however, insofar as it states that repeated successful predictions both describe and explain phenomena, while the text demarcates the law as unable to explain. If we reorder the picture in the same order as our previous illustrations, we obtain the following:

A second type of demarcation is about the power to describe, and is more inclusive

According to both demarcations, a mathematical theorem that is not based on empirical observation should not be called a law. In this perspective, the anomalies that exist for historical reasons are framed as exceptions that prove the rule.

“What distinguishes an empirical statistical law from a formal statistical theorem is the way these patterns simply appear in natural distributions, without a prior theoretical reasoning about the data.”

Empirical statistical laws

This statement makes the exact same inclusive demarcation, but is interesting for another reason: the use of the concept of “natural distribution”, which here refers to empirical observations. As tempting as it may be, we cannot approach the concept of distribution as the mathematical side of the notion of law. Not only can laws be mathematical, but distributions are sometimes located on the empirical side of the observation/theory divide.

Equivalence

Equivalence has multiple forms, explicit or implicit. We expose two of them:

  1. Indifferent naming of the same thing as law or distribution
  2. Statements of multiple names

Indifferent naming. The article on the normal distribution provides a good illustration; we will focus on it for a few paragraphs. In the first third of the article, the normal distribution is systematically called a “distribution”. But then a few passages mention the “Gaussian law” and, progressively, we see mentions of the “normal law”.

The simplest form of indifferent naming is the usage of both “normal law” and “normal distribution” in the same context. We consider it an implicit form of equivalence, provided that both names refer to the same thing. In our example, it is important to note the absence of nuance between the two terms. For instance, following the observed demarcations, we could expect the term “law” to be used to emphasize the empirical dimension, but this is not the case. See how the first occurrence of “normal law” in the article clearly refers to the mathematical function:

“Pearson distribution — a four-parameter family of probability distributions that extend the normal law to include different skewness and kurtosis values.”

Normal distribution

As an additional clue of this equivalence, the article on normal law exists and proposes a redirection to normal distribution (along with two other unrelated meanings of normal law, in the field of aviation and in the field of justice).

A slightly different form of indifferent naming is when a distribution is also called a law but under a different name, as in distribution X and law Y. For instance, the “normal distribution” is also called the “Gaussian law”. Once again, we verify that this term refers to the exact same thing as the distribution, as in this passage:

“One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice.”

Normal distribution

Incidentally, note that while the term “law” here refers to the mathematical function, “distribution” refers to the empirical observations, in a complete reversal of the expected demarcation.

Statements of multiple names. Sometimes we find statements about the multiple names, which is the explicit version of indifferent naming. For instance, the article on the Poisson distribution has a subsection about the law of rare events which explicitly states synonymy (but only “sometimes”).

“The word law is sometimes used as a synonym of probability distribution, and convergence in law means convergence in distribution. Accordingly, the Poisson distribution is sometimes called the law of small numbers because it is the probability distribution of the number of occurrences of an event that happens rarely but has very many opportunities to happen. The Law of Small Numbers is a book by Ladislaus Bortkiewicz (Bortkevitch) about the Poisson distribution, published in 1898.”

Poisson distribution
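For context, the “rarely but has very many opportunities” phrasing corresponds to the classical Poisson limit of the binomial distribution, which I state here from general knowledge rather than from the quoted article: if an event has a small probability p on each of n independent opportunities, and λ = np is held fixed as n grows, the distribution of the number of occurrences converges to the Poisson distribution.

$$ \lim_{n \to \infty} \binom{n}{k} \, p^{k} (1-p)^{n-k} = \frac{\lambda^{k} e^{-\lambda}}{k!}, \qquad \lambda = np \text{ held fixed} $$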

The source of these multiple names is often implied or stated as historical, as in the article about the normal distribution, whose History section has a “Naming” subsection where it appears that the distribution was first named a law:

“Since its introduction, the normal distribution has been known by many different names: the law of error, the law of facility of errors, Laplace’s second law, Gaussian law, etc. Gauss himself apparently coined the term with reference to the “normal equations” involved in its applications, with normal having its technical meaning of orthogonal rather than “usual”. However, by the end of the 19th century some authors had started using the name normal distribution”

Normal distribution

Other articulations

Like the Deleuzian fold, which separates and joins at once, some statements imply demarcation and equivalence at the same time. Take for instance this comment following a list of mathematical extensions to the normal distribution:

“All these extensions are also called normal or Gaussian laws, so a certain ambiguity in names exists.”

Normal distribution

Acknowledging ambiguity is a passive-aggressive way to argue for demarcation: it pretends that a difference exists while doing nothing to situate it. What the sentence actually does is state that the multiple variations of the distribution share a common name, which is a statement of equivalence, although quite a weak one.

The law framed as pervasiveness. As we have seen, the reference definition of scientific law refers to its pervasiveness. Similarly, the article on Zipf’s law argues that its status as an “empirical law” refers to the pervasiveness of the Zipfian distribution. Note the “many” in the following sentence:

“[Zipf’s law] refers to the fact that many types of data studied in the physical and social sciences can be approximated with a Zipfian distribution, one of a family of related discrete power law probability distributions.”

Zipf’s law

In this perspective the law is not exactly the distribution: it is the fact that we find the distribution in multiple empirical situations. This is a possible key to understanding the usage of our two terms. Not only is it aligned with the demarcation (the law carrying the empirical dimension of the distribution), but it also suggests an interpretation of the usage of law as a metonymy: “law X” would stand for “law of the pervasiveness of distribution X”. However, this interpretation only holds for the cases where we can observe such a nuance, since we have seen with indifferent naming that in some situations there is a strict synonymy between law and distribution. In any case, the question of pervasiveness is central to our research question and we will dedicate some specific attention to it.

Claims to empiricism

As we have seen, when the term law is not used as a synonym of distribution, it has an empirical dimension as hinted by the title of the corresponding article, “Empirical statistical laws”. But we will first return to the reference definition of scientific law, once again quoted below. It states the key elements of the empirical dimension of a law, which we will discuss.

“The laws of science, also called scientific laws or scientific principles, are statements that describe or predict a range of natural phenomena. Each scientific law is a statement based on repeated experimental observations that describes some aspect of the Universe.”

Scientific law

Firstly, a law can describe or predict. These are two distinct agencies, both of which can be qualified as empirical. The power to describe is the weaker: it only means that the law is a valid reduction of a phenomenon. The power to predict is stronger, since it allows applications. Both are empirical because they relate to the observation of natural phenomena, but in different ways.

Secondly, a law is pervasive (“repeated observations”). Implicitly, laws can have validity conditions that prevent them from describing and/or predicting certain phenomena – laws are not realistically expected to apply strictly everywhere all the time. But a law is not something that happens once or twice. It earns its status as a law by applying to enough observations, even though no precise threshold is specified.

The same article also states a third kind of agency, the power to explain. It seems to derive from the combination of the other agencies and pervasiveness. The power to describe (or “summarize”) gives the power to potentially explain, while the power to predict and the pervasiveness validate the explanation.

“Scientific laws summarize and explain a large collection of facts determined by experiment, and are tested based on their ability to predict the results of future experiments.”

Scientific law

We also find an alternative way to empirically ground the concept of law, based purely on pervasiveness, and sometimes on a more explicitly structuralist alternative, independence from details. This argument is called “universality” and is stated in a dedicated article.

“In network dynamics, universality refers to the fact that despite the diversity of nonlinear dynamic models, which differ in many details, the observed behavior of many different systems adheres to a set of universal laws. These laws are independent of the specific details of each system.”

Universality (dynamical systems)

We see here how universality refers to the very fact that there are empirical laws. Note that independence from details is more than observed pervasiveness, insofar as it adds an explanation to the phenomenon. In that sense universality implies the power to explain, but whereas the statements from the article on scientific law describe where that power comes from, universality only assumes it.

The article on the log-normal distribution makes a precise universalist point which embodies multiple aspects of the Wikipedia discourse on law and distribution.

“The log-normal distribution is important in the description of natural phenomena. This follows, because many natural growth processes are driven by the accumulation of many small percentage changes. These become additive on a log scale. If the effect of any one change is negligible, the central limit theorem says that the distribution of their sum is more nearly normal than that of the summands. When back-transformed onto the original scale, it makes the distribution of sizes approximately log-normal (though if the standard deviation is sufficiently small, the normal distribution can be an adequate approximation). This multiplicative version of the central limit theorem is also known as Gibrat’s law, after Robert Gibrat (1904–1980) who formulated it for companies. If the rate of accumulation of these small changes does not vary over time, growth becomes independent of size. Even if that’s not true, the size distributions at any age of things that grow over time tends to be log-normal.”

Log-normal distribution

We remark that:

  • It starts as a claim to empiricism (“important in the description of natural phenomena”)
  • It grounds the claim on pervasiveness first (“because many natural […] processes”…)
  • The term law (“Gibrat’s law”) does not refer to the distribution itself (log-normal) but to a “theorem” involving it (the central limit theorem).
  • The passage argues that the theorem explains the observed pervasiveness, which is the universalist stance (regardless of how convincing you may find the point)
  • All of which frames Gibrat’s law as our definition (2), a pervasive behavior explained by a mathematical theorem.
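The multiplicative mechanism described in the quoted passage is easy to check numerically. The following sketch is my own illustration, not taken from the Wikipedia article; the entity count, the number of growth steps and the size of the percentage changes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)

n_entities = 10_000  # e.g. companies, in Gibrat's original framing
n_steps = 500        # number of small growth episodes

# Each step multiplies the size by (1 + a small random percentage change).
changes = 1 + rng.normal(loc=0.01, scale=0.02, size=(n_entities, n_steps))
sizes = changes.prod(axis=1)  # final size after all multiplicative steps

# On a log scale the multiplicative changes become additive, so the central
# limit theorem makes log(sizes) approximately normal, i.e. the sizes
# themselves approximately log-normal.
log_sizes = np.log(sizes)

def skewness(x):
    """Simple sample skewness: close to 0 for a symmetric (e.g. normal) sample."""
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

print("skewness of sizes:    ", round(skewness(sizes), 2))      # clearly positive
print("skewness of log sizes:", round(skewness(log_sizes), 2))  # close to zero
```

The raw sizes come out clearly right-skewed while their logarithms are roughly symmetric, which is what “approximately log-normal” means in the quote. Note that this only illustrates the theorem, our definition (2); it says nothing about whether real growth processes actually behave this way, which is the empirical claim of definition (1).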

An epistemic typology of statistical laws

In line with the three basic understandings of the concept of law that we highlighted at the start of this section, the article on empirical statistical law proposes a list of examples formulated as a typology. It features various degrees of theoretical grounding, empirical grounding, and combinations of the two.

“Examples of empirically inspired statistical laws that have a firm theoretical basis include:
• Statistical regularity
• Law of large numbers
• Law of truly large numbers
• Central limit theorem
• Regression towards the mean

Examples of “laws” with a weaker foundation include:
• Safety in numbers
• Benford’s law

Examples of “laws” which are more general observations than having a theoretical background:
• Rank-size distribution

Examples of supposed “laws” which are incorrect include:
• Law of averages”

Empirical statistical laws

Note that one of those laws is called a theorem, and another one a distribution, but as we have seen the equivalence between law and distribution has multiple facets.

This typology however does not include some of the cases we have observed, in particular the case where law is employed as a strict synonym of distribution. It also distinguishes the truth of a law by qualifying it as “supposed”, suggesting that a falsified law might no longer be a law. We will propose our own typology, including all the cases we have observed and ignoring the truth status of the law. In other words, we will recognize a law if it is stated as a pervasive behavior, for instance, regardless of whether this statement is true or false. We see no problem with the concept of a false law, and consider it convenient to still call it a law for epistemic reasons. We will also specify some observed characteristics of the different types. Note: this typology extends and enriches our first basic typology (numbering is consistent).

Observed types of usage of the term law

(0) Law strictly refers to a distribution (a mathematical function).
This definition is never stated but is observed in Wikipedia articles.
It may be interpreted as a metonymy (see below).

(1) Law refers to a pervasively observed distribution/behavior.
Often considered to just describe a phenomenon.
Pervasiveness is required to be called a law.
Some validity conditions may apply.

(2) Law refers to an empirically observed theorem.
Often considered to explain and predict a phenomenon.
The theorem is often about a phenomenon following a certain distribution.
Pervasiveness is no longer required since the theorem is formal.
Some validity conditions may still apply.

(3) Law refers to a theorem (not observed empirically).
Often considered an improper use of the “law” term.

These different types correspond to two different perspectives: descriptive and explanatory. In the descriptive perspective, the law describes a phenomenon, possibly with a distribution. The fact that the distribution itself can also be called a law may cause some confusion. This perspective is purely grounded in empirical observations, which must show some pervasiveness for the behavior to be called a law.

Descriptive perspective: based on pervasive observations, the law describes but does not explain

The explanatory perspective is, on the contrary, grounded in a mathematical theorem which is not necessarily observed in the real world. Because the theorem has a formal proof, a deduction, its validity conditions do not require pervasiveness. However, applicability to real life may still involve validity conditions. The mathematical foundation, as we have seen, is often credited with a power to explain and/or to predict. The fact that the theorem may be about a distribution also allows for a certain degree of confusion.

Explanatory perspective: based on a theorem, it not only describes but also explains and/or predicts

The last two diagrams show how these different usages overlap. As we will see, this situation generates a certain amount of confusion.

The inconsistent usage of the term law

Our four types of usage are not exclusive, as type (0) is compatible with each of the three others. Indeed, though usages (1), (2) and (3) differ in the way the law is grounded (empirically and/or theoretically) and are thus mutually exclusive, usage (0) is possible any time a distribution is involved. What does it mean? Does it matter?

Let us take an example. An article mentions “Zipf’s law”. What do you think it means? There are two possible answers.

  • (0) It means Zipf’s distribution. It is just a mathematical function.
  • (1) It means that Zipf’s distribution is observed everywhere (inside its validity conditions). It is a statement on the empirical reach of a mathematical function, based on its pervasiveness.

Let us look at the Wikipedia article, which starts like this:

“Zipf’s law […] refers to the fact that many types of data studied in the physical and social sciences can be approximated with a Zipfian distribution”

Zipf’s law

This usage is clearly of type (1). But later in the same article we find statements such as these:

“Zipf’s law is most easily observed by plotting the data on a log-log graph”

“The simplest case of Zipf’s law is a “1⁄f function.””

“many natural phenomena obey Zipf’s law”

Zipf’s law

And even the following, which I cannot reproduce in text form in this blog:

Zipf’s law as a formula, referring to the distribution
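For reference, the standard form of the Zipfian distribution is the following; I reproduce it from general knowledge, so it may not match the article’s image exactly. Here k is the rank, N the number of elements and s the exponent characterizing the distribution:

$$ f(k; s, N) = \frac{1/k^{s}}{\sum_{n=1}^{N} 1/n^{s}} $$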

These multiple usages are just as clearly of type (0), referring to the distribution and not to the statement of its pervasiveness. In this example, like in others, different usages coexist in the same text.

This practice of mixing the usages is akin to a figure of speech such as a metonymy. We hypothesize that the Wikipedia writer may favor the shorter “Zipf’s law” over the longer “the mathematical equation involved in Zipf’s law”. It is however intriguing that, for the sake of brevity, the perfectly sound concept of “Zipfian distribution” is not used instead. Distributions never have the meaning of an empirical statement; they always refer to the equation. We found no other reason than the persistence of a historical confusion to explain why the term “distribution” is not used in place of “law” when referring to the mathematical formula.

Does it matter? It brings confusion to statements where we cannot easily determine which of the two usages is intended, which has consequences for the evaluation of those statements. For instance, one may wonder: is Zipf’s law true? Understood as type (0), it is as true as a mathematical equation can be, absolutely unfalsifiable inside its validity conditions – though a simple equation makes nothing more than an absolutely abstract statement. But understood as type (1), it entirely depends on the pervasiveness and validity conditions of the law, which are not necessarily robust and might change over time. A law can stop being true as we get more observations.

Fallacies

The main consequence of this confusion is to allow vicious arguments on the validity of laws. By leveraging the confusion to invoke the right perspective at the right moment, we could abuse rhetoric to make a false but convincing point. For instance, we could argue that the log-normal law is guaranteed to be true by virtue of the central limit theorem, then argue that it is empirically grounded because of its pervasiveness. This argumentative line is strong because it draws justification from both sides of the empiricism/theory divide. It makes it look like mathematical theory guarantees the law’s empirical reach, but this is precisely a complete fallacy. Indeed, the law guaranteed by the central limit theorem is just the log-normal distribution, not the statement of pervasiveness constitutive of the law understood as something empirical. Mathematical validity does not cause empirical validity, because conditions always apply. Yet the confusion around the concept of law allows a fallacy that shortcuts the gap.

This last example was just for the sake of argumentation, since we did not actually observe such fallacious arguments, and we did not even search for them – for the moment at least. We will return later to the questions of pervasiveness and universality, and we will pay extra attention to the usage of the term law in these situations, as there is a potential for abuse.

The potential for abuse can take the form of more generic and traditional fallacies like a simple circular logic such as:

  1. A law is by definition pervasive
  2. That pervasiveness is what makes it a law

As trivial as it sounds, laws are often invoked in Wikipedia while their status is rarely discussed outside their own article, and pervasiveness is not even systematically justified. Wikipedia is not exactly academic literature and has its limits; nevertheless, it seems quite dangerous to repeatedly assume that anything named “law” is, by virtue of that name, pervasive.

The Pareto distribution is the favorite law

Our corpus of analyzed articles is focused on Pareto and the power law, so it would only be natural for it to be biased toward over-representing the Pareto distribution. However, very generic articles such as Scientific law and Empirical statistical law also seem to have a clear preference for Pareto and Zipf.

The first, and only detailed, examples in the article Empirical statistical laws are Pareto and Zipf. Both are framed as laws, and the latter has its empirical pervasiveness highlighted.

“The Pareto principle is a popular example of such a “law”. It states that roughly 80% of the effects come from 20% of the causes, and is thusly also known as the 80/20 rule. […]

Zipf’s law, described as an “empirical statistical law” of linguistics, is another example. According to the “law”, given some dataset of text, the frequency of a word is inversely proportional to its frequency rank. […] However, what sets Zipf’s law as an “empirical statistical law” rather than just a theorem of linguistics is that it applies to phenomena outside of its field, too. For example, a ranked list of US metropolitan populations also follow Zipf’s law, and even forgetting follows Zipf’s law. This act of summarizing several natural data patterns with simple rules is a defining characteristic of these “empirical statistical laws”.”

Empirical statistical laws

In the article Scientific law, Zipf’s law is the only example.

“The term “scientific law” is traditionally associated with the natural sciences, though the social sciences also contain laws. An example of a scientific law in social sciences is Zipf’s law.”

Scientific law

There is only one mention of “law” in the article on probability distributions, and it is about “the prototypical power law distribution” (note the double qualification).

Here we only hypothesize that Zipf’s law and the Pareto principle, which are both sub-species of the power law, are associated with especially strong claims to empiricism.

Anecdote: a law whose name is explained by a law

As a lighter note to this section, we remark that not all laws are about a distribution. In the article on the Cauchy distribution – inception! – the name of the law is itself explained by a law. As the History section states:

“Functions with the form of the density function of the Cauchy distribution were studied by mathematicians in the 17th century, but in a different context and under the title of the witch of Agnesi. Despite its name, the first explicit analysis of the properties of the Cauchy distribution was published by the French mathematician Poisson in 1824, with Cauchy only becoming associated with it during an academic controversy in 1853. As such, the name of the distribution is a case of Stigler’s Law of Eponymy. Poisson noted that if the mean of observations following such a distribution were taken, the mean error did not converge to any finite number. As such, Laplace’s use of the Central Limit Theorem with such a distribution was inappropriate, as it assumed a finite mean and variance. Despite this, Poisson did not regard the issue as important, in contrast to Bienaymé, who was to engage Cauchy in a long dispute over the matter.”

Cauchy distribution


