Close reading Wikipedia from Pareto to Network Science, part 4

This is part 4: How network science bridges with statistics

In this part we focus on the relations between the concepts of network science and of statistics, as stated in the articles of my corpus (see part 1). These concepts are essentially:

  • The power law
  • Scale-free networks
  • Preferential attachment

As a post on my research blog, it is honest about what was actually done, but at the cost of including not-so-relevant material. I also follow the mantra of “release early, release often”, so please forgive the lack of finishing touches.

Findings

Nothing decisive, essentially an account of the conceptual situation.

The power law is, as expected, the articulation between network science and statistics. It belongs to both fields insofar as it is a statistical distribution and a defining characteristic of complex networks.

Beyond that concept, scale invariance is the active principle in both the power law and complex networks. As we will see in the next section, it is where network science bridges with phase transitions and universality.

How Network Science bridges with Statistics

Power Law & Scale-free Networks

In the article on the power law we found the most direct link between scale-free networks and the power law, though an implicit one. It is still a good starting point to understand the situation.

“[…] all power laws with a particular scaling exponent are equivalent up to constant factors, since each is simply a scaled version of the others.”

Power law, in the “Scale invariance” section

The fact that scaling a function’s input only causes a scaling of the function, as tautological as it sounds, is actually remarkable. It is precisely the definition of scale invariance, and this characterization turns out to be strictly equivalent to the formula of a power law. Hence the following conclusion: scale invariance is the power law. Though scale invariance and scale-free networks are not exactly the same thing, the link is still remarkably direct, considering that scale-free networks are named after scale invariance.
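To make the equivalence concrete, here is the derivation in one line (standard notation, not quoted from Wikipedia). For a power law f(x) = a x^{-k}, scaling the input by a constant c gives:

    f(cx) = a (cx)^{-k} = c^{-k} a x^{-k} = c^{-k} f(x)

The output is only rescaled by the constant factor c^{-k}: the function is “simply a scaled version” of itself at every scale, which is exactly the scale invariance stated in the quote. Conversely, under mild assumptions, any function that scales this way for all c must be a power law.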

Scale invariance is important because it is often invoked to frame and interpret the power law, as a consequence of a more fundamental phenomenon:

“The equivalence of power laws with a particular scaling exponent can have a deeper origin in the dynamical processes that generate the power-law relation.”

Power law

Scale invariance is never explicitly linked to scale-free networks. Each page is mentioned in the “see also” section of the other, but no sentence states the relation.

Scale-free networks (SFN) are defined by a distribution of node degrees following a power law. We have seen that there is a controversy about how closely it must follow it, which extends to a general debate about which empirical networks are actually scale-free (or log-normal…). Regardless, Wikipedia is explicit on the relation between SFN and the power law.

“A scale-free network is a network whose degree distribution follows a power law, at least asymptotically.”

Scale-free network

“A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. […] Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law.”

Social network

Interesting variation: in the following citation we note a reversal of the relation. The power law is not the defining characteristic, but a consequence of SFN (with preferential attachment).

“Chris Anderson argues that while quantities such as human height or IQ follow a normal distribution, in scale-free networks with preferential attachments, power law distributions are created”

Long tail
Relations between the concepts of power law and scale-free networks

The power law is also specifically tied to scale-free networks, and not to small-world networks. As we have seen, these two types of networks are sometimes used interchangeably, and subsumed into the more general category of “complex networks”. But concerning the power law, they are not equivalent: small-world networks are not required to follow a power law, and when they do, it is stated as an explicit condition, as in the following excerpt:

“In a small world network with a degree distribution following a power-law, deletion of a random node rarely causes a dramatic increase in mean-shortest path length (or a dramatic decrease in the clustering coefficient).”

Small-world network

Preferential attachment & power law

The power law and scale-free networks also often relate to a third concept: preferential attachment. In network science this concept describes a phenomenon where the most connected nodes tend to attract new connections, but it also has a non-network interpretation. Like the concept of power law, it has a number of variations such as “rich get richer” or “Chinese restaurant process” (see previous parts).

“Similarly, preferential attachment (intuitively, “the rich get richer” or “success breeds success”) that results in the Yule–Simon distribution has been shown to fit word frequency versus rank in language and population versus city rank better than Zipf’s law.”

Zipf’s law

In this excerpt, preferential attachment fits empirical distributions. Similarly, the article on “Preferential attachment” is not primarily about networks but about probabilities.

“A preferential attachment process is a stochastic urn process, meaning a process in which discrete units of wealth, usually called “balls”, are added in a random or partly random fashion to a set of objects or containers, usually called “urns”. A preferential attachment process is an urn process in which additional balls are added continuously to the system and are distributed among the urns as an increasing function of the number of balls the urns already have.”

Preferential attachment

But of course the concept still relates to the power law, as in the citation below, stating that preferential attachment generates a power law. Note the importance of the tail in this correspondence.

“the preferential attachment process generates a “long-tailed” distribution following a Pareto distribution or power law in its tail. This is the primary reason for the historical interest in preferential attachment: the species distribution and many other phenomena are observed empirically to follow power laws and the preferential attachment process is a leading candidate mechanism to explain this behavior.”

Preferential attachment
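To make the mechanism tangible, here is a minimal simulation of such an urn process (my own sketch with arbitrary parameters, not code from any of the articles): each new ball either opens a new urn or joins an existing one with probability proportional to its current content.

    import random
    from collections import Counter

    def urn_process(n_balls=200_000, p_new=0.1, seed=42):
        """Urn process with preferential attachment: each new ball either
        starts a new urn (probability p_new) or joins an existing urn
        with probability proportional to its current number of balls."""
        rng = random.Random(seed)
        balls = [0]  # flat list of urn indices: sampling uniformly from
                     # it is proportional to urn sizes ("rich get richer")
        n_urns = 1
        for _ in range(n_balls):
            if rng.random() < p_new:
                balls.append(n_urns)  # a new urn gets its first ball
                n_urns += 1
            else:
                balls.append(rng.choice(balls))
        return Counter(balls)

    sizes = sorted(urn_process().values(), reverse=True)
    # A few huge urns and a long tail of small ones; plotting rank vs.
    # size on log-log axes gives an approximately straight line.
    print(sizes[:10])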

In another article we find a probabilistic version of the same statement. It explicitly mentions the usefulness of preferential attachment as a “model”.

“In statistics, the phrase “the rich get richer” is often used as an informal description of the behavior of Chinese restaurant processes and other preferential attachment processes, where the probability of the next outcome in a series taking on a particular value is proportional to the number of outcomes already having that particular value. This is useful for modeling many real-world processes that are akin to “popularity contests”, where the popularity of a particular choice causes new participants to adopt the same choice (which can lead to the outsized influence of the first few participants).”

The rich get richer and the poor get poorer
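As a toy version of that description (my own sketch, not from the article), here is the Chinese restaurant process in a few lines: each newcomer picks a table proportionally to its popularity, and the first tables tend to end up with outsized influence, as the quote puts it.

    import random

    def chinese_restaurant(n_customers=10_000, alpha=1.0, seed=42):
        """Each new customer opens a new table with probability
        alpha / (n + alpha), or joins an existing table with probability
        proportional to the number of people already seated there."""
        rng = random.Random(seed)
        tables = []  # number of customers at each table
        for n in range(n_customers):
            if rng.random() < alpha / (n + alpha):
                tables.append(1)  # open a new table
            else:
                r = rng.uniform(0, n)  # pick a seated customer at random...
                acc = 0
                for t, size in enumerate(tables):
                    acc += size
                    if r <= acc:
                        tables[t] += 1  # ...and join their table
                        break
        return tables

    sizes = sorted(chinese_restaurant(), reverse=True)
    print(sizes[:10])  # the earliest tables end up with outsized popularity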

Preferential attachment & scale-free networks

A.-L. Barabási and R. Albert have famously championed preferential attachment as a model for generating (and explaining) scale-free networks. This model is named after them and has its own Wikipedia page.

“The Barabási–Albert model is one of several proposed models that generate scale-free networks. It incorporates two important general concepts: growth and preferential attachment. Both growth and preferential attachment exist widely in real networks.”

Barabási–Albert model
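As an aside, generating such a network takes a couple of lines; here is a minimal sketch using the networkx Python library (my illustration, not part of the Wikipedia material):

    import networkx as nx
    from collections import Counter

    # Generate a Barabási–Albert network: 10,000 nodes, each new node
    # attaching to m=3 existing nodes, chosen by preferential attachment.
    G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)

    # Tabulate the degree distribution; on log-log axes, its tail is
    # approximately a straight line, i.e. a power law.
    degree_counts = Counter(d for _, d in G.degree())
    for degree, count in sorted(degree_counts.items())[:10]:
        print(degree, count)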

This model is key to network science, and we find other mentions of preferential attachment as a model, with or without the names of Barabási and Albert.

“Many networks have been reported to be scale-free […]. Preferential attachment and the fitness model have been proposed as mechanisms to explain conjectured power law degree distributions in real networks.”

Scale-free network

As we have seen previously, assortativity can carry the meaning of preferential attachment. Presumably in that sense, it is mentioned as a defining feature of complex networks, though the connection is not detailed.

“Complex networks: Most larger social networks display features of social complexity […]. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure.”

Social network
Relations between the concepts of preferential attachment and scale-free networks

Overview of the bridge

Relations between the key concepts bridging statistics and network science

Stop stretching your T-SNE, UMAP, and networks!

(source)

Did you know that T-SNE and UMAP have deep connections with visual network analysis? All those visualization techniques have something in common that also sets them radically apart from most traditional techniques used in science, such as line charts, bar charts and scatter plots. The most trivial consequence is that stretching them damages their interpretability.

The problem arises when tools do the stretching without the user noticing or being aware of the issue. It explains the damaged visualizations we see in some papers. Exposing that issue is my motivation for writing this piece.

T-SNE and UMAP translate multivariate data into an implicit network that is then projected onto a 2D space (and sometimes 3D). The strategy used for this projection has many similarities with the force-driven placement algorithms used in network visualization: it is iterative, it tries to gather connected nodes, etc. Without going too far down the rabbit hole, let us mention one of the key properties of such an approach: the produced placement is isotropic, which means that the space is the same in all directions. You must not stretch the resulting image because it breaks its isotropy. If you stretch the visualization, the image cannot be interpreted properly because it introduces a bias – in a surprisingly literal sense of the term. That was my main point. Now I will show what it looks like and explain why it matters.
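Here is a toy computation (my own, assuming nothing beyond numpy) showing the bias in that literal sense: stretching one axis changes which point is the nearest neighbor of which, so any distance-based reading becomes wrong.

    import numpy as np

    # Three points in an isotropic 2D embedding (e.g. a T-SNE output).
    a = np.array([0.0, 0.0])
    b = np.array([0.0, 1.0])
    c = np.array([1.2, 0.0])

    def dist(p, q):
        return float(np.linalg.norm(p - q))

    print(dist(a, b) < dist(a, c))  # True: b is the nearest neighbor of a

    # Stretch the y axis by a factor of 2, as a careless tool might.
    S = np.diag([1.0, 2.0])
    a2, b2, c2 = a @ S, b @ S, c @ S
    print(dist(a2, b2) < dist(a2, c2))  # False: c now looks nearer than b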

The most classic charts are stretchable

“Statistical chart” in Google images

In all kinds of scientific papers we use line charts, bar charts, scatter plots… All those charts (pie charts and parametric curves excluded) share important principles:

  • Each axis represents something known, usually a quantity but sometimes an ordinal or categorical series (e.g. in a bar chart)
  • Axes are independent, which allows giving each its own graphical scale

These classic charts can be “stretched” along each of their axes precisely because the axes are independent. More exactly, there is no natural scale choice, so different arbitrary decisions naturally lead to stretched variations of the same chart. Though this kind of stretching slightly impacts the reading, the rules for interpretation remain the same.

Different scale choices for the same line chart

Graphical stretching

Basic graphical features such as image stretching are widely available in general-public software such as Word / Excel / PowerPoint, and even in web technology. This leads to an unreasonable amount of the infamous “PowerPoint stretching”, as I am tempted to call it. Do not do it, ever.

Mona Lisa stretched: we immediately feel the awkwardness

We live in an isotropic environment: the space of our everyday life is the same in all directions. If you are 2 meters tall when you stand, then you are also 2 meters tall when you lie in bed. In simpler words, you can rotate stuff. So if you stretch an image representing the real world, it immediately feels awkward.

Graphical stretching of a classic chart

People sometimes stretch their visualizations so that they fit in a given layout. For instance in the image below on the right, the chart has been graphically reduced horizontally. Note how, contrary to scale-stretching, this affects the visual aspect of all graphical elements such as line thickness, text, etc.

Left: non-stretched chart. Right: graphically stretched chart

This stretching is bad, but it is not that bad insofar as it still respects the axes. It loosely looks like a rescaled version and we can still more or less interpret it.

Graphical stretching of an isotropic visualization

Isotropic visualizations lose their interpretive properties when we stretch them, exactly like the Mona Lisa. You may feel the same awkwardness as before. I will explain precisely where it comes from.

T-SNE visualization, original and stretched versions
Network visualization, original and stretched versions

That stretching is bad. I am sure you would not do it. Unfortunately, that is not where it ends.

Rescaling of isotropic visualizations

There is a perverse practice that I want to debunk: some software allows the scale-stretching of isotropic visualizations. It is perverse because it is less obvious than graphical stretching. The bias is just as bad, but being less visible, it is less likely to be taken into account. It is also perverse because some tools do it by default, which for various reasons makes people incorporate the bad practice into their work.

The scale-stretching of a visualization, contrary to graphical stretching, does not stretch the semiotic elements such as text, lines, legend… If your data points are represented as rounds, these rounds will not be stretched into ellipses, and you might remain confident that the directions of space are the same. You have no strong clue that the isotropy might be broken.

Rescaling is less visible than graphical stretching

Let me show a few examples I randomly stumbled upon. I am not doing so to shame the authors, but to give an idea of what it looks like in the real world. The image below for instance is a typical case: the data points are nice rounds but the general shape, which should be round, looks like an ellipse. I will explain later why it should be round.

Scale-stretched T-SNE visualization (source)

The next example has both T-SNE and UMAP. The scales and the background grid show evidence that isotropy is not respected: the grid should display squares, but instead it has rectangles. The figures on the scales confirm the visual analysis. Once again the T-SNE should have the general shape of a round, but instead looks like an ellipse.

Scale-stretched T-SNE and UMAP visualizations (source)

Below I picked a less obvious case, slightly stretched horizontally. The background grid has rectangles instead of squares, which shows the breaking of isotropy. Because it is only slightly stretched, the bias is easier to miss.

Scale-stretched T-SNE visualization (source)

Last but not least, a screenshot of Pajek, the famous network visualization software that inspired all the others, including Gephi. It rescales the visualization to fit the proportions of the active window, breaking isotropy most of the time.

A screenshot of Pajek (source)

Pajek is a marginal problem since it is not much used for visualization. The biggest problem happens inside frameworks such as R, d3.js or Matplotlib, because of a regrettable confusion with scatter plots.

The scatter plot confusion

You have a data set where data points have (x, y) spatial coordinates: how would you represent it? Scatter plot functions offer a solution to this exact problem, so I would not blame you for using such features. Unfortunately that obvious solution is also the wrong one when it comes to UMAP, T-SNE and networks. Networks tend to have their own separate pipeline, so the issue is less critical for them than for UMAP and T-SNE. Indeed those have no edges and really look like any other multivariate data set. You have to be aware that their coordinates correspond to an isotropic space, unlike those of other similar techniques. You have to know that bags of dots come in two very different flavors, isotropic and non-isotropic.

D3.js scatter plot from tutorial (source)

The typical use of a scatter plot is to observe how two quantitative variables change together, in terms of correlation or not. Often we have no reason to presume they are linked and we just want to check whether it is the case. For instance we want to visualize the age (in years) and the income (in dollars per year) of a set of persons. Since you have no way to convert one unit into the other, the space is not isotropic. You can scale the axes however you want.

Benzécri’s famous technique of (multiple) correspondence analysis (MCA), popularized by the French sociologist Pierre Bourdieu, has the exact same input as T-SNE or UMAP, a multivariate data set, and a seemingly similar output: it gives 2D coordinates to the data points. However in this technique the axes are central to the interpretation. In MCA you would precisely look at the distribution along axis 1 first, then along axis 2, and reason in terms of distributions along combinations of axes. Axes are a global thing: they apply to each and every data point. This is how the internal mechanics of MCA work.

With T-SNE and UMAP, as with networks, it is absolutely wrong to reason in terms of axes, because the algorithm’s internal mechanics only work on local features such as proximity and distance. The resulting image can be rotated because that leaves local features unchanged. Axes mean nothing, and only the comparisons between distances are meaningful, which requires an isotropic space.
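Concretely, most plotting libraries can enforce isotropy if you ask for it. A minimal Matplotlib sketch, where coords is a stand-in for the 2D output of T-SNE, UMAP or a network layout:

    import numpy as np
    import matplotlib.pyplot as plt

    # Stand-in for the 2D output of T-SNE, UMAP or a network layout.
    coords = np.random.default_rng(42).normal(size=(500, 2))

    fig, ax = plt.subplots()
    ax.scatter(coords[:, 0], coords[:, 1], s=10)

    # The crucial line: force one data unit to have the same length on
    # both axes, whatever the proportions of the figure or window.
    ax.set_aspect("equal")

    plt.show()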

The confusing resemblance of UMAP/T-SNE with MCA or other similar techniques is unfortunate, because they are almost entirely opposed on an epistemic level. What is mandatory in one case is forbidden in the other one, and vice versa. The situation is exactly the same between networks and line charts, for example. But those look very different, so it seems more natural that they require different kinds of interpretations. UMAP and MCA both look like scatter plots, but only MCA is interpreted that way.

Allowed and forbidden transformations of different charts

The problem with stretching isotropic data vis

Let us focus on networks. If you use Force Atlas 2 in Gephi, a force-driven placement algorithm, you will obtain a certain distribution of nodes in an isotropic space. It does not mean that the distribution will be homogeneous, however. The fact that a shape is oblong or very round means something. For instance, a clique (everyone is connected to everyone) makes a perfect round.

A clique in Gephi makes a perfect round

A stable (the opposite of a clique: a set of entirely disconnected nodes) also makes a round.

A stable in Gephi also makes a perfect round

What do a clique and a stable have in common? All nodes are equivalent. The circular shape is produced by the perfect balance of nodes in the network.

You might wonder why the disconnected nodes of a stable do not repel each other infinitely far. This is what would happen in a simple layout algorithm with only attractive and repulsive forces, but we generally use an additional force to keep the spreading of nodes in check. We do that because we know that we have a limited space to visualize. UMAP and T-SNE have similar strategies, and you can see it very clearly in most examples.
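You can verify the round shape of a clique with any force-driven layout. A quick sketch with networkx’s spring layout (a different algorithm than Force Atlas 2, but based on the same attraction/repulsion principle):

    import networkx as nx
    import numpy as np

    # Lay out a clique: 30 nodes, everyone connected to everyone.
    G = nx.complete_graph(30)
    pos = nx.spring_layout(G, seed=42)  # force-driven placement
    xy = np.array(list(pos.values()))

    # All nodes being equivalent, the cloud spreads about equally in
    # every direction: its two principal axes have similar variance.
    eigvals = np.linalg.eigvalsh(np.cov(xy.T))
    print(eigvals.max() / eigvals.min())  # close to 1 means a round shape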

When the shape is compact but not exactly round, it often reflects disparities in the structure. Take a look at the following network, the neural network of a small worm known as C. elegans. It also makes a very compact shape, one that some would call a “hairball”, but it is not a round either:

C. elegans neural network. Layout: Force Atlas 2

The areas on the borders are “poles”, or connected clusters if you prefer. The fact that they do not completely separate makes them hard to identify visually, but there is an underlying structure that we can reveal with an algorithm like modularity clustering:


C. Elegans with nodes colored according to modularity clustering (Louvain)

There are clusters in the structural sense even if the visual clue is weak. Visual network analysis is a large topic, but the important point here is just that oblong shapes are a signature of poles or connected clusters. This is the problem with stretching networks: it can create oblong shapes out of nothing and add many anti-patterns to the visualization, inducing a huge bias.

This is especially important because it turns out that in many situations, local homogeneous structures make round shapes. Take a look for instance at Martin Grandjean’s visualization of the air traffic network.

Air traffic network (source)

Many local clusters or sub-clusters look like round shapes, and those that do not (like Europe) probably hide poles. Stretching the image is especially bad for these kinds of networks.

T-SNE has a very similar behavior in the case of well-separated sub-clusters, for instance on the classic example of classifying handwritten digits:

T-SNE also tends to make round shapes for homogeneous clusters.
Source: L. van der Maaten

UMAP is the same. In the following example we even see how the oblong shapes embed poles or sub-clusters, exactly like a network does:

In UMAP too, oblong shapes hide poles or sub-clusters.
Source: L. McInnes

So… stop stretching your T-SNE, UMAP, and networks! It will help your interpretations.

A standard for presenting network visualizations

I just attended an exam on controversy mapping at Aalborg University, where among other things students interpreted Gephi visualizations of different kinds (pic related). There were networks of Wikipedia pages on Parenting. The students were quite good despite common issues with how to talk about networks. The exercise is hard, and we do not expect most students to master it within the time of the course (in this case 3 weeks full time). It is nevertheless true that, in my view, there is a standard way to present a network visualization. I realized that it would be useful to share my educated opinion on how you should present your network.

Let me first address two possible misunderstandings.

  1. It is not about your method. There is an infinite number of valid research designs that involve network visualization. I am not the fun police. I will not discuss which are good or bad.
  2. It is not about evaluating the layout quality. That is a very valid topic, I have a lot to say about it, and it is something crucial that comes to mind when reading something like “the gold standard for network visualization”. However it is not what I mean here.

What I want to address in this post is which aspects you should cover, in which order, and more importantly how you should cover them. If you ever felt lost in an argumentative maze while presenting your network, stay with me.

But before I start suggesting what you should say and how, I need to introduce what I consider the four key layers of any discourse on a network visualization. I will take the time to detail them later; for the moment I will just mention their existence with the picture below. If you are familiar with Bruno Latour’s work, you may recognize a chain of reference. If not, you will understand along the way: the key thing is to acknowledge the translations between the layers.

Layers to invoke when presenting a network visualization

What you should say

We assume the classic situation: you are presenting network maps made by yourself. You know all there is to know about the process, from harvesting to refining and to visualizing. You have some expertise on the topic. Your audience starts with a very open question such as “Can you tell us what this is about?”.

1. State the purpose of the work

State the topic first, your research questions if you have some, and/or what you tried to achieve.

It can be very short but it is still important.

We never visualize a network for the sake of visualizing a network. There is always an underlying motive. Interpreting a network is never simple, and you and your audience are at risk of getting lost in the process. Stating where you are headed helps.

2. Describe what the visualization translates

Explain concisely the process that has led to the visualization. It is a chain with many steps, which requires clarity. Use the proper terms and make explicit how each step leads to the next.

There are two valid strategies to narrate this, depending on the situation:

  1. Describe the process in a pseudo chronological order, from harvesting to visualization.
  2. Start with the physical object (the printed sheet, the screen…) and go upstream towards its origin.

Pick whatever makes you comfortable. You might want to leverage this occasion to explain the process, or you may have done it before and want to get straight to the point. In both cases there are a number of elements that you must provide.

You must explain the key steps of the process and use the proper terms to talk about each of them. Here I will use strategy number 2, i.e. narrate the steps starting from the physical object and going upstream through the process. There would be variations depending on your research design; I will just assume the common situation described in most Gephi tutorials.

In a nutshell, each step of the process is one of the four layers I introduced previously. Each layer translates the layer just below, and the goal is to make each translation explicit.

Describe how the image translates the network

The image or map is the physical object that you empirically offer to your audience to understand your work (along with your explanations, of course). You must explain where everything visible in the image comes from. In a typical scenario this would be for example:

  • The image has been produced by visualizing a network.
  • The rounds represent the nodes. All nodes have been represented.
  • The lines represent the edges. All edges have also been represented.
  • The texts are the nodes’ labels; we only displayed the most important ones.
  • The size of each round represents the degree of the node.
  • The color of each round represents the category of the node.
  • The thickness of a line represents the weight of the edge.
  • The color of the lines has been set to a light grey to avoid too much visual clutter.
  • The placement of the nodes has been decided by an algorithm analyzing their connections, not considering other attributes like their category.
  • The legend specifies the color coding of node categories and the scale of edge thickness.

Explain how the layout works

The layout algorithm must be explained. In the case of Force Atlas 2 and many others, the important points are:

  • The layout places the nodes only as a function of their links; it ignores all attributes.
  • It works iteratively by having all nodes repel each other and connected nodes attract each other. By design it converges to an equilibrium that depends on the random starting positions (see the toy sketch after this list).
  • The resulting projection is said to be isotropic: it has no specific axes and could be rotated or flipped without losing its features. It is supposed to be interpreted in terms of relative distances.
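Here is the toy sketch announced above: a deliberately naive force-driven loop in Python. It is not the actual Force Atlas 2 implementation (which is much more refined), just the attraction/repulsion principle it shares with similar layout algorithms; all constants are arbitrary.

    import numpy as np

    def naive_force_layout(edges, n_nodes, iterations=500, seed=42):
        """Toy force-driven layout: all node pairs repel each other,
        connected nodes attract each other; positions drift towards an
        equilibrium that depends on the random starting positions."""
        rng = np.random.default_rng(seed)
        pos = rng.normal(size=(n_nodes, 2))
        for _ in range(iterations):
            disp = np.zeros_like(pos)
            for i in range(n_nodes):
                # Repulsion: push node i away from every other node,
                # with a force decreasing with distance.
                delta = pos[i] - pos
                d2 = (delta ** 2).sum(axis=1) + 1e-9
                disp[i] += (delta / d2[:, None]).sum(axis=0)
            for i, j in edges:
                # Attraction: pull connected nodes together.
                delta = pos[j] - pos[i]
                disp[i] += 0.05 * delta
                disp[j] -= 0.05 * delta
            pos += 0.01 * disp
        return pos

    # A tiny example: a cycle of 4 nodes.
    print(naive_force_layout([(0, 1), (1, 2), (2, 3), (3, 0)], n_nodes=4))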

If such settings are used, they also deserve to be mentioned:

  • Gravity: an additional force limits the spreading of the nodes, which brings a minor bias but allows optimizing space during visualization.
  • Prevent overlap: the placement of nodes has been adjusted so that they do not overlap, bringing a minor bias but optimizing readability during visualization.

Note: I do not think it is worth formalizing an additional layer, here a mathematical projection to a 2D space, even though it is what we actually do.

Describe how the network translates source data

The network or graph is the list of nodes and the list of edges used as a data structure in a software tool like Gephi. The network is translated visually by the image, but it is not the image. Similarly, it often translates less refined data, but it is not that data.

You must explain what the nodes and the edges represent. In other words, you must describe how they relate to the raw data (see below). For instance (a toy implementation of this example follows the list):

  • Nodes represent words mentioned at least 10 times, excluding a list of stop-words.
  • Edges represent co-occurrence, that is when two words appear in the same document.
  • The weight of edges represents in how many documents the words appear together.
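Here is the toy implementation announced above, a minimal sketch of how such a co-occurrence network could be built. The documents and stop-word list are invented stand-ins, and the threshold is lowered to 2 to keep the example readable:

    from collections import Counter
    from itertools import combinations

    documents = [
        "parenting style and child development",
        "child development and education",
        "education policy and parenting",
    ]
    stop_words = {"and"}
    min_count = 2  # the example above uses 10; 2 keeps this toy small

    # Nodes: words mentioned at least min_count times, stop-words excluded.
    counts = Counter(w for doc in documents for w in doc.split())
    nodes = {w for w, c in counts.items()
             if c >= min_count and w not in stop_words}

    # Edges: co-occurrence, weighted by the number of documents in which
    # the two words appear together.
    edges = Counter()
    for doc in documents:
        present = sorted(set(doc.split()) & nodes)
        for pair in combinations(present, 2):
            edges[pair] += 1

    print(sorted(nodes))
    print(dict(edges))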

Explain how source data refer to the empirical world

You must explain where the source data come from and how they were picked. The choice of data to study often stems from an interest in something precise in the empirical world. It might be parenthood, #blacklivesmatter, Nordic design… Whatever your topic or research questions are, they provide an interpretive framing of the source data, for instance because certain elements are used as proxies to obtain information on your original object of interest.

It might be, for instance, mentioning that you were interested in a topic involving gender issues, but for practical reasons it had to be specific enough, which led you to pick the Parenting topic already delineated in Wikipedia.

3. Interpret your network map

Now that your audience knows what all of this is about, you can discuss the content of your network map. Your interpretation will consist of a number of statements that rely first on the image and then traverse the layers down to the empirical world, if possible.

There are many ways to organize your interpretation. You can refer to the suggestions Tommaso Venturini, Debora Pereira and I have proposed for visual network analysis. I will not open that discussion here. The only important thing is the gist of any argument of that kind: it exposes features of the network that are visible in the image, and argues that these features originate in the source data in a way that allows saying something about the empirical world. This interpretative path is long, I know. Sadly, such is the situation you are facing. Science is hard.

You must always be clear about the translations when you make your points. This is the one and only trick. Succeed at this, and you will master network interpretation. Making a good point is all about finding your way through the layers. It is hard though. I will dedicate the rest of this post to breaking down that question.

How you should say it

Pay attention to the vocabulary

The bread and butter of your arguments is the logical connections between the many elements you will invoke. There is so much to say that I will not even try. However it always starts with using the proper vocabulary. This question is critical here because, as we will see, using the proper terms is your best defense against treacherous argumentative lines that would lead you into a maze of fallacies.

Each layer has its specific vocabulary; let us start by reviewing it.

image / map

The following vocabulary is apt for describing the image:

  • Circle, shape, line, text
  • Colors, light, dark
  • Big, small
  • Close, far
  • Busy/dense/full/occupied areas, holes, blank spaces
  • Center, periphery (of the image, of a zone…)

DO NOT USE to describe the image itself: node, edge, hyperlink, web page…

network / graph

The following vocabulary is apt for describing the network:

  • Node, vertex
  • Edge, link, connection
  • Node/edge weight, attribute, modality of an attribute
  • Degree, indegree, outdegree, centrality metrics
  • Density (of a set of nodes)
  • Neighbors, leaves (nodes with 1 neighbor), orphans (0 neighbors)
  • Structural equivalence (having the same neighbors)
  • Geodesic distance (length of the shortest path)
  • Clusters (as the outcome of a clustering algorithm)
  • Modularity (of a clustering)

DO NOT USE to describe the network: being close or far, being grouped…

You will often want to do simple counting, like saying that a set of nodes is big, or small, or bigger than another… A set of nodes can be a cluster, nodes where attribute X takes modality Y, nodes of degree X or more, neighbors of X…

Source data

This layer is not always a single step in the process, and it can take many shapes. The important point is that the data have always been transformed: they have been cleaned, filtered, refined… There are so many possibilities that I cannot provide an overview. I will just cherry-pick a few examples.

If your raw data is Wikipedia pages, the following vocabulary applies:

  • Web page
  • Hyperlink, hypertext link
  • In-text links, “see also” links

If your raw data was a set of documents in a co-occurrence analysis:

  • Text, document
  • Paragraph, expression, n-gram, word
  • Co-occurrence
  • Term frequency

Your data might come from a patents database, from Twitter or Facebook, from qualitative sourcing… Each of these cases has its own types of objects, relations and vocabulary.

DO NOT USE to describe the raw data: node, edge, being connected, being close, being grouped…

Empirical world

The vocabulary you use when you refer to the empirical world can be:

  • People, institutions, actors, …
  • Books, projects, ideas, …
  • Topics, academic fields, interests, …
  • Friendship, acquaintances, affinities, …
  • Groups of people, community, culture, …
  • Notoriety, influence, authority, relevance, …

Beware of metonymies

In practice you want to say “the size of the nodes” and not “the size of the rounds”. Fine, but you are playing with fire. If you master the exercise, you are able to use all sorts of shortcuts because you know the limits. A naïve listener might be under the impression that most of the concepts are interchangeable and that you can indifferently say line, edge, link or hyperlink. That is very wrong. The issues are real, and you might trick yourself into fallacious arguments and circular logic.

Be clear about what is representing and what is represented

The line not to cross is made clear by looking at how we understand a metonymy, a figure of speech where we refer to something by using a different but closely related concept. For instance “swearing loyalty to the crown” refers to the sovereign and not to the physical object, of course. We are able to get the right meaning because it would make no sense to swear loyalty to a literal crown. The context tells whether the word is metaphorical or literal, whether there is a metonymy or not. The same applies to our concepts. Insofar as nodes do not have a size (they are abstract network entities), it is clear that “node sizes” refers to “the sizes of the shapes representing nodes”. In that sense the shortcut is valid, but it remains tricky because we use the word node to actually refer to shapes, and this dangerous shift is how accidents happen. The line not to cross is where metonymies become ambiguous.

How you trap yourself into the maze of circular logic

First you say “those nodes are close”, which can only be understood as a metonymy for “those shapes representing nodes are close”; then you say “so they form a cluster”, and you are already stepping on the forbidden limit. As a teacher I will often ask to clarify the ambiguity, for instance: “Can you explain why they form a cluster?” You understand that node placement results from the layout algorithm, which is indeed the answer I expect. However at this point the confusion can make you happily walk into circular logic by answering something like: “It is a cluster because the layout algorithm placed the nodes close to each other”. You might well explain how the algorithm works, but it does not matter, it is too late. You have trapped yourself into a fallacy – can you see why?

The argument is circular because it states that close nodes make clusters and clusters make close nodes. Unfortunately, being aware of the circularity does not really help. From my experience I know that you only realize that you are lost when it is far too late – if you ever do. Avoiding the fallacy is not about recognizing the forbidden zone, it is about not stepping into the maze. It is about having a practice that never puts you at risk.

What is the safe practice? First of all, it is to use the proper vocabulary. But I cannot win the fight against human nature and make you stop using shortcuts. So the safe practice is about using protections: always check the layer where your argument is valid. The entry to the maze of circular logic is where confusing metonymies give birth to arguments with a layer mismatch. But layer mismatches can also lead to less dramatic forms of bad arguments that can be very detrimental to you despite their low profile. We will see how checking layers helps to debunk those.

Bad arguments

There are different degrees of bad arguments, corresponding to the different ways you can fail to circulate the chain of reference from one layer to the next.

Tautology: stuck in a layer

The worst type of argument is when there is no argument: a simple description posing as a point. The lipstick of rhetoric on the pig of triviality. For instance: “The pro-life cluster separates from the pro-choice cluster by sustaining a noticeable distance”. The argument is circular: the clusters are distant because they are distant. I diagnose this bad argument as a complete failure to circulate out of the top two layers, the image and the network.

You can debunk such a statement by checking the layers. Making a point involves multiple steps where features of one layer are related to the next. A proper argument would look something like this:

  • The pro-life and pro-choice nodes appear distant in the image.
  • They are distant because they have only few connections. That is how the layout algorithm works, but we can also see that there are fewer edges between the clusters than inside each one.
  • The larger number of edges inside clusters shows that actors tend to connect to those who are similar to them, and to ignore those who are different.
  • This behavior (homophily) reveals an opposition between the two communities.

Naturalization: jumping to conclusions

A bad, though less severe, type of argument is to jump over translations, making an incomplete point. I call this “naturalization” because jumping to conclusions often uses the rhetoric of evidence, as if the visualization were a natural manifestation of the empirical world. For instance: “the pro-choice are grouped together, showing they share common values”. The conclusion is sometimes true, but the argumentation is poor. As a teacher I would immediately ask: “Can you explain why you think a group of nodes implies the sharing of common values?”, giving you a chance to show your ability to circulate between the layers – or making you realize that you are lost in the maze of argumentation. Some students just use shortcuts, and when asked to unpack their reasoning, they can do so.

Once again the safe practice is to check the layers involved. In this example, proximity belongs to the image layer (number 1). Sharing common values belongs to the empirical world (number 4). You must progress from layer to layer without jumping over any. Respecting the vocabulary helps to not confuse the layers:

  • The proximity of the pro-choice in the visualization…
  • …comes from the large number of edges between the nodes…
  • …which reveals that these actors know and link each other on the web.
  • We hypothesize that it might be because they share common values.

In this example the last point is not very convincing, and probably just false. The form is valid but not the content. That was just an example but it remains true that the last translation, from the source data to the empirical world, is the most difficult. Unfortunately, it is also the most important.

Run the last mile

My last piece of advice is to always run the last mile: your arguments must lead to conclusions about the empirical world, even if only in a hypothetical way. The finish line is the real world. The reason why you analyze data is that you want to understand something about the world, and you must demonstrate your ability to do so.

Not running the last mile is the most tragic pitfall because it happens only to good students, those who went far but could not defeat the last boss. Bad argumentation does not lead you to the last mile, but you can have all your arguments valid and still not reach the finish line.

Not running the last mile produces analytically valid statements, but only about the data. For instance, mentioning just the conclusion and not the argumentation:

  • …hence the governmental websites occupy the central positions in the NGOs corpus.
  • …all NGOs are citing each other on the web, except humanitarian associations.
  • …the websites of the radical left are well connected within the left-wing web sphere but do not form a cluster, being poorly connected to each other.

Those claims may be technically valid, but they do not explain how they relate to the empirical world. The kind of argument that I expect goes a little further, even if only in the form of hypotheses, for instance:

  • …possibly because many NGOs depend on governmental funding, which often require to link to the funding institutions.
  • …because humanitarian associations are in competition for donations, which may lead them to not cite their competitors.
  • …despite being gathered under the common label of “radical left”, these actors do not acknowledge each other and do not form a community, possibly because of ideological divergences.


Close reading Wikipedia from Pareto to Network Science, part 3

This is part 3: Statistical law or distribution?

In this part we focus on a nice issue. As we have seen previously, there is a Wikipedia article for the Pareto distribution. There is also a page for Pareto’s law, which tells you that it refers either to the Pareto distribution or to the Pareto principle. In this case, like many others, a distribution is also called a law, but the two concepts are not exactly the same. Can we establish the nuance between what is called a distribution and what is called a law? This leads to two sub-questions: What does Wikipedia state on that matter? And what is the Wikipedia practice when it comes to naming a thing a law or a distribution? Note: here we only study that issue where it relates to network science and the power law, since the corpus of analyzed articles is defined this way (see part 1 for the method).

As a post on my research blog, it exposes more than a typical paper would: honest about what is actually done, at the cost of not-so-relevant material. I also follow the open source software mantra “release early, release often”: the style might be raw.

Findings

There is a confusion between law and distribution, and it is entirely the fault of the concept of statistical law. A distribution is just an equation. But law can refer to two clearly different things:

  • The statement that a given rule or distribution is empirically pervasive (with the power to describe) and/or mathematically proven (with the power to explain). Wikipedia explicitly states such a definition.
  • A given statistical distribution. This usage of law as a synonym for distribution is implicit but widely used in Wikipedia.

These two usages coexist in many articles, and we also find multiple ways in which the two concepts of law and distribution are presented as equivalent. The situation results in a confusion about the empirical grounding of statistical laws. The confusion creates a fallacy where mathematical validity may seem to cause and justify empirical validity. The claims to pervasiveness in particular deserve some attention. For most laws, the pervasiveness of observations is a defining characteristic without which they would not be called laws in the first place. But pervasiveness is not established by the sole virtue of something being called a “law”. We will investigate these questions in an upcoming part.

The overlapping definitions of distribution and law

Distribution has a precise definition, stated in its dedicated article. It is clear and operational, and is not challenged in any article of our corpus.

“In probability theory and statistics, a probability distribution is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment.”

Probability distribution

There are two relevant articles about laws, Empirical statistical law and Scientific law. Without any surprise, statistical laws are presented as a special case of scientific laws in the field of statistics. Remarkably, both articles acknowledge multiple meanings for the concept.

“The term law has diverse usage in many cases.”

Scientific law

Despite a reference definition that seems to frame the concept of law as a kind of empirical model (see citation below), the usage does not follow that definition because as we will see, law can also mean distribution.

“The laws of science, also called scientific laws or scientific principles, are statements that describe or predict a range of natural phenomena. Each scientific law is a statement based on repeated experimental observations that describes some aspect of the Universe.”

Scientific law

If you asked a metaphorical Wikipedia whether a statistical law and a distribution are the same thing, you would not get a clear-cut answer. Not only is Wikipedia ambivalent, sometimes pretending they are the same thing, sometimes not, but the confusion is even official. Indeed Wikipedia acknowledges that in the academic literature, different definitions of statistical law coexist, some of which just mean distribution, some of which do not.

The article on empirical statistical laws provides explanations on those multiple layers of meaning.

“An empirical statistical law or (in popular terminology) a law of statistics represents a type of behaviour that has been found across a number of datasets and, indeed, across a range of types of data sets.”

Empirical statistical laws

In other words, a law can be
(1) a pervasive behavior
…except that:

“Many of these observances have been formulated and proved as statistical or probabilistic theorems and the term “law” has been carried over to these theorems.”

Empirical statistical laws

So a law can also be
(2) a pervasive behavior explained by a mathematical theorem
…but wait, there is more:

“There are other statistical and probabilistic theorems that also have “law” as a part of their names that have not obviously derived from empirical observations. However, both types of “law” may be considered instances of a scientific law in the field of statistics.”

Empirical statistical laws

Finally, a law can also be
(3) a mathematical theorem

In a nutshell, it seems that there are two different things that can give birth to a law: empirical observation, and/or mathematical theory. Each is sufficient to justify the name, and neither is necessary. This is according to what Wikipedia explicitly states on the matter.

Wikipedia states that a statistical law can be three things

As we will see the situation is a bit more complicated, but as a starting point we can already clarify that:

  1. Multiple meanings of law coexist
  2. When a law is mentioned, we should elucidate whether it is for empirical reasons, for theoretical reasons, or for both.

With this picture in mind, aware of the lack of a clear divide between law and distribution, we can focus on the usage and justification of these concepts.

Statements articulating law and distribution

Now that we have an idea of how Wikipedia frames law and distribution, we look into the usage of those terms to see when they are differentiated or, on the contrary, presented as equivalent. In addition, we account for a specific kind of claim where law is framed as empirically grounded.

Demarcation

In our Wikipedia corpus we find no statements about a law|distribution divide, but we do find demarcations between law and theory. As we have seen, a distribution is a mathematical entity, but it is a function and not a theorem. As it turns out, the only demarcations we find are internal to the notion of law, and leave the concept of distribution unaffected. For instance, from the article about scientific law:

“Laws differ from scientific theories in that they do not posit a mechanism or explanation of phenomena: they are merely distillations of the results of repeated observation. As such, a law is limited in applicability to circumstances resembling those already observed, and may be found false when extrapolated.”

Scientific law
A first type of demarcation is about the power to explain, and is more exclusive

The following picture, from the same article, is supposed to illustrate the point, though it makes an important difference:

Illustration from the article Scientific law

The picture shows the three cases that can be called law, but once again demarcates between meaning (3) on one side (on the left) and meanings (2) and (1) on the other (center and right, respectively).

The picture makes an important difference, however, insofar as it states that repeated successful predictions both describe and explain phenomena, while the text demarcates the law as unable to explain. If we reorder the picture in the same order as our previous illustrations, we obtain the following:

A second type of demarcation is about the power to describe, and is more inclusive

According to both demarcations, a mathematical theorem that is not based on empirical observation should not be called a law. In this perspective, the anomalies that exist for historical reasons are framed as the exceptions that prove the rule.

“What distinguishes an empirical statistical law from a formal statistical theorem is the way these patterns simply appear in natural distributions, without a prior theoretical reasoning about the data.”

Empirical statistical laws

This statement makes the exact same inclusive demarcation, but it is interesting for another reason: the use of the concept of “natural distribution”, which here refers to empirical observations. As tempting as it may be, we cannot approach the concept of distribution as the mathematical side of the notion of law. Not only can laws be mathematical, but distributions are sometimes located on the empirical side of the observation/theory divide.

Equivalence

Equivalence has multiple forms, explicit or implicit. We expose two of them:

  1. Indifferent naming of the same thing as law or distribution
  2. Statements of multiple names

Indifferent naming. The article on the normal distribution provides a good illustration; we will focus on it for a few paragraphs. In the first third of the article, the normal distribution is systematically called a “distribution”. But then a few passages mention the “Gaussian law” and progressively, we see mentions of the “normal law”.

The simplest form of indifferent naming is the usage of both “normal law” and “normal distribution” in the same context. We consider it an implicit form of equivalence, provided that both names refer to the same thing. In our example, it is important to note the absence of nuance between the two terms. For instance, following the observed demarcations, we could expect the term “law” to be used to emphasize the empirical dimension, but it is not the case. See how the first occurrence of “normal law” in the article clearly refers to the mathematical function:

“Pearson distribution — a four-parameter family of probability distributions that extend the normal law to include different skewness and kurtosis values.”

Normal distribution

As an additional clue of this equivalence, the article on normal law exists and redirects to normal distribution (along with two other unrelated meanings of normal law, in the fields of aviation and of justice).

A slightly different form of indifferent naming is when a distribution is also called a law but under a different name, as in distribution X and law Y. For instance, the “normal distribution” is also called the “Gaussian law”. Once again, we verify that this term refers to the exact same thing as the distribution, as in this passage:

“One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice.”

Normal distribution

Incidentally, note that while the term “law” here refers to the mathematical function, “distribution” refers to the empirical side, in a complete reversal of the expected demarcation.

Statements of multiple names. Sometimes we find statements about the multiple names, which is the explicit version of indifferent naming. For instance the article on the Poisson distribution has a subsection about the law of rare events which explicitly states the synonymy (but only “sometimes”).

“The word law is sometimes used as a synonym of probability distribution, and convergence in law means convergence in distribution. Accordingly, the Poisson distribution is sometimes called the law of small numbers because it is the probability distribution of the number of occurrences of an event that happens rarely but has very many opportunities to happen. The Law of Small Numbers is a book by Ladislaus Bortkiewicz (Bortkevitch) about the Poisson distribution, published in 1898.”

Poisson distribution

The source of these multiple names is often implied or stated to be historical, as in the article about the normal distribution, whose History section has a “naming” subsection where it appears that the distribution was first named as a law:

“Since its introduction, the normal distribution has been known by many different names: the law of error, the law of facility of errors, Laplace’s second law, Gaussian law, etc. Gauss himself apparently coined the term with reference to the “normal equations” involved in its applications, with normal having its technical meaning of orthogonal rather than “usual”. However, by the end of the 19th century some authors had started using the name normal distribution”

Normal distribution

Other articulations

Like the Deleuzian fold, which separates and joins at once, some statements imply demarcation and equivalence at the same time. Take for instance this comment following a list of mathematical extensions to the normal distribution:

“All these extensions are also called normal or Gaussian laws, so a certain ambiguity in names exists.”

Normal distribution

Acknowledging ambiguity is a passive-aggressive way to argue for demarcation, pretending that a difference exists while not contributing to situating it. What the sentence actually does is state that the multiple variations of the distribution share a common name, which is a statement of equivalence, although quite a weak one.

The law framed as pervasiveness. As we have seen, the reference definition of scientific law refers to pervasiveness. Similarly the article on Zipf’s law argues that its status as an “empirical law” refers to the pervasiveness of the Zipfian distribution. Note the “many” in the following sentence:

“[Zipf’s law] refers to the fact that many types of data studied in the physical and social sciences can be approximated with a Zipfian distribution, one of a family of related discrete power law probability distributions.”

Zipf’s law

In this perspective the law is not exactly the distribution: it is the fact that we find the distribution in multiple empirical situations. This is a possible key to understanding the usage of our two terms. Not only is it aligned with the demarcation (the law carrying the empirical dimension of the distribution), but it also suggests an interpretation of the usage of law as a metonymy: “law X” would stand for “law of the pervasiveness of distribution X”. However this interpretation only stands for the cases where we can observe such a nuance, since we have seen with indifferent naming that in some situations there is a strict synonymy between law and distribution. Anyhow, the question of pervasiveness is central to our research question and we will dedicate some specific attention to it.

Claims to empiricism

As we have seen, when the term law is not used as a synonym of distribution, it has an empirical dimension, as hinted by the title of the corresponding article, “Empirical statistical laws”. But we will first return to the reference definition of scientific law, quoted once again below. It states the key elements of the empirical dimension of a law, which we will discuss.

“The laws of science, also called scientific laws or scientific principles, are statements that describe or predict a range of natural phenomena. Each scientific law is a statement based on repeated experimental observations that describes some aspect of the Universe.”

Scientific law

Firstly, a law can describe or predict. This states two distinct agencies, both of which can be qualified as empirical. The power to describe is the weaker of the two: it only means that the law is a valid reduction of a phenomenon. The power to predict is stronger, since it allows applications. Both are empirical because they relate to the observation of natural phenomena, but in different ways.

Secondly, a law is pervasive (“repeated observations”). Implicitly, laws can have validity conditions that prevent them from describing and/or predicting certain phenomena – laws are not realistically expected to apply strictly everywhere all the time. But a law is not something that happens once or twice. It earns its status as a law by applying to enough observations, even though no precise threshold is specified.

The same article also states a third kind of agency, the power to explain. It seems to derive from the combination of the other agencies and pervasiveness. The power to describe (or “summarize”) gives the power to potentially explain, while the power to predict and the pervasiveness validate the explanation.

“Scientific laws summarize and explain a large collection of facts determined by experiment, and are tested based on their ability to predict the results of future experiments.”

Scientific law

We also find an alternative way to empirically ground the concept of law, purely based on pervasiveness, and sometimes on a more explicitly structuralist alternative, independence from details. This argument is called “universality” and is stated in a dedicated article.

“In network dynamics, universality refers to the fact that despite the diversity of nonlinear dynamic models, which differ in many details, the observed behavior of many different systems adheres to a set of universal laws. These laws are independent of the specific details of each system.”

Universality (dynamical systems)

We see here how universality refers to the very fact that there are empirical laws. Note that independence from details is more than observed pervasiveness insofar as it adds an explanation to the phenomenon. In that sense universality implies the power to explain; but where the statements from the article on scientific law describe where that power comes from, universality merely assumes it.

The article on the log-normal distribution makes a precise universalist point which embodies multiple aspects of the Wikipedia discourse on law and distribution.

“The log-normal distribution is important in the description of natural phenomena. This follows, because many natural growth processes are driven by the accumulation of many small percentage changes. These become additive on a log scale. If the effect of any one change is negligible, the central limit theorem says that the distribution of their sum is more nearly normal than that of the summands. When back-transformed onto the original scale, it makes the distribution of sizes approximately log-normal (though if the standard deviation is sufficiently small, the normal distribution can be an adequate approximation). This multiplicative version of the central limit theorem is also known as Gibrat’s law, after Robert Gibrat (1904–1980) who formulated it for companies. If the rate of accumulation of these small changes does not vary over time, growth becomes independent of size. Even if that’s not true, the size distributions at any age of things that grow over time tends to be log-normal.”

Log-normal distribution

We remark that:

  • It starts as a claim to empiricism (“important in the description of natural phenomena”).
  • It grounds the claim on pervasiveness first (“because many natural […] processes”…).
  • The term law (“Gibrat’s law”) does not refer to the distribution itself (log-normal) but to a “theorem” involving it (the central limit theorem).
  • The passage argues that the theorem explains the observed pervasiveness, which is the universalist stance (regardless of how convincing you may find the point).
  • All of which frames Gibrat’s law as our definition (2), a pervasive behavior explained by a mathematical theorem.
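
To make the multiplicative version of the central limit theorem concrete, here is a minimal simulation sketch in Python (the parameter values are hypothetical, chosen only for illustration): compounding many small random percentage changes produces sizes whose logarithms are approximately normally distributed.

```python
import numpy as np

rng = np.random.default_rng(42)

n_entities = 10_000  # number of growing "things" (hypothetical value)
n_steps = 200        # number of growth periods (hypothetical value)

# Each step multiplies the size by (1 + a small random percentage change).
changes = rng.normal(loc=0.01, scale=0.05, size=(n_entities, n_steps))
sizes = np.prod(1.0 + changes, axis=1)

# On a log scale the multiplicative changes become additive, so by the
# central limit theorem log(sizes) should be approximately normal.
log_sizes = np.log(sizes)

def skewness(x):
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

print("skewness of sizes:    ", round(skewness(sizes), 2))      # strongly positive
print("skewness of log sizes:", round(skewness(log_sizes), 2))  # close to zero
```

The strongly skewed sizes become roughly symmetric once log-transformed, which is exactly the passage’s point.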

An epistemic typology of statistical laws

In coherence with the three basic understandings of the concept of law that we have highlighted at the start of this section, the article on empirical statistical laws proposes a list of examples formulated as a typology. It features various degrees of theoretical grounding, empirical grounding, and combinations of the two.

“Examples of empirically inspired statistical laws that have a firm theoretical basis include:
• Statistical regularity
• Law of large numbers
• Law of truly large numbers
• Central limit theorem
• Regression towards the mean

Examples of “laws” with a weaker foundation include:
• Safety in numbers
• Benford’s law

Examples of “laws” which are more general observations than having a theoretical background:
• Rank-size distribution

Examples of supposed “laws” which are incorrect include:
• Law of averages”

Empirical statistical laws

Note that one of those laws is called a theorem, and another one a distribution, but as we have seen the equivalence between law and distribution has multiple facets.

This typology, however, does not include some of the cases we have observed, in particular when law is employed as a strict synonym of distribution. It also distinguishes the truth of a law by qualifying it as “supposed”, suggesting that a falsified law might no longer be a law. We will propose our own typology, including all the cases we have observed and ignoring the truth status of the law. In other words, we will recognize a law if it is stated as a pervasive behavior, for instance, regardless of whether this statement is true or false. We see no problem with the concept of a false law, and consider it convenient to still call it a law for epistemic reasons. We will also specify some observed characteristics of the different types. Note: this typology extends and enriches our first basic typology (numbering is consistent).

Observed types of usage of the term law

(0) Law strictly refers to a distribution (a mathematical function).
This definition is never stated but is observed in Wikipedia articles.
It may be interpreted as a metonymy (see below).

(1) Law refers to a pervasively observed distribution/behavior.
Often considered to just describe a phenomenon.
Pervasiveness is required to be called a law.
Some validity conditions may apply.

(2) Law refers to an empirically observed theorem.
Often considered to explain and predict a phenomenon.
The theorem is often about a phenomenon following a certain distribution.
Pervasiveness is no longer required since the theorem is formal.
Some validity conditions may still apply.

(3) Law refers to a theorem (not observed empirically).
Often considered an improper use of the “law” term.

These different types correspond to two different perspectives: descriptive and explanatory. In the descriptive perspective, the law describes a phenomenon, possibly with a distribution. The fact that the distribution itself can also be called a law may cause some confusion. This perspective is purely grounded in empirical observations, which must show some pervasiveness for the behavior to be called a law.

Descriptive perspective: based on pervasive observations, the law describes but does not explain

The explanatory perspective is on the contrary grounded in a mathematical theorem, which is not necessarily observed in the real world. Because the theorem has a formal proof, a deduction, its validity conditions do not require pervasiveness. However, its applicability to real life may still involve validity conditions. The mathematical foundation, as we have seen, is often credited with a power to explain and/or to predict. The fact that the theorem may be about a distribution also allows for a certain degree of confusion.

Explanatory perspective: based on a theorem, it not only describes but also explains and/or predicts

The last two diagrams show how these different usages overlap. As we will see, this situation generates a certain amount of confusion.

The inconsistent usage of the term law

Our four types of usage are not exclusive, as type (0) is compatible with each of the three others. Indeed, though usages (1), (2) and (3) differ in the way the law is grounded (empirically and/or theoretically) and are thus mutually exclusive, usage (0) is possible any time a distribution is involved. What does it mean? Does it matter?

Let us take an example. An article mentions “Zipf’s law”. What do you think it means? There are two possible answers.

  • (0) It means Zipf’s distribution. It is just a mathematical function.
  • (1) It means that Zipf’s distribution is observed everywhere (inside its validity conditions). It is a statement on the empirical reach of a mathematical function, based on its pervasiveness.

Let us look at the Wikipedia article, which starts like this:

“Zipf’s law […] refers to the fact that many types of data studied in the physical and social sciences can be approximated with a Zipfian distribution”

Zipf’s law

This usage is clearly of type (1). But later in the same article we find statements such as these:

“Zipf’s law is most easily observed by plotting the data on a log-log graph”

“The simplest case of Zipf’s law is a “1⁄f function.””

“many natural phenomena obey Zipf’s law”

Zipf’s law

And even the following, which I cannot reproduce in text form in this blog:

Zipf’s law as a formula, referring to the distribution
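
For readers without the image, the standard rank-frequency form of the Zipfian distribution reads as follows (a reconstruction based on the usual notation, not necessarily the exact rendering shown on the Wikipedia page):

```latex
f(k; s, N) = \frac{1 / k^{s}}{\sum_{n=1}^{N} 1 / n^{s}}
```

where k is the rank of an element, s the exponent characterizing the distribution, and N the number of elements. Used this way, the name clearly denotes the mathematical function itself.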

These multiple usages are just as clearly of type (0), referring to the distribution and not to the statement of its pervasiveness. In this example, as in others, different usages coexist in the same text.

This practice of mixing the usages is akin to a figure of speech such as a metonymy. We hypothesize that the Wikipedia writer may favor the shorter “Zipf’s law” over the longer “the mathematical equation involved in Zipf’s law”. It is however intriguing that, for the sake of brevity, the perfectly sound concept of “Zipfian distribution” is not used instead. Distributions never have the meaning of an empirical statement; they always refer to the equation. We found no other reason than the persistence of a historical confusion to explain why the term “distribution” is not used in place of “law” when referring to the mathematical formula.

Does it matter? It brings confusion to statements where we cannot easily determine which of the two usages is intended, which has consequences for the evaluation of those statements. For instance one may wonder: is Zipf’s law true? Understood as type (0), it is as true as a mathematical equation can be, absolutely unfalsifiable inside its validity conditions – though a simple equation makes nothing more than an abstract statement. But understood as type (1), its truth entirely depends on the pervasiveness and validity conditions of the law, which are not necessarily robust and might change over time. A law can stop being true as we accumulate more observations.

Fallacies

The main consequence of this confusion is to allow specious arguments on the validity of laws. By leveraging the confusion to invoke the right perspective at the right moment, we could abuse rhetoric to make a false but convincing point. For instance we could argue that the log-normal law is guaranteed to be true by virtue of the central limit theorem, then argue that it is empirically grounded because of its pervasiveness. This argumentative line is strong because it draws justification from both sides of the empiricism/theory divide. It makes it look like mathematical theory guarantees the law’s empirical reach, but this is precisely a fallacy. Indeed, the law guaranteed by the central limit theorem is just the log-normal distribution, not the statement of pervasiveness constitutive of the law understood as something empirical. Mathematical validity does not cause empirical validity, because conditions always apply. Yet the confusion around the concept of law allows a fallacy that shortcuts the gap.

This last example was just for the sake of argumentation, since we did not actually observe such fallacious arguments, and we did not even search for them – for the moment at least. We will return later to the questions of pervasiveness and universality, and we will pay extra attention to the usage of the term law in these situations, as there is a potential for abuse.

The potential for abuse can take the form of more generic and traditional fallacies like a simple circular logic such as:

  1. A law is by definition pervasive
  2. That pervasiveness is what makes it a law

As trivial as it sounds, laws are often invoked in Wikipedia and their status is rarely discussed outside their own article; pervasiveness is not even systematically justified. Wikipedia is not exactly academic literature and has its limits; nevertheless it seems quite dangerous to repeatedly assume that anything named “law” is, by virtue of that name, pervasive.

The Pareto distribution is the favorite law

Our corpus of analyzed articles is focused on Pareto and the power law, so it would only be natural for it to be biased towards over-representing the Pareto distribution. However, even very generic articles such as Scientific law and Empirical statistical laws seem to have a clear preference for Pareto and Zipf.

The first, and only detailed, examples in the article Empirical statistical laws are Pareto and Zipf. Both are framed as laws. The latter has its empirical pervasiveness highlighted.

“The Pareto principle is a popular example of such a “law”. It states that roughly 80% of the effects come from 20% of the causes, and is thusly also known as the 80/20 rule. […]

Zipf’s law, described as an “empirical statistical law” of linguistics, is another example. According to the “law”, given some dataset of text, the frequency of a word is inversely proportional to its frequency rank. […] However, what sets Zipf’s law as an “empirical statistical law” rather than just a theorem of linguistics is that it applies to phenomena outside of its field, too. For example, a ranked list of US metropolitan populations also follow Zipf’s law, and even forgetting follows Zipf’s law. This act of summarizing several natural data patterns with simple rules is a defining characteristic of these “empirical statistical laws”.”

Empirical statistical laws

In the article Scientific law, Zipf’s law is the only example.

“The term “scientific law” is traditionally associated with the natural sciences, though the social sciences also contain laws. An example of a scientific law in social sciences is Zipf’s law.”

Scientific law

There is only one mention of “law” in the article on probability distributions and it is about “the prototypical power law distribution” (note the double qualification).

Here we only hypothesize that Zipf’s law and the Pareto principle, which are both sub-species of the power law, are associated with especially strong claims to empiricism.

Anecdote: a law whose name is explained by a law

As a lighter note to this section, we remark that not all laws are about a distribution. In the article on the Cauchy distribution (inception!), the name of the law is itself explained by a law. As the History section states:

“Functions with the form of the density function of the Cauchy distribution were studied by mathematicians in the 17th century, but in a different context and under the title of the witch of Agnesi. Despite its name, the first explicit analysis of the properties of the Cauchy distribution was published by the French mathematician Poisson in 1824, with Cauchy only becoming associated with it during an academic controversy in 1853. As such, the name of the distribution is a case of Stigler’s Law of Eponymy. Poisson noted that if the mean of observations following such a distribution were taken, the mean error did not converge to any finite number. As such, Laplace’s use of the Central Limit Theorem with such a distribution was inappropriate, as it assumed a finite mean and variance. Despite this, Poisson did not regard the issue as important, in contrast to Bienaymé, who was to engage Cauchy in a long dispute over the matter.”

Cauchy distribution

Close reading Wikipedia from Pareto to Network Science, part 2

This is part 2: Concepts of complex network and preferential attachment

As we mentioned in part 1 of this work, just defining the concepts we need to understand the field requires an effort. Before we move on to analysis, we will clarify two families of concepts, one centered on the complex network, and the other on preferential attachment.

As this is a post on my research blog, it tends to expose more than a typical paper would. I follow the open source software mantra “release early, release often”.

Findings

Scale-free networks and small-world networks have a different origin and characterization, but it turns out that they are quite the same thing and their differences do not seem so important now that we have studied them further. The more general term of complex network seems appropriate to refer to this family of networks.

Preferential attachment has many names but is precisely defined. The term “Matthew effect” is not a strict equivalent, though. There is an ambiguity about assortativity, which is an equivalent of preferential attachment only in a specific case, despite how some authors refer to the concept.

Scale-free, small world, and complex networks

Those three types of networks are sometimes presented as the same, sometimes not. Fortunately, a simple distinction makes it possible to define a ground where most articles agree:

  1. The characterizations are not the same. Scale-freeness is not small-worldness is not complexity.
  2. In practice, those three networks might be the same kinds of networks.

Note that the question of the differences between the three types, beyond their characterization, is still open. Some articles claim that it is not useful to separate them, while others enforce the demarcations. However, all articles seem to agree that the relevant features for this family of networks are richer than just scale-freeness and small-worldness. In that sense, these categories are more historical than an accurate typology of networks.

In the side template on network science present on many articles, all three types of networks appear on an equal footing.

On the topic of typologies, the article on social networks has one worthy of Borges, where criteria of widely different sorts seem equally relevant to defining their own kind of networks. It is organized in different levels of analysis, and scale-free networks and complex networks appear as items at different levels:

• Micro level:
  o Dyadic level
  o Triadic level
  o Actor level
  o Subset level

• Meso level:
  o Organizations
  o Randomly distributed networks
  o Scale-free networks

• Macro level:
  o Large-scale networks
  o Complex networks

Social network

This article also details these two types of networks, and though the characterization differs, both acknowledge a certain amount of variability in the definition. We also see that the heavy-tail distribution of node degrees is mentioned as a feature of complex networks, which as we have seen is another way to name the power law, and thus implies an unnamed equivalence with the characterization of scale-free networks.

“Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. […] Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them, however, in general, scale-free networks have some common characteristics. One notable characteristic in a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. The Barabási model of network evolution shown above is an example of a scale-free network.”

Social network

“Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random […]. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure.”

Social network

Despite having their own article, social networks are not a specific kind of network but rather an area of application. The three kinds that interest us each have a specific page where they can state their relations to each other.

The small-world network article does not mention the other kinds, but declares a feature of “fat-tailed distribution”, which is an equivalent of the power law, and thus an unnamed link to scale-free networks.

“Networks with a greater than expected number of hubs will have a greater fraction of nodes with high degree, and consequently the degree distribution will be enriched at high degree values. This is known colloquially as a fat-tailed distribution.”

Small-world network
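
To illustrate what an enrichment “at high degree values” looks like in practice, here is a small sketch comparing a random graph with a graph grown by preferential attachment (it assumes the networkx library; the sizes are hypothetical):

```python
import networkx as nx

n, m = 10_000, 3  # hypothetical network size and attachment parameter

# Erdős–Rényi random graph with roughly the same average degree.
random_graph = nx.gnp_random_graph(n, p=2 * m / n, seed=1)
# Barabási–Albert graph, grown by preferential attachment.
ba_graph = nx.barabasi_albert_graph(n, m, seed=1)

for name, g in [("random", random_graph), ("scale-free", ba_graph)]:
    degrees = [d for _, d in g.degree()]
    print(name, "average degree:", round(sum(degrees) / n, 1),
          "max degree:", max(degrees))
```

Both graphs have a similar average degree, but the maximum degree of the preferential attachment graph is typically an order of magnitude larger: that excess of hubs is the fat tail.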

The scale-free network page starts with the characterization by a power-law distribution of node degrees. It also mentions “the small world network model” as a sub-category, and makes several confusing references to the concept of “complex network”, as for instance in the caption of this image:

From the Scale-free network page

Another example: the mention of “scale-free complex networks” in a paragraph marked as requiring “attention from an expert in Mathematics”.

From the Scale-free network page

The complex network article explicitly states its relations to the other two kinds: it generalizes them, arguing that the original demarcations have lost relevance:

“Two well-known and much studied classes of complex networks are scale-free networks and small-world networks, whose discovery and definition are canonical case-studies in the field. Both are characterized by specific structural features—power-law degree distributions for the former and short path lengths and high clustering for the latter. However, as the study of complex networks has continued to grow in importance and popularity, many other aspects of network structure have attracted attention as well.”

Complex network

In that spirit, it assumes an explicitly vague definition. It compensates for the drawback of not being a mathematical object by declaring itself an area of research. It then states a form of correspondence with networks defined by their origin rather than their properties, notably “social networks”.

“In the context of network theory, a complex network is a graph (network) with non-trivial topological features—features that do not occur in simple networks such as lattices or random graphs but often occur in graphs modelling of real systems. The study of complex networks is a young and active area of scientific research (since 2000) inspired largely by the empirical study of real-world networks such as computer networks, technological networks, brain networks and social networks”

Complex network

It is interesting to remark that the article on social networks, symmetrically, states an equivalence with complex networks (and as we have seen, with them only).

“Together with other complex networks, it forms part of the nascent field of network science.”

Social network

Preferential attachment, cumulative advantage, rich get richer, Yule process, assortativity, and the Matthew effect

The primary definition of preferential attachment is probabilistic, even if we often find it associated with scale-free networks (in our corpus at least).

“A preferential attachment process is a stochastic urn process, meaning a process in which discrete units of wealth, usually called “balls”, are added in a random or partly random fashion to a set of objects or containers, usually called “urns”. A preferential attachment process is an urn process in which additional balls are added continuously to the system and are distributed among the urns as an increasing function of the number of balls the urns already have.”

Preferential attachment

The following definition (under the name of “the rich get richer”) is similar but more explicit on the implications.

“In statistics, the phrase “the rich get richer” is often used […] where the probability of the next outcome in a series taking on a particular value is proportional to the number of outcomes already having that particular value. This is useful for modeling many real-world processes that are akin to “popularity contests”, where the popularity of a particular choice causes new participants to adopt the same choice (which can lead to the outsized influence of the first few participants).”

The rich get richer and the poor get poorer

This time the equivalence between the different notions is pretty explicit, and the article on preferential attachment explains it. Only the Matthew effect is a little special, and as we have seen it is also considered equivalent to the Pareto distribution. Assortativity is also a special case, as we will see, because it is an equivalent of preferential attachment only in certain cases. In any case, the connection with the power law is central and explicit.

“”Preferential attachment” is only the most recent of many names that have been given to such processes. They are also referred to under the names “Yule process”, “cumulative advantage”, “the rich get richer”, and, less correctly, the “Matthew effect”. They are also related to Gibrat’s law. The principal reason for scientific interest in preferential attachment is that it can, under suitable circumstances, generate power law distributions.”

Preferential attachment
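
As a minimal illustration of that last point, here is a sketch of a preferential attachment urn process in Python (the parameters are hypothetical; the trick of sampling a uniformly random ball makes the choice of an urn proportional to its content):

```python
import random

random.seed(1)

urns = [1, 1]    # two initial urns, one ball each
balls = [0, 1]   # one entry per ball, holding the index of its urn

for _ in range(100_000):
    if random.random() < 0.1:    # occasionally create a brand new urn
        urns.append(1)
        balls.append(len(urns) - 1)
    else:                        # otherwise, add a ball to an existing urn with
        i = random.choice(balls) # probability proportional to its content:
        urns[i] += 1             # this is "the rich get richer"
        balls.append(i)

urns.sort(reverse=True)
print("ten largest urns:", urns[:10])
print("total urns:", len(urns))
```

Running this, a handful of early urns end up holding a disproportionate share of the balls, and the distribution of urn sizes is heavy-tailed: this is how such processes generate power laws under suitable circumstances.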

Cumulative advantage is another name for preferential attachment, but it seems it is no longer much employed.

“In a later paper in 1976, Price also proposed a mechanism to explain the occurrence of power laws in citation networks, which he called “cumulative advantage” but which is today more commonly known under the name preferential attachment. […] Barabási and Albert proposed a generative mechanism to explain the appearance of power-law distributions, which they called “preferential attachment” and which is essentially the same as that proposed by Price.”

Scale-free network

There is also quite a long explanation about the distinction with the Matthew effect, quoted below for reference.

“Preferential attachment is sometimes referred to as the Matthew effect, but the two are not precisely equivalent. The Matthew effect, first discussed by Robert K. Merton, is named for a passage in the biblical Gospel of Matthew: “For everyone who has will be given more, and he will have an abundance. Whoever does not have, even what he has will be taken from him.” (Matthew 25:29, New International Version.) The preferential attachment process does not incorporate the taking away part. This point may be moot, however, since the scientific insight behind the Matthew effect is in any case entirely different. Qualitatively it is intended to describe not a mechanical multiplicative effect like preferential attachment but a specific human behavior in which people are more likely to give credit to the famous than to the little known. The classic example of the Matthew effect is a scientific discovery made simultaneously by two different people, one well known and the other little known. It is claimed that under these circumstances people tend more often to credit the discovery to the well-known scientist. Thus the real-world phenomenon the Matthew effect is intended to describe is quite distinct from (though certainly related to) preferential attachment.”

Preferential attachment

The article on the Matthew effect focuses more on the similarities than on the differences.

“In network science, the Matthew effect is used to describe the preferential attachment of earlier nodes in a network, which explains that these nodes tend to attract more links early on. […] “Because of preferential attachment, a node that acquires more connections than another one will increase its connectivity at a higher rate, and thus an initial difference in the connectivity between two nodes will increase further as the network grows, while the degree of individual nodes will grow proportional with the square root of time.” The Matthew Effect therefore explains the growth of some nodes in vast networks such as the Internet.”

Matthew effect

Preferential attachment is also mentioned on the statistical side of the studied articles, as for instance in the article on Zipf’s law.

“Similarly, preferential attachment (intuitively, “the rich get richer” or “success breeds success”) that results in the Yule–Simon distribution has been shown to fit word frequency versus rank in language and population versus city rank better than Zipf’s law.”

Zipf’s law

As a clue to the cultural side of these notions, the article on “The rich get richer and the poor get poorer” is specifically about the aphorism and not the phenomenon itself, which is covered under either “wealth concentration” or “economic inequality”.

“This article is about the catchphrase. For the theoretical process, see wealth concentration. […] In statistics, the phrase “the rich get richer” is often used as an informal description of the behavior of Chinese restaurant processes and other preferential attachment processes, where the probability of the next outcome in a series taking on a particular value is proportional to the number of outcomes already having that particular value.”

The rich get richer and the poor get poorer

Assortativity and homophily

A misunderstanding must be clarified about assortativity, also called homophily.

  1. Assortativity, or homophily, generally means that in a network the probability that two nodes are connected is higher when those nodes have something in common. This is not specifically related to preferential attachment.
  2. However when the point in common is the degree (number of neighbors) then assortativity/homophily becomes an avatar of preferential attachment. Unfortunately, it seems that some authors employ a synecdoche where that specific case is named by the more generic concept, causing some confusion.

“Assortativity, or assortative mixing is a preference for a network’s nodes to attach to others that are similar in some way. Though the specific measure of similarity may vary, network theorists often examine assortativity in terms of a node’s degree.”

Assortativity
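
To make the degree-based special case concrete, here is a sketch using networkx, which exposes a coefficient for exactly this case (the graph and its parameters are hypothetical):

```python
import networkx as nx

# A graph grown by preferential attachment (Barabási–Albert model).
g = nx.barabasi_albert_graph(1_000, 2, seed=3)

# Degree assortativity: the correlation between the degrees found at the
# two ends of the edges. This is the degree-specific case discussed above,
# not the generic "similar in some way" notion.
print("degree assortativity:", round(nx.degree_assortativity_coefficient(g), 3))
```

networkx also provides assortativity coefficients for arbitrary node attributes, which correspond to the generic notion in point 1; only the degree-based coefficient relates to preferential attachment.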

On the use of the term “homophily”, we can cite the article on social network analysis: “Homophily (assortativity)”.

The article on social networks is not specific but mentions “assortativity or disassortativity among vertices” as a feature of complex networks.

Note that, awkwardly, “assortative mixing” seems to be the same thing as assortativity, but each has its own page, and each refers to the other.