Is Gephi a Black Box?

This text was inspired by Emilija Jokubauskaite’s master’s thesis at DMI, under Bernhard Rieder’s supervision. She studied Gephi and its epistemic culture, conducting a series of interviews (including one with me) and reflecting on the relations between the tool and its users, mostly in the social sciences.

Gephi and Its Context: Three Oppositions in the Epistemic Culture of Network Analysis and Visualisation

As Gephi’s original author (before Mathieu Bastian and later Eduardo Ramos Ibáñez took over as lead developers), and having continuously influenced its design, I find Jokubauskaite’s work precious. It accounts for the community of users in a way that is remarkably productive for the social sciences. It covers many aspects, but here I just picked one that is important to me: blackboxing.

In this text I draft a series of reflections sparked by her thesis. I did not take the time to organize them as a narrative, and preferred a series of points, mostly unsorted. And as a draft, the writing is loose and verbose. Sorry. It allows me to open the piece to an early discussion, which is the point of this research blog.

Though focusing on Gephi as a case, my perspective extends more generally to the use and design of data analysis software.

Does blackboxing matter?

Wikipedia cites Bruno Latour: blackboxing is “the way scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become.” I appreciate how respectfully positive this definition is: Latour frames blackboxing as a by-product of success.

Not all agree on that matter. In academia, some people criticize tools on the grounds of their opacity. They claim those are black boxes, and it is not a compliment. To them, blackboxing is not a side effect of success, but an issue, a failure. Gephi is no exception. For instance, Claire Lemercier and Claire Zalc, among others, criticize Gephi for being a black box. They write:

“More recent software devised in the context of “digital humanities” or “big data” sometimes only offers visualizations at the expense of calculations; or does not offer the analysis tools that are useful in the social sciences (as opposed to in physics); or presents some analysis tools like “black boxes”, with little accessible documentation as to how they work. (contrary to many, we are no fans of Gephi, for this type of reasons; we do not object, of course, to advanced network researchers using it, but we do not find it well-suited to beginners)”

C. Lemercier and C. Zalc

If blackboxing causes trouble, then it matters. And more precisely: what is the problem with black boxes, how can we fix it, and how can we make better tools for science? I will argue against the idea that Gephi is a black box, but at least we can all agree that blackboxing is an issue we need to address.

Blackboxing is in part a cultural issue

Blackboxing is not really a tool thing but a culture thing. It is only a feature of the tool insofar as the tool impacts culture. Tools can give rise to cultures and impact them, but cultures also have their own reasons.

Some people find Gephi hard to grasp while others praise its simplicity. Users with very different sets of skills and knowledge have a different understanding of how it works and what it allows them to do. For some it is opaque, for others transparent. The defining quality of the black box, opacity, is relative.

We have two reasons to refuse the model of blackboxing as inaccessibility of underlying methods. Firstly, accessing is not understanding. You understand what you see only if you have the right competences. In the case of Gephi, an open source program, accessing the source code is possible but it does not guarantee understanding the method. Gephi exposes many settings, but accessing those is not knowing how to set them. On a different level, a decently sized body of Gephi documentation is accessible online, but that does not mean that users find it, or even search for it. Blackboxing can happen even when the method is accessible. Secondly, the method might be inaccessible and yet you understand it anyway. This is post-hoc interpretability, a mode of understanding algorithms that works well with deep learning but not only there; I will address this point separately. Either way, blackboxing is not primarily about accessing methods.

The opacity of blackboxing relates to a complex dynamic of resistances and affordances offered by the tool, and incentives and expectations set by culture. Jokubauskaite writes extensively about it:

“It can be hypothesised that some of the specificities and affordances of the software […] are a part of an epistemic culture in which the relationship between the method of network analysis and the Gephi tool can be regarded atypical. Firstly […] scholars tend to approach network analysis with this tool as intuitively following some ‘recipe’ without always consciously acknowledging the subtleties in its steps. While Jacomy aimed to educate social scientists about graph theory and ‘what can be trusted’ in a network visualisation, the academic environment has possibly steered it towards learning how to productively use the tool. […] [T]his thesis would like to argue that Gephi cannot be regarded as simply a method that has been packed into a software. Rather, looking from the perspective of current research practices, it can be largely seen to have an academic practice of its own than being a part of the larger tradition of network analysis and visualisation historically. Specific Gephi affordances and the research practices of using it may be seen as constituting a self-sufficient epistemological routine apart from its complex historical and methodological underpinnings.”

E. Jokubauskaite

Or more simply:

“I would like to argue that in the case of Gephi, black-boxing is less reliant on the interface and the tool in itself, but is more related to the epistemic culture as well as the agency and knowledge of the user.”

E. Jokubauskaite

We resisted blackboxing Gephi

I was surprised to realize that Gephi can be considered a black box, because we intentionally aimed at the opposite. I do not want to spend too much time justifying ourselves, but here are a few of our reasons:

  • The method is exposed. The code is open and all implemented algorithms refer explicitly to relevant papers inside the graphical user interface.
  • We have a website, a forum, a Facebook group, a wiki, and many third party sources for documentation.
  • We expose visually what happens during node placement algorithms, a crucial part of visual network analysis. Instead of a loading bar as in GUESS, you see the nodes moving and you can stop the process when you want. You have a chance to understand how it works (post-hoc).
  • Arbitrary meta-parameters are exposed: users see they exist and can edit them. This applies to both layout algorithms and metrics (clustering, centralities…).
  • We published an open access paper on our own layout algorithm (ForceAtlas2) where we provide both the equations and visual explanations of each of its settings.
  • By design, Gephi never initiates a process without a user action. By default, it does not compute any statistics, any layout. Users have to do it, and acknowledge the existence of these possibilities, along with their alternatives.

Despite our best intentions, we might still have failed. In that case, I am interested in understanding why Gephi is a black box, and why our constant efforts did not matter. But I am not convinced by the diagnosis.

As is well known in the field of design, users typically misdiagnose problems. They detect actual problems, but they generally situate them in the wrong place. This can be explained by their ignorance of design constraints. In that sense, the primary way any device is a black box is by hiding the design process. In particular, the work of exploring alternative solutions (that were discarded) is never visible. This reduction of the exploratory process to a bounded object is a classic aspect of design, applying to virtually anything humans can produce, from cooking to writing a paper. For your own productions as well, you have certainly received that kind of unrealistic feedback, candidly oblivious of the many constraints you faced, possibly from your mother-in-law, or worse, peer-reviewer #2. Feedback is generally sparked by a real issue, though. The work of a designer is to understand where it comes from and elaborate a solution that takes constraints into account. This is how I frame the issue of Gephi’s blackboxing. I acknowledge the existence of an issue, but I do not take for granted that it is a matter of opacity.

Once you factor in constraints, you realize that there will always be a certain amount of blackboxing. Blackboxing is nuanced, and if it were a scale, it would not start at zero. Building a tool without blackboxing is like making a cake without cooking. But some tools are more or less blackboxed, or blackboxed in different ways. Since a tool can be transparent to certain users and opaque to others, certain publics might have been favored over others. More simply, we can ask how much blackboxing could have been avoided. It is not the same question as how much blackboxing there is, because it accounts for the amount of inevitable blackboxing. This is where users miss an important factor. I believe, in the case of Gephi, that we could not have done much better while retaining the tool’s quality (or without spending time we could not afford). However, I acknowledge that we might have blackboxed it another way. In that sense the right diagnosis is not how much it is blackboxed, but in which way. Despite our efforts, Gephi might be a black box in a way that is detrimental to a certain public.

Gephi was intentionally favoring beginners. We had two personas in mind when we designed it:

  1. Someone trained in social sciences or humanities engaging with relational data
  2. Someone trained in network science with an empirical case to study (e.g., myself)

We wanted Gephi to be usable by people without a background in network science, while still being useful to more advanced users, but we did not want to favor the expert public to the detriment of the beginner public. My own design skills were poor at that time, and I made a number of mistakes that I can see now. But even if our learning curve was not as smooth as we thought, it paid off, and as Jokubauskaite observed, many users feel how it helps them gradually engage with network science. “Some other interviewees also acknowledged this characteristic of the instrument, saying, for example, that ‘through the time of working with the actual tool you […] get to know what it […] represents. So it gives you more insight into what it is […] that you are working with’. Moreover, several of the interviewees expressed a wish to do more related research in order to learn more about the tool.”

What if Gephi is transparent to beginners but opaque to advanced users? We might have sacrificed the expert public in our quest for a tool usable by all. This hypothesis is aligned with a number of observations: Gephi is appreciated by beginners, it is criticized by some experts, and as Jokubauskaite notes, its users frame it as an entry point to the field of network science. However, those facts can be explained otherwise. The critique by experts is probably a selection bias: only an expert would take the time to criticize Gephi, because a frustrated beginner would silently move on to an alternative. Additionally, Lemercier and Zalc explicitly make the opposite point: “we do not object, of course, to advanced network researchers using it, but we do not find it well-suited to beginners”. Our hypothesis does not seem to hold, and Gephi’s blackboxing problem does not seem to be caused by a bias towards the beginner public. There is no simple way in which Gephi is too blackboxed, or blackboxed the wrong way.

Users repurpose Gephi as fast food

The model according to which tools implement methods is too simplistic. Naturally, users repurpose tools in unexpected ways, diverting features, and deviating from safe methodological paths. This is one of the reasons why blackboxing is a cultural issue. What happened when users repurposed Gephi?

Jokubauskaite notes: “Gephi cannot be regarded as simply a method that has been packed into a software. Rather, looking from the perspective of current research practices, it can be largely seen to have an academic practice of its own than being a part of the larger tradition of network analysis and visualisation historically. Specific Gephi affordances and the research practices of using it may be seen as constituting a self-sufficient epistemological routine apart from its complex historical and methodological underpinnings.”

I personally frame Gephi as a network science tool. So I was a little surprised to learn that Gephi’s practice is seen as different from network science. But it actually makes a lot of sense. In particular, as I have already shown in this blog, most of network science is a structuralist or even universalist approach to social and living phenomena, trying to leverage mathematical theories and computing to unveil hidden laws. That project is quite different from Gephi’s project, which engages empirically with relational phenomena through visual and interactive interfaces. What are the users observed by Jokubauskaite doing with Gephi?

“[W]hile the researchers often arrived at the use of Gephi without former knowledge of network analysis, their educational path from the beginning can be seen as largely focused on producing findings in short-time projects as opposed to taking time to empirically explore networks and learn about them.”

“[The context in which] Gephi is used is often quite time-restricted (for example, in short project-work). Additionally, when learning how to carry out network analysis, researchers often learn from step-by-step tutorials or ‘best practices’ that increase the productivity when using the tool, however might disincentivise further attempts at method clarification. Following that, the researchers have reported being encouraged to ‘just use it’ or instructed to take some methodological steps without providing further information on why and what the implications might be. These practices, in combination, can be seen as further black-boxing the method from the user of Gephi.”

E. Jokubauskaite

Some Gephi users consume it as fast food – possibly most of them. That is an interesting finding. My own experience confirms it, though it only became clear to me as I read Jokubauskaite’s work. I can even propose an explanation, focusing on the question of layouts, of why it happened despite our efforts to slow users down.

Every Gephi user had already seen network visualizations before trying to produce one. I know for a fact that in such a situation, you just assume that the placement results from a conventional process, or worse, that nodes have a natural position. But of course the nodes do not have Euclidean coordinates in the network, so we must produce them. We generally use an algorithm to place the nodes so that their relative distances tell us something about the structure of the network (see our introduction to visual network analysis). There are different ways to do that, different algorithms. None of them is “the one”. You have to choose, and as a beginner you do not have the knowledge to inform your decision.
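To make that decision more concrete, here is a minimal, hypothetical sketch of what a force-driven layout does (a toy illustration, not Gephi’s actual ForceAtlas2 implementation): nodes start at random coordinates, every pair of nodes repels, connected nodes attract, and the loop runs until someone stops it.

```python
import math
import random

def force_layout(nodes, edges, iterations=200):
    """Toy force-driven layout: not ForceAtlas2, just the general principle."""
    # Random initial coordinates in [0, 1]: this un-spatialized state is the
    # "Borg cube" you see in Gephi before running any layout.
    pos = {n: [random.random(), random.random()] for n in nodes}
    for _ in range(iterations):  # in Gephi, you decide when to stop
        disp = {n: [0.0, 0.0] for n in nodes}
        # Repulsion: every pair of nodes pushes apart.
        for a in nodes:
            for b in nodes:
                if a == b:
                    continue
                dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
                d = math.hypot(dx, dy) or 1e-9
                disp[a][0] += 0.001 * dx / (d * d)
                disp[a][1] += 0.001 * dy / (d * d)
        # Attraction: nodes linked by an edge pull together.
        for a, b in edges:
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            disp[a][0] -= 0.05 * dx
            disp[a][1] -= 0.05 * dy
            disp[b][0] += 0.05 * dx
            disp[b][1] += 0.05 * dy
        for n in nodes:
            pos[n][0] += disp[n][0]
            pos[n][1] += disp[n][1]
    return pos

# Two triangles bridged by a single edge: the relative distances should reveal
# the two clusters, even though exact coordinates differ at every run.
nodes = list(range(6))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(force_layout(nodes, edges))
```

The hard-coded constants in this sketch stand for the kind of arbitrary meta-parameters that Gephi deliberately exposes as editable settings.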

This is a classic design dilemma. If you make the choice for the user, you take agency from them, you make a methodological decision invisible, and you narrow down the options. It causes more blackboxing and it is detrimental to expert users. But if you expose the choice, you require the beginner to make an impossible decision. There are different ways to deal with the situation: bringing in knowledge via documentation or on-screen help, suggesting a default choice while presenting alternatives… But in the end, the easier you make the life of the user, the less visible you make the decision. It is a tradeoff.

In Gephi, despite our focus on beginners, we took the line that makes their life the most difficult: we confronted them with the impossible decision. We expose a list of layouts without any default choice. We do not even throw the choice at the user’s face, we do not guide them towards the decision. I proposed this guideline in the very first steps of Gephi’s design: all actions must be initiated by the user. In that sense, Gephi behaves more like a sandbox (where you pick tools to do stuff, as in Photoshop or Word) and less like a scripted method (where you execute a process, as in pushing a button). As a consequence, if you ignore the necessity of a layout, your network is not spatialized and appears as the “infamous Gephi Borg cube”, a square resulting from randomizing node coordinates between 0 and 1. The situation is known to be frustrating, hence the fun pejorative name.

Our choice requires the user to understand what they are doing, and the necessary knowledge is not inside Gephi, but in the community (YouTube, Facebook group, blogs…). This is where the cultural dimension of blackboxing kicks in. We deliberately chose to slow down users to force them to face their responsibility. To progress, they must face that the layout is a decision, that there are multiple choices, that there are settings, and that they must decide when to stop it. They can monitor the effect of the layout in the main view.

So how can users repurpose Gephi as fast food? Because despite all our efforts, users find shortcuts. The acclaimed game designer Mark Rosewater summarized his 20 years of designing Magic: The Gathering in a famous talk named Twenty Years, Twenty Lessons, and his #1 lesson is: Fighting against human nature is a losing battle. He writes:

“Your audience […] is humans. They come with a complex operating system. It’s quirky at times, but it can be understood. Just remember that humans are quite stubborn. They like to do things the way they like to do them and it’s hard to change their behavior. […] Don’t get yourself into a fight you’re probably not going to win. Human behavior is a powerful force. We are creatures of habit and instinctually fear change. Yes, there are things that come along—like the cell phone—that humans change their behavior around, but don’t assume your [device] is going to be one of those revolutionary things.”

M. Rosewater

Confronted with a choice we cannot make, we simply search for a recommendation and get past the difficulty if we can. Even if just to have an idea of what happens next. And most users have an idea of what they want to obtain: nodes spread out in nice clusters, interpretable as a map. Despite the frustration generated by our design, motivated users can find an answer on the web, pick the same algorithm as everyone else, apply it and move on. Any Gephi tutorial explains how. We can slow them down, force them to face underlying methodological fundamentals, but we cannot prevent them from using all sorts of shortcuts.

Jokubauskaite observes exactly this phenomenon, and correctly attributes this kind of blackboxing to the practice, and not to the tool per se. When it comes to human nature, there is not much we can do. The contextual necessity of getting a network analysis while dedicating as little time and energy as possible does not have much to do with Gephi, but rather with digital glitter. It creates an irreducible amount of blackboxing, an opacity that does not lie in the tool but in practices, and that we cannot easily reduce.

What is a clear box?

If black boxes are bad, what is the ideal we seek? How do we build a clear box?

In our initial Gephi design, we had an implicit ideal: automating manual tasks. For instance, the search-and-replace feature present in all text editors is transparent. Anyone understands what it does; it is just automated. It accelerates a repetitive task. This acceleration can unlock new possibilities, become a qualitative change, but it is still something you can understand. You can predict the outcome, anticipate issues, monitor results. Many metrics, for instance the TF-IDF used in text mining, are transparent as well because they just count stuff. Those can be seen as clear boxes. When I coded the first Gephi prototype, I had the previous experience of manually retracing and drawing networks, in the tradition of Moreno’s sociograms. I was also developing my own layout algorithms. For me, Gephi was just automating manual operations. But of course it felt very different to other users, because if you have no experience of those manual operations, it is not transparent to you. Besides, this version of the clear box has a worse flaw. In most algorithms, simple steps are automated but also combined in such a way that understanding the steps does not necessarily shed light on what the algorithm does. In fact, all computer algorithms are based on simple Boolean operations, but that does not make them transparent. Involving simple operations is not a satisfying characterization of a clear box.
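As an illustration of that first version of the clear box, here is a minimal sketch of TF-IDF (one textbook formulation among several, not tied to any particular tool): the metric is transparent in the sense that it only counts term occurrences and weights them by rarity across documents.

```python
import math
from collections import Counter

def tf_idf(documents):
    """Toy TF-IDF: term frequency weighted by how rare the term is overall."""
    tokenized = [doc.lower().split() for doc in documents]
    n_docs = len(tokenized)
    # Document frequency: in how many documents each term appears.
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        scores.append({
            term: (count / len(tokens)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return scores

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "networks have nodes and edges",
]
print(tf_idf(docs))
```

Each step is a count you could redo by hand; the automation accelerates the task without changing its nature, which is exactly why it feels transparent.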

Another version of the clear box is mentioned by Jokubauskaite after Bernhard Rieder and Theo Röhle in Digital Methods: Five Challenges. Here transparency is related to scrutiny, to the ability to access and discuss the method implemented by the tool. A clear box is when an algorithm can be held accountable. Rieder and Röhle’s point is that the methods they call experimental, “in the sense that the results they produce cannot be easily mapped back to the algorithms and the data they process,” can be scrutinized even if we do not understand them “in the same way we understand statistical concepts like variance or regression”. The typical example here is deep learning. Others call this post-hoc interpretability. We can scrutinize and learn the method from the outside, even if we cannot know it from the inside. I like this version of the clear box because in many ways, the explanations provided by academic papers on algorithms are far from satisfying anyway. For instance, the common way to rationalize force-driven placement algorithms is not only false, but also unable to account for the results they produce. How is that a good explanation, then? I will not develop that point here, but I want to mention that explaining an algorithm from the inside is often better at providing the comfort of a rationalization than at accounting for what the algorithm actually performs, which is more important. However, this version of the clear box assumes that tools implement methods, which is too naive. It does not account for users repurposing devices for their unexpected needs, in unexpected ways.

There is an apparent tradeoff between these two versions of the clear box. Either you ground transparency in the process, ignoring the results: you can then explain repurposing but not scrutinize. Or you embrace post-hoc interpretability, grounding transparency in the results: you can scrutinize the method but you ignore unexpected uses. I do not have the knowledge to engage with this discussion much further. It is possible that there is a middle way, or that these two conceptions correspond to two different kinds of devices, for instance close-ended and open-ended.

Anyway, it seems that there is no obvious definition of a clear box, no obvious solution to the problem of blackboxing. We have identified three issues. The first is what Rieder and Röhle call “the classic two cultures problem: even if specifications and source code are accessible, who can actually make sense of them?” It also deserves to be extended. Even inside one culture, access to source code is no guarantee of understanding anything. All developers know that you cannot always understand your own code, especially when you wrote it a long time ago. We argue it is not a culture problem in itself, even if cultures do matter. It is just that access to the method is necessary but not sufficient. The second issue lies in the fact that tools, and especially open-ended (exploratory) devices, do not strictly speaking implement a method and are commonly repurposed by users in unexpected ways. This prevents us from grounding transparency in method scrutiny, because the method does not necessarily match practice. Last but not least, the third issue is the relativity of transparency. What is transparent to some is opaque to others and, importantly, vice-versa. If only for that reason, there is no universal clear box.

Transparency as predictability

Jokubauskaite notes that some researchers criticize the unpredictability of force-driven layout algorithms. This is a reasonable point, but it misses what is essential. She narrates: “While very popular and having different layout versions, the force-directed layout algorithms have […] been quite extensively critiqued. Krzywinski et al., for example, point to the reliability of force-directed layout visualisations. They state that ‘the effectiveness of these methods is reduced by inherent unpredictability, inconsistency and lack of perceptual uniformity’. Furthermore, they argue that the unpredictability of such network visualisations is influenced by the fact that they are ‘driven by an aesthetic heuristic that can influence how specific structures are rendered’ and that ‘different algorithms generate very different layouts of the same network’ […]. It needs to be noted, however, that even the same force-directed algorithms may produce different final visualisations of the same dataset […], as they are ‘notoriously brittle: they have many parameters that can be tweaked’ and ‘[t]he result varies depending on the initial state’ (Jacomy et al.).”

Predictability is a non-obvious property. Firstly, it must not be absolute. If the results of algorithm A are fully predicted by algorithm B, then they are technically equivalent. We often use algorithms precisely because we cannot predict their results. That is how they are useful to us, for instance by unveiling a hidden structure. But predictability could mean that, in a series of settled cases serving as a benchmark, we know what to expect. The representativeness of that benchmark then becomes a characteristic of predictability, which leads to a second point: predictability can easily be engineered. There are obvious techniques to make a non-deterministic algorithm deterministic, and vice-versa. You can give or take predictability easily, but only in a superficial way, because in a practical situation, predictability also assumes a form of continuity: small variations in the input should produce small variations in the output. But engineering predictability typically breaks this rule. When a process is inherently random, because it depends on arbitrary initial conditions, you cannot fully hide that randomness. The technical predictability you can obtain is a form of lie and is detrimental to empirical situations. Thirdly, sometimes some things have to be unpredictable so that others are predictable (I am resisting quoting a misinterpretation of Heisenberg’s principle). Some quantities are expected to be predictable while others are not. This is typically the case with force-directed placement algorithms, where visual clusters are fairly predictable even though exact coordinates are not.
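Here is a minimal sketch of that engineered predictability, using a hypothetical stand-in function rather than a real layout: fixing the random seed makes the output perfectly reproducible, yet a tiny change in the input still reshuffles every coordinate, so the continuity that practical predictability assumes is not obtained.

```python
import random

def toy_layout(nodes, seed=42):
    """Stand-in for a process whose result depends on random initial conditions
    (not a real layout algorithm)."""
    rng = random.Random(seed)  # fixing the seed engineers determinism
    order = sorted(nodes, key=lambda n: rng.random())  # random processing order
    return {n: (rng.random(), rng.random()) for n in order}

# Same input, same seed: the result is perfectly reproducible.
print(toy_layout(["a", "b", "c"]) == toy_layout(["a", "b", "c"]))  # True

# One extra node shifts the random draws, so every coordinate changes:
# determinism alone does not give the small-change-in, small-change-out
# continuity that empirical work relies on.
print(toy_layout(["a", "b", "c"]))
print(toy_layout(["a", "b", "c", "d"]))
```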

In the case of force-directed placement algorithms, exact coordinates cannot be predicted by design, because iterative placement is both the source of randomness and the reason why the method is efficient. Criticizing the unpredictability of coordinates is pretty superficial when the visual clusters are predictable and are precisely the kind of structure you try to unveil. That kind of unpredictability is not relevant enough to claim opacity, because the key result can still be predicted. On the contrary, the predictability of a series of known quantities seems a major way to ground transparency.

Jokubauskaite notes that users seem pretty good at predicting certain aspects of the layout. She notices “a tendency in decision making [related] to the aspects directly observable from the interface, for example, ‘I know that it is going to the middle, because it is being pulled by gravity’.” She notes an important difference between predicting what the tool does and understanding the method. “Examples such [as this] can be seen as contributing to an observation of a tendency among Gephi users to rationalise the methodological decisions based on the tool itself, in contrast to providing justifications with regards to network analysis as a method.” I am not sure of Jokubauskaite’s position on that matter, but the way I see it, predictability is more important than understanding the method, because the former is necessary to the latter. There can be many ways to provide “justifications with regards to […] a method”, some of which are superficial or just plain wrong. But the ability to predict enacts a true form of understanding, even if it is quite empirical. If one cannot predict a method, they do not really understand it. But if predictability is necessary, it is not sufficient. Understanding the method also provides a necessary frame for interpretation. Nevertheless, the ability to predict contributes to understanding the method. I think that Gephi, by the interactive user experience it offers, supports active learning of the predictable features of layout algorithms. But of course, it is much more effective as a complement to other forms of learning (documentation).

Knowledge, ignorance and transparency

It may seem paradoxical but blackboxing produces transparency – just not the kind of transparency we opposed to opacity. It produces the transparency of the mediation. The typical and literal example is glasses: you do not see them, you see through them. The mediation can be said to be transparent because you forget it. Your glasses become an organ, a part of your body. This phenomenon of incorporation is well known in cognitive science and applies more generally to our use of technology. A pen is not transparent in the visual sense, but it is transparent in the sense that when you hold it, you feel the paper you write on, you feel it through the pen, but you forget the pen. The pen is not writing, you are writing, through the pen. For similar reasons, the car you drive is a part of you and you are on the road; and your web browser is not online while you sit on a chair looking at it, you are on the web reading this blog post. All those mediations are like windows: we see the world through them, but we do not see them per se (at least when the coupling works, which is not always the case). Their transparency is us forgetting about them, and forgetting that they change our perceptions – which is why we use them in the first place.

The blackest box is the invisible one. Not only can we not open the box, we forget it exists. Technology then becomes like magic: it “just works”. And we hate being reminded it exists, because it feels like a part of our body suddenly stops working properly. We hate it when our smartphone loses internet, when Google or Facebook go offline, when the mouse does not move the pointer, when a letter on the keyboard stops working… It is a sign that some technologies are blackboxed to the point that we usually forget their existence. And it is a good thing, because that is how they are useful. We do not want to see our own glasses; their transparency is necessary to their function.

Unfortunately, even when that kind of transparency is necessary, it raises issues. In fact, it raises the exact same issue as blackboxing: by not seeing the technology, we ignore what it performs, we lose the ability to scrutinize it, to criticize it, and to hold it accountable. A transparent technology gradually fades into the background of our lives, as if covered by Harry Potter’s cloak of invisibility. Making it visible again takes a lot of effort. Surveillance capitalism has largely leveraged that effect (and convenience, but that is a different point). That transparency is problematic because we are doubly ignorant: not only do we ignore how it works, we also ignore that we ignore it.

“Real knowledge is to know the extent of one’s ignorance,” said Confucius. In many ways, modern scientific knowledge starts with the knowledge of ignorance, by managing the limits of knowledge. The requirement of falsifiability serves this purpose, and more generally all methods of validation. Even in Mathematics, knowability has become a major concept, as in Gödel’s incompleteness theorems. That kind of transparency blackboxes by preventing the knowledge of ignorance. And blackboxing produces that kind of transparency by hiding mechanics and thus putting them out of scrutiny’s reach.

It is very unfortunate that we have two notions of transparency, one opposed to opacity, the other aligned with it. And we need both of them to discuss Gephi’s blackboxing. I cannot just pretend one of them does not matter. In order to bring clarity, I will spell out both notions and stop using the word “transparency” as often as I can.

  1. Transparency as mediation invisibility. It is opposed to visibility. Making mediations invisible blackboxes because it hides the box in the first place, making your chances of opening it even lower. It blackboxes by preventing you from knowing that you ignore something, namely the mediation itself. As we argued, while this is problematic, it is also necessary to the proper functioning of those mediations.
  2. Transparency as the ability to scrutinize methods and processes. It is opposed to opacity. Black boxes typically resist scrutiny. As we have mentioned, it is unclear whether the object of scrutiny is the method or the process, but in both cases we must be able to see through, to open the box.

In a nutshell, we have two kinds of black boxes: those we do not see, and those we cannot open. Some boxes are both.

In Gephi we tried to resist invisibilizing mediations while still leveraging them. It means that we wanted users to benefit from the active learning of interacting with the network through a mediation like a force-driven layout algorithm. This experience of seeing the network unfold and reach equilibrium while still being able to interact with it has certainly contributed to Gephi’s success. In this context the mediation is an asset, but it tends to invisibilize the underlying method of placing nodes with a force-driven placement algorithm. This is the reason why we required the user to trigger that feature and to always confront the existence of multiple choices and settings: to embody the method in the user experience, as frustrating as it can be, and to resist its invisibilization. But as we pointed out, our attempt might be a relative failure because it fights a losing battle against human nature.

Post-hoc interpretability

Jokubauskaite writes: “[T]his thesis would like to argue that in the case of Gephi, especially, the notion of blackboxing is not as clear and universal. […] [T]he interviewees reported on not being sure of how certain aspects of the tool work, even taken into consideration the aim of the developers and the possibilities in the tool. They have, more specifically, reflected on often using the tool without making the decisions consciously and not being aware of what and why they should question in the process of implementing the tool-use in their research.” She notes three things at the same time: (1) users were successful at using Gephi, (2) but they do not know how it works, and (3) they are aware of this lack of knowledge. This awareness of ignorance that is key to resisting invisibilizing mediations is, in my eyes, a major point against Gephi being a black box. Surely, as Jokubauskaite writes, “[i]t has been articulated that the tool is ‘complicated and there’s a lot of things that you have no idea what is happening and what is going on.’” But I want to stress that as long as you are aware of it, it is not such a problem. And it seems to be the case with Gephi, which I am glad to know.

Some things in Gephi are blackboxed by opacity, in the sense that their internal functioning is hidden. I see most of those as an editorial choice to favor post-hoc interpretability. I have written a little bit about that concept on this blog. By post-hoc interpretability, I mean the ability to understand an algorithm from the outside, by benchmarking it and/or engaging with it over many iterations and situations. It does not require knowing the internal mechanics, yet it can lead to a high degree of prediction. It also fulfills the need for scrutiny, and can make algorithms accountable. It can be much more costly to achieve than reasoning from the internal mechanics of the algorithm, but it is sometimes our only option. In particular, it is the primary mode of understanding deep learning algorithms, whose internal mechanics are too complex to be interpreted by humans.
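A minimal sketch of what I mean, under the assumption that we can only call the algorithm, not read it: treat it as a black box, feed it controlled inputs over many runs, and learn its behavior from the input-output relation. The `mystery` function below is a hypothetical stand-in for any opaque algorithm.

```python
import random

def probe(black_box, inputs, runs=20):
    """Characterize an opaque algorithm from the outside: run it many times
    on controlled inputs and summarize how its outputs vary."""
    report = {}
    for label, data in inputs.items():
        outputs = [black_box(data) for _ in range(runs)]
        report[label] = {
            "min": min(outputs),
            "max": max(outputs),
            "mean": sum(outputs) / runs,
        }
    return report

# Hypothetical opaque algorithm: we never read its code, we only observe
# that its output grows with the size of its input, within a stable range.
def mystery(data):
    return len(data) * (0.8 + 0.4 * random.random())

inputs = {"small": range(10), "medium": range(100), "large": range(1000)}
for label, stats in probe(mystery, inputs).items():
    print(label, stats)
```

The knowledge obtained this way is empirical, but it supports prediction, comparison and scrutiny without ever opening the box.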

Note that “post-hoc” does not mean “a posteriori.” This mode of knowledge is not about looking at the output. It is about looking at how the output depends on the input, looking at that loop, over many iterations. It is “post” something because it requires the algorithm to be implemented and executed, as opposed to a method defended on a theoretical level.

My prototypical example for this mode of understanding is of course force-driven placement algorithms. I know them both by their internal mechanics and by the results they produce. For me, the internal mechanics and the rationale that comes with them are unable to explain the results they produce. For that reason, the real way I know these algorithms is by having observed the results they produce in many different situations, maybe more than anyone else. It might be surprising, but I think that my expertise comes more from this experience, which I share with Gephi users, than from understanding the underlying equations, which I share with the computer scientists who publish the papers specifying these algorithms. But of course the extent of my knowledge goes much beyond what a Gephi user can see, if only because when I developed algorithms like Force Atlas I saw how countless unreleased variations of the algorithm impact the result. The extent of my understanding is very different from that of a Gephi user, but the mode is the same: post-hoc interpretation.

Gephi was initially forged after my own use for exploring empirical networks (mostly from the web). It has changed a lot since, but the influence of my own perspective is still largely present in today’s version. I have tried to transfer my own experience to the users. I did not try to transfer my knowledge, as I would do in writing a text. I did not try to transfer my opinions either (how could that be?). I tried to transfer the conditions of my empirical engagement so that users could interact with their data the same way I interacted with mine. In the multiplicity of my own experience, I selected the most critical and transferable aspects, reducing the large territory of my explorations to a narrower but more operational set of features, and by doing so I did some blackboxing. It was necessary, but that is another point. As Gephi grew and multiple influences applied to it, the empirical understanding of layout algorithms became a key feature. We made it the entry point to the world of network science. Gephi’s blackboxing was an editorial choice to favor post-hoc interpretability, supported by active learning, over rationalizations based on the method, supported by reading or watching documentation.

Blackboxing is a design resource

Designing a tool requires a selection. The process both opens up possibilities and narrows them down to a set of coherent, consistent features. The reduction to a set of features is inevitable because of two limited resources: the time budget for making the tool, and user attention. We cannot develop too many features, and users cannot manage too many features. These choices made by the designer are already a form of blackboxing, because they hide alternatives. But a device is a curated object: by definition it has limited boundaries, and the process of making it does not fit in the device itself. The design process has many influences and attachments, but the device itself must be free from those links in order to circulate. Which also explains why it can be repurposed. A device that is not free from its attachments, that cannot be repurposed, cannot be used by anyone other than its designer. Any device requires a form of enclosure in order to be shared. It has to be packaged. Even code libraries, the most open objects you can find in computer science, are packaged. As we have seen there is no universal clear box, and in that sense any box is, in some way or to some people, a black box.

But blackboxing is not only a necessity, it is also one of the most powerful design tools. It is even at the core of object-oriented programming. By packaging certain things, a designer can hide complexities and reduce the cognitive toll of certain features while retaining their usefulness, or even improving it. Not all features are explained before the user tries them. Actually, documentation is almost never read before a user tries a device. It is very productive to hide a feature until the user needs it, and some features are so contextual that they work as mediations. For instance auto-completion: queries are suggested as soon as you start typing in a search field. Completely hidden as long as you do not need it, it appears just in time without interfering strongly with your action, so that you can ignore it. But it can also quickly become a part of your experience, almost invisible because you never think of it, yet you would feel it missing if it were suddenly disabled. Yet there is quite a lot of complexity and agency behind this seemingly simple feature, as you can imagine. Are you sure you understand where Google search suggestions come from, and how much they impact you? Anyway, despite the issues it raises, blackboxing is extremely efficient at providing more features, more value to the user.
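As a toy illustration of how much can hide behind a feature like auto-completion, here is a minimal prefix-based suggestion function; real search engines layer ranking, personalization and query logs on top of this, which is precisely where the hidden agency lies.

```python
def suggest(prefix, history, limit=5):
    """Minimal autocompletion: rank past queries starting with the prefix
    by how often they were typed. Real systems hide far more than this."""
    prefix = prefix.lower()
    counts = {}
    for query in history:
        q = query.lower()
        if q.startswith(prefix):
            counts[q] = counts.get(q, 0) + 1
    # Most frequent matches first, truncated to a handful of suggestions.
    return sorted(counts, key=counts.get, reverse=True)[:limit]

history = ["gephi layout", "gephi borg cube", "gephi layout", "graph theory"]
print(suggest("gep", history))  # ['gephi layout', 'gephi borg cube']
```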

Beyond the necessity to curate features, blackboxing is a major design resource. As surprising as it sounds, it can even contribute to less blackboxing. It all depends on how it is used. Firstly, a feature can be blackboxed in a given context and not in another one. The goal here is typically to smoothen the learning curve. Most complicated software applications have alternative user interfaces for beginners and advanced users. When you get more comfortable with the main features, you can switch to a state of the graphical user interface that exposes more of the internal mechanics and provides more control at the expense of simplicity. Secondly, not all features of a device are related to its core purpose. Text editors include drawing capabilities, and image editors include text editing features. These secondary features can be efficiently blackboxed to make room for the core features. By requiring less attention, they allow the user to focus more on what is important. For instance in Gephi, the search feature has autocompletion. No one ever argued it was making Gephi more opaque, because here it is secondary, while in Google Search it is not. Applied to secondary features, blackboxing can help make a device less opaque.

Convenience

Like blackboxing, convenience is a resource for the designer. Ultimately, the convenience offered by Gephi is where I situate the problem that others see as blackboxing. I have argued that Gephi cannot be significantly less blackboxed than it is, and that its opacity is rooted more in epistemic practices than in its design. Jokubauskaite’s observations are aligned with this analysis. But if Gephi’s design is responsible for incentivizing opaque fast-food practices, it is because of the convenience it offers.

There is a lot to say on technological convenience, notably in relation to surveillance capitalism. From Google to Uber and voice assistants, convenience has been a driving force in our technological environment. See that recent piece for instance. I will not engage with a general discussion on convenience, but I can see why some scholars place Gephi somewhere on the Uber axis of convenience, albeit quite far away from those highly popular and profitable giants. Indeed Gephi offers convenience, and convenience is problematic. But that is not where it ends.

People are not fools, not even Gephi users. When repurposing Gephi as a pushbutton visualization tool, researchers do not ignore that they trade quality for convenience. They could hardly ignore it considering all the steps we have left in their way, intentionally, for that specific purpose. But there is a better reason: they are not fools. Clearly the tradeoff is worth it, which means that Gephi is still convenient enough despite its somewhat frustrating user interface. Users are not puppeteered or tricked by Gephi, they willingly trade for convenience. And they are right to do so, because contrary to Google and Uber, Gephi does not use convenience as currency. It is convenient because a convenient tool is more useful. And we even sacrificed some convenience to resist opacity. People have their own reasons to trade away quality for time, and I do not consider myself responsible for the existence of digital glitter. As I already argued, that is a losing battle against human nature. But convenience has a positive side, one that not all scholars have an interest in supporting.

Something clicked when I read, in Jokubauskaite’s work, that “the interviews showed Gephi to be used in somewhat of project-to-project cases.” It is a core strength of Gephi: you can engage with it without much background knowledge, use it with some degree of success, and move on to something else. It is convenient enough that it supports a form of disposable use. Now, the question becomes: what is the quality obtained from this low level of engagement? Is this form of fast-food consumption good for science? I suspect that to Gephi’s critics, this practice is seen as detrimental. On the contrary, I think it is beneficial. I believe that this is the main sticking point about Gephi.

I suspect the core disagreement is on how to maintain a high scientific quality. I am in the business of enrolling a wide crowd of people into a better engagement with data (network data). Expert researchers who critique Gephi are in the business of demarcating good from bad science. They see me as a glorified enthusiast, I see them as gatekeepers. We do not have to agree. Just note that I stand behind Gephi for the same reason they criticize it: in the name of better science, not in spite of it.

I stand behind Gephi’s convenience because it supports low-risk, low-reward research strategies. And we are in great need of those. Digital traces are a fickle material. Contrary to the census or poll data so often used in sociology, we do not control the conditions of their production. There is an inherent risk in investing time and energy in them. So we must adopt more flexible and iterative research designs. I am personally inspired by agile methods, but mixed methods also propose a relevant framework. It comes with more iterations, hence less time for each iteration. But it also comes with more heterogeneous data, requiring a wider palette of skills, methods, and tools. Hence less time to dedicate to each tool. We did not choose that situation, but we must adapt regardless. The situation calls for tools that we can use at a lower cost, even if the outcome is more superficial. These tools are used early on, when monitoring and exploring are the main goals of engaging with data. Once relevant data of good quality is identified, which is never given in advance, the research design can move to more traditional forms with more specialized tools. This agility is what Gephi’s convenience achieves, and it is beneficial to science.

Note that Gephi can also support high-risk, high-reward research, but that is a different use, and other similar tools are probably better suited to it (Cytoscape, Ucinet…). However, even in this context, users have an interest in transferring their skills from a low-engagement context to a high-engagement context. Once you have learned Gephi, you have an incentive to use it again. Surveying the Gephi user base showed us that most users move from small networks, easy to interpret, to much larger networks, requiring more skills.

Mobile, repurposable tools like Gephi disrupt research brokers. Some researchers seek the position of obligatory waypoint, to defend the quality of their field, and/or to gain academic power, thus funds, and thus freedom. I can understand that. But Gephi empowers enthusiasts, which inherently drains influence from them. I believe today’s beginners are tomorrow’s experts. I believe that playing leads to understanding, that repurposing a tool cultivates critical thinking, that high levels of engagement start with low levels of engagement. I have a Brechtian ideal of knowledge: I see research brokers as Brecht’s (imaginary) Galileo, who kept knowledge captive in order to preserve power. I think this is why Gephi runs into academic gatekeeping. Convenience empowers people that academia sees as illegitimate (data journalists, activists, students…). But I think that it is fine to suck at Gephi; this is how you learn, and how, ultimately, you make better science.

“Science knows only one commandment — contribute to science.”

B. Brecht, in Life of Galileo

Why I use the term “big data”

I do not like the term “big data” but I use it anyway, though not systematically. I share my reasons with the community of research engineers.

From the perspective of an engineer, big data is the kind of technical term that the marketing guys repurposed for their own needs. A bullshit word. As an engineer in the industry, you tolerate it because you understand its purpose: selling, which ultimately pays your high salary. But as a research engineer, you try to maintain a state of affairs where words have a meaning, and the term “big data” does not.

Big data had a meaning, and lost it. The term was coined in the nineties and referred to something specific: data sets too big for the common computer. Engineers had to develop entirely new technologies to overcome this practical limitation. This meaning is still relevant today. When data do not fit in a computer, we have to use a specific infrastructure (e.g., cloud technologies), where normal strategies do not apply, which requires specific competences. These skills are still different from everyday computer science, which justifies a new field with its own name, “big data”.

In the private sector, every company started using the label “big data” because it made you look innovative, regardless of the actual “too big for a computer” problem. At some point engineers accepted it had become another bullshit word, and moved on.

In the social sciences, we have a different problem. Scholars started using “big data” when exotic competences became necessary to deal with digital traces. But exotic to them, not to engineers or computer scientists. In short, “big data” just meant “data science”. So someone with a list of items too long to read, and thus requiring computer processing, would call it “big data”. This also conflicts with the original meaning. So research engineers stopped using the term, and moved on.

At some point the term had become a marker of non-academic language. The underlying reason is legit: its meaning is disputed between at least three different fields. Only someone unaware of that would use it, naively causing painful misunderstandings.

But from the techno-anthropology perspective, “big data” is an empirical thing. It is no less and no more than what people say it is. And in that sense, it has a definition. Ruppert and Scheel remark that “while there are many definitions, statisticians usually adopt what is commonly referred to as the “3Vs” of big data: huge in Volume, high in Velocity and diverse in Variety of types and formats of structured and unstructured data”, citing a paper by Kitchin. Like them, I need the term to be able to observe that big data performs, enacts politics.

It makes sense to me to frame network analysis as part of the world of big data, because it enacts similar politics. Apart from that, I like Kitchin’s characterization. In that sense, I am fine with defining “big data” as data that are at least one of the following:

  • Massive, i.e., too big for usual devices
  • Highly granular, i.e., representing a population of beings, non-aggregated
  • Dynamic, i.e., finely described over time
  • Relational, i.e., forming networks

This definition is general enough to cover the three meanings, even if they disagree on the boundaries. But empiricism can rarely afford strong demarcations, so I am happy to make this sacrifice in order to gain the ability to describe and reflect on how people use data. Even if it means restating my definition every time.

Digital Glitter, the Curse of Big Data Visualization

In this piece I argue that in the social sciences and humanities, data visualizations are cursed by a latent incentive to reframe them as spectacular outcomes, when in reality most are mere by-products of scholarly work. Even though data visualization can be highly valuable as a research publication (which requires expertise and commitment), I criticize the repurposing of intimate, exploratory imagery as a marketing asset.

“To be clear, our point is that discursive struggles often work together with digital devices such that the politics of method cannot be reduced to language games.”

Evelyn Ruppert and Stephan Scheel in The Politics of Method: Taming the New, Making Data Official

Big data visualization acts in more ways than just conveying information. I was recently at the IT University of Copenhagen (ITU) where I had the chance to attend a seminar where Evelyn Ruppert presented the paper quoted above, written with Stephan Scheel. They propose a conceptual framework to understand the politics of methods, and engage it in two empirical examples from their fieldwork. It puts words to an aspect of visualization practices in social science that I usually struggle to grasp. Coincidentally, during that very same seminar, just after Ruppert’s presentation, such a hard-to-grasp situation presented itself.

A few PhD students had the opportunity to present their work in 10 minutes. One of them had a network visualization to show. It was a colorful screenshot of a decently sized Gephi network on a black background, hard to read because of the superposition of many labels, but displaying a clearly clustered structure. That person just said: “We also do these kinds of fancy visualizations”, and then moved on to the rest of the slides. No explanations were added. Obviously that person wanted to showcase that part of their work but, pressured by time, skipped the comments. I do not blame anyone here, because I do not take this vagary too seriously. It is actually quite funny. Still, there is something worth unpacking here.

The situation presents a paradox. Why choose to showcase a visualization, if only to denigrate it? The author might either decide that the visualization is worth it, and then display and explain it, or not, and then leave it aside. The author must have reasons to showcase the visualization. So why hide those reasons and denigrate the visualization instead? And what are those reasons? There are no simple answers.

Visualization failure is an easy but poor explanation. There are multiple ways a network visualization can fail to meet expectations. When failure happens for a good reason, showcasing it is relevant. Failure is a valid outcome of a science experiment, and in practice it is legitimate to showcase, for instance, how network analysis turned out unable to answer the research questions. However, in such a situation there is still a point, even if it involves a negative appreciation. Our problem is not that some denigrate or criticize network visualization. Open criticism is a valid argument, and thus does not elucidate the paradox of an absence of arguments.

The problem lies with digital glitter. Think of it as a convenient name we can use to give some existence to an invisible feature of data visualization. I gradually came up with this notion to reflect on my own practices and notably on what data visualization performs in terms of public relations. I am well aware that network visualization earns me “something”, and I started to call that “digital glitter”. I see it as a feature of both documents and people. Like a sticky matter, it appears first in certain objects (images, videos, interactive devices…) and from there moves onto people by contact. Because I created tools and data visualizations, I became identified as a data scientist and gradually got digital glitter on myself. I would say that the person in our starting example wanted to get some digital glitter too. The metaphor resonates because like literal glitter it can make you shine (albeit in a vulgar, non-specific way), but too much of it and it turns into a cruel, poisonous joke. Like make-up or a dress code, whether you like it or not, it seems required in certain social situations. The notion of digital glitter helped me situate the problem. However, the analogy does not help much to unpack what is at play. Fortunately, Ruppert and Scheel’s framework on the politics of method does.

In their paper, Ruppert and Scheel come up with a detailed framing of the pair of pictures reproduced above. As you can see, those images are oozing with digital glitter. The authors narrate how they were employed: “Rather than charts, numbers or line graphs, [the person] displays a three‐dimensional heat map that has become a popular visual form and which shows a rather obvious pattern – the density of population in the inner city differs during the day versus night. The data, analysis and work that went into producing the visualisation are not discussed. But the deployment of a visualisation is not to settle technical questions. Rather, the visualization is a strategy to convince others that working with big data […] requires a change in ‘paradigm,’ which the visualization performs.” In this case, the paradigm shift is a change from statistics to modelling. The key point is that the role of visualization is specifically not to make a technical point, but to convince the audience of a need for change. “The demonstration shows how innovations need their diagrams not only to represent but […] as means to build allies and to persuade others.” As they accurately note, the visualization performs a change. It produces an effect other than conveying a message, illustrating a point or providing visual evidence. Its agency goes beyond the classic role of data visualization: sharing knowledge.

Data massiveness, high granularity, dynamicity, and the presence of relations have their own agency. When I asked Ruppert which features of the visualization helped convince, she mentioned dynamicity and high granularity, and remarked that there were no labels in the picture. In this specific regime of persuasion, some features get a crucial role and others lose relevance. Key features all contribute to “claimed self-evidence”, a property described as “a seamless correspondence between the visualization […] and the reality” or, as in this quote from Johanna Drucker, as a way to render “the phenomenal world (as if it) were self‐evident and the apprehension of it a mere mechanical task”. The article frames it as a trick, either John Law’s “realist trick” or Donna Haraway’s “god trick” to see “everything from nowhere”. I am reluctant to frame it as a mere trick, however, for at least two reasons. Contrary to a trick, the effect lies in the object (the visualization) and does not need to be enacted in person. And contrary to a trick, seeing through it does not dissolve it. I acknowledge that a form of mystification is at play, but I challenge its contingency. Self-evidence does trick you into forgetting that knowledge is always situated, but the effect is deeply rooted in materiality, and cannot be dismissed as a mere illusion. The article demonstrates that the presence of certain elements and the absence of others reconfigures visualization to perform better in a specific regime of persuasion. This reconfiguration draws on a correspondence between the 3D heat map and Ljubljana. This connivance of the image with the field may be partial and constructed, but it is nevertheless real. The effective affinity between the image and the phenomenon fuels the visualization’s persuasive power. The reconfiguration emphasizes this correspondence and hides the rest, but it does not invent it on a purely semiotic level. The connivance is material-semiotic, and firmly roots the self-evidence effect.

My point here is constructivist and, as often, requires walking a perilous edge between relativism and positivism. There is a lot of room between “Big Data is bullshit” and Chris Anderson’s End of Theory. I resist both the relativist and the positivist readings of the situation. The caricatural relativist thinks big data persuasion is marketing fabricated by actors. In this perspective, even if actors mobilize convenient visualizations, self-evidence is ultimately a social construction. I disagree, because self-evidence is grounded in material-semiotic properties of visualizations. On the other end of the spectrum, the caricatural positivist thinks big data is the new paradigm of a datafied reality. In this perspective self-evidence is a legitimate feature of visualization insofar as data fit reality, even if not perfectly. I disagree, because for me data remain heavily situated in that process, even if actors do not acknowledge it. But I agree with the relativists that big data enacts politics, and with the positivists that big data reconfigures our practices. I see granular or complex data visualizations as artifacts with their own political agency, a specific and new configuration of influence. Digital glitter is a possible name for that specific agency, whispering promises of unmediated knowledge, predictive power, and increased control. The existence of such agency is neither surprising nor specific to big data visualizations, but for some reason it seems to cause a lot of trouble in social science.

Big data visualization is problematic to social science in particular because we disapprove of the fantasies of omniscience and control it promotes. The change it brings to our methodological traditions challenges the state of our affairs. It conflicts with our values. Worse, it fuels opposite agendas. It tends to legitimize physicists and computer scientists taking charge of social questions. It challenges the role of social theory, enacting a new world order where empiricism is datafied and fieldwork obsolete, as if such machinery could work. Big data visualization is not on our side, the side of Noortje Marres’ “radical empiricism”. You may feel how strong the temptation is to reject big data visualization and fight against its influence. A nice but naïve attitude, insofar as we also need highly granular, dynamic, and/or relational data, as misaligned as their political agency might be.

Why we visualize big data

Big data is more than an empirical opportunity; it has become a necessity to understand social phenomena. The moment when digital traces could be seen as a limited echo of “real life” is way behind us. We can no longer argue that Facebook friendship is not “real friendship”; we must now face how it contributes to reconfiguring friendship, kinship, and more. The digital even performs beyond its material infrastructure (Facebook impacts you even if you are not on it). Or to put it another way, the “non-digital world” does not exist anymore. This is not to say that any social study must engage with the digital, but that some need to. Situations will happen more and more often where an a priori non-digital research design requires engaging with digital traces. We do not always get to choose whether or not to engage with the digital; sometimes we just have to. The same goes for many other constraints attached to investigating social phenomena, and of course that situation itself is not new. But the kinds of constraints we face change with time, and big data embodies the newness of digital-specific conditions. The necessity to engage with that material leads us to face data that are massive, and/or highly granular, and/or dynamic, and/or relational. We do not do it to get digital glitter, but because certain inquiries call for it. That being said, the necessity to engage with big data does not imply visualizing it.

Before all else, visualizing big data is a condition of empiricism. More precisely, the double uncertainty we face with digital traces calls for it. Firstly, we are unsure of our research questions. The situation is not new. Exploratory data analysis had formalized it before the data deluge. But it has become more prevalent. The proliferation of data sources gives rise to more opportunistic and open-ended research strategies. The “data sprint” format familiar to Richard Rogers’ Digital Methods Initiative is rooted in this situation. Secondly, we often do not know whether our data sets can answer our research questions. It is a direct consequence of scholars losing control over the production of the data they use. Census and audience measurement followed procedures forged with (and partially for) social scientists. They were produced in order to know the social (even if not necessarily in an academic perspective). On the contrary, many digital traces we study have been produced by the industry for its own needs, with procedures alien to the social sciences. We repurpose them by necessity and opportunism, but we cannot presume that the conditions of their production allow us to draw legitimate conclusions. This second uncertainty also fuels the first one: committing to research questions is a riskier investment if you have no guarantee that your data will correspond to them. The pervasiveness of repurposable data incentivizes more open-ended research designs. In such a context, visualization is critical. Exploratory data analysis is of course a visualization-based approach, but even before the analysis step, we need to monitor data to assess basic quality and validity. Visualization is efficient for this purpose. It requires few assumptions, produces results fast, and allows quick iteration. Before analysis, we do not want to invest too much time and energy in interpreting the data. We just need a low cost, low reward strategy to engage with it. We will only gradually mobilize high cost, high reward strategies once we have checked that the repurposed data are usable. Data visualization can also be high cost, high reward. But it is not the same kind of data visualization.

Exploratory visualizations are not meant to be settled. They are usually produced inside a framework where the user can strongly interact with the data, whether it is a dedicated software such as Gephi or Tableau, or an open platform such as R or Jupyter notebooks. Such a framework always prioritizes flexibility over the efficiency of conveying a message. It takes care of most graphic design decisions so that the user can focus on navigating the multiple facets of the data, including but not limited to Shneiderman’s classic mantra “Overview first, zoom and filter, details on demand”. Algorithms are also often mobilized to process the data into new facets. This step of exploration is a material-semiotic engagement with the data. It involves the body as much as the mind. It is intimate for that reason, and because interpretations are still idiosyncratic. Like an unfinished piece of art, it cannot be properly enjoyed by an audience.

The process of settling an interpretation will gradually narrow down the number of relevant facets, strengthen a narrative that can be shared, and crystallize factual elements that can circulate. Those might take the form of a visualization, but they do not have to. During this process, more and more people become able to engage with the data, and flexibility is gradually dropped to the benefit of circulability. It is important to understand that this process is necessarily gradual, because that explains why it takes place inside the exploratory framework. This continuity between exploratory and explanatory explains why the same tool can serve two apparently opposed purposes: investigating for yourself and conveying a message to others. Exploratory visualizations are not meant to be shared, even if the frameworks producing them also allow building explanatory visualizations.

Sharing exploratory visualizations makes sense in certain situations. Firstly, as we have seen, the border between exploratory and explanatory is blurry. When digital traces are specific and data science becomes specialized, a division of labour emerges between domain experts (who know the phenomenon) and data scientists. Their collaboration requires sharing the exploration. Secondly, it is sometimes useful to account for the exploratory step, to show what science in the making looks like. For instance, a Gephi screenshot. Its purpose here is not to be understood, and it must not be framed as explanatory. Thirdly, exploratory visualizations can sometimes be attuned to a wider audience. The most classic example is Hans Rosling’s Gapminder. The datascapes we have developed at the médialab are also of that kind (eg. La Fabrique de la Loi). It must be noted however that this requires a massive investment in graphic and interaction design, which social science labs generally cannot afford.

Releasing explanatory data visualizations is expensive. It requires a specific expertise, and graphic design is a rare resource in most social science labs. But “fancy visualizations”, if by that you mean screenshots featuring massiveness, high granularity, dynamicity or networks, are not rare at all. Exploratory visualizations are a by-product of every scholar’s normal work. So, as a shortcut, some are tempted to repackage those as research outcomes. Or more precisely, as marketing assets. Unfortunately it fuels big data fantasies for free. My starting anecdote is of that kind. Every time we show a big data visualization without any context, every time we prey upon digital glitter by smuggling in an otherwise pointless “fancy visualization”, we let big data enact its politics. The trick is cheap but it costs us a lot. If we are to drink the cursed chalice of big data visualization, let us at least get something from it. For instance, reconfiguring our immune system to resist the poison.

Pressure to get digital glitter

I tried to understand the latent incentive to get digital glitter, but I failed. I do not think that the politics of methods in academia are the same as those Ruppert and Scheel observed. It seems reasonable to hypothesize that it provides a form of competitive advantage, or the illusion of one. But I doubt there is a one-size-fits-all answer. Even though I observed multiple situations where digital glitter seemed to have an impact, these glimpses did not help me understand why one would go after it.

Since this piece is not an academic publication, allow me to end it on a personal note. I believe that I cannot fully understand digital glitter because of my digital privilege. The notion of white male privilege extends to big data more naturally than I am comfortable with. Because my work inherently secretes digital glitter (lucky me), I do not know what it is like to lack it. I do not need to seek it. I do not have to cheat to get it. Of course I do not regret it, despite a few gatekeepers seeing my work as detrimental to social science. But when I see a fellow social scientist feeling compelled to give up a little bit of their work’s quality in exchange for the trappings of big data, it makes me sad.

What do we see when we look at networks?


With Tommaso Venturini and Pablo Jensen, we submitted the following paper for publication. In a nutshell, we propose and discuss a framework for interpreting networks visually, in the context of a dot-line diagram and a force-directed node placement algorithm, which is a common practice in the social sciences.

Download PDF: What do we see when we look at networks?

Note: you CAN download it freely, but the SSRN website will try to trick you by putting the button in an illogical place so that you register. You do not need to.

We call Visual Network Analysis (VNA) the practice in which you spatialize a network and interpret it visually. Note that this practice is never isolated; it is usually associated with metrics measuring the topology of the network and/or qualitative inquiry into the phenomenon represented or modeled as a graph. But in VNA the visualization plays an important role in the exploration of the data, in the spirit of Exploratory Data Analysis (EDA). This practice is not unusual in the social sciences, especially now that we have access to many relational data sets.

In the typical scenario we discuss, a researcher uses Gephi or similar software and applies a layout algorithm such as Force Atlas 2 to an empirical network of 100 to 100,000 nodes. The expected outcome of the visualization is to obtain basic insights into the structure of the network and formulate hypotheses that could be tested by other means (graph metrics, external knowledge of the phenomenon…).
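For readers who want to reproduce this kind of scenario outside Gephi, here is a minimal sketch in Python. It is my own illustration, not the paper’s setup: it assumes networkx and matplotlib are installed, and it uses networkx’s spring_layout (a Fruchterman-Reingold implementation) as a stand-in for Force Atlas 2, which networkx does not ship. The point is only to show the shape of the workflow.

```python
# A minimal sketch of the typical VNA scenario, assuming networkx and matplotlib.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.les_miserables_graph()          # a small empirical co-occurrence network
pos = nx.spring_layout(G, seed=42)     # force-directed placement of the nodes

nx.draw_networkx(G, pos, node_size=50, with_labels=False, edge_color="lightgray")
plt.axis("off")
plt.show()
```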

The central question we investigate and frame is: how to interpret the network visualization? In other words, what do we see that we can or cannot trust, and what is important that we cannot see?

Interpreting a network visualization requires first working on the semiotics of the image: color and size of the nodes, edges, labels… We provide a few guidelines based on classics such as Bertin’s Semiology of Graphics. On that matter, network visualization is like the rest of data visualization. Things get complicated when we try to interpret the position of the nodes.

The big question – which we could not fully answer – is about the meaning of node positions. Even though we know how the algorithms producing the placement work, it does not mean that we know how to interpret the outcome. We know a number of things from these algorithms’ design, we can conjecture other things from experience, and there are other things we simply do not know.

We know:

  • It is all about the relative distances between the nodes (which pairs are closer or more distant than others)
  • Axes have no meaning (you could rotate, scale, flip the image without impacting its interpretation)
  • Nodes that are closer have a tendency to be connected, but not always, not directly, and there are plenty of exceptions (basically, every long edge represents a pair of connected nodes that nevertheless end up far apart)
  • A. Noack has shown that the visual clusters correspond to modularity clustering.
  • This strategy is bad at representing the asymmetries of directed networks (because visual distances are mutual while topological distances are not).

We conjecture:

  • Force-directed placement algorithms optimize something, but we do not know exactly what. It just turns out that these algorithms are very useful in practice, and people appreciate the insights they get from them, but we have no satisfying rationale to explain that.
  • What is optimized is probably a distance between the nodes, because that is how we interpret the visualization. But we do not have any mathematical expression of that distance.

We do not know:

  • How to explain in everyday words what the visual distances represent.
  • What the distance between the nodes in the visualization is an approximation of (in mathematical terms).

If we were able to find a mathematical distance that reasonably correlates with the distances in the visualization, we could use it as a way to evaluate the different algorithms and provide a theoretical ground to VNA. We were not able to find this distance, but in the paper we propose a profile of this hypothetical distance, based on what we already know of force-driven placement algorithms.

The paper also proposes a few experiments. We show for instance that the visual distance does not correlate well with the geodesic distance (the length of the shortest path, counted in number of links) or the mean commuting time (another intuitive notion of distance in a graph). We also provide an empirical account for Noack’s theoretical claim that the visual clusters correspond to modularity clustering.
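As an illustration of the kind of check involved (my own rough sketch, not the paper’s protocol), one can correlate the distances measured in the layout with the geodesic distances, here with networkx and scipy:

```python
# Rough sketch: correlate visual distances in a force-directed layout
# with geodesic distances, on a small example network.
import itertools
import networkx as nx
import numpy as np
from scipy.stats import pearsonr

G = nx.les_miserables_graph()
pos = nx.spring_layout(G, seed=42)                 # visual positions
geo = dict(nx.all_pairs_shortest_path_length(G))   # geodesic distances

visual, geodesic = [], []
for u, v in itertools.combinations(G.nodes, 2):
    visual.append(np.linalg.norm(np.array(pos[u]) - np.array(pos[v])))
    geodesic.append(geo[u][v])

r, _ = pearsonr(visual, geodesic)
print(f"Pearson correlation between visual and geodesic distance: {r:.2f}")
```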

We do this in a pretty fun way (a rough code sketch follows the list):

  1. Spatialize the network
  2. Apply modularity clustering (Louvain algorithm). It finds groups of nodes that are well linked together and poorly linked from group to group (let us call these “modules”).
  3. Apply a k-means based on node positions: it makes groups of nodes that are visually close. Let us call these “visual clusters”. We set the algorithm so that we have the same number of clusters as modularity found.
  4. For every pair of nodes, we check whether the two nodes are in the same module or not, and whether they are in the same visual cluster or not. The two measures agree if (1) the two nodes are in different modules and different visual clusters, or (2) the two nodes are in the same module and the same visual cluster. The measures disagree if (3) the two nodes are in the same module but different visual clusters, or (4) in different modules but the same visual cluster.
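
Here is the promised sketch of that procedure in Python. It is my own illustration, not the code used in the paper: it assumes networkx ≥ 2.8 (for louvain_communities) and scikit-learn are available, uses spring_layout as a stand-in for Force Atlas 2, and the pairwise agreement computed at step 4 amounts to the Rand index between the two partitions.

```python
# Sketch of the four-step comparison between modules and visual clusters.
import itertools
import networkx as nx
import numpy as np
from networkx.algorithms import community
from sklearn.cluster import KMeans

G = nx.les_miserables_graph()

# 1. Spatialize the network (spring_layout as a stand-in for Force Atlas 2).
pos = nx.spring_layout(G, seed=42)

# 2. Modularity clustering (Louvain): the "modules".
modules = community.louvain_communities(G, seed=42)
module_of = {n: i for i, part in enumerate(modules) for n in part}

# 3. K-means on node positions, with as many clusters as modules: the "visual clusters".
nodes = list(G.nodes)
coords = np.array([pos[n] for n in nodes])
labels = KMeans(n_clusters=len(modules), n_init=10, random_state=42).fit_predict(coords)
cluster_of = dict(zip(nodes, labels))

# 4. For every pair of nodes, check whether the two partitions agree.
agree = total = 0
for u, v in itertools.combinations(nodes, 2):
    same_module = module_of[u] == module_of[v]
    same_cluster = cluster_of[u] == cluster_of[v]
    agree += (same_module == same_cluster)
    total += 1

print(f"Agreement between modules and visual clusters: {agree / total:.1%}")
```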

We show that A. Noack was right, but also that Force Atlas 2 and LinLog perform better than Fruchterman-Reingold, for instance.

Close reading Wikipedia from Pareto to Network Science, part 5

This is part 5: What is traveling from statistical distributions to network science

In this last part of my close reading of Wikipedia on statistics and network science, I look at three transversal topics.

  1. Social inequalities, and notably the figure of Pareto
  2. Claims to pervasiveness (statements that a phenomenon is observed in many unrelated situations)
  3. Universality, a specific statistical concept imported from thermodynamics

These three micro analyses close the series of my empirical observations from reading my corpus of articles, and allow me to draw some conclusions on the rhetoric used by network science (the more recent field) to borrow from statistics (the older field). My question here is: what does it borrow, and why?

As a post on my research blog, it is honest about what is actually done, but at the cost of including some not-so-relevant material. I also follow the mantra of “release early, release often”, so please forgive the lack of finishing touches.

Findings

To understand these findings, we need a starting point. That starting point is a promise brought into existence by the recent (90s) discovery of a new type of mathematical structure, the complex network (whether you call it small world or scale-free does not matter much, as we have previously seen). This discovery was, and still is, an opportunity to make a scientific breakthrough, and for good reason. Indeed, complex networks create an unprecedented bridge between empirical observations on the living and the social, and some fascinating new theories in physics. What if networks were a missing link to understand the laws of the social and the living? Such is the promise complex networks seem to make, and a necessary preliminary to understanding the rhetoric of network science.

Vilfredo Pareto studied the question of inequalities, and more precisely “situations in which an equilibrium is found in the distribution of the ‘small’ to the ‘large'” (article on Pareto distribution). His work was published at the beginning of the 20th century, but he did not become such a prominent figure at that time. A management consultant named J. M. Juran “stumbled across the work of Vilfredo Pareto and began to apply the Pareto principle to quality issues (for example, 80% of a problem is caused by 20% of the causes).” (article on Joseph M. Juran). It is Juran who coined the term “Pareto principle” and popularized his ideas. They seem to have been influential in economics first, and not only about networks (Pareto efficiency for instance is related to game theory).

Pareto is an important figure in the rhetoric of network science, because he represents the empirical reach to social questions. Through him, the power law tells us something about how the social works. Inequalities are the archetypal example for the conceptual bridge between power laws and scale-free networks. This example is often restated in terms of the number of citations of scientific papers, or hyperlink citations of web pages or websites, a context in which being cited is interpreted as a form of value. Preferential attachment, which generates power laws and may explain scale-free networks, is also called “rich get richer”. Vilfredo Pareto and his legacy embody the empirical reach of the power law in the social world.
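As a quick aside of my own (not part of the Wikipedia material), the “rich get richer” mechanism is easy to reproduce: networkx ships the Barabási-Albert preferential attachment model, and the degree distribution it generates is heavily skewed, with many poorly connected nodes and a few hubs.

```python
# Toy illustration of preferential attachment producing a heavy-tailed degree distribution.
import collections
import networkx as nx

G = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)   # "rich get richer" model
degrees = [d for _, d in G.degree()]

counts = collections.Counter(degrees)
for k in sorted(counts)[:10]:
    print(f"degree {k}: {counts[k]} nodes")            # most nodes have few links...
print("max degree:", max(degrees))                     # ...while a few hubs have many
```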

From the 50s to the 70s, a fascinating new approach took over the world of physics, from quantum mechanics to thermodynamics: renormalization group theory. It allows old questions to be reformulated in a surprisingly fruitful way. It was remarkably successful in quantum mechanics, but also in thermodynamics, where it allows the study of phase transitions and solved an old mystery. Physicists observed that certain quantities follow power laws at the critical point of phase transitions. A puzzling observation: the exponents of these power laws are often the same, even in systems whose dynamics at a micro scale are completely different. This empirical fact obtained the pompous name of “universality”. The mathematical procedure of renormalization explained this universality by showing that dynamics at a micro scale do not matter at the critical point, and that dissimilar systems can have similar scaling dynamics. This theory, which also proves that phase transitions give rise to power laws, is considered a masterpiece by physicists.

In the article on the power law we find what we call the rhetoric of universality, revolving around the idea that power laws are the sign of an underlying reality, as guaranteed by the theory of universality. Unfortunately this rationale is fallacious, insofar as many situations other than phase transitions can give rise to power laws, and insofar as universality is not a theory but an empirical observation, despite its unfortunate name. Phase transitions give rise to power laws, but power laws are not signs of phase transitions. The renormalization group theory can rarely be mobilized, universality does not apply, and finally there is no ground for stating that power laws are the symptom of an underlying mechanism. Credible sources explicitly discuss that fallacy, frame it as a misunderstanding, and explain why it was prevalent. We hypothesize that the Wikipedia page on the power law reflects a past state of the scientific discussion.

The rhetoric of network science heavily relies on claims to pervasiveness. It constantly states that the different avatars of complex networks (scale-free/small-world networks, preferential attachment, power law…) are empirically observed in many unrelated situations. As we have already seen, pervasiveness is also a defining characteristic of scientific/statistical laws.

We observe that claims to pervasiveness as well as the (fallacious) rhetoric of universality are remarkably aligned with the belief that complex networks could lead to a scientific breakthrough similar to the discovery of a scientific law, but in the social and biological worlds. We hypothesize that the rhetoric of network science aims at sustaining the belief that complex networks are the key to scientific laws of a whole new empirical reach, even though it has not been the case so far.

We also conclude from our observation of that rhetoric that it is characteristic of structuralism.

What is traveling from statistical distributions to network science

Vilfredo Pareto and social inequalities

As we have seen, the Pareto distribution is an avatar, if not the main avatar, of the power law. It is named after an Italian scholar who studied inequalities in the late 19th / early 20th century.

[Image: Vilfredo Pareto, Wikipedia]

Through his name, the question of inequalities connects to the concept of power law. The underlying mathematical reason is obvious: the power law is a model for the distribution of wealth.

“The Pareto distribution, named after the Italian civil engineer, economist, and sociologist Vilfredo Pareto, […] is used in description of social, scientific, geophysical, actuarial, and many other types of observable phenomena. […] Originally applied to describing the distribution of wealth in a society, fitting the trend that a large portion of wealth is held by a small fraction of the population […] Pareto originally used this distribution to describe the allocation of wealth among individuals since it seemed to show rather well the way that a larger portion of the wealth of any society is owned by a smaller percentage of the people in that society. He also used it to describe distribution of income. […] This distribution is not limited to describing wealth or income, but to many situations in which an equilibrium is found in the distribution of the “small” to the “large”.”

Pareto distribution

The article dedicated to the distribution of wealth also acknowledges the importance of the Pareto distribution for wealth inequalities.

“Pareto Distribution has often been used to mathematically quantify the distribution of wealth at the right tail (the wealth of very rich). In fact, the tail of wealth distribution, similar to the one of income distribution, behave like Pareto distribution but with ticker tail. […] The distribution of wealth throughout the population is often closely approximated by a Pareto distribution, with tails which decay as a power-law in wealth.”

Distribution of wealth

In pages about statistics, income distribution is cited as an example of the power law, and economics is cited as an application of the statistics of the power law (or of complex networks), as in the following excerpt:

“A few notable examples of power laws are Pareto’s law of income distribution, structural self-similarity of fractals, and scaling laws in biological systems. Research on the origins of power-law relations, and efforts to observe and validate them in the real world, is an active topic of research in many fields of science, including […] economics and more.”

Power law

Further evidence for the connection between the power law and inequalities is the “rich get richer” name for preferential attachment. In other words, inequalities come up in the name of the model bridging power laws and scale-free networks.

“The most widely known generative model for a subset of scale-free networks is Barabási and Albert’s (1999) rich get richer generative model in which each new Web page creates links to existing Web pages with a probability distribution which is not uniform, but proportional to the current in-degree of Web pages.”

Scale-free network

Notice how, despite the “rich get richer” mention, that example does not involve literal wealth, but only the links of web pages. Wealth is understood as a metaphor for having many connections. It is also possible that having links is considered literally a form of value. The point could be perfectly legit, but the Wikipedia articles are not explicit on that matter. We still find multiple mentions of hyperlinks-as-value, for instance:

“The concept drew in part from a February 2003 essay by Clay Shirky, “Power Laws, Weblogs and Inequality”, which noted that a relative handful of weblogs have many links going into them but “the long tail” of millions of weblogs may have only a handful of links going into them.”

Long tail

Also note that the “rich-get-richer” expression is not necessarily tied to the power law or scale-free networks. The statement has a life of its own in the economic sphere, and even has its own article.

“Thomas Piketty’s book Capital in the Twenty-First Century (2014) presents a body of empirical data spanning several hundred years that supports his central thesis that the owners of capital accumulate wealth more quickly than those who provide labour, a phenomenon widely described with the term “the rich-get-richer”.”

The rich get richer and the poor get poorer

Conversely, inequalities of wealth or income are not systematically used to illustrate the power law, even in an economic context. See for instance below a list of imbalances featuring GDP and healthcare but not income or wealth:

“The Pareto principle is a popular example of such a “law”. It states that roughly 80% of the effects come from 20% of the causes, and is thusly also known as the 80/20 rule. In business, the 80/20 rule says that 80% of your business comes from just 20% of your customers. In software engineering, it’s often said that 80% of the errors are caused by just 20% of the bugs. 20% of the world creates roughly 80% of worldwide GDP. 80% of healthcare expenses in the US are caused by 20% of the population.”

Empirical statistical laws

Pareto is a key figure of the bridge between statistics and network science, between statistical laws and scale-free networks, between distributions and complexity. This connection involves the question of social inequalities, not only as income and wealth (Pareto’s initial work) but also as number of connections. As the power law applies to scale-free networks via preferential attachment, theoretical elements on social inequalities move from statistics to network science.

Also note that the work of Pareto on inequalities is not limited to the statistical distribution. Another remarkable concept is Pareto efficiency, in the field of game theory.

“Pareto efficiency or Pareto optimality is a state of allocation of resources from which it is impossible to reallocate so as to make any one individual or preference criterion better off without making at least one individual or preference criterion worse off. […] “Pareto efficiency” is considered as a minimal notion of efficiency that does not necessarily result in a socially desirable distribution of resources: it makes no statement about equality, or the overall well-being of a society. It is simply a statement of impossibility of improving one variable without harming other variables in the subject of multi-objective optimization (also termed Pareto optimization).”

Pareto efficiency

Finally, we must also note that the multiple concepts named after Vilfredo Pareto were named as such and became popular much later, after World War II. The article on the Pareto principle states that “Management consultant Joseph M. Juran suggested the principle and named it after Italian economist Vilfredo Pareto”. In J. M. Juran’s own Wikipedia page we find confirmation:

“In 1941, Juran stumbled across the work of Vilfredo Pareto and began to apply the Pareto principle to quality issues (for example, 80% of a problem is caused by 20% of the causes). This is also known as “the vital few and the trivial many.” […] In a way, Pareto’s Principle puts numbers to the idea that in business, as in life, things are not evenly distributed. Pareto was studying land ownership in Switzerland. But Juran saw that it applied to business, as well.”

Joseph M. Juran (Wikipedia article on)

Everywhere: claims to pervasiveness

Statistical laws and complex networks are often presented as pervasive. Our Wikipedia articles mention on multiple occasions that distribution X or phenomenon Y can be found everywhere. These statements of empirical ubiquity can take multiple forms and seem to play an important role in the rhetoric of statistics as well as network science. By looking at what these claims look like, we can understand why they are so important. We observed three types of claims to pervasiveness:

  1. Enumeration, a list of empirical examples.
  2. Distanced statement, eg. X is believed/said to be found everywhere.
  3. Direct statement, eg. X can be found everywhere.

Enumeration is the most common type of claim. It is generally presented as self-evident. The two examples below are about statistical distributions.

“The Pareto distribution […] is used in description of social, scientific, geophysical, actuarial, and many other types of observable phenomena.”

Pareto distribution

“More than a hundred power-law distributions have been identified in physics (e.g. sandpile avalanches), biology (e.g. species extinction and body mass), and the social sciences (e.g. city sizes and income).”

Power law

We have seen that in the argumentation, a major bridge between statistical distributions/laws and network science is through preferential attachment, a model that explains why in scale-free networks the number of links per node follows a power law distribution. These three elements are all subject to claims to pervasiveness: the power law / Pareto distribution (as shown above), complex/scale-free networks, and preferential attachment itself, as in the citation below:

“This is the primary reason for the historical interest in preferential attachment: the species distribution and many other phenomena are observed empirically to follow power laws and the preferential attachment process is a leading candidate mechanism to explain this behavior. Preferential attachment is considered a possible candidate for, among other things, the distribution of the sizes of cities, the wealth of extremely wealthy individuals, the number of citations received by learned publications, and the number of links to pages on the World Wide Web.”

Preferential attachment

There is first a direct claim to empirical pervasiveness (“the species distribution and many other phenomena”), emphasized in the next sentence by an enumeration. Here pervasiveness is not only stated or implied, it is presented as “the primary reason for the historical interest”. The Wikipedia contributors are aware of the importance of pervasiveness even though it is not established as a fact. Note how cautious the text is, though.

Complex networks are also claimed to be pervasive, for instance in the article on scale-free networks, with a nice bullet-point list:

“A few examples of networks claimed to be scale-free include:
– Social networks, including collaboration networks. […]
– Many kinds of computer networks […]
– Software dependency graphs […]
– Some financial networks such as interbank payment networks
– Protein-protein interaction networks.
– Semantic networks.
– Airline networks.”

Scale-free network

Note that the distanciation here is about scale-freeness, but not pervasiveness. It might look the same but the example below makes it clear that the doubt is about the criteria used to establish that a network is scale-free.

“Although many real-world networks are thought to be scale-free, the evidence often remains inconclusive, primarily due to the developing awareness of more rigorous data analysis techniques.”

Scale-free network

The statement above is about beliefs. It displaces self-evidence from pervasiveness (many real-world networks are scale-free) to the state of beliefs (“many real-world networks are thought to be scale-free“). It claims that some believe in the pervasiveness of scale-free networks. It is similar to the “historical interest” in preferential attachment.

Sometimes the distanciation is not on pervasiveness itself but on its explanation. Note how, in the citation below, pervasiveness is assumed but its explanation is challenged and contextualized as a claim from Barabási.

“It is hypothesized by some researchers such as Barabási that the prevalence of small world networks in biological systems may reflect an evolutionary advantage of such an architecture.”

Small-world network

And distanciation is not even always present. Oftentimes pervasiveness is simply implied, or explicitly stated in all its self-evidence. For instance:

“Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology”

Social network

The claim to pervasiveness lies in the word “most”. Here pervasiveness is casually assumed. Similarly, but about a statistical law, note the “many” in the sentence below:

“Zipf’s law […] refers to the fact that many types of data studied in the physical and social sciences can be approximated with a Zipfian distribution.”

Zipf’s law

Naturally, claims to pervasiveness are stronger in the case of statistical laws, because empirical pervasiveness is their defining characteristic. In such situations we often find explicit claims such as these:

“An empirical statistical law or (in popular terminology) a law of statistics represents a type of behaviour that has been found across a number of datasets and, indeed, across a range of types of data sets.”

Empirical statistical laws

“Scientific laws summarize and explain a large collection of facts determined by experiment”

Scientific law

The rhetoric of pervasiveness shows how important the cultural aspect of network science is. Pervasiveness is a notion; it is hard to establish as a solid fact. Sources are sometimes mentioned in Wikipedia, for instance Mark Buchanan’s book Ubiquity: The New Science of Universal Patterns. But most of the time the claim to pervasiveness is presented as self-evident. We can hypothesize that pervasiveness is indeed obvious, but there is a problem. Observing a power law requires an apparatus. If pervasiveness is obvious, it is only in a certain context and for certain people: I am certainly not in a position to observe it myself, and you are probably not either. In Wikipedia we are often asked to believe in a form of self-evidence that is inaccessible to us. In that sense, pervasiveness is a belief. And by reading Wikipedia we can at least establish that this belief is so common that it has become self-evident itself: it is obvious that complex networks, preferential attachment and the power law are thought to be pervasive. Because if they were not, such claims would not be disseminated everywhere in our corpus of articles.

The power law article features multiple claims to pervasiveness. Describing the series provides insights on the role of such statements. It all starts with a double direct claim:

“The distributions of a wide variety of physical, biological, and man-made phenomena approximately follow a power law over a wide range of magnitudes.”

Power law

The power law is not only followed by many phenomena, it is also followed by them over many orders of magnitude. Once again, the point relies on self-evidence (argument from Wikipedia’s authority). Then follows a sophisticated statement on universality and the status of law:

“Scientific interest in power-law relations stems partly from the ease with which certain general classes of mechanisms generate them. The demonstration of a power-law relation in some data can point to specific kinds of mechanisms that might underlie the natural phenomenon in question, and can indicate a deep connection with other, seemingly unrelated systems; see also universality above.”

Power law

Let us break down these two sentences. First of all we have “certain general classes of mechanisms.” In other words, a large group of phenomena. Those have something in common: they generate power-law relations with ease. Stating the existence of this large group of phenomena is a claim to pervasiveness, but a more specific one than usual. Not just that many things produce power laws, but that “certain general classes” of phenomena do, classes supposedly defined by other characteristics. And as with preferential attachment, this pervasiveness is the source of “scientific interest.”

But the second sentence makes a stronger point. It explains why the empirical observation (and “demonstration”) of the power law matters: because it hints at underlying mechanisms. The sentence states that some “deep” underlying mechanisms can produce a similar effect (the power law) in seemingly “unrelated” phenomena. The power law would be a signature of a deeper mechanism, hence tracking the power law would allow us to investigate an underlying reality. As stated, this argument is that of “universality” (see next section).

Immediately following, the article makes a very explicit direct claim, using the term “ubiquity”, and followed by statements of quantity (“many”) and an enumeration of 3 “notable” examples and 7 scientific fields:

“The ubiquity of power-law relations in physics is partly due to dimensional constraints, while in complex systems, power laws are often thought to be signatures of hierarchy or of specific stochastic processes. A few notable examples of power laws are Pareto’s law of income distribution, structural self-similarity of fractals, and scaling laws in biological systems. Research on the origins of power-law relations, and efforts to observe and validate them in the real world, is an active topic of research in many fields of science, including physics, computer science, linguistics, geophysics, neuroscience, sociology, economics and more.”

Power law

The article also features a list of 51 examples of the power law, which constitutes a substantive piece of evidence for the pervasiveness claims:

“More than a hundred power-law distributions have been identified in physics (e.g. sandpile avalanches), biology (e.g. species extinction and body mass), and the social sciences (e.g. city sizes and income). Among them are:

Astronomy […4 examples follow]

Physics […17 examples follow]

Biology […6 examples follow]

Meteorology […1 example follows]

General Science […12 examples follow]

Mathematics […8 examples follow]

Economics […3 examples follow]”

Power law

The list in itself is not scientific evidence, but the article cited as its source is, and the list situates where and for whom pervasiveness appears. Although some of these claims to power laws have been challenged, as we have seen, they were not challenged as heavy-tailed distributions. In that sense, even if the claim to pervasiveness were to shift from power laws to heavy-tailed distributions, it would ultimately remain.

Of the different articles we studied, the article on the power law is the one where the claims to pervasiveness are the most grounded, and where their importance is the most explicit: they support universality.

Universality

According to its article, the power law has three main properties: scale invariance, lack of a well-defined average value, and universality. What is universality?

Outline of the article “Power law”

The argument of universality is complicated. As a starting point, we will look into the universality section in the power law article, and unpack the series of statements. We will discuss the argumentation by explaining the necessary concepts and contextualizing the claims. We will mostly refer to what other Wikipedia articles mention, but we will also draw on other sources for the sake of clarity.

Universality in the power law article

“UNIVERSALITY
The equivalence of power laws with a particular scaling exponent can have a deeper origin in the dynamical processes that generate the power-law relation.”

Power law

The crucial role is played by the “equivalence of power laws”. This equivalence is a mathematical property also named scale invariance, and is the defining characteristic of the power law. As we have seen, power laws of the same exponent are indeed considered equivalent, which is a remarkable and specific property.

The argument of universality starts with this equivalence having a “deeper origin”. The rest of the Universality section develops this idea, starting with an example.

“In physics, for example, phase transitions in thermodynamic systems are associated with the emergence of power-law distributions of certain quantities, whose exponents are referred to as the critical exponents of the system.”

Power law

This sentence involves the concept of phase transition. It refers to what happens when a system moves from one stable state to another, often abruptly. The typical example is ice melting into water: ice is a stable state, water is a stable state, but many unusual things happen in between, during the melting. The transition between the phases has its own specific behavior.

Phase transitions in thermodynamics are not just a random example: they are the prototypical case relating the power law to universality. The study of such physical phenomena motivated the development of a sophisticated mathematical apparatus that grounds the claim that power laws have a “deeper origin”. By design, this theory applies to the case of thermodynamic phase transitions.

The argument involves the variation of “certain quantities” involved in the process. These quantities are special because their evolution follows a power law. In the prototypical case, the studied system’s heat capacity, for instance, is such a quantity. The sentence only states that phase transitions are “associated with” these quantities’ power law, but we can clarify the epistemic status of this association. Indeed, the physics of phase transitions is mature and well established, theoretically and experimentally. It turns out that the association is both empirical and theoretical. Firstly, that a number of quantities follow a power law during a phase transition is an empirical fact. Secondly, theory predicts that certain quantities necessarily follow a power law at the crucial moment of the phase transition. In that sense, the scientific consensus seems to be that phase transitions give rise to power laws.

We must pay attention to the direction of the association between phase transitions and the power law. In a nutshell, phase transitions cause power laws. Strictly speaking, it is not really a cause-consequence relationship, so my statement is a little oversimplified. But it features the most important aspect: the “association” runs from phase transition to power law. Though the statement does not make this explicit, power laws are somehow a consequence of phase transitions, according to the scientific consensus. And as we will see later, this point matters.

What are the “critical exponents of the system”? During a phase transition, the special quantities follow power laws, but not necessarily the same one: their exponents differ. As a reminder, the power law refers to a family of mathematical functions of the same general form but with different parameters. The general equation is f(x) = a·x^(−k), and its parameters are the factor a (generally considered unimportant) and the exponent k (characterizing the specific form of the power law). Here k is called the “critical exponent” of the system. The term “critical” is derived from the concept of critical point, which refers to the key moment of the phase transition.
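As a side note of my own (not from the Wikipedia article), the equivalence and scale invariance evoked earlier can be spelled out in one line: rescaling the variable of a power law only changes its prefactor, never its exponent, so two power laws with the same exponent differ only by a constant factor.

```latex
% Scale invariance of the power law f(x) = a x^{-k}:
f(cx) = a\,(cx)^{-k} = c^{-k}\,a\,x^{-k} = c^{-k}\,f(x)
```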

Why would the critical exponents be specific to the system, and not determined by empirical conditions? As surprising as it sounds, the independence of the critical exponents from empirical conditions is both an empirical fact and a theoretical prediction, as stated just next.

“Diverse systems with the same critical exponents—that is, which display identical scaling behaviour as they approach criticality—can be shown, via renormalization group theory, to share the same fundamental dynamics.”

Power law

In substance, this states that the same “fundamental dynamics” (an underlying reality) explains why multiple quantities follow a power law with the same critical exponent across multiple empirical situations. The evidence for that underlying reality is supposed to be provided by renormalization group theory. What is it?

Renormalization group theory (RG theory) is a mathematical apparatus that can be mobilized in a situation of scale invariance, when a phenomenon appears the same across many scales. It “was initially devised in particle physics” (Renormalization group in Wikipedia) but extends to other branches, notably statistical physics. It is worth noting that this theory is a massive body of knowledge and has played a major role in modern science since the 50s. It has high credibility.

RG theory allows a problem (for instance a physical model) to be reformulated in terms of scaling, in terms of what happens when you look at a system at a larger scale. In simpler words, when you zoom out. This procedure at the core of the theory is called renormalization. It transforms equations that describe a phenomenon at a known scale into equations that describe it when the scale changes. For instance, imagine a pile of sand. If you look at the number of grains of sand, it tells you that if you zoom out you see more grains. Conversely, if you look at the size of the grains, it tells you they look smaller as you zoom out. So when you zoom out, certain quantities go up (number of grains…) and others go down (apparent size of grains…). Remark that this is true at all scales: it does not matter how much you are currently zooming, if you zoom out you will see more grains and they will be smaller. A third kind of quantity also exists, which neither goes up nor down but stays the same. Such quantities are the scale-invariant ones. A classic example is the size of the avalanches in the pile of sand. Avalanches come in all sizes and at all scales, from a few grains to huge chunks of the pile. They are a scale-invariant phenomenon. When a quantity is invariant through renormalization, we can solve certain equations and get new information on that quantity, directly derived from its scale invariance.

Intuitively, RG theory looks at what happens during scale changes, and from that tells us which quantities are relevant or not at larger scales. It predicts why certain quantities are scale invariant at the critical point, and thus follow a power law. More importantly, by observing the scaling behavior of these quantities at the critical point, it predicts that they cannot be fully independent. More precisely, it predicts a limited number of defining variables, of degrees of freedom. The exponents of the power laws of the special quantities are called “critical exponents” and all depend only on these few degrees of freedom. One way to put it is to observe that, at the critical point, the singular dynamics of the system constrains all the special quantities not only to follow power laws, but also to have specific critical exponents. According to RG theory these critical exponents do not depend on the specifics of the system but only on the dynamics at the critical point. Empirical observations back this interpretation insofar as multiple systems that differ completely at the micro level but share the same scaling dynamics at the critical point are repeatedly observed to have the same set of critical exponents. In this theory, such a pervasive set of critical exponents is named a “universality class”.

“For instance, the behavior of water and CO2 at their boiling points fall in the same universality class because they have identical critical exponents. In fact, almost all material phase transitions are described by a small set of universality classes.”

Power law

We can now understand this sentence. Universality is not meant in the mundane sense; “universality classes” refer to the similar behavior of certain physical systems despite their different microphysical structures.

Of course, universality classes get their name from the pervasiveness of empirical observations. As we will see, the pervasiveness of these sets of critical exponents across seemingly unrelated cases has puzzled physicists for a long time, which is one of the reasons why RG theory is considered a remarkable achievement. Note however that this pervasiveness is not absolute. Some empirical observations do not match theory for unknown reasons, and there is no general consensus on how many universality classes there are. The theory has validity conditions outside of which it does not work. This is nothing unusual, and I do not mention it to downplay the remarkable success of RG theory (notably in quantum physics) but to make it clear that universality classes are not as universal as their name suggests. They are pervasive, but not universal.

“Similar observations have been made, though not as comprehensively, for various self-organized critical systems, where the critical point of the system is an attractor. Formally, this sharing of dynamics is referred to as universality, and systems with precisely the same critical exponents are said to belong to the same universality class.”

Power law

The same point again, essentially, with an important detail: for the first time it defines the concept of “universality” in itself. And so ends the section on universality in the power law article.

At the end of this sophisticated rhetoric, universality is the “sharing of dynamics” between “various […] critical systems”, hence the “fundamental dynamics” “associated with the emergence of power-law distributions”. Back to the original statement, universality is the “deeper origin”, the underlying reality giving rise to power laws. In addition, the highly respected renormalization group theory is invoked as a guarantee for the statement. Leveraging our previous explanations of RG theory to simplify the central claim, we can reduce it to this sentence:

Power laws reveal the underlying presence of a phase transition, a phenomenon known as universality.

Unfortunately, that statement is false.

Debunking the universality rhetoric

First of all we must clarify that universality is indeed the name physicists give to the empirical pervasiveness of sets of critical exponents (universality classes). The observation finds a theoretical explanation in RG theory: microscopic details do not matter at the critical point, so that only the scaling dynamics determine the system’s behavior. But the name refers to an empirical observation, not a theoretical claim. Universality is not a statement according to which RG theory proves that power laws are pervasive, universal, or the sign of a phase transition. Despite its pompous name, universality just refers to an observation. An intriguing one, an important one, but still a simple observation.

“In statistical mechanics, universality is the observation that there are properties for a large class of systems that are independent of the dynamical details of the system.”

Universality (dynamical systems)

Moreover, power laws are simply not the sign of a phase transition. Though it is true that phase transitions give rise to power laws, the reverse does not hold. The presence of a phase transition implies that of a power law, but a power law implies a phase transition only if you know for sure that nothing else could have generated it. Without that knowledge, one cannot rule out that something else is generating the power law, so we cannot conclude that a power law implies a phase transition. The rationale of universality, as stated in the article on the power law, is bogus. But the situation is worse, because we actually know that a large number of unrelated mechanisms can generate power laws. In that sense, it is well established that power laws are not a sign of a phase transition.
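
To illustrate that last point, here is a minimal sketch in Python (my own, not taken from the sources discussed in this text): a simulation of preferential attachment, a rich-get-richer growth mechanism that involves no critical point whatsoever, yet whose degree distribution is heavy-tailed and approximately follows a power law (the Barabási–Albert model predicts an exponent close to 3).

    import random
    from collections import Counter

    def preferential_attachment(n_nodes, m=1, seed=0):
        """Grow a network by preferential attachment (Barabasi-Albert style).

        There is no phase transition and no critical point here: each new node
        simply attaches to existing nodes chosen with probability proportional
        to their degree. Returns a Counter mapping each node to its degree.
        """
        rng = random.Random(seed)
        degrees = Counter({0: 1, 1: 1})   # start from a single edge 0-1
        targets = [0, 1]                  # node i appears degree(i) times
        for new in range(2, n_nodes):
            chosen = {rng.choice(targets) for _ in range(m)}
            for t in chosen:
                degrees[t] += 1
                degrees[new] += 1
                targets.extend([t, new])
        return degrees

    degrees = preferential_attachment(100_000)
    distribution = Counter(degrees.values())  # how many nodes have degree k
    for k in sorted(distribution)[:10]:
        print(k, distribution[k])

The point is not this particular model: it is simply one among the many mechanisms that generate power laws without any phase transition being involved.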

As a reference for the “association” between phase transitions and the power law, I will use a recent source from outside Wikipedia. My rationale for bringing in this source is that it provides both authority and clarity on that specific matter. It is reliable because it has been produced by the renowned Santa Fe Institute, a world-class institution for the study of complexity. It brings clarity because it is intended as a pedagogical document, as part of the Complexity Explorer online lessons. Also note that it is recent (March 2019). This 7-minute-35-second video is available on YouTube and is titled: How many power laws are from phase transitions? It features Professor Dave Feldman half-reluctantly explaining why he thinks almost no power laws observed in complex systems arise from phase transitions. The full transcript is available as an appendix (but you should rather watch the video).

“I should mention that much more than some of the other videos, I’ll be giving some opinions, and less an accounting of mathematical or empirical facts. I think the position that I’m going to carve out is pretty much a standard one, within the study of complex systems, but there’s certainly some room for disagreement.

So, what fraction of the power laws that we observe in complex systems arise from phase transitions? So I think the answer is: almost none. That phase transitions likely are not an explanation for the vast majority of power laws we see in complex systems. […]

But phase transitions and power laws have often been closely linked, more so in the past, less so these days, and I want to offer some thoughts on why that may be. I think a lot of it has to do with the culture and habits of mind of physicists. […] Within physics, and I say this as a physicist, the theory of phase transitions […] has everything that physicists would get excited about. […]

So a lot of the claims that linked power laws and other areas of complex systems with phase transitions, were originated by physicists who are accustomed to associating power laws and phase transitions. […] But as we’ve seen in this unit, power laws also arise from many many other types of situations that have nothing to do with a phase transition. And this is something that many physicists, I think, weren’t aware of. And so, when they saw power law behavior, they were quick to say: “oh, this must indicate some sort of a phase transition, that the system is poised between two different states, a state of order, and disorder”. […]

So… to sum up, phase transitions most definitely give rise to power laws. Of that, there is no dispute. But there are many other mechanisms that give rise to power laws as well. So I think that the physicists’ understandable fascination with phase transitions and power laws has sometimes, maybe, extended a little bit too far into the field of complex systems. That phase transitions are beautiful physics, and a really impressing theory, but maybe not always so useful in the study of complex systems.”

Prof. Dave Feldman, in this YouTube video

In substance, D. Feldman frames the universality issue as a misunderstanding. He states that the misunderstanding used to be widespread among physicists, but that nowadays, in the study of complex systems, it has been mostly debunked. We have seen, however, that it remains present inside Wikipedia. We can hypothesize that there is a delay in knowledge dissemination from research to Wikipedia (the video was published recently, in March 2019). In any case, Wikipedia reproduces a rhetoric that used to be quite standard in network science. If the argument was fallacious from the start, it is important to understand why it did not appear as such.

D. Feldman suggests two hypotheses to excuse the fallacy. Let me first recall that this fallacy is not universality in itself, but the belief that all power laws are related to universality. It is also important to note that, as he puts it, the theory of phase transitions “has everything that physicists would get excited about.” In this context, he suggests:

  1. A methodological bias: “physicists [were] accustomed to associating power laws and phase transitions.”
  2. A weakness in the “culture of physicists, that maybe suggests why there is a little bit of hype”: “physicists tend to not learn much statistics.”

It seems that the physicists studying complexity jumped to conclusions and invoked universality on the basis of superficial signs. The bridge was the power law, and a weak understanding of statistics allowed the misunderstanding of universality to persist. But what was the reason for the “hype”, the motivation for jumping to conclusions? What was universality doing for network science?

Universality was a structuralist argument. It provided complex networks with a structure; it made them comparable to a scientific law. A.-L. Barabási, the famous champion of network science, has continuously pursued a structuralist agenda, culminating in the title of his latest book: The Formula: The Universal Laws of Success.

The repeated mention of a “deeper connection”, “same fundamental dynamics” or other “underlying mechanism” is a signature of structuralism. As theorist Alison Assiter is quoted in the Wikipedia page on structuralism, “structures are the “real things” that lie beneath the surface or the appearance of meaning.” The rhetoric of pervasiveness and universality tries to confer on power laws and scale-free networks a status close to that of a scientific law. It sees universality as a structure, and formulates it as such. In that, it differs from thermodynamics, from which the concept is imported, and which has an empiricist formulation of universality.

Empiricist and structuralist versions of universality

In the articles related to the study of phase transitions, universality is constantly presented as an empirical fact, a repeated observation. The structuralist understanding of universality is an invention of scholars studying complexity (network science included). Physicists who study phase transitions in thermodynamics have an empiricist understanding of universality.

“In statistical mechanics, universality is the observation that there are properties for a large class of systems that are independent of the dynamical details of the system.”

Universality (dynamical systems)

“It is a remarkable fact that phase transitions arising in different systems often possess the same set of critical exponents. This phenomenon is known as universality.”

Phase transition

“This intriguing phenomenon, called universality, is explained, qualitatively and also quantitatively, by the renormalization group.”

Critical phenomena

Even when universality is presented as explained by theory, it remains framed as an empirical phenomenon, for instance in the article on RG theory:

“This coincidence of critical exponents for ostensibly quite different physical systems is called universality − and is now successfully explained by the RG”

Renormalization group

That being said, the empirical grounding is not incompatible with a more mundane notion of universality, in the structuralist sense of a mathematical or physical law. However, these two conceptions of universality (empiricist and structuralist) are never confused in the field of physics. Take for instance the following passage:

“Critical exponents describe the behavior of physical quantities near continuous phase transitions. It is believed, though not proven, that they are universal, i.e. they do not depend on the details of the physical system, but only on some of its general features.”

Critical exponent

The word “universal” is used here in a structuralist sense, but it is then framed as a conjecture and not an empirical fact. We must remark, however, that the empiricist framing is always at risk of slipping into a structuralist framing, if only because they share the exact same words.

The article on scale invariance presents an interesting mix of empiricist and structuralist statements. The following statement, for instance, is aligned with those we have just seen:

“Universality is the observation that widely different microscopic systems can display the same behaviour at a phase transition.”

Scale invariance

In the first sentence of the article, however, we find the following statement, whose exact meaning is unclear:

“scale invariance is a feature of objects or laws that do not change if scales of length, energy, or other variables, are multiplied by a common factor, thus represent a universality.”

Scale invariance

I interpret this statement as structuralist. Note however that the rest of the article adopts the empiricist perspective, notably in a whole section dedicated to universality.

In the article on percolation, we find an explicitly structuralist version of universality, presented as a “principle”:

“The universality principle states that the numerical value of p_c is determined by the local structure of the graph”

Percolation theory

The other explicitly structuralist statements are concentrated in the article on the power law, the conceptual bridge between network science and statistics. We have seen those already. There are, however, implicit statements of universality elsewhere, for instance:

“Six degrees of separation is the idea that all living things and everything else in the world are six or fewer steps away from each other”

Six degrees of separation

The idea of six degrees of separation is powerful. But why should it necessarily be about “all living things and everything else in the world”? Could it not have a domain of validity? Would the idea be less powerful? I hypothesize that the idea is formulated this way because, historically, it came from a structuralist intuition. But without a strong theory to back it, universality falls back to mere pervasiveness.

We have seen countless claims of pervasiveness, and we can now understand them as a toned-down version of universality. They pose as important empirical facts, but as we have seen with the power law, without a theory to unify the multiple empirical cases, those cases might as well be unrelated. Despite what the structuralist rhetoric we studied suggests, pervasiveness cannot be considered evidence for an underlying reality. Pervasiveness does not require universality. But universality of course requires pervasiveness, and even got its name from it.

“Universality gets its name because it is seen in a large variety of physical systems.”

Universality (dynamical systems)

Appendix

Transcript of the video Fractals and Scaling: How many power laws are from phase transitions?, produced for the Santa Fe Institute’s Complexity Explorer lessons.

“So we’ve seen that phase transitions give rise to power laws. At the critical point, at the point where the system goes from one phase to another, where the transition occurs, many quantities of interest are distributed according to a power law. And that’s not the case on either side of that transition. So power laws and phase transitions are closely linked. The question then is: what fraction of the power laws we observe, more generally in the study of complex systems, can be said to be due to a phase transition-like behavior of some sort? So that’s a question that I want to address from a number of different angles on this video. And I should mention that much more than some of the other videos, I’ll be giving some opinions, and less an accounting of mathematical or empirical facts. I think the position that I’m going to carve out is pretty much a standard one, within the study of complex systems, but there’s certainly some room for disagreement.

So, what fraction of the power laws that we observe in complex systems arise from phase transitions? So I think the answer is: almost none, that phase transitions likely are not an explanation for the vast majority of power laws we see in complex systems.

So why do I say that? Well, a phase transition is a very unusual state of affairs. The phase transition it’s a critical point, a critical temperature, a critical transition probability, only one out of a whole number of different things. So, it’s very unlikely that we sort of encounter that by chance. In the physical world, things are very rarely poised right at the critical point, right between liquid and solid, or magnet and non-magnet. Things tend to be solid or liquid or gas, and not poised right at that point in-between the two.

So because phase transitions by definition occur at a very narrow, at a very particular point, a particular set of parameters, it seems unlikely that that could be a generic explanation for power laws that we see quite commonly throughout a whole range of social and biological and technological phenomena.

But phase transitions and power laws have often been closely linked, more so in the past, less so these days, and I want to offer some thoughts on why that may be. I think a lot of it has to do with the culture and habits of mind of physicists. So let me explain what I mean by this.

So, within physics, and I say this as a physicist, the theory of phase transitions, also named as the theory of critical phenomena, is a fantastic theory. It has everything that physicists would get excited about. It explains a broad range of phenomena in fairly simple terms, right, so this is the idea of universality, that many different transitions, even in systems that seem very different, are characterized by the same exponents. So it collects a lot phenomena together, into a similar quantitative framework. There is some nontrivial mathematics behind it, renormalization group explaining why some of this is so… So it’s the sort of things that the physicists, most physicists, just love. It’s a great theory, it deservedly got a lot of attention in… the 70s and the 80s I would say. So the theory of critical phenomena is a significant accomplishment within the study of physics, both theoretical and experimental.

So a lot of the claims that linked power laws and other areas of complex systems, with phase transitions, were originated by physicists who are accustomed to associating power laws and phase transitions. And again, phase transitions are seen as a very interesting thing in physics, it’s an unusual state of affairs, difficult to explain but then, it can be understood with some mathematics.

So there is an interest, a fascination one is drawn towards phase transitions, and phase transitions give rise to power laws, and so in the mind of many physicists the two are closely linked. And of course that’s right, it is indeed the case that power laws arise from phase transitions. But as we’ve seen in this unit, power laws also arise from many many other types of situations that have nothing to do with a phase transition. And this is something that many physicists, I think, weren’t aware of. And so, when they saw power law behavior, they were quick to say: “oh, this must indicate some sort of a phase transition, that the system is poised between two different states, a state of order, and disorder”. So I think physicists first tended to be drawn towards power laws in the first place, because power laws are associated with phase transitions which are interesting, and then seeing power laws and immediately associate them with phase transitions and critical phenomena.

There is one other reason having to do with the culture of physicists, that maybe suggest why there is a little bit of hype, or maybe alleged power laws that turned out upon reexamination not be that well-described by a power law. And that is that physicists tend to not learn much statistics. It is rarely a degree requirement for physicists. My background is in statistical physics but I was never required to take a statistics class, and insofar as we do learn things like statistics, it is more about error propagation in experiments, and less about testing hypotheses and model verification.

So those are skills that are taught more often in the sciences, I think in biology and economics, and actually less in physics, and of course it’s taught in statistics all the time. So all that is to say is that physicists didn’t necessarily know about some of the more advanced or modern data analysis techniques that I described in unit 5, leading some to claim that power laws were present in light of, maybe, not such convincing evidence.

So… to sum up, phase transitions most definitely give rise to power laws. Of that, there is no dispute. But there are many other mechanisms that give rise to power laws as well. So I think that the physicists’ understandable fascination with phase transitions and power laws has sometimes, maybe, extended a little bit too far into the field of complex systems. That phase transitions are beautiful physics, and a really impressing theory, but maybe not always so useful in the study of complex systems.”