
Is Gephi a Black Box?

This text was inspired by Emilija Jokubauskaite’s master’s thesis at DMI under Bernhard Rieder’s supervision. She studied Gephi and its epistemic culture, conducting a series of interviews (including one with me) and reflecting on the relations between the tool and its users, mostly in the social sciences.

Gephi and Its Context: Three Oppositions in The Epistemic Culture of Network Analysis And Visualisation

As Gephi’s original author (i.e., before Mathieu Bastian and later Eduardo Ramos Ibáñez assumed the role of lead developer), and as someone who has continuously influenced its design, I find Jokubauskaite’s work precious. It accounts for the community of users in a way that is remarkably productive for the social sciences. It covers many aspects, but here I pick just one that is important to me: blackboxing.

In this text I draft a series of reflections sparked by her thesis. I did not take the time to organize them as a narrative, and preferred a series of points, mostly unsorted. As a draft, the writing is loose and verbose. Sorry. It allows me to open the piece to an early discussion, which is the point of this research blog.

Though I focus on Gephi as a case, my perspective extends more generally to the use and design of data analysis software.

Does blackboxing matter?

Wikipedia cites Bruno Latour: blackboxing is “the way scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become.” I appreciate how respectfully positive this definition is: Latour frames blackboxing as a by-product of success.

Not everyone agrees on that matter. In academia, some people criticize tools on the grounds of their opacity. They claim those tools are black boxes, and it is not a compliment. To them, blackboxing is not a side effect of success, but an issue, a failure. Gephi is no exception. For instance, Claire Lemercier and Claire Zalc criticize Gephi for being a black box (and they are not the only ones). They write:

“More recent software devised in the context of “digital humanities” or “big data” sometimes only offers visualizations at the expense of calculations; or does not offer the analysis tools that are useful in the social sciences (as opposed to in physics); or presents some analysis tools like “black boxes”, with little accessible documentation as to how they work. (contrary to many, we are no fans of Gephi, for this type of reasons; we do not object, of course, to advanced network researchers using it, but we do not find it well-suited to beginners)”

C. Lemercier and C. Zalc

If blackboxing causes trouble, then it matters. And more precisely: what is the problem with black boxes, how can we fix it, and how can we make better tools for science? I will argue against the idea that Gephi is a black box, but at least we can all agree that blackboxing is an issue we need to address.

Blackboxing is in part a cultural issue

Blackboxing is not really a tool thing but a culture thing. It is only a feature of the tool insofar as the tool impacts culture. Tools can give rise to cultures and impact them, but cultures also have their own reasons.

Some people find Gephi hard to grasp while others praise its simplicity. Users with very different sets of skills and knowledge have different understandings of how it works and what it allows them to do. For some it is opaque, for others transparent. The defining quality of the black box, opacity, is relative.

We have two reasons to refuse the model of blackboxing as inaccessibility to underlying methods. Firstly, accessing is not understanding. You understand what you see only if you have the right competences. In the case of Gephi, an open-source software, accessing the source code is possible, but it does not guarantee understanding the method. Gephi exposes many settings, but accessing those is not knowing how to set them. On a different level, a decently sized body of Gephi documentation is accessible online, but that does not mean that users find it, or even search for it. Blackboxing can happen even when the method is accessible. Secondly, the method might be inaccessible and yet you understand it anyway. This is post-hoc interpretability, a mode of understanding algorithms that works well with deep learning (but not only), and I will address this point separately. In any case, blackboxing is not primarily about accessing methods.

The opacity of blackboxing relates to a complex dynamic of resistances and affordances offered by the tool, and incentives and expectations set by culture. Jokubauskaite writes extensively about it:

“It can be hypothesised that some of the specificities and affordances of the software […] are a part of an epistemic culture in which the relationship between the method of network analysis and the Gephi tool can be regarded atypical. Firstly […] scholars tend to approach network analysis with this tool as intuitively following some ‘recipe’ without always consciously acknowledging the subtleties in its steps. While Jacomy aimed to educate social scientists about graph theory and ‘what can be trusted’ in a network visualisation, the academic environment has possibly steered it towards learning how to productively use the tool. […] [T]his thesis would like to argue that Gephi cannot be regarded as simply a method that has been packed into a software. Rather, looking from the perspective of current research practices, it can be largely seen to have an academic practice of its own than being a part of the larger tradition of network analysis and visualisation historically. Specific Gephi affordances and the research practices of using it may be seen as constituting a self-sufficient epistemological routine apart from its complex historical and methodological underpinnings.”

E. Jokubauskaite

Or more simply:

“I would like to argue that in the case of Gephi, black-boxing is less reliant on the interface and the tool in itself, but is more related to the epistemic culture as well as the agency and knowledge of the user.”

E. Jokubauskaite

We resisted blackboxing Gephi

I was surprised to realize that Gephi can be considered a black box, because we intentionally aimed at the opposite. I do not want to spend too much time justifying ourselves, but here are a few of the ways we tried to avoid it:

  • The method is exposed. The code is open and all implemented algorithms refer explicitly to the relevant papers inside the graphical user interface.
  • We have a website, a forum, a Facebook group, a wiki, and many third-party sources of documentation.
  • We expose visually what happens during node placement algorithms, a crucial part of visual network analysis. Instead of a loading bar, as in GUESS, you see the nodes moving and you can stop the process whenever you want. You have a chance to understand how it works (post-hoc).
  • Arbitrary meta-parameters are exposed: users see that they exist and can edit them. This applies to both layout algorithms and metrics (clustering, centralities…).
  • We published an open-access paper on our own layout algorithm (ForceAtlas2) where we provide both the equations and visual explanations for each of its settings (see the sketch after this list).
  • By design, Gephi never initiates a process without a user action. By default, it does not compute any statistics or any layout. Users have to trigger them, and in doing so acknowledge the existence of these possibilities, along with their alternatives.
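
For illustration, here is a minimal Python sketch of the kind of force model the ForceAtlas2 paper describes: linear attraction along edges, and repulsion between all pairs weighted by node degrees. The function, the parameter values, and the simplified update loop are mine and purely illustrative; Gephi’s actual implementation is in Java and adds refinements (gravity, adaptive speed, etc.) that this sketch leaves out.

```python
import math

def forceatlas2_like_step(nodes, edges, pos, k_r=10.0, step=0.01):
    """One simplified iteration of a ForceAtlas2-style force model.
    nodes: list of node ids; edges: list of (a, b) pairs; pos: dict node -> [x, y]."""
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    force = {n: [0.0, 0.0] for n in nodes}

    # Repulsion between every pair: F_r = k_r * (deg(a)+1)(deg(b)+1) / distance
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = k_r * (deg[a] + 1) * (deg[b] + 1) / d
            force[a][0] += f * dx / d; force[a][1] += f * dy / d
            force[b][0] -= f * dx / d; force[b][1] -= f * dy / d

    # Attraction along edges: force magnitude equals the distance (linear attraction)
    for a, b in edges:
        dx, dy = pos[b][0] - pos[a][0], pos[b][1] - pos[a][1]
        force[a][0] += dx; force[a][1] += dy
        force[b][0] -= dx; force[b][1] -= dy

    # Move each node a small step along the resulting force
    for n in nodes:
        pos[n][0] += step * force[n][0]
        pos[n][1] += step * force[n][1]
```

Running such a step in a loop, and letting the user watch and interrupt it, is the kind of visible process described above.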

Despite our best intentions, we might still have failed. In that case, I am interested in understanding why Gephi is a black box, and why our constant efforts did not matter. But I am not convinced by the diagnosis.

As is well known in the field of design, users typically misdiagnose problems. They detect actual problems, but they generally situate them in the wrong place. This can be explained by their ignorance of design constraints. In that sense, the primary way any device is a black box is by hiding the design process. In particular, the work of exploring alternative solutions (later discarded) is never visible. This reduction of the exploratory process to a bounded object is a classic aspect of design, applying to virtually anything humans can produce, from cooking to writing a paper. For your own productions as well, you have certainly received that kind of unrealistic feedback, candidly oblivious of the many constraints you faced, possibly from your mother-in-law, or worse, peer-reviewer #2. Feedback is generally sparked by a real issue, though. The work of a designer is to understand where it comes from and elaborate a solution that takes constraints into account. This is how I frame the issue of Gephi’s blackboxing. I acknowledge the existence of an issue, but I do not take for granted that it is a matter of opacity.

Once you factor in constraints, you realize that there will always be a certain amount of blackboxing. Blackboxing is nuanced, and if it were a scale, it would not start at zero. Building a tool without blackboxing is like making a cake without cooking. But some tools are more or less blackboxed, or blackboxed in different ways. Since a tool can be transparent to certain users and opaque to others, certain publics might have been favored over others. More simply, we can ask how much blackboxing could have been avoided. This is not the same question as how much blackboxing there is, because it accounts for the amount of inevitable blackboxing. This is where users miss an important factor. I believe, in the case of Gephi, that we could not do much better while retaining the tool’s quality (or without spending time we could not afford). However, I acknowledge that we might have blackboxed it another way. In that sense the right diagnosis is not how much it is blackboxed, but in which way. Despite our efforts, Gephi might be a black box in a way that is detrimental to a certain public.

Gephi intentionally favored beginners. We had two personas in mind when we designed it:

  1. Someone trained in social sciences or humanities engaging with relational data
  2. Someone trained in network science with an empirical case to study (e.g., myself)

We wanted Gephi to be usable by people without a background in network science, while still being useful to more advanced users, but we did not want to favor the expert public to the detriment of the beginner public. My own design skills were poor at that time, and I made a number of mistakes that I can see now. But even if our learning curve was not as smooth as we thought, it paid off, and as Jokubauskaite observed, many users feel that it helps them gradually engage with network science. “Some other interviewees also acknowledged this characteristic of the instrument, saying, for example, that ‘through the time of working with the actual tool you […] get to know what it […] represents. So it gives you more insight into what it is […] that you are working with’. Moreover, several of the interviewees expressed a wish to do more related research in order to learn more about the tool.”

What if Gephi is transparent to beginners but opaque to advanced users? We might have sacrificed the expert public in our quest for a tool usable by all. This hypothesis is aligned with a number of observations: Gephi is appreciated by beginners, it is criticized by some experts, and as Jokubauskaite notes, its users frame it as an entry point to the field of network science. However, those facts can be explained otherwise. The critique from experts is probably a selection bias: only an expert would take the time to criticize Gephi, because a frustrated beginner would silently move on to an alternative. Additionally, Lemercier and Zalc explicitly make the opposite point: “we do not object, of course, to advanced network researchers using it, but we do not find it well-suited to beginners”. Our hypothesis does not seem to hold, and Gephi’s blackboxing problem does not seem to be caused by a bias towards the beginner public. There is no simple way in which Gephi is too blackboxed, or blackboxed the wrong way.

Users repurpose Gephi as fast food

The model according to which tools implement methods is too simplistic. Naturally, users repurpose tools in unexpected ways, diverting features, and deviating from safe methodological paths. This is one of the reasons why blackboxing is a cultural issue. What happened when users repurposed Gephi?

Jokubauskaite notes: “Gephi cannot be regarded as simply a method that has been packed into a software. Rather, looking from the perspective of current research practices, it can be largely seen to have an academic practice of its own than being a part of the larger tradition of network analysis and visualisation historically. Specific Gephi affordances and the research practices of using it may be seen as constituting a self-sufficient epistemological routine apart from its complex historical and methodological underpinnings.”

I personally frame Gephi as a network science tool. So I was a little surprised to learn that Gephi’s practice is seen as different from network science. But it actually makes a lot of sense. In particular, as I have already shown in this blog, most of network science is a structuralist or even universalist approach to social and living phenomena, trying to leverage mathematical theories and computing to unveil hidden laws. That project is quite different from Gephi’s project, which engages empirically with relational phenomena through visual and interactive interfaces. What are the users observed by Jokubauskaite doing with Gephi?

“[W]hile the researchers often arrived at the use of Gephi without former knowledge of network analysis, their educational path from the beginning can be seen as largely focused on producing findings in short-time projects as opposed to taking time to empirically explore networks and learn about them.”

“Gephi is used is often quite time-restricted (for example, in short project-work). Additionally, when learning how to carry out network analysis, researchers often learn from step-by-step tutorials or ‘best practices’ that increase the productivity when using the tool, however might disincentivise further attempts at method clarification. Following that, the researchers have reported being encouraged to ‘just use it’ or instructed to take some methodological steps without providing further information on why and what the implications might be. These practices, in combination, can be seen as further black-boxing the method from the user of Gephi.”

E. Jokubauskaite

Some Gephi users consume it as fast food, possibly most of them. That is an interesting finding. My own experience confirms it, though it only became clear to me as I read Jokubauskaite’s work. I can even propose an explanation, focusing on the question of layouts, for why this happened despite our efforts to slow users down.

Every Gephi user had already seen network visualizations before trying to produce one. I know for a fact that in such a situation, you just assume that the placement results from a conventional process, or worse, that nodes have a natural position. But of course the nodes do not have Euclidean coordinates in the network, so we must produce them. We generally use an algorithm to place the nodes so that their relative distances tell us something about the structure of the network (see our introduction to visual network analysis). There are different ways to do that, different algorithms. None of them is “the one”. You have to choose, and as a beginner you do not have the knowledge to inform your decision.

This is a classic design dilemma. If you make the choice for the user, you take agency from them, you make a methodological decision invisible, and you narrow down options. It causes more blackboxing and it is detrimental to expert users. But if you expose the choice, you require the beginner to make an impossible decision. There are different ways to deal with the situation: bringing in knowledge via documentation or on-screen help, suggesting a default choice while presenting alternatives… But in the end, the easier you make the life of the user, the less visible you make the decision. It is a tradeoff.

In Gephi, despite our focus on beginners, we took the line that makes their life the most difficult: we confronted them with the impossible decision. We expose a list of layouts without any default choice. We do not even throw the choice in the user’s face; we do not guide them towards the decision. I proposed this guideline in the very first steps of Gephi’s design: all actions must be initiated by the user. In that sense, Gephi behaves more like a sandbox (where you pick tools to do stuff, as in Photoshop or Word) and less like a scripted method (where you execute a process, as in pushing a button). As a consequence, if you ignore the necessity of a layout, your network is not spatialized and appears as the “infamous Gephi Borg cube”, a square resulting from randomizing node coordinates between 0 and 1. The situation is known to be frustrating, hence the fun pejorative name.
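
For the record, the “Borg cube” state amounts to something like the following sketch (the coordinate range is the one mentioned above; the function and its name are mine, not Gephi’s actual code):

```python
import random

def initial_placement(nodes, seed=None):
    """Node positions before any layout is applied: each coordinate is drawn
    uniformly in [0, 1], so the unspatialized network shows up as a filled square."""
    rng = random.Random(seed)
    return {n: [rng.random(), rng.random()] for n in nodes}
```

Nothing meaningful can be read from those positions; this is precisely why running (and choosing) a layout is an unavoidable decision.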

Our choice requires the user to understand what they are doing, and the necessary knowledge is not inside Gephi, but in the community (YouTube, the Facebook group, blogs…). This is where the cultural dimension of blackboxing kicks in. We deliberately chose to slow users down to force them to face their responsibility. To progress, they must face the fact that the layout is a decision, that there are multiple choices, that there are settings, and that they must decide when to stop it. They can monitor the effect of the layout in the main view.

So how can users repurpose Gephi as fast food? Because despite all our efforts, users find shortcuts. The acclaimed game designer Mark Rosewater summarized his 20 years designing Magic: The Gathering in a famous talk named Twenty Years, Twenty Lessons, and his #1 lesson is: fighting against human nature is a losing battle. He writes:

“Your audience […] is humans. They come with a complex operating system. It’s quirky at times, but it can be understood. Just remember that humans are quite stubborn. They like to do things the way they like to do them and it’s hard to change their behavior. […] Don’t get yourself into a fight you’re probably not going to win. Human behavior is a powerful force. We are creatures of habit and instinctually fear change. Yes, there are things that come along—like the cell phone—that humans change their behavior around, but don’t assume your [device] is going to be one of those revolutionary things.”

M. Rosewater

Confronted with a choice we cannot make, we simply search for a recommendation and bypass the difficulty if we can, even if just to get an idea of what happens next. And most users have an idea of what they want to obtain: nodes spread out in nice clusters, interpretable as a map. Despite the frustration generated by our design, motivated users can find an answer on the web, pick the same algorithm as everyone else, apply it and move on. Any Gephi tutorial explains how. We can slow them down, force them to face underlying methodological fundamentals, but we cannot prevent them from using all sorts of shortcuts.

Jokubauskaite observes exactly this phenomenon, and correctly attributes this kind of blackboxing to the practice, not to the tool per se. When it comes to human nature, there is not much we can do. The contextual necessity to get a network analysis while dedicating as little time and energy as possible does not have much to do with Gephi, but rather with digital glitter. It creates an irreducible amount of blackboxing, an opacity that does not lie in the tool but in practices, and that we cannot easily reduce.

What is a clear box?

If black boxes are bad, what is the ideal we seek? How do we build a clear box?

In our initial Gephi design, we had an implicit ideal: automating manual tasks. For instance, the search-and-replace feature present in all text editors is transparent. Anyone understands what it does: it just automates and accelerates a repetitive task. This acceleration can unlock new possibilities, become a qualitative change, but it is still something you can understand. You can predict the outcome, anticipate issues, monitor results. Many metrics, for instance the TF-IDF used in text mining, are transparent as well because they just count stuff. Those can be seen as clear boxes. When I coded the first Gephi prototype, I had the prior experience of manually retracing and drawing networks, in the tradition of Moreno’s sociograms. I was also developing my own layout algorithms. For me, Gephi was just automating manual operations. But of course it felt very different to other users, because if you have no experience of those manual operations, it is not transparent to you. Besides, this version of the clear box has a worse flaw. In most algorithms, simple steps are automated but also combined in such a way that understanding the steps does not necessarily shed light on what the algorithm does. In fact, all computer algorithms are based on simple Boolean operations, but that does not make them transparent. Involving simple operations is not a satisfying characterization of a clear box.
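
As an aside, here is what “just counting stuff” looks like for TF-IDF, as a minimal Python sketch (this is one common weighting variant among several; the function name and example are mine):

```python
import math
from collections import Counter

def tf_idf(documents):
    """Score each term in each document: term frequency times the log of
    the inverse document frequency. documents: list of token lists."""
    n_docs = len(documents)
    # Document frequency: in how many documents does each term appear?
    df = Counter(term for doc in documents for term in set(doc))
    scores = []
    for doc in documents:
        tf = Counter(doc)
        scores.append({term: (count / len(doc)) * math.log(n_docs / df[term])
                       for term, count in tf.items()})
    return scores

docs = [["network", "analysis", "tool"], ["text", "mining", "tool"]]
print(tf_idf(docs))  # "tool" appears everywhere and scores 0; distinctive terms score higher
```

Every step is a count or a simple ratio; in that sense the metric is a clear box, at least to anyone comfortable with the formula.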

Another version of the clear box is mentioned by Jokubauskaite after Bernhard Rieder and Theo Röhle in Digital Methods: Five Challenges. Here transparency is related to scrutiny, to the ability to access and discuss the method implemented by the tool. A clear box is when an algorithm can be held accountable. Rieder and Röhle’s point is that methods they call experimental, "in the sense that the results they produce cannot be easily mapped back to the algorithms and the data they process," can be scrutinized even if we do not understand them "in the same way we understand statistical concepts like variance or regression". The typical example here is deep learning. Others call this post-hoc interpretability. We can scrutinize and learn the method from the outside, even if we cannot know it from the inside. I like this version of the clear box because in many ways, the explanations provided by academic papers on algorithms are far from satisfying anyway. For instance, the common way to rationalize force-driven placement algorithms is not only false, but also unable to account for the results they produce. How is it a good explanation, then? I will not develop that point here, but I want to mention that explaining an algorithm from the inside is often better at providing the comfort of a rationalization than at accounting for what the algorithm actually performs, which is more important. However, this version of the clear box assumes that tools implement methods, which is too naive. It does not account for users repurposing devices for their unexpected needs, in unexpected ways.

There is an apparent tradeoff between these two versions of the clear box. Either you ground transparency in the process, ignoring the results: you can then explain repurposing, but not scrutinize. Or you embrace post-hoc interpretability, grounding transparency in the results: you can scrutinize the method, but you ignore unexpected uses. I do not have the knowledge to engage with this discussion much further. It is possible that there is a middle way, or that these two conceptions correspond to two different kinds of devices, for instance close-ended and open-ended.

In any case, it seems that there is no obvious definition of a clear box, no obvious solution to the problem of blackboxing. We have identified three issues. The first is what Rieder and Röhle call “the classic two cultures problem: even if specifications and source code are accessible, who can actually make sense of them?” It also deserves to be extended. Even inside one culture, access to the source code is no guarantee of understanding anything. All developers know that you cannot always understand your own code, especially when you wrote it a long time ago. We argue it is not a culture problem in itself, even if cultures do matter. It is just that access to the method is necessary but not sufficient. The second issue is that tools, and especially open-ended (exploratory) devices, do not strictly speaking implement a method, and are commonly repurposed by users in unexpected ways. This prevents us from grounding transparency in method scrutiny, because the method does not necessarily match the practice. Last but not least, the third issue is the relativity of transparency. What is transparent to some is opaque to others and, importantly, vice versa. If only for that reason, there is no universal clear box.

Transparency as predictability

Jokubauskaite notes that some researchers criticize the unpredictability of force-driven layout algorithms. This is a reasonable point, but it misses the essential point. She narrates: “While very popular and having different layout versions, the force-directed layout algorithms have […] been quite extensively critiqued. Krzywinski et al., for example, point to the reliability of force-directed layout visualisations. They state that ‘the effectiveness of these methods is reduced by inherent unpredictability, inconsistency and lack of perceptual uniformity’. Furthermore, they argue that the unpredictability of such network visualisations is influenced by the fact that they are ‘driven by an aesthetic heuristic that can influence how specific structures are rendered’ and that ‘different algorithms generate very different layouts of the same network’ […]. It needs to be noted, however, that even the same force-directed algorithms may produce different final visualisations of the same dataset […], as they are ‘notoriously brittle: they have many parameters that can be tweaked’ and ‘[t]he result varies depending on the initial state’ (Jacomy et al.).”

Predictability is a non-obvious property. Firstly, it must not be absolute. If the results of algorithm A are fully predicted by algorithm B, then the two are technically equivalent. We often use algorithms precisely because we cannot predict their results. That is how they are useful to us, for instance by unveiling a hidden structure. But predictability could mean that, in a series of settled cases that serve as a benchmark, we know what to expect. The representativeness of that benchmark then becomes a characteristic of predictability, which leads us to another point. Secondly, predictability can easily be engineered. There are obvious techniques to make a non-deterministic algorithm deterministic, and vice-versa. You can give or take predictability easily, but only in a superficial way, because in a practical situation, predictability also assumes a form of continuity: small variations in the input should produce small variations in the output. Engineering predictability typically breaks this rule. When a process is inherently random, because it depends on arbitrary initial conditions, you cannot fully hide that randomness. The technical predictability you can obtain is a form of lie, and is detrimental in empirical situations. Thirdly, sometimes some things have to be unpredictable so that others are predictable (I am resisting quoting a misinterpretation of Heisenberg’s principle). Some quantities are expected to be predictable while others are not. This is typically the case with force-directed placement algorithms, where visual clusters are fairly predictable even though exact coordinates are not.
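
To make the second point concrete, here is a hedged Python sketch of how determinism can be engineered on top of a randomized process: fixing the random seed makes the output reproducible for the exact same input, but a small change in the input still changes the whole result. The layout function is a toy of my own, not Gephi’s.

```python
import random

def seeded_random_layout(nodes, seed=42):
    """A 'deterministic' placement: the same node list always yields the same
    coordinates. The determinism comes from the seed, not from the method."""
    rng = random.Random(seed)
    return {n: (rng.random(), rng.random()) for n in nodes}

a = seeded_random_layout(["Alice", "Bob", "Carol"])
b = seeded_random_layout(["Alice", "Bob", "Carol"])
c = seeded_random_layout(["Dan", "Alice", "Bob", "Carol"])  # one node added first

print(a == b)                    # True: perfectly reproducible...
print(a["Alice"] == c["Alice"])  # False: ...but adding a node moves every other node
```

The reproducibility is real but superficial: it breaks the continuity that practical predictability would require.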

In the case of force-directed placement algorithms, exact coordinates cannot be predicted by design, because iterative placement is both the source of randomness and the reason why the approach is efficient. Criticizing the unpredictability of coordinates is pretty superficial when the visual clusters are both predictable and the kind of structure you are trying to unveil. That kind of unpredictability is not relevant enough to claim opacity, because the key result can still be predicted. Conversely, the predictability of a series of known quantities seems a major way to ground transparency.

Jokubauskaite notes that users seem pretty good at predicting certain aspects of the layout. She notices “a tendency in decision making [related] to the aspects directly observable from the interface, for example, ‘I know that it is going to the middle, because it is being pulled by gravity’.” She notes an important difference between predicting what the tool does and understanding the method: “Examples such [as this] can be seen as contributing to an observation of a tendency among Gephi users to rationalise the methodological decisions based on the tool itself, in contrast to providing justifications with regards to network analysis as a method.” I am not sure of Jokubauskaite’s position on that matter, but the way I see it, predictability is more important than understanding the method, because the former is necessary to the latter. There can be many ways to provide “justifications with regards to […] a method”, some of which are superficial or just plain wrong. But the ability to predict enacts a true form of understanding, even if a quite empirical one. If one cannot predict a method, they do not really understand it. But if predictability is necessary, it is not sufficient. Understanding the method also provides a necessary frame for interpretation. Nevertheless, the ability to predict contributes to understanding the method. I think that Gephi, through the interactive user experience it offers, supports an active learning of the predictable features of layout algorithms. But of course, it is much more effective as a complement to other forms of learning (documentation).

Knowledge, ignorance and transparency

It may seem paradoxical, but blackboxing produces transparency, just not the kind of transparency we opposed to opacity. It produces the transparency of the mediation. The typical and literal example is glasses: you do not see them, you see through them. The mediation can be called transparent because you forget it. Your glasses become an organ, a part of your body. This phenomenon of incorporation is well known in cognitive science and applies more generally to our use of technology. A pen is not transparent in the visual sense, but it is transparent in the sense that when you hold it, you feel the paper you write on, you feel it through the pen, but you forget the pen. The pen is not writing, you are writing, through the pen. For similar reasons, the car you drive is a part of you and you are on the road; and your web browser is not online while you sit on a chair looking at it, you are on the web reading this blog post. All those mediations are like windows: we see the world through them, but we do not see them per se (at least when the coupling is working, which is not always the case). Their transparency is us forgetting about them, and forgetting that they change our perceptions, which is why we use them in the first place.

The blackest box is the invisible one. Not only can we not open the box, we forget it exists. Technology then becomes like magic: it “just works”. And we hate being reminded that it exists, because it feels like a part of our body has suddenly stopped working properly. We hate it when our smartphone loses internet, when Google or Facebook go offline, when the mouse does not move the pointer, when a key on the keyboard stops working… It is a sign that some technologies are blackboxed to the point that we usually forget their existence. And that is a good thing, because that is how they are useful. We do not want to see our own glasses; their transparency is necessary to their function.

Unfortunately, even when that kind of transparency is necessary, it raises issues. In fact, it raises the exact same issue as blackboxing: by not seeing the technology, we ignore what it performs, we lose the ability to scrutinize it, to criticize it, and to hold it accountable. A transparent technology gradually fades into the background of our lives, as if covered by Harry Potter’s cloak of invisibility. Making it visible again takes a lot of effort. Surveillance capitalism has largely leveraged that effect (and convenience, but that is a different point). That transparency is problematic because we ignore it twice over. Not only do we ignore how it works, we also ignore that we ignore it.

“Real knowledge is to know the extent of one’s ignorance,” said Confucius. In many ways, modern scientific knowledge starts with the knowledge of ignorance, by managing the limits of knowledge. The requirement of falsifiability serves this purpose, and more generally so do all methods of validation. Even in mathematics, knowability has become a major concept, as in Gödel’s incompleteness theorems. That kind of transparency blackboxes by preventing the knowledge of ignorance. And blackboxing produces that kind of transparency by hiding mechanics and thus putting them out of scrutiny’s reach.

It is very unfortunate that we have two notions of transparency, one opposed to opacity, the other aligned with it. And we need both of them to discuss Gephi’s blackboxing. I cannot just pretend one of them does not matter. In order to bring clarity, I will spell out both notions and stop using the word “transparency” as often as I can.

  1. Transparency as mediation invisibility. It is opposed to visibility. Making mediations invisible blackboxes because it hides the box in the first place, making your chances of opening it even lower. It blackboxes by preventing you from knowing that you ignore something, namely the mediation itself. As we argued, while this is problematic, it is also necessary to the proper functioning of those mediations.
  2. Transparency as the ability to scrutinize methods and processes. It is opposed to opacity. Black boxes typically resist scrutiny. As we have mentioned, it is unclear whether the object of scrutiny is the method or the process, but in both cases we must be able to see through, to open the box.

In a nutshell, we have two kinds of black boxes: those we do not see, and those we cannot open. Some boxes are both.

In Gephi we tried to resist invisibilizing mediations while still leveraging them. It means that we wanted users to benefit from the active learning of interacting with the network through a mediation like a force-driven layout algorithm. This experience of seeing the network unfold and reach equilibrium while still being able to interact with it has certainly contributed to Gephi’s success. In this context the mediation is an asset, but it tends to invisibilize the underlying method of placing nodes with a force-driven placement algorithm. This is the reason why we required the user to trigger that feature and to always confront the existence of multiple choices and settings: to embody the method in the user experience, as frustrating as it can be, and resist its invisibilization. But as we pointed out, our attempt might be a relative failure, because it fights a losing battle against human nature.

Post-hoc interpretability

Jokubauskaite writes: “[T]his thesis would like to argue that in the case of Gephi, especially, the notion of blackboxing is not as clear and universal. […] [T]he interviewees reported on not being sure of how certain aspects of the tool work, even taken into consideration the aim of the developers and the possibilities in the tool. They have, more specifically, reflected on often using the tool without making the decisions consciously and not being aware of what and why they should question in the process of implementing the tool-use in their research.” She notes three things at the same time: (1) users were successful at using Gephi, (2) they do not know how it works, and (3) they are aware of this lack of knowledge. This awareness of ignorance, which is key to resisting the invisibilization of mediations, is in my eyes a major point against Gephi being a black box. Surely, as Jokubauskaite writes, “[i]t has been articulated that the tool is ‘complicated and there’s a lot of things that you have no idea what is happening and what is going on.’” But I want to stress that as long as you are aware of it, it is not such a problem. And it seems to be the case with Gephi, which I am glad to know.

Some things in Gephi are blackboxed by opacity, in the sense that their internal functioning is hidden. I take responsibility for most of those as an editorial choice in favor of post-hoc interpretability. I have written a little bit about that concept on this blog. By post-hoc interpretability, I mean the ability to understand an algorithm from the outside, by benchmarking it and/or engaging with it over many iterations and situations. It does not require knowing the internal mechanics, but it can lead to a high degree of prediction. It also fulfills the need for scrutiny, and can make algorithms accountable. It can be much more costly to achieve than reasoning from the internal mechanics of the algorithm, but it is sometimes our only option. In particular, it is the primary mode of understanding deep learning algorithms, whose internal mechanics are too complex to be interpreted by humans.

Note that “post-hoc” does not mean “a posteriori.” This mode of knowledge is not about looking at the output. It is about looking at how the output depends on the input, looking at that loop, over many iterations. It is “post” something because it requires the algorithm to be implemented and executed, as opposed to a method defended on a theoretical level.
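
To make that loop concrete, here is a hedged Python sketch of what post-hoc probing can look like: treat the layout as a black box, rerun it on the original and on perturbed inputs, and record how a chosen output summary responds. Both the toy layout and the probing protocol are illustrative assumptions of mine, not an established method.

```python
import math
import random

def black_box_layout(edges, seed=0):
    """Stand-in for a layout algorithm we treat as opaque: we only look at its
    inputs and outputs. Internally, a crude force-directed loop."""
    rng = random.Random(seed)
    nodes = sorted({n for e in edges for n in e})
    pos = {n: [rng.random(), rng.random()] for n in nodes}
    for _ in range(200):
        for a, b in edges:                    # pull connected nodes together
            for i in (0, 1):
                shift = 0.05 * (pos[b][i] - pos[a][i])
                pos[a][i] += shift
                pos[b][i] -= shift
        for a in nodes:                       # push every pair slightly apart
            for b in nodes:
                if a != b:
                    d = math.dist(pos[a], pos[b]) or 1e-9
                    for i in (0, 1):
                        pos[a][i] += 0.002 * (pos[a][i] - pos[b][i]) / d
    return pos

def probe(edges, summary, runs=20):
    """Post-hoc probing: rerun the black box on the original and on perturbed
    inputs, and record how the chosen output summary responds. Understanding is
    built from this input/output loop, not from reading the internals."""
    results = []
    for seed in range(runs):
        perturbed = edges[:-1] if seed % 2 else edges  # drop the last edge every other run
        layout = black_box_layout(perturbed, seed)
        results.append((len(perturbed), round(summary(layout), 3)))
    return results

# Two triangles joined by a bridge; the summary tracks the distance between the bridge endpoints.
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
print(probe(edges, summary=lambda pos: math.dist(pos[3], pos[4])))
```

Running this kind of probe many times, across many networks and summaries, is how one can come to know an algorithm from the outside.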

My prototypical example of this mode of understanding is of course force-driven placement algorithms. I know them both through their internal mechanics and through the results they produce. For me, the internal mechanics and the rationale that comes with them are unable to explain the results they produce. For that reason, the real way I know these algorithms is by having observed the results they produce in many different situations, maybe more than anyone else. It might be surprising, but I think that my expertise comes more from this experience, which I share with Gephi users, than from understanding the underlying equations, which I share with the computer scientists who publish the papers specifying these algorithms. Of course, the extent of my knowledge goes far beyond what a Gephi user can see, if only because when I developed algorithms like Force Atlas, I saw how countless unreleased variations of the algorithm impact the result. The extent of my understanding is very different from that of a Gephi user, but the mode is the same: post-hoc interpretation.

Gephi was initially forged after my own use, exploring empirical networks (from the web, mostly). It has changed a lot since, but the influence of my own perspective is still largely present in today’s version. I have tried to transfer my own experience to the users. I did not try to transfer my knowledge, as I would do in writing a text. I did not try to transfer my opinions either (how could that even work?). I tried to transfer the conditions of my empirical engagement so that users could interact with their data the same way I interacted with mine. From the multiplicity of my own experience, I selected the most critical and transferable aspects, reducing the large territory of my explorations to a narrower but more operational set of features, and by doing so I did some blackboxing. It was necessary, but that is another point. As Gephi grew and multiple influences applied to it, the empirical understanding of layout algorithms became a key feature. We made it the entry point to the world of network science. Gephi’s blackboxing was an editorial choice to favor post-hoc interpretability, supported by active learning, over rationalizations based on the method, supported by reading or watching documentation.

Blackboxing is a design resource

Designing a tool requires a selection. The process both opens up possibilities and narrows them down to a set of coherent, consistent features. The reduction to a set of features is inevitable because of two limited resources: the time budget for making the tool, and user attention. We cannot develop too many features, and users cannot manage too many features. These choices made by the designer are already a form of blackboxing, because they hide alternatives. But a device is a curated object: by definition it has limited boundaries, and the process of making it does not fit in the device itself. The design process has many influences and attachments, but the device itself must be free from those links in order to circulate. Which also explains why it can be repurposed. A device that is not free from its attachments, that cannot be repurposed, cannot be used by anyone other than its designer. Any device requires a form of enclosure in order to be shared. It has to be packaged. Even code libraries, the most open objects you can find in computer science, are packaged. As we have seen, there is no universal clear box, and in that sense any box is, in some way or to some people, a black box.

But blackboxing is not only a necessity, it is also one of the most powerful design tools. It is even at the core of object-oriented programming. By packaging certain things, a designer can hide complexities and reduce the cognitive toll of certain features while retaining their usefulness, or even improving it. Not all features are explained before the user tries them. Actually, documentation is almost never read before a user tries a device. It is very productive to hide a feature until the user needs it, and some features are so contextual that they work as mediations. For instance auto-completion: the way queries are suggested when you start typing in a search field. Completely hidden as long as you do not need it, it appears just in time without interfering strongly with your action, so that you can ignore it. But it can also quickly become a part of your experience, almost invisible because you never think of it, although you would feel it missing if it were suddenly disabled. Yet there is quite a lot of complexity and agency behind this seemingly simple feature, as you can imagine. Are you sure you understand where Google search suggestions come from, and how much they impact you? In any case, despite the issues it raises, blackboxing is extremely efficient at providing more features, more value to the user.
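
That object-oriented sense of blackboxing, encapsulation, can be illustrated with a toy Python sketch (the class and its names are mine, purely illustrative):

```python
class DegreeMetric:
    """Encapsulation as blackboxing: callers only see compute(), not how
    degrees are stored or counted internally."""

    def __init__(self, edges):
        self._degrees = {}                     # internal detail, hidden behind the API
        for a, b in edges:
            self._degrees[a] = self._degrees.get(a, 0) + 1
            self._degrees[b] = self._degrees.get(b, 0) + 1

    def compute(self, node):
        return self._degrees.get(node, 0)      # a stable, simple input/output contract

metric = DegreeMetric([(1, 2), (2, 3)])
print(metric.compute(2))  # 2: the user relies on the interface, not on the internals
```

The hidden internals could change entirely without the user noticing, which is exactly what makes the packaging so productive for a designer.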

Beyond the necessity to curate features, blackboxing is a major design resource. As surprising as it sounds, it can even contribute to less blackboxing. It all depends on how it is used. Firstly, a feature can be blackboxed in a given context and not in another. The goal here is typically to smooth out the learning curve. Most complicated software applications have alternative user interfaces for beginners and advanced users. When you get more comfortable with the main features, you can switch to a state of the graphical user interface that exposes more of the internal mechanics and provides more control at the expense of simplicity. Secondly, not all features of a device are related to its core purpose. Text editors include drawing capabilities, and image editors include text editing features. These secondary features can be efficiently blackboxed to make room for the core features. By requiring less attention, they allow the user to focus more on what is important. For instance in Gephi, the search feature has autocompletion. No one ever argued that it makes Gephi more opaque, because here it is secondary, while in Google Search it is not. Applied to secondary features, blackboxing can contribute to making a device less opaque.

Convenience

Like blackboxing, convenience is a resource for the designer. Ultimately, the convenience offered by Gephi is where I situate the problem that others see as blackboxing. I have argued that Gephi cannot be significantly less blackboxed than it is, and that its opacity is rooted more in epistemic practices than in its design. Jokubauskaite’s observations are aligned with this analysis. But if Gephi’s design is responsible for incentivizing opaque fast-food practices, it is because of the convenience it offers.

There is a lot to say about technological convenience, notably in relation to surveillance capitalism. From Google to Uber and voice assistants, convenience has been a driving force in our technological environment. See that recent piece, for instance. I will not engage in a general discussion of convenience, but I can see why some scholars place Gephi somewhere on the Uber axis of convenience, albeit quite far away from those highly popular and profitable giants. Indeed Gephi offers convenience, and convenience is problematic. But that is not where it ends.

People are not fools, not even Gephi users. When repurposing Gephi as a push-button visualization tool, researchers do not ignore that they trade quality for convenience. They could hardly ignore it, considering all the steps we have intentionally left in their way for that specific purpose. But the better reason is simpler: they are not fools. Clearly the tradeoff is worth it, which means that Gephi is still convenient enough despite its somewhat frustrating user interface. Users are not puppeteered or tricked by Gephi; they willingly trade for convenience. And they are right to do so, because unlike Google and Uber, Gephi does not use convenience as currency. It is convenient because a convenient tool is more useful. And we even sacrificed some convenience to resist opacity. People have their own reasons to trade away quality for time, and I do not consider myself responsible for the existence of digital glitter. As I already argued, that is a losing battle against human nature. But convenience has a positive side, one that not all scholars have an interest in supporting.

Something clicked when I read, in Jokubauskaite’s work, that “the interviews showed Gephi to be used in somewhat of project-to-project cases.” It is a core strength of Gephi: you can engage with it without much background knowledge, use it with some degree of success, and move on to something else. It is convenient enough to support a form of disposable use. Now, the question becomes: what is the quality obtained from this low level of engagement? Is this form of fast-food consumption good for science? I suspect that to Gephi’s critics, this practice is seen as detrimental. On the contrary, I think it is beneficial. I believe this is the main sticking point about Gephi.

I suspect the core disagreement is about how to maintain a high scientific quality. I am in the business of enrolling a wide crowd of people into a better engagement with data (network data). Expert researchers who critique Gephi are in the business of demarcating good science from bad. They see me as a glorified enthusiast; I see them as gatekeepers. We do not have to agree. Just note that I stand behind Gephi for the same reason they criticize it: in the name of better science, not in spite of it.

I stand behind Gephi’s convenience because it supports low-risk, low-reward research strategies. And we are in great need of those. Digital traces are a fickle material. Unlike the census or poll data so often used in sociology, we do not master the conditions of their production. There is an inherent risk in investing time and energy in them. So we must adopt more flexible and iterative research designs. I am personally inspired by agile methods, but mixed methods also propose a relevant framework. This comes with more iterations, hence less time for each iteration. It also comes with more heterogeneous data, requiring a wider palette of skills, methods, and tools, hence less time to dedicate to each tool. We did not choose that situation, but we must adapt regardless. The situation calls for tools that we can use at a lower cost, even if the outcome is more superficial. These tools are used early on, when monitoring and exploring are the main goals of engaging with data. Once relevant data of good quality is identified, which is never given in advance, the research design can move to more traditional forms with more specialized tools. This agility is what Gephi’s convenience achieves, and it is beneficial to science.

Note that Gephi can also be used in high-risk, high-reward settings, but that is a different use, and other similar tools are probably better suited to it (Cytoscape, Ucinet…). However, even in this context, users have an interest in transferring their skills from a low-engagement context to a high-engagement context. Once you have learned Gephi, you have an incentive to use it again. Surveying the Gephi user base showed us that most users move from small networks, easy to interpret, to much larger networks, requiring more skills.

Mobile, repurposable tools like Gephi disrupt research brokers. Some researchers seek a position as obligatory waypoints, to defend the quality of their field, and/or to gain academic power, thus funds, and thus freedom. I can understand that. But Gephi empowers enthusiasts, which inherently drains influence away from them. I believe today’s beginners are tomorrow’s experts. I believe that playing leads to understanding, that repurposing a tool cultivates critical thinking, that high levels of engagement start with low levels of engagement. I have a Brechtian ideal of knowledge: I see research brokers as Brecht’s (imaginary) Galileo, who kept knowledge captive in order to preserve power. I think this is why Gephi meets academic gatekeeping. Convenience empowers people that academia sees as illegitimate (data journalists, activists, students…). But I think that it is fine to suck at Gephi; this is how you learn, and how, ultimately, you make better science.

“Science knows only one commandment — contribute to science.”

B. Brecht, in Life of Galileo


