Situating Visual Network Analysis (PhD thesis)

Situating Visual Network Analysis is my PhD thesis, just accepted for defense, scheduled for June 1 at 12:00 CET in Copenhagen. I will provide additional details when I have them.

In the meantime, here is the manuscript for download if you want to take a look:

Download the manuscript (PDF)

Abstract

Visual network analysis (VNA) is the practice of analyzing networks by visual means. In this dissertation, I account for this practice and the techniques involved by focusing on force-directed node placement algorithms, the most popular strategy for drawing network maps. I explore the question of what we see when we look at networks, address some of the criticism faced by network visualization, and reflect on the role of the layout algorithm in the visual mediation of the network’s topological structure. My argument unfolds in six theses: (1) VNA consists of practices that are only partially determined by the graph-drawing and data-visualization literature; (2) some visualizations, including network maps, prompt a visual inquiry into the meaning of emergent patterns as contributing to their apparent self-evidence; (3) for historical reasons, the graph-drawing literature mainly promotes an interpretation regime adapted to small networks (diagrammatic), while practices partially shifted in the 2000s to large networks (topological interpretation regime); (4) some issues with reading network maps can be attributed to the misalignment between our visual cognition and the computational standpoint, notably the notion of a group; (5) the existing justifications of algorithm designers do not provide a compelling explanation of what we see in networks; and (6) the literature on community detection focuses on clear-cut clusters, while force-driven placement algorithms make visible other non-clear-cut community structures.

As a technical mediation, Gephi shifts your goals (and vice-versa)

5 min. read

We had to cut this text from a paper we wrote with Emilija Jokubauskaitė, but I find it useful, so I share it here, slightly reworked. We borrow Bruno Latour’s (1994) explanation of technical mediation and provide examples about Gephi. Latour’s approach draws on Gilbert Simondon’s thinking on the technical object (1958) and, more recently, on Madeleine Akrich (1993). It posits the existence of a script, a program, attached to the object; and that program is determined neither by the practices alone nor by the instrument alone. It helps us make a very important point: Gephi influences its users by shifting their goals.

Have you been tempted to share a Gephi screenshot even though other people would not understand it? Gephi’s image-production abilities can shift its users’ goals.

Latour (1994) observes that there is a “materialistic” and a “sociological” version of the question about whether (and how) tools influence us. In a “materialistic” perspective, “each artifact has its script, its ‘affordance’.” This version emphasizes the role of material-semiotic features over practices. And there is a “sociological” perspective where the tool is a “neutral carrier of will that adds nothing to the action.” This version emphasizes practices and rhetorics, and puts the blame on people. As Latour notes, “the two positions are absurdly contradictory.” Analyzing technical mediations requires getting out of these false alternatives. When it comes to the criticism of data visualization, I find for instance that Johanna Drucker’s statement that “graphical tools are a kind of intellectual Trojan horse” (2011, 2014) sounds absurdly materialistic (more quotes in this post). Latour offers four meanings for mediation: “translation”, “composition”, “reversible blackboxing” and “delegation.”

Translation refers to the ability of the user-tool assemblage to displace the goals of both the user and the tool. The goals of researchers using Gephi may differ both from the goals of unequipped researchers and, symmetrically, from the goals of Gephi not in use. Producing images that can circulate may emerge as a goal without being a goal of Gephi. “Responsibility for action must be shared among the various actants” (Latour, 1994). Let me explain. Gephi was conceived mainly as an exploration device. Even the image-export features were aimed at exploration: their purpose was to print images so as to annotate and discuss networks on paper, which is often better than the screen. We have stated it multiple times (Bastian et al., 2009; Jacomy et al., 2014). Yet I have seen countless Gephi screenshots on Twitter or in people’s slides, shared with an audience who cannot understand what they mean because they lack context and/or graphic design. Yet these images produce an effect. For instance, they showcase technical mastery, or the complexity or massiveness of the data. And that narrative might even be unintentional, which I call storyletting: letting your visualizations tell their own story, because you share them without a narrative. Producing such images is not Gephi’s goal, and I assume that it is not the researchers’ goal either. These images are brought into existence by the exploration process and are legitimately stored as traces of that process, the same way Lada Adamic and Natalie Glance (2005) featured the famous figure 1 of Divided They Blog to document their process. These screenshots are not intended as marketing assets, but they may become marketing assets as they circulate. This shift in their significance is what Latour calls “translation.”

Composition refers to the fact that the tool enables certain things. Even if we attribute agency to the researcher and not to Gephi, what the equipped researcher is able to do is made possible by both what the researcher can do and what Gephi can do. Like translation, this meaning of mediation emphasizes the symmetry between the tool and its user. For instance, Gephi allows seeing clusters, which are not visible in the raw list of nodes and edges; but of course Gephi symmetrically needs you to see the clusters.

Reversible blackboxing refers to the fact that the tool’s transparency is relative and ever-changing. When a device breaks, it has to be unblackboxed to be repaired, and can get blackboxed again to be put back in use. The archetypal example comes from Merleau-Ponty’s (1962) phenomenology of perception: the blind man’s stick. When in use, that tool is embodied. The blind man ceases to feel it in his hand. It becomes part of his body, and he feels through it. Similarly, your glasses work insofar as you do not see them; if you see them, then they are dirty and that is a problem. Of course you need to look at your glasses to clean them: you remove them, you disembody them, you open the black box for maintenance. And when you put them back on, you return to blackboxing. Similarly, a researcher may consider Gephi a black box at the moment of using it, as usage requires a form of convergence between the tool and the user. The same researcher may open the black box at a later moment, when it comes to writing down the method in a publication. There is a moment to discuss the algorithms you use, and there is a different moment when you want the algorithms to work. Researchers navigate between these moments. Gephi may be more or less blackboxed in different situations, at different moments, or to different people.

Delegation, the “most important” meaning of mediation according to Latour (1994), and the one built on the previous three, refers to the ability to shift signification. An apparatus may play a rhetorical role, but it also has a meaning that is part of no discourse. Technical objects “act, displace goals, and contribute to their redefinition.” By shifting goals, the technical mediation ultimately shifts meanings. I find it a good explanation of why Gephi is sometimes understood as a machinery to produce images. Of course visualization is its purpose; but that is intended for exploration: visualize for yourself. Yet Gephi has somehow become a way to produce images for other people. Instead of wondering whether to blame Gephi’s program of action or its user community, which would be returning to the materialistic and sociological versions of technical mediation, we can now understand that the [researcher+Gephi] composite’s new goals have shifted the meaning of Gephi towards the production of circulable images, if not marketing assets. This does not mean that one cannot intervene on Gephi’s design or community, but it reminds us that something intended by neither its designers nor its users can nevertheless emerge from the interactions with the instrument.

References

Adamic, L. A., and Glance, N. (2005) The political blogosphere and the 2004 US election: divided they blog, Proceedings of the 3rd international workshop on Link discovery, pp. 36-43.

Akrich, M. (1993) Les formes de la médiation technique, Réseaux, 60, pp. 87-98.

Bastian, M., Heymann, S. and Jacomy, M. (2009) Gephi: an open source software for exploring and manipulating networks, Third international AAAI conference on weblogs and social media.

Drucker, J. (2011) Humanities approaches to graphical display, Digital Humanities Quarterly, 5(1), pp. 1-21.

Drucker, J. (2014) Graphesis: Visual Forms of Knowledge Production (Cambridge, MA: Harvard University Press).

Jacomy, M., Venturini, T., Heymann, S. and Bastian, M. (2014) ForceAtlas2, a continuous graph layout algorithm for handy network visualization designed for the Gephi software, PLoS ONE, 9(6).

Latour, B. (1994) On Technical Mediation, Common Knowledge, 3(2), pp. 29-64.

Merleau-Ponty, M. (1962) [1945] Phenomenology of Perception, transl. Colin Smith (New York: Humanities Press).

Simondon, G. (1958) Du mode d’existence des objets techniques (Paris: Aubier).

State of the conversation at the FOSDEM open research tools and technologies devroom

40 min. read, 8 min. for the TL;DR version.

Faces of FOSDEM (CC-BY-SA Diégo Antolinos-Basso)

The FOSDEM is a major event of the open-source community, especially in Europe. FOSDEM stands for “Free and Open source Software Developers’ European Meeting” and is volunteer-organized each year at the Université Libre de Bruxelles. It looks like Diégo‘s pictures of the crowd just above: diverse, joyful, exciting – despite the frequent winter rain. It usually features a main track in the impressive Janson auditorium (1,500 seats) and about 50 so-called devrooms (parallel tracks) in smaller teaching rooms. In 2020 and 2021 I participated in the creation and organization of the Open Research Tools and Technologies devroom (follow it on Twitter). After its two years of existence, I offer here an overview of what our speakers presented.

The devroom is aimed at developers and users of open tools and technologies working in a context of knowledge production such as scientific research, investigative journalism, NGO fieldwork, etc. We accepted 19 presentations in 2020 and 22 in 2021. It is worth noting that the 2020 edition took place just before COVID hit Europe and was in person as usual, while the 2021 edition featured pre-recorded talks with live Q&As. On the positive side, this made it easier for people far from Bruxelles to participate. We had 17 male and 4 female speakers in 2020, and 19 male and 5 female speakers in 2021. The 41 talks featured 20 demos; 11 speakers shared their story as developers, and 3 as users. I obviously cannot share all the interesting things these speakers have said, so I focus on recurrent arguments, but you can watch the original presentations on video. As a consequence, purely technical talks are underrepresented in my report. This is not a statement, just a pragmatic editorial choice.

I delineated 9 distinct topics developed by multiple speakers. These topics are not completely separate; they largely overlap in ways that should be clear enough. Understand them as landmarks to get oriented in the discussion. The two central topics are open science and the question of research tools and academic currency, as one could expect considering the focus of the devroom. Seven other topics branch off from these two central discussion points: fears about open-sourceness; the reproducibility crisis in science and how to face it; open digital notebooks; issues with data; empowerment and activism; funding issues and the precariousness of open-source projects; and issues of developers working in lab or newsroom cultures. I will develop each topic starting with a short summary, then detail what different speakers bring to the discussion. For a quick overview, just read the first paragraph of each section, marked as TL;DR (too long; didn’t read). If you want to dive into the speakers’ videos, read the subsequent paragraphs and follow the links. I highlight the most important presentations (in my opinion) in the topic they focus on the most, even though all of them touch multiple topics. I intentionally restrict my links to the FOSDEM talk pages, from where you will find much more information and links about each speaker and project.

Note about vocabulary: FLOSS means “Free/Libre, Open-Source Software”, a notion I often shorten here into open source or just openness. But mind two important nuances: (1) freeness and openness are not the same thing and do not always go together; and (2) freeness has the double meaning of free as in free beer, and free as in free speech (hence the term “libre” borrowed from French). To a large extent, the former is just a means to obtain the latter.

Open science

TL;DR: Open science is more than just the use of open-source tools in science. It is also about the publishing infrastructure: open access to papers, data sets, and software artifacts; and it is also about cultures and practices. As opening the code is only useful to those who can read it, annotation and documentation are crucial to researchers. Moreover, technical friction and pain points are major obstacles to the adoption of FLOSS. Usability is a central need, yet it is difficult to justify and fund in current academic culture. But some speakers propose ideas to intervene on research practices directly, by promoting more transparent and reflexive ways to work with data, and ways to reinvent scientific collaboration through transculturation. Most researchers seek openness, and the open-source movement has a lot to offer by replacing the current closed-by-design, open-as-an-afterthought academic publishing infrastructure with open-by-design solutions such as Software Heritage, DSpace, or PubPub. However, this requires a political mobilization beyond the open-source movement.

Roberto Di Cosmo, a computer science full professor at University Paris Diderot, starts his presentation of the Software Heritage initiative, which he leads, with the “three pillars of open science”: open-access repositories, open-data repositories, and open-source repositories. He articulates the different needs of different actors. First, researchers need to archive and reference software used in articles, to find useful tools, to get credit for developed software, and to verify, reproduce, and improve their results. Second, laboratories or teams need to track software contributions and produce reports and web pages. Third, research organizations need to know their software assets and measure the impact of their software production. This is one of the reasons why it is so important that research software artifacts get archived, referenced, described, and credited. These needs are answered by Software Heritage, a long-term, non-profit, multistakeholder initiative with the ambitious goal of collecting, preserving, and sharing all publicly available source code, protecting our software commons, in collaboration with UNESCO. In his talk, Roberto also shows how we can use and benefit from Software Heritage in a research context.

Travis Rich, executive director at the Knowledge Futures Group, presents PubPub, the open-source publishing platform he helped create. We’re not “there” yet, he says: despite having orders of magnitude more open-source software than 20 years ago, open and fair access to knowledge is not available to everyone. Open source is not enough. PubPub received a lot of support because there was only a small set of industrial tools, all closed, and there was a strong demand for open publishing. In the process of meeting that demand, the PubPub team discovered multiple open-source projects that had tried the same but had become either hard to maintain, slow to adapt, or just outdated. Many non-technical, mission-oriented groups just wanted to use open-source tech for philosophical and ethical reasons and were willing to pay for it. Although PubPub is open source, its functioning depends on relations with third-party services that are not necessarily open source (e.g. Google Scholar), which requires maintenance and sometimes payment. This hints at a bigger problem: we do not have models for building and sustaining digital infrastructure that serves as a reliable, affordable, and accessible public utility. Which is why Travis and his team founded the Knowledge Futures Group, an independent non-profit dedicated to the production and maintenance of digital infrastructure as a public utility. Travis highlights that if our real goal is a fair distribution of power, then technical aspects are not enough: institutional alliances and a sound funding scheme are also necessary, which requires some degree of political mobilization (on that last point, see also Markus Suhr and Marcel Parciak’s presentation).

In a similar spirit, Bram Luyten presents DSpace, an open-source repository software package typically (though not only) used for creating open-access repositories for scholarly content. Bram presents himself as an “open-access advocate” and is a co-founder of Atmire, a service provider for DSpace. He argues that although open access accelerates scientific progress, it is broken when it is conceived as a hybrid of closed access. Besides, pre-prints are not the solution because they lack peer review. This is where DSpace is useful. Like any project, it needs developers and contributors; yet it is the most successful institutional repository platform, in part because it was localized very early (China, Taiwan) and released under an MIT license.

Emmy Tsang is innovation community manager at eLife, a non-profit organization and peer-reviewed open-access scientific journal for the biomedical and life sciences. Emmy shares her community-driven approach towards open innovation for research communication. Like Travis and Bram, she notes that internet publishing is broken – even though it was created by scientists to share knowledge. The goal of eLife, she says, is to move from a slow, expensive, closed, and draining publication process to a fast, cheap, open, and user-friendly one. Like Bram, she defends open-by-design as opposed to closed-by-design later made open. Indeed, closed systems propagate design biases (e.g. diversity biases) and produce unusable research, because it is hidden behind paywalls and inaccessible in various ways to different publics. The ambition of eLife is to offer open, inclusive, and user-centric research communication tools to the community. In that perspective, Emmy presents the Reproducible Document Stack (RDS), eLife’s solution to produce papers where the code and results are reproducible.

Yo Yehudi is open-source tech lead for data for science and health at the Wellcome Trust, a charitable foundation focused on health research. She lists important challenges for computational research: the lack of computational tools for research, the lack of incentives to draw, retain, and reward talent, and the insufficient trust in computational work. She pinpoints a paradox with tools in research: sometimes research is too narrow and the tools that researchers need do not exist; and sometimes there are too many tools and standards, and the fragmentation of that landscape is a problem for researchers. Yo explains that Wellcome’s goal is to fund the tools, talent, and trust researchers need – it is a science funder. She showcases two open-source projects Wellcome has funded: Afrimapr and OpenSAFELY. Like Travis, she argues that code is only part of the problem, almost secondary: we need maintenance money, we need to justify doing documentation for users and developers, to fund accessibility, to build a community, and to spend time on improving the user experience. Those are all justifiable software costs! she insists.

Not all speakers are developers; some are tool users. Maya Anderson-González is a researcher in computational social sciences and digital humanities, and she narrates how she used FLOSS tools “and lived to tell the tale.” She explains how she got acquainted with open-source principles, the issues she faced, and how she built confidence through exposure and training – her project was to visualize and analyze a Twitter network of FOSDEM 2020. She comments on the importance of documenting one’s own process, as well as accessing other people’s processes. In this setting it is valuable to share work-in-progress reflections and results, which naturally leads to the reflexivity of open research (which Maya found intimidating at first). In traditional science, collaboration can be coded in specific ways, for instance around seniority. But using tools designed for community participation changes existing collaboration processes. She concludes on the usefulness of the concept of “transculturation,” which Maya sees as what FLOSS developers do with social science researchers to create an open science culture.

From the presentation of Maya Anderson-González, FLOSS meets Social Science Research (and lived to tell the tale)

Erik Borra, Stijn Peeters and Bernhard Rieder are assistant and associate professors at the media studies department of the University of Amsterdam. The three of them have the hybrid profile of humanities scholar and tool developer, having offered us many of the Digital Methods Initiative tools to analyze online platforms, such as Netvizz (Facebook), TCAT (Twitter) and 4CAT (4chan, Reddit and more). They remind us that just opening the code is only useful to those who can read it, which is why annotation and documentation are key to making the software open. They highlight three issues. First, our relationships with large platform companies are changing, a phenomenon known as “the APIcalypse.” Netvizz, for instance, was a victim of Facebook’s policy change. The new situation offers a few new opportunities, but it does not work the same way and it sparks a debate about scraping (its legality and ethics). The second issue is privacy and legal compliance (Europe’s new GDPR regulation): new requirements, securing access to the data… how to react to these challenges in the design of our tools? Third, the web has been changing over recent years. Platforms started deplatforming users, and some platforms may disappear entirely (e.g. Parler, even though it is now back online). We need to understand these elements before reacting to them and helping our users. Because of this ever-changing landscape, they say, the role of the research engineer must be expanded: one foot in research, one foot in software, but also one foot in software education and one foot in strategic planning. On the bright side, Erik, Stijn and Bernhard just obtained a Dutch grant from the platform Digital Infrastructure for the Social Science and Humanities to start addressing these issues (their project is named CAT4SMR).

Still on the tool-making side but for the natural sciences, Aniket Pradhan presents NeuroFedora, a Linux distribution dedicated to neuroscience that he contributes to. Emmy Tsang, Yo Yehudi and Maya Anderson-González have highlighted the importance of usability. The problem NeuroFedora solves is exactly of that kind. Many powerful packages exist to help the neuroscientist, but they are often difficult to install, which is a major obstacle to their adoption. NeuroFedora embeds those packages, alleviating the technical pain even in cases where the documentation is lacking or the installation is skill-demanding, making relevant tools available in a way that just works for the user. It also serves open science by publicizing tools. The NeuroFedora team consists of about 20 contributors, only 5 of whom have a neuroscience background.

Lilly Winfree is product owner of the Frictionless Data for Reproducible Research project at the Open Knowledge Foundation. As Aniket Pradhan’s contribution highlights, sometimes what it takes to promote openness just consists of unglamorous tasks such as writing an installation script. In the realm of data management, the “boring” tasks are to clean the data, check their quality, document their origin, or find their license. This “friction” is an issue for scientists, data journalists and more. Lilly presents the specification and open-source toolkit known as Frictionless Data, aimed at alleviating that friction. Its ambition is to bring reproducibility to the process of transforming messy data into clean data, and then into hosted data. The central concept is called a “data package”: typically a metadata file (with an optional schema) documenting your usual CSV data file, which can be validated automatically. Also check Carles Pina Estany’s demo of the Data Package Creator.
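To make the idea concrete, here is a minimal sketch of that workflow, assuming the frictionless Python package and hypothetical file names; refer to the Frictionless Data documentation for the exact API of the version you use.

```python
# Minimal sketch of the "data package" idea: describe a CSV file with metadata
# and a schema, then validate it automatically later on.
# Assumes the `frictionless` Python package; file names are hypothetical.
from frictionless import describe, validate

# Infer a data package (metadata and a table schema) from a plain CSV file.
package = describe("observations.csv", type="package")
package.to_json("datapackage.json")  # the shareable metadata file

# Later, or on someone else's machine, check the data against its package.
report = validate("datapackage.json")
print(report.valid)  # True if the data still matches its declared schema
```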

To complement these, let me mention a few points from other talks I will return to. Julia Sprenger remarks that for many researchers, publication comes first, and then maybe the software is released later on, even though publishing code improves scientific results (she gives an example). Sébastien Rochette similarly observes that, in a hackathon he organized, researchers were reluctant to expose their project; but they ultimately changed their practices. Finally, different speakers such as Karthik Ram reflect that open-source tools are not properly valued in laboratory culture; for instance, many researchers do not know how to cite software or pick a license for their production. This situation is caused by the overwhelming importance given to peer-reviewed publications in academia.

Research tools and academic currency

TL;DR: Even though software impacts all modern research, and even though research tools shape scientific practices, publications remain the prime currency in science. Developing software does not contribute directly to publication, which leads to multiple issues. Tool making is difficult to fund and justify, if not simply considered a side occupation. Refactoring and maintenance are undervalued, and releasing code is not a priority. Research software exists as an identified object in academia, but research institutions lack incentives to attract and retain talent and improve the situation. This situation might be explained by the different perspectives of the actors involved. Researchers do not know how to cite software properly; tool makers struggle to promote and get credit for their production; laboratories do not know how to track and report the software contributions of their members; and research institutions are unaware that they have software assets that improve their impact. Yet there is some hope, as initiatives such as the Journal of Open Source Software arise to promote software in the academic sphere.

Karthik Ram, from the Berkeley Institute for Data Science, is a contributor and editor of the Journal of Open Source Software (JOSS). He starts with the observation that even though software impacts all modern research, we still don’t know how to cite it. For instance, how to cite the specific version used? Yet we need to be able to peer-review software, among other things (see the picture below). To promote a tool in the current academic system, one must publish a software paper (or “proxy paper”). As a peer-reviewed publication in an existing journal, it is easy to cite and does not require changing the academic infrastructure; but it requires writing said paper (additional work, and of a different nature), many journals do not accept software papers, and besides, static authorship is not appropriate for collaborative tools. As Karthik notes, the Jupyter team is an example of the gap between the contributors to the versions used in practice and the authors of the software paper (which is significantly older). The JOSS is an answer to these issues, based on the idea of hacking something around what exists, because we cannot change the whole ecosystem at once. It is free, open, and developer-friendly: if good practices are respected, a paper can be written in one hour. It consists of a high-level description of the tool, a simple citable object for the paper. This is as conventional as possible in the scholarly space: it uses ORCID for login, and archives papers with Portico. Check Karthik’s talk for more information; I found it captivating from start to finish.

From the presentation of Karthik Ram, The Journal of Open Source Software: Credit for invisible work.

Julia Sprenger is a doctoral student in electrophysiology at the Research Centre Jülich. Publications are the currency in science, she says. Time spent developing a tool is an issue for a researcher, because it does not directly contribute to publication. Refactoring and maintaining code are not valued as academic work, and it is difficult to fund software development. She also observes a trust issue: self-made software is seen as the right thing for small projects, while commercial software is perceived as the better option for complex projects. As a consequence, the classic thing researchers do is to ensure publication first, and then maybe the software is released after that. Julia comments that making errors is taboo in science. She provides an example of how publishing code contributes to scientific progress, but notes that in most cases, software development is de facto a side occupation. Julia offers ideas to improve the situation. First, how to help scientists as a software developer: comment, provide feedback and advice, advertise projects in dev communities, and make it easy for scientists to reuse your tools (easy to install and compile, good documentation). Small projects die when people leave science, she reminds us. Second, how to help as a scientist: use existing open-source tools, don’t start from scratch, and make sure that your project outlives your career.

Teresa Gomez-Diaz is a CNRS research engineer at the Gaspard-Monge Computer Science laboratory. She shares her two-decades-long experience in a lab with a substantial software production in various domains. Teresa was recently tasked with making an inventory of the tools produced by her lab. Unsurprisingly, some tools were not identified (no dates, author lists, or licenses). It was even unclear, she says, what counted as the “lab’s software.” Who decides? And who makes other choices, such as picking the license? The lab had to clarify its policies, which raised the question of the value of its software production. Teresa has observed similar problems in many other laboratories, and on multiple levels: scientific, legal… To help face these issues, Teresa offers a practical definition of “research software”: a well-identified set of code that has been written by a well-identified research team, i.e. software that has been built and used to produce a result published or disseminated in some article or scientific contribution. Fifteen years ago, it was not possible to publish software papers; it is now possible to promote software production, although the dissemination procedure is a problem (also check Karthik Ram’s talk on that point). On that level, Teresa recommends separating the evaluation of research from that of software. She distinguishes four attention points. (1) Citation, i.e. whether the research software is well identified as a research output. This is a legal point: who are the authors, what are their affiliations and participation percentages? (2) Dissemination. Are best practices followed? This is a policy point (about open science) and a legal point (licenses). (3) Usability. Are computations correct? Is the tool reliable and easy to install and use? This is a reproducibility point. (4) Research, i.e. the quality of the scientific work embedded in the software, and related publications. This is a research point about ensuring and measuring impact.

Teresa Gomez-Diaz’s four attention points nicely complement Roberto Di Cosmo’s remark that different actors have different needs: the researcher needs to archive and reference software used in articles, find useful tools, and get credit for developed software; the laboratory needs to track software contributions, produce reports and web pages; while the research organization needs to know its software assets, and measure the impact of its software production.

Technological means are not a secondary question in science. Erik Borra, Stijn Peeters and Bernhard Rieder remind us, as scholars and tool developers, that tools shape practices. Software for the humanities is different from that for the computational sciences because the needs are different; for instance, media studies require the epistemic flexibility to “follow the medium.” Yo Yehudi similarly argues that the lack of computational tools for research is due to the difficulty of attracting and retaining talent in the laboratory, which boils down to the fact that code is not paper. The way software is valued, promoted and funded in science impacts knowledge production in ways that are well worth understanding.

I am ultimately surprised by the close connection between the issue of research tools’ value and status in academia and the question of open science. It might be a bias of the FOSDEM community, but the points made by the speakers suggest that notions of openness are imported from the tech world into science, which seems a reasonable idea to me. My own observations certainly go this way. Note that, facing the “broken” state of the academic publication infrastructure (to reuse Emmy Tsang’s word), Bram Luyten and Travis Rich both proposed open-source tools (DSpace and PubPub) before moving to institutional consolidation (Atmire and the Knowledge Futures Group, respectively). Academic culture is naturally inclined to sharing knowledge freely, which resonates with the values of open-source software, but the legal and political tools (e.g., licenses) and the practices are different. We see that the discussion about openness has moved from purely technological questions to wider political issues in academia, such as publication infrastructure, modes of collaboration, and funding.

Fears

TL;DR: Some speakers reflect on fears of open source in academia. Errors are taboo in science and trust issues develop. Some researchers are afraid to publish their code (fear of being judged, of having bugs exposed), misrepresent themselves as non-coders even though they produce code (e.g. R scripts), or even hide that they code (thinking it could hurt their funding prospects). Others conceal their code to retain intellectual property. Beyond emphasizing that judgmentalism is harmful to the community, the good practices of FLOSS development offer guidelines to mitigate those fears: open source your code from day one, make your tools discoverable, mind the license, and define responsibilities.

Mateusz Kuzak, research software community manager at the Netherlands eScience Center, tells us why researchers are afraid of putting their code in the open (see picture below). They are afraid of being judged for their “crappy code,” of bugs being discovered, and of getting “scooped.” Mateusz co-authored a paper offering practical solutions to these fears, titled Four simple recommendations to encourage best practices in research software (DOI: 10.12688/f1000research.11407.1). Those are (1) open source your code from day one, (2) make your tools discoverable, (3) mind the license, and (4) define responsibilities (more explanations in his presentation).

From the presentation of Mateusz Kuzak, On the road to sustainable research software.

Mateusz Kuzak is not the only one to take note of fears in science. Julia Sprenger observes that making errors is taboo in science, and that researchers do not trust open-source software for large projects. Maya Anderson-González narrates how, as a user, she was scared off by installation issues to the point of switching to more usable tools, unable to find appropriate help in her network. Yo Yehudi also highlights the importance of usability and observes that some researchers may want to hide that they code, because they think that it might be a problem when seeking funds. Finally, Sébastien Rochette reports on researchers having trouble accepting the exposure of their coding projects. He tells us that it is important that the open-source community be welcoming and indulgent towards coding researchers.

Facing the reproducibility crisis

TL;DR: Researchers might fail to replicate their peers’ experiments for many reasons, which is a known and major issue of experimental knowledge. Some of the most preventable reasons are missing data and underdocumented experimental details, like information hidden in hardware and software. As multiple speakers argue, it is, at the core, a data management issue where the open-source community has a lot to offer.

Lilly Winfree and Jan Grewe (I will return to him) comment on the importance of open-source software in addressing the reproducibility crisis (or “replication crisis”). Lilly focuses on solving the underlying data management issue with Frictionless Data, while Jan gives the example of researchers who cannot reproduce each other’s work because some tiny details are not mentioned in the papers (information hidden in the settings of hardware and software), which motivated him to develop his own solution (the tool Relacs). Similarly, Emmy Tsang highlights the importance of this issue for eLife, which is why they developed RDS, the Reproducible Document Stack, a technology they use to ensure that published code and results are reproducible. Indeed, computational research requires extra steps to ensure reproducibility (a point also made by Thibault Lestang and Offray Luna). And as Teresa Gomez-Diaz remarks, the validation of scientific results is one of the ways to give value to software produced in research.

Jan H. Höffler founded ReplicationWiki, a database of empirical studies documenting the availability of replication material and of replication studies. His project supports transparency by making more scientific material available, which improves the quality of empirical social science. ReplicationWiki is open source and based on MediaWiki, with some adaptations. Jan shares the challenges he faces, which are not only technical: he seeks contributors and funding to help him in his public-utility endeavor.

Reproducibility, like validation and the improvement of existing results, is part of researchers’ needs, as Roberto Di Cosmo tells us. As with ReplicationWiki, addressing it is a core mission of Software Heritage, as well as of various kinds of notebooks such as Nicolas CARPi’s eLabFTW software.

Notebooks

TL;DR: Notebooks are, generally speaking, appreciated by the research community for their ability to improve reproducibility. Researchers rely on them to mix text, data, and visualizations, during writing as well as publishing. We all know about Jupyter notebooks, but many other tools have a similar perspective, half a dozen of which were presented by their respective developers. They insist on the importance of openness to disseminate high-quality knowledge (reproducible, verifiable, and circulable).

The topic of notebooks is obviously connected to that of reproducibility, as it is one of their main purposes. “Notebook” might even be too restrictive a term, as some of the devices that allow a reproducible mix of text, code, data and visualization (which I call here hybrid) only loosely resemble the Jupyter archetype. This is the case of Org-mode, a set of functionalities that live inside GNU Emacs and can be used to bundle software, data and figures into one single executable plain-text document, as presented by Thibault Lestang, a computational physicist turned research software engineer. Or eLife’s Reproducible Document Stack presented by Emmy Tsang, which is oriented towards hybrid online publishing.

In the natural sciences, the laboratory notebook (where experimental situations and results get logged) becomes digitized as an object known as an “ELN” (electronic lab notebook). Nicolas CARPi is an engineer at Institut Curie and the creator of eLabFTW, an open-source ELN solution. Where is the data of an ELN hosted? And what if the company hosting it disappears? he asks. On the level of data security and durability, an open-source project is preferable. The development of eLabFTW is community-driven, and it can be hosted on your own network (you own the data), respecting the standards of secure software. His presentation features more information and a demonstration. In addition, Niels Cautaerts, an experimental materials scientist and eLabFTW user, presents his own usage of and experiments with it. He showcases a project leveraging the eLabFTW Python API to print QR codes to streamline some lab procedures.
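As a rough illustration of that kind of workflow (not Niels’s actual code), the sketch below pulls database items over an eLabFTW instance’s HTTP API and renders a QR-code label for each. The base URL, endpoint path, and item-page URL scheme are placeholders to adapt to your own instance; it assumes the third-party requests and qrcode Python packages.

```python
# Illustrative sketch: turn eLabFTW database items into printable QR labels.
# URL paths and the auth header format below are placeholders; check your
# instance's API documentation for the real endpoint and token format.
import requests
import qrcode  # pip install qrcode[pil]

BASE_URL = "https://elab.example.org"  # hypothetical instance
API_KEY = "..."                        # personal API token

resp = requests.get(
    f"{BASE_URL}/api/v2/items",        # placeholder endpoint
    headers={"Authorization": API_KEY},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json():
    # Encode each item's web address so scanning the label opens its record.
    url = f"{BASE_URL}/database.php?mode=view&id={item['id']}"  # placeholder URL scheme
    qrcode.make(url).save(f"label_{item['id']}.png")
```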

In the social sciences and humanities, the notebook is less about logging and more about disseminating. Robin de Mourat presents FONIO and Ovide, a content-editing solution for the social sciences (footnotes, bibliographic references, internal links…). Antoine Fauchié, a PhD student, presents Stylo, a user-friendly text editor for humanities scholars. It offers content structuring and multiple publishing formats (PDF, XML, HTML…) from only one source document, while keeping a simple and usable interface. Offray Luna, hacktivist and designer, and Santiago Bragagnolo, software engineer and researcher, present a similar project known as Grafoscopio. It is a notebook tool aimed at supporting reproducible research: visualizing and editing text in a tree fashion; supporting a mix of code, data, visualization, and text; and exporting as HTML, LaTeX, or PDF. It is intended for science but also data journalism and activism. Grafoscopio is aimed at bridging fields that have similar needs: research, civic hacktivism, data feminism… Offray calls it a “pocket infrastructure” because it is simple, self-contained, extensible and offline-first (a major concern for the global south), yet you only have to download one thing (the Grafoscopio tool). Like Erik Borra, Stijn Peeters and Bernhard Rieder, he asks: how do we change the tools that change us? This “we” is a call for bridging communities, for instance through workshops.

Data issues

TL;DR: Data accessibility, transparency and accountability not only improve the quality and reproducibility of academic papers and investigative journalism, they also support the security we need to protect our privacy or other sensitive data such as whistleblower leaks. Moreover, data sustainability is a major concern in research, where data are relevant over a much longer term than the infrastructure supporting them (formats, institutional actors). As a response, multiple speakers advocate decentralized data storage (web3). And storage is not the only issue, because we also have to make data verifiable in practice, which prompts new design goals for the tools we use. Here the FAIR principles are useful: findable, accessible, interoperable, and re-usable data.

Markus Suhr and Marcel Parciak are research associates in medical informatics at the University Medical Center in Göttingen. They reflect on the “dreaded black box” of (mostly) proprietary software that lies at the center of the information flow in the field of medicine (they give an example). As medical data is sensitive, security is a primary concern. To empower the patient with a transparent and accountable workflow, they say, we need a political campaign for free and decentralized software in healthcare. Travis Rich, Emmy Tsang and Yo Yehudi have similarly emphasized that technical issues are only a small part of the question.

Michael Hanke, who describes himself as a full-time informagician and real-life psychologist, presents DataLad, a decentralized digital object management system. Its core idea is that a dataset is a Git repository. With this radical approach, the data would not disappear if DataLad died, because it relies on uncompromised decentralization (as Julia Sprenger says, “small projects die when people leave science”). DataLad exists because a single repository is not enough in science: additional features are necessary, such as utilities for metadata, provenance capture… DataLad brings convenience and simplification while respecting the core principle that a dataset is also just a Git repository.
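Here is a minimal sketch of that principle, assuming the datalad Python package exposes its CLI commands (create, save, clone, get) as functions in datalad.api; the paths and URL are hypothetical.

```python
# Sketch of "a dataset is a Git repository", assuming DataLad's Python API
# mirrors its CLI commands; paths and URL below are hypothetical.
import datalad.api as dl

# Create a dataset: under the hood this is (and stays) a Git repository.
ds = dl.create(path="my-study")

# After writing files into my-study/, record them with a message,
# much like a Git commit with data handling attached.
ds.save(message="Add raw measurements")

# Elsewhere, the dataset can be cloned like any Git repository,
# and file content fetched on demand.
dl.clone(source="https://example.org/my-study.git", path="my-study-clone")
dl.get(path="my-study-clone/raw/measurements.csv")
```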

Data decentralization is in fact a major topic when it comes to security issues. Anne L’Hôte and Bruno Thomas mention in their presentation that the “leaks” data analyzed at the International Consortium of Investigative Journalists (ICIJ) cannot be hosted in the cloud because the investigation needs to be protected. Molly Mackinlay, lead of the IPFS Project and Filecoin Network team, presents a decentralized infrastructure for the web known as “web3.” The web3 ecosystem is turning centralized applications into decentralized protocols. It is a movement to make the web more decentralized, verifiable, and secure. The key element in all this is to make data verifiable. IPFS, standing for “Inter-Planetary File System,” aims at verifiably addressing and distributing content across a peer-to-peer network. Molly presents more elements of the web3 stack, like libp2p and Filecoin, a decentralized storage network (and protocol) and payment mechanism.
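To give a feel for what “verifiable addressing” means, here is a conceptual sketch of content addressing in Python. It is not the IPFS protocol itself (which involves multihashes, Merkle DAGs and a peer-to-peer network), only the core idea that data is addressed, and thus verifiable, by the hash of its own content.

```python
# Conceptual sketch of content addressing (the principle behind IPFS-style
# verifiable data), not the IPFS protocol itself.
import hashlib

store: dict[str, bytes] = {}  # stand-in for a distributed network of peers

def put(content: bytes) -> str:
    """Store content under the hash of the content itself (its 'address')."""
    address = hashlib.sha256(content).hexdigest()
    store[address] = content
    return address

def get(address: str) -> bytes:
    """Retrieve content and verify it: the address proves what we received."""
    content = store[address]
    assert hashlib.sha256(content).hexdigest() == address, "tampered content"
    return content

cid = put(b"network data, 2021 snapshot")
print(cid, get(cid))
```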

The presentation of Frictionless Data by Lilly Winfree pinpoints that the “boring” parts of data management, such as cleaning, picking a license, checking quality, crediting sources, ensuring interoperability and documenting attributes, cause a problematic friction. Indeed, we need to validate the data in a reproducible way. But Erik Borra, Stijn Peeters and Bernhard Rieder point to other frictions with data in science, such as the legal and ethical issues with scraping, or the question of privacy and legal compliance. They ask how to react to these challenges in the design of our tools, for instance with encryption, or automatic upload to dedicated hosting services – it seems to me that this question should connect with the debate about data decentralization. On that matter, Sébastien Rochette‘s reminder of the FAIR principles is useful: Findable, Accessible, Interoperable, and Re-usable data. See also Datasette, a multi-tool for exploring and publishing data, aimed at data journalists, museum curators, archivists, local governments and anyone else who has data that they wish to share with the world. It is presented by its creator Simon Willison, also a co-creator of the web framework Django.

Empowerment and activism

TL;DR: Solidarity and transparency are core values of the open-source community, echoing a similar inclination in science and journalism. Some speakers highlight the social benefits of FLOSS, like public accountability and inclusiveness, in contrast to closed systems propagating design biases. Openness is a way to counterbalance the tech world’s lack of diversity and its focus on the industrialized world as a market. Open-source tools can empower various publics (minorities, voters, medical patients), but some speakers remind us that technical aspects are not enough to ensure a fair distribution of power, notably when it comes to funding, which raises wider political questions.

Damien Marié is a developer and member of Regards Citoyens, a French NGO that lobbies for open parliamentary data. Regards Citoyens and Sciences Po, France’s main school of political science, co-developed La Fabrique de la Loi, a data infrastructure and web platform retracing in detail the law-making work of parliamentarians. Damien presents this tool and tells us that it has been a force pushing for open data in France, to the point that it has now been institutionalized by the French Senate.

Indeed, technology empowers. The open-source community is aware of that and discusses who needs to be empowered, and how. Xavier Coadic, biohacktivist, reflects on reverse-engineering as a way to reclaim power over existing technology. Damien Marié sets out to empower the citizen, and Guillaume Plique the social scientist. Yo Yehudi says that the tech world lacks diversity and that technology should be used to bridge communities. Markus Suhr and Marcel Parciak want to empower the medical patient. Emmy Tsang and eLife aim at empowering “people and communities.” For Travis Rich, the real goal is a fair distribution of power, a sustainable digital infrastructure as a public utility, and as he says, we are not there yet. Like many other speakers, he reminds us that technical aspects are not enough: we also need political mobilization, institutional alliances, and a better funding scheme.

In the same spirit but in a different direction, some speakers mention publics with different digital equipment standards and/or practices. Santiago Bragagnolo and Offray Luna proposed the idea of “pocket infrastructure,” taking into account that for some publics, like the global south, we cannot assume a permanent internet connection. Albert Yumol, a data activist based in the Philippines, shows us his repurposing of open data to investigate socio-economic indicators. His presentation highlights the importance of open-source technologies in “underrepresented countries,” notably when it comes to data activism. Albert showcases a supervised machine-learning approach to predict the income classification of urban and rural areas in the Philippines, based on OpenStreetMap features and drawing data from the Humanitarian Data Exchange (HDX). It is worth noting that Offray Luna is based in Colombia, that we failed to find the funds for him to come to Bruxelles in 2020, and that Santiago Bragagnolo consequently had to give the talk on his behalf. In contrast, thanks to the 2021 edition being virtual, Albert Yumol was able to give his talk and Q&A from the Philippines.
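To illustrate the general kind of approach Albert describes (this is not his actual pipeline), here is a sketch that predicts an area’s income class from counts of OpenStreetMap features with scikit-learn; the CSV file and its columns are hypothetical.

```python
# Illustrative sketch: supervised classification of areas into income classes
# from OpenStreetMap feature counts. Data file and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# One row per municipality: OSM feature counts plus a known income class.
df = pd.read_csv("osm_features_by_area.csv")
X = df[["n_roads", "n_buildings", "n_schools", "n_markets", "n_clinics"]]
y = df["income_class"]  # e.g. labels derived from official statistics

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=300, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```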

Funding and precariousness

TL;DR: Many speakers tell us that their project needs contributors: the FOSDEM is also a place to attract developers. Open-source initiatives are generally precarious: small teams depending on one lead developer, with dependencies on projects in similar positions. Moreover, attractiveness depends mostly on the tool chain (e.g. Python is more attractive than C++), while behind-the-scenes work is unattractive and difficult to fund. Speakers remind us that open source does not need to be a hobby, and that the movement needs options for long-term support. But successful projects share what has worked for them: OpenRefine attracted contributions and became sustainable by reaching out to neighboring communities, improving localization, creating a steering committee, and building institutional alliances. RawGraphs launched a successful crowdfunding campaign, which not only provided resources but also built institutional alliances and engaged users to gather feedback.

Jan Grewe, neurobiologist and tool developer, reflects on the good and the bad sides of developing open-source tools for neuroscience. His tool, Relacs, is maintained by a small team, and all maintainers depend on the main developer. Moreover, its dependencies have the same issue: just a few maintainers. Jan remarks that the attractiveness of a project depends on its tool chain (for instance, Python is more attractive than C++). And developing the graphical user interface (GUI) is more attractive than behind-the-scenes work. Jan argues that open source does not need to be a hobby; it does not imply that one cannot make a living out of it. He concludes that the way open source works does not always align with the way scientists want to use it, and that FLOSS needs options for long-term support.

Many speakers mention a situation like the one Jan Grewe describes. Julia Sprenger attests that it is difficult to find resources for refactoring and maintenance, and simply to fund software development in a research culture where it is considered a side occupation. As a consequence, she adds, small projects die when people leave science. Bram Luyten mentions that, like any project, DSpace needs developers and contributors. The same goes for ReplicationWiki, says Jan H. Höffler. And as we hear from Yo Yehudi, code is only part of the problem, almost secondary: projects need maintenance money, justification for writing documentation for users and developers, accessibility improvements, and community building. As a science funder, Wellcome addresses these issues in the health sector, she says. But other speakers share a number of other solutions.

The popular data visualization tool RawGraphs succeeded in raising funds through a crowdfunding campaign. Giorgio Uboldi, designer and co-founder of the studio Calibro, presents RawGraphs and shares his insights on the process. The tool was born from the need to create complex, non-conventional visualizations, and up to 2019 it was mostly a side project. The crowdfunding campaign was launched in 2019 as a response to the project not being sustainable anymore. Giorgio tells us that the campaign was not just a way to get funds, but also to build institutional alliances and to engage users to gather feedback. Check Giorgio’s talk for more information about the campaign.

Antonin Delpeuch, OpenRefine developer and PhD student, presents this data-wrangling tool and how it was revamped. How to attract contributions to make a sustainable project? Antonin shares what worked for the OpenRefine team. They reached out to neighboring communities. They improved localization using a tool named Weblate. They started a W3C community group to improve the “reconciliation API.” They created a steering committee whose role is to decide who to partner with, how to get funding, etc. They applied to the Google Summer of Code and Outreachy programmes, and they revamped the architecture of the tool. Some questions remain open though: How to introduce breaking changes without disrupting the ecosystem of extensions? Which tasks to leave for new contributors to pick up?

Developers in another culture

TL;DR: Some speakers reflect on what it means to be part of a laboratory or newsroom culture. Beyond observing that developer activities are rarely recognized as productive, they reflect on the different needs one finds in such cultures. For example, in the social sciences, researchers may favor rich description over modeling; in investigative journalism, security is a major constraint; and in both cases, the specialists need to understand the analytical steps offered by the tools they use. Making informed choices requires technological transparency, and user experience is crucial to a public that is not always acculturated to advanced computing. Not everyone wants to learn Python, and applications are a great way to provide such publics with cutting-edge techniques, but this requires increased attention to design, which is yet another field to mobilize in tool making. Hybrid profiles are essential to bridging these different areas: open-source developers can rarely afford to be just developers.

I mentioned the culture clash experienced by developers in science culture when I developed the topic of research tools and academic currency, with Julia Sprenger and Teresa Gomez-Diaz commenting on the many misunderstandings about the practice of tool making and the value of software. But there is more to it, as different speakers show by accounting for what Maya Anderson-González calls the transculturation of development and science.

Guillaume Plique, research engineer at the Sciences Po médialab, endeavors to empower social scientists with web-mining tools. He asks: how to teach researchers web technologies? Like Erik Borra, Stijn Peeters and Bernhard Rieder, he recalls the importance of scraping, as opposed to crawling, to deal with the APIcalypse. He argues that Jupyterizing researchers is not a solution, because it’s OK to not want to learn Python “sometimes.” Yet web mining is a demanding skill that researchers can rarely afford to master, hence the necessity to make tools, even though this requires the contribution of designers and a trade-off between usability and scalability. Guillaume offers a demo of two of his tools: Artoo.js, a client-side scraping companion, and Minet, a web-mining CLI tool and library for Python.

Anne L’Hôte and Bruno Thomas are developers at the ICIJ, the International Consortium of Investigative Journalists. In their presentation and demonstration of Datashare, the tool used to deal with the ICIJ “leaks” data (Luxembourg Leaks, Panama Papers), they highlight the specific constraints applying to technology in this context. Indeed, security is a major issue, and no data can be hosted in the cloud because the investigation needs to be protected. And similarly to the research context mentioned by Guillaume Plique, usability and user experience are crucial because the investigators are not computer scientists.

Sébastien Rochette, data scientist, R consultant and marine biologist, shares his experience of helping researchers transform a series of scattered analyses into a documented, reproducible and shareable workflow. There is a big step from coding for yourself to sharing with a community, he notes. Mentoring at the start of the project was very beneficial to the researchers, who were reluctant to accept the exposure of the project (they feared being judged). Sébastien comments on the importance for the open-source community of being welcoming and indulgent towards researchers, as they have to adapt their practices on contact with open-source projects, which might be the most important thing. Let us also recall Julia Sprenger’s recommendations on that matter: developers can help researchers by commenting, providing feedback, advertising projects in the tech communities, and improving usability; while scientists can help developers by using existing open-source tools, not starting from scratch, and making sure that their project outlives their career.

Erik Borra, Stijn Peeters and Bernhard Rieder, as developers and humanities scholars, highlight the necessity of bridging academic culture with the tool development culture. They promote the use of “recipes”: series of analytical steps, some of which require an interpretation from the researcher, allowing him or her to make informed choices about the research design. They call for the role of the research engineer to be expanded, not only from software to research, but also to education and strategic planning. Indeed, the role of the research engineer is hybrid by nature.

In the same spirit, Robin de Mourat, research designer at the Sciences Po médialab, tells us about his professional experience in a hybrid lab and reflects on what interdisciplinary contexts do to tool development, not only as a developer, but also as a designer and scholar. He focuses on a case where a tool is redeveloped by a hybrid collective to answer new needs. From his standpoint, redesign meetings can be seen as a battlefield where participants have diverging attachments (see picture below). The original designers want to respect the original intent; the developers want to prevent recoding avalanches; the teachers want to ensure that they can adapt their courses; the researchers want to rediscuss the methodology; the information specialists want to bring their expertise to the methodology; and the mediators want to take actual practices into account. But crucially, each person is not in only one role, but two or more. Participants are hybrid actors who face inner contradictions due to their multiple attachments.

From the presentation of Robin de Mourat, Developing from the field: Shifting design processes and roles between makers and practitioners around research tools development within an interdisciplinary research lab.

Demos, dev stories and user stories

A few words about the tech-oriented content of the devroom. Many speakers present an open-source project, either as the main focus of their talk, or as a ground for critical reflection. In these tool-oriented presentations one finds demonstrations as well as stories from developers, and sometimes from users, which is always appreciated.

20 open source tools were demonstrated: Advene, a tool to annotate videos in Digital Humanities; Artoo.js, a client-side scraping companion; DataLad, a distributed data management system; Datasette, a multi-tool for exploring and publishing data; Datashare, the tool used to deal with the ICIJ “leaks” data; eLabFTW, a digital solution for electronic lab notebooks; Frictionless Data, and notably the Data Package Creator; Gazouilloire, a command-line tool for long-term tweet collection; HyBro, a web crawler for the social sciences; La Fabrique de la Loi, a datafication of the French law-making process; Minet, a web-mining CLI tool & library for Python; NeuroFedora, a Linux distribution dedicated to neuroscience; OpenRefine, a reproducible data wrangler; Org-mode, a set of functionalities that live inside GNU Emacs; Pandoræ, a data exploration and analysis tool; RawGraphs, an open-source visualization tool and framework; RECITAL, a digital humanities project on Italian comedy; Shrivelling World, a tool to represent geographical time-spaces; Stylo, a text editor for humanities scholars; and the Software Heritage platform.

11 speakers offered testimonies about their experience as open-source project contributors: Aniket Pradhan with NeuroFedora; Benjamin Ooghe-Tabanou with Hyphe and HyBro; Bernhard Rieder, Erik Borra and Stijn Peeters with their tools for social media research TCAT, 4CAT, and Netvizz; Giorgio Uboldi with RawGraphs; Jan Grewe with Relacs, a tool dedicated to electrophysiological recordings; Karthik Ram with the Journal of Open Source Software; Michael Hanke with DataLad; Nicolas CARPi with eLabFTW; Robin de Mourat with too many experiments and tools for me to list here; Sébastien Rochette, who narrated his experience in a hackathon; and Travis Rich with PubPub, the open-source publishing platform.

Finally, 3 users shared their experience with open-source projects: Albert Yumol with OpenStreetMap and the Humanitarian Data Exchange (HDX); Maya Anderson-González presented a micro project of visualizing and analyzing a Twitter network about FOSDEM 2020; and Niels Cautaerts presented his experience with eLabFTW.

Thank you to the co-organizers of the Open Research Tools and Technologies devroom, with whom I was super happy to collaborate: Diégo Antolinos-Basso, Paul Girard, Célya Gruson-Daniel, Achilleas Koutsou, Michael Sonntag, and Lilly Winfree. 😊

The public of the open research tools & technologies devroom, FOSDEM 2020.
(CC-BY-SA Mathieu Jacomy)

In short: nuances between Network Science, Social Network Analysis, and Network Analysis

7 min read

Here is another part of my thesis that you may find useful on its own. With academic references.

The two main network-specific fields are social network analysis (SNA) and network science (NS). The scientific literature also mentions network analysis (NA), mostly referring to the practices of analyzing networks, but also, by extension, to their methodological critique (figure below). It is then useful to clarify the nuances between the three notions.

Network science (NS), social network analysis (SNA) and network analysis (NA) are three distinct domains. Although they overlap, each has its own specific knowledge and/or practices; none can be summarized as a combination of the others.

Network analysis (NA) is a set of research practices that progressively stabilized on specific methodological foundations. Although there is a relative consensus on its theoretical ground, its practices are not unified. They include both what Erikson (2013) calls the formalist approach, based on a “structuralist interpretation” (networks are phenomena, e.g. in Georg Simmel’s sociology), and the relationalist approach, that “rejects [the] essentialism” of the network (as an apparatus to know, e.g. in the natural sciences).

Although NA is primarily a practice, we can also see it as a field; a field about a practice. And this practice is much older than its formalization as a field. The overview proposed by Borgatti et al. (2009) places the point of origin of NA within the social sciences (with Moreno’s sociograms, 1934), before it “radiated into a great number of fields, including physics and biology” during the nineties. For these authors, “network analysis” is not a field but a longstanding practice progressively formalized into SNA, and later, NS. However, other authors acknowledge it as an independent field (Brandes and Erlebach, 2005; Chiesi, 2015), with its own methodological knowledge derived from graph theory, and its own theoretical discussions (e.g. Barnes & Harary, 1983; Butts, 2009). Even so, NA is centered on practice. Brandes and Erlebach, for instance, find it “adequate to treat network analysis as a field of its own” (2005). But they add that “[f]rom a computer science point of view, it might well be subsumed under ‘applied graph theory,’ since structural and algorithmic aspects of abstract graphs are the prevalent methodological determinants in many applications, no matter which type of networks are being modeled.” Similarly, for Chiesi, NA “can be regarded as a set of techniques with a shared methodological perspective” (2015). NA is part of other fields, including NS and SNA, as a practice. But NA as a field is additionally concerned with the foundations of this practice. It has its own intellectual and cultural space.

Social network analysis (SNA) predates both NS and NA. Indeed, the network is a key idea in multiple schools of thought in the social sciences, from Moreno’s sociograms (1934) to White’s kinship models (1963), Milgram’s “six degrees of separation” (1967), Lévi-Strauss’s structural anthropology (1973) and Granovetter’s “strength of weak ties” (1973). These long and rich considerations on the relational nature of the social coalesced into the field of SNA. In accordance with this thick heritage, the field sustains an in-depth discussion on the empirical nature of the networks it studies, and pays close attention to the various methodological issues tied to the use of its instruments.

Network science (NS) emerged during the late nineties as a “highly interdisciplinary research area” (Börner et al., 2007; see also Barabási, 2016; Hidalgo, 2016) around the object of the complex network. Graph theory is generally presented as its point of origin, and more precisely the random graph model (Erdős & Rényi, 1960). As scholars across various disciplines realized that their empirical networks were usefully described by the newly formalized concept of the complex network, the theories of NS disseminated as an operational toolkit for analyzing networks. It is worth mentioning that the appearance of the web, and later social media, provided plenty of network data that called for a democratization of network analysis methods. Epistemic clashes in network science features an in-depth inquiry into the epistemic foundations of the field, including a presentation of its key concepts and an analysis of its main controversy.

Network science (NS), social network analysis (SNA), and network analysis (NA) are three distinct domains. Such similar names are unfortunate, because they downplay important differences. I acknowledge the deep entanglement of the fields; my attempt to make the distinction is not about enforcing a clear demarcation between them. I rather aim at clarifying the fringe of knowledge and practices that are, in each field, incompatible with the other two. Indeed, despite their important overlap, key specificities subsist, which notably explain why NA resists dissolving into NS and/or SNA.

Differences between NA and SNA. They boil down to the fact that, from the perspective of NA, a network is a set of inscriptions. NA is concerned with networks-as-data: it can be used with potentially any data set, as long as it is formatted as a network. Thus it differs from SNA on two notable points: (1) NA is also interested in non-social networks (e.g. protein-gene interactions), and (2) it is not concerned with the gap between a social phenomenon and its reduction as a network. Since SNA is about networks-as-phenomena, it is deeply concerned with the part of reality that is left out of datafied networks; while for NA the datafied network is a given. This is not to say that networks-as-data can dodge the ordeal of empirical validity, but rather that this discussion takes place outside of NA, in the field where the empirical data come from (e.g. molecular biology).

Differences between NA and NS. Contrary to NS, NA aims at just describing networks. NS is a broad field with its own subcultures and practices, relatively united around the notion of the complex network; it comes with its own research questions, such as: can we find universal laws capable of explaining the pervasiveness of complex networks? NA, by contrast, is quite agnostic in terms of research questions. It focuses instead on how to describe and account for a given network. Analyzing networks is part of what network scientists do, of course, but NA also extends beyond the domain defined by the research questions of NS. An idiographic account of a particular network does not typically meet the publication criteria of NS, unlike those of media studies (e.g. Aragón et al., 2013), sociology (e.g. Adamic and Glance, 2005), or digital humanities (e.g. Grandjean, 2016). As an example, the journal Network Science (Brandes et al., 2013) publishes many different types of papers, some of which are case-based, but none of them are just empirical accounts.

The overlap between NA, NS and SNA. While the three fields aim to accomplish different things, they overlap in how they deal with networks. The algorithms and metrics used are largely the same, despite some specificities in each field. On a practical level, the three fields can easily meet. For instance, the computational social sciences (Lazer et al., 2009) bring together (1) the figure of the complex network and the practice of modeling, characteristic of NS, (2) the knowledge of the social developed by SNA, and (3) the empirical practice of NA. The techniques developed by NS disseminated to SNA, but as Freeman (2008) narrates, some methods also travelled the other way around. This important overlap makes it tempting to simplify the situation by assuming that one of the fields subsumes or otherwise contains the other two (pick your favorite!). But doing so would just make us blind to the epistemic trouble caused, at the fringe of each field, by its peculiarities.

References

Adamic, L. A. and Glance, N. (2005) ‘The political blogosphere and the 2004 U.S. election: divided they blog’, in Proceedings of the 3rd international workshop on Link discovery (LinkKDD ’05). Association for Computing Machinery, New York, NY, USA, 36–43. DOI:https://doi.org/10.1145/1134271.1134277

Aragón, P., Kappler, K. E., Kaltenbrunner, A., Laniado, D. and Volkovich, Y. (2013) ‘Communication dynamics in twitter during political campaigns: The case of the 2011 Spanish national election’, Policy & internet, 5(2), 183-206.

Barabási, A. L. (2016) Network Science. Cambridge University Press.

Barnes, J. A. and Harary, F. (1983) Graph theory in network analysis, Social networks, 5(2), 235-244.

Borgatti, S. P., Mehra, A., Brass, D. J. and Labianca, G. (2009) ‘Network analysis in the social sciences’, Science, 323(5916), 892-895.

Börner, K., Sanyal, S. and Vespignani, A. (2007) ‘Network science’, Annual review of information science and technology, 41(1), 537-607.

Brandes, U. and Erlebach, T. (2005) ‘Introduction’, in: Brandes U., Erlebach T. (eds) Network Analysis. Lecture Notes in Computer Science, vol 3418. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-31955-9_1

Brandes, U., Robins, G., McCranie, A. and Wasserman, S. (2013) ‘What is network science?’, Network science, 1(1), 1-15.

Butts, C. T. (2009) ‘Revisiting the foundations of network analysis’, Science, 325(5939), 414-416.

Chiesi, A. M. (2015) ‘Network Analysis’, International Encyclopedia of the Social & Behavioral Sciences, 518–523. https://doi.org/10.1016/B978-0-08-097086-8.73055-8

Erdős, P. and Rényi, A. (1960) ‘On the evolution of random graphs’, Publ. Math. Inst. Hung. Acad. Sci, 5(1), 17-60.

Erikson, E. (2013) ‘Formalist and relationalist theory in social network analysis’, Sociological Theory, 31(3), 219–242. https://doi.org/10.1177/0735275113501998

Freeman, L. C. (2008) ‘Going the Wrong Way on a One-Way Street: Centrality in Physics and Biology’, Journal of Social Structure, 9(2), 1–15.

Grandjean, M. (2016). ‘A social network analysis of Twitter: Mapping the digital humanities community’, Cogent Arts & Humanities, 3(1), 1171458.

Granovetter, M. S. (1973) ‘The strength of weak ties’, American journal of sociology, 78(6), pp.1360-1380.

Hidalgo, C. A. (2016) ‘Disconnected, fragmented, or united? a trans-disciplinary review of network science’, Applied Network Science, 1(1), 1–19. https://doi.org/10.1007/s41109-016-0010-3

Lazer, D., Pentland, A., Adamic, L., Aral, S., Barabási, A. L., … and Jebara, T. (2009) ‘Computational social science’, Science, 323(5915), 721-723.

Lévi-Strauss, C. (1973) Anthropologie structurale deux. Paris: Plon, 33-34.

Milgram, S. (1967) ‘The small world problem’, Psychology today, 2(1), 60-67.

Moreno, J. L. (1934) Who shall survive?: A new approach to the problem of human interrelations. Nervous and Mental Disease Monograph Series, no. 58. Nervous and Mental Disease Publishing Co. https://doi.org/10.1037/10648-000

White, H. (1963) An Anatomy of Kinship: Mathematical Models for Structures of Cumulated Roles. Prentice Hall, Englewood Cliffs, NJ.

Hidden structures in hairballs, and how to see them

15 min. read

At the moment, I am writing my PhD dissertation. As this part of my draft can be useful to those who work with network maps, I reproduce it here right away. I propose an example-based exploration of the Gestalt approach to the semiotics of network maps, and I explain why and how we can see structures in hairball networks. My argument is essentially visual. For an overview of Gestalt theory, see Wagemans et al., 2012. On Gestalt and network visualization, see Bennett et al., 2007; Kobourov et al., 2015.

When it comes to network visualization, the most important Gestalt principle is perceptual grouping. “Historically, the visual phenomenon most closely associated with perceptual organization is grouping: the fact that observers perceive some elements of the visual field as ‘going together’ more strongly than others.” (Wagemans et al., 2012). Multiple factors influence what we perceive as groups, but two of them stand out in our situation: proximity and closure. In short, you may perceive a set of dots more or less as a textured shape (figure 1), provided that they are distributed homogeneously, that the contour draws a recognizable shape, and that these shapes are well separated. Technically, your perception works the other way around: you associate the dots because they are close, this interplay of proximity and distance makes you see a contour, and then you associate it with a known shape. This process is not perfect, however. The shapes we see have some degree of ambiguity, and different people may perceive different groups.

Figure 1. A homogeneous distribution of dots can be perceived as a group with a given shape. These shapes may be ambiguous, and may be perceived differently by different people.

The topological structure is mainly mediated by the node placement, to the point that some authors propose not to display the edges at all (Noack, 2007). Here I will simplify the problem by considering that nodes are represented by dots of the same size and color. Size and color are known visual variables (in the sense of Bertin, 1967), but they also influence the perception of groups (Wagemans et al., 2012) and should be accounted for in a complete perceptual model of network maps. Here, however, I only touch on this subject. My main concern is our perception of node groups, which we intuitively interpret as topological clusters. Gestalt theory provides tools to discuss this intuition.

As a starting point, let me emphasize the importance of gaps in our perception of groups. Gaps are places where the continuum of proximities breaks. We see groups when (i.e. because) there are gaps between them. We need gaps to separate clusters visually (figure 2). If the gaps are too small or nonexistent, we do not perceive different groups. Different people will agree more easily on groups with big gaps than on groups with small gaps. Big gaps make visual groups less ambiguous and easier to see. This is, Gestalt theory says, how our cognitive system works.

Figure 2. The same set of three groups of dots may be perceived as a single group if they are close enough.

Unfortunately, computations do not follow the same principles as human vision. Although a force-directed node placement algorithm makes, in some sense, groups, it does not care about the same gaps. Intuitively, the eye looks at the gaps border-to-border, while the algorithm cares about the distance between middle points, between statistical averages (figure 3). When there are large gaps, the eye and the algorithm agree. But when the gaps are small or nonexistent, it is possible that the algorithm “sees” a gap where the human eye does not.

Figure 3. A visual intuition of the disagreement between the algorithm and the eye. Human vision looks at borders, while the algorithm cares about barycentres.
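To make this intuition concrete, here is a toy computation of my own (a sketch, not part of the original experiment, assuming only NumPy): two point clouds whose barycenters are clearly apart even though their borders nearly touch. The gap the algorithm cares about (distance between averages) stays large while the gap the eye looks for (border-to-border distance) shrinks to almost nothing.

```python
# A toy illustration (my own sketch, assuming NumPy): two point clouds whose
# barycenters are clearly apart even though their borders nearly touch.
import numpy as np

rng = np.random.default_rng(0)
cloud_a = rng.normal(loc=(0.0, 0.0), scale=1.0, size=(200, 2))
cloud_b = rng.normal(loc=(2.5, 0.0), scale=1.0, size=(200, 2))

# The "gap" the algorithm cares about: distance between barycenters.
barycenter_gap = np.linalg.norm(cloud_a.mean(axis=0) - cloud_b.mean(axis=0))

# The "gap" the eye looks for: smallest distance between points of different clouds.
pairwise = np.linalg.norm(cloud_a[:, None, :] - cloud_b[None, :, :], axis=-1)
border_gap = pairwise.min()

print(f"barycenter-to-barycenter distance: {barycenter_gap:.2f}")  # stays around 2.5
print(f"border-to-border distance:         {border_gap:.2f}")      # close to zero
```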

As an experimental illustration of this phenomenon using networks, I will use a planted partition model, also called a stochastic block model. Here I build a network with two groups of nodes, and create links between the nodes by following a statistical rule. For two nodes in the same group, I create a link with a probability Pin. For nodes in different groups, I use a probability Pout. As long as Pin is bigger than Pout, each group is bound to form a cluster in a topological sense (e.g. modularity). I choose Pin and Pout so that Pout is the smaller of the two, and their sum equals 100%. When Pin is large, the clusters are well-defined. When Pin is 50%, the community structure has entirely disappeared and we just have a random network (indeed Pin = Pout = 50%).
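For reference, here is a minimal sketch of this setup using networkx (this is not the exact code behind the figures; group sizes and seed are arbitrary). It also computes the modularity of the planted partition, to confirm that a topological structure remains as long as Pin > Pout.

```python
# A minimal sketch of the planted partition experiment (assuming networkx).
# Two groups of 50 nodes; Pin + Pout = 100%.
import networkx as nx
from networkx.algorithms.community import modularity

for p_in in [0.9, 0.8, 0.7, 0.6, 0.5]:
    p_out = 1.0 - p_in
    # l=2 blocks of k=50 nodes each.
    G = nx.planted_partition_graph(l=2, k=50, p_in=p_in, p_out=p_out, seed=42)
    planted_groups = [set(range(50)), set(range(50, 100))]
    q = modularity(G, planted_groups)
    print(f"Pin={p_in:.0%}  Pout={p_out:.0%}  modularity of planted groups: {q:.3f}")
```

At Pin = 50% the modularity of the planted partition should drop to roughly zero, which matches the observation that the community structure has disappeared. The generated networks can then be spatialized with any force-directed layout (in Gephi, Force Atlas 2).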

I generate a series of networks with a decreasing probability Pin (figure 4). As expected, the groups that are the most separated topologically (Pin high) are also the most separated visually. As Pin gets closer to 50%, the two groups start to merge. At 70%, the general contour starts to look like a circle, but there is still a gap. At 60%, there is no gap anymore. Yet there is still a topological structure. Indeed, the layout correctly positions the nodes in the right group, but the groups are stuck to each other. We still see the groups because we have colors, but from the node positions alone, we would not perceive any clusters. Gestalt theory says why: we need gaps and contours to perceive groups.

Figure 4. Planted partition networks: nodes in the same group are connected with probability Pin; nodes in different groups, with probability Pout. Each group has a distinct color. The groups that are the most separated topologically are also the most separated visually. But when Pin is low, the node placement does not display a gap. Layout: Force Atlas 2, default settings.

You may think that at a Pin of 60%, the two groups are too entangled to be considered distinct. There is some sense to this point, yet it does not change the fact that there is some topological structure, and more importantly, that the force-directed layout is able to display it, even though we do not see it. It displays it in the sense that nodes of the same group get placed next to each other. The layout algorithm is so consistently successful at retrieving these groups (figure 5) that we cannot deny their existence. But we do not see them because there is no gap. Here I only give a visual argument, but we could quantify it.

Figure 5. Planted partition networks with Pin= 60%. Layout: Force Atlas 2, default settings.
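One rough way to quantify it is sketched below, under stated assumptions (networkx's Fruchterman-Reingold spring layout as a stand-in for Force Atlas 2): after the layout, nodes of the same planted group should sit closer together, on average, than nodes of different groups, even though no visual gap remains.

```python
# A rough quantification sketch (mine, not from the thesis): compare average
# same-group vs different-group distances after a force-directed layout.
import itertools
import networkx as nx
import numpy as np

G = nx.planted_partition_graph(l=2, k=50, p_in=0.6, p_out=0.4, seed=1)
pos = nx.spring_layout(G, seed=1)  # Fruchterman-Reingold, force-directed
group = {n: 0 if n < 50 else 1 for n in G.nodes}

intra, inter = [], []
for a, b in itertools.combinations(G.nodes, 2):
    d = float(np.linalg.norm(pos[a] - pos[b]))
    (intra if group[a] == group[b] else inter).append(d)

print(f"mean same-group distance:      {np.mean(intra):.3f}")
print(f"mean different-group distance: {np.mean(inter):.3f}")
```

If the first number comes out consistently smaller than the second across random seeds, the layout does retrieve the groups, gap or no gap.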

You may also think that we actually see the two groups, even without the colors. Try your hand at telling apart a planted partition with Pin = 60% from a random network in figure 6, using solely the node placement. I doubt you see any gaps, and without them, the groups do not appear. But this does not mean that there is no locality principle: close nodes may still be, on average, more connected.

Figure 6. Six of these networks are planted partitions with Pin = 60%, and six are random networks with a connection probability of 50%. Layout: Force Atlas 2, default settings.
Answer: the random networks are, in reading order, 3 to 5 and 7 to 9.

Hidden structures in hairballs

Many authors have blamed network visualization, and notably force-directed layouts, for producing hairballs (e.g. Correa and Ma, 2011; Van Den Elzen and Van Wijk, 2014). Hairballs are typically networks such as those of figures 5 and 6, with a “significant node occlusion and link crossings that can almost completely fill the inter-node space” (Edge et al., 2018). Nocaj et al. (2015) and Edge et al. (2018) propose sparsification approaches based on reduction to subgraphs to tackle this specific problem, and Dianati (2016) a pruning approach. I do not deny the practical problem of hairballs (one does not see the structure), and I certainly think that sparsification methods have applications. Yet the hairball is most often a straw man.

The hairball rhetoric is easy to track in the academic literature, since it precisely employs the term “hairball.” The argument is always to blame the layout for failing to represent the structure. But this statement is never grounded in a model of how we perceive network maps, and relies instead on a series of noteworthy assumptions. It assumes that network maps are, to some extent, self-evident. It assumes that the (community) structure is translated by the layout into visual groups. And finally, it assumes that the absence of visible groups is a failure. These assumptions are, in fact, wrong. The layout does not exactly translate the structure into visual groups, but into a locality principle. The difference is the presence of visual gaps: the layout may place same-cluster nodes together in a way that produces no visual gaps, which makes these groups invisible to the human eye. From an ethnographic perspective, this rhetoric is etic: it assesses the algorithm from the outside, using criteria alien to its own functioning. But it does not allow us to predict the behavior of the algorithm, because it misses the way it assembles a visual structure (even though we do not see it). I claim that there are structures in hairballs, and that we may see them if we learn the way the algorithm communicates them. By mobilizing Gestalt theory, I hope to sketch an emic approach to hairballs.

Firstly, not all networks produce hairballs. Here, I assume that the layout algorithm has been properly parametrized (a high “gravity” setting in Force Atlas 2 tends to produce a hairball regardless of the network’s structure, and that is not a true hairball). So, if a network is properly visualized as a hairball, this does tell us something about its structure: it tells us that it is pretty dense. You may find this banal, but it nevertheless discriminates between different structures, so it does mediate the topology. Secondly, network layouts are bad at manifesting visual gaps, because in some sense, gaps do not matter to them. But they are good at placing connected nodes next to each other (on average). It is possible that nodes placed together in the picture, i.e. local areas, have a meaning. It is possible for clusters to be present even in the absence of any gaps. You may think of it as a bunch of clay balls smashed together (figure 7): the local structures are still present, but they touch each other. Now, to be clear, not all hairballs hide such structures. But since we know they may exist, we can check for them. We may for instance reveal them by coloring clusters, as sketched below: if the hairball has a structure, the colors will not be mixed but gathered in coherent localities. Those clusters may be obtained from the data set (categorical node attributes) or from a community detection algorithm, for instance modularity clustering. Edge et al. (2018), for example, claim that hairballs lack “clear separation and grouping,” but in their own example (figure 8, left), the hairball has a clear community structure, visible through the colors, despite the lack of visual gaps.

Figure 7. A metaphor for hairball networks: there may be clusters, but they are smashed together like plastic clay balls.
Figure 8. From left to right, the pruning of hairballs, using the layout Force Atlas 2, as illustrated in Edge et al. (2018, figure 2). “How skeletal community structure emerges from an initial ‘hairball’ graph.”
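Here is a sketch of this coloring check (my own, assuming networkx ≥ 3.0 for Louvain and matplotlib for drawing; the spring layout stands in for Force Atlas 2 or LinLog): color a dense network by detected communities and see whether the colors gather into coherent localities.

```python
# A sketch of the coloring check (assumptions: networkx >= 3.0, matplotlib;
# spring_layout used as a stand-in for Force Atlas 2 / LinLog).
import matplotlib.pyplot as plt
import networkx as nx
from networkx.algorithms.community import louvain_communities

# A dense network with three planted, heavily interlinked groups: a "hairball".
G = nx.planted_partition_graph(l=3, k=40, p_in=0.5, p_out=0.25, seed=3)
pos = nx.spring_layout(G, seed=3)

communities = louvain_communities(G, seed=3)
color_index = {n: i for i, comm in enumerate(communities) for n in comm}

nx.draw_networkx_nodes(G, pos, node_size=30, cmap=plt.cm.tab10,
                       node_color=[color_index[n] for n in G.nodes])
nx.draw_networkx_edges(G, pos, alpha=0.05)
plt.axis("off")
plt.show()
```

If the colors form coherent areas despite the absence of gaps, the hairball has a locality structure; if they are mixed everywhere, it probably does not.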

In practice, the contour of the hairball may tell us about the internal clustering. In figure 9, the classic C. Elegans network (Watts and Strogatz, 1998) is spatialized by the LinLog energy model. As there are no clear gaps, we do not perceive distinct groups in the node placement (9-a) unless we add additional information, such as color (9-b). The relatively homogeneous distribution of nodes is perceived as a weirdly shaped stain (9-c). The shape of this stain mediates the topology, even in the absence of clear groups. Intuitively, some denser groups of nodes may pull the network in different directions, under the action of the algorithm. This stretches the contour in certain directions, and may create bumps and headlands. Intuitively, those are partial clusters that create distinct localities despite being interlinked (hence the absence of visual gaps). I do not offer any proof of this statement here, but you can check, in the example below, that the clusters found by modularity maximization (colors in 9-b) match the elongation and protrusions in the contour of the network (9-c).

Figure 9. The neural network of C. Elegans (Watts and Strogatz, 1998) spatialized by the LinLog energy model. (a) No color: no clear gaps allow us to separate groups visually; it can be considered a hairball. (b) Nodes colored by modularity clustering (modularity = 0.346). Three homogeneous areas appear, showing a locality principle. (c) Poles and elongation highlighted. Intuitively, the modularity clusters correspond to the ends of the shape that pull it apart.

I propose to call these pseudo-clusters “poles,” as they tend to appear on the sides (figure 10). The lack of visual separation is meaningful: poles are not only linked, but also weakly separated. A number of nodes lie in between the two poles, creating an ambiguous area with no clear divide. The poles themselves, however, may be sufficiently dense to be considered topological landmarks. From a clustering perspective, one may say that each pole strongly defines a weakly delineated cluster. The existence of the cluster is robust, but its limits are ambiguous. The pole itself, however, acts as a local anchor. In figure 11, the same community detection algorithm (Blondel et al., 2008) was applied several times to the same network as in figure 9. I used the Gephi implementation, which is non-deterministic (Lambiotte et al., 2008). You can observe that the poles always end up in different clusters, but that the boundaries between clusters are not stable. The existence of clusters around the poles is stable, but their boundaries are not. In other words, each denser zone on the side is non-ambiguously local, but most nodes in the middle are ambiguously connected to the different poles.

Figure 10. Poles are weakly separated clusters that one can detect by looking at denser areas on the sides of a network spatialized by a force-driven algorithm.
Figure 11. Six renditions of the same community detection algorithm. The poles consistently end up in different clusters, but the boundaries are not stable. Note: one rendition found 4 clusters, contrary to the others. Network: C. Elegans spatialized by LinLog.

As we argue in What do we see when we look at networks, the ambiguity of this middle space is a feature of the data. The absence of a visual gap reflects the absence of a clear boundary in the topology. Community detection algorithms are tasked with finding a boundary anyway, and they react to the lack of natural gaps in the topology with a high variance in where they put the limit. From an interpretative standpoint, these boundaries do not deserve much trust, since they are poorly reproducible. Therefore, we argue, the layout is a better reduction of the topology. Indeed, it faithfully reflects the inherent ambiguity of clustering. This assumes that we know how to interpret the layout, of course. And it does not undermine the usefulness of clear-cut categories in various situations where ambiguity is a problem. Yet it is essential to realize that clusters do not exist as separated things, but as a continuous and ambiguous landscape of link density.

Now that we are equipped to understand clustering as, more generally, a matter of locality, we can start to find structures in hairballs. Although the lack of visual gaps prevents us from seeing clusters right away, we can rely instead on the subtleties of the contour and the denser zones on the borders. Hairballs may have non-obvious poles. As a test, we can run a modularity clustering a few times to check whether, and where, it is consistent (a sketch of this test follows figure 14 below). This approach reveals a community structure in some networks (figures 12 and 13) but not all (figure 14). Important note: in this visual experiment, I used the same settings for the community detection algorithm, and I used consistent colors. However, the number of clusters depends on the settings I used.

Figure 12. The hairball network from Venturini et al. (2018), spatialized by Force Atlas 2, with different renditions of a community detection algorithm. The same clusters appear consistently but their boundaries are not consistent.
Figure 13. The hairball network from Cardon et al. (2019), spatialized with the LinLog energy model, with different renditions of a community detection algorithm. The same clusters appear consistently but their boundaries are not consistent.
Figure 14. A random network (500 nodes, 5% chance of connection) spatialized by Force Atlas 2, with different renditions of a community detection algorithm. Found clusters do not map specific areas, and do not characterize the same nodes consistently.
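Here is a sketch of that consistency test (my own, under the same assumptions as above): run Louvain several times with different seeds and measure how consistently pairs of nodes are grouped together, using a simple Rand index. A high pairwise agreement with varying boundaries (as in figures 12 and 13) suggests poles; a noticeably lower agreement (as in figure 14) suggests a structureless hairball.

```python
# A sketch of the consistency check (assuming networkx >= 3.0): run Louvain
# with different seeds and compute the pairwise agreement between runs
# (Rand index: fraction of node pairs grouped the same way in both runs).
import itertools
import networkx as nx
from networkx.algorithms.community import louvain_communities

def cluster_labels(G, seed):
    parts = louvain_communities(G, seed=seed)
    return {n: i for i, part in enumerate(parts) for n in part}

def rand_index(l1, l2, nodes):
    agree = total = 0
    for a, b in itertools.combinations(nodes, 2):
        total += 1
        agree += (l1[a] == l1[b]) == (l2[a] == l2[b])
    return agree / total

G = nx.planted_partition_graph(l=2, k=50, p_in=0.6, p_out=0.4, seed=7)
runs = [cluster_labels(G, seed) for seed in range(6)]  # six renditions
print("clusters per run:", [len(set(r.values())) for r in runs])
scores = [rand_index(r1, r2, list(G.nodes)) for r1, r2 in itertools.combinations(runs, 2)]
print(f"pairwise agreement: min={min(scores):.2f}, max={max(scores):.2f}")
```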

Takeaways

  • We need visual gaps to see groups (Gestalt law of proximity).
  • Force-directed layouts do not care about visual gaps, and may compress a community structure into a hairball. Then we do not see the structure, even though it is there.
  • Force-driven layouts create localities, but they do not always make clear-cut clusters.
  • Localities may be more true to the data than clear-cut clusters.
  • Weakly separated clusters form poles.
  • Poles can be detected by looking at elongated contours and denser areas on the sides.

References

Bennett, C., Ryall, J., Spalteholz, L. and Gooch, A. (2007) ‘The aesthetics of graph visualization’, Proceedings of the 2007 Computational Aesthetics in Graphics, Visualization, and Imaging, pp. 57–64. doi: 10.2312/COMPAESTH/COMPAESTH07/057-064.

Bertin, J. (1967) Sémiologie Graphique. Les diagrammes, les réseaux, les cartes, Paris, La Haye, Mouton, Gauthier-Villars. 2e édition : 1973, 3e édition : 1999, EHESS, Paris.

Blondel, V. D., Guillaume, J.-L., Lambiotte, R. and Lefebvre, E. (2008) ‘Fast unfolding of communities in large networks’, Journal of Statistical Mechanics: Theory and Experiment, 2008(10). doi: 10.1088/1742-5468/2008/10/P10008.

Cardon, D., Cointet, J.P., Ooghe, B. and Plique, G. (2019) Unfolding the multi-layered structure of the French mediascape.

Correa, C. D. and Ma, K.-L. (2011) ‘Visualizing Social Networks’, in Social Network Data Analytics. Boston, MA: Springer US, pp. 307–326. doi: 10.1007/978-1-4419-8462-3_11.

Dianati, N. (2016) Unwinding the hairball graph: pruning algorithms for weighted complex networks. Physical Review E, 93(1), p.012304.

Edge, D., Larson, J., Mobius, M. and White, C. (2018) ‘Trimming the hairball: Edge cutting strategies for making dense graphs usable’, in 2018 IEEE International Conference on Big Data (Big Data), pp. 3951–3958. IEEE.

Kobourov, S. G., McHedlidze, T. and Vonessen, L. (2015) ‘Gestalt Principles in Graph Drawing’, in International Symposium on Graph Drawing and Network Visualization. Los Angeles: Springer, Cham, p. 13. doi: 10.1007/978-3-319-27261-0_50.

Lambiotte, R., Delvenne, J.C. and Barahona, M. (2008) Laplacian dynamics and multiscale modular structure in networks. arXiv preprint arXiv:0812.1770.

Noack, A. (2007) ‘Energy Models for Graph Clustering’, Journal of Graph Algorithms and Applications, 11(2), pp. 453–480.

Nocaj, A., Ortmann, M. and Brandes, U. (2015) ‘Untangling the Hairballs of Multi-Centered, Small-World Online Social Media Networks’, Journal of Graph Algorithms and Applications, 19(2), pp. 595–618. doi: 10.7155/jgaa.00370.

Van Den Elzen, S. and Van Wijk, J. J. (2014) ‘Multivariate network exploration and presentation: From detail to overview via selections and aggregations’, IEEE Transactions on Visualization and Computer Graphics, 20(12), pp. 2310–2319. doi: 10.1109/TVCG.2014.2346441.

Venturini, T., Jacomy, M., Bounegru, L. and Gray, J. (2018) ‘Visual Network Exploration for Data Journalists’, in Franklin, S. E. I. and B. (ed.) The Routledge Handbook to Developments in Digital Journalism Studies. Abingdon: Routledge.

Wagemans, J., Elder, J.H., Kubovy, M., Palmer, S.E., Peterson, M.A., Singh, M. and von der Heydt, R. (2012) A century of Gestalt psychology in visual perception: I. Perceptual grouping and figure–ground organization. Psychological bulletin, 138(6), p.1172.

Watts, D. J. and Strogatz, S. H. (1998) ‘Collective dynamics of “small-world” networks’, Nature, 393, pp. 440–442.