What is Web Cartography? by Franck Ghitalla

13 min. read

With Dominique Boullier and OpenEdition Press, we edited and published Franck Ghitalla’s posthumous book, Qu’est-ce que la Cartographie du Web?. The book is in French, and Franck Ghitalla is mostly known in France, but it is worth introducing his work to a larger, English-speaking audience (and if you speak French, even better! I have more resources for you at the end). His book is in open access online, which allows you to Google-translate it in-place. Check for instance my foreword, or browse the chapters from the summary.

Portrait of Franck Ghitalla
Franck Ghitalla the “champion of networks”, as introduced by Le Monde in 2012.

In short, Franck was a linguist who used to teach at a French engineering school (UTC). He tragically died in December 2018, while teaching his famous course, the one I had followed in 2005 as a student, and which had started our collaboration and friendship. From a French perspective, his originality was to have imported the concepts of network science early on, for instance scale-freeness, from Barabási's bestseller Linked. That worked very well because his UTC students were eager to discover things and build stuff. The visions he inspired were immediately implemented into prototypes of web crawlers, digital libraries, network visualization tools, and more. That dynamic gave birth to Gephi. His students also founded startups like Linkfluence or Linkurious. Franck was renowned as an innovator. He was a very singular figure in French academia, which caused him some trouble. He narrates it in the book. For instance, he was accused of Americanizing young French minds, which I find hilarious because from a US standpoint, Franck looks so irredeemably French. That is what I would like to briefly explain here, because it is important in a European context, where other researchers may find Franck's perspective unique and interesting, and relate to it.

Qu’est-ce que la Cartographie du Web? is available in paperback version.

Franck's starting point was knowledge. His interest in networks was driven by the idea that information had a geography. He saw networks (graphs) as a knowledge technique, in the spirit of Vannevar Bush, and the web as an empirical field to investigate, like the pioneers of network science (Albert-László Barabási and Réka Albert; Lada Adamic and Natalie Glance…). He was drawn to the web because it was datafied knowledge. This unique form of writing allowed him to compute visualizations, revealing the geography of information. But it is worth noting that modeling was never his goal. Contrary to the network science movement in the USA, largely founded by statistical physicists, Franck did not trust the machine over the person, and was not willing to blindly delegate analysis to algorithms. He wanted to see for himself, to explore and make discoveries. Franck was interested in information visualization for its hermeneutic potential: he did not want to use algorithms as a way to mechanize the work of interpretation, but to enrich it. He understood information cartography as a craftsmanship, a qualitative art enabled by quantitative computations (but not limited to them), and an enlightened form of reading (and writing). For Franck, information cartography was not an automated process of reduction, but an instrumented process of revelation. It was a hermeneutic practice.

Original illustration of the Memex from the Life reprint of "As We May Think" by Vannevar Bush. It inspired hypertext.

His endeavor to seek interpretive opportunities in datafied knowledge was built upon the idea that complex phenomena had something to show, something more than universal laws, something specific to each empirical case. In that, he converged with a forgotten idea of Gabriel Tarde, also promoted by Bruno Latour: the idea that complex phenomena like society, culture, and knowledge consist of nothing more than what composes them, like individual interactions. The idea is rooted in the philosophy of Leibniz, who wanted to show that thinking the world does not require the idea of God. Leibniz conceptualized the monad as a way to give meaning to things without resorting to God's will, the soul, Plato's ideal forms, or any other avatars of divinity. Leibniz conceptualized a purely material world, and Tarde reused the concept of the monad for the same purpose, to state that even the most complex collective behaviors depend on nothing more than local interactions. Unfortunately, Tarde's rival Émile Durkheim famously won the battle of ideas, and founded sociology as a quantitative science. For Durkheim, on the contrary, complex collective phenomena are sui generis entities: they exist on their own, independently. They are something more than what composes them; they exist as something else. In this perspective, social entities like nations exist on a different level than individuals. They have an essence, something reminiscent of a soul. Durkheim supports discarding individual information by using statistics (i.e., reductionism) because it gets you closer to the essence of collective phenomena. In a Tardian perspective, that essence is an unnecessary assumption. So if we are to be radical empiricists and minimize our assumptions, then we must get rid of the essence (or hidden truth, or universal law) and find another role and justification for reductive methods. You can read the long and better version of this argument in The Whole is Always Smaller than its Parts (Latour et al., 2012).

For Tarde, society is entirely contained in individuals, it is entirely material; yet, and that is the crucial point, this is something that we can only see with the right tools. And, as Tarde admitted, such tools did not exist in his time. He knew that his argument was quite speculative, which also explains why Durkheim seemed to be right. This is where the web and big data, in the mid-2000s, were relevant to people like Franck Ghitalla and Bruno Latour. They believed that those could be the first real-world realizations of Tardian tools, allowing us to track collective phenomena down to their smallest components. This explains why modeling, and statistical reductionism in general, was not on Franck Ghitalla's mind. He was confident that he would see collective phenomena in qualitative data sets, if they were large enough, and with the right tools. He trusted the topology of the web to offer an image of the collective interests of humans, an overview of knowledge. A distorted image, certainly, but still truer to our minds than how libraries and encyclopedias were organized. He saw the web and other data sources as fields for phylogenetic investigations à la Foucault, and he did not presume what he would find. He had no grand theory, he only bothered with finding good empirical cases and interpreting them. He was a radical empiricist, and a practitioner. He did not conduct quantitative experiments to test hypotheses formulated by theorists, in contrast to the literature of network science (which he nevertheless loved). The only theoretical elements he proposed consisted of down-to-earth advice based on accumulated observations.

I remade a number of images for the book, and I made English versions. I feature them here (licensed as CC-BY, feel free to reuse) with a short version of the argument about them. This will give you a non-representative idea of the book’s content. Check my foreword for a more representative overview.

The web as layers

The web has been described as scale-free, although this is now controversial; I have written on that topic, and I will keep things simple here. The fact is that a few web pages have all the hyperlinks, while most pages have almost none of them. And this is also true at the level of websites. If we were to sort pages from the most linked to the least linked and plot the number of links, like in the figure below, we would see a power-law distribution, or at least a heavy-tailed distribution, which for our purposes amounts to the same thing. This type of distribution is sometimes called 80/20, because 20% of the pages would have 80% of the links; the numbers may vary, but the important point is that the distribution is extremely skewed.

Let us call the top of the curve the "hubs" (the most connected pages) and the rest the "long tail" or "heavy tail". This distribution has a curious property: the tail is itself a power-law distribution (figure below, bottom). That means that if you remove the head, the tail is still composed of its own head and its own tail. This is, in short, why it is called scale-free: it looks the same at different scales (if you consider that zooming is focusing on the tail).

The power law distribution
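To get a feel for how skewed such a distribution is, here is a minimal sketch of mine (not from the book) that draws synthetic link counts from a Zipf distribution; the exponent and sample size are arbitrary, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical link counts for 10,000 pages, drawn from a Zipf (power-law-like)
# distribution; the exponent 2.1 and the sample size are arbitrary.
links = np.sort(rng.zipf(a=2.1, size=10_000))[::-1]  # most linked pages first

head, tail = links[:2_000], links[2_000:]  # top 20% of pages vs the remaining 80%
print(f"Share of links held by the top 20% of pages: {head.sum() / links.sum():.0%}")

# The tail looks like the whole: remove the head, and what remains still has
# its own head concentrating most of the remaining links.
tail_head = tail[: len(tail) // 5]
print(f"Within the tail alone, its own top 20% hold: {tail_head.sum() / tail.sum():.0%}")
```

The exact percentages vary with the exponent, but the head always dominates, and so does the head of the tail.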

Franck had a clever idea that we refined over the years. We think of the web as a series of layers, and these layers correspond to the distribution of links. From top to bottom, we have: the high layer, with the most connected websites like Google and Wikipedia; the intermediate layer, with the moderately linked websites; and the deep layer, with mostly disconnected resources. In this model, the specificity of the intermediate layer is to contain aggregates of web documents. In short, communities; I will return to that. The deep layer also deserves an explanation. It is called deep because it is so poorly linked that it is hard to reach by following hyperlinks. It contains most of the information of the web, but each piece of information is accessed rarely. It notably consists of specialized databases and storage systems used in the web infrastructure, but with a logic of resource provision, not of curated hyperlinks. For instance, Wikimedia Commons. In other words, these are not resources that people link intentionally. By contrast, the intermediate layer contains resources that we naturally see as documents, such as blog posts, articles, or tweets. We share them via hyperlinks, and that creates communities.

The three layers of the web

In terms of link distribution, it requires thinking of the power law not in two parts (the head and the tail) but three: the hubs, the aggregation area, and the tail (figure below). The middle part is the intermediate layer, not as connected as the global hubs, but connected enough that a geography can emerge.

It is worth mentioning that, as the curve suggests, the demarcations are not clear-cut. Similarly, the layers are not clear-cut either. In reality, the layers form a continuum, but it is useful to think of them as different subspaces with different properties, for simplicity.

The intermediate layer is the middle part of the power law
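As a toy illustration of the layer idea (my own sketch, not from the book; the graph and the in-degree thresholds are made up, and real layers form a continuum rather than hard cut-offs):

```python
import networkx as nx

# Toy directed web graph: an edge (a, b) means page a links to page b.
G = nx.DiGraph([
    ("blogA", "wikipedia"), ("blogB", "wikipedia"), ("blogC", "wikipedia"),
    ("blogA", "blogB"), ("blogB", "blogC"), ("blogC", "blogA"),   # an aggregate of blogs
    ("dataset1", "wikipedia"), ("dataset2", "blogA"),             # deep resources pointing up
])

# Arbitrary in-degree thresholds, only meant to illustrate the three-layer model.
HIGH_THRESHOLD = 3
layers = {"high": [], "intermediate": [], "deep": []}
for node, indegree in G.in_degree():
    if indegree >= HIGH_THRESHOLD:
        layers["high"].append(node)
    elif indegree == 0:
        layers["deep"].append(node)
    else:
        layers["intermediate"].append(node)

print(layers)
# {'high': ['wikipedia'], 'intermediate': ['blogA', 'blogB', 'blogC'], 'deep': ['dataset1', 'dataset2']}
```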

The direction of the hyperlinks is crucially important. Each layer has different properties, both in terms of how it is linked to itself, and to the other layers. The high layer is massively cited by every other layer (figure below, left). Most links converge to the top, so the web behaves like an ocean where the internet user tends to bubble to the surface. Importantly, the deep layer points directly to the high layer, so there is no need to travel through the middle to reach the surface. There are always shortcuts to the top.

The deep layer is the opposite, it is only accessible from the intermediate layer (figure below, center). It is as difficult to go deeper as it is easy to reach the top. The crawler or internet user has to travel through the intermediate layers, from sublayer to sublayer, to reach the depths.

The intermediate layer is basically a bridge between the surface and the depths (figure below, right). These links are asymmetrical, as they tend to be bottom-up, but the layer is nevertheless a passage point. More importantly, it consists of aggregates that are also linked to each other.

Each layer is a space with different topological properties.

The intermediate layer is composed of aggregates. Each aggregate is composed of a core and a periphery. The core is composed of hubs and authorities. The hubs cite a lot of the aggregate's resources (portals), while the authorities are cited a lot within the aggregate. The core is highly connected, while the periphery is less connected.
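This vocabulary echoes Kleinberg's hubs-and-authorities (HITS) algorithm. As a rough illustration (a toy example of mine, not taken from the book), networkx can score both roles inside an aggregate:

```python
import networkx as nx

# Toy aggregate: a few pages citing each other around a shared interest.
G = nx.DiGraph([
    ("portal1", "ref1"), ("portal1", "ref2"), ("portal1", "ref3"),  # portal1 cites a lot
    ("portal2", "ref1"), ("portal2", "ref2"),
    ("blog1", "ref1"), ("blog2", "ref1"),                           # ref1 is cited a lot
])

hubs, authorities = nx.hits(G)
print("Strongest hub:", max(hubs, key=hubs.get))                        # portal1
print("Strongest authority:", max(authorities, key=authorities.get))    # ref1
```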

In fact, each aggregate replicates the whole structure of the web. Intuitively, the core is closer to the top, and the periphery to the bottom. There might also be sub-aggregates, on multiple levels. This structure is quite intuitive if we think of it as communities. We may see gamers as a community with its own hubs and authorities, but it certainly has sub-communities like FPS or RPG, and then possibly sub-sub-communities by sub-genre or game. And there might be bridges or overlaps between different communities (aggregates are not clear-cut either). The important point is that the distribution of hyperlinks is heterogeneous and creates a number of self-organized localities.

Each aggregate is not just a denser subspace, it is also thematically centered. This is why aggregates are communities: their resources share a common interest or practice. They are not purely topological. They are subspaces where content and structure correspond to each other.

The intermediate layer is the only one where we can empirically observe a geography of information, because the deep layer is not connected enough, and the high layer is too connected. In other words, the high layer is everywhere, while the deep layer is nowhere. Only the intermediate layer offers a “somewhere” that we can study.

Structure of an aggregate (a community)

Different aggregate structures are possible. It depends on the core (or center, in the figure below), and it depends on the less connected websites too. The more hyperconnected the center, the stronger the aggregation. But a community may also exist as links between the less connected actors, which we call here a lattice. Community activity does not have to go through relation brokers (the center). It might be hierarchical, or on the contrary, flat. But to be considered an aggregate, i.e. a local subspace, it needs either a lattice plus a weak core, or a strong core.

Different types of aggregates and non-aggregates
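A crude way to tell these configurations apart, sketched below as an illustrative heuristic of mine (not a method from the book), is to compare the link density among the most connected nodes with the density among the rest:

```python
import networkx as nx

def core_vs_lattice(G, core_size=3):
    """Illustrative heuristic: link density inside the presumed core (the most
    connected nodes) versus density among the remaining, less connected nodes."""
    ranked = sorted(G.degree(), key=lambda pair: pair[1], reverse=True)
    core = [node for node, _ in ranked[:core_size]]
    rest = [node for node, _ in ranked[core_size:]]
    return nx.density(G.subgraph(core)), nx.density(G.subgraph(rest))

# Toy community with a strong core and a sparse periphery.
G = nx.Graph([
    ("c1", "c2"), ("c1", "c3"), ("c2", "c3"),   # densely linked core
    ("c1", "p1"), ("c2", "p2"), ("c3", "p3"),   # core-periphery links
    ("p1", "p2"),                               # a single lattice link
])
core_density, lattice_density = core_vs_lattice(G)
print(f"core density: {core_density:.2f}, lattice density: {lattice_density:.2f}")
# core density: 1.00, lattice density: 0.33
```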

The layers also correspond to different practices of information diffusion. The high layer corresponds to a broadcast model, where actors, typically mainstream media, push information to less connected actors (the public). On the contrary, the aggregates correspond to a viral model where the information spreads in the community between actors of an equivalent level of connectivity and visibility (horizontally).

Different information diffusion models in different layers

These models are not mutually exclusive. As shown in the figure below (c), a hybrid scenario is also possible, where the information circulates virally first, then gets broadcasted by a highly visible actor, then resumes a viral circulation in the intermediate layer.

Three scenarios of information diffusion
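To make the hybrid scenario a little more tangible, here is a toy simulation of mine (not from the book): a story crawls from peer to peer until a hub picks it up and broadcasts it to a large audience. The follower graph and the numbers are made up.

```python
import networkx as nx

# Toy follower graph: an edge (a, b) means that b sees what a publishes.
G = nx.DiGraph()
G.add_edges_from([("p1", "p2"), ("p2", "p3"), ("p3", "p4")])    # viral: peer to peer
G.add_edge("p4", "hub")                                          # the hub picks the story up
G.add_edges_from([("hub", f"reader{i}") for i in range(50)])     # broadcast to the audience

reached = {"p1"}          # the story starts with a single low-visibility actor
frontier = {"p1"}
step = 0
while frontier:
    frontier = {succ for node in frontier for succ in G.successors(node)} - reached
    if not frontier:
        break
    reached |= frontier
    step += 1
    print(f"step {step}: {len(reached)} actors reached")
# Slow growth for four steps, then a jump of 50 when the hub broadcasts.
```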

This hybrid scenario is in fact a pretty good example of fake news laundering. A lot of fake news circulates in low-visibility layers, as partisan bullshit or humour. It is pretty difficult for these contents to disseminate widely from these subspaces. High-visibility actors may however launder these contents and promote them to spaces where they could not otherwise circulate. This laundering has been relentlessly documented and is critical in providing reach to fake news. The fake news may be debunked and resume a viral circulation, but the laundering will have spread it to localities of the intermediate layer that it could not have reached through viral circulation alone.

I have no more original images to share, and I will leave you at that. More details in Franck’s book. I hope I have piqued your interest, and in that case, have a good read.

Links

An anecdote about the resistance of things

3 min. read

I had to get a roll-up kakemono printed, and I checked a cheap option (5€) to have an actual human check my file. I thought: stupid mistakes happen, better have someone double-check, for that cheap.

Later that day (today), I get an unfortunate email: my file does not conform. Due to low resolution, they say, and the printed result could be pixelated. But is it, though? I double-check. The kakemono is a composite image of different sources (see it at the end), and most of them are well above the required resolution (a modest 150 dpi). Not the background image, though; but it's blurry by nature and it does not matter. I can't see anything else. There might be an option to tell them to proceed anyway, but I try a quick fix. I add a slight blur to the background image in case some pixels would otherwise be visible, then I export everything as a 300 dpi raster image, and import it back into the PDF format they require.

At that moment I thought that those 5€ had just cost me time and money, as the printed result would presumably have been exactly the same. I thought so because their feedback did not point to the kind of mistake I had expected to make; in fact I had not made the feared mistake, since I knew precisely what I was doing all along. And I had better things to do. I was frustrated.

But now I think: what if that frustration is precisely the sign that it worked? I started to accept the idea that the background image might have been actually pixelated, and that my quick fix was indeed what I was supposed to do. I started to see things differently once the initial frustration was gone.

So was it a good deal or not, these five euros? Here is the interesting thing: it can only work by being annoying. If you have an idea of the kind of mistake you could make, you have already checked the file for it by the time you validate the option. No, you go for it as a protection against unexpected issues, even though you think your file is right. So every time it actually proves useful, it's bad news in the eyes of the user. Frustration is inherent to the success of the feature.

This happens because the feature is expressed as a resistance of the system. It resists your mistakes. The problem comes from you. Although it also works for you; you’re the client and the problem. You give the system the right to frustrate you. You pay for that. Because you know that you’re also that person who fucks up.

Now, you may not accept this resistance, you may disagree. You may think that it's wrong and that you're right. Your emotions, if you're like me, will push you down that road. Note that it depends on the feedback. If it had told me: "check that the background image is not pixelated", I might have accepted it better. But that's because I have some degree of judgment about what I am doing.

Now picture a slightly different situation. You lack judgement about what you are doing. The system resists, but in the end you have no clue whether it is right or wrong. And it prevents you from doing what you are supposed to do. Would you think that it works? Would you feel it that way?

Tool makers don't want their users to feel frustrated, hampered. Users make the success of a tool. A tool perceived to malfunction would presumably be abandoned by its users. Or would it? I will leave you at that: I think that scientific instruments are too docile, and that's in part because their designers, for instance the computer scientists who publish algorithms, believe that it is not worth the trouble of frustrating their potential users. If they fuck up, they fuck up; especially since they will never know.

The damned kakemono. More about its content later on…

Situating Visual Network Analysis (PhD thesis)

Situating Visual Network Analysis is my PhD thesis, just accepted for defense, scheduled for June 1 at 12:00 CET in Copenhagen. I will provide additional details when I have them.

In the meantime, here is the manuscript for download if you want to take a look:

Download the manuscript (PDF)

Abstract

Visual network analysis (VNA) is the practice of analyzing networks by visual means. In this dissertation, I account for this practice and the techniques involved by focusing on force-directed node placement algorithms, the most popular strategy for drawing network maps. I explore the question of what we see when we look at networks, address some of the criticism faced by network visualization, and reflect on the role of the layout algorithm in the visual mediation of the network’s topological structure. My argument unfolds in six theses: (1) VNA consists of practices that are only partially determined by the graph-drawing and data-visualization literature; (2) some visualizations, including network maps, prompt a visual inquiry into the meaning of emergent patterns as contributing to their apparent self-evidence; (3) for historical reasons, the graph-drawing literature mainly promotes an interpretation regime adapted to small networks (diagrammatic), while practices partially shifted in the 2000s to large networks (topological interpretation regime); (4) some issues with reading network maps can be attributed to the misalignment between our visual cognition and the computational standpoint, notably the notion of a group; (5) the existing justifications of algorithm designers do not provide a compelling explanation of what we see in networks; and (6) the literature on community detection focuses on clear-cut clusters, while force-driven placement algorithms make visible other non-clear-cut community structures.

As a technical mediation, Gephi shifts your goals (and vice-versa)

5 min. read

We had to cut this text from a paper we wrote with Emilija Jokubauskaitė, but I find it useful, so I share it here, slightly reworked. We borrow Bruno Latour's (1994) explanation of technical mediation, and provide examples about Gephi. Latour's approach draws on Gilbert Simondon's thinking of the technical object (1958), and more recently on Madeleine Akrich (1993). It posits the existence of a script, a program, attached to the object; and that program is determined neither by the practices alone, nor by the instrument alone. It helps us make a very important point: Gephi influences its users by shifting their goals.

Have you been tempted to share a Gephi screenshot even though other people would not understand it? Gephi’s image-production abilities can shift its users’ goals.

Latour (1994) observes that there is a “materialistic” and a “sociological” version of the question about whether (and how) tools influence us. In a “materialistic” perspective, “each artifact has its script, its ‘affordance’.” This version emphasizes the role of material-semiotic features over practices. And there is a “sociological” perspective where the tool is a “neutral carrier of will that adds nothing to the action.” This version emphasizes practices and rhetorics, and puts the blame on people. As Latour notes, “the two positions are absurdly contradictory.” Analyzing technical mediations requires getting out of these false alternatives. When it comes to the criticism of data visualization, I find for instance that Johanna Drucker’s statement that “graphical tools are a kind of intellectual Trojan horse” (2011, 2014) sounds absurdly materialistic (more quotes in this post). Latour offers four meanings for mediation: “translation”, “composition”, “reversible blackboxing” and “delegation.”

Translation refers to the ability of the user-tool assemblage to displace the goals of both the user and the tool. The goals of researchers using Gephi may differ both from the goals of unequipped researchers and, symmetrically, from the goals of Gephi not in use. Producing images that can circulate may arise without being a goal of Gephi. "Responsibility for action must be shared among the various actants" (Latour, 1994). Let me explain. Gephi was conceived mainly as an exploration device. Even the image-export features were aimed at exploration: their purpose was to print images in order to annotate and discuss networks on paper, which is often better than the screen. We have stated it multiple times (Bastian et al., 2009; Jacomy et al., 2014). Yet I have seen countless Gephi screenshots on Twitter or in people's slides that are shared with an audience who cannot understand what they mean, because they lack context and/or graphic design. Yet these images produce an effect. For instance, they showcase technical mastery, or the complexity or massiveness of the data. And that narrative might even be unintentional, which I call storyletting: letting your visualizations tell their own story, because you share them without a narrative. Producing such images is not Gephi's goal, and I assume that it is not the researchers' goal either. These images are brought into existence by the exploration process, and are legitimately stored as traces of the process. In the same way, Lada Adamic and Natalie Glance (2005) featured the famous figure 1 of Divided they blog to document their process. These screenshots are not intended as marketing assets, but they may become marketing assets by circulating. This shift in their significance is what Latour calls "translation."

Composition refers to the fact that the tool enables certain things. Even if we attribute agency to the researcher and not to Gephi, what the equipped researcher is able to do is made possible by both what the researcher can do, and what Gephi can do. Like translation, this meaning of mediation emphasizes symmetry between the tool and its user. For instance, Gephi allows seeing clusters, which is not visible in the raw list of nodes and edges; but of course Gephi symmetrically needs you to see the clusters.

Reversible blackboxing refers to the fact that the tool's transparency is relative and ever-changing. When a device breaks, it has to be unblackboxed to be repaired, and can get blackboxed again to be put in use. The archetypal example comes from Merleau-Ponty's (1962) phenomenology of perception: the blind man's stick. When in use, that tool is embodied. The blind man ceases to feel it in their hand. It becomes part of their body, and they feel through it. Similarly, your glasses work insofar as you do not see them; if you see them, then they are dirty and that is a problem. Of course you need to look at your glasses to clean them: you remove them, you disembody them, you open the black box for maintenance. And when you put them back, you return to blackboxing. Similarly, a researcher may consider Gephi a black box at the moment of using it, as usage requires a form of convergence between the tool and the user. The same researcher may open the black box at a later moment, when it comes to writing down the method in a publication. There is a moment to discuss the algorithms you use, and there is a different moment when you want the algorithms to work. Researchers navigate between these moments. Gephi may be more or less blackboxed in different situations, at different moments, or to different people.

Delegation, the "most important" meaning of mediation according to Latour (1994), and based on the previous three, refers to the ability to shift signification. An apparatus may play a rhetorical role, but it also has a meaning that is part of no discourse. Technical objects "act, displace goals, and contribute to their redefinition." By shifting goals, the technical mediation ultimately shifts meanings. I find it a good explanation of why Gephi is sometimes understood as a machinery to produce images. Of course visualization is its purpose; but that is intended for exploration: visualize for yourself. Yet Gephi has somehow become a way to produce images for other people. Instead of wondering whether to blame Gephi's program of action or its user community, which would be returning to the materialistic and sociological versions of technical mediation, we can now understand that the [researcher+Gephi] composite's new goals have shifted the meaning of Gephi towards the production of circulable images, if not marketing assets. This does not mean that one cannot intervene in Gephi's design or community, but it reminds us that something intended by neither its designers nor its users can nevertheless emerge from the interactions with the instrument.

References

Adamic, L. A., and Glance, N. (2005) The political blogosphere and the 2004 US election: divided they blog, Proceedings of the 3rd international workshop on Link discovery, pp. 36-43.

Akrich, M. (1993) Les formes de la médiation technique, Réseaux, 60, pp. 87-98.

Bastian, M., Heymann, S. and Jacomy, M. (2009) Gephi: an open source software for exploring and manipulating networks, Third international AAAI conference on weblogs and social media.

Drucker, J. (2011) Humanities approaches to graphical display, Digital Humanities Quarterly, 5(1), pp. 1-21.

Drucker, J. (2014) Graphesis: Visual Forms of Knowledge Production (Cambridge, MA: Harvard University Press).

Jacomy, M., Venturini, T., Heymann, S. and Bastian, M. (2014) ForceAtlas2, a continuous graph layout algorithm for handy network visualization designed for the Gephi software, PLoS ONE, 9(6).

Latour, B. (1994) On Technical Mediation, Common Knowledge, 3(2), pp. 29-64.

Merleau-Ponty, M. (1962) [1945] Phenomenology of Perception, transl. Colin Smith (New York: Humanities Press).

Simondon, G. (1958) Du mode d’existence des objets techniques (Paris: Aubier).

State of the conversation at the FOSDEM open research tools and technologies devroom

40 min. read, 8 min. for the TL;DR version.

Faces of FOSDEM (CC-BY-SA Diégo Antolinos-Basso)

FOSDEM is a major event of the open-source community, especially in Europe. FOSDEM stands for "Free and Open source Software Developers' European Meeting" and is organized by volunteers each year at the Université Libre de Bruxelles. It looks like Diégo's pictures of the crowd just above: diverse, joyful, exciting – despite the frequent Winter rain. It usually features a main track in the impressive Janson auditorium (1,500 seats) and about 50 so-called devrooms (parallel tracks) in smaller teaching rooms. In 2020 and 2021 I participated in the creation and organization of the Open Research Tools and Technologies devroom (follow it on Twitter). After its two years of existence, I offer here an overview of what our speakers presented.

The devroom is aimed at developers and users of open tools and technologies working in a context of knowledge production such as scientific research, investigative journalism, NGO fieldwork, etc. We accepted 19 presentations in 2020 and 22 in 2021. It is worth noting that the 2020 edition took place just before COVID hit Europe and was in person as usual, while the 2021 edition featured pre-recorded talks with live Q&As. On the positive side, that made it easier for people far from Bruxelles to participate. We had 17 male and 4 female speakers in 2020, and 19 male and 5 female speakers in 2021. The 41 talks featured 20 demos; 11 speakers shared their story as developers, and 3 as users. I obviously cannot share all the interesting things these speakers have said, so I focus on recurrent arguments, but you can watch the original presentations on video. As a consequence, purely technical talks are underrepresented in my report. This is not a statement, just a pragmatic editorial choice.

I delineated 9 distinct topics developed by multiple speakers. These topics are not completely separate; they largely overlap in ways that should be clear enough. Understand them as landmarks to get oriented in the discussion. The two central topics are open science and the question of research tools and academic currency, as one could expect considering the focus of the devroom. Seven other topics branch off these two central discussion points: fears about open-sourceness; the reproducibility crisis in science and how to face it; open digital notebooks; issues with data; empowerment and activism; funding issues and the precariousness of open-source projects; and the issues of developers working in lab or newsroom cultures. I will develop each topic starting with a short summary, then detail what different speakers bring to the discussion. For a quick overview, just read the first paragraph of each section, marked as TL;DR (too long; didn't read). If you want to dive into the speakers' videos, read the subsequent paragraphs and follow the links. I highlight the most important presentations (in my opinion) in the topic they focus on the most, even though all of them touch multiple topics. I intentionally restrict my links to the FOSDEM talk pages, from where you will find much more information and links about each speaker and project.

Note about vocabulary: FLOSS means "Free/Libre, Open-Source Software", a notion I often shorten here into open source or just openness. But mind two important nuances: (1) freeness and openness are not the same thing and do not always go together; and (2) freeness has the double meaning of free as in free beer, and free as in free speech (hence the term "libre", borrowed from French). To a large extent, the former is just a means to obtain the latter.

Open science

TL;DR: Open science is more than just the use of open-source tools in science. It is also about the publishing infrastructure: open access to papers, data sets, and software artifacts; and it is also about cultures and practices. As opening the code is only useful to those who can read it, annotation and documentation are crucial to researchers. Moreover, technical friction and pain points are major obstacles to the adoption of FLOSS. Usability is a central need, yet it is difficult to justify and fund in current academic culture. But some speakers propose ideas to intervene in research practices directly, by promoting more transparent and reflexive ways to work with data, and ways to reinvent scientific collaboration through transculturation. Most researchers seek openness, and the open-source movement has a lot to offer by replacing the current closed-by-design, open-as-an-afterthought academic publishing infrastructure with open-by-design solutions such as Software Heritage, DSpace, or PubPub. However, this requires a political mobilization beyond the open-source movement.

Roberto Di Cosmo, computer science full professor at University Paris Diderot, starts his presentation of the Software Heritage initiative, which he leads, with the "three pillars of open science:" open-access repositories, open-data-set repositories, and open-source repositories. He articulates the different needs of different actors. First, researchers need to archive and reference the software used in articles, to find useful tools, to get credit for developed software, and to verify, reproduce, and improve their results. Second, laboratories or teams need to track software contributions, and produce reports and web pages. Third, research organizations need to know their software assets, and measure the impact of their software production. This is one of the reasons why it is so important that research software artifacts get archived, referenced, described, and credited. These needs are answered by Software Heritage, a long-term, non-profit, multistakeholder initiative with the ambitious goal of collecting, preserving, and sharing all publicly available source code, protecting our software commons, in collaboration with UNESCO. In his talk, Roberto also shows how we can use and benefit from Software Heritage in a research context.

Travis Rich, executive director at Knowledge Futures Group, presents PubPub, the open-source publishing platform he helped create. We're not "there" yet, he says: despite having orders of magnitude more open-source software than 20 years ago, open and fair access to knowledge is not offered to everyone. Open source is not enough. PubPub received a lot of support because there was only a small set of industrial tools, all closed, and there was a strong demand for open publishing. In the process of meeting that demand, the PubPub team discovered multiple open-source projects that had tried the same but had become either hard to maintain, slow to adapt, or just outdated. Many non-technical mission-oriented groups just wanted to use open-source tech for philosophical and ethical reasons and were willing to pay for it. Although PubPub is open source, its functioning depends on relations with third-party services that are not necessarily open source (ex: Google Scholar), which requires maintenance and sometimes payment. This hints at a bigger problem: we do not have models for building and sustaining digital infrastructure that serves as a reliable, affordable, and accessible public utility. This is why Travis and his team founded the Knowledge Futures Group, an independent non-profit dedicated to the production and maintenance of digital infrastructure as a public utility. Travis highlights that if our real goal is a fair distribution of power, then technical aspects are not enough: institutional alliances and a sound funding scheme are also necessary, which requires some degree of political mobilization (on that last point, see also Markus Suhr and Marcel Parciak's presentation).

In a similar spirit, Bram Luyten presents DSpace, an open-source repository software package typically used for creating open-access repositories for scholarly content (among other things). Bram presents himself as an "open-access advocate" and is a co-founder of Atmire, a service provider for DSpace. He argues that although open access accelerates scientific progress, when it is conceived as a hybrid with closed access, it is broken. Besides, pre-prints are not the solution, because of the lack of peer review. This is where DSpace is useful. Like any project, it needs developers and contributors; yet it is the most successful institutional repository platform, in part because it was localized very early (China, Taiwan) and released under an MIT license.

Emmy Tsang is innovation community manager at eLife, a non-profit organization and peer-reviewed open-access scientific journal for the biomedical and life sciences. Emmy shares her community-driven approach towards open innovation for research communication. Like Travis and Bram, she notes that publishing on the internet is broken – even though it was created by scientists to share knowledge. The goal of eLife, she says, is to move from a slow, expensive, closed, and draining publication process to a fast, cheap, open, and user-friendly one. Like Bram, she defends open-by-design as opposed to closed-by-design later made open. Indeed, closed systems propagate design biases (e.g. diversity biases) and produce unusable research, because it is hidden behind paywalls and inaccessible in various ways to different publics. The ambition of eLife is to offer open, inclusive, and user-centric research communication tools to the community. In that perspective, Emmy presents the Reproducible Document Stack (RDS), eLife's solution to produce papers where the code and results are reproducible.

Yo Yehudi is open-source tech lead for the data for science and health at the Wellcome Trust, a charitable foundation focused on health research. She lists important challenges for computational research: the lack of computational tools for research, the lack of incentives to draw, retain and reward talent, and the insufficient trust in computational work. She pinpoints a paradox with tools in research: sometimes research is too narrow and the tools that researchers need do not exist; and sometimes there are too many tools and standards and the fragmentation of that landscape is a problem for researchers. Yo explains that Wellcome’s goal is to fund the tools, talent and trust researchers need – it is a science funder. She showcases two open-source projects Wellcome has funded: Afrimapr and OpenSAFELY. Like Travis, she argues that code is only part of the problem, almost secondary: we need maintenance money, we need to justify doing documentation for users and developers, to fund accessibility, to build a community, and to spend time on improving the user experience. Those are all justifiable software costs! she insists.

Not all speakers are developers, some are tool users. Maya Anderson-González is a researcher in computational social sciences and digital humanities, and narrates how she used FLOSS tools “and lived to tell the tale.” She explains how she got acquainted with open-source principles, the issues she faced, and how she built confidence through exposure and training – her project was to visualize and analyze a Twitter network of the FOSDEM 2020. She comments on the importance of documenting one’s own process, as well as accessing other people’s process. In this setting it is valuable to share work-in-progress reflections and results, which naturally leads to the reflexivity of open research (which Maya found intimidating at first). In traditional science, collaboration can be coded in specific ways, for instance around seniority. But using tools designed for community participation changes existing collaboration processes. She concludes on the usefulness of the concept of “transculturation,” which Maya sees as what FLOSS developers do with social science researchers to create an open science culture.

From the presentation of Maya Anderson-González, FLOSS meets Social Science Research (and lived to tell the tale)

Erik Borra, Stijn Peeters and Bernhard Rieder are assistant and associate professors at the media studies department of the University of Amsterdam. The three of them have the hybrid profile of humanities scholar and tool developer, having offered us many of the Digital Methods Initiative tools to analyze online platforms such as Netvizz (Facebook), TCAT (Twitter) and 4CAT (4chan, Reddit and more). They remind us that just opening the code is only useful to those who can read it, which is why annotation and documentation are key to making the software open. They highlight three issues. First, our relationships with large platform companies are changing, a phenomenon known as "the APIcalypse." Netvizz, for instance, was a victim of Facebook's policy change. The new situation offers a few new opportunities, but it does not work the same way and it sparks a debate about scraping (its legality and ethics). The second issue is privacy and legal compliance (Europe's new GDPR regulation): new requirements, securing access to the data… how to react to these challenges in the design of our tools? Third, the web has been changing over recent years. Platforms started deplatforming users, and some platforms may disappear entirely (ex: Parler, even though it is now back online). We need to understand these elements before reacting to them and helping our users. Because of this everchanging landscape, they say, the role of the research engineer must be expanded: one foot in research; one foot in software; but also one foot in software education and one foot in strategic planning. On the bright side, Erik, Stijn and Bernhard just obtained a Dutch grant from the platform Digital Infrastructure for the Social Science and Humanities to start addressing these issues (their project is named CAT4SMR).

Still on the tool-making side but for the natural sciences, Aniket Pradhan presents NeuroFedora, a Linux distribution dedicated to neuroscience that he contributes to. Emmy Tsang, Yo Yehudi and Maya Anderson-González have highlighted the importance of usability. The problem NeuroFedora solves is exactly of that kind. Many powerful packages exist to help the neuroscientist, but they are often difficult to install, which is a major obstacle to their adoption. NeuroFedora embeds those packages, alleviating the technical pain even in cases where the documentation is lacking or the installation skill-demanding, making relevant tools available in a way that just works for the user. It also serves open science by publicizing tools. The NeuroFedora team consists of about 20 contributors, only 5 of whom have a neuroscience background.

Lilly Winfree is product owner of the Frictionless Data for Reproducible Research project at the Open Knowledge Foundation. As Aniket Pradhan's contribution highlights, sometimes what it takes to promote openness just consists of unglamorous tasks such as writing an installation script. In the realm of data management, the "boring" tasks are to clean the data, check their quality, and document their origin or find their license. This "friction" is an issue for scientists, data journalists and more. Lilly presents the specification and open-source toolkit known as Frictionless Data, aiming at alleviating that friction. Its ambition is to bring reproducibility to the process of transforming messy data into clean data, and then into hosted data. The central concept is called a "data package": typically a metadata file (with an optional schema) documenting your usual CSV data file, which can be validated automatically. Also check Carles Pina Estany's demo of the Data Package Creator.
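To give a concrete taste of that workflow, here is a minimal sketch of mine (not from Lilly's talk), assuming the frictionless Python package and a local file named data.csv:

```python
# A minimal sketch of the Frictionless Data workflow, assuming the `frictionless`
# Python package is installed and a file named data.csv exists locally.
from frictionless import describe, validate

# Infer a descriptor (schema, format, encoding) from the CSV: the metadata that
# travels with the data, in the spirit of a "data package".
resource = describe("data.csv")
resource.to_yaml("data.resource.yaml")

# Validate the data against the inferred (or hand-edited) descriptor.
report = validate("data.resource.yaml")
print("valid:", report.valid)  # any errors are listed in the report object
```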

In complement, let me mention a few points from other talks I will return to. Julia Sprenger remarks that for many researchers, publication comes first, and then maybe the software is released later on, even though publishing code improves scientific results (she gives an example). Sébastien Rochette similarly observes that, in a hackathon he organized, researchers were reluctant to expose their project; but they ultimately changed their practices. Finally, different speakers such as Karthik Ram reflect that open-source tools are not properly valued in laboratory culture; for instance, many researchers do not know how to cite software, or how to pick a license for their production. A situation caused by the overwhelming importance given to peer-reviewed publications in academia.

Research tools and academic currency

TL;DR: Even though software impacts all modern research, and even though research tools shape scientific practices, publications remain the prime currency in science. Developing software does not contribute directly to publication, which leads to multiple issues. Tool making is difficult to fund and justify, if not simply considered a side occupation. Refactoring and maintenance are undervalued, and releasing code is not a priority. Research software exists as an identified object in academia, but research institutions lack incentives to attract and retain talent and improve the situation. This situation might be explained by the different perspectives of the actors at stake. Researchers do not know how to cite software properly; tool makers struggle to promote and get credit for their production; laboratories do not know how to track and report the software contributions of their members; and research institutions are unaware that they have software assets that improve their impact. Yet there is some hope, as initiatives such as the Journal of Open Source Software arise to promote software in the academic sphere.

Karthik Ram, from the Berkeley Institute for Data Science, is a contributor and editor of the Journal of Open Source Software (JOSS). He starts with the observation that even though software impacts all modern research, we still don't know how to cite it. For instance, how to cite the specific version used? Yet we need to be able to peer-review software, among other things (see the picture below). To promote a tool in the current academic system, one must publish a software paper (or "proxy paper"). As a peer-reviewed publication in an existing journal, it is easy to cite and does not require changing the academic infrastructure; but it requires writing said paper (additional work, and of a different nature), many journals do not accept software papers, and besides, static authorship is not appropriate for collaborative tools. As Karthik notes, the Jupyter team is an example of the gap between the contributors to the versions used in practice and the authors of the software paper (which is noticeably older). The JOSS is an answer to these issues, based on the idea of hacking something around what exists, because we cannot change the whole ecosystem at once. It is free, open, and developer friendly: if good practices are respected, a paper can be written in one hour. It consists of a high-level description of the tool: a simple, citable object. This is as conventional as possible in the scholarly space: it uses ORCID for login, and archives papers with Portico. Check Karthik's talk for more information, I found it captivating from start to end.

From the presentation of Karthik Ram, The Journal of Open Source Software: Credit for invisible work.

Julia Sprenger is a doctoral student in electrophysiology at the Research Centre Jülich. Publications are the currency in science, she says. Time spent developing a tool is an issue for a researcher, because it does not directly contribute to publication. Refactoring and maintaining code are not valued as academic work, and it is difficult to fund software development. She also observes a trust issue: self-made software is seen as the right thing for small projects, while commercial software is perceived as the better option for complex projects. As a consequence, the classic thing researchers do is to ensure publication first, and then maybe the software is released after that. Julia comments that making errors is taboo in science. She provides an example of how publishing code contributes to scientific progress, while noting that in most cases, software development is de facto a side occupation. Julia offers ideas to improve the situation. First, how to help scientists as a software developer: comment, provide feedback and advice, advertise projects in dev communities, and make it easy for scientists to reuse your tools (easy to install and compile, with good documentation). Small projects die when people leave science, she reminds us. Second, how to help as a scientist: use existing open-source tools, don't start from scratch, and make sure that your project outlives your career.

Teresa Gomez-Diaz is a CNRS research engineer at the Gaspard-Monge Computer Science laboratory. She shares her two-decades-long experience in a lab with an important software production in various domains. Teresa was recently tasked with making an inventory of the tools produced by her lab. Unsurprisingly, some tools were not properly identified (no dates, author lists, or licenses). It was even unclear, she says, what counted as the "lab's software." Who decides that? And who makes other choices, such as picking the license? The lab had to clarify its policies, which raised the question of the value of its software production. Teresa has observed similar problems in many other laboratories, and on multiple levels: scientific, legal… To help face these issues, Teresa offers a practical definition of "research software": a well-identified set of code that has been written by a well-identified research team, i.e. software that has been built and used to produce a result published or disseminated in some article or scientific contribution. Fifteen years ago, it was not possible to publish software papers; it is now possible to promote software production, although the dissemination procedure remains a problem (also check Karthik Ram's talk on that point). On that level, Teresa recommends separating the evaluation of research from that of software. She distinguishes four attention points. (1) Citation, i.e. ensuring that the research software is well identified as a research output. This is a legal point: who are the authors, what are their affiliations and participation percentages? (2) Dissemination. Are best practices followed? This is a policy point (about open science) and a legal point (licenses). (3) Usability. Are computations correct? Is the tool reliable and easy to install and use? This is a reproducibility point. (4) Research, i.e. the quality of the scientific work embedded in the software and related publications. This is a research point about ensuring and measuring impact.

Teresa Gomez-Diaz’s four attention points nicely complement Roberto Di Cosmo’s remark that different actors have different needs: the researcher needs to archive and reference software used in articles, find useful tools, and get credit for developed software; the laboratory needs to track software contributions, produce reports and web pages; while the research organization needs to know its software assets, and measure the impact of its software production.

Technological means are not a secondary question in science. Erik Borra, Stijn Peeters and Bernhard Rieder remind us, as scholars and tool developers, that tools shape practices. Software for the humanities is different from that for the computational sciences because the needs are different; for instance, media studies require the epistemic flexibility to “follow the medium.” Yo Yehudi similarly argues that the lack of computational tools for research is due to the difficulty to attract and retain talent in the laboratory, which boils down to the fact that code is not paper. The way software is valued, promoted and funded in science impacts knowledge production in ways that are well worth understanding.

I am ultimately surprised by the close connection between the issue of research tools' value and status in academia and the question of open science. It might be a bias of the FOSDEM community, but the points made by the speakers suggest that the notions of openness are imported from the tech world into science, which seems a reasonable idea to me. My own observations certainly go this way. Note that, facing the "broken" state of the academic publication infrastructure (to reuse Emmy Tsang's word), Bram Luyten and Travis Rich have proposed open-source tools (DSpace and PubPub) before moving to institutional consolidation (Atmire and Knowledge Futures Group, respectively). Academic culture is naturally inclined to sharing knowledge freely, which resonates with the values of open-source software, but the legal and political tools (e.g., licences) and the practices are different. We see that the discussion about openness has moved from purely technological questions to wider political issues in academia, such as publication infrastructure, modes of collaboration, and funding.

Fears

TL;DR: Some speakers reflect on fears of open source in academia. Errors are taboo in science and trust issues develop. Some researchers are afraid to publish their code (fear of being judged, of having bugs exposed), misrepresent themselves as non-coders even though they produce code (ex: R scripts), or even hide that they code (seen as detrimental when seeking funds). Others conceal their code to retain intellectual property. Beyond emphasizing that judgmentalism is harmful to the community, the good practices of FLOSS development offer guidelines to mitigate those fears: open source your code from day one, make your tools discoverable, mind the license, and define responsibilities.

Mateusz Kuzak, research software community manager at the Netherlands eScience Center, tells us why researchers are afraid of putting their code in the open (see picture below). They are afraid of being judged for their “crappy code,” of bugs being discovered, and of getting “scooped.” Mateusz co-authored a paper offering practical solutions to these fears, titled Four simple recommendations to encourage best practices in research software (DOI: 10.12688/f1000research.11407.1). Those are (1) open source your code from day one, (2) make your tools discoverable, (3) mind the license, and (4) define responsibilities (more explanations in his presentation).

From the presentation of Mateusz Kuzak, On the road to sustainable research software.

Mateusz Kuzak is not the only one to take note of fears in science. Julia Sprenger observes that making errors is taboo in science, and that researchers do not trust open-source software for large projects. Maya Anderson-González narrates how, as a user, she was scared off by installation issues to the point of switching to more usable tools, unable to find appropriate help in her network. Yo Yehudi also highlights the importance of usability and observes that some researchers may want to hide that they code, because they think that it might be a problem when seeking funds. Finally, Sébastien Rochette reports on researchers having issues accepting the exposure of their coding project. He tells us that it is important that the open-source community be welcoming and forgiving towards coding researchers.

Facing the reproducibility crisis

TL;DR: Researchers might fail to replicate their peers’ experiments for many reasons, which is a known and major issue of experimental knowledge. Some of the most preventable reasons are missing data and underdocumented experimental details, like information hidden in hardware and software. As multiple speakers argue, it is, at the core, a data management issue where the open-source community has a lot to offer.

Lilly Winfree and Jan Grewe (I will return to him) comment on the importance of open-source software to address the reproducibility crisis (or “replication crisis”). Lilly focuses on solving the underlying data management issue with Frictionless Data, while Jan gives the example of researchers who cannot reproduce each other’s work because some tiny details are not mentioned in the papers (information hidden in hardware and software settings), which motivated him to develop his own solution (the tool Relacs). Similarly, Emmy Tsang highlights the importance of this issue for eLife, which is why they developed RDS, the Reproducible Document Stack, a technology they use to ensure that published code and results are reproducible. Indeed, computational research requires extra steps to ensure reproducibility (a point also made by Thibault Lestang and Offray Luna). And as Teresa Gomez-Diaz remarks, the validation of scientific results is one of the ways to give value to software produced in research.

Jan H. Höffler founded ReplicationWiki, a database of empirical studies that documents the availability of replication material and of replication studies. His project supports transparency by making more scientific material available, which improves the quality of empirical social science. ReplicationWiki is open source and based on MediaWiki, with some adaptations. Jan shares the challenges he faces, which are not only technical: he seeks contributors and funding to help him in his public-utility endeavor.

Reproducibility, like validation and the improvement of existing results, is part of researchers’ needs, as Roberto Di Cosmo tells us. As with ReplicationWiki, addressing it is a core mission of Software Heritage, as well as of various kinds of notebooks, such as Nicolas CARPi’s eLabFTW software.

Notebooks

TL;DR: Notebooks are, generally speaking, appreciated by the research community for their ability to improve reproducibility. Researchers rely on them to mix text, data, and visualizations, during writing as well as publishing. We all know about Jupyter notebooks, but many other tools share a similar perspective, half a dozen of which were presented by their respective developers. They insist on the importance of openness to disseminate high-quality knowledge (reproducible, verifiable, and circulable).

The topic of notebooks is obviously connected to that of reproducibility, as it is one of their main purposes. “Notebook” might even be too restrictive a term, as some of the devices that allow a reproducible mix of text, code, data and visualization (which I call here hybrid) only loosely resemble the Jupyter archetype. This is the case of Org-mode, a set of functionalities living inside GNU Emacs that can be used to bundle software, data and figures into one single executable plain-text document, as presented by Thibault Lestang, a computational physicist turned research software engineer. It is also the case of eLife’s Reproducible Document Stack, presented by Emmy Tsang, which brings hybrid publishing online without taking the form of a notebook.

In the natural sciences, the laboratory notebook (where experimental situations and results get logged) becomes digitized as an object known as an “ELN” (electronic lab notebook). Nicolas CARPi is an engineer at Institut Curie and the creator of eLabFTW, an open-source ELN solution. Where is the data of an ELN hosted, and what if the company hosting it disappears? he asks. When it comes to data security and durability, an open-source project is preferable. The development of eLabFTW is community-driven, and it can be hosted on your own network (you own the data), respecting the standards of secure software. His presentation features more information and a demonstration. In addition, Niels Cautaerts, experimental materials scientist and eLabFTW user, presents his own usage and experiments with it. He showcases a project leveraging the eLabFTW Python API to print QR codes that streamline some lab procedures.
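
As a rough illustration of the kind of automation Niels describes (not his actual script), the sketch below assumes an eLabFTW instance exposing a REST API and uses the requests and qrcode Python libraries to fetch database items and turn their web addresses into printable QR codes. The endpoint path, URL scheme, and authentication header are assumptions, not eLabFTW documentation.

```python
# Hypothetical sketch: fetch items from an eLabFTW instance and generate
# QR codes pointing to their web pages. The endpoint path, field names and
# authentication header are assumptions based on a generic REST setup.
import requests
import qrcode  # pip install qrcode[pil]

BASE_URL = "https://elab.example.org"   # your eLabFTW instance (placeholder)
API_KEY = "your-api-key"                # personal API key (placeholder)

resp = requests.get(
    f"{BASE_URL}/api/v2/items",         # items endpoint (assumed path)
    headers={"Authorization": API_KEY},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json():
    # Encode the item's web address so that scanning the label with a phone
    # opens the right record in the ELN.
    url = f"{BASE_URL}/database.php?mode=view&id={item['id']}"  # assumed URL scheme
    img = qrcode.make(url)
    img.save(f"item_{item['id']}_qr.png")
```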

In the social sciences and humanities, the notebook is less about logging and more about disseminating. Robin de Mourat presents FONIO and Ovide, a content-editing solution for the social sciences (footnotes, bibliographic references, internal links…). Antoine Fauchié, PhD student, presents Stylo, a user-friendly text editor for humanities scholars. It offers content structuration and multiple publishing formats (PDF, XML, HTML…) from a single source document, while keeping a simple and usable interface. Offray Luna, hacktivist and designer, and Santiago Bragagnolo, software engineer and researcher, present a similar project known as Grafoscopio. It is a notebook tool aimed at supporting reproducible research: visualizing and editing text in a tree fashion; supporting a mix of code, data, visualization, and text; and exporting as HTML, LaTeX, or PDF. It is intended for science but also for data journalism and activism. Grafoscopio aims to bridge fields that have similar needs: research, civic hacktivism, data feminism… Offray calls it a “pocket infrastructure” because it is simple, self-contained, extensible and offline-first (a major concern for the global south), yet you only have to download one thing (the Grafoscopio tool). Like Erik Borra, Stijn Peeters and Bernhard Rieder, he asks: how do we change the tools that change us? This “we” is a call for bridging communities, for instance through workshops.

Data issues

TL;DR: Data accessibility, transparency and accountability not only improve the quality and reproducibility of academic papers and investigative journalism, they also support the security we need to protect privacy or other sensitive data such as whistleblower leaks. Moreover, data sustainability is a major concern in research, where data remain relevant over a much longer term than the infrastructure supporting them (formats, institutional actors). As a response, multiple speakers advocate decentralized data storage (web3). And storage is not the only issue, because we also have to make data verifiable in practice, which prompts new design goals for the tools we use. Here the FAIR principles are useful: findable, accessible, interoperable, and re-usable data.

Markus Suhr and Marcel Parciak are research associates in medical informatics at the University Medical Center in Göttingen. They reflect on the “dreaded black box” of (mostly) proprietary software that lies at the center of the information flow in medicine (they give an example). As medical data is sensitive, security is a primary concern. To empower the patient with a transparent and accountable workflow, they say, we need a political campaign for free and decentralized software in healthcare. Travis Rich, Emmy Tsang and Yo Yehudi have similarly emphasized that technical issues are only a small part of the question.

Michael Hanke, who self-describes as a full-time informagician and real-life psychologist, presents DataLad, a decentralized digital-object management system. Its core idea is that a dataset is a Git repository. With this radical approach, the data would not disappear if DataLad died, because it relies on uncompromised decentralization (as Julia Sprenger says, “small projects die when people leave science”). DataLad exists because a single repository is not enough in science: additional features are necessary, such as utilities for metadata or provenance capture. DataLad brings convenience and simplification while respecting the core principle that the data is also just a Git repository.
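
To make the “a dataset is a Git repository” idea concrete, here is a minimal sketch using DataLad’s Python interface (datalad.api). It is an illustrative outline under simple assumptions rather than a recommended workflow, and the paths and file contents are placeholders.

```python
# Minimal illustrative sketch: a DataLad dataset is a (git/git-annex) repository
# that tracks data files alongside their history. Paths are placeholders.
from pathlib import Path
import datalad.api as dl

# Create a new dataset, i.e. initialize a repository prepared for data files.
dl.create(path="my-study")

# Add a data file and record it with a commit message.
Path("my-study/measurements.csv").write_text("subject,score\n1,0.42\n")
dl.save(dataset="my-study", message="Add first measurements")

# Elsewhere, a collaborator can clone the dataset like any repository
# and fetch only the file content they need.
dl.clone(source="my-study", path="my-study-copy")
dl.get("my-study-copy/measurements.csv")
```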

Data decentralization is in fact a major topic when it comes to security. Anne L’Hôte and Bruno Thomas mention in their presentation that the “leaks” data analyzed at the International Consortium of Investigative Journalists (ICIJ) cannot be hosted in the cloud because the investigation needs to be protected. Molly Mackinlay, lead of the IPFS Project and Filecoin Network team, presents a decentralized infrastructure for the web known as “web3.” The web3 ecosystem is turning centralized applications into decentralized protocols; it is a movement to make the web more decentralized, verifiable, and secure. The key to all this is making data verifiable. IPFS, standing for “InterPlanetary File System,” aims at verifiably addressing and distributing content across a peer-to-peer network. Molly presents more elements of the web3 stack, like libp2p, and Filecoin, a decentralized storage network (and protocols) with a payment mechanism.
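
The core mechanism behind “verifiable data” is content addressing: a piece of content is identified by a fingerprint of its bytes, so anyone can check that what they received is what was published, regardless of where they got it from. The sketch below is a deliberately simplified stand-in for what IPFS does with CIDs, using a plain SHA-256 hash; it is illustrative only and not IPFS’s actual addressing scheme.

```python
# Simplified illustration of content addressing (not IPFS's actual CID format):
# the identifier of a piece of content is derived from its bytes, so any
# recipient can re-hash what they received and verify it matches the address.
import hashlib

def content_address(data: bytes) -> str:
    """Return a hex fingerprint that serves as the content's address."""
    return hashlib.sha256(data).hexdigest()

published = b"results: the effect size is 0.3"
address = content_address(published)   # shared instead of a location-based URL

# Later, a peer retrieving the bytes from anywhere on the network can verify them:
retrieved = b"results: the effect size is 0.3"
assert content_address(retrieved) == address, "content has been altered"
```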

The presentation of Frictionless Data by Lilly Winfree points out that the “boring” parts of data management, such as cleaning, picking a license, checking quality, crediting sources, ensuring interoperability and documenting attributes, cause problematic friction. Indeed, we need to validate the data in a reproducible way. But Erik Borra, Stijn Peeters and Bernhard Rieder point to other frictions with data in science, such as the legal and ethical issues with scraping, or the question of privacy and legal compliance. They ask how to react to these challenges in the design of our tools, for instance with encryption, or automatic upload to dedicated hosting services – it seems to me that this question should connect with the debate about data decentralization. On that matter, Sébastien Rochette’s reminder of the FAIR principles is useful: Findable, Accessible, Interoperable, and Re-usable data. See also Datasette, a multi-tool for exploring and publishing data, aimed at data journalists, museum curators, archivists, local governments and anyone else who has data that they wish to share with the world. It is presented by its creator Simon Willison, also co-creator of the web framework Django.
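
As an example of taking the friction out of those “boring” parts, the frictionless Python library can describe and validate a tabular file in a reproducible way. The sketch below is a minimal illustration with a placeholder file name, not a full data-packaging workflow.

```python
# Minimal sketch with the `frictionless` library (pip install frictionless):
# infer a description for a CSV file, then validate it and get a report.
from frictionless import describe, validate

# Infer a resource description (schema, format, encoding) for a placeholder file.
resource = describe("survey.csv")
print(resource.schema)

# Validate the file against the inferred schema and common structural checks.
report = validate("survey.csv")
print(report.valid)   # True if the file passes the checks
if not report.valid:
    print(report)     # human-readable summary of the errors found
```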

Empowerment and activism

TL;DR: Solidarity and transparency are core values of the open-source community, echoing a similar inclination in science and journalism. Some speakers highlight the social benefits of FLOSS, like public accountability and inclusiveness, in contrast to closed systems that propagate design biases. Openness is a way to counterbalance the tech world’s lack of diversity and its focus on the industrialized world as a market. Open-source tools can empower various publics (minorities, voters, medical patients), but some speakers remind us that technical aspects are not enough to ensure a fair distribution of power, notably when it comes to funding, which raises wider political questions.

Damien Marié is a developer and member of Regards Citoyens, a French NGO that lobbies for open parliamentary data. Regards Citoyens and Sciences Po, France’s main school of political science, co-developed La Fabrique de la Loi, a data infrastructure and web platform retracing in detail the law-making work of parliamentarians. Damien presents this tool and tells us that it has been a force in pushing for open data in France, to the point that it has now been institutionalized by the French Senate.

Indeed, technology empowers. The open-source community is aware of that and discusses who needs to be empowered, and how. Xavier Coadic, biohacktivist, reflects on reverse-engineering as a way to reclaim power over existing technology. Damien Marié offers to empower the citizen, and Guillaume Plique the social scientist. Yo Yehudi says that the tech world lacks diversity and that technology should be used to bridge communities. Markus Suhr and Marcel Parciak want to empower the medical patient. Emmy Tsang and eLife aim at empowering “people and communities.” For Travis Rich, the real goal is a fair distribution of power, a sustainable digital infrastructure as a public utility, and as he says, we are not there yet. Like many other speakers, he reminds us that technical aspects are not enough: we also need political mobilization, institutional alliances, and a better funding scheme.

In the same spirit but in a different direction, some speakers mention publics with different digital equipment standards and/or practices. Santiago Bragagnolo and Offray Luna proposed the idea of a “pocket infrastructure,” taking into account that for some publics, such as in the global south, we cannot assume a permanent internet connection. Albert Yumol, a data activist based in the Philippines, shows us his repurposing of open data to investigate socio-economic indicators. His presentation highlights the importance of open-source technologies in “underrepresented countries,” notably when it comes to data activism. Albert showcases a supervised machine-learning approach to predict the income classification of urban and rural areas in the Philippines, based on OpenStreetMap features and drawing data from the Humanitarian Data Exchange (HDX); a generic sketch of this kind of pipeline follows below. It is worth noting that Offray Luna is based in Colombia, and that we failed to find the funds for him to come to Brussels in 2020; as a consequence, Santiago Bragagnolo had to give the talk on his behalf. In contrast, thanks to the 2021 edition being virtual, Albert Yumol was able to give his talk and Q&A from the Philippines.
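
To make the approach more tangible: the general idea is to count OpenStreetMap features (schools, banks, markets, roads…) per area and train a classifier on areas whose income class is already known from official statistics. The sketch below is a generic, hypothetical illustration with scikit-learn; the file, column names and model choice are assumptions, not Albert’s actual pipeline or data.

```python
# Hypothetical illustration of the general approach (not Albert Yumol's code):
# predict an area's income class from counts of OpenStreetMap features.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Placeholder table: one row per municipality, columns = counts of OSM features
# plus an income class label taken from official statistics (assumed format).
df = pd.read_csv("osm_features_by_municipality.csv")
X = df[["n_schools", "n_banks", "n_markets", "road_km"]]   # assumed feature names
y = df["income_class"]                                     # assumed label column

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```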

Funding and precariousness

TL;DR: Many speakers tell us that their project needs contributors: FOSDEM is also a place to attract developers. Open-source initiatives are generally precarious: small teams depend on one lead developer, with dependencies on projects in similar positions. Moreover, attractiveness depends mostly on the tool chain (e.g., Python is more attractive than C++), while behind-the-scenes work is unattractive and difficult to fund. Speakers remind us that open source does not need to be a hobby, and that the movement needs options for long-term support. But successful projects share what has worked for them: OpenRefine attracted contributions and became sustainable by reaching out to neighboring communities, improving localization, creating a steering committee, and building institutional alliances. RawGraphs launched a successful crowdfunding campaign, which not only provided resources but also built institutional alliances and engaged users to gather feedback.

Jan Grewe, neurobiologist and tool developer, reflects on the good and bad sides of developing open-source tools for neuroscience. His tool, Relacs, is maintained by a small team, and all maintainers depend on the main developer. Moreover, its dependencies have the same issue: just a few maintainers. Jan remarks that the attractiveness of a project depends on its tool chain (for instance, Python is more attractive than C++), and that developing the graphical user interface (GUI) is more attractive than behind-the-scenes work. Jan advocates that open source does not need to be a hobby: one can make a living out of it. He concludes that the way open source works does not always align with the way scientists want to use it, and that FLOSS needs options for long-term support.

Many speakers mention a situation like the one Jan Grewe describes. Julia Sprenger attests that it is difficult to find resources for refactoring and maintaining, and simply to fund software development, in a research culture where it is considered a side occupation. As a consequence, she adds, small projects die when people leave science. Bram Luyten mentions that, like any project, DSpace needs developers and contributors. The same goes for ReplicationWiki, says Jan H. Höffler. And as we hear from Yo Yehudi, code is only part of the problem, almost a secondary one: projects need money for maintenance, for documentation aimed at users and developers, and for improving accessibility and building a community. As a science funder, Wellcome addresses these issues in the health sector, she says. Other speakers share a number of additional solutions.

The popular data visualization tool RawGraphs succeeded in raising funds through a crowdfunding campaign. Giorgio Uboldi, designer and co-founder of the studio Calibro, presents RawGraphs and shares his insights on the process. The tool was born from the need to create complex, non-conventional visualizations, and up to 2019 it was mostly a side project. The crowdfunding campaign was launched in 2019 as a response to the project no longer being sustainable. Giorgio tells us that the campaign was not just a way to get funds, but also to build institutional alliances and to engage users to gather feedback. Check Giorgio’s talk for more information about the campaign.

Antonin Delpeuch, OpenRefine developer and PhD student, presents this data-wrangling tool and how it was revamped. How to attract contributions and make a project sustainable? Antonin shares what worked for the OpenRefine team. They reached out to neighboring communities. They improved localization using a tool named Weblate. They started a W3C community group to improve the “reconciliation API.” They created a steering committee whose role is to decide whom to partner with, how to get funding, and so on. They applied to the Google Summer of Code and Outreachy programs, and they revamped the architecture of the tool. Some questions remain open though: how to introduce breaking changes without disrupting the ecosystem of extensions? Which tasks to leave for new contributors to pick up?
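
For context, the reconciliation API lets a client send candidate names to a service and get back ranked matches against a reference database (Wikidata being the classic example). The sketch below sends a small query batch with Python’s requests, following the general shape of the specification the community group works on; the endpoint URL is a placeholder and the response handling is simplified.

```python
# Simplified sketch of querying a reconciliation service (endpoint is a placeholder).
# A client sends a batch of name queries and receives ranked candidate matches.
import json
import requests

ENDPOINT = "https://reconciliation.example.org/api"   # placeholder service URL

queries = {
    "q0": {"query": "Sciences Po", "type": "organization"},   # type hint is optional
    "q1": {"query": "University of Amsterdam"},
}

# The API conventionally accepts a form-encoded `queries` parameter holding JSON.
resp = requests.post(ENDPOINT, data={"queries": json.dumps(queries)}, timeout=30)
resp.raise_for_status()

for key, result in resp.json().items():
    candidates = result.get("result", [])
    best = candidates[0] if candidates else None
    print(key, "->", best["name"] if best else "no match")
```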

Developers in another culture

TL;DR: Some speakers reflect on what it means to be part of a laboratory or newsroom culture. Beyond observing that developer activities are rarely recognized as productive, they reflect on the different needs one finds in such cultures. For example, in the social sciences, researchers may favor rich description over modeling; in investigative journalism, security is a major constraint; and in both cases, the specialists need to understand the analytical steps offered by the tools they use. Making informed choices requires technological transparency, and user experience is crucial for a public that is not always acculturated to advanced computing. Not everyone wants to learn Python, and applications are a great way to provide such publics with pioneering techniques, but this requires increased attention to design, which is yet another field to mobilize in tool making. Hybrid profiles are essential to bridging these different areas: open-source developers can rarely afford to be just developers.

I mentioned the culture clash experienced by developers in science culture when I unfolded the topic of research tools and academic currency, with Julia Sprenger and Teresa Gomez-Diaz commenting on the many misunderstandings about the practice of tool making and the value of software. But there is more to it, as different speakers show by accounting for what Maya Anderson-González calls the transculturation of development and science.

Guillaume Plique, research engineer at the Sciences Po médialab, endeavors to empower social scientists with web-mining tools. He asks: how do we teach researchers web technologies? Like Erik Borra, Stijn Peeters and Bernhard Rieder, he recalls the importance of scraping, as opposed to crawling, to deal with the APIcalypse. He argues that Jupyterizing researchers is not a solution, because it’s OK to not want to learn Python “sometimes.” Yet web mining is a demanding skill that researchers can rarely afford to master, hence the necessity to make tools, even though this requires the contribution of designers and a trade-off between usability and scalability. Guillaume offers a demo of two of his tools: Artoo.js, a client-side scraping companion, and Minet, a web-mining CLI tool and library for Python.
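
To illustrate the kind of low-level work these tools wrap for researchers, here is a generic scraping sketch with requests and BeautifulSoup. It is not Minet’s or Artoo.js’s API, just a bare example of extracting structured data from a page; the URL, CSS selectors and output file are placeholders.

```python
# Generic scraping sketch (not Minet or Artoo.js): download a page and pull
# structured data out of its HTML. URL and CSS selectors are placeholders.
import csv
import requests
from bs4 import BeautifulSoup

URL = "https://example.org/articles"                 # placeholder page

resp = requests.get(URL, timeout=30, headers={"User-Agent": "research-bot/0.1"})
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

rows = []
for article in soup.select("article"):               # placeholder selector
    title = article.select_one("h2")
    link = article.select_one("a")
    rows.append({
        "title": title.get_text(strip=True) if title else "",
        "url": link["href"] if link else "",
    })

# Researchers typically want flat files they can inspect, share, and cite.
with open("articles.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "url"])
    writer.writeheader()
    writer.writerows(rows)
```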

Anne L’Hôte and Bruno Thomas are developers at the ICIJ, the International Consortium of Investigative Journalists. In their presentation and demonstration of Datashare, the tool used to deal with the ICIJ “leaks” data (Luxembourg Leaks, Panama Papers), they highlight the specific constraints applying to technology in this context. Indeed, security is a major issue, and no data can be hosted in the cloud because the investigation needs to be protected. And similarly to the research context mentioned by Guillaume Plique, usability and user experience are crucial because the investigators are not computer scientists.

Sébastien Rochette, data scientist, R consultant and marine biologist, shares his experience of helping researchers transform a series of scattered analyses into a documented, reproducible and shareable workflow. There is a big step from coding for yourself to sharing with a community, he notes. Mentoring at the start of the project was very beneficial to the researchers, as they were reluctant to accept the exposure of the project (they feared being judged). Sébastien comments on the importance, for the open-source community, of being welcoming and indulgent towards researchers, as they have to adapt their practices in contact with open-source projects, which might be the most important thing. Let us also recall Julia Sprenger’s recommendations on that matter: developers can help researchers by commenting, providing feedback, advertising projects in the tech communities, and improving usability; while scientists can help developers by using existing open-source tools, not starting from scratch, and making sure that their projects outlive their careers.

Erik Borra, Stijn Peeters and Bernhard Rieder, as developers and humanities scholars, highlight the necessity of bridging academic culture with tool development culture. They promote the use of “recipes”: series of analytical steps, some of which require interpretation from the researcher, allowing him or her to make informed choices about the research design. They call for the role of the research engineer to be expanded, not only from software to research, but also to education and strategic planning. Indeed, the role of research engineer is hybrid by nature.

In the same spirit, Robin de Mourat, research designer at the Sciences Po médialab, tells us about his professional experience in a hybrid lab and reflects on what interdisciplinary contexts do to tool development, not only as a developer, but also as a designer and scholar. He focuses on a case where a tool is redeveloped by a hybrid collective to answer new needs. From his standpoint, redesign meetings can be seen as a battlefield where participants have diverging attachments (see picture below). The original designers want to respect the original intent; the developers want to prevent recoding avalanches; the teachers want to ensure that they can adapt their courses; the researchers want to rediscuss the methodology; the information specialists want to bring their expertise to bear on the methodology; and the mediators want to take actual practices into account. But crucially, each person is not in only one role, but two or more. Participants are hybrid actors who face inner contradictions due to their multiple attachments.

From the presentation of Robin de Mourat, Developing from the field: Shifting design processes and roles between makers and practitioners around research tools development within an interdisciplinary research lab.

Demos, dev stories and user stories

A few words about the tech-oriented content of the devroom. Many speakers present an open-source project, either as the main focus of their talk, or as a ground for critical reflection. In these tool-oriented presentations one finds demonstrations as well as stories from developers, and sometimes from users, which is always appreciated.

20 open-source tools were demonstrated: Advene, a tool to annotate videos in Digital Humanities; Artoo.js, a client-side scraping companion; DataLad, a distributed data management system; Datasette, a multi-tool for exploring and publishing data; Datashare, the tool used to deal with the ICIJ “leaks” data; eLabFTW, a digital solution for electronic lab notebooks; Frictionless Data, and notably the Data Package Creator; Gazouilloire, a command-line tool for long-term tweet collection; HyBro, a web crawler for the social sciences; La Fabrique de la Loi, a datafication of the French law-making process; Minet, a web-mining CLI tool and library for Python; NeuroFedora, a Linux distribution dedicated to neuroscience; OpenRefine, a reproducible data wrangler; Org-mode, a set of functionalities that live inside GNU Emacs; Pandoræ, a data exploration and analysis tool; RawGraphs, an open-source visualization tool and framework; RECITAL, a digital humanities project on Italian comedy; Shrivelling World, a tool to represent geographical time-spaces; Stylo, a text editor for humanities scholars; and the Software Heritage platform.

11 speakers offered their testimony about their experience as open-source project contributors: Aniket Pradhan with NeuroFedora; Benjamin Ooghe-Tabanou with Hyphe and HyBro; Bernhard Rieder, Erik Borra and Stijn Peeters with their tools for social media research TCAT, 4CAT, and Netvizz; Giorgio Uboldi with RawGraphs; Jan Grewe with Relacs, a tool dedicated to electrophysiological recordings; Karthik Ram with the Journal of Open Source Software; Michael Hanke with DataLad; Nicolas CARPi with eLabFTW; Robin de Mourat with too many experiments and tools for me to list here; Sébastien Rochette, who narrated his experience in a hackathon; and Travis Rich with PubPub, the open-source publishing platform.

Finally, 3 users shared their experience with open-source projects: Albert Yumol with OpenStreetMap and the Humanitarian Data Exchange (HDX); Maya Anderson-González, who presented a micro-project visualizing and analyzing a Twitter network about FOSDEM 2020; and Niels Cautaerts, who presented his experience with eLabFTW.

Thank you to the co-organizers of the Open Research Tools and Technologies devroom, with whom I was super happy to collaborate: Diégo Antolinos-Basso, Paul Girard, Célya Gruson-Daniel, Achilleas Koutsou, Michael Sonntag, and Lilly Winfree. 😊

The public of the open research tools & technologies devroom, FOSDEM 2020.
(CC-BY-SA Mathieu Jacomy)