State of the conversation at the FOSDEM open research tools and technologies devroom

A 40-minute read; 8 minutes for the TL;DR version.

Faces of FOSDEM (CC-BY-SA Diégo Antolinos-Basso)

FOSDEM is a major event of the open-source community, especially in Europe. FOSDEM stands for “Free and Open source Software Developers’ European Meeting”; it is organized by volunteers each year at the Université Libre de Bruxelles. It looks like Diégo‘s pictures of the crowd just above: diverse, joyful, exciting – despite the frequent winter rain. It usually features a main track in the impressive Janson auditorium (1500 seats) and about 50 so-called devrooms (parallel tracks) in smaller teaching rooms. In 2020 and 2021 I participated in the creation and organization of the Open Research Tools and Technologies devroom (follow it on Twitter). After its two years of existence, I offer here an overview of what our speakers presented.

The devroom is aimed at developers and users of open tools and technologies working in a context of knowledge production, such as scientific research, investigative journalism, or NGO fieldwork. We accepted 19 presentations in 2020 and 22 in 2021. It is worth noting that the 2020 edition took place just before COVID hit Europe and was held in person as usual, while the 2021 edition featured pre-recorded talks with live Q&As. On the positive side, this made it easier for people far from Bruxelles to participate. We had 17 male and 4 female speakers in 2020, and 19 male and 5 female speakers in 2021. The 41 talks featured 20 demos; 11 speakers shared their story as developers, and 3 as users. I obviously cannot share all the interesting things these speakers said, so I focus on recurrent arguments, but you can watch the original presentations in video. As a consequence, purely technical talks are underrepresented in my report. This is not a statement, just a pragmatic editorial choice.

I delineated 9 distinct topics developed by multiple speakers. These topics are not completely separate; they largely overlap in ways that should be clear enough. Understand them as landmarks to get oriented in the discussion. The two central topics are open science and the question of research tools and academic currency, as one could expect considering the focus of the devroom. Seven other topics branch off these two central discussion points: fears about open-sourceness; the reproducibility crisis in science and how to face it; open digital notebooks; issues with data; empowerment and activism; funding issues and the precariousness of open-source projects; and the issues of developers working in lab or newsroom cultures. I will develop each topic starting with a short summary, then detail what different speakers bring to the discussion. For a quick overview, just read the first paragraph of each section, marked as TL;DR (too long; didn’t read). If you want to dive into the speakers’ videos, read the subsequent paragraphs and follow the links. I highlight the most important presentations (in my opinion) in the topic they focus on the most, even though all of them touch multiple topics. I intentionally restrict my links to the FOSDEM talk pages, from which you will find much more information and links about each speaker and project.

Note about vocabulary: FLOSS means “Free/Libre, Open-Source Software,” a notion I often shorten here into open source or just openness. But mind two important nuances: (1) freeness and openness are not the same thing and do not always go together; and (2) freeness has the double meaning of free as in free beer, and free as in free speech (hence the term “libre,” borrowed from French). To a large extent, the former is just a means to obtain the latter.

Open science

TL;DR: Open science is more than just the use of open-source tools in science. It is also about the publishing infrastructure: open access to papers, data sets, and software artifacts; and it is also about cultures and practices. As opening the code is only useful to those who can read it, annotation and documentation are crucial to researchers. Moreover, technical friction and pain points are major obstacles to the adoption of FLOSS. Usability is a central need, yet it is difficult to justify and fund in current academic culture. But some speakers propose ideas to intervene directly on research practices, by promoting more transparent and reflexive ways to work with data, and ways to reinvent scientific collaboration through transculturation. Most researchers seek openness, and the open-source movement has a lot to offer by replacing the current closed-by-design, open-as-an-afterthought academic publishing infrastructure with open-by-design solutions such as Software Heritage, DSpace, or PubPub. However, this requires a political mobilization beyond the open-source movement.

Roberto Di Cosmo, a computer science full professor at University Paris Diderot, starts his presentation of the Software Heritage initiative, which he leads, with the “three pillars of open science:” open-access repositories, open-data-set repositories, and open-source repositories. He articulates the different needs of different actors. First, researchers need to archive and reference the software used in articles, to find useful tools, to get credit for developed software, and to verify, reproduce, and improve their results. Second, laboratories or teams need to track software contributions and produce reports and web pages. Third, research organizations need to know their software assets and measure the impact of their software production. This is one of the reasons why it is so important that research software artifacts get archived, referenced, described, and credited. These needs are answered by Software Heritage, a long-term, non-profit, multi-stakeholder initiative with the ambitious goal of collecting, preserving, and sharing all publicly available source code, protecting our software commons in collaboration with UNESCO. In his talk, Roberto also shows how we can use and benefit from Software Heritage in a research context.
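To give an idea of how archived artifacts become referenceable, Software Heritage assigns each of them an intrinsic, cryptographically computed identifier (SWHID) that can then be cited in a paper. The hashes below are placeholders, but the general scheme looks like this:

```
swh:1:cnt:<40-hex-digit hash>   identifies a file's content
swh:1:dir:<40-hex-digit hash>   identifies a directory tree
swh:1:rev:<40-hex-digit hash>   identifies a revision (commit)
swh:1:snp:<40-hex-digit hash>   identifies a snapshot of a repository
```

Because the identifier is derived from the artifact itself rather than from its location, a citation remains valid even if the original hosting platform disappears.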

Travis Rich, executive director at the Knowledge Futures Group, presents PubPub, the open-source publishing platform he helped create. We’re not “there” yet, he says: despite having orders of magnitude more open-source software than 20 years ago, open and fair access to knowledge is still not available to everyone. Open source is not enough. PubPub received a lot of support because there was only a small set of industrial tools, all closed, and there was a strong demand for open publishing. In the process of meeting that demand, the PubPub team discovered multiple open-source projects that had tried the same but had become either hard to maintain, slow to adapt, or just outdated. Many non-technical, mission-oriented groups just wanted to use open-source tech for philosophical and ethical reasons and were willing to pay for it. Although PubPub is open source, its functioning depends on relations with third-party services that are not necessarily open source (e.g., Google Scholar), which requires maintenance and sometimes payment. This hints at a bigger problem: we do not have models for building and sustaining digital infrastructure that serves as a reliable, affordable, and accessible public utility. This is why Travis and his team founded the Knowledge Futures Group, an independent non-profit dedicated to the production and maintenance of digital infrastructure as a public utility. Travis highlights that if our real goal is a fair distribution of power, then technical aspects are not enough: institutional alliances and a sound funding scheme are also necessary, which requires some degree of political mobilization (on that last point, see also Markus Suhr and Marcel Parciak’s presentation).

In a similar spirit, Bram Luyten presents DSpace, an open-source repository software package typically used for creating open-access repositories for scholarly content. Bram presents himself as an “open-access advocate” and is a co-founder of Atmire, a service provider for DSpace. He argues that although open access accelerates scientific progress, it is broken when it is conceived as a hybrid of closed access. Besides, pre-prints are not the solution, because they lack peer review. This is where DSpace is useful. Like any project, it needs developers and contributors; yet it is the most successful institutional repository platform, in part because it was localized very early (China, Taiwan) and released under an MIT license.

Emmy Tsang is the innovation community manager at eLife, a non-profit organization and peer-reviewed open-access scientific journal for the biomedical and life sciences. Emmy shares her community-driven approach towards open innovation for research communication. Like Travis and Bram, she notes that internet publishing is broken – even though it was created by scientists to share knowledge. The goal of eLife, she says, is to move from a slow, expensive, closed, and draining publication process to a fast, cheap, open, and user-friendly one. Like Bram, she defends open-by-design as opposed to closed-by-design made open later. Indeed, closed systems propagate design biases (e.g. diversity biases) and produce unusable research, because it is hidden behind paywalls and inaccessible in various ways to different publics. The ambition of eLife is to offer open, inclusive, and user-centric research communication tools to the community. In that perspective, Emmy presents the Reproducible Document Stack (RDS), eLife’s solution to produce papers where the code and results are reproducible.

Yo Yehudi is the open-source tech lead for data for science and health at the Wellcome Trust, a charitable foundation focused on health research. She lists important challenges for computational research: the lack of computational tools for research, the lack of incentives to attract, retain and reward talent, and insufficient trust in computational work. She pinpoints a paradox with tools in research: sometimes research is too narrow and the tools that researchers need do not exist; and sometimes there are too many tools and standards, and the fragmentation of that landscape is a problem for researchers. Yo explains that Wellcome’s goal is to fund the tools, talent and trust researchers need – it is a science funder. She showcases two open-source projects Wellcome has funded: Afrimapr and OpenSAFELY. Like Travis, she argues that code is only part of the problem, almost secondary: we need maintenance money, we need to justify doing documentation for users and developers, to fund accessibility, to build a community, and to spend time on improving the user experience. Those are all justifiable software costs! she insists.

Not all speakers are developers; some are tool users. Maya Anderson-González is a researcher in computational social sciences and digital humanities, and she narrates how she used FLOSS tools “and lived to tell the tale.” She explains how she got acquainted with open-source principles, the issues she faced, and how she built confidence through exposure and training – her project was to visualize and analyze a Twitter network of FOSDEM 2020. She comments on the importance of documenting one’s own process, as well as accessing other people’s processes. In this setting it is valuable to share work-in-progress reflections and results, which naturally leads to the reflexivity of open research (which Maya found intimidating at first). In traditional science, collaboration can be coded in specific ways, for instance around seniority. But using tools designed for community participation changes existing collaboration processes. She concludes on the usefulness of the concept of “transculturation,” which Maya sees as what FLOSS developers do with social science researchers to create an open science culture.

From the presentation of Maya Anderson-González, FLOSS meets Social Science Research (and lived to tell the tale)

Erik Borra, Stijn Peeters and Bernhard Rieder are assistant and associate professors at the media studies department of the University of Amsterdam. The three of them have the hybrid profile of humanities scholar and tool developer, having offered us many of the Digital Methods Initiative tools to analyze online platforms, such as Netvizz (Facebook), TCAT (Twitter) and 4CAT (4chan, Reddit and more). They remind us that just opening the code is only useful to those who can read it, which is why annotation and documentation are key to making the software open. They highlight three issues. First, our relationships with large platform companies are changing, a phenomenon known as “the APIcalypse.” Netvizz, for instance, was a victim of Facebook’s policy change. The new situation offers a few new opportunities, but it does not work the same way, and it sparks a debate about scraping (its legality and ethics). Second, privacy and legal compliance (Europe’s new GDPR regulation): how should our tools be designed to meet the new requirements and secure access to the data? Third, the web has been changing over recent years. Platforms started deplatforming users, and some platforms may disappear entirely (e.g., Parler, even though it is now back online). We need to understand these elements before reacting to them and helping our users. Because of this ever-changing landscape, they say, the role of the research engineer must be expanded: one foot in research, one foot in software, but also one foot in software education and one foot in strategic planning. On the bright side, Erik, Stijn and Bernhard just obtained a Dutch grant from the Platform Digital Infrastructure for the Social Sciences and Humanities to start addressing these issues (their project is named CAT4SMR).

Still on the tool-making side, but for the natural sciences, Aniket Pradhan presents NeuroFedora, a Linux distribution dedicated to neuroscience that he contributes to. Emmy Tsang, Yo Yehudi and Maya Anderson-González have highlighted the importance of usability; the problem NeuroFedora solves is exactly of that kind. Many powerful packages exist to help neuroscientists, but they are often difficult to install, which is a major obstacle to their adoption. NeuroFedora embeds those packages, alleviating the technical pain even in cases where the documentation is lacking or the installation skill-demanding, making relevant tools available in a way that just works for the user. It also serves open science by publicizing tools. The NeuroFedora team consists of about 20 contributors, only 5 of whom have a neuroscience background.

Lilly Winfree is the product owner of the Frictionless Data for Reproducible Research project at the Open Knowledge Foundation. As Aniket Pradhan’s contribution highlights, sometimes what it takes to promote openness just consists of lackluster tasks such as writing an installation script. In the realm of data management, the “boring” tasks are to clean the data, check their quality, document their origin, and find their license. This “friction” is an issue for scientists, data journalists and more. Lilly presents the specification and open-source toolkit known as Frictionless Data, which aims at alleviating that friction. Its ambition is to bring reproducibility to the process of transforming messy data into clean data, and then into hosted data. The central concept is called a “data package”: typically a metadata file (with an optional schema) documenting your usual CSV data file, which can be validated automatically. Also check Carles Pina Estany’s demo of the Data Package Creator.
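As a rough sketch of the workflow (assuming the frictionless-py toolkit; the file name is invented for illustration):

```python
# Minimal sketch with the frictionless-py toolkit; "observations.csv" is invented.
from frictionless import describe, validate

resource = describe("observations.csv")         # infer schema and metadata from the CSV
resource.to_json("observations.resource.json")  # the metadata now travels with the data
report = validate("observations.csv")           # automated, reproducible quality check
print(report.valid)
```

The point of the design is that the validation step is mechanical: anyone who receives the data and its metadata can re-run the same check and get the same verdict.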

In complement, let me mention a few points from other talks that I will return to. Julia Sprenger remarks that for many researchers, publication comes first, and the software is maybe released later on, even though publishing code improves scientific results (she gives an example). Sébastien Rochette similarly observes that, in a hackathon he organized, researchers were reluctant to expose their project, but they ultimately changed their practices. Finally, different speakers such as Karthik Ram reflect that open-source tools are not properly valued in laboratory culture; for instance, many researchers do not know how to cite software, or how to pick a license for their production. This is a situation caused by the overwhelming importance given to peer-reviewed publications in academia.

Research tools and academic currency

TL;DR: Even though software impacts all modern research, and even though research tools shape scientific practices, publications remain the prime currency in science. Developing software does not contribute directly to publication, which leads to multiple issues. Tool making is difficult to fund and justify, when it is not simply considered a side occupation. Refactoring and maintenance are undervalued, and releasing code is not a priority. Research software exists as an identified object in academia, but research institutions lack the incentives to attract and retain talent and improve the situation. This situation might be explained by the different perspectives of the actors at stake. Researchers do not know how to cite software properly; tool makers struggle to promote and get credit for their production; laboratories do not know how to track and report the software contributions of their members; and research institutions are unaware that they have software assets that could improve their impact. Yet there is some hope, as initiatives such as the Journal of Open Source Software arise to promote software in the academic sphere.

Karthik Ram, from the Berkeley Institute for Data Science, is a contributor and editor of the Journal of Open Source Software (JOSS). He starts with the observation that even though software impacts all modern research, we still don’t know how to cite it. For instance, how do we cite the specific version used? Yet we need to be able to peer-review software, among other things (see the picture below). To promote a tool in the current academic system, one must publish a software paper (or “proxy paper”). As a peer-reviewed publication in an existing journal, it is easy to cite and does not require changing the academic infrastructure, but it requires writing said paper (additional work, of a different nature), many journals do not accept software papers, and besides, static authorship is not appropriate for collaborative tools. As Karthik notes, the Jupyter team is an example of the gap between the contributors to the versions used in practice and the authors of the software paper (which is noticeably older). JOSS is an answer to these issues, based on the idea of hacking something around what exists, because we cannot change the whole ecosystem at once. It is free, open, and developer-friendly: if good practices are respected, a paper can be written in one hour. It consists of a high-level description of the tool: a simple, citable object. It is as conventional as possible in the scholarly space: it uses ORCID for login and archives papers with Portico. Check Karthik’s talk for more information; I found it captivating from start to finish.

From the presentation of Karthik Ram, The Journal of Open Source Software: Credit for invisible work.

Julia Sprenger is a doctoral student in electrophysiology at the Research Centre Jülich. Publications are the currency in science, she says. Time spent developing a tool is an issue for a researcher, because it does not directly contribute to publication. Refactoring and maintaining code are not valued as academic work, and it is difficult to fund software development. She also observes a trust issue: self-made software is seen as the right thing for small projects, while commercial software is perceived as the better option for complex projects. As a consequence, the classic thing researchers do is to ensure publication first, and then maybe release the software after that. Julia comments that making errors is taboo in science. She provides an example of how publishing code contributes to scientific progress, while noting that in most cases, software development is de facto a side occupation. Julia offers ideas to improve the situation. First, how to help scientists as a software developer: comment, provide feedback and advice, advertise projects in dev communities, and make it easy for scientists to reuse your tools (easy to install and compile, with good documentation). Small projects die when people leave science, she reminds us. Second, how to help as a scientist: use existing open-source tools, don’t start from scratch, and make sure that your project outlives your career.

Teresa Gomez-Diaz is a CNRS research engineer at the Gaspard-Monge Computer Science laboratory. She shares her two-decades-long experience in a lab with an important software production in various domains. Teresa was recently tasked with making an inventory of the tools produced by her lab. Unsurprisingly, some tools were not well identified (no dates, no lists of authors, no license). It was even unclear, she says, what counted as the “lab’s software.” Who decides that? And who makes other choices, such as picking the license? The lab had to clarify its policies, which raised the question of the value of its software production. Teresa has observed similar problems in many other laboratories, and on multiple levels: scientific, legal… To help face these issues, Teresa offers a practical definition of “research software”: a well-identified set of code written by a well-identified research team, i.e. software that has been built and used to produce a result published or disseminated in some article or scientific contribution. 15 years ago it was not possible to publish software papers, but it is now possible to promote software production, although the dissemination procedure remains a problem (also check Karthik Ram’s talk on that point). On that level, Teresa recommends separating the evaluation of research from that of software. She distinguishes four attention points. (1) Citation, i.e. measuring whether the research software is well identified as a research output. This is a legal point: who are the authors, what are their affiliations and participation percentages? (2) Dissemination: are best practices followed? This is a policy point (about open science) and a legal point (licenses). (3) Usability: are computations correct? Is the tool reliable and easy to install and use? This is a reproducibility point. (4) Research, i.e. the quality of the scientific work embedded in the software and in related publications. This is a research point about ensuring and measuring impact.

Teresa Gomez-Diaz’s four attention points nicely complement Roberto Di Cosmo’s remark that different actors have different needs: the researcher needs to archive and reference software used in articles, find useful tools, and get credit for developed software; the laboratory needs to track software contributions, produce reports and web pages; while the research organization needs to know its software assets, and measure the impact of its software production.

Technological means are not a secondary question in science. Erik Borra, Stijn Peeters and Bernhard Rieder remind us, as scholars and tool developers, that tools shape practices. Software for the humanities is different from that for the computational sciences because the needs are different; for instance, media studies require the epistemic flexibility to “follow the medium.” Yo Yehudi similarly argues that the lack of computational tools for research is due to the difficulty of attracting and retaining talent in the laboratory, which boils down to the fact that code is not paper. The way software is valued, promoted and funded in science impacts knowledge production in ways that are well worth understanding.

I am ultimately surprised by the close connection between the issue of research tools’ value and status in academia and the question of open science. It might be a bias of the FOSDEM community, but the points made by the speakers suggest that notions of openness are being imported from the tech world into science, which seems a reasonable idea to me. My own observations certainly go this way. Note that, facing the “broken” state of the academic publication infrastructure (to reuse Emmy Tsang’s word), Bram Luyten and Travis Rich both proposed open-source tools (DSpace and PubPub) before moving to institutional consolidation (Atmire and Knowledge Futures Group, respectively). Academic culture is naturally inclined to sharing knowledge freely, which resonates with the values of open-source software, but the legal and political tools (e.g., licenses) and the practices are different. We see that the discussion about openness has moved from purely technological questions to wider political issues in academia, such as publication infrastructure, modes of collaboration, and funding.

Fears

TL;DR: Some speakers reflect on fears of open source in academia. Errors are taboo in science, and trust issues develop. Some researchers are afraid to publish their code (fear of being judged, of having bugs exposed), misrepresent themselves as non-coders even though they produce code (e.g., R scripts), or even hide the fact that they code (which is detrimental when seeking funds). Others conceal their code to retain intellectual property. Beyond emphasizing that judgmentalism is harmful to the community, the good practice of FLOSS development offers guidelines to mitigate those fears: open source your code from day one, make your tools discoverable, mind the license, and define responsibilities.

Mateusz Kuzak, research software community manager at the Netherlands eScience Center, tells us why researchers are afraid of putting their code in the open (see picture below). They are afraid of being judged for their “crappy code,” of bugs being discovered, and of getting “scooped.” Mateusz co-authored a paper offering practical solutions to these fears, titled Four simple recommendations to encourage best practices in research software (DOI: 10.12688/f1000research.11407.1). Those are (1) open source your code from day one, (2) make your tools discoverable, (3) mind the license, and (4) define responsibilities (more explanations in his presentation).

From the presentation of Mateusz Kuzak, On the road to sustainable research software.

Mateusz Kuzak is not the only one to take note of fears in science. Julia Sprenger observes that making errors is taboo in science, and that researchers do not trust open-source software for large projects. Maya Anderson-González narrates how, as a user, some installation issues scared her to the point that she switched to more usable tools, unable to find appropriate help in her network. Yo Yehudi also highlights the importance of usability and observes that some researchers may want to hide that they code, because they think it might be a problem when seeking funds. Finally, Sébastien Rochette reports on researchers having trouble accepting the exposure of their coding projects. He tells us that it is important for the open-source community to be welcoming and indulgent towards coding researchers.

Facing the reproducibility crisis

TL;DR: Researchers might fail to replicate their peers’ experiments for many reasons, a well-known and major issue for experimental knowledge. Some of the most preventable causes are missing data and underdocumented experimental details, such as information hidden in hardware and software. As multiple speakers argue, it is, at its core, a data management issue where the open-source community has a lot to offer.

Lilly Winfree and Jan Grewe (I will return to him) comment on the importance of open-source software in addressing the reproducibility crisis (or “replication crisis”). Lilly focuses on solving the underlying data management issue with Frictionless Data, while Jan gives the example of researchers who cannot reproduce each other’s work because some tiny details are not mentioned in the papers (information hidden in the settings of hardware and software), which motivated him to develop his own solution (the tool Relacs). Similarly, Emmy Tsang highlights the importance of this issue for eLife, which is why they developed RDS, the Reproducible Document Stack, a technology they use to ensure that published code and results are reproducible. Indeed, computational research requires extra steps to ensure reproducibility (a point also made by Thibault Lestang and Offray Luna). And as Teresa Gomez-Diaz remarks, the validation of scientific results is one of the ways to give value to software produced in research.

Jan H. Höffler founded ReplicationWiki, a database of empirical studies documenting the availability of replication material and of replication studies. His project supports transparency by making more scientific material available, which improves the quality of empirical social science. ReplicationWiki is open source and based on MediaWiki, with some adaptations. Jan shares the challenges he faces, which are not only technical: he seeks contributors and funding to help him in his public-utility endeavor.

Reproducibility, like the validation and improvement of existing results, is part of researchers’ needs, as Roberto Di Cosmo tells us. As with ReplicationWiki, addressing it is a core mission of Software Heritage, as well as of various kinds of notebooks, such as Nicolas CARPi’s eLabFTW software.

Notebooks

TL;DR: Notebooks are, generally speaking, appreciated by the research community for their ability to improve reproducibility. Researchers rely on them to mix text, data, and visualizations, during writing as well as publishing. We all know about Jupyter notebooks, but many other tools share a similar perspective, and half a dozen of them were presented by their respective developers. They insist on the importance of openness to disseminate high-quality knowledge (reproducible, verifiable, and circulable).

The topic of notebooks is obviously connected to that of reproducibility, as it is one of their main purposes. “Notebook” might even be too restrictive a term, as some of the devices that allow a reproducible mix of text, code, data and visualization (which I call here hybrid) only loosely resemble the Jupyter archetype. This is the case of Org-mode, a set of functionalities living inside GNU Emacs that can be used to bundle software, data and figures into one single executable plain-text document, as presented by Thibault Lestang, a computational physicist turned research software engineer. It is also the case of eLife’s Reproducible Document Stack, presented by Emmy Tsang, which brings this hybrid approach to online publishing.
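As a rough illustration of the idea (a minimal sketch, assuming Org Babel’s Python support is enabled in Emacs): prose, code, and results live in one plain-text file, and executing the block inserts its output right below it.

```
#+TITLE: A minimal reproducible analysis

The mean of our measurements, recomputed on demand:

#+BEGIN_SRC python :results output
data = [1.2, 3.4, 2.2]
print(sum(data) / len(data))
#+END_SRC
```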

In the natural sciences, the laboratory notebook (where experimental situations and results get logged) becomes digitized as an object known as an “ELN” (electronic lab notebook). Nicolas CARPi is an engineer at Institut Curie and the creator of eLabFTW, an open-source ELN solution. Where is the data hosted in an ELN? And what if the company hosting it disappears? he asks. When it comes to data security and durability, an open-source project is preferable. The development of eLabFTW is community-driven, and it can be hosted on your own network (you own the data), respecting the standards of secure software. His presentation features more information and a demonstration. In addition, Niels Cautaerts, an experimental materials scientist and eLabFTW user, presents his own usage of and experiments with it. He showcases a project leveraging the eLabFTW Python API to print QR codes that streamline some lab procedures.
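In the spirit of Niels’s demo, here is a hypothetical sketch of such a workflow; the endpoint path, auth header and URL scheme below are assumptions for illustration, not the documented eLabFTW API:

```python
# Hypothetical sketch: list items from an eLabFTW instance over HTTP and
# generate one QR-code sticker per item, linking back to its record.
# The endpoint, auth header and URL scheme are assumptions, not the real API.
import qrcode
import requests

BASE = "https://elab.example.org"
items = requests.get(f"{BASE}/api/v1/items",
                     headers={"Authorization": "MY_API_KEY"}).json()
for item in items:
    url = f"{BASE}/database.php?mode=view&id={item['id']}"
    qrcode.make(url).save(f"item_{item['id']}.png")  # print and stick on the sample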

In the social sciences and humanities, the notebook is less about logging and more about disseminating. Robin de Mourat presents FONIO and Ovide, content-editing solutions for the social sciences (footnotes, bibliographic references, internal links…). Antoine Fauchié, a PhD student, presents Stylo, a user-friendly text editor for humanities scholars. It offers content structuration and multiple publishing formats (PDF, XML, HTML…) from only one source document, while keeping a simple and usable interface. Offray Luna, hacktivist and designer, and Santiago Bragagnolo, software engineer and researcher, present a similar project known as Grafoscopio. It is a notebook tool aimed at supporting reproducible research: visualizing and editing text in a tree fashion; supporting a mix of code, data, visualization, and text; and exporting to HTML, LaTeX, or PDF. It is intended for science but also for data journalism and activism. Grafoscopio aims at bridging fields that have similar needs: research, civic hacktivism, data feminism… Offray calls it a “pocket infrastructure” because it is simple, self-contained, extensible and offline-first (a major concern for the global south), and you only have to download one thing (the Grafoscopio tool). Like Erik Borra, Stijn Peeters and Bernhard Rieder, he asks: how do we change the tools that change us? This “we” is a call for bridging communities, for instance through workshops.

Data issues

TL;DR: Data accessibility, transparency and accountability not only improve the quality and reproducibility of academic papers and investigative journalism, they also support the security we need to protect privacy and other sensitive data, such as whistleblower leaks. Moreover, data sustainability is a major concern in research, where data remain relevant over a much longer term than the infrastructure supporting them (formats, institutional actors). In response, multiple speakers advocate decentralized data storage (web3). And storage is not the only issue: we also have to make data verifiable in practice, which prompts new design goals for the tools we use. Here the FAIR principles are useful: findable, accessible, interoperable, and re-usable data.

Markus Suhr and Marcel Parciak are research associates in medical informatics at the University Medical Center in Göttingen. They reflect on the “dreaded black box” of (mostly) proprietary software that lies at the center of the information flow in the field of medicine (they give an example). As medical data is sensitive, security is a primary concern. To empower the patient with a transparent and accountable workflow, they say, we need a political campaign for free and decentralized software in healthcare. Travis Rich, Emmy Tsang and Yo Yehudi have similarly emphasized that technical issues are only a small part of the question.

Michael Hanke, who self-describes as a full-time informagician and real-life psychologist, presents DataLad, a decentralized digital object management system. Its core idea is that a dataset is a Git repository. With this radical approach, the data would not disappear if DataLad died, because it relies on uncompromising decentralization (as Julia Sprenger says, “small projects die when people leave science”). DataLad exists because a single repository is not enough in science: additional features are necessary, such as utilities for metadata and provenance capture. DataLad brings convenience and simplification while respecting the core principle that a dataset is also just a Git repository.
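A minimal sketch of that core idea using DataLad’s Python API (the paths and example file are invented for illustration):

```python
# Minimal sketch of DataLad's core idea: a dataset is a Git(-annex) repository.
# Paths and the example file are invented for illustration.
import datalad.api as dl

ds = dl.create(path="my-dataset")     # initialize a dataset (i.e. a Git repository)
with open("my-dataset/results.csv", "w") as f:
    f.write("subject,score\n1,0.83\n")
ds.save(message="Add first results")  # record content and provenance in Git
```

Because the dataset remains a plain Git repository underneath, it stays usable with ordinary Git tooling even without DataLad.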

Data decentralization is in fact a major topic when it comes to security issues. Anne L’Hôte and Bruno Thomas mention in their presentation that the “leaks” data analyzed at the International Consortium of Investigative Journalism (ICIJ) cannot be hosted in the cloud, because the investigation needs to be protected. Molly Mackinlay, lead of the IPFS Project and Filecoin Network team, presents a decentralized infrastructure for the web known as “web3.” The web3 ecosystem is turning centralized applications into decentralized protocols. It is a movement to make the web more decentralized, verifiable, and secure. The key element in all this is to make data verifiable. IPFS, standing for “Inter-Planetary File System,” aims at verifiably addressing and distributing content across a peer-to-peer network. Molly presents more elements of the web3 stack, like libp2p and Filecoin, a decentralized storage network (and protocol) with a payment mechanism.
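The core intuition behind verifiable data is content addressing: the address of a piece of data is derived from the data itself, so whoever serves it, the recipient can check it. A deliberately simplified sketch of that idea (real IPFS CIDs are multihash-encoded and computed over chunked Merkle DAGs, not a single hash):

```python
# Simplified illustration of content addressing; real IPFS CIDs are
# multihash-encoded and computed over chunked Merkle DAGs.
import hashlib

def address(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

store = {}                   # any peer could hold this block
data = b"hello web3"
cid = address(data)
store[cid] = data
assert address(store[cid]) == cid  # whoever serves it, the content is verifiable
```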

The presentation of Frictionless Data by Lilly Winfree pinpoints that the “boring” parts of data management, such as cleaning, picking a license, checking quality, crediting the source, ensuring interoperability and documenting attributes, cause problematic friction. Indeed, we need to validate the data in a reproducible way. But Erik Borra, Stijn Peeters and Bernhard Rieder point to other frictions with data in science, such as the legal and ethical issues of scraping, or the question of privacy and legal compliance. They ask how to react to these challenges in the design of our tools, for instance with encryption, or automatic upload to dedicated hosting services – it seems to me that this question should connect with the debate about data decentralization. On that matter, Sébastien Rochette‘s reminder of the FAIR principles is useful: Findable, Accessible, Interoperable, and Re-usable data. See also Datasette, a multi-tool for exploring and publishing data, aimed at data journalists, museum curators, archivists, local governments and anyone else who has data that they wish to share with the world. It is presented by its creator Simon Willison, also a co-creator of the web framework Django.

Empowerment and activism

TL;DR: Solidarity and transparency are core values of the open-source community, echoing a similar inclination in science and journalism. Some speakers highlight the social benefits of FLOSS, like public accountability and inclusiveness, in contrast to closed systems that propagate design biases. Openness is a way to counterbalance the tech world’s lack of diversity and its focus on the industrialized world as a market. Open-source tools can empower various publics (minorities, voters, medical patients), but some speakers remind us that technical aspects are not enough to ensure a fair distribution of power, notably when it comes to funding, which raises wider political questions.

Damien Marié is a developer and member of Regards Citoyens, a French NGO that lobbies for open parliamentary data. Regards Citoyens and Sciences Po, France’s main school of political science, co-developed La Fabrique de la Loi, a data infrastructure and web platform retracing in detail the law-making work of parliamentarians. Damien presents this tool and tells us that it has been a force to push for open data in France, to the point that it has now been institutionalized by the French Senate.

Indeed, technology empowers. The open-source community is aware of that and discusses who needs to be empowered, and how. Xavier Coadic, a biohacktivist, reflects on reverse-engineering as a way to reclaim power over existing technology. Damien Marié offers to empower the citizen, and Guillaume Plique the social scientist. Yo Yehudi says that the tech world lacks diversity and that technology should be used to bridge communities. Markus Suhr and Marcel Parciak want to empower the medical patient. Emmy Tsang and eLife aim at empowering “people and communities.” For Travis Rich, the real goal is a fair distribution of power and a sustainable digital infrastructure as public utility, and as he says, we are not there yet. Like many other speakers, he reminds us that technical aspects are not enough: we also need political mobilization, institutional alliances, and a better funding scheme.

In the same spirit but in a different direction, some speakers mention publics with different digital equipment standards and/or practices. Santiago Bragagnolo and Offray Luna proposed the idea of a “pocket infrastructure,” taking into account that for some publics, such as in the global south, we cannot assume a permanent internet connection. Albert Yumol, a data activist based in the Philippines, shows us how he repurposes open data to investigate socio-economic indicators. His presentation highlights the importance of open-source technologies in “underrepresented countries,” notably when it comes to data activism. Albert showcases a supervised machine-learning approach to predict the income classification of urban and rural areas in the Philippines, based on OpenStreetMap features and drawing data from the Humanitarian Data Exchange (HDX). It is worth noting that Offray Luna is based in Colombia, and that we failed to find the funds for him to come to Bruxelles in 2020, so Santiago Bragagnolo had to give the talk on his behalf. In contrast, thanks to the 2021 edition being virtual, Albert Yumol was able to give his talk and Q&A from the Philippines.
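For a flavor of that kind of approach, here is a hypothetical sketch; the feature names and data are invented for illustration and are not Albert’s actual pipeline:

```python
# Hypothetical sketch of predicting an area's income class from
# OpenStreetMap-derived features; feature names and data are invented.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "road_km":             [12.0, 3.1, 45.2, 8.7, 30.5, 5.0],   # e.g. from OSM extracts
    "n_schools":           [4, 1, 9, 2, 7, 1],
    "n_health_facilities": [2, 0, 6, 1, 5, 0],
    "income_class":        ["low", "low", "high", "low", "high", "low"],  # e.g. from HDX
})
X, y = df.drop(columns="income_class"), df["income_class"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(model.score(X_test, y_test))  # held-out accuracy
```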

Funding and precariousness

TL;DR: Many speakers tell us that their project needs contributors: the FOSDEM is also a place to attract developers. Open-source initiatives are generally precarious: small teams depending on one lead developer, with dependencies on projects in similar positions. Moreover, attractiveness depends largely on the tool chain (e.g. Python is more attractive than C++), while behind-the-scenes work is unattractive and difficult to fund. Speakers remind us that open source does not need to be a hobby, and that the movement needs options for long-term support. Successful projects share what has worked for them: OpenRefine attracted contributions and became sustainable by reaching out to neighboring communities, improving localization, creating a steering committee, and building institutional alliances. RawGraphs launched a successful crowdfunding campaign, which not only provided resources but also built institutional alliances and engaged users to gather feedback.

Jan Grewe, a neurobiologist and tool developer, reflects on the good and bad sides of developing open-source tools for neuroscience. His tool, Relacs, is maintained by a small team, and all maintainers depend on the main developer. Moreover, its dependencies have the same issue: just a few maintainers. Jan remarks that the attractiveness of a project depends on its tool chain (for instance, Python is more attractive than C++), and developing the graphical user interface (GUI) is more attractive than behind-the-scenes work. Jan argues that open source does not need to be a hobby: it does not imply that one cannot make a living out of it. He concludes that the way open source works does not always align with the way scientists want to use it, and that FLOSS needs options for long-term support.

Many speakers mention a situation like the one Jan Grewe describes. Julia Sprenger attests that it is difficult to find resources for refactoring and maintenance, and simply to fund software development, in a research culture where it is considered a side occupation. As a consequence, she adds, small projects die when people leave science. Bram Luyten mentions that, like any project, DSpace needs developers and contributors. The same goes for ReplicationWiki, says Jan H. Höffler. And as we hear from Yo Yehudi, code is only part of the problem, almost secondary: projects need maintenance money, justification for writing documentation for users and developers, and means to improve accessibility and build a community. As a science funder, Wellcome addresses these issues in the health sector, she says. But other speakers share a number of other solutions.

The popular data visualization tool RawGraphs succeeded in raising funds through a crowdfunding campaign. Giorgio Uboldi, designer and co-founder of the studio Calibro, presents RawGraphs and shares his insights on the process. The tool was born from the need to create complex, non-conventional visualizations, and up to 2019 it was mostly a side project. The crowdfunding campaign was launched in 2019 in response to the project no longer being sustainable. Giorgio tells us that the campaign was not just a way to get funds, but also a way to build institutional alliances and engage users to gather feedback. Check Giorgio’s talk for more information about the campaign.

Antonin Delpeuch, an OpenRefine developer and PhD student, presents this data-wrangling tool and how it was revamped. How do you attract contributions to make a project sustainable? Antonin shares what worked for the OpenRefine team. They reached out to neighboring communities. They improved localization using a tool named Weblate. They started a W3C community group to improve the “reconciliation API.” They created a steering committee whose role is to decide whom to partner with, how to get funding, etc. They applied to the Google Summer of Code and Outreachy programmes, and they revamped the architecture of the tool. Some questions remain open, though: how to introduce breaking changes without disrupting the ecosystem of extensions? Which tasks to leave for new contributors to pick up?

Developers in another culture

TL;DR: Some speakers reflect on what it means to be part of a laboratory or newsroom culture. Beyond observing that developer activities are rarely recognized as productive, they reflect on the different needs one finds in such cultures. For example, in the social sciences, researchers may favor rich description over modeling; in investigative journalism, security is a major constraint; and in both cases, the specialists need to understand the analytical steps offered by the tools they use. Making informed choices requires technological transparency, and user experience is crucial for a public that is not always acculturated to advanced computing. Not everyone wants to learn Python, and applications are a great way to provide such publics with cutting-edge techniques, but this requires an increased attention to design, which is yet another field to mobilize in tool making. Hybrid profiles are essential to bridging these different areas: open-source developers can rarely afford to be just developers.

I mentioned the culture clash experienced by developers in science culture when I unfolded the topic of research tools and academic currency, with Julia Sprenger and Teresa Gomez-Diaz commenting on the many misunderstandings about the practice of tool making and the value of software. But there is more to it, as different speakers show by accounting for what Maya Anderson-González calls the transculturation of development and science.

Guillaume Plique, a research engineer at the Sciences Po médialab, endeavors to empower social scientists with web-mining tools. How, he asks, do you teach web technologies to researchers? Like Erik Borra, Stijn Peeters and Bernhard Rieder, he recalls the importance of scraping, as opposed to crawling, to deal with the APIcalypse. He argues that Jupyterizing researchers is not a solution, because it’s OK to not want to learn Python “sometimes.” Yet web mining is a demanding skill that researchers can rarely afford to master, hence the necessity to make tools, even though this requires the contribution of designers and a trade-off between usability and scalability. Guillaume offers a demo of two of his tools: Artoo.js, a client-side scraping companion, and Minet, a web-mining CLI tool and library for Python.

Anne L’Hôte and Bruno Thomas are developers at the ICIJ, the International Consortium of Investigative Journalism. They present and demonstrate Datashare, the tool used to deal with the ICIJ “leaks” data (Luxembourg Leaks, Panama Papers), and highlight the specific constraints applying to technology in this context. Indeed, security is a major issue, and no data can be hosted in the cloud because the investigation needs to be protected. And similarly to the research context mentioned by Guillaume Plique, usability and user experience are crucial because the investigators are not computer scientists.

Sébastien Rochette, a data scientist, R consultant and marine biologist, shares his experience of helping researchers transform a series of scattered analyses into a documented, reproducible and shareable workflow. There is a big step from coding for yourself to sharing with a community, he notes. Mentoring at the start of the project was very beneficial to the researchers, as they were reluctant to accept the exposure of the project (they feared being judged). Sébastien comments on the importance for the open-source community of being welcoming and indulgent towards researchers, as they have to adapt their practices through contact with open-source projects, which might be the most important thing. Let us also recall Julia Sprenger’s recommendations on that matter: developers can help researchers by commenting, providing feedback, advertising projects in the tech communities, and improving usability; while scientists can help developers by using existing open-source tools, not starting from scratch, and making sure that their projects outlive their careers.

Erik Borra, Stijn Peeters and Bernhard Rieder, as developers and humanities scholars, highlight the necessity of bridging academic culture with tool development culture. They promote the use of “recipes”: series of analytical steps, some of which require an interpretation from the researcher, allowing him or her to make informed choices about the research design. They call for the role of the research engineer to be expanded, not only from software to research, but also to education and strategic planning. Indeed, the role of research engineer is hybrid by nature.

In the same spirit, Robin de Mourat, a research designer at the Sciences Po médialab, tells us about his professional experience in a hybrid lab and reflects on what interdisciplinary contexts do to tool development, not only as a developer, but also as a designer and scholar. He focuses on a case where a tool is redeveloped by a hybrid collective to answer new needs. From his standpoint, redesign meetings can be seen as a battlefield where participants have diverging attachments (see picture below). The original designers want to respect the original intent; the developers want to prevent avalanches of recoding; the teachers want to ensure that they can adapt their courses; the researchers want to rediscuss the methodology; the information specialists want to assess the methodology; and the mediators want to take actual practices into account. But crucially, each person is not in only one role, but two or more. Participants are hybrid actors who face inner contradictions due to their multiple attachments.

From the presentation of Robin de Mourat, Developing from the field: Shifting design processes and roles between makers and practitioners around research tools development within an interdisciplinary research lab.

Demos, dev stories and user stories

A few words about the tech-oriented content of the devroom. Many speakers present an open-source project, either as the main focus of their talk, or as a ground for critical reflection. In these tool-oriented presentations one finds demonstrations as well as stories from developers, and sometimes from users, which is always appreciated.

20 open-source tools were demonstrated: Advene, a tool to annotate videos in digital humanities; Artoo.js, a client-side scraping companion; DataLad, a distributed data management system; Datasette, a multi-tool for exploring and publishing data; Datashare, the tool used to deal with the ICIJ “leaks” data; eLabFTW, a digital solution for electronic lab notebooks; Frictionless Data, and notably the Data Package Creator; Gazouilloire, a command-line tool for long-term tweet collection; HyBro, a web crawler for the social sciences; La Fabrique de la Loi, a datafication of the French law-making process; Minet, a web-mining CLI tool and library for Python; NeuroFedora, a Linux distribution dedicated to neuroscience; OpenRefine, a reproducible data wrangler; Org-mode, a set of functionalities that live inside GNU Emacs; Pandoræ, a data exploration and analysis tool; RawGraphs, an open-source visualization tool and framework; RECITAL, a digital humanities project on Italian comedy; Shrivelling World, a tool to represent geographical time-spaces; Stylo, a text editor for humanities scholars; and the Software Heritage platform.

11 speakers offered their testimony about their experience as open-source project contributors: Aniket Pradhan with NeuroFedora; Benjamin Ooghe-Tabanou with Hyphe and HyBro; Bernhard Rieder, Erik Borra and Stijn Peeters with their tools for social media research TCAT, 4CAT, and Netvizz; Giorgio Uboldi with RawGraphs; Jan Grewe with Relacs, a tool dedicated to electrophysiological recordings; Karthik Ram with the Journal of Open Source Software; Michael Hanke with DataLad; Nicolas CARPi with eLabFTW; Robin de Mourat with too many experiments and tools for me to list here; Sébastien Rochette, who narrated his experience in a hackathon; and Travis Rich with PubPub, the open-source publishing platform.

Finally, 3 users shared their experience with open-source projects: Albert Yumol with OpenStreetMap and the Humanitarian Data Exchange (HDX); Maya Anderson-González, who presented a micro-project of visualizing and analyzing a Twitter network about FOSDEM 2020; and Niels Cautaerts, who presented his experience with eLabFTW.

Thank you to the co-organizers of the Open Research Tools and Technologies devroom, with whom I was super happy to collaborate: Diégo Antolinos-Basso, Paul Girard, Célya Gruson-Daniel, Achilleas Koutsou, Michael Sonntag, and Lilly Winfree. 😊

The public of the open research tools & technologies devroom, FOSDEM 2020.
(CC-BY-SA Mathieu Jacomy)


