
Off-label use of algorithmic techniques in SSH: are we the baddies?


The situation at stake is the following: we, social science and humanities (SSH) scholars, use a method from another field, but we do not use it the way it was designed to be used. For instance, we run topic modeling, but only as a shortcut for what would otherwise be a manual categorization of documents. More generally, we use predictive models, but we do not predict, and we do not believe that the assumptions baked into the model are appropriate. For example, we use community detection, but we do not treat the groups obtained as if they were actual communities. We disregard some of the features the algorithm provides, while we leverage some of its side effects. Is this good science? Or are we the baddies?

Martin Grandjean was visiting us at the Tantlab for a few months in 2021, and before he left, Anders Munk and I had a long discussion with him to prepare the writing of a paper on our shared interests: network analysis, epistemic cultures, and knowledge technologies. This long-overdue post basically consists of my notes, to pin down some elements of our discussion. It focuses on networks, and on the practice of repurposing algorithms in digital methods.

We routinely see people interpret network maps in a self-evident mode, that is, as if they had no epistemic commitments; as if looking at the picture were sufficient to understand the network structure. But of course, certain competences are required to understand what is going on in the picture. See an example below. These self-evident interpretations raise at least three kinds of questions.

First kind of question: Is the network structure visible or not? This leads to questioning what the network structure is, and what makes it visible. I have a lot to say about that, but I still see it as an open question.

Second kind of question: Why is the practice of self-evidence commonplace? There are obvious answers, for instance: some may believe that the network structure is directly visible, that there is no mediation. It may seem obvious that beginners get tricked into self-evidence because they lack training and/or are careless. But the obvious answers are not always right, especially when it comes to cultures. Let us refrain from doing armchair anthropology. What do we really know of the beliefs of these people? We can actually look into these practices and investigate their purpose. What do they provide to those who enact them?

Third kind of question: Is this practice bad? And if so, in what sense? The answer depends on the answers to the previous questions. The simplest hypothesis is that the network structure is not visible, yet people are tricked into believing that they see it. In that case, the picture does not properly refer to the network structure, the argument is invalid, and it is bad science. The circulating reference is broken; the signifier no longer refers to the signified. The simplest possible answer to this question is that network maps are misleading.

The problem, again, comes from the fact that tools such as Gephi have made network analysis accessible to broad audiences that happily produce network diagrams without having acquired robust understanding of the concepts and techniques the software mobilizes. This more often than not leads to a lack of awareness of the layers of mediation network analysis implies and thus to limited or essentialist readings of the produced outputs that miss its artificial, analytical character. A network visualization is closer to a correlation coefficient than to a geographical map and needs to be treated accordingly.

Rieder and Röhle (2017)

Community detection

The same question can be asked about community detection. This technique works a bit like layout algorithms, insofar as it translates the topological structure; but instead of providing node coordinates, it provides groups of nodes (the “communities”). There are different ways to build the groups, depending on what one means by “group”; hence there are different techniques for community detection. I will present two, but let us first consider what people do with those groups.

If your network is big enough, there are too many nodes and edges to analyze them individually. Having groups is an invaluable convenience, as it offers a reduced set of things to talk about. Node groups reduce the network to something we can analyze more efficiently. Of course, we now have to deal with where those groups come from. We have to justify them. But on the other hand, we can now assess those groups qualitatively and quantitatively. We can measure their properties. In that sense, the groups are a coding of the network: a reduction that we can assess and use for analysis.

There are as many ways of assessing the coding (the groups) as there are research designs. You could, for instance, measure intercoder reliability, a well-codified technique of qualitative analysis that can be calculated in different ways. You could benchmark the groups against ground truth(s), if you have such empirical information. You could also measure the properties of your groups. For instance, if you expect them to be assortative (more connected within each group than across groups), you could compute the modularity of your groups and compare it to other ways of making groups. The relevant validity criteria depend on the role of the groups in your analysis.
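To make this concrete, here is a minimal sketch of two such assessments, using networkx (a recent version) and scikit-learn on stand-in data of my choosing, the karate club graph, which ships with a known ground-truth split. None of this is from the original discussion; it only illustrates the kinds of checks mentioned above.

```python
import networkx as nx
from sklearn.metrics import adjusted_rand_score

G = nx.karate_club_graph()  # stand-in data with a known ground truth

# A coding of the network: a partition of the nodes into groups.
detected = nx.community.louvain_communities(G, seed=42)
labels = {n: i for i, grp in enumerate(detected) for n in grp}

# Benchmark against ground truth (here, the known club split),
# the way one would compare two coders.
truth = [G.nodes[n]["club"] for n in G]
coded = [labels[n] for n in G]
print("agreement (ARI):", adjusted_rand_score(truth, coded))

# Assess an expected property: assortativity, via modularity.
print("modularity:", nx.community.modularity(G, detected))
```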

Let me sketch three examples. In the first, the groups are not used in the analysis; in the second, they play a minor role; in the third, they play a major role in the analysis.

First case, you just want to be able to refer to some parts of a network map in the text. The example below, made by techno-anthropology students (Anders and I teach them controversy mapping), maps a discussion about rewilding (in short, the reintroduction of wild animals). Nodes are expressions connected by co-occurrence in Facebook posts, in Danish. The discussion has been analyzed qualitatively, but the map helps to communicate the analysis. The colors come from community detection. In this case, the blue cluster is about emotions (the students named it “pathos”) while the orange cluster contains the expected expressions related to rewilding. Having colored groups makes it possible to guide the reader’s attention to certain parts of the map, for instance: the blue group connects to the orange group through nodes like “dyr bag hegn”, which means “animals behind fences”. The students know, from their reading of the empirical material, that the mention of animals behind fences happens to mobilize strong emotions in Facebook posts. They have many other ways than this network to make that case, and they proceed to do so. Yet the map helps to visualize where emotions (blue group) connect to rewilding (orange group), and to check which other concerns may also play that role. The map was exploratory, and sharing it allows the reader to retrace that exploration, from the visualization to the empirical material.

Statement: “The blue group connects to the orange group through nodes like ‘dyr bag hegn’: animals behind fences. Our qualitative analysis shows that this concern often generates emotional discussions.” The groups, represented as colors, are only used to help the reader navigate the map. They are not part of the qualitative analysis. This example is borrowed from the work of students of Aalborg University who mapped a controversy about rewilding.

Second case, you have a ground truth that you need to simplify. The example below represents airports and the airlines connecting them, in 2021. We know the country of each airport, but there are too many countries. If we use the country as the group and assign it a color, we obtain the image on the left. If we use community detection instead, we obtain the image on the right. There are far fewer colors. The big red group happens to be Europe: although it has many countries, it appears as a single group because those countries are highly interconnected. The “communities” found are not exactly groups of countries, but the result works well enough to be used as a basis for the analysis, for instance by measuring which of these macro-groups are better connected.

Network of airports connected by airlines in 2021. On the left, colors represent countries. On the right, colors represent groups of nodes found by community detection. The red cluster is Europe: although it contains many countries, it has the structure of a unified ensemble.

Third and last case, the groups are a primary goal of the analysis. Check for instance this recent paper (John et al., 2021): their goal is to identify groups of people from their mobility patterns, to profile them in further analyses. Community detection is a key step of the research design, an obligatory passage point of the method. In that case, obviously, the methodological commitments of the community detection technique employed contribute to determining the meaning of the groups further analyzed. The communities are literally modeled, following a number of assumptions.

Do you see communities?

Tiago Peixoto, who made decisive contributions to the science of community detection, happened to visit us while Martin was there. Tiago showed us a case that he later defended on his blog and on Twitter. I have written about it in a previous post. His post contains a provocative picture that I struggled to understand. I will unpack this case because it allows pinpointing the gap between the standpoint of algorithm designers (like Tiago) and that of the scholars who use their algorithms (in this case, Martin and me).

Tiago compares two different approaches to community detection, and I need to explain that real quick, in my own words. The first technique is called “modularity clustering”. It is the older of the two, and popular, notably among Gephi users. In short, it tries to find the groups that optimize a certain metric, modularity. Finding the absolute optimum is too costly, but we can get close thanks to a few approximations. A high modularity means that most of the links are within groups, not across them. The second technique, developed by Tiago, uses Bayesian inference. It gives you the groups that are the most likely to fit a model.
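For reference, the metric has a standard closed form. Writing m for the total number of edges, e_c for the number of edges inside group c, and d_c for the total degree of the nodes in c, modularity is Q = Σ_c [ e_c/m − (d_c/2m)² ]: the share of edges inside each group, minus the share one would expect if edges were wired at random while preserving degrees. Modularity clustering searches for the partition that (approximately) maximizes Q.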

Do these two approaches sound very different to you? If not, bear with me. The difference will appear shortly. Tiago proposes the image below. He asks: do you think there are communities, or not? Look at the network in the image, and remember your answer.

Tiago’s argument goes as follows (in my own words). At first glance, it looks like there are communities in this network. And indeed, if you run modularity clustering, it finds communities (check the colors on the left). However, the communities are not real. Indeed, Tiago generated this network from a model that has no notion of community whatsoever. Instead, it just requires that 13 nodes have 20 neighbors, and 230 nodes have 1 neighbor. So the nodes in each detected group have nothing more in common with each other than with the rest of the nodes; all nodes are by definition on an equal footing, despite the particular configuration generated in this specific instance (see image below).
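For the curious, here is a minimal sketch of that setup (my reconstruction, not Tiago’s code), using networkx’s configuration model, which wires edges from a degree sequence with no notion of groups:

```python
import networkx as nx

# A degree sequence as described above: 13 nodes of degree 20 and
# 230 nodes of degree 1, and no groups anywhere in the generator.
degrees = [20] * 13 + [1] * 230
G = nx.configuration_model(degrees, seed=7)
G = nx.Graph(G)  # collapse parallel edges for the clustering step
G.remove_edges_from(nx.selfloop_edges(G))

# Modularity clustering nevertheless finds "communities".
parts = nx.community.louvain_communities(G, seed=7)
print(len(parts), "groups, modularity =",
      nx.community.modularity(G, parts))
```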

My initial reaction was: I do see the communities, so how can you argue that they are not real? Maybe those communities are only specific to the network generated that specific time, OK. But that only means that the generator gives you different communities every time; it does not mean that those communities are not real. But Tiago did not agree with that.

A different phrasing helps find common ground and situate the disagreement. By “I do see the communities”, I mean that if I met such an empirical case, I would describe it as having a community structure. It may mean, for instance, that I could cut just 14 edges and separate the network into 13 pieces of roughly equal size. This criterion boils down to having a good modularity score. Tiago calls that a description, fair enough. Modularity clustering is descriptive. It applies to a situation where the network is empirical; obtained from the field, as we say.

In comparison, Tiago’s situation is not empirical. His network is generated from a model. By “there are no communities”, he means that the likelihood that the found groups play a role in the connections is low. Which is a given, since the model has no groups to begin with. Still, the argument holds; but let me explain it in a different way. The model is like the rules of a game. Let me give you a simple example. We are given 2 groups of 5 nodes, and the game is to decide which nodes are connected. The rules tell us how the groups impact the chances of being connected. For each pair of nodes, we roll a die. If the nodes are in the same group, we connect them on a result of 2+; otherwise, only on a 6. We play the game, and we get edges that depend on the groups (and on the rules!). The Bayesian inference algorithm for community detection helps us play the game backwards. We have the edges, and we must guess the groups that generated them. But crucially, we must also know the rules (the model). Given the edges and the rules, it gives us the groups that are the most likely. In fact, we could even propose groups ourselves, and it would tell us how likely it is that those groups were the ones used for the game. By “there are no communities”, Tiago means that the groups obtained are no more likely than any other distribution. He also argues that the model does more than describe: it explains, although “explains” has a narrow definition in this context.
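Since the game is fully specified, it is easy to simulate. Here is a minimal sketch of playing it forward (the node counts and dice thresholds come from the example above; everything else is my choice):

```python
import itertools
import random

random.seed(0)

# The "game": 2 groups of 5 nodes. For each pair of nodes we roll
# a die; same-group pairs connect on 2+ (probability 5/6), and
# cross-group pairs only on a 6 (probability 1/6).
group = {node: node // 5 for node in range(10)}  # nodes 0-4 vs 5-9

edges = []
for u, v in itertools.combinations(range(10), 2):
    threshold = 2 if group[u] == group[v] else 6
    if random.randint(1, 6) >= threshold:
        edges.append((u, v))

# Playing the game backwards is the inference problem: given only
# `edges` and the rules, guess the `group` assignment that most
# likely produced them.
print(edges)
```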

I find the description/explanation dichotomy too self-serving, for two reasons. First, it suggests that descriptive techniques cannot explain, yet I contend that they may contribute to explaining by feeding into other methods. Modeling is far from the only means of providing an explanation. Second, when you get an empirical network, it never actually fits a model. The processes that exist in the world are never as simple as the rules of a game. “All models are wrong, but some are useful”, as one says in statistics. If the explanatory powers of modeling cannot exceed the justifications of the models, and if those are weak, then models only explain in theory, not in practice… yet modeling is useful; there is no doubt about it. The question is: what do researchers actually do with modeling techniques?

I helped Tiago implement a version of his Bayesian inference algorithm in Gephi. This version is based on the simple assumptions that each node belongs to exactly one group, and that nodes are more connected within a group than from one group to another. These assumptions are reasonable, but one cannot take them too seriously. The model is obviously unrealistic: no person belongs to a single social circle, no word to a single topic, etc. Yet it is useful, because most of the time we want each node to have exactly one group; possibly for pragmatic reasons, like the necessity to visualize groups as colors, or because we use the groups as a reduction, a simplification. Those are good reasons. We want a one-group model not because we believe that it is how the network was generated, but because our research design demands it. In that situation, the usefulness of Bayesian inference is not about its explanatory powers. We cannot take for granted that the usefulness of an algorithm depends on the usage prescribed by its designers.
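To make that pragmatic use concrete, here is a hedged sketch with graph-tool, Tiago’s library (not the Gephi port itself, and its default block model is more general than the assortative one-group-per-node variant just described; the data set is my choice):

```python
import graph_tool.all as gt

g = gt.collection.data["polbooks"]  # stand-in data

# Infer a flat (non-nested) partition: exactly one group per node,
# which is the reduction our research design demands.
state = gt.minimize_blockmodel_dl(g)
blocks = state.get_blocks()  # vertex -> group index

# Use the partition pragmatically, e.g. as one color per node.
gt.graph_draw(g, vertex_fill_color=blocks, output="groups.png")
```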

Misuse versus off-label use

Algorithms can be repurposed; they should be reappropriated; yet they can still be misused. The problem is to differentiate between those situations. Who gets to tell the misuse from the reappropriation? I am reluctant to be normative on that matter, and this post is long enough. So I will now explore a different direction, and engage with a topic that is faithful to the discussion we had with Martin and Anders: off-label use.

Off-label use is an expression you will primarily encounter in medicine, about drugs. It mainly refers to the widespread practice of using a drug in an unapproved way. Let us extend this notion to every technology, and refer to any unapproved use. The notion simply assumes some degree of normativity, and a practice breaking that norm.

Off-label use takes on a different meaning depending on where the normativity comes from. The pharmaceutical version of off-label use is often about what the health authorities have deemed legal or not: the norm is set by the state. But in other contexts, the norm might be set by the manufacturer, by cultures, by society at large… and those are not mutually exclusive. In the case of algorithms, we should at least consider what is prescribed in the paper(s) defining them, and the culture of the field.

Let me explore a few examples of off-label use. My goal is to provide analogies as food for thought, but also to show that off-label use is more common than it may seem. I want to make it clear that off-label use is legitimate insofar as it consists of using something for what it is rather than what it is supposed to be.

Nitrous oxide. “Commonly known as ‘laughing gas’, this odourless substance is used in medicine, as an anaesthetic, and in catering to make whipped cream. It is the whipped-cream chargers that people buy for recreational use. The gas is usually inhaled by discharging a canister containing small amounts of the gas into a balloon.” (The Conversation) Anecdote: I knew those canisters only as a cooking technique, and wondered for a long time why people seemed to throw them away in the streets.

Canisters of nitrous oxide.

Sildenafil, better known as Viagra, “was initially studied for use in hypertension … and angina pectoris … Phase I clinical trials … suggested the drug had little effect on angina, but it could induce marked penile erections. Pfizer therefore decided to market it for erectile dysfunction, rather than for angina; this decision became an often-cited example of drug repositioning.” (Wikipedia). The off-label use became the intended use.

Ikea’s BEKVÄM spice rack is simple and inexpensive, as any spice rack should probably be. But it can hold more than spices. People started buying it as a bookshelf for kids. Then it was discovered that, hung upside-down, it also allows you to hang a number of things, like jewelry or a towel. These uses became so popular that Ikea now also showcases the spice rack as a bookshelf.

Jimi Hendrix was left-handed, but used to play a right-handed guitar held as if it were a left-handed one. As a result, the affordances of the instrument are upside-down: the buttons and the switch sit under your arm instead of at the tip of your fingers, the tuning keys are far from you instead of close… an ergonomic nightmare. Now, who would dare state that Jimi Hendrix was not playing the guitar appropriately? The history of musical instruments is full of off-label uses that became mainstream because they defined the sound of popular artists. Most guitar pedal effects, in fact, started as repurposed artifacts (distortion, vibrato, delay…).

In conclusion

I don’t think we are the baddies when we repurpose algorithmic techniques borrowed from other fields to do social sciences and humanities. We have different goals, different methodological commitments, and we have the right to reclaim those techniques for ourselves. This is not inherently bad science.

De facto, we are doing it. I want to frame it as off-label use. We use these techniques for what they are rather than what they are supposed to be. We disagree with the norm because our situation is different. For instance, in the case of community detection, we do not model; yet we may use modeling. We use it as a way to produce a reduction, which it functionally provides. We are not misunderstanding the algorithm: it actually performs what we expect, even though this is not what its designers intended.

That being said, normativity also protects against misuse. Although I reclaim the right to use techniques off-label, I also acknowledge that it requires doubling down on assessing the algorithm to ensure that it actually does what we think it does. Off-label use comes with increased risks. It is not inherently bad science, but it exposes us nevertheless. Let’s not become the baddies.

References

Rieder, B. & Röhle, T. (2017). Digital Methods: From Challenges to Bildung. In M. T. Schäfer & K. van Es (Eds.), The Datafied Society: Studying Culture Through Data (pp. 109–124). Amsterdam: Amsterdam University Press.

John, E., Cauthen, K., Brown, N., & Nozick, L. (2021). Detecting Communities and Attributing Purpose to Human Mobility Data. In 2021 Winter Simulation Conference (WSC) (pp. 1–12). doi: 10.1109/WSC52266.2021.9715396


