The register effect: lists, regimes of absence, and the design of discreteness

This is a follow-up to this blog post where I call bullshit on the claim that computers are radically incapable of certain things because they are discrete, while real life is continuous.

The circulation of such claims matters because it shifts accountability. Indeed, we use computations for analyzing social life, computations relying on discrete data. But social life is continuous, right? So we seem doomed to misrepresent real life with computers, to miss something key. Whose fault is it? If computers just cannot deal with continuity, it is our fault: let’s preserve analog sociology. But if computers could deal with continuity, if alternatives existed, then the responsibility would lie with whoever steered us towards discreteness. Did someone decide, at some point, that computers would renounce the continuous? Of course no single being ever had such power, because our digital environment is produced by a myriad of actors with heterogeneous practices and opposing agendas. But at the same time, computers seem sentenced to discreteness.

General claims about computers shift responsibility away from the architects of the digital infrastructure. Statements uniting the Digital archipelago into a single territory, as if its inhabitants were bound by a shared destiny, refuse to see the won and lost battles that decided its geography. These narratives sever the chain of events that produced our digital environment as it is, rather than as it might have been, echoing the voice of winners (eg. Google and Facebook) and muffling the dissonance of losers (eg. free open source software). The latter keep resisting, though – count me in.

But there is more to it. Listened to attentively, the hymn of computational modernity has its own dissonance. Algorithms are too dumb to recognize a black face but clever enough to power mass surveillance in China. We normalized the remarkable achievements of weather forecasting. We normalized the stupidity of human-replacing algorithms, such as the “computerized system” supposed to determine if your emergency call is an actual emergency. Computers are both threateningly efficient and frustratingly inefficient. Of course, those are different computers. The systems thrown at our faces in the name of cost-reduction have a clear motive for dumbness. But military and industrial systems are a different thing entirely. On that side of the balance of power, the supposed inaptitude for continuity sounds more like a wish than evidence.

Lists

The first brick of the Google empire was just a search engine. Name a thing, and you get a list of results curated from the entire web. The year is 1998: this engine’s speed is mind-blowing, its quality competitor-crushing, and it works like magic. There is a whole story to tell, but we will leave it aside and question the premises of the problem. Why a list?

The search engine problem is to order the list of results by relevance, in a sufficiently short time to satisfy a human user: seconds, milliseconds if you can. Sergey Brin and Larry Page solved it by just using the hyperlinks to sort the list (the famous PageRank), blaspheming against the semantic web by stating that relevance has nothing to do with the content but just with the links, making many enemies in the process, and crushing them under the weight of success. The problem preexisted, they did not invent it, but they killed it by challenging its own terms – relevance. But they never challenged the fact that it would be a list. So once again: why a list? Where does it come from?
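To make the point concrete, here is a toy sketch of the idea in Python – ranking by links alone, in the spirit of PageRank. It is an illustration of the principle, not Google’s actual algorithm; the little graph is made up.

```python
# Toy graph of pages and their outgoing hyperlinks (entirely made up).
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def toy_pagerank(links, damping=0.85, iterations=50):
    """Rank pages by iterating over the link structure only.
    The content of the pages never enters the computation."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            for target in targets:
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

ranks = toy_pagerank(links)
results = sorted(ranks, key=ranks.get, reverse=True)  # the list of results
```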

Lists are as old as writing, and in some ways older: the human species started listing stuff before writing sentences. But the Mesopotamian list differs radically from the Google list: the former is inscribed on a continuous substrate while the latter is not. On this literal matter, the discrete and the continuous differ – but in subtle ways, changing seemingly minor details about what we can or cannot do. A mundane example will help us visualize: the shopping list.

You write your shopping list on paper: apples, oranges, milk, cheese, shampoo. Later on, you realize you also need butter. What do you do, just append it to the end? No, because you do not want to go to the dairy products aisle to pick up milk and cheese, then to the hygiene aisle, and then back for butter. The order matters in multiple and subtle ways. Firstly, there are multiple reasons why you grouped the items. It is more convenient once in the shop. But writing is also thinking: adding an item makes you think of other items to add, and this groups them. Secondly, there are multiple reasons why this order matters. It is a convenience, in the sense that grouping items helps you pick multiple things at once. But it is more: a program. Because you trust the list to help you. You expect items to be grouped, so if you break the order by putting the butter at the end of the list, you will surely get the unnecessary trip. So you plan ahead. You know the list is a program, and you write it accordingly. You insert the butter between the milk and the cheese. That way, even if you are tired when you shop, you will not waste your time traveling between yogurts and shower gels.
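As an aside, and to anticipate a bit: on a digital list, the same insertion is perfectly possible, but it is an index computation rather than a gesture on a surface. A trivial sketch in Python, with the items chosen only for the example:

```python
shopping = ["apples", "oranges", "milk", "cheese", "shampoo"]

# "Between the milk and the cheese" becomes an index to compute:
shopping.insert(shopping.index("cheese"), "butter")
print(shopping)
# ['apples', 'oranges', 'milk', 'butter', 'cheese', 'shampoo']
```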

The insertion of an item between two others is a graphical operation made possible by the properties of the paper sheet – or the clay tablet. The sheet has its own planar space, distinct from the intrinsic space of the list. You can find room in between precisely because the sheet is continuous. And even if there is no room in between, your options are as endless as doodling can be. These spatial relations are natural to our cognition, and we naturally excel at graphic manipulation.

The sheet also renders ordering possible, in the sense that we can for instance decide to read the inscriptions top-down. It also allows groupings by proximity. Different supports have different properties.

The list of Google results is also an inscription on a support. The support is a memory structure made of 0/1 bits, somewhere on the digital infrastructure. It is actually replicated in multiple places and under multiple forms, and many sophisticated layers are involved. It is nevertheless an inscription on a material substrate. And that support has properties, the most important of which is discreteness.

Wait, did I just prove the radical difference of the digital? Inscriptions are determined by the substrate, and the digital has a radically different substrate, QED. Oops. The argument is superficial but reasonable. Unfortunately, this is not where the journey ends. This is just a checkpoint. Still, I want to acknowledge here that discreteness as a radical difference of the digital is a sound hypothesis. I am arguing against it for other reasons: I contest the implications it has in practice. A discrete substrate does not prevent all forms of continuity.

Computers deal with graphic space in two ways: pixels and vectors. The Gimp versus Inkscape. Photoshop versus Illustrator. With pixels, discreteness seems quite irreducible. With vectors, not so much.

In the world of pixels, space is quite directly memory bits. You cannot zoom infinitely. At some point, you see the pixels, and they are the limit. In that use of the computer, it seems that indeed the discreteness of the substrate translates to graphic discreteness. In part to overcome this limit, we also have the alternative of vector graphics.

In the world of vector graphics, there are no pixels: objects are created from the coordinates of points and curves. This produces a strong impression of continuity. The limit is not the size of the space, but the number of objects you can represent. The potential continuity of the graphic space is, in practice, well respected. The discreteness of the digital substrate translates here to the discreteness of represented objects. It is a very different trade-off. For all practical purposes, it allows the same kind of graphic manipulations as the paper sheet.
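The contrast can be sketched in a few lines of illustrative Python – a deliberately crude picture of the two worlds, not how any graphics library actually works internally:

```python
# Raster world: space itself is discrete. Zoom far enough and you hit the grid.
raster = [[0] * 800 for _ in range(600)]      # an 800x600 grid of pixel values

# Vector world: space is practically continuous, objects are what is discrete.
circle = {"cx": 12.347, "cy": 8.051, "r": 3.2}   # coordinates, not pixels

def scale(shape, factor):
    """Rescaling a vector shape never reveals a grid, only float precision."""
    return {key: value * factor for key, value in shape.items()}

scale(circle, 1000)   # still a perfectly crisp circle
```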

Vector graphics show how the flexibility of computers can be leveraged to subvert the limitations of discreteness. There are many ways to build our digital infrastructure, and not all of them are stupidly stuck in discreteness. For similar reasons, the list of Google results did not have to be a simple list. In fact, other designs have existed, such as the “visual” search engine Kartoo.

Kartoo results were displayed as a map. Like with vector graphics, they were discrete elements in a continuous space. The continuity of the underlying space suggested that some results might occupy intermediate positions, while in Google the space between elements does not mean anything. Kartoo did not feel as discrete as the Google list.

So at the end of the day, why are Google results a list? By design.

The answer is unspectacular, because we overlook its dissonance with the popular beliefs on computers. Indeed, there is a much stronger narrative out there: that Google is a big database of some sort, and that a list is what you get when you query that. In other words, because of how computers work. This narrative is uncritical and short-sighted. The Google search engine is on par with a weather forecast system in terms of scale and sophistication. It runs on a huge distributed infrastructure. It draws information from other sophisticated systems such as the Google Knowledge Graph or Google Maps. It does not work like a regular database. It does not have to look like one. Results are a list because it works better for Google that way, whatever that means.

We tend to think of computers with the pixel mindset, where continuous things are implemented by a straightforward strategy. Discreteness kicks in at a certain granularity level. In some ways this thought is comforting because the limitation is visible if you look close enough. The digital is a fake that we can unmask with a simple act: zooming in. Our everyday computer confirms this belief. But at the other end of the spectrum, we forget the invisible systems that we can only conceive of with the vector mindset. Strategies that implement continuous things in oblique ways, blackboxing discreteness in contingent constraints. In such systems, agency is near impossible to retrace, and we never find the corner from which to grasp and pull away the veil of the Matrix. For good reason: these systems are efficient at producing continuity in practice.

We are right to assume that there must be a validity domain outside of which the illusion of continuity breaks. The discreteness of computers requires it. But we are wrong to believe that we will necessarily face the walls of that domain. The vast capabilities of industrial devices allow them to blend into the background of our lives. They work well enough that we have stopped paying attention. Not every digital system can be thought of on the basis of our everyday computer.

What does it change that Google makes lists? I have a few ideas, but a more important question first: can we know the difference it makes? These systems have already blended into the background, and I would not presume our ability to discern them. I do not trust my own powers of observation. But the little I can see is all the more important, because I am probably missing more. It is not so impressive, though: the discrete and continuous lists institute different regimes of absence. They produce distinct kinds of holes, and thus distinct kinds of wholes. Absence and exhaustiveness are instituted differently.

In a nutshell, there are no holes in the Google list: it always seems exhaustive. You never have a clue that something is missing. Absence is invisible, which progressively leads you to believe that Google is exhaustive. Whereas in the world of planar representation, absence leaves a blank, a negative space, a hole with visible borders. Absence is visible, which alerts you. It jumps out at you. This argument will be more understandable with a few graphics.
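Before the graphics, the same contrast in a toy code sketch, which assumes nothing about how any real search engine stores its results:

```python
# A discrete list: removing an item renumbers everything, leaving no gap.
results = ["r1", "r2", "r3", "r4"]
results.remove("r3")
print(results)            # ['r1', 'r2', 'r4'] -- still looks complete

# A spatial layout: removing an item leaves an empty position, a visible hole.
grid = {(0, 0): "r1", (0, 1): "r2", (1, 0): "r3", (1, 1): "r4"}
del grid[(1, 0)]
# Position (1, 0) now shows the background: absence has a visible border.
```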

A visual journey through regimes of absence

In images, absence is generally a question of background and contours. Let us start with the notion of negative space. To put it simply, it is a space occupied by the background, an empty space, a hole. But because it is surrounded by nonempty space, because as a hole it has borders, you can see it. You can, but because it is background you also overlook it. The typical example is the FedEx logo.

You probably know it contains an arrow. If not, enjoy a little epiphany. Its designer, Lindon Leader, set up that aha moment intentionally. The arrow is hidden in plain sight: some people see it right away while others never realize. Our visual system works in such a way that the background color of the arrow prevents it from being processed as an object, as a shape – until we become aware of it. The negative space is on the edge between visible and invisible absence.

Removing something often leaves a hole – but not always, as we will see. The marks it leaves can be used to communicate absence. I recently stumbled upon a nice example in the famous French newspaper Le Canard Enchaîné, published on 2019-06-19. In a context where the New York Times decided to stop publishing cartoons, the Canard published an empty image as a way to raise awareness of the issue. The negative space conveys the message of absence, because the hole is visible. It leaves a scar in the page, as you can see below.

Conversely, if you owned a newspaper and wanted to censor an article, you would not leave a blank as a piece of evidence. You would cover your tracks by putting something else instead. Or by having the embarrassing article not written in the first place. Censorship has a long history of avoiding traces of absence, predating the digital era. The Soviet Union did not wait for Photoshop to paint backgrounds over embarrassing people.

This series does not tell the story Stalin wanted to tell

In the example above, most of the work has been done by reframing the original picture, another efficient way to make absence invisible. Still, you can note how some shadows had to be repainted to avoid suspect negative spaces. This technique has become so commoditized that we now erase things for fun.


Keep in mind that despite what the result looks like, photoshopping out stuff is not erasing but painting. Extending the background over the shape you want to remove, but also fixing surrounding shadows, reflections, etc. Doing it well requires an effort, and digital artists frequently make mistakes, like forgetting to remove a hand on a shoulder. Spotting these “Photoshop fails” in magazines has become an equally fun practice.

In photographs and video, manipulation is so accessible that we cannot reliably track absence except with specialized tools. We cannot really tell if something is missing with the naked eye.

Visual media allow us to make absence visible or invisible, depending on how we use them. Not everyone wants to hide absence. In the sciences and humanities, highlighting it is key to assessing authenticity. In archaeology, missing pieces are usually blanked in an explicit way, leaving stunning negative spaces that add to the story of the object.

Negative spaces can make you happy

On the question of absence, writing is better understood as a visual medium. Handwritten manuscripts display how important the practice of editing texts is: crossing out words, writing between lines and in the margins, using arrows and lines to reorder paragraphs. On a medium where you cannot erase, editing leaves marks. This has been essential to the study of literature, and to teaching: remember how your teachers provided feedback by correcting your work, and the humble but powerful technology supporting it. Visible absence is not a problem, it is a feature.

Victor Hugo’s Les Misérables manuscript, Bibliothèque nationale de France

Erasure technologies show how vast the spectrum from visible absence to invisible absence is. Crossing out a word keeps it readable. Blacking it out makes it unreadable, but its absence is still visible – redacted text has become iconic of censorship. Whiting it out with correction fluid leaves a mark of absence, but if you write over it, there is no hole: absence becomes less visible. Erasing pencil marks, if done well, leaves almost no trace of editing – invisible absence. This spectrum also tells us that technologies that allow hiding absence often allow showing it as well. There is a choice; so let us pay attention to who makes that choice.

CIA Declassified Mind Control Document

An important remark on censorship in popular culture: representing censorship requires making absence visible. For example, the blacked-out text above is the sign of censorship; the black blocks embody censorship. Only a visible mark of absence can convey the idea of censorship, the idea that something was there and has been removed. This leads us to misconceive it. Indeed the most dangerous form of censorship is the one where absence is invisible. By definition, invisible absence cannot represent censorship – it cannot represent anything, because it is invisible. As a consequence, the popular conception of censorship is dangerously naive; it assumes censorship always leaves marks, which underestimates and misdirects the efforts necessary to oppose it.

Digital inscriptions can be rewritten – except in a few edge cases. So digital media can invisibilize absence, which does not mean they always do. Visible absence is so useful to human practices that many devices reproduce the behavior of writing on paper. For instance you can strike through text, a feature that only makes sense on a computer as an extension of analog practices. More generally, the revision systems of text editors like Open Office Writer, MS Word, and Google Docs have multiple ways of displaying absence to optimize the user’s reading and writing experience. Edit marks are useful for tracking revisions, but they get in the way of normal reading. So you can hide edits if you want. Once you are aware of the edits, hiding them is desirable and useful – because you can make them reappear. But of course, invisible absence becomes problematic if you have no clue that something is missing.

The discrete list’s regime of absence

You query Google for “Tiananmen square protest”, in China, in 2010. You get nice tourism pictures (see below on the left). The event itself is missing (see the same query from the UK, on the right). You can certainly tell, but not everyone in the world can. That is why it matters that in this list, the missing knowledge leaves no hole, no mark, no scar. Results look whole, complete.

Tiananmen query in Google, from a Reddit thread in 2010.
On the left, results in China. On the right, in the UK.

I must now be fair to Google, which is neither the only search engine nor the first. Others (Bing, Ask, Duck Duck Go…) work the same. Since the beginning of this text I have referred to Google because everyone knows how it works, not because Google is the worst. And to be fair, in this case Google mentioned that some results were missing. But precisely because the missing results left no marks, they had to make their absence visible another way. Here, a textual mention that some results were removed. Not as efficient, but better than nothing. Note that Google stopped displaying this mention in China in 2012, this time invisibilizing absence completely – actual censorship takes care to cover its tracks. On this topic Wikipedia is a good start, and there is a rich discussion on the web and in academia.

You might expect ethical search engines like Duck Duck Go to oppose censorship (this one is of course blocked in China). But in fact, we cannot tell by just looking at the results, because absence is still invisible by design. We have to rely on trust, which is fine, but they might still have their own unintentional biases, and those are invisible. Duck Duck Go sets up no different regime of absence.

Years ago I had the occasion to discuss with some of the architects of a search engine how its design was decided. They had different metrics related to user satisfaction (query speed etc.) and conducted their own market research. They had for instance observed that for many simple queries (eg. “gravity”), most users expected a definition or general knowledge on that topic. Wikipedia provides that kind of answer, and people like it. So Wikipedia was often artificially pushed to the top, which measurably improved user satisfaction. But exhaustiveness was not part of their goals; surprisingly, quite the contrary. I asked explicitly. If one result makes many users happy (like Wikipedia), then similar results will not make the same users happier. Being exhaustive is not worth it. It is more profitable to satisfy other users, for instance those who had the movie “Gravity” in mind. The top of the list is a very limited space and there, search engines favor variety over exhaustiveness, since it satisfies more users – make up your own mind! This strategy requires removing some results to make room for others. It actively curates the order of results – this is no surprise. But it also actively blackboxes those manipulations as a design choice. Discreteness and the regime of absence it enacts make this design choice possible.
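To make the trade-off tangible, here is a deliberately naive sketch of what “variety over exhaustiveness” can look like as a reranking step. The scoring is invented for the illustration; it claims nothing about how that search engine, or any other, actually works.

```python
def rerank_for_variety(candidates, top_k=10, redundancy_penalty=0.5):
    """Greedy reranking: each pick is scored by its relevance minus a penalty
    for resembling results already shown. Relevant-but-similar results get
    pushed out of the visible top -- and nothing marks their absence."""
    shown = []
    remaining = list(candidates)        # items are (url, relevance, topic)
    while remaining and len(shown) < top_k:
        def score(item):
            url, relevance, topic = item
            already_covered = sum(1 for _, _, t in shown if t == topic)
            return relevance - redundancy_penalty * already_covered
        best = max(remaining, key=score)
        remaining.remove(best)
        shown.append(best)
    return shown   # the discarded tail leaves no hole in the list
```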

I am not saying here that Google and other search engines use the discrete list and its regime of absence in order to hide censorship. Just that it performs something invisible, which is both dangerous and efficient: it works like a charm precisely because what it performs is invisible. Indeed the complicated functioning of a search engine is fruitfully blackboxed into something reminiscent of the mundane shopping list – a browsing list. The trick has proven its relevance. This design is accessible, simple and clear. It has these qualities because it is blackboxed. Those are excellent reasons to use the discrete list – as I argued in another post, blackboxing is a resource for design. So good reasons exist, rationales supporting a public good agenda (eg. access to knowledge). But we cannot deny that Google et al. have their own twists on the notion of public good. Hiding the absence of removed results might not primarily be a support to censorship, but its benefits come in various shades of morality. Being invisibly absent from search engines can deny a fragment of knowledge its very existence.

At this point you might ask: “but absent with respect to what?” Indeed, absence is relative. It implies comparing two states, a reference state where something could potentially be present, and a state where it is absent. In the Tiananmen example we compared two versions of Google, which conveniently provides a reference state – we implicitly assumed that Google should work in China like in the UK. But most of the time, there is no reference state. When Google works like it is supposed to, what is the comparison point? Which reference could allow us to assess what is missing? This is connected to the question of algorithm bias: biased compared to what? Even implemented straight from the paper, algorithms can be subject to bias because they reproduce undesirable discriminations present in our society, in our data. Bias has to be addressed from outside. The reference point cannot be produced by the algorithm and must be curated externally. What could it be for search engines? On Tiananmen, we can think of the historical consensus about the massacre, for instance. It seems more grounded than how Google works in the UK. But even that notion is situated, hard to implement, and it covers only a tiny bit of knowledge. This thought exercise gives us a glimpse of the difficulty: mankind has no reference point for something as vast and as generic as the web. We have encyclopedias, Wikipedia, but those are not valid reference points for the web because they contain a different kind of information, a different distribution of knowledge. We have no good reference to assess how a search engine should work, which makes it difficult to deal with the question of bias and censorship in practice.

As a society, our incapacity to evaluate a search engine, to make it accountable for censorship or disinformation, shows how much power we have delegated to the digital infrastructure. It shows how far we are from framing the political dimension of computational systems in a democratic setting. The question of absence is the same as the question of accountability. Without a background against which to assess absence, we cannot enforce accountability. Without a reference opposable to algorithms, we cannot evaluate them. As long as our strategy to unveil disinformation is to ask Mark Zuckerberg if Facebook knowingly implemented evil algorithms, we implicitly admit powerlessness. This strategy boils down to asking the architects of our digital environment to be nice to us, please. Engaging with this question makes us realize that in this game, we have almost no cards in hand. All the cards are in the opponent’s hand. It feels like we arrived late in a game against an opponent who did not wait for us and started playing without resistance a few turns ago. Why did we miss the beginning of the game? Because we believed digital stuff was not political, at least not as much, not like that. We did not see where the trouble would come from. We were blind. And here I would refrain from paranoia and conspiracy thinking: I am inclined to believe that no one knew, that actors in a position to design our digital environment – architects – had no sinister agenda but the usual greed-related goals (user satisfaction, profitability…). I would not presume that the architects of the digital knowingly trapped us, the public, in this powerlessness. No one knew we were playing that game, and these actors pushed their agenda the way everyone does. I can believe that our blindness was shared. We all realized quite late the strange places where power had accumulated. Delegating to the digital infrastructure seemed innocuous in the absence of any particular danger. This is where I think regimes of absence, for instance, matter. Invisible absence is precisely the kind of pipe for power we tend to overlook. Once again, I do not trust my own powers of observation: I am not sure how important it is, or which other sociotechnical assemblages allowed power to circulate and reconfigure. My goal here is to help make these reconfigurations visible by contextualizing overlooked details of the digital.

I am now reaching the limit of my perspicacity. There is certainly more to search engine design, but I find myself without many landmarks to go further – and I am poorly educated in this academic literature. But we have traveled far enough for me to draft an argument, to show a direction. I want to wrap up this journey and articulate my two points more tightly: (1) computers are not bound to discreteness in practice and (2) different supports establish different regimes of absence. In this wider story, the search engine’s list is just one example among others.

The register effect

To situate discreteness I will draw on an analogy with chemistry. Diamond is much more similar to quartz than to coal, even though its constitutive element is, like coal, carbon. It turns out that constitutive elements matter less than how they are arranged. Diamond and quartz are alike because they are both crystals. Seemingly essential properties like hardness or color are not determined by the elements but by their arrangement. Similarly, even though the digital is discrete by nature, depending on how the code is arranged, the resulting system is rendered discrete or continuous to the human experience. I evoked a few examples but there is more to it. Think for instance of how much infrastructure we dedicate to dealing with technical failures: those are occurrences of the analog bleeding into the digital. Think of webcams, sensors, etc. Computers are not stuck in a bubble of discreteness, they constantly deal with the continuous. We cannot just assume that “digital is discreteness”. We must inquire why and how discreteness matters to the digital. Comparing discrete-looking and continuous-looking systems can help us clarify the situation.

A French nuclear facility censored in Google Maps (2019)

Google Maps makes a strong impression of continuity, while the Google search engine looks discrete. Both enact different regimes of absence. Removing a location in Google Maps leaves marks due to the nature of Earth’s surface (see above), and hiding or minimizing these marks requires an effort. Conversely, in a search engine, removing results is invisible. Without an underlying space like Earth’s surface, there is no background against which to check what is missing. In a discrete list, absence is hidden by default. This has consequences for individual users, for critical thinking about our digital environment, and ultimately for how such devices are framed in a democratic setting.

Search engines work like registers of the web. They seem to list its entire content. Of course they do not exactly, first of all because they have not crawled everything, but also because they have various reasons to promote or demote certain resources. In this they fulfill their function of offering an entry point to the vast space of the web. This role comes with responsibilities, and curating results is, to a certain extent, necessary to face them. Search engines have to distort and reduce web information. But this distortion and reduction is not visible. Unlike looking at the world through a piece of glass, nothing seems distorted, nothing seems missing. The list of results always looks complete. Because search engines are the principal entry point to the web, what they omit is rendered invisible. Users have no clue they might be missing something. Hence my choice of the word register. Search engines produce the impression that whatever is not in the list is simply not there. Like metaphorical screens on the surface of which the web is projected, they hide it by showing a reduced version, and the audience ends up taking the projection for what is projected, conflating the search engine and the web. This is what I call the register effect: the ability of certain lists to produce the impression that they register all there is.

Many actors have actively tried to become such a screen, and their main asset has always been technology – and more precisely, its design. Microsoft tried to become the entry point to the web by conflating the browser and the file system explorer. AOL tried to become an OS inside the OS and conflate its services with the rest of the web. Facebook tries to conflate its own space with web browsing in order to keep you inside its walls. On that topic, design choices, including a myriad of seemingly minor details, are much more revealing than “algorithms”.

This is where all the pieces come together. The discreteness of search engine results is not a question of technological essence, but of design. Being the entry point to the web is a critical issue. To that end, the register effect is a major asset. And its active principle is the regime of absence enacted by discreteness. Discreteness makes it possible to cover the tracks of manipulation (from legitimate tuning to outright censorship), which in turn preserves the impression that the search engine is a neutral interface. A technical commodity. A simple doorway to the web. As it blends into the background of our digital experience, we forget that this “simple doorway” has the potential to hide the existence of entire areas of knowledge. That is how we lose the ability to reflect on a piece of technology, when it fades into the very fabric of our experience, as a mediation.

Why I write in English / Pourquoi j’écris en anglais

[English version below]

Je suis français. J’écris en anglais, alors que les nombreux francophones qui parlent peu ou pas l’anglais ne peuvent pas accéder à ces connaissances. Ce faisant, il semble que j’alimente l’hégémonie académique anglo-saxonne. Suis-je vendu à MacDonald’s?

Je veux dire aux français qui pensent qu’écrire en anglais est une concession à la mondialisation et à l’influence américaine qu’ils se trompent. En bref, mon territoire académique naturel n’est pas la France mais l’Europe. La France en fait partie, mais pas seulement. Dans ce cadre, l’anglais est nécessaire.

Il y a quelques années j’ai participé à un colloque exclusivement en français, italien et allemand – pas d’anglais. C’était dur, mais c’était chouette. Mais cela ne suffit pas, car il n’y a pas que ces langages en Europe. Je pense à mes collègues scandinaves, hollandais, hongrois… Notre langue commune est l’anglais.

D’autres chercheurs français ayant travaillé hors de France vous le diront: l’académie française vit dans une bulle isolée du reste de l’Europe. Pas complètement déconnectée, mais plutôt dans cette demi-surdité qui entretient le déni mais enferme inexorablement. Français: il y a une fête en Europe et vous ne le savez pas. Vous finirez par croire que vous n’êtes pas invités, mais vous avez seulement jeté le carton d’invitation parce que vous n’arriviez pas à le lire.

La France est un petit pays, cela n’a rien de honteux. Collaborer avec les autres petits pays qui nous entourent ne nous affaiblit pas, et les Etats-Unis n’ont rien à voir là-dedans. Pour cela, l’anglais est nécessaire. Tant pis si au passage on y fait des dégâts ;-)

English version

I am French and I write in English. Many French-speaking colleagues are not comfortable with English, or just cannot read it. By writing in English, it seems I leave them aside to the benefit of the hegemonic influence of US academia. Or at least, that is how some of them feel. So, am I sold to McDonald’s?

I want to tell the French who think so, that they are wrong. In a nutshell, my territory is not France but Europe. Which of course includes France, but not only. That is where English is necessary.

A few years ago, I attended a seminar exclusively in French, Italian and German – no English allowed. It was hard but exciting. Such initiatives are great. But they are not enough, because Europe has many more languages than that. I am thinking of my Scandinavian, Dutch, Hungarian colleagues… The only language we have in common is English.

Other French researchers working abroad are well aware of it: French academia lives in a bubble. Not entirely separated from others, but afflicted with a half-deafness that keeps you in both denial and isolation. French people: there is a party out there and you don’t know it. You will end up believing that you were not invited. But you were, you just overlooked the invite because you could not read it.

France is a small country, and that is perfectly fine. Collaborating with other small countries around us does not make us weaker. This has nothing to do with the USA. We need English to work with other European academics, who are pretty much in the same situation. We might trash that language on the way but eh, c’est la vie ;-)

Is living experience radically non-digital?

I read once again the claim of a radical difference between computers and empirical reality, because the former is discrete (discontinuous) and the latter is not. This claim states that computations are digital and reality analog, and concludes that big data will never fully account for social life. This argument is bullshit.

First of all, let me acknowledge that there might be radical differences between computers and the empirical world, and that computations might never be able to grasp certain things. But I call bullshit on the argument that concludes this only from the discreteness of the digital.

To make it easier to understand, here is an example of the argument.

When one phenomenon either appears in the form of numbers or is converted in quantitative indicators, the continuity disappears since calculation requires discretisation, even though it looks blurred in fuzzy logic algorithms, for instance, or highly granular in Leibniz’s infinitesimal calculus. For that reason, classification is as critical in computer science as categorisation is in social science. It introduces discretisation into a living experience, that is, of course, continuous and analogous.

D. Boullier, in Médialab stories: How to align actor network theory and digital methods

I have two points against the argument. The first is quite short, the second even shorter.

I.

Firstly, arguments stating that something is impossible are suspiciously imprudent. What can or cannot non-human beings do? For centuries, our science (and our societies) have proven chronically overconfident on that matter. We believed that animals could not think, or at least not like us, that we had a radical difference like a soul or something. Until we found out that cats can dream, that crows can solve puzzles as well as five-year-olds, and that dolphins call themselves by unique names. Now it is the computer’s turn. Didn’t we just realize that computers can hallucinate? But surely, there must be other things they cannot do.

The argument is weak because it is generic. Maybe certain computers cannot do certain things for some reasons. This could constitute a good point if we knew which computers cannot do which things, and why – if the argument were specific. But in the classic “computers are discrete” argument, we do not know what exactly they cannot do. Because of course, as soon as you state something specific, a computer scientist pops up with a counterexample.

The argument is also weak because it relies on a belief. The existence of the human soul is a belief, and a reckless ground on which to deny animal intelligence. The radical difference between the continuous and the discontinuous is a belief. It is a belief until you can properly unpack it, which is harder than it sounds (spoiler: Leibniz and Turing are clichés of no help). Appealing unspecifically to the powers of mathematics is no different from appealing to gods: an argument from authority. Continuity is far from a triviality. It is a highly abstract concept that we cannot distinguish from discontinuity in our everyday life, one with a complicated history and multiple, problematic mathematical definitions.

II.

Our empirical reality is not continuous. Or at least, we don’t know for sure. You might think that the laws of physics have to be continuous, that the physical equations could not work on the basis of a discrete space-time. You would be wrong. We already know it’s a valid possibility. Knowing if space-time is discrete or continuous is an empirical question, and it is yet unsettled.

It is usually assumed that space-time is a continuum. This assumption is not required by Lorentz invariance.

H. Snyder in Quantized Space-Time, 1946

The point of this presentation is not to convince readers that space-time really is discrete but rather to convince them that we do not yet know whether or not it is.

P. Forrest in Is Space-Time Discrete or Continuous? — An Empirical Question, 1995

Maybe empirical reality is continuous after all. But so far, the world’s best physicists cannot tell. At least we know one thing for sure: we are profoundly inept at telling the difference. We cannot tell the difference between what discrete systems and continuous systems can do. So if there is a radical difference between computers and living experience, it is not that one.

Two stories about “Divided they Blog”, figure 1.

Lada Adamic and Natalie Glance published one of the first papers analyzing an empirical network of websites, Divided they Blog. It features a network of political blogs, harvested before the 2004 US presidential election. For good or for bad, its first figure may have had more influence than its findings. In this figure we see two clusters, one for the Democrats, one for the Republicans, colored accordingly.

The page from “Divided they Blog” where figure 1 appears.

The paper was published in 2005 and has been hugely cited (2551 times in 2019 according to Google Scholar). I have two stories about this image, a tale and a horror story, which together form a bigger story. Let us start with the tale.

I. The Journey of Figure 1

I was super happy to discover a paper by Brooke Foucault Welles and Isabel Meirelles, Visualizing Computational Social Science: The Multiple Lives of a Complex Image, published just ten years after Divided they Blog. They take a close look at the journey of figure 1 in academic literature and beyond.

As they observe, the image was not supporting any argument in its original paper. The conclusions were grounded in network metrics, independent of visualization. The role of the figure was only to illustrate methodology. It was not invoked as a piece of evidence. However, once it circulated, it became a piece of evidence to other authors. Foucault Welles and Meirelles cite for example “a popular-press book about network science”, Connected:

What immediately stands out is the extreme separation between liberals and conservatives. … Just like the real-world political networks …, the online social network appears to be strongly homophilous and polarized. This suggests that political information is used more to reinforce preexisting opinions than to exchange differing points of view.

Connected, Christakis & Fowler, 2009, p. 206 (ellipses my own)

As Foucault Welles and Meirelles put it, “[if] readers simplify the image by ignoring the links connecting the parties, then the full blogosphere visualization more easily communicates the desired message of political polarization.” They even provide their own version of the figure, blurred in order to emphasize the effect it produces on an untrained eye:

What we perceive “at first glance” of the figure from “Divided they Blog”, as presented in the paper “Visualizing Computational Social Science”

Their paper makes a number of very interesting points, but I am not summarizing them here. You can read it yourself; it is short enough. The part of the tale that I want to retrace here is how much these two clusters seem to mean to so many people. Few visualizations have traveled this far, or in such complicated conditions. Foucault Welles and Meirelles conclude this way:

Finally, and most critically, we call on computational social scientists, especially network scientists, to interrogate their own visualization practices. As discussed above, constructing network graphs remains as much an art as a science, with few conventions regarding the “right” way to represent node-link data.

Visualizing Computational Social Science, Foucault Welles and Meirelles

I certainly agree, though in my perspective, there is a rigorous method behind network analysis. Good visual network analysis should be teachable. I take the occasion to quickly unfold how to read visual clusters, which will be necessary for our second story.

How to interpret visual clusters?

The key fact that the audience easily misses when looking at a network visualization is that the layout ignores the node attributes. The position is independent of the color. The structure does not reproduce the content. The hyperlinks do not follow political affiliation. Until they do, and then it is remarkable. That is why figure 1 tells something new. That is why there is a fact, a finding. The fact that each cluster has its own color is not a given, but an obtained empirical observation. A correlation between the content and the structure.

More generally, the backbone of such image-based rationale is as follows:

  1. The nodes are colored according to attribute X.
  2. The layout algorithm places the nodes according to their links, regardless of anything else. It optimizes the distances between nodes so that closer nodes have more chances to be connected (directly or indirectly).
  3. We observe areas of higher node density. According to the layout algorithm, those are “clusters” where nodes are more connected than on average in this network.
  4. We also observe that each cluster is mostly populated by nodes of a distinct modality of attribute X.
  5. Crossing these two observations, we conclude that there is homophily: in this network, nodes of the same modality of X have more chances to be connected.

This argumentation is important to the researcher during exploration. In a paper you need a stronger argument, and like Adamic and Glance, you should favor a metric, such as the densities of the groups formed by the modalities. This way you can ground your evidence on the data without the mediation of the layout. But the final paper is not the only place where you need evidence. Exploration also requires it, albeit to a weaker standard. Although visual interpretation is not the best evidence, it guarantees that your exploration leads to findings. And that is something.
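For the record, such a metric is simple to compute. Here is a minimal sketch with networkx (the attribute name and the toy graph are of course just an example): count the edges that stay within a modality versus those that cross modalities.

```python
import networkx as nx

def homophily_counts(G, attribute):
    """Count edges within vs. between the groups defined by a node attribute.
    A within share much higher than expected by chance suggests homophily."""
    within = between = 0
    for u, v in G.edges():
        if G.nodes[u][attribute] == G.nodes[v][attribute]:
            within += 1
        else:
            between += 1
    return within, between

# Toy usage; the attribute name "party" is purely illustrative.
G = nx.Graph()
G.add_nodes_from([(1, {"party": "blue"}), (2, {"party": "blue"}),
                  (3, {"party": "red"}), (4, {"party": "red"})])
G.add_edges_from([(1, 2), (3, 4), (2, 3)])
print(homophily_counts(G, "party"))   # (2, 1): two within-group edges, one between
```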

II. A Horror Story

I was horrified to read a paper published in the Journal of Machine Learning Research, Community Detection and Stochastic Block Models: Recent Developments. It is about the stochastic block model (SBM), a famous model used for community detection, ie. finding clusters. The paper is sound overall, and its core points seem a valuable contribution. But the figures are the worst misuse of network visualization I have ever seen published.

The relevant part of the paper explains what community detection aims at – finding clusters. On the surface, the figures seem to do the job. Take a look at this one, and I will explain its caption:

As in the case of Adamic and Glance, the image is not key to the argument, merely serving as a way to help the reader understand. But if you are familiar with network visualization, you ask yourself one of the following questions:

  • Did the SBM change the structure of the graph? That’s weird, because clustering algorithms are not supposed to. Or maybe…
  • Is it the same graph? But in that case, does it mean the SBM changed the node placement? Isn’t that the job of a layout algorithm?

It turns out that here, the network is the same “before” and “after”. The clustering classes are only known “after”, but the nodes and edges are the same in both cases. Which raises this question: why are there two different layouts? Remember: the layout algorithm only acknowledges the links, not the attributes (the clustering classes represented as color). There is no reason why the layout should be different.

The author seems to believe that a node placement algorithm ignores the structure and only looks at the attributes. Which is the opposite of what they really do. Or more precisely, of what force-driven placement algorithms do – the kind used by Adamic and Glance, and known to manifest clusters. Interestingly, the image featured in the paper mimics the general layout resulting from such an algorithm. What could work the opposite way of a force-driven placement algorithm, while producing similar shapes? A fake.

Both “before” and “after” images are faked. A random placement algorithm has been tweaked to produce this result – the paper calls it a “random arrangement”. This otherwise useless algorithm ignores links to place nodes randomly in a disk, possibly more often at the center. The image on the right is of the same kind, except that each cluster has been rendered separately and then placed apart. These images do not translate the network’s topology, while simultaneously referring to a practice where this translation is central.
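For clarity, here is roughly what such a “random arrangement” amounts to – my own reconstruction of the described procedure, not the paper’s actual code. The point is that the links never enter the computation, whereas a force-directed layout such as networkx’s spring_layout uses nothing but the links.

```python
import math, random

def random_disk_positions(nodes, radius=1.0, center_bias=True):
    """Scatter nodes in a disk while ignoring the edges entirely."""
    positions = {}
    for n in nodes:
        angle = random.uniform(0, 2 * math.pi)
        # r = R*u crowds points toward the center; r = R*sqrt(u) would be uniform.
        u = random.random()
        r = radius * (u if center_bias else math.sqrt(u))
        positions[n] = (r * math.cos(angle), r * math.sin(angle))
    return positions

# By contrast, a force-directed layout reads only the graph structure:
#   pos = networkx.spring_layout(G)   # no node attribute, no cluster label
```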

The motivation is simple. Layout algorithms already visualize the clusters, but the author wants to illustrate how SBM finds the clusters. So I presume he designs or repurposes a process inapt at visualizing clusters, so that the lead role can be played by SBM. Note that I do not mean that layouts are preferable to community detection (both are super valuable for different uses). I just mean that this argumentation is dishonest.

I also consider the possibility that the author believes visualization is just artistic imagery aimed at adding clarity, so that anyone can do whatever they want without consequences on the scientific level. This is equally problematic.

I am horrified that an academic audience could be tricked by these figures. I find it threatening because it shows that a complete misunderstanding of network visualization might go unnoticed under peer-review. In my worst nightmares, I am the only one to see the problem and everyone thinks I am splitting hairs. I hope you, reader, see where these figures are flawed. But I realize that it is not so easy to unfold.

As when debating with flat-earthers, in trying to expose the flaws of this rationale I find myself entangled in a web of nonsense. Without the backing of common sense, I realize that establishing the obvious requires considerable effort, and I start doubting my ability to show how dangerous it is. So instead, I will push the flawed logic further to expose its absurdity. And it will be about figure 1 of Divided they Blog, because of course, the author also features it. Take a look at this glorious image, and its baffling caption:

In the fictional world where this rationale makes sense, Lada and Natalie found themselves deeply puzzled by their data. No matter which algorithm they used to visualize the blogosphere, it always appeared unified as a pure center-periphery structure, an obscure hairball, all blogs orbiting randomly regardless of their political affiliation. What an incredible finding! The expected homophily was completely absent from the blogosphere. No metric could find a correlation. As they hesitantly started drafting a publication for Science, a computer science colleague suggested the stochastic block model. And as they gave it a try, an even more surprising event happened. Not only did the liberal/conservative divide appear, coloring the network in red and blue, but the very structure of the blogosphere suddenly unveiled itself. The blogosphere hatched before their very eyes, unfolding its two hidden clusters like the wings of a magnificent chicken. The initial network was an egg that only the stochastic block model could crack open, into an explosion of colors that would make them famous. This would be remembered as a moment of pure magic, the unveiling of realities hidden to the simple human mind, the grand opening of a portal to Knowledge whose only key has to be an algorithm.

Narrated Algorithms

Each algorithm has its own mythology. Most algorithms are complicated machineries requiring a lot of expert work to be properly understood or implemented. In that sense, they are opaque, and it is hard (ie. costly) to debunk false ideas about them. Myths about algorithms tend to stick.

Academic papers about algorithms tell nice stories. A publication cannot just expose an algorithm: certain criteria are required (novelty, efficiency…) and a rationale must frame them. In other words, a story. But it does not have to be true. Its role is to justify and explain why the algorithm matches the criteria. In this rationale, the logical conclusion predicts the features of the algorithm. And indeed, it has them. But it does not mean that the reason exposed by the rationale is the right one.

In those narratives, the authors always seem to have a good understanding of the reasons why the algorithm is better (performance, quality…). That, I must admit, is a complete joke. Although theoretical work can lead to the discovery of a new algorithm, which can come with an explanation, algorithms can also be discovered by heuristic, trial-and-error iterations. In that case, there is no guarantee that its authors understand the reasons why it is better. Fortunately, as long as they come up with a reasonable narrative, they can get away with it.

As weird as it sounds, you can perfectly understand how an algorithm works and completely ignore why it performs better. I even believe it is generally the case. But academic practices do not incentivize honesty on that matter. So authors narrate their algorithm so that a justification appears. That justification is rarely discussed, especially if the algorithm works. And it gives birth to a myth.

I decided to write on this matter when I read the short piece titled “Everything you know about word2vec is wrong”. The author, assuming here the role of a software engineer, tried to reimplement the famous word2vec algorithm. Following the instructions from the original paper and other major sources (Wikipedia…), they could not get the impressive results that the algorithm is known for. By checking the code of reference implementations, they discovered that those are “drastically different”. The piece is discussed on Hacker News and according to several comments, this discrepancy between the paper and the implementation is not unusual.

“One infamous example of this is SSAPRE – to this day, people have a lot of trouble understanding the paper (and it has significant errors that make the algorithm incorrect as written). The concept sure, but the exact algorithm – less so. Reading the source code … it is just wildly different than the paper (and often requires a lot of thought for people to convince themselves it is correct).”

DannyBee, most upvoted comment

The word2vec paper narrates that “subsampling of frequent words during training results in a significant speedup … , and improves accuracy of the representations of less frequent words” and also that “a simplified variant of Noise Contrastive Estimation … results in faster training and better vector representations for frequent words.” It comes up with a theory about why it performs better. But it turns out that a seemingly minor detail, barely mentioned in the paper, has a major impact on the result. At the very least, the paper’s narrative does not seem so good at explaining why the algorithm performs well.
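For reference, the subsampling rule as stated in the paper is simple enough to write down – which makes the gap with the reference implementations all the more telling. A minimal sketch (the threshold value is the one suggested in the paper; nothing here claims to match the C code):

```python
import random
from math import sqrt

def keep_occurrence(word_frequency, t=1e-5):
    """Subsampling as stated in the word2vec paper: an occurrence of a word
    with relative frequency f is discarded with probability 1 - sqrt(t / f).
    Reference implementations reportedly deviate from this formula."""
    p_discard = max(0.0, 1.0 - sqrt(t / word_frequency))
    return random.random() >= p_discard
```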

This story reminded me of my own process of writing and publishing the Force Atlas 2 paper, on a force-directed network layout algorithm. Our motivation was absolutely not the algorithm’s performance. It proposed novel ideas, solved engineering issues in integrating different existing techniques, and had a specific design. It was used by many Gephi users, and we thought a peer-reviewed reference on the algorithm would be of help to the research community. We wanted to provide a reference explanation for how it worked, a ground for interpreting the resulting network maps. But the peer review asked for a benchmark. Fortunately, it was also faster than existing alternatives, and we were able to come up with a nice story about performance. We were published. But let us be honest: that is not why people cite the paper. It is because they use it. And that is because it produces good results. But why does it work so well?

The best quality force-driven algorithm is arguably the LinLog algorithm, proposed by Andreas Noack. He provides a convincing narrative in his 2007 paper, but the clearest explanation relies on a picture from his 2009 paper. In short, all algorithms of that kind use two parameters, one for the attraction (a) and one for the repulsion (r). Those are non-negative integers, and the attraction exponent must be greater than the repulsion exponent (a>r). Noack showed that the result is better when a and r are smaller. This leaves one optimal solution: a equals 1 (linear attraction) and r equals 0 (logarithmic repulsion), hence the optimal algorithm, LinLog. The picture below shows where other algorithms place themselves, and we see that the best spot is in the top-left corner.
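To fix ideas, here is a rough sketch of that (a, r) family as a single iteration of a force-directed layout. Conventions vary between papers (forces versus energies), so take the exponents here as my own reading: attraction energy growing like distance^a between linked nodes, repulsion energy like distance^r between all pairs, with d^0 read as log d. For LinLog, (a, r) = (1, 0), which in force terms gives a constant attraction along edges and a 1/d repulsion.

```python
import math

def layout_step(positions, edges, a=1, r=0, step=0.01):
    """One iteration of a generic (a, r) force-directed layout (sketch only)."""
    def force(d, exponent):
        # Derivative of d**exponent, with d**0 read as log d, up to constants.
        return d ** (exponent - 1)

    move = {n: [0.0, 0.0] for n in positions}
    nodes = list(positions)
    for i, u in enumerate(nodes):                 # repulsion: every pair
        for v in nodes[i + 1:]:
            dx = positions[v][0] - positions[u][0]
            dy = positions[v][1] - positions[u][1]
            d = math.hypot(dx, dy) or 1e-9
            f = force(d, r) / d                   # normalize the direction
            move[u][0] -= f * dx; move[u][1] -= f * dy
            move[v][0] += f * dx; move[v][1] += f * dy
    for u, v in edges:                            # attraction: linked pairs only
        dx = positions[v][0] - positions[u][0]
        dy = positions[v][1] - positions[u][1]
        d = math.hypot(dx, dy) or 1e-9
        f = force(d, a) / d
        move[u][0] += f * dx; move[u][1] += f * dy
        move[v][0] -= f * dx; move[v][1] -= f * dy
    return {n: (positions[n][0] + step * move[n][0],
                positions[n][1] + step * move[n][1]) for n in positions}
```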

Do not get me wrong: LinLog is indeed the best performing algorithm, in the sense that it is the one that represents clusters most clearly. That is the entire point of his 2009 paper, “Modularity clustering is force-directed layout”. But the narrative, as convincing as it is, is wrong.

The explanation is in plain sight in the 2007 paper. The LinLog model comes in two flavors: node repulsion and edge repulsion. Both match the nice narrative. But only the edge version provides the nice results. In the pictures from his paper, you can judge for yourself how the node version resembles the output of the venerable Fruchterman-Reingold algorithm more than that of the edge version. Noack himself chose to highlight it visually.


Once again, the decisive parameter is absent from the algorithm’s narrative. The algorithm is as good as advertised, the narrative predicts it, but the narrative is nevertheless false.

One last example: similar concerns arise in the field of machine learning. For a long time, Bayesian strategies were the most successful. They were considered mathematically elegant, requiring fewer assumptions than alternatives. In the spirit of Occam’s razor, this elegance crystallized as a rationale for the superiority of the method. Certainly, if your approach requires many meta-parameters, it must mean that you did not model your problem properly, and thus cannot reach an optimal solution. But that narrative could only hold until deep learning and its inherent messiness dethroned the Bayesian approach. The unreasonable effectiveness of mathematical elegance turned out to be a myth. Was it wrong? That, I do not know. But it is certainly not dead. Myths have a thick skin.