Some quotes on how tools influence the humanities

40-minute read, 3 minutes without the quotes.

Now that I have identified what I want to salvage from the critique of big data visualization, I must forge a general argument. I do not have a clear overview of the literature on the subject; I only remember the multiple times when I had this something-is-not-quite-right feeling. Here I gather some quotes on the subject, drawn from my recent readings.

Let me briefly state my argument, as it clarifies the perspective behind this selection of quotes. Some authors criticize digital tools and/or visualization because they supposedly carry a regrettable external influence on the humanities. My discomfort lies in two points.

Firstly, users repurpose tools in unexpected ways, and (re-)interpret them in ways widely different from their original framing. So if there is an influence, it is not self-evident; the origin of a tool does not guarantee its influence.

Secondly, science is “disunified” (Galison’s word). A given discipline such as physics is not a single culture, but many (you may think of Knorr Cetina’s epistemic cultures). When humanists use Gephi, they do not borrow it from computational social scientists like Lada Adamic, or network scientists like Albert-László Barabási, but from engineers like Mathieu Bastian, the 2010 me, and fieldwork-engaged sociologists like Dana Diminescu. Our project was to explore and describe, not to model and predict. Furthermore, what some scholars perceive as their own fight against science-as-seeking-universal-laws already exists within “hard science” fields such as physics. Why see tools as Trojan horses when the landscape of knowledge production is so bafflingly fragmented and complex that an entire field of research is dedicated to studying it?

These points deserve more development, I know; but this post is not the right place, and I will stop for now.

I must clarify, however, that I do not contest the existence of an external influence on the humanities. I am only complicating things to improve the criticism of scientific instruments, so that it can better inform how we design and use them in practice. Let’s only fix tools where, when, and if there is something broken. Which is still, for me, a pretty open question.

I expect the following quotes to help me identify a few things:

  • Who, they say, is influencing?
  • What, they say, is being influenced?
  • How, they say, does this influencing operate?
  • Why, they say, is it a matter of tools, visualization or technology?

I want to know if there is a core argument, or a constellation of semi-related points. And I want to know if the point is serious or, worst-case scenario, a glorified misunderstanding.

Casual finding

These authors have a variety of perspectives on, and conceptualizations of, the tool question. Most papers are not directly, or at all, about tools and/or visualization. Yet most of them casually state, often when summarizing their point, that presuppositions/ontologies/methods are built into the tool and/or visualization. And none of these papers tells us who stuffed the tool/visualization with its epistemic filling, why, or how.

A first kind of author just does not ground the claim. A second kind cites Johanna Drucker, but she does not ground the claim (I am not saying she has to, but that is how it is). Finally, a third kind actually proposes an explanation; but the closer we get to the question, the more it seems to be about something else. Judge for yourself.

Some quotes

I present the quotes by paper, in reverse chronological order, because it sometimes helps in following the network of references. This way, it feels as though one progresses upstream toward the source of the argument.

Of course, at some point the argument is grounded in cognition and semiotics; I stop before extracting quotes from those papers. That space is too large. I only track who says what, and on which grounds. I focus on science and technology studies and on the humanities (roughly speaking). I do not even try to be exhaustive.

Ruppert & Scheel, 2019

Ruppert, E., & Scheel, S. (2019). The Politics of Method: Taming the New, Making Data Official. International Political Sociology, 13(3), 233-252.

Evelyn Ruppert and Stephan Scheel’s piece is not about the social sciences and humanities. However, it emphasizes the materiality of practices with big data, notably visualization (see the quotes below). This is why I include it here.

we depart from Savage’s conception of the politics of method to argue that these strategies … also feature material‐semiotic practices like demonstrations that seek to legitimise innovations in methods and data as official. … The politics of method rather require a symmetrical analysis that accounts for how different kinds of digital devices are mobilised

For the authors, visualizations perform realities in two ways: by rendering the phenomenal world as self-evident, and by making absent the work put into obtaining and refining the data (thus enacting a reality independent of the method).

we understand visualizations as crafted set‐ups that involve situated enactments of realities. … visualizations bring realities performatively into being.

In contrast to other statistical accounts of mobility, such as static charts and tables, MPD seems to speak for itself precisely because it moves. The moving red dots become not only a vehicle for the data, but first and foremost for its claimed self‐evidence. The red dots moving along Estonia’s main transport routes suggest that they correspond to the commuters they are meant to represent. Through this “realist trick” (Law 2012) mobility is enacted as a reality that exists independently of the methods that are used to describe it. There appears to be a seamless correspondence between the visualization (the moving dots) and the reality (commuting patterns in Estonia) it represents and renders “the phenomenal world (as if it) were self‐evident and the apprehension of it a mere mechanical task” (Drucker 2011, 2). In this way MPD is constituted as the perfect method for tracing the movements and locations of increasingly mobile populations, a method that offers an unrestricted vision from above, a vision that allows, in tradition of the “god trick” described by Donna Haraway (1988, 581), to see “everything from nowhere.”

the map’s capacity to build allies derives precisely from making absent all the work that goes into the map’s crafting as a coherent account of mobility.

the demonstration is a strategy to convince other statisticians and build allies … In other words, the heat map demonstrates a paradigm shift that statisticians sometimes refer to as a change from statistics to modelling.

References on that matter:

  • Law, 2012
  • Drucker, 2011
  • Haraway, 1988

Masson, 2017

Masson, E. (2017). Humanistic Data Research: An Encounter between Epistemic Traditions. In M. T. Schäfer & K. van Es (Eds.), The Datafied Society: Studying Culture Through Data. Amsterdam: Amsterdam University Press.

For Eef Masson, the humanities borrow digital tools from positivist epistemic traditions. The humanities are thus inevitably indebted to those traditions, because of presuppositions built into the tools. Masson does not specify how this works, but points to authors who made the argument. She also mentions a transparency effect of computational methods and visualization.

with the introduction of digital research tools, and tools for data research specifically, humanistic scholarship seems to get increasingly indebted to positivist traditions

those tools, more often than not, are borrowed from disciplines centred on the analysis of empirical, usually quantitative data. Inevitably, then, they incorporate the epistemic traditions they derive from.

Johanna Drucker points out that tools for information visualization are inevitably indebted to the disciplines from which they derive. The same, one might add, applies to tools for data scraping, and for the cleaning, sorting or otherwise processing of collected data.

At the most basic level, the indebtedness Drucker speaks of can be understood as a set of built-in presuppositions about how knowledge is obtained. In this context, it is important to consider not only the assumptions of the practitioners for whom the tools were designed … but also those of the software engineers who conceived them.

Eef Masson draws her argument from Drucker, Kitchin, and Rieder & Röhle. Here is her summary.

In the absence of readily legible clues as to their epistemic foundations, computational research tools are often assigned such values as reliability and transparency (Kitchin 2014: 130). As Rieder and Röhle observe, the automated processing of empirical data that they enable seems to suggest a neutral perspective on reality, unaffected by human subjectivity (2012: 72). Drucker, a specialist in the history of graphics, makes a similar point, focusing more closely on practices of data visualization. She argues that the tools used for this purpose are often treated as if the representations they render provide direct access to ‘what is’. This way, the distinction between scientific observation (‘the act of creating a statistical, empirical, or subjective account or image’) and the phenomena observed is being collapsed (Drucker 2014: 125; see also Drucker 2012: 86).

References on that matter:

  • Drucker, 2012
  • Drucker, 2014
  • Kitchin, 2014
  • Rieder & Röhle, 2012

Rieder & Röhle, 2017

Rieder, B., & Röhle, T. (2017). Digital Methods: From Challenges to Bildung. In M. T. Schäfer & K. van Es (Eds.), The Datafied Society: Studying Culture Through Data (pp. 109–124). Amsterdam: Amsterdam University Press. 

For Bernhard Rieder and Theo Röhle, knowledge is embedded, stuffed within tools; tools express and perform methods. By this, they mean (1) that tools make methods possible, but also (2) that tools perform methods independently of their user (to a certain extent).

They frame the matter in the context of an encounter between the humanities (understood in a broad sense) and computing (seen as a method drawing from multiple fields). They are careful not to reduce the dichotomy to qualitative/quantitative or critical/administrative.

Largely drawing from their 2012 paper, they break down the influence of digital tools as follows:

  • People believe computers can be more objective than humans
  • Visualizations are spectacular, which plays a rhetorical role
  • Their opacity undermines a pillar of scholarship: open scrutiny
  • If one believes in the existence of underlying laws of nature, the computer appears as the quintessential tool
  • By commoditizing methods for lay publics, they render mediations invisible and thus promote essentialist interpretations.

For the authors, users play an active role in the influence of the tool, because they may or may not understand the concepts and methods embedded in the tools – hence the importance of Bildung.

The encounter between the humanities and computing plays out in different ways in different arenas, but needs to be addressed in principle as well as in relation to particular settings. … While terms like ‘digital humanities’, ‘Cultural Analytics’, ‘digital methods’ or ‘web science’ can play the role of buzzwords, their proliferation can be seen as indicator for a ‘computational turn’ that runs deeper than a simple rise of quantitative or ‘scientific’ modes of analysis.

Even if ‘the digital’ has become a dominant passage point, it works like a meat grinder: the shredded material does not come out as a single thread, but as many. To connect back to the Methodenstreit: computational methods can be both deductive and inductive (see e.g. Tukey’s (1962) concept of exploratory data analysis), both quantitative and qualitative in outlook, both critical and administrative.

why computational tools have sparked such a tremendous amount of interest when it comes to studying social or cultural matters [?] One explanation might be the notion that the computer is able to reach beyond human particularities and into the realm of objectivity

Since [visualizations of network topologies, timelines or enriched cartographies] possess spectacular aesthetic – and thus rhetorical – qualities, we asked how the argumentative power of images could (or should) be criticized. … The challenge is thus to maintain a productive self-reflexive inquiry into our own visual practices … without abandoning the promise of gaining insights via visual forms (Drucker 2014: 130-137).

Despite the fact that writing software forces us to make procedures explicit by laying them out in computer code, ‘readability’ is by no means guaranteed. However, an open process of scrutiny is one of the pillars of scholarship and, in the end, of scholarship’s claim to social legitimacy.

When reality is perceived to adhere to a specifiable system of rules, the computer appears to be the quintessential tool to represent this system and to calculate its dynamics.

A critique of digital tools is incomplete without a critique of their users and the wider settings they are embedded in.

If students and researchers are trained in using these tools without considerable attention being paid to the conceptual spaces they mobilize, the outcomes can be highly problematic. Digital Bildung thus requires attentiveness not just to the software form, but to the actual concepts and methods expressed and made operational through computational procedures.

It is again important to notice that the point and line form comes with its own epistemic commitments and implications, and graph analysis and visualization tools like Gephi further structure the research process.

The problem, again, comes from the fact that tools such as Gephi have made network analysis accessible to broad audiences that happily produce network diagrams without having acquired robust understanding of the concepts and techniques the software mobilizes. This more often than not leads to a lack of awareness of the layers of mediation network analysis implies and thus to limited or essentialist readings of the produced outputs that miss its artificial, analytical character. A network visualization is closer to a correlation coefficient than to a geographical map and needs to be treated accordingly.

While our three examples might be considered very specific, we think that similar arguments could be made for a wide variety of cases where software performs a method.

The problem of black boxing does not begin with the opacity of computer code, but with the desire to banish technology from the ‘world of signification’ (Simondon 1958: 10). Behind the laudable efforts to increase levels of technical capacity lies the dangerous phantasm that technology’s epistemologies are ultimately ‘thin’ and that once programming skill has been acquired, mastery and control return. We believe, on the contrary, that any nontrivial software tool implies thick layers of mediation that connect to computation as such, certainly, but in most cases also imply concepts, methods and styles of reasoning adapted from various other domains.

Digital methods are here to stay and to go beyond the simplistic reflexes of enthusiasm and rejection we need to engage in critical practice that is aware of the shocking amounts of knowledge we have stuffed into our tools.

References on that matter:

  • Rieder & Röhle, 2012
  • Drucker, 2014

Marres & Gerlitz, 2016

Marres, N., & Gerlitz, C. (2016). Interface methods: Renegotiating relations between digital social research, STS and sociology. The Sociological Review, 64(1), 21-46.

For Noortje Marres and Carolin Gerlitz, re-purposing a digital tool is ambivalent. As the tool was not made exactly for our needs, yet for something similar, it comes with constraints but also with freedom. This is made possible because these tools are often plastic, if not unstable.

I must clarify that the authors’ approach assumes born-digital data, so they do not address the problem of digitization here.

digital analytics invoke a methodological uncanny for social research. The tools mentioned above closely resemble the techniques and methods deployed in social inquiry, but we can certainly not call them ‘our own’. ‘Not our own’ because in second instance the methods built into popular tools often prove to have more alien disciplinary provenances, and to serve the objectives of digital platforms rather than those of research. … We will propose that there are decisive advantages to affirming the ambivalence of digital analytics – according to which data tools are both similar and different from sociological research techniques.

A key characteristic of the methodological uncanny is that it is not necessarily clear, which analytic purposes digital tools may serve, what research objectives they may align with or what disciplinary agendas they enact. One of us has previously characterised social research tools as ‘multifarious instruments’ which have the capacity to serve multiple purposes, which may not always be clearly distinguished, and which require some form of experimental test in order to be established (Marres, 2012).

much of the debate about digital methods in social media studies has focused on the possibility of the re-purposing of digital devices (Rogers, 2009). Sociologists have drawn attention to the instability and under-determinacy of digital research methods themselves, proposing notions such as plastic methods (Lury, 2012) and live methods (Back and Puwar, 2012).

If a tool can serve multiple purposes, it cannot be simply defined as a sociological tool or method, but can only become so through its deployment and in assembly with research questions, objectives and narrativation.

Taking into account the alignment of research objectives, data, tools, media and analytical purpose, we can conclude that digital research metrics may be called ‘thick’ provided we take the research context into account: they are propositions that suggest particular ways to equip, organize, and valuate practices and knowledges. While the measures built into online data tools are arguably rather ‘thin’ indeed, the socio-technical apparatus they enable – the detection of currency (for free!) – is much ‘thicker’: it integrates the analysis of live data into digital practices, and as such helps to realize informational societies orientated towards liveness. For this reason, we think of co-occurrence, or at least its implementation in data tools online, as a highly ‘interested method’ (Asdal, 2014).

References on that matter:

  • Marres, 2012 a
  • Ruppert, Law & Savage, 2013
  • Rogers, R. (2013). Digital Methods. Cambridge, MA: MIT Press.

Drucker, 2014

Drucker, J. (2014). Graphesis: Visual Forms of Knowledge Production. Cambridge, MA: Harvard University Press.

For Johanna Drucker, the graphical tools developed in empirical sciences and adopted by humanists are Trojan horses conveying assumptions about the nature of information:

  • Their familiarity conceals their epistemological biases, and collapses the critical distance between the phenomenal world and its interpretation.
  • Their simplicity and legibility hide the fact that the data had to be obtained, taken rather than given.
  • Their stable reading conventions hide the fact that phenomena are not independent from their observer.

“We know this”, she writes; but I find none of these three points self-evident. One can find details in other texts, but not in this one. She does not explain why some (all?) visualizations feel familiar: does it stem from their semiotics, or did an external influence weaponize them? Similarly, it is unclear whether she merely contests the reductionist approach, or whether visualization techniques have inherent reductionist qualities.

Also note that although her vocabulary suggests an intent to deceive, that intentionality is not argued.

Most information visualizations are acts of interpretation masquerading as presentation. In other words, they are images that act as if they are just showing us what is, but in actuality, they are arguments made in graphical form.

the primary effect of visual forms of knowledge production in any medium … is to mask the very fact of their visuality, to render invisible the very means through which they function as argument.

Expectations about images changed and even the concept of what constitutes a likeness alters over time. We come to believe that photographs are an unmediated image, what Roland Barthes called an “image without a code,” and continue this belief as digital methods of scanning, altering, and creating have developed. But of course, all images are encoded by their technologies of production and embody the qualities of the media in which they exist.

Most, if not all, of the visualizations adopted by humanists, such as GIS mapping, graphs, and charts, were developed in other disciplines. These graphical tools are a kind of intellectual Trojan horse, a vehicle through which assumptions about what constitutes information swarm with potent force. These assumptions are cloaked in a rhetoric taken wholesale from the techniques of the empirical sciences that conceals their epistemological biases under a guise of familiarity. So naturalized are the maps and bar charts generated from spread sheets that they pass as unquestioned representations of “what is.” This is the hallmark of realist models of knowledge and needs to be subjected to a radical critique to return the humanistic tenets of constructedness and interpretation to the fore. Realist approaches depend above all upon an idea that phenomena are observer-independent and can be characterized as data. Data pass themselves off as mere descriptions of a priori conditions. Rendering observation (the act of creating a statistical, empirical, or subjective account or image) as if it were the same as the phenomena observed collapses the critical distance between the phenomenal world and its interpretation, undoing the concept of interpretation on which humanistic knowledge production is based. We know this. But we seem ready and eager to suspend critical judgment in a rush to visualization. At the very least, humanists beginning to play at the intersection of statistics and graphics ought to take a detour through the substantial discussions of the sociology of knowledge and its critical discussion of realist models of data gathering. At best, we need to take on the challenge of developing graphical expressions rooted in and appropriate to interpretative activity.

Because realist approaches to visualization assume transparency and equivalence, as if the phenomenal world were self-evident and the apprehension of it a mere mechanical task, they are fundamentally at odds with approaches to humanities scholarship premised on constructivist principles. I would argue that even for realist models, those that presume an observer-independent reality available to description, the methods of presenting ambiguity and uncertainty in more nuanced terms would be useful.

the rendering of statistical information into graphical form gives it a simplicity and legibility that hides every aspect of the original interpretative framework on which the statistical data were constructed. The graphical force conceals what the statistician knows very well—that no “data” pre-exist their parameterization. Data are capta, taken not given, constructed as an interpretation of the phenomenal world, not inherent in it.

knowledge created with the acknowledgment of the constructed nature of its premises is not commensurate with principles of certainty guiding empirical or realist methods. Humanistic methods are counter to the idea of reliably repeatable experiments or standard metrics that assume observer-independent phenomena. By definition, a humanistic approach is centered in the experiential, subjective conditions of interpretation.

Kitchin, 2014

Kitchin, R. (2014). The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. London: Sage.

I did not read it, mostly for practical reasons. It features here because Eef Masson refers to it on this point: “In the absence of readily legible clues as to their epistemic foundations, computational research tools are often assigned such values as reliability and transparency.”

Ruppert, Law & Savage, 2013

Ruppert, E., Law, J., & Savage, M. (2013). Reassembling social science methods: The challenge of digital devices. Theory, Culture & Society, 30(4), 22-46.

For Evelyn Ruppert, John Law and Mike Savage, the most consequential traits of the digital apparatus are the kinds of traces produced, the modalities of their circulation, the kinds of arrangements they elicit… Most of their argument is not specifically about tools or visualization; but a small part is.

For the authors, visualization can carry political power, for instance when it maps populations. This power lies in what it represents, but also in how it was obtained, and in the kinds of patterns it does or does not manifest. The way visualizations circulate also matters.

Note. I did not extract quotes that are not directly about tools. However, I can offer an idea of the aspects of the digital apparatus they elaborate on, since it might interest you: (1) transactional actors; (2) heterogeneity; (3) visualization; (4) continuous, rather than bundled time; (5) whole populations; (6) granularity; (7) expertise; (8) mobile and mobilizing; and (9) non-coherence.

On the one hand, we want to suggest, controversially, that we are seeing a partial return to an older, observational kind of knowledge economy, based on the political power of the visualization and mapping of administratively derived data about whole populations. On the other hand, as a genealogical approach demands, we need to attend to the differential problems, concerns and devices through which observation is being performed by the digital and its material and productive effects, including the reconfiguration of knowledge spaces and social science expertise.

in the move to the digital visualization now becomes a means of showing how ‘excessive’ information can be reduced to a form in which it can be meaningfully, if partially, rendered for interpretation. … visualization becomes a summarizing inscription device for stabilizing and representing patterns so that they can be interpreted.

since both the distribution of digital devices and inscriptions is widespread, and that cascading devices work in different ways to produce different effects in different locations and circumstances, it is more readily apparent that knowledges do not cohere to generate a single authoritative representation of the social.

Rather than competition between ideas, it is competition between material devices where those that assemble and summarize can become ‘centres of calculation’. But crucial to this is their mobility, transmission and circulation, and the similar movement of inscriptions.

Drucker, 2012

Drucker, J. (2012). Humanistic Theory and Digital Scholarship. In M. K. Gold (Ed.), Debates in the Digital Humanities (pp. 85-95). Minneapolis: University of Minnesota Press.

Johanna Drucker writes that the adoption of certain visualization and processing techniques blocks the functioning of humanistic methods. Those techniques are the digital techniques developed for empirical, positivist research (including the social and natural sciences) and for other fields such as management, business, gaming… They block humanistic methods mainly because interpretation, in the humanities, depends on the observer and cannot be reified into the observer-independent artifacts required by digital processing and visualization.

Her only argument incriminating visualization itself is that it requires digital data, which are problematic because of their inherent reductionism. Johanna Drucker grants visualization the power to imply certainty, but she does not support the claim. On the contrary, she mentions that only a naive viewer would fall for it, and that most humanists and social and natural scientists accept it regardless.

Tools for humanities work have evolved considerably in the last decade, but during that same period a host of protocols for information visualization, data mining, geospatial representation, and other research instruments have been absorbed from disciplines whose epistemological foundations and fundamental values are at odds with, or even hostile to, the humanities. Positivistic, strictly quantitative, mechanistic, reductive and literal, these visualization and processing techniques preclude humanistic methods from their operations because of the very assumptions on which they are designed: that objects of knowledge can be understood as self-identical, self-evident, ahistorical, and autonomous.

these visualization techniques … come entirely from realms outside the humanities—management, social sciences, natural sciences, business, economics, military surveillance, entertainment, gaming, and other fields in which the relativistic and comparative methods of the humanities play, at best, a small and accessory role.

Getting the work done—putting texts into digital formats with markup that identified content—might be an interpretative exercise, but introducing ambiguity at the level of markup was untenable, not merely impractical. … to play in a digital sandbox one had to follow the rules of computation: disambiguation and making explicit what was so often implicit in humanities work was the price of entry.

Humanities approaches would proceed from a number of very specific principles. The first of these is that interpretation is performative, not mechanistic—in other words, no text is self-identical; each instance or reading constructs a text; discourses create their objects; texts (in the broad sense of linguistic, visual, acoustic, filmic works) are not static objects but encoded provocations for reading.

The graphical tools that are used for statistical display depend, in the first instance, on quantitative data, information that can be parameterized so that it lends itself to display. Virtually no humanistic data lends itself to such parameterization (e.g., what year should a publication be dated to in the long history of its production and reception?), and it is in fact precisely in the impossibility of creating metrics appropriate to humanistic artifacts that the qualitative character of capta, that which is taken as interpretation rather than data, comes sharply into relief.

if the premises on which quantitative information might be abstracted from texts or corpora raise one set of issues, the use of graphical techniques from social and natural sciences raise others. Graphs and charts reify statistical information. They give it a look of certainty. Only a naive viewer, unskilled and untrained in matters of statistics or critical thought, would accept an information visualization at face value. But most humanists share with their social and natural science colleagues a willingness to accept the use of standard metrics and conventions without question in the production of these graphs

Probability is not the same as ambiguity or multivalent possibility within the field of humanistic inquiry. The task of calculating norms, medians, means, and averages will never be the same as the task of engaging with anomalies and taking their details as the basis of an argument. Statistics and pataphysics will never meet on the playing fields of shared understanding. They play different games, not just the same game with different rules.

Law, 2012

Law, J. (2012). Collateral realities. In F. Domínguez Rubio & P. Baert (Eds.), The Politics of Knowledge. London: Routledge.

John Law looks at reality as something that is performed, as opposed to something ready-made. In that perspective, no representation can be considered transparent – because that would conceal what the representation performs.

He argues here that material-semiotic practices enact reality the most powerfully when they leverage whatever lies beyond the limits of contestability, in particular the assumptions that are not represented but nevertheless accepted by participants. This realist trick, as he casually calls it, is the technique at the heart of common sense realism.

Practices enact realities … This means that if we want to understand how realities are done or to explore their politics, then we have to attend carefully to practices and ask how they work. … For my purposes, practices are detectable and somewhat ordered sets of material-semiotic relations.

[My interest] is to ask how these talking and meeting practices work to assemble a putative reality. But if we are to do this then we have to teach ourselves to see the work being done by the PowerPoints and the abstracts. We need to find ways of making this work visible. We need to resist the propensity to treat these texts as transparent, self-evident, or uninteresting windows on a pre-given world.

if we stick with the methodologists, then we know that they worry about technical adequacy. The assumption is that good techniques produce satisfactory representations of reality. What follows? One implication that I’ve already touched on is that techniques themselves become essentially uninteresting. This is because when they are working properly they are transparent. In this way of thinking they don’t distort realities, but merely transmit them. In short, good methods are like a window on reality. This means that unless something has gone wrong they can be ignored. As is clear, I have been arguing against this. No representation, I’ve been saying, is actually transparent.

The words (appear to) open a small window onto reality. At the same time (this is a part of the realist trick) the methods for making that window have been more or less erased.

Here is the argument. First attend to practices. Look to see what is being done. In particular, attend empirically to how it is being done: how the relations are being assembled and ordered to produce objects, subjects and appropriate locations. Second, wash away the assumption that there is a reality out there beyond practice that is independent, definite, singular, coherent, and prior to that practice. Ask, instead, how it is that such a world is done in practice, and how it manages to hold steady. Third, ask how this process works to delete the way in which this sense of a definite exterior world is being done, to wash away the practices and turn representations into windows on the world. Four, remember that wherever you look whether this is a meeting hall, a talk, a laboratory, or a survey, there is no escape from practice. It is practices all the way down, contested or otherwise. Five, look for the gaps, the aporias and the tensions between the practices and their realities – for if you go looking for differences you will discover them.

Here is the proposition: whatever which is not contested and, more particularly, whatever lies beyond the limits of contestability is that which operates most powerfully to do the real. And it is this, to be sure, that is the technique that lies at the heart of common sense realism. It is the enactment of collateral realities that turns what is being done in practice into what necessarily has to be.

Marres, 2012 a

Marres, N. (2012). The redistribution of methods: on intervention in digital social research, broadly conceived. The Sociological Review, 60, 139-165.

For Noortje Marres, tools mediate methods, and as such have the power to re-distribute them. This ambivalent ability can reinstate old power relations, but it can also enable new forms of critique and experimentation with new methods.

Both [the Issue Crawler and the Co-Word Machine] re-mediate existing social methods, and both, I argue, involve the attempt to render specific methodology critiques effective in the online realm, namely critiques of the authority effects implicit in citation analysis. As such, these methods offer ways for social research to intervene critically in digital social research, and more specifically, to endorse and actively pursue the re-distribution of social methods online.

Taking up digital online tools, sociologists are likely to enter into working relations with platforms, tool developers and analytic and visual devices which are operating in contexts and developed for purposes that are not necessarily those of sociology

The new network science namely favours a new set of techniques for data collection and analysis, which entail an unusual division of labour between research subjects, data collection devices, and analysts in social research. To put it somewhat crudely, the approach seeks to maximize the role of mathematical techniques, at the expense of research subjects. … the new network science reinstates a classic opposition of social research, that between subjective and objective data.

Marres, 2012 b

Marres, N. (2012). On some uses and abuses of topology in the social analysis of technology (or the problem with smart meters). Theory, Culture & Society, 29(4-5), 288-310. 

Noortje Marres does not write here specifically about tools, but about technology and methods. Yet she sketches a conceptual frame along the way: methods and ontologies can be built into material devices, and these can resurface, regardless of the user, as artefacts.

In the social studies of technology, topology has been mostly understood as a theoretical construct, as a conceptual language that can help social theory to render explicit the structure of socio-technical phenomena. However, at the current juncture, topology must also be understood as a device, as a way of structuring phenomena in practice, which is enabled (and disabled) by particular technologies. We must, then, attend more closely to how a topological imagination is facilitated by specific material apparatuses deployed across social life.

Online applications for data analysis and visualization, that is, enable dynamic, and arguably ‘topological’, renderings of controversy (November and Latour, 2010; Scharnhorst and Wouters, 2006). However, what makes matters especially complicated here is that these applications have built into them particular methods of analysis and visualization, on which social studies of technology has also relied in the past to analyse controversies. To speak of the deployment of topology as a device, in this case, is then to do more than suggest an analogy between a sociological concept and digital technologies. It is to highlight that certain methods of ‘topological’ analysis have become built into digital technologies in recent times.

The topological unfolding of a space-time of controversy is revealed to be partly an artefact of the devices used to render controversy visible and analysable. The topological organization of controversy, that is, is here accomplished experimentally, through the deployment of digital devices. And this, in turn, has implications for how we imagine the relation between social and technological change.

mapping controversies may be said to offer a way of being critical that does not require a transcendentalizing move. To develop such empirical forms of critique requires more serious work and reflection on the tools and methods of topological analysis and, in particular, on the kinds of ontologies that get built into the software applications on which we rely.

Rieder & Röhle, 2012

Rieder, B., & Röhle, T. (2012). Digital Methods: Five Challenges. In D. M. Berry (Ed.), Understanding Digital Humanities (pp. 67-84). New York: Palgrave Macmillan.

In this text, Bernhard Rieder and Theo Röhle focus on the tools that function as methods. For them, those tools promise nice things to the humanities, but attach certain truth-claims to their results. The mechanization they offer is attractive because it appears less subjective, and visualization is appealing because it efficiently reduces information; but both effects are rhetorical, obtained by concealing the process and the mediation, which is methodologically dangerous, if not plain wrong. Yet this does not invalidate the benefits of the tools. Understanding the tool and its methodology is crucial, and this also depends on the user.

These computational tools hold a lot of promise: they are able to process much more data than would ever be possible manually; they promise extended zoomability between micro and macro and the ability to reconcile breadth and depth of analysis; they can help to reveal patterns and structures which are impossible to discern with the naked eye; they are even probing into semantic relations and meaning.

What interests us here … are tools that explicitly function as methods: they process data systematically and associate certain “truth claims” with their results. Two types of software can be distinguished here.

The first consists of automated versions of existing manual methods … The central question here is how these methods are affected when they move into the digital realm and are implemented as software. … the change in technology may have important consequence for how these methods are used, how they evolve, and how they produce knowledge. …

The second type of tools, which can be broadly subsumed under the term “data exploration”, represents a more inductive tendency. While they do not pretend to verify or falsify hypotheses, they still try to generate knowledge about the data they analyse. By rendering certain aspects, properties, and relations visible, they offer us particular perspectives on the phenomena we are interested in. While their results may be visually impressive and intuitively convincing, the methodological and epistemological status of their output seems unclear at best. Nevertheless, it is these very tools that provoke the most enthusiastic reactions. What is rarely reflected by advocates of an “end of theory” (Chris Anderson) though, is that theory is already at work on the most basic level of methodology, i.e. when it comes to defining units of analysis, algorithms, and visualisation procedures.

Empirical methods that rely on strong formalisation epitomise [the scientific ideal of the natural sciences], especially if they are implemented in an automated way. Automatic collection and processing appears to remove the data one step further from the perils of human error and subjective judgement.

Visualisation has a long history as a rhetoric device in the sciences; it is one of the prime vehicles for reducing complexity and conveying a certain perspective on the material. The fact that a visualisation is not given, but always a specific projection of the data is often forgotten in the process, especially when visual competence is lacking.

visualisations are specific kinds of representations that involve specific kinds of reductions. But is it therefore feasible to treat them exclusively as a rhetoric device? The visualisations seem to carry some kind of valid proposition about the world, but how can their range be properly delineated, what are kosher ways to integrate them into a scientific argument?

without the full consciousness of what it means to mechanise methodology, we may find ourselves in a situation where large parts of knowledge production are delegated to software tools that we do no longer understand.

Drucker, 2011

Drucker, J. (2011). Humanities approaches to graphical display. Digital Humanities Quarterly, 5(1), 1-21.

This text is basically the same as a section of Johanna Drucker’s 2014 book Graphesis, quoted above. I have summarized the point already, but I will still quote it for the record.

This is the first paragraph:

As digital visualization tools have become more ubiquitous, humanists have adopted many applications such as GIS mapping, graphs, and charts for statistical display that were developed in other disciplines. But, I will argue, such graphical tools are a kind of intellectual Trojan horse, a vehicle through which assumptions about what constitutes information swarm with potent force. These assumptions are cloaked in a rhetoric taken wholesale from the techniques of the empirical sciences that conceals their epistemological biases under a guise of familiarity. So naturalized are the google maps and bar charts generated from spread sheets that they pass as unquestioned representations of “what is.” This is the hallmark of realist models of knowledge and needs to be subjected to a radical critique to return the humanistic tenets of constructedness and interpretation to the fore. Realist approaches depend above all upon an idea that phenomena are observer-independent and can be characterized as data. Data pass themselves off as mere descriptions of a priori conditions. Rendering observation (the act of creating a statistical, empirical, or subjective account or image) as if it were the same as the phenomena observed collapses the critical distance between the phenomenal world and its interpretation, undoing the basis of interpretation on which humanistic knowledge production is based. We know this. But we seem ready and eager to suspend critical judgment in a rush to visualization. At the very least, humanists beginning to play at the intersection of statistics and graphics ought to take a detour through the substantial discussions of the sociology of knowledge and its developed critique of realist models of data gathering. At best, we need to take on the challenge of developing graphical expressions rooted in and appropriate to interpretative activity.

These quotes are from a more detailed discussion that engages with semiotics.

At stake, as I have said before and in many contexts, is the authority of humanistic knowledge in a culture increasingly beset by the claims of quantitative approaches that operate on claims of certainty. … The digital humanities can no longer afford to take its tools and methods from disciplines whose fundamental epistemological assumptions are at odds with humanistic method.

the rendering of statistical information into graphical form gives it a simplicity and legibility that hides every aspect of the original interpretative framework on which the statistical data were constructed. The graphical force conceals what the statistician knows very well—that no “data” pre-exist their parameterization. Data are capta, taken not given, constructed as an interpretation of the phenomenal world, not inherent in it.

To expose the constructedness of data as capta a number of systematic changes have to be applied to the creation of graphical displays. That is the foundation and purpose of a humanistic approach to the qualitative display of graphical information.

take these basic elements of graphical display and rethink them according to humanistic principles:
  • In conventional statistical graphics, the scale divisions are equal units. In humanistic, interpretative, graphics, they are not.
  • In statistical graphics the coordinate lines are always continuous and straight. In humanistic, interpretative, graphics, they might have breaks, repetitions, and curves or dips. Interpretation is stochastic and probabilistic, not mechanistic, and its uncertainties require the same mathematical and computational models as other complex systems.
  • The scale figures and labels in statistical graphics need to be clear and legible in all cases, and all the more so in humanistic, interpretative, graphics since they will need to do quite a bit of work.

References on that matter:

  • Latour, 1986
  • Knorr Cetina, K., & Amann, K. (1990). Image Dissection in Natural Scientific Inquiry. Science, Technology, & Human Values, 15(3), 259-283.
  • Lynch, M., & Woolgar, S. (1988). Introduction: Sociological Orientations to Representational Practice in Science. Human Studies, 11, 99-116.
  • Anderson, M. (2007). Quantitative History. In W. Outhwaite & S. Turner (Eds.), The Sage Handbook of Social Science Methodology (pp. 246-263). London: Sage Publications.
  • Anderson, M. (2008). The Census, Audiences, and Publics. Social Science History, 32(1), 1-18.
  • Porter, T. (1995). Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton: Princeton University Press.
  • Jain, S. L. (2010). The Mortality Effect: Counting the Dead in the Cancer Trial. Public Culture, 22(1), 89-117.

Haraway, 1988

Haraway, D. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575-599.

Donna Haraway writes on visualization – if you can make it through the fermented dough of her language. The text is often cited for the idea of the god trick of seeing everything from nowhere. The trick is to conceal the mediation and make you believe that the dominant knowledge is not situated – it is objective because it does not have a body, while you have a body and are thus biased.

The eyes have been used to signify a perverse capacity … to distance the knowing subject from everybody and everything in the interests of unfettered power. The instruments of visualization in multinationalist, postmodernist culture have compounded these meanings of disembodiment.

Vision in this technological feast becomes unregulated gluttony; all seems not just mythically about the god trick of seeing everything from nowhere, but to have put the myth into ordinary practice. And like the god trick, this eye fucks the world to make techno-monsters.

this ideology of direct, devouring, generative, and unrestricted vision, whose technological mediations are simultaneously celebrated and presented as utterly transparent

we need to reclaim that sense to find our way through all the visualizing tricks and powers of modern sciences and technologies that have transformed the objectivity debates.

There is no unmediated photograph or passive camera obscura in scientific accounts of bodies and machines; there are only highly specific visual possibilities, each with a wonderfully detailed, active, partial way of organizing worlds. All these pictures of the world should not be allegories of infinite mobility and interchangeability but of elaborate specificity and difference and the loving care people might take to learn how to see faithfully from another’s point of view, even when the other is our own machine.

I wish to translate the ideological dimensions of “facticity” and “the organic” into a cumbersome entity called a “material-semiotic actor.” This unwieldy term is intended to portray the object of knowledge as an active, meaning-generating part of apparatus of bodily production, without ever implying the immediate presence of such objects or, what is the same thing, their final or unique determination of what can count as objective knowledge at a particular historical juncture.

Latour, 1986

Latour, B. (1986). Visualisation and Cognition: Drawing Things Together. Knowledge and Society: Studies in the Sociology of Culture Past and Present, 6, 1-40.

For Bruno Latour, the materiality of visualization may matter more than its semiotics. The fact that it allows optical consistency, and that it can be moved and recombined, plays a crucial role in the power of science. He also argues that, despite the subjectivity of interpretation, disagreeing with the main interpretation becomes increasingly costly as visualizations accumulate and succeed in mobilizing many powerful actors.

The essential characteristics of inscriptions cannot be defined in terms of visualization, print, and writing. In other words, it is not perception which is at stake in this problem of visualization and cognition. New inscriptions, and new ways of perceiving them, are the results of something deeper. … In sum, you have to invent objects which have the properties of being mobile but also immutable, presentable, readable and combinable with one another.

The shift from the other senses to vision is a consequence of the agonistic situation. You present absent things. No one can smell or hear or touch Sakhalin island, but you can look at the map and determine at which bearing you will see the land when you send the next fleet. The speakers are talking to one another, feeling, hearing and touching each other, but they are now talking with many absent things presented all at once. This presence/absence is possible through the two-way connection established by these many contrivances —perspective, projection, map, log book, etc.— that allow translation without corruption.

The main quality of the new space [of the map] is not to be “objective” as a naïve definition of realism often claims, but rather to have optical consistency. This consistency entails the “art of describing” everything and the possibility of going from one type of visual trace to another.

There is no detectable difference between natural and social science, as far as the obsession for graphism is concerned. If scientists were looking at nature, at economies, at stars, at organs, they would not see anything. … In the debates around perception, what is always forgotten is this simple drift from watching confusing three-dimensional objects, to inspecting two-dimensional images which have been made less confusing.

it is not the inscription by itself that should carry the burden of explaining the power of science; it is the inscription as the fine edge and the final stage of a whole process of mobilization, that modifies the scale of the rhetoric. Without the displacement, the inscription is worthless; without the inscription the displacement is wasted. … So, the phenomenon we are tackling is not inscription per se, but the cascade of ever simplified inscriptions that allow harder facts to be produced at greater cost.

It is precisely because the dissenter can always escape and try out another interpretation, that so much energy and time is devoted by scientists to corner him and surround him with ever more dramatic visual effects. Although in principle any interpretation can be opposed to any text and image, in practice this is far from being the case; the cost of dissenting increases with each new collection, each new labeling, each new redrawing. … Thus, one more inscription, one more trick to enhance contrast, one simple device to decrease background, one coloring procedure, might be enough, all things being equal, to swing the balance of power and turn an incredible statement into a credible one which would then be passed along without further modification.

Seeking the noema of big data visualization

30-minute read

I do not even know what I am searching for, which makes it hard to explain. But at the same time, that is why it is worth it.

These days, I have been stuck writing a long paper. It has been submitted for publication, and the anonymity of peer review conflicts with writing a blog post – for now. As an agile developer, I feel bad, as I overlooked the mantra: release early, release often. In other words, if I am not ashamed when I publish, I have published too late.

I am not ready yet, so let’s publish something. This post is not my conclusion, but the beginning of my journey.

I started reflecting on the noema of big data visualization (I will characterize those terms), and I found myself lost in a forest of concepts that I do not master. Having just entered these woods, I was barely capable of retracing my steps to the edge. Here, I am just consolidating my early steps. I can’t tell whether the path leads somewhere, and once (if) I get somewhere, that path will not be optimal. It’s exploratory.

I want to salvage something

Big data and its visualization are criticized for different reasons, most of which I agree with. I made peace with the disappointing rhetoric of big data in the industry, both as an engineer and as a researcher. I have never believed in the myth. Yet I believe that big data visualization works, that it does something for you. I don’t think it is a scam. A wreckage, maybe, but from which there is something to salvage.

I cannot write down that which I want to salvage. I do not have the words, and in fact, I do not know what it is. I undertook this exploration to find an answer.

My expertise in data visualization gives me the tools to unpack and criticize the big data viz rhetoric. For instance, I do not make network visualization invisible as a mediation; I document the multiple layers involved; I argue against misuses; and I criticize the distorted reasons why people rely on it.

Anecdote. For a PhD course, I write a piece titled Big data visualization beyond persuasion, arguing that “big data visualization builds its own abusable regime of persuasion.” As my draft is discussed in the context of the course, I realize that some participants understand “regime of persuasion” as a bad thing per se, as a myth generating fake science. For me, however, persuasion can be abused but can also be useful; the success of big data is also due to legit persuasive powers. They ask: who persuades? and of what? Awkwardly, I have no answer. I only realize that I am trying to salvage something despite my criticism. Let me face that.

A distinctive material-semiotic feature

When Ruppert and Scheel observe big data through the lens of the politics of method, they find material-semiotic practices involving visualization. I wrote about their paper in a previous post.

“To be clear, our point is that discursive struggles often work together with digital devices such that the politics of method cannot be reduced to language games.”

Evelyn Ruppert and Stephan Scheel in The Politics of Method: Taming the New, Making Data Official

I suspect that the detractors of big data are opposing, through it, the world view of the natural sciences. Such a strategy is doomed to fail. Resisting the epistemology of physics modeling is one thing; pretending it does not work is another. Having a foot in both worlds, I have the feeling that big data visualization is effective despite the well-identified myth. Obviously, big data cannot be at the same time the fuel of surveillance capitalism and the new snake oil: either it performs, or it does not. Denouncing big data visualization as the harmless and replaceable vessel of a convincing myth does not help, as the substance is actually active, even radioactive. Big data visualization is not the symptom, but the agent of a problematic power relation.

This tells me what to search for: a material-semiotic property of big data visualization that grounds both its effectiveness and its specificity.

The question is not to tell whether big data visualization shows real things, or imaginary things. As a mediation, it translates and distorts, so one can argue either its fidelity or its biases. Nobody is fooled by this duality. The question is to determine why one does, or does not, hold the visualization for true.

I seek a piece of the sense-making mechanic. I postulate a single trait that gets interpreted differently depending on your sensibility, but yet characterizes big data visualization for everyone.

Barthes’ noema

I draw inspiration from the French semiotician Roland Barthes. In his book Camera Lucida he inquires into the essence of photography, from a phenomenological and personal standpoint, reflecting on images of his recently deceased mother. The book can also be read as a testament, as he died shortly afterwards, following a car accident.

Why do we hold for true what we know to be an illusion? Anyone can relate to Barthes’ feelings as he looks at the image of his deceased mother. Barthes articulates how the sincerity of his affect can derive from a treacherous image. Despite his mother posing for the photographer; despite the artificial setting; despite the photographer’s touch; the photograph is the trace of a moment that actually happened. Despite the fabrication, it has been. For Barthes, this is the essence of a photographic image. He calls it the photographic noema: that-has-been, “ça a été.”

The noema is a phenomenological concept introduced by Husserl, but Barthes uses it in a pretty liberal way. As there is a long-lasting controversy on its exact meaning, and as it does not matter much here, I just stick to Barthes’ loose usage.

The photographic noema is a powerful pivot to deconstruct a photo. It does not work in isolation, but altogether with other concepts such as the studium, the spectrum, and the punctum. I think it is better to skip this theory here, but it is a very rewarding (and famous) piece of semiotics. It suffices to understand how the photographic noema allows unpacking why we hold the illusion for true. Here, McLuhan is a legit shortcut: if the medium is the message, then the message of photography is that-has-been. Any photograph implies “that has been”, and its meaning derives from there.

I must warn against a possible misunderstanding. For me, the photographic noema does not require that-has-been to be true, or that one agrees with it. Barthes just tells us that the noema is an inevitable part of the meaning; thus I assume that the truth of that meaning can of course be assessed independently. In fact, it is precisely against the noema that it is assessed. A drawing of you hugging the pope is not deceptive, but a photo is, because the drawing does not tell that-has-been. It might not be what Barthes meant, but since photographic manipulation has been commoditized, the situation has changed.

The noema does two interesting things for Barthes. Firstly, it gives a single point of origin for multiple interpretations. The same image may mean different things to different people, but these meanings can still be unpacked starting from the same underlying assumption – some may believe you hugged the pope and others not. Secondly, the noema is specific to photography. It defines its particular semiotic character.

Ordeals

I can imagine different propositions for the noema of big data visualization. I need to evaluate them. My next step is to formalize ordeals to help me in that task.

(1) The noema must be specific to big data visualization.
I characterize big data by Kitchin’s “3 Vs” – Volume, Velocity and Variety – to which I add relational data (networks). It must in particular mark the difference with two other forms of visualization: the map, and the chart (e.g. a bar chart). Maps or bars are not forbidden, but the “big data” qualifier requires at least one of those traits: outstanding volume, dynamics, heterogeneity, and/or links.

In this post I borrow a practical case from Ruppert and Scheel (see the figure below). It represents the population density of Ljubljana, as derived from mobile data. It qualifies by its volume and dynamics.

(2) The noema allows formulating both the critique and the defense of big data visualization.
It must be compatible with the idea of a myth of big data, but also account for the reasons why it can be held for true, and of course acknowledge big data visualization as a mediation: conveying information but with a distortion, constructed but not independent of what it represents, etc.

(3) The noema is rooted in material-semiotic properties.
It is important that the noema is not just the product of the discourses surrounding big data. Sure, these discourses matter as they impact our interpretation; big data visualization is culturally situated. But the noema is nevertheless the key to unpacking how the interpretation of big data visualization derives from its material-semiotic features.

(bonus) A minimal statement
It is clear by now that I am not philosophizing freely for its own sake. I am crafting a concept. There is a design to it. The noema empowers you this way: you invoke its statement as a fulcrum, a pivotal landmark for generating a discussion on how an image makes sense. It is a key to unpacking the meaning of the image. As a key, it must be transportable, lightweight. It works better if you can remember it easily and state it briefly. I also want my noema to be minimal.

Possible noema and their problems

a. that-has-been

When I look at the picture above, the high number of data points makes an important difference from the usual bar chart: the data have been reduced less. The visualization is closer to the raw data; it does not make any effort to convey a curated message to a broad audience. Of course there is still a reduction, an aggregation (e.g. the grid on which the bars are based), but the trade-off between realism and readability leans towards the former (in the figures below, it leans to the left). Then, why not just reuse the photographic “that-has-been”?

Understanding Comics, Scott McCloud, 1993
The “Pédofil” of Boa Vista, Bruno Latour, 1995

Big data visualization shares something with photography, in the sense that it is a relatively raw record. However, one does not recognize what they represent in the same way. When I look at a photo of my mother, I see her; I recognize her effortlessly. When I look at the visualization of population densities in Ljubljana, I recognize nothing, not even Ljubljana. I must make an effort to understand what I am looking at (read the caption, etc.). I do not know what that is right away.

The that of that-has-been is much less problematic in the case of photography. The mechanical process of fixing the scene on a substrate is quite close to how our eyes function; close enough that it makes sense to understand it as if I was there. The that of that-has-been is both the that of the image and the that-as-if-I-was-there. But it does not work with big data visualization, because the representation is nothing like the represented phenomenon.

b. that-has-been-manipulated

This is what Bachimont proposes as the noema of the digital (in French). I do not reproduce his point here. Big data visualization is digital, so this noema applies to it; but it is not specific enough.

c. something-is-brewing

My next try is to emphasize the rawness of the record without relying on a that. This does not seem so problematic, as in my experience it is quite difficult to put into words what a big data visualization represents – and I mean it in a good sense, not as a failure, since ambiguity is a feature of the world. To me, it seems fair to assume that one does not really see what is going on. Sure, in the case of Ljubljana, there is something to see – as stated in the caption. However (1) there is potentially more to see, and (2) if you curate the data differently, even in the absence of a main message, it remains a big data visualization. It does not require that we understand what is going on – but something is going on. Is that the noema?

I like the idea that big data visualization is visualized rawness. For me there is a rawness to the data, but the visualization itself, of course, is anything but raw. Actual rawness would be something like a series of 0s and 1s in a computer. Like photography, there is a mechanical dimension to the reproduction of the represented phenomenon. But the rawness we see is fabricated, because our cognition is not compatible with the order of that materiality. Even represented as a spreadsheet, which already involves quite significant processing, it remains too raw. The rawness we see is staged rawness. Once again, I do not mean it in a negative way; the point of visualization is precisely to provide a good staging. However, there is a paradox to this well-cooked rawness. I want to capture that contrast.

I like the something-is-brewing expression because it suggests “under the surface”. In French I would pick “il se trame quelque chose”, an idiomatic expression where “trame” also means “pattern”. I am considering here the implicit message that big data visualization makes the invisible visible, albeit in an indefinite way.

However, this candidate has two issues.

Firstly, something-is-brewing applies a fortiori to other data visualizations, where we often know what is brewing. A bar chart may well tell the same. It is not specific to big data visualization.

Secondly, I am not convinced that the “making the invisible visible” angle is rooted in material-semiotic features. To me, it sounds more like a promise that the promoters of big data attach to it. What is visible or invisible depends too much on the audience. Something might be visible to some, and not to others. For that reason, it is less specific to the material-semiotic features of big data visualization.

d. that-is-how-it-is

I now explore more directly the idea of rawness. This time, instead of dealing with the problematic that by getting rid of it, I split it into two: we have a that, and an it. One is the representation, and the other is the represented phenomenon.

This proposition addresses most issues, but not with equal success.

Is it specific to big data visualization? It must not work for other things like photos, maps and charts.

For a photo, that-is-how-it-is sounds technically true, but beside the point. Indeed the photograph assumes the fidelity of the mechanical reproduction, but from there, it poses the question of the realism of the image itself. Obviously staged photographs still have been, even though the situations they depict are not realistic (e.g. family portraits). Conversely, that-is-how-it-is asserts a fidelity to the phenomenon: realism. In the case of the photograph, depending on what that is, the stage or everyday life, the statement is either obvious (as in “Mom’s pose in the studio was how it is depicted”) or false (“In everyday life, Mom was how it is depicted”).

That-is-how-it-is does not apply well to traditional charts either. These visualizations are visibly processed; the reduction is apparent. However, once again it is technically true, in some sense, that most visualizations aim at being faithful to what they represent.

The proposition might not do such a good job of pointing to rawness, as opposed to, more generally, trying to represent something.

That-is-how-it-is also works for maps. There is an important overlap between big data visualization and maps, and I wonder how relevant it is to mark the difference. But I would still mark it if I could.

All in all, this proposition is moderately specific to big data visualization (ordeal 1), but it could be improved by finding something closer to the idea of rawness.

Ordeal 2 is where the proposition shines. Indeed, it allows formulating both the critique and the defense of big data visualization. With this noema, the image implies that that is how it is. For some, it is a virtue: big data visualization demonstrates the fidelity of the data. For others, it is a vice: the image evokes a self-evidence that does not exist; it hides the mediation. Both views can be expressed using the noema.

Big data visualization is about manifesting visually the volume, velocity, and/or variety of the data, and/or the presence of links. Each of those four traits prevents a strong reduction of the visualization. Reducing them before visualizing them would produce a classic chart. By definition, what I characterize as a big data visualization is irreducible.

This non-reduction of the data grounds the that-is-how-it-is effect. From the perspective of the reader of the image, the proliferation of signs is overwhelming. The image must then mean something other than conveying a message, since a big data visualization cannot be comprehended that way. The proliferated signs are not arbitrary; they are always ordered by a principle, a key to reading the image – regardless of how hard the decoding might actually be. I think that this ordered proliferation is at the source of the meaning of the image. It may showcase an internal property of the data: volume, velocity, variety, or links. It may be an invitation to explore the data visually. It may just showcase the ability to obtain and organize the data. In any case, the organized proliferation of signs refers to an internal structuration of the data.

I like that this possible noema is rooted in material-semiotic properties of big data visualization. However, it does not specifically convey the staged rawness I am trying to capture; it is not specific enough.

e. there-is-an-order-to-that-chaos

I think I overlooked the fact that there is always a pattern, a shape to read. The signs in a big data visualization tell three things:

  1. The proliferation of signs tells “you cannot comprehend that”
  2. The internal order of signs tells “that is computable”
  3. The pattern tells “that has a meaning” (but which one?)

This triplet captures a tension, but also leaves aside the question of fidelity. It makes sense that big data visualization does not, by itself, tell “I am real”. I see two justifications: (1) we do not recognize a “real thing”, and (2) the “3V” criteria do not require the data to refer to reality. For instance the picture below qualifies as a big data visualization, even though it plots a simulation. In a different context, one could believe it represents empirical measurements.

https://www.youtube.com/watch?v=ncRj2uyAeBY

Big data visualizations do not necessarily tell “that has been”, after all. But the visual patterns emerging from the mass of signs still tell that “something is brewing”. The patterns create a kind of reality effect, in the sense that they manifest something – but what?

“There is an order to that chaos” evokes the contrast between the proliferation of signs and the patterns. It also deals with a lot of our constraints: it is specific to big data visualizations, as it does not apply to photography, to classic charts, or to (most) maps; it allows formulating both the critique and the defense of big data, and it is rooted in material-semiotic properties; I only regret that it does not directly refer to the self-evidence effect.

This candidate makes the best noema so far. Let me wrap it up.

A semiotic model for big data visualization

I made a number of points, and they came up where it made sense for the exploration. I now state the general argument, making the same points in a more logical order.

I also take the opportunity to get rid of the clumsy notion of “big data visualization”; I can now characterize these visualizations more directly.

Note that this is a tentative argument. The assertive style here purely serves clarity. After posting this piece, I will engage more with the literature on the topic, and it might well change my argument.


I refer to a specific type of data visualization, commonly associated with big data, and sharing the following characteristics:

  • A proliferation of signs. The visualization features a high number of signs, usually associated with data points. This includes dynamic visualizations where the signs are distributed over time.
  • An internal order. As the image or video pictures data, there are presupposed rules or constraints to these elements. Those may be stated in a key or caption, or left implicit but visible in the image.
  • Emergent patterns. Together, the signs display shapes not prescribed by the internal order. This emergent order may or may not be recognizable.

When one looks at such an image, one cannot deny that it is not completely disorganized, that its apparent chaos is partially organized. As this specific constraint only exists for this kind of visualization, one should consider it its essence, its “noema”: there is an order to that chaos.

What I see in such an image is all at once disorganized and partially structured. A visual signal stands out from the noise. I may not know what it means, but I recognize it as a sign, as an observation deserving an explanation, as a clue in a potential investigation.

The noema there-is-an-order-to-that-chaos may be met with indifference, as a self-evident property. In that case the order is received as a direct emanation of the data. It leads to matters of concern such as: what is that order? Can I delineate it? Describe it? Reproduce it? Understand it?

But one can also be aware of the noema. In such an image the presence of a pattern is never metaphorical, but that does not mean that it is true. Patterns can be illusions: artifacts of the data, or of the visualization. To certain readers, these data visualizations evoke how problematic visual patterns can be; these readers might not easily agree to hold the patterns for true.

The noema derives from the interplay between two material-semiotic properties: the proliferation of signs, and the presence of patterns.

The proliferation of signs has multiple effects. Firstly, it overwhelms the reader. The eye does not know where to land; the image requires an effort. Secondly, it shows that these signs have not been placed manually to convey a specific message. Assuming that the image explicitly visualizes data, it tells us that the data have not been reduced (or not much). Thirdly, assuming that the image is intentional (i.e. not a random screenshot illustrating a process), it tells us that it is open to interpretation. Indeed, if there were a clear message to convey, the data would have been reduced further to remove unnecessary visual noise. The proliferation of signs tells “chaos”, “raw data” and “potential for interpretation”.

The presence of patterns, often highlighted in the title or caption, tells “order”. Of course, we must exclude the structures trivially explained by the construction of the image. Importantly, the meaning and the shape of the patterns may well be undefined, provided that one can agree on their presence. For instance we may agree on the presence of visual clusters, but disagree on where they are, or on how many there are. It does not matter that the “order” is unspecified, as long as its presence is assured.

These material-semiotic features of “patterns in a proliferation of signs” specifically imply that there is an order to that chaos.

Examples

Here are a few examples of such visualizations. The proliferation of signs is always obvious, but the patterns may not be. I will make at least some patterns explicit for each case, but I will not explain them in detail.

Trajectories. It plots the trajectories of players during a sports game, as seen from above.

Patterns: Symmetry; it shows that both teams play evenly on both sides. Accumulation of trajectories along certain lines and curves; it indicates that players circulate more in certain places – can you guess the sport?

Dos Juegos, Laura Castro

An interactive, dynamic 3D bar-chart map of the Manhattan population. Try the live version.

Patterns: Certain bars spike at certain places, at certain dates; it shows that some areas are more crowded. At certain hours, all the bars are higher; it means that many people leave Manhattan at night.

Manhattan Population Explorer, Justin Fung

A network map. The dots and labels represent websites, the lines represent hyperlinks. By design, connected websites tend to be closer.

Patterns: The dots spread more horizontally; it means we have two connected clusters with more internal links, two communities. There are two colors; it means that each cluster has a different position on climate change (acknowledging or denying its human origin).

Climate Change on the Web, Sciences Po médialab

A map with many dots. One dot for each resident, colored by race, in the city of Minneapolis.

Patterns: More dots in certain areas; it indicates denser neighborhoods. White areas; nobody lives there (rivers). Colors cluster in certain areas; it shows racial communities.

Racial dot map – University of Virginia

Bonus: misleading patterns. If you plot usage data on a map, it will often follow the distribution of population. You see patterns, but they are not related to what one may think. It has become a classic joke in the community of data visualization; see below.

Heatmap
https://xkcd.com/1138/, Randall Munroe

Did I salvage something?

At least I found what I was trying to salvage. That is a beginning. It matters to me that the pattern is real. Sure, sometimes it is an illusion, but illusions are real – in the phenomenological sense.

Acknowledging the reality of the pattern helps me separate interesting practical questions from pointless ones (for me).

We should always discuss:

  • whether the pattern is an artifact of the data or of the visualization;
  • where the pattern is, and whether there is one at all (when our readings differ);
  • what the pattern means;
  • which different purposes the visualization serves.

These questions seem pointless to me:

  • Is the pattern the product of a discourse on big data?
    (if you see it, it is a material inscription somewhere)
  • If people disagree on a pattern, is it real?
    (Aka the “post-truth” argument. Same answer.)
  • Why do some people believe these visualizations to be scientific?
    (The pattern is real, so its interpretation is as legit as any falsifiable hypothesis in a Popperian scientific method)

For the record, I do not deny that some people use these kinds of visualizations for other purposes, leveraging the same material-semiotic properties. For instance the proliferation of signs may suggest technical mastery, fidelity to the data, accurate measurements. And because it is cognitively overwhelming, it suggests the necessity of relying on sophisticated algorithmic approaches to extract the hidden value hinted at by the patterns. I think these statements are, in fact, poorly supported by such data visualizations.

Improving the critique of big data

Let me circle back to my starting point. With these temporary conceptual tools, I try to fix something that bothers me in Ruppert and Scheel’s analysis of big data in The Politics of Method: Taming the New, Making Data Official.

I cannot say a thing about their fieldwork, beyond the fact that it does not conflict with my own observations. I think I am well aligned with their points overall: I agree with their analysis of the rhetoric of big data, and of the politics of method per se. But when it comes to data visualization specifically, I feel frustrated by their analysis. I would like to improve it.

The first demonstration of their paper is titled “Enacting Mobile Populations as Self‐evident Realities.” It features a dynamic visualization of the Estonian population over time, obtained from mobile data. I reproduce their screenshot below.

The visualization has many moving dots, which I assume correspond to data points representing mobile users. The patterns that I can identify are clusters around cities and along circulation axes. To me, the data seem coherent with what one would expect – which is never a given. But I do not see anything surprising. Full disclosure: I know nothing of Estonia.

I will now cite a section of their paper, and comment within the text. I do not skip passages – the quotes are contiguous.

The setting observed by the authors is the company MOBDATA, which owns and promotes the data, showcasing its product to Estonian officials.

Despite its simplicity, the map succeeds in generating astonishment, as a MOBDATA staff member responsible for sales in Estonia stresses: “Oh yes, people are impressed… it’s catchy, and it’s nice… people like to see things like this.”

I do not know exactly why the authors feature that quote here. I want to mention that I see no reason to interpret it as if MOBDATA acknowledged that they would rather do something catchy than something scientific. Those traits are not mutually exclusive. On a factual level, their visualization is, indeed, appealing.

This astonishment is caused by what the moving dots enact and make intelligible: commuting patterns in Estonia.

I agree on the pattern and its interpretation.

In contrast to other statistical accounts of mobility, such as static charts and tables, MPD [(mobile positioning data)] seems to speak for itself precisely because it moves.

I do not fully understand this notion of data speaking for itself – I will rely on the rest of the text instead. Surely the signs do tell things; but I do not see how the movement, specifically, implies a special way of meaning.

I do think, however, that the dots have a spatial and temporal resolution that tells something. This is what I labelled earlier “proliferation of signs”, although the number of data points is more obvious over space (we see many dots) than over time (the same dots just move). As these points cannot have been placed manually, they must come from somewhere else – tracking mobile devices. The proliferation also suggests that there might be more to discover, but that remains to be proved.

There is no reason to doubt that the (unsurprising) patterns we see reflect a structure within the data. The data have captured the commuting patterns in Estonia. The visualization does not mean it “for itself precisely because it moves”, it does not mean it in a special, suspicious way. It means it the normal way, the way we are supposed to read a visualization. I see no reason to be suspicious about the conclusion that the data reflect population movements.

The moving red dots become not only a vehicle for the data, but first and foremost for its claimed self‐evidence.

I disagree with this statement. The moving red dots are not “first and foremost” suggesting self-evidence, because the way they suggest it is precisely grounded in how they represent the data.

The dots themselves are nothing without the patterns. Obviously, randomly moving dots would not produce the same effect. The patterns establish the link with the data, the confidence that population movements have been captured.

This confidence must not be confused with self-evidence. The visual patterns convincingly refer to the commuting patterns. I refuse to downplay the semiotic strength of this visualization on the grounds that it also serves a marketing discourse.

The Treachery of Images, René Magritte

Magritte famously painted a pipe with the caption: “this is not a pipe”. He called the painting The Treachery of Images. It is funny precisely because in everyday life one can point at the image and say: “this is a pipe”. Conflating the image and what it represents is not a scandal. But indeed, it matters that we do not completely forget the mediation, that we remain capable of retrieving it if necessary.

The red moving dots of the Estonian mobility data are in the same situation. The pattern reasonably represents commuting – the authors acknowledge it themselves. For me, it is then rational to conflate the visual pattern with the phenomenal pattern. This suspension of the mediation does not have to be definitive, as we may agree but remain critical. But then, indeed, the conflation looks like self-evidence. And surely, that presents the danger of forgetting the mediation.

Now, one does not have to be convinced: the picture may not resemble a pipe. In that case, self-evidence is not possible. But it does not mean that the picture is “first and foremost” a vessel for “claimed self-evidence”. Here self-evidence is not the fruit of malice, but the consequence of people trusting the image for legit although debatable reasons.

The red dots moving along Estonia’s main transport routes suggest that they correspond to the commuters they are meant to represent.

The authors acknowledge the pattern of correspondence.

Through this “realist trick” (Law 2012) mobility is enacted as a reality that exists independently of the methods that are used to describe it. There appears to be a seamless correspondence between the visualization (the moving dots) and the reality (commuting patterns in Estonia) it represents and renders “the phenomenal world (as if it) were self‐evident and the apprehension of it a mere mechanical task” (Drucker 2011, 2).

I contest that the correspondence is “seamless”, and that it only “appears” to be real. For me the correspondence is debatable, but legit. I see no reason to consider it as an illusion. The pattern is real.

It is precisely because the visual pattern corresponds to the phenomenal world that the conflation is possible, that it can become self-evident. The correspondence is not a trick.

The trick is to forget the mediation. The trick directly gets its strength from the correspondence. The more grounded the correspondence, the more convincing the trick.

In this way MPD is constituted as the perfect method for tracing the movements and locations of increasingly mobile populations, a method that offers an unrestricted vision from above, a vision that allows, in tradition of the “god trick” described by Donna Haraway (1988, 581), to see “everything from nowhere.”

The “god trick” is to make you forget that god’s plunging viewpoint “from nowhere” is actually situated somewhere. The trick is powerful precisely because the view from above is efficient.

In this case the notion of “from above” is mostly metaphorical, and can be better understood as “from nowhere”. It does not matter much that the map is seen as if from a satellite. It matters more that the data seem omniscient, tracking each person with an intrusive but invisible accuracy. The high resolution of the data points, typical of big data, is where the viewpoint is godlike.

The trick is to make you believe that this kind of knowledge is disembodied. But at the same time, these data sets are living evidence of where the big data’s bodies lie.

It bothers me that Ruppert and Scheel’s excellent piece tries to establish that big data visualizations falsely claim self-evidence, as if unveiling it could strike a blow against them. I believe on the contrary that self-evidence is a natural by-product of the very real and legit convincing powers of big data. We should not deny them, for two reasons. Firstly, because big data can do something for us, and we should seize the opportunity – but we must impose our conditions! Secondly, because any critique that mistakes big data’s force for a weakness is doomed to miss its blow.

Making complex networks interpretable with a metric

15 min read

A story within a story: this is the only way I can explain the problem of visualizing complex networks.

The bigger story investigates an important question: what happens when we try to know something that we cannot know? The mere existence of this question is already capable of causing havoc. It assumes that we cannot know everything, that the horizon of Science is unattainable. Blasphemous to some, self-evident to others, the idea is, in any case, old. Leibniz’s dream has been put to death, and the killing caused a riot in Maths Kingdom. Did puny humans learn the lesson? I’ll jump to a more interesting question: when we try to know the unknowable, what happens instead? Does the cake of knowledge refuse to enter the mouth of our cognition? Do we choke on it, incapable of chewing? Do we just eat a chunk and declare that it was enough? Do we even see this infinitely big cake? I suspect that in many situations, the feeling that we understand something has more to do with our familiarity with it, and the confidence that it will not cause unreasonable trouble.

I believe that we are delusional about the unknowable, and it makes us take the wrong courses of action. This issue is not purely epistemic: it is social, cultural, political, and psychological. It is a question of culture, practice, and materiality. I would like to put this hypothesis to the test, but such a thing is like climbing a mountain. I must take smaller steps. This is where complexity, as a topic, gets interesting. Complexity is a sophisticated concept in many ways, but it is also a simple notion about the failure of our powers to know. What do we do in front of an epistemic wall?

The smaller story is about visualizing complex networks. Visualizing non-complex networks is not a big deal. It is about following the connections. The semiotics of that affair are generic: respecting symmetries, limiting clutter by having fewer lines cross each other, preventing overlap between elements… Think of it as laying out a diagram nicely. But complex networks are something else. They are too big to digest at once. There are too many connections to be able to follow one individually.

Visualizing a complex network generally relies on a placement algorithm, capable of arranging the nodes so that their positions tell us something about the topology. This way we can analyze the structure indirectly, by looking at where the nodes are. For instance, if they are packed in a certain area, it means that we have a cluster – in the topological sense.

For non-obvious reasons that would take time to formulate entirely, we do not really know what we see in networks. The most common family of layout algorithms, the “force-directed placement algorithms” such as Force Atlas 2, produces a kind of representation that is poorly understood. The computer scientists who create and evaluate these algorithms do not explain what they do, only how they work. The people who use these algorithms do not have to understand them either. Using these algorithms is a self-legitimized cultural practice; at this point it is accepted well enough that it has become a tacit norm. Do not get me wrong: these algorithms produce highly usable results; we just ignore why. These excellent operational qualities explain why they remain in use, despite the constant critique that they are problematic. Unfortunately, the critics do not seem to understand these algorithms either.

I assume here that these layouts capture something of the topology, but we do not know exactly why. Some people believe that force-driven layouts do not tell us more than, say, a good clustering algorithm. I believe that they are wrong, but I have no evidence, and neither do they. Some people assume that what these algorithms capture has been studied in depth; it has not. For instance, the rationale provided by the gold standard, the LinLog algorithm, is flawed – I wrote about that here. I take this to be an out-of-fashion, but still open, question.

As there are two stories, my interest is double. On a practical level, I am on a quest to determine what we see in networks. This would help people make sense of their network maps, at least. But on a more general level, I am interested in the narratives that all sorts of people come up with to rationalize their networks. The computer scientists who publish algo papers. The engineers who implement these algorithms into code packages. The different fields that use these algorithms in practice. And even popular culture. From a techno-anthropological standpoint, it allows accounting for the black-boxing of our knowledge apparatus. Complex network visualization is a good case to study our cognitive, psychological, and social reaction to the unknowable.

One of the difficulties I face in my inquiries is simply the lack of source material. I have developed algorithms myself, so I have a sense of the issues faced in the process. But this process leaves no traces, as such traces often conflict with the narrative of the final paper, which tries to black-box as much as possible. Developing an algorithm is complicated enough that making the additional effort to take readable notes is problematic. I did not document my own process either. I regret it, but I have learned the lesson.

I developed a new metric

This week I developed a metric for reading networks – Yay! It started as an attempt to find a quantitative answer to a simple question. It required some visualization. This gave me an idea that I decided to test. It provided good results, so I decided to evaluate it more systematically. It turned out it worked very well, so I black-boxed it as a metric to evaluate the quality of a layout in practice. I call it connected-closeness.

The only tool I used for this process was Observable, a Javascript-based notebook platform. Think of it as Jupyter notebooks, but online – hosted on their website and executed in your browser. This allowed me to intertwine text, visualization, computations and interactions.

I started to write for a broad public, but then I realized that I was on my way to developing an actual metric, so I switched to a more technical writing style, focusing on documenting my process. At the end of the day, these notebooks contain a lot of information that is only useful to someone who wants to study the process in depth. But I will summarize it here.

My process consists of 8 notebooks (so far) and you can find the whole series there. It starts with small things and ends up with a self-contained, reusable implementation. You could read it all, in order, to get the complete story of the process.

The whole process is one of the two stories I can tell. The other story is the black-boxed one, the kind of narrative expected in a computer science paper: this is a new metric, it is better than all existing alternatives, here is evidence of that, now give me magic academic coins. If you are still reading this, you are interested in the process; I will expose it as briefly as I can.

What do force-driven layouts accomplish?

We know how these algorithms work: all nodes repel each other, and connected nodes also attract each other. The position of each node depends on the positions of all the other nodes. The algorithm applies the forces step by step, all nodes moving at each step. At some point, it converges to a state of (approximate) equilibrium. The final state depends on the initial positions, which are set at random; this is why the algorithm is called “non-deterministic”. From one run to another, the final state can be better or worse.
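To make this concrete, here is a minimal sketch of one iteration of such a layout, in plain JavaScript. The function name, the constants and the naive force model are mine, for illustration; actual algorithms like Force Atlas 2 refine the forces considerably.

function layoutStep(nodes, edges, k_repulsion, k_attraction) {
  // One step of a naive force-directed layout.
  // nodes: array of {x, y}; edges: array of [i, j] index pairs.
  const forces = nodes.map(() => ({x: 0, y: 0}));
  // All pairs of nodes repel each other; repulsion fades with distance
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const dx = nodes[j].x - nodes[i].x;
      const dy = nodes[j].y - nodes[i].y;
      const d = Math.sqrt(dx * dx + dy * dy) || 0.0001;
      const f = k_repulsion / d;
      forces[i].x -= f * dx / d; forces[i].y -= f * dy / d;
      forces[j].x += f * dx / d; forces[j].y += f * dy / d;
    }
  }
  // Connected nodes also attract each other (spring-like)
  edges.forEach(([i, j]) => {
    const dx = nodes[j].x - nodes[i].x;
    const dy = nodes[j].y - nodes[i].y;
    forces[i].x += k_attraction * dx; forces[i].y += k_attraction * dy;
    forces[j].x -= k_attraction * dx; forces[j].y -= k_attraction * dy;
  });
  // Every node moves a little in the direction of its net force
  nodes.forEach((n, i) => { n.x += forces[i].x; n.y += forces[i].y; });
}

Starting from random initial positions (hence the non-determinism), one calls layoutStep repeatedly until the movements become negligible.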

So these layouts are not like a statistical projection. We cannot tell what the position of a node means with a straightforward statement. But what can we know about the result?

The functioning of the algorithm tells us that it tries to put connected nodes closer. From there, it seems reasonable to conclude that connected nodes are indeed closer. But is it true? How much closer? For all networks, or only some?

Let us take an example: a network spatialized by Force Atlas 2.

Look at the long edges: those are connected pairs that are not close. So the layout did not succeed completely. Why? That is the unanswered question that shows the limits of our understanding. But at least we can describe the situation.

If we just account for the distances between nodes, we can see that connected nodes are indeed closer. In that sense, the algorithm was effective. Note: the length unit is arbitrary.

                           Count     Mean length
All pairs of nodes         63,190    179
Connected pairs (edges)    982       42
Disconnected pairs         62,208    181
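Such a table is straightforward to compute. Here is a sketch, assuming a graphology-style graph g (as in the final implementation at the end of this post) whose nodes carry x and y attributes:

function distanceStats(g) {
  const dist = (a, b) => Math.sqrt(Math.pow(a.x - b.x, 2) + Math.pow(a.y - b.y, 2));
  const mean = (arr) => arr.reduce((sum, d) => sum + d, 0) / arr.length;
  // Distances between all pairs of nodes
  const nodes = g.nodes();
  const all_pairs = [];
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      all_pairs.push(dist(g.getNodeAttributes(nodes[i]), g.getNodeAttributes(nodes[j])));
    }
  }
  // Distances between connected pairs (edges)
  const connected = g.edges().map(eid =>
    dist(g.getNodeAttributes(g.source(eid)), g.getNodeAttributes(g.target(eid)))
  );
  // For a simple graph, the disconnected pairs are all pairs minus connected pairs
  return {
    all_pairs: {count: all_pairs.length, mean_length: mean(all_pairs)},
    connected_pairs: {count: connected.length, mean_length: mean(connected)}
  };
}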

So there is at least one statement that is true here: connected pairs are closer than disconnected pairs. If we count how the node pairs are distributed over different distances, we can evaluate other relevant statements. Here is a list.

  • Connected nodes are closer on average: TRUE ✔
  • All connected nodes are very close: FALSE ✘
  • Most connected nodes are very close: TRUE ✔
  • Most very close nodes are connected: FALSE ✘
  • Nodes that are far away are disconnected: (mostly) TRUE ✔
  • Disconnected nodes are split apart: FALSE ✘

Check this notebook for more details and interactions (e.g. re-run the layout to check how randomization affects those figures).

The problem with these statements is twofold. Firstly, they are not quantitative – but that is easy to fix. Secondly, they are not very informative. The simplest statement here would be “nodes as close as X are connected”, but unfortunately it is false (regardless of X). The best we have is that most connected nodes are very close. We can quantify that. But can we make it more informative?

Challenge accepted

My process was exactly how you imagine: I started with one case, and then scaled up in generality. I visualized what happens for one network and one layout in the second notebook of the series. Then I tried a different network and two layouts in the third notebook. At this point I realized that there was a problem. Let me explain.

The low hanging fruit at this point is to quantify the statement “most connected nodes are very close“. We must quantify the “most” and the “very close”, and the former depends on the latter. So I set up an interactive device to count and visualize the edges closer than a given distance (try it).

When you tinker with it, you realize that the distance captures a lot of edges very quickly – as expected from the statement that “most connected nodes are very close“. But to give this a meaning, we need a point of comparison.

Here I plot different measures as a function of the distance D. In black, the proportion of edges shorter than D. In blue, the proportion of node pairs closer than D. As you can see, the latter rises much more slowly, and is a natural point of comparison. This is a good opportunity.

The proportion of node pairs is basically what the proportion of edges would be if the edges were distributed randomly. I call this the “expected proportion of edges shorter than D”, where “expected” means “in a similar but randomized situation”. We can then compare the actual proportion of edges to the expected proportion. This gives us the green curve: the proportion of edges shorter than D above expectations. It is equal to the difference between the black and the blue curves (in light green).
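In code, the three curves boil down to two cumulative proportions and their difference. A sketch, assuming precomputed arrays of distances; the names E_percent, p_percent and C anticipate the final implementation at the end of this post:

// Black, blue and green curves at a given distance D
function curvesAt(D, edge_distances, pair_distances) {
  const proportionBelow = (arr) => arr.filter(d => d <= D).length / arr.length;
  const E_percent = proportionBelow(edge_distances); // black: actual proportion of edges shorter than D
  const p_percent = proportionBelow(pair_distances); // blue: expected proportion
  return {E_percent, p_percent, C: E_percent - p_percent}; // C is the green curve
}

// Scan a grid of distances to locate the peak of the green curve
function findPeak(edge_distances, pair_distances, steps) {
  const D_top = Math.max(...edge_distances, ...pair_distances);
  let best = {D: 0, C: 0};
  for (let i = 1; i <= steps; i++) {
    const D = D_top * i / steps;
    const result = curvesAt(D, edge_distances, pair_distances);
    if (result.C > best.C) best = {D, C: result.C};
  }
  return best;
}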

The green curve is obviously null at both ends, so it has to reach a peak somewhere in between. The peak point is very interesting:

  • The higher the green curve, the more “unexpected” edges are captured by the layout, and the more dramatic the statement we can make.
  • It provides the precise distance where the layout is most efficient, which is precious practical information about the map.

Identifying this point allows forging a quantitative and informative statement such as: X% of edges are unexpectedly shorter than D, where X is as high as possible. This is what I am aiming at, and it looks promising, but there is an issue.

In my third notebook I try a random layout, to have a point of comparison. And then this happens:

Naturally, for any distance, the number of connected pairs is as random as expected. The blue curve sticks to the black curve; the green one is flat; there is no significant bump. There is no special distance. What if my test case was a favorable one, and the metric fails with other networks, other layouts?

From one case to many

Notebooks are great for that, and I had a particularly great time with Observable: you can quite easily navigate the ladder of abstraction. In my fourth notebook I conducted a systematic benchmark of the metrics using 14 different network generators and 7 different layout algorithms. And for each network+layout pair, I generated 100 different cases, for a total of almost 10,000 cases. I visualized the behavior of different metrics across this table, generating plots such as this one:

Just visually, I realized that the metric was very consistent in all cases except for random layouts and, sometimes, random networks.

Look at the little circles: they stack up most of the time, even though the network is generated randomly and the layout is non-deterministic. This is not obvious, so let me illustrate it. You can also try it yourself.

Here you see four different random networks, where edges have been generated with a probability of 5%, spatialized with Force Atlas 2. The curve profiles are nevertheless very similar, the optimal distance is basically at the same point (115, 120, 120 and 100) and the maximal proportion of unexpected edges is consistent (60%, 55%, 55% and 50%). Even though these networks are random and the layout non-deterministic, the statement we can formulate about the different cases is almost the same.
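Such a test case takes a few lines to generate. A sketch, assuming the graphology ecosystem also used in the final implementation below; the exact package entry points are from memory and may differ:

const Graph = require('graphology');
const {erdosRenyi} = require('graphology-generators/random');
const randomLayout = require('graphology-layout/random');
const forceAtlas2 = require('graphology-layout-forceatlas2');

// A random network: 100 nodes, each pair connected with a 5% probability
const g = erdosRenyi(Graph, {order: 100, probability: 0.05});
randomLayout.assign(g);                    // random initial positions (the non-deterministic part)
forceAtlas2.assign(g, {iterations: 1000}); // spatialize with Force Atlas 2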

All of that is nice, but I still have the “flat curve” problem – and even worse, the benchmark unfortunately reveals a different but related problem.

Fixing unexpected issues

Certain curves have a plateau on top. Check this out:

This curve is for two cliques with just one bridge between them. The plateau corresponds to this long bridge.

The problem here is that even though the max of the curve is pretty clear, there are multiple valid distances. At first I just picked the shortest one, as it carries more information. But that does not work in practice, as micro-bumps on the curve give a strict answer: with no tie-breakers, the micro-bumps decide on a winner, while a whole lot of other, almost as valid distances are also very different. The peak is not a peak, and its location is not a point.

I fixed this issue by using a small tolerance parameter, epsilon. It works this way: I pick the smallest distance that fits the max of the curve within a tolerance of epsilon. That apparently works.
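Stated compactly, the rule picks the smallest distance whose value of C lies within epsilon of the maximum; this is what the find_Delta_max function does in the final implementation below. A sketch, where curve is an array of {Delta, C} points:

const epsilon = 0.03; // 3% tolerance
const C_max = Math.max(...curve.map(d => d.C));
const Delta_max = Math.min(
  ...curve.filter(d => d.C >= (1 - epsilon) * C_max).map(d => d.Delta)
);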

It does not solve the problem of the flat-to-zero curve, though. In this case, picking a distance would be like declaring a winner in a race where no one has left the starting line. This is not a maths problem. It is a design problem.

Designing an algorithm

In my fifth notebook I redesigned the algorithm. This step was probably the most important to document.

So far I had been tinkering with equations and data, calling things the way they made sense to me at the moment. This is not my first tool, and I have come to realize that if you do not have a (re-)design step, your tool gets crippled by an upside-down logic. So far I had started from the internal constraints of measuring a layout, and progressed toward something operational. The actual user will follow the same path in reverse: she will see the metric first, and get to understand the underlying constraints by applying it.

In this post I have refrained from naming the different quantities: this is because they have a different name before and after the redesign. Here is an example. The “green curve” plays the role of a quality metric, so I named it Q. This makes sense for an engineer, but not for a user, as Q does not say anything about what it represents. As this quantity is pivotal to understanding the construction, I finally gave it a literal name, “connected-closeness”, and I note it accordingly: C. My fifth notebook details my rationale for these decisions.

The redesign is about the naming of the different quantities, and how to communicate them the best way possible. It is also about the mathematical formalism, featured below.

The design is also about the graphical appearance of the metric. It matters a lot, as the special distance, now called Δmax, has to be drawn onto the network map. Check the result below, featuring the classic data set C. elegans.

Finally, the design is about political decisions. I know, it sounds overly dramatic; but there is a point to make. I decided to refuse to declare Δmax when the “top of the curve”, Cmax, is below 10%, as in the “flat-to-zero” curve. It is not a mathematical decision, as in practice there is always a Δmax, but a design decision. Indeed, if I allowed a Δmax in a situation where it is blatantly meaningless, I would support misinterpretations. This algorithm will not be that docile; it refuses to communicate a value when it is meaningless. Check this out:

One last word on design: it is, famously, iterative. After I redesigned the algorithm, I had to redo my fourth notebook entirely in order to have statistics that feature the right elements of language and formalism – including the code. This became my sixth notebook.

Statistical justification

My seventh notebook focused on highlighting interesting facts about connected-closeness. It consists of a statistical analysis of the data I had generated. It explores how the behavior of the metric relates to intuition and to the knowledge we have about networks.

For instance, it captures very well the fact that force-directed layouts succeed better when there is a community structure. The chart below features the maximum connected-closeness for different settings of a simple stochastic block model.

It also captures the fact that these algorithms perform better on sparse networks than on dense networks. The chart below features random networks with different settings.

It also confirms that “bad” layouts are worse. Here “bad” means either using the wrong settings, or just using the random layout.

Optimization

This was not the end of the journey, as my implementation during these tests was naive. I had not bothered to make it efficient, and unfortunately it required a number of computations proportional to the square of the number of nodes, because I was looking at all node pairs; this prevents the algorithm from resolving on large networks. My eighth notebook exposes the algorithmic techniques I considered to improve performance.

I finally settled on a self-contained JavaScript implementation of the algorithm, requiring fewer computations (proportional to the number of edges). Find it below, for your curiosity.

computeConnectedCloseness = function(g, settings){
	// Default settings
	settings = settings || {}
	settings.epsilon = settings.epsilon || 0.03; // 3%
	settings.grid_size = settings.grid_size || 10; // This is an optimization thing, it's not the graphical grid

	const pairs_of_nodes_sampled = sample_pairs_of_nodes();
	const connected_pairs = g.edges().map(eid => {
	  const n1 = g.getNodeAttributes(g.source(eid));
	  const n2 = g.getNodeAttributes(g.target(eid));
	  const d = Math.sqrt(Math.pow(n1.x-n2.x, 2)+Math.pow(n1.y-n2.y, 2));
	  return d;
	})

	// Grid search for C_max (d3 is assumed to be available, as in the Observable notebook)
	
	let range = [0, Math.max(d3.max(pairs_of_nodes_sampled), d3.max(connected_pairs))];

	let C_max = 0;
	let distances_index = {};
	let Delta, old_C_max, C, i, target_index, indicators_over_Delta;
	do {
		for(i=0; i<=settings.grid_size; i++){
			Delta = range[0] + (range[1]-range[0]) * i / settings.grid_size;
			if (distances_index[Delta] === undefined) {
			  distances_index[Delta] = computeIndicators(Delta, g, pairs_of_nodes_sampled, connected_pairs);
			}
		}
		old_C_max = C_max;
		C_max = 0;
		// Sort by Delta so that neighboring indices correspond to neighboring distances
		indicators_over_Delta = Object.values(distances_index).sort((a, b) => a.Delta - b.Delta);
		indicators_over_Delta.forEach((indicators, i) => {
			C = indicators.C;
			if (C > C_max) {
				C_max = C;
				target_index = i;
			}
		});

		range = [
			indicators_over_Delta[Math.max(0, target_index-1)].Delta,
			indicators_over_Delta[Math.min(indicators_over_Delta.length-1, target_index+1)].Delta
		]
  } while ( (C_max-old_C_max)/C_max >= settings.epsilon/10 )
	
  const Delta_max = find_Delta_max(indicators_over_Delta, settings.epsilon);

  const indicators_of_Delta_max = computeIndicators(Delta_max, g, pairs_of_nodes_sampled, connected_pairs);
  
  // Resistance to misinterpretation
  if (indicators_of_Delta_max.C < 0.1) {
    return {
      Delta_max: undefined,
      E_percent_of_Delta_max: undefined,
      p_percent_of_Delta_max: undefined,
      P_edge_of_Delta_max: undefined,
      C_max: indicators_of_Delta_max.C
    }
  } else {
    return {
      Delta_max,
      E_percent_of_Delta_max: indicators_of_Delta_max.E_percent,
      p_percent_of_Delta_max: indicators_of_Delta_max.p_percent,
      P_edge_of_Delta_max: indicators_of_Delta_max.P_edge,
      C_max: indicators_of_Delta_max.C
    }    
  }
  
  // Internal methods

  // Compute indicators given a distance Delta
	function computeIndicators(Delta, g, pairs_of_nodes_sampled, connected_pairs) {
	  const connected_pairs_below_Delta = connected_pairs.filter(d => d<=Delta);
	  const pairs_below_Delta = pairs_of_nodes_sampled.filter(d => d<=Delta);

	  // Count of edges shorter than Delta
    // note: actual count
	  const E = connected_pairs_below_Delta.length;

	  // Proportion of edges shorter than Delta
    // note: actual count
	  const E_percent = E / connected_pairs.length;

	  // Count of node pairs closer than Delta
    // note: sampling-dependent
	  const p = pairs_below_Delta.length;

	  // Proportion of node pairs closer than Delta
    // note: sampling-dependent, but it cancels out
	  const p_percent = p / pairs_of_nodes_sampled.length;

	  // Connected closeness
	  const C = E_percent - p_percent;

	  // Probability that, considering two nodes closer than Delta, they are connected
    // note: p is sampling-dependent, so we have to normalize it here.
    const possible_edges_per_pair = g.undirected ? 1 : 2;
	  const P_edge = E / (possible_edges_per_pair * p * (g.order * (g.order-1)) / pairs_of_nodes_sampled.length);

	  return {
	    Delta,
	    E_percent,
	    p_percent,
	    P_edge, // Note: P_edge is complementary information, not strictly necessary
	    C
	  };
	}

	function sample_pairs_of_nodes(){
	  if (g.order<2) return [];
	  let samples = [];
	  let node1, node2, n1, n2, d;
	  const samples_count = g.size; // We want as many samples as edges
	  if (samples_count<1) return [];
	  for (let i=0; i<samples_count; i++) {
	    node1 = g.nodes()[Math.floor(Math.random()*g.order)]
	    do {
	      node2 = g.nodes()[Math.floor(Math.random()*g.order)]
	    } while (node1 == node2)
	    n1 = g.getNodeAttributes(node1);
	    n2 = g.getNodeAttributes(node2);
	    d = Math.sqrt(Math.pow(n1.x-n2.x, 2)+Math.pow(n1.y-n2.y, 2));
	    samples.push(d);
	  }
	  return samples;
	}

	function find_Delta_max(indicators_over_Delta, epsilon) {
	  const C_max = d3.max(indicators_over_Delta, d => d.C);
	  const Delta_max = d3.min(
	      indicators_over_Delta.filter(d => (
	        d.C >= (1-epsilon) * C_max
	      )
	    ),
	    d => d.Delta
	  );
	  return Delta_max;
	}
}

Science tools are not made for their users

5 min read

Carelman, Catalogue d’Objets Introuvables

I often get to talk about Gephi, an open source tool to visualize and analyze networks that I co-created a decade ago. The project has been in a semi-dormant state for a few years, but despite some issues it still works. As frustrating as it can be, Gephi has its fans – kudos to its amazing community! The feedback I receive from these well-meaning enthusiasts makes me think that our motivation is sometimes misunderstood. Nothing surprising, since it is mostly implicit. But making it explicit may prove useful.

I sometimes get asked why we do not push Gephi further. This feedback comes with different assumptions: that we want to but cannot; that creating a business might solve our problems; that Gephi is falling behind in a race against other tools… But I do not think that Gephi should “take over the market”, and that is why I prioritize other tasks over developing or designing Gephi, or doing community management – writing, for instance.

The sustainability of our project does not depend on a business model; but if I had to paint the situation in that light, I would say that our “business model” does not care that people use other tools. Fewer users do not mean less money for us, and money is not the problem anyway. What we need most is (1) a healthy community, and (2) time from trained developers to fix major issues. The rest is secondary.

There are many similar tools. Did you know that there was already another French open source graph analysis tool out there when we created Gephi? It’s called Tulip, and it’s still alive and well. We were also inspired by Pajek and Eytan Adar’s GUESS. We created Gephi to be able to do things that we could not do with these tools, but that does not mean that we aimed at being better, only different. Of course, we were excited to get known and reach beyond the French circles. I remember how happy Mathieu Bastian, our lead developer at the time, was to shake hands with Duncan Watts, or with Marc Smith, the creator of NodeXL, another network tool still going strong today. There are many nice solutions out there, all with their own specificities: Cytoscape, UCINet, Visone, iGraph, GraphViz, Voson… But it is fair to say that there is a large overlap between them. There are more tools than necessary. Why?

There are more tools than the users need; but tools are not mainly made for the users. Or if you prefer, their real “users” may be the participants of the project. From my perspective, Gephi is also an experimental device through which I enroll participants into a large-scale epistemic design experiment about complexity. You’re my guinea pigs, folks; but everyone is someone else’s guinea pig anyway, and at least Gephi is a win-win situation (note: we do not track Gephi usage – we are not Facebook).

Open source tool creators, especially in science, do not create tools to meet users’ needs. People create tools because (1) the tool has benefits, (2) the process of making it has benefits, (3) the tool is reusable, (4) the tool happened by accident.

The benefits of creating your own tool include:

  • It meets your needs. Sometimes no existing tool does what you want – that is how Gephi started.
  • Retaining technical skills in your lab, because it is a key piece of the puzzle of making your lab work. Ex: the Sciences Po médialab.
  • Visibility, assuming that you make the effort of associating your name(s) with the tool. It can also disseminate “your” method – although users repurpose tools in unexpected ways.
  • Collaborations with other people whose interest was sparked by the tool, including open source developers. Big tools can bring many people together, e.g. Media Cloud.
  • Citations. As a goal, it is overrated: the Gephi paper is cited 5000+ times, but it is not that many academic magic points considering the amount of work involved.
  • Dedicated funding – this can make it harder to open the source code, since it often comes with pressure for patenting and/or creating a business.
  • Students can afford a free tool. And in a pedagogical setting, a simple tool is often more efficient than a complicated one.

The benefits of the process of creating a tool include:

  • Learning how to code, because you want to become methodologically independent (look at the success of Jupyter notebooks).
  • Engaging with computers because you study them (I look at you, Bernhard Rieder, and your upcoming book on algorithmic techniques, Engines of Order).
  • By redoing something, you understand how it is done. And it is sometimes faster.

Most of these good reasons apply regardless of whether the device is made public or not. But dissemination, as a value, has a strong currency in science – a point in common with open source software. Once you have created your gizmo, why not let others use it? And that is how you sometimes inadvertently end up with a tool. You put your gizmo online, you mention it in some slides, and the next thing you know a PhD student asks about “your method” and you’re booked to give a workshop.

Most gizmos do not meet a large public – most fall back into oblivion. But most popular open-source tools probably started as unexpectedly successful small-scale experiments. They were not created to meet user needs. Meeting user needs made them successful, but it was not the initial motivation. And it does not have to be.

Gephi’s development is slow and it does not bother me too much, despite a number of frustrating points that are, unfortunately, difficult to solve. To be completely honest, the current state of Gephi can be useful in unexpected ways, for instance when it comes to reminding everyone that developing it has a cost – and it is not just money. I agree however that the situation is precarious, which is problematic: we are not in too bad a spot, but we could too easily fall into one.

Gephi is a collective project and each participant has their own motivations; here I only speak for myself. My position is quite self-centered. It’s not that I disregard users: I deeply care about them. But I am just not so interested in the problems they think they have. I am interested in the problems I think they have, the problems I think we have. I think we have a problem with complexity, a problem akin to a collective blindness to our own limitations, a denial. That is what I find interesting in the current state of affairs. Despite what users desire, a moderate amount of frustration can be fruitful to that end.

Foldable pocket anvil. Carelman, Catalogue d’Objets Introuvables.

The Thick Machine

With Anders Munk and Asger Olesen, we have been experimenting with machine learning with the help of a physical device built by Asger. It looks like an arcade game and, for reasons that will become apparent, we call it the “thick machine”. We presented it at the Machine Anthropology Workshop at Copenhagen University today (2020-01-27). It is a story about ambiguity, and about the materiality of the digital.

The “thick machine”, ready to get transported.

The “thick machine” is a custom-built computer. It is based on a refurbished screen, a Raspberry Pi, some arcade buttons, and a custom-made case inspired by arcade machines. It runs software coded in Python using a simple game engine, where you can play a guessing game. In this game, you try to guess which of 5 emojis is the right one; but you actually compete with an algorithm. This algorithm is a specially trained classifier, and it tries to guess the right emoji, exactly like you. There is no score or advanced game mechanics: you commit to a choice, then you see both the right answer and the guess of the algorithm. That is all. And it is already interesting, but let me explain what the guess is about.
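To give an idea of the mechanics, here is a minimal console sketch of the game logic. The data and variable names are hypothetical; the actual software runs a game engine and reads the arcade buttons.

import random

EMOJIS = ["LOVE", "WOW", "HAHA", "SAD", "ANGRY"]
# Hypothetical cases: (comment, actual reaction, the classifier's guess)
cases = [
    ("Congratulations to you both!", "LOVE", "LOVE"),
    ("This is simply outrageous", "ANGRY", "SAD"),
]

for comment, actual, algo_guess in random.sample(cases, len(cases)):
    print(comment)
    guess = input("Your guess (" + ", ".join(EMOJIS) + "): ").strip().upper()
    # No score: you just see the right answer next to the algorithm's guess.
    print("You:", guess, "| Algorithm:", algo_guess, "| Actual reaction:", actual)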

The screen and buttons of the machine.

At the TANT Lab, we have what we call “The Atlas of Danish Facebook Culture”, which is basically a harvest of the whole of Danish Facebook from before the APIcalypse (when Facebook closed these API accesses). That is our starting material.

On Facebook, when you “like” a piece of content, you also have the choice of five different emotional reactions, represented by an emoji. You can react with an emoji, write a comment, or both.

In our corpus, we have 128 million comments, 700 million emoji reactions, and 23 million cases with both at the same time. The game is about those: given a comment, can you guess which emoji was used by its author?

We also trained a classifier for that task. We tinkered with different algorithms, but that is not the most interesting part. We landed on a simple neural network from scikit-learn, a popular Python library.
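For the curious, here is a minimal sketch of that kind of setup, with toy data; the features, preprocessing, and hyperparameters below are assumptions for illustration, not our real pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy data: Facebook comments and the emoji reaction of their authors.
comments = ["Congratulations!", "This is so sad", "Haha, brilliant", "How awful"]
reactions = ["LOVE", "SAD", "HAHA", "ANGRY"]

# Bag-of-words features feeding a small feed-forward neural network.
model = make_pipeline(TfidfVectorizer(), MLPClassifier(max_iter=500))
model.fit(comments, reactions)
print(model.predict(["So sad to hear that"]))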

It turns out that this classifier is neither better nor worse than humans (i.e., lab people who played the game on the thick machine). Not only is it as accurate, but the structure of the results is also the same. This is something you can see in a confusion matrix.

A confusion matrix shows which emoji is guessed depending on which emoji is the right one. An algorithm that guesses right all the time would have only the diagonal filled, and an accuracy of 100%. An algorithm that guesses at random would have every cell filled equally, and an accuracy of 20% (one chance in five to guess right). In our case, we are somewhere in between.
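As an illustration, with toy numbers (not our results), this is how such a matrix is computed and read:

from sklearn.metrics import accuracy_score, confusion_matrix

labels = ["LOVE", "WOW", "HAHA", "SAD", "ANGRY"]
y_true = ["LOVE", "WOW", "HAHA", "SAD", "ANGRY", "LOVE"]   # the right answers
y_guess = ["WOW", "LOVE", "HAHA", "ANGRY", "SAD", "LOVE"]  # the guesses

print(confusion_matrix(y_true, y_guess, labels=labels))  # rows: right answer; columns: guess
print(accuracy_score(y_true, y_guess))                   # the share of the diagonal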

We have two confusion matrices. On the left, the results of the human guesses; on the right, the results of the algorithm. Both achieve about 50% accuracy, which is not so bad, but also not so great. Perhaps surprisingly, humans do not achieve a great score (note: our numbers are still low, it’s a work in progress). But the most interesting part is the similarity of the results:

  • There is some degree of confusion between ❤️LOVE and 😲WOW
  • There is some degree of confusion between 😢SAD and 😡ANGRY
  • The [❤️LOVE+😲WOW] group is rarely confused with the [😢SAD+😡ANGRY] group
  • The 😆HAHA reaction is rarely confused with others

If we only had the classifier, we might presume that something in the black box of the algorithm produces this confusion. But now that we know that humans are similarly confused, we think that the confusion is a feature of the data. This observation leads to a productive way to repurpose the algorithm: to find the ambiguous cases.

Indeed, the way people react on Facebook is not consistent: what we consider “the right answer” is not really the right answer, because there is no “right answer”. And where this ambiguity lies, we can find the most interesting cultural effects.

For instance, in the two examples below, we suppose that the person uses 😆HAHA to distance themselves from their slightly childish reaction (case 1) or the socially awkward mention of sexual practices (case 2).

Contrary to most cases, which are rather obvious, these reactions are deep. They are more interesting for analysis, and finding them has a scientific application.

Tracking confusion in emoji reactions is interesting because we already know that they are ambiguous. Indeed, emojis have different meanings for different people. The funniest case we observed was this grandma who expresses her sadness with the “tears of joy” emoji. Of course, it has tears, but… judge for yourself.

For us, the accuracy of the classifier is not the productive output (anymore). It is not really a problem that it fails at predicting; we do not want to improve its accuracy. The interesting output lies in the cases where confusion happens.
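A minimal sketch of that repurposing, reusing the hypothetical model and comments from the sketch above: instead of chasing accuracy, rank the comments by how unsure the classifier is, and read the top of the list.

import numpy as np

# Entropy of the predicted emoji distribution: high entropy = confused classifier.
proba = model.predict_proba(comments)
entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
for i in np.argsort(-entropy)[:10]:
    print(comments[i])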

The machine is thick in the sense of “dumb”, because its accuracy is not great – and humans can be thick too. But it is also thick in the sense of “deep”, in the sense of ethnography, in the sense of Geertz. It is as thick as the wink, something that you cannot understand unless you are yourself involved in the culture.

The claim to attention of an ethnographic account does not rest on its author’s ability to capture primitive facts in faraway places and carry them home like a mask or a carving, but on the degree to which he is able to clarify what goes on in such places (…) This raises some serious problems of verification, all right – or, if “verification” is too strong a word for so soft a science (I, myself, would prefer “appraisal”), of how you can tell a better account from a worse one. But that is precisely the virtue of it. If ethnography is thick description and ethnographers those who are doing the describing, then the determining question for any given example of it (…) is whether it sorts winks from twitches and real winks from mimicked ones.

Geertz, C. (1973). The Interpretation of Cultures, p.16

“Great!”, you will say, “but why build the arcade machine?” Indeed, we could have achieved the same result by classifying the cases in an Excel spreadsheet. What does the materiality of the machine change?

Being able to build the machine and actually building it are not the same thing. Imagining that you play the game also differs from actually playing it. But in the physical absence of the device, it is more difficult to account for the difference it makes.

When you play the game, you realize how weirdly aligned you are with the classifier. You get the feeling that you and the classifier are, somehow, right, and the “ground truth” is wrong. You feel a logic to reactions, that you share with the classifier, and you can also understand why certain actual reactions deviate from that logic.

This feeling is quite dependent on the situation you are in. The context of Facebook must be removed, so that your guess is based only on the text of the comment. You also must not see the actual reaction beforehand, or you would be biased. You must have a simple way to input an answer, as each guess requires focus and attention. The materiality of the arcade machine has a number of such effects.

But beyond these details, the arcade machine actively repurposes the classifier. Its symbolic effect is, I think, the most important. By conveying the idea of a game, it puts people in a different position towards the algorithm, and this is actually hard to achieve. We have seen it.

Although our main point was to say that the accuracy of the classifier was not how it was productive for us, multiple people came to us telling us how to improve its accuracy. We observed that it is actually hard for researchers to think of their algorithms in a different way. Yes, in theory, anyone can do the same in Excel. But at the same time, our habitual toolkit anaesthetizes our ability to recontextualize algorithms – it is an aesthetic matter! We cannot repurpose without imagination. Breaking the charm requires a strong re-dramatization, and I think that the thick machine accomplished that for us.

By its materiality, the thick machine forced a different setting on us; and that setting allowed us to find a new productive way to repurpose the algorithm.

Download our slides