
Digital Glitter, the Curse of Big Data Visualization

In this piece I argue that in the social sciences and humanities, data visualizations are cursed by a latent incentive to reframe them as spectacular outcomes, when in reality most are mere by-products of scholarly work. Even though data visualization can be highly valuable as a research publication (which requires expertise and commitment), I criticize the repurposing of intimate, exploratory imagery as a marketing asset.

“To be clear, our point is that discursive struggles often work together with digital devices such that the politics of method cannot be reduced to language games.”

Evelyn Ruppert and Stephan Scheel in The Politics of Method: Taming the New, Making Data Official

Big data visualization acts in more ways than just conveying information. I was recently at the IT University of Copenhagen (ITU), where I attended a seminar at which Evelyn Ruppert presented the paper quoted above, written with Stephan Scheel. They propose a conceptual framework for understanding the politics of methods, and apply it to two empirical examples from their fieldwork. It puts into words an aspect of visualization practices in social science that I usually struggle to grasp. Coincidentally, during that very same seminar, just after Ruppert’s presentation, such a hard-to-grasp situation presented itself.

A few PhD students had the opportunity to present their work in ten minutes. One of them had a network visualization to show. It was a colorful screenshot of a decently sized Gephi network on a black background, hard to read because of the superposition of many labels, but displaying a clearly clustered structure. That person just said: “We also do these kinds of fancy visualizations”, and then moved on to the rest of the slides. No explanation was added. Obviously that person wanted to showcase that part of their work but, pressured by time, skipped the comments. I do not blame anyone here, because I do not take this vagary too seriously. It is actually quite funny. Still, there is something worth unpacking here.

The situation presents a paradox. Why choose to showcase a visualization, only to denigrate it? The author might either decide that the visualization is worth it, and then display and explain it, or decide that it is not, and leave it aside. The author must have had reasons to showcase the visualization. So what were those reasons? And why hide them and denigrate the visualization instead? There are no simple answers.

Visualization failure is an easy but poor explanation. There are multiple ways a network visualization can fail to meet expectations. When failure happens for a good reason, showcasing it is relevant. Failure is a valid outcome of a scientific experiment, and in practice it is legitimate to showcase, for instance, how network analysis turned out to be unable to answer the research questions. In such a situation, however, there is still a point, even if it involves a negative assessment. Our problem is not that some denigrate or criticize network visualization. Open criticism makes a valid argument, and thus does not elucidate the paradox of the absence of arguments.

The problem lies with digital glitter. Think of it as a convenient name we can use to give some existence to an invisible feature of data visualization. I gradually came up with this notion to reflect on my own practices, and notably on what data visualization performs in terms of public relations. I am well aware that network visualization earns me “something”, and I started to call that “digital glitter”. I see it as a feature of both documents and people. Like a sticky matter, it appears first in certain objects (images, videos, interactive devices…) and from there moves onto people by contact. Because I created tools and data visualizations, I became identified as a data scientist and gradually got digital glitter on myself. I would say that the person in our starting example wanted to get some digital glitter too. The metaphor resonates because, like literal glitter, it can make you shine (albeit in a vulgar, non-specific way), but too much of it and it turns into a cruel, poisonous joke. Like make-up or a dress code, whether you like it or not, it seems required in certain social situations. The notion of digital glitter helped me situate the problem. However, the analogy does not help much to unpack what is at play. Fortunately, Ruppert and Scheel’s framework on the politics of method does.

In their paper, Ruppert and Scheel develop a detailed framing of a pair of pictures from their fieldwork: three-dimensional heat maps of Ljubljana, oozing with digital glitter. The authors narrate how they were employed: “Rather than charts, numbers or line graphs, [the person] displays a three‐dimensional heat map that has become a popular visual form and which shows a rather obvious pattern – the density of population in the inner city differs during the day versus night. The data, analysis and work that went into producing the visualisation are not discussed. But the deployment of a visualisation is not to settle technical questions. Rather, the visualization is a strategy to convince others that working with big data […] requires a change in ‘paradigm,’ which the visualization performs.” In this case, the paradigm shift is a change from statistics to modelling. The key point is that the role of the visualization is specifically not to make a technical point, but to convince the audience of a need for change. “The demonstration shows how innovations need their diagrams not only to represent but […] as means to build allies and to persuade others.” As they accurately note, the visualization performs a change. It produces an effect other than conveying a message, illustrating a point, or providing visual evidence. Its agency goes beyond the classic role of data visualization: sharing knowledge.

Data massiveness, high granularity, dynamicity, and the presence of relations have their own agency. When I asked Ruppert which features of the visualization helped convince, she mentioned dynamicity and high granularity, and remarked that there were no labels in the picture. In this specific regime of persuasion, some features take on a crucial role and others lose relevance. Key features all contribute to “claimed self-evidence”, a property described as “a seamless correspondence between the visualization […] and the reality” or, as in this quote from Johanna Drucker, as a way to render “the phenomenal world (as if it) were self‐evident and the apprehension of it a mere mechanical task”. The article frames it as a trick, either John Law’s “realist trick” or Donna Haraway’s “god trick” of seeing “everything from nowhere”. I am reluctant to frame it as a mere trick, however, for at least two reasons. Contrary to a trick, the effect lies in the object (the visualization) and does not need to be enacted in person. And contrary to a trick, seeing through it does not dissolve it. I acknowledge that a form of mystification is at play, but I challenge its contingency. Self-evidence does trick you into forgetting that knowledge is always situated, but the effect is deeply rooted in materiality and cannot be dismissed as a mere illusion. The article demonstrates that the presence of certain elements and the absence of others reconfigures the visualization to perform better in a specific regime of persuasion. This reconfiguration draws on a correspondence between the 3D heat map and Ljubljana. This connivance of the image with the field may be partial and constructed, but it is nevertheless real. The effective affinity between the image and the phenomenon fuels the visualization’s persuasive power. The reconfiguration emphasizes this correspondence and hides the rest, but it does not invent it on a purely semiotic level. The connivance is material-semiotic, and it firmly roots the self-evidence effect.

My point here is constructivist and, as often, requires walking a perilous edge between relativism and positivism. There is a lot of room between “Big Data is bullshit” and Chris Anderson’s End of Theory. I resist both the relativist and the positivist readings of the situation. The caricatural relativist thinks big data persuasion is marketing fabricated by actors. In this perspective, even if actors mobilize convenient visualizations, self-evidence is ultimately a social construction. I disagree, because self-evidence is grounded in the material-semiotic properties of visualizations. On the other end of the spectrum, the caricatural positivist thinks big data is the new paradigm of a datafied reality. In this perspective, self-evidence is a legitimate feature of visualization insofar as data fit reality, even if not perfectly. I disagree, because for me data remain heavily situated in that process, even if actors do not acknowledge it. But I agree with the relativists that big data enacts politics, and with the positivists that big data reconfigures our practices. I see granular or complex data visualizations as artifacts with their own political agency, a specific and new configuration of influence. Digital glitter is a possible name for that specific agency, whispering promises of unmediated knowledge, predictive power, and increased control. The existence of such agency is neither surprising nor specific to big data visualizations, but for some reason it seems to cause a lot of trouble in social science.

Big data visualization is problematic for social science in particular because we condemn the fantasies of omniscience and control it promotes. The change it brings to methodological tradition challenges the state of our affairs. It conflicts with our values. Worse, it fuels opposite agendas. It tends to legitimize physicists and computer scientists taking charge of social questions. It challenges the role of social theory, enacting a new world order where empiricism is datafied and fieldwork obsolete, as if such machinery could work. Big data visualization is not on our side, the side of Noortje Marres’ “radical empiricism”. You may feel how strong the temptation is to reject big data visualization and fight against its influence. A nice but naïve attitude, insofar as we also need highly granular, dynamic, and/or relational data, however misaligned their political agency might be.

Why we visualize big data

Big data is more than an empirical opportunity; it has become a necessity for understanding social phenomena. The moment when digital traces could be seen as a limited echo of “real life” is way behind us. We can no longer argue that Facebook friendship is not “real friendship”; we must now face how it contributes to reconfiguring friendship, kinship, and more. The digital even performs beyond its material infrastructure (Facebook impacts you even if you are not on it). Or to put it another way, the “non-digital world” no longer exists. This is not to say that any social study must engage with the digital, but that some need to. Situations will arise more and more often where an a priori non-digital research design requires engaging with digital traces. We do not always get to choose whether or not to engage with the digital; sometimes we just have to. The same goes for many other constraints attached to investigating social phenomena, and of course that situation itself is not new. But the kinds of constraints we face change with time, and big data embodies the newness of digital-specific conditions. The necessity to engage with that material leads us to face data that are massive, and/or highly granular, and/or dynamic, and/or relational. We do not do it to get digital glitter, but because certain inquiries call for it. That being said, the necessity to engage with big data does not imply visualizing it.

Before all else, visualizing big data is a condition of empiricism. More precisely, the double uncertainty we face with digital traces calls for it. Firstly, we are unsure of our research questions. The situation is not new; exploratory data analysis formalized it before the data deluge. But it has become more prevalent. The proliferation of data sources gives rise to more opportunistic and open-ended research strategies. The “data sprint” format familiar to Richard Rogers’ Digital Methods Initiative is rooted in this situation. Secondly, we often do not know whether our data sets can answer our research questions. This is a direct consequence of scholars losing control over the production of the data they use. Census and audience measurement followed procedures forged with (and partially for) social scientists. They were produced in order to know the social (even if not necessarily in an academic perspective). On the contrary, many digital traces we study have been produced by the industry for its own needs, with procedures alien to the social sciences. We repurpose them out of necessity and opportunism, but we cannot presume that the conditions of their production allow us to draw legitimate conclusions. This second uncertainty also fuels the first one: committing to research questions is a riskier investment if you have no guarantee that your data will correspond to them. The pervasiveness of repurposable data incentivizes more open-ended research designs. In such a context, visualization is critical. Exploratory data analysis is of course a visualization-based approach, but even before the analysis step, we need to monitor data to assess basic quality and validity. Visualization is efficient for this purpose. It requires few assumptions, produces results fast, and allows us to iterate quickly. Before analysis, we do not want to invest too much time and energy in interpreting the data. We just need a low cost, low reward strategy to engage with it. We will only gradually mobilize high cost, high reward strategies once we have checked that the repurposed data are usable. Data visualization can also be high cost, high reward. But it is not the same kind of data visualization.
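
To make this concrete, here is a minimal sketch of what such a low cost, low reward check can look like in practice. It is an illustration, not a method; the file and column names (traces.csv, timestamp, account) are hypothetical stand-ins for whatever repurposed traces you happen to be handed.

```python
# A quick sanity-check pass over repurposed digital traces, run before
# committing to any analysis. All file and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

traces = pd.read_csv("traces.csv", parse_dates=["timestamp"])

fig, axes = plt.subplots(1, 3, figsize=(12, 3))

# Coverage over time: gaps betray collection outages or API throttling.
traces.set_index("timestamp").resample("D").size().plot(
    ax=axes[0], title="Traces per day")

# Missing values per column: a cheap proxy for data quality.
traces.isna().mean().plot.barh(ax=axes[1], title="Share of missing values")

# Activity per account: heavy tails may signal bots or power users.
traces["account"].value_counts().plot(
    ax=axes[2], logy=True, title="Traces per account")

plt.tight_layout()
plt.show()
```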

Exploratory visualizations are not meant to be settled. They are usually produced inside a framework where the user can strongly interact with the data, whether a dedicated software such as Gephi or Tableau, or an open platform such as R or Jupyter notebooks. Such a framework always prioritizes flexibility over the efficiency of conveying a message. It takes charge of most graphic design decisions so that the user can focus on navigating the multiple facets of the data, including, but not only, the classic Shneiderman mantra: “Overview first, zoom and filter, then details on demand”. Algorithms are also often mobilized to process the data into new facets. This step of exploration is a material-semiotic engagement with the data. It involves the body as much as the mind. It is intimate for that reason, and because interpretations are still idiosyncratic. Like an unfinished piece of art, it cannot properly be enjoyed by an audience.
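
For the sake of illustration, the kind of throwaway exploration behind a “fancy” Gephi-style screenshot can be sketched in a few lines. This is a toy example, not anyone’s actual workflow; it only shows how quickly a colorful, clustered picture can be produced, and how little settled interpretation it carries.

```python
# A throwaway exploratory pass over a network: force-directed layout,
# color by detected community, size by degree. Meant to be iterated on,
# not published as-is.
import networkx as nx
import matplotlib.pyplot as plt
from networkx.algorithms.community import greedy_modularity_communities

G = nx.les_miserables_graph()  # stand-in for any repurposed relational data

# Force-directed layout, the same family of algorithms Gephi relies on.
pos = nx.spring_layout(G, seed=42)

# Community detection produces the "clearly clustered structure" to look at.
communities = greedy_modularity_communities(G)
color = {n: i for i, c in enumerate(communities) for n in c}

nx.draw_networkx(
    G, pos,
    node_color=[color[n] for n in G],
    node_size=[20 * G.degree(n) for n in G],
    cmap=plt.cm.tab10,
    with_labels=False,
    edge_color="lightgray",
)
plt.axis("off")
plt.show()
```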

The process of settling an interpretation gradually narrows down the number of relevant facets, strengthens a narrative that can be shared, and crystallizes factual elements that can circulate. Those might take the form of a visualization, but they do not have to. During this process, more and more people become able to engage with the data, and flexibility is gradually dropped to the benefit of circulability. It is important to understand that this process is necessarily gradual, because that explains why it takes place inside the exploratory framework. This continuity between exploratory and explanatory explains why the same tool can serve two apparently opposed purposes: investigating for yourself and conveying a message to others. Exploratory visualizations are not meant to be shared, even if the frameworks producing them also allow building explanatory visualizations.

Sharing exploratory visualizations makes sense in certain situations. Firstly, as we have seen, the border between exploratory and explanatory is blurry. When digital traces are specific and data science becomes specialized, a division of labour emerges between domain experts (who know the phenomenon) and data scientists. Their collaboration requires sharing the exploration. Secondly, it is sometimes useful to account for the exploratory step, to show what science in the making looks like. A Gephi screenshot can serve that purpose; it is not there to be understood, and it must not be framed as explanatory. Thirdly, exploratory visualizations can sometimes be attuned to a wider audience. The most classic example is Hans Rosling’s Gapminder. The datascapes we have developed at the médialab are also of that kind (e.g. La Fabrique de la Loi). It must be noted, however, that this requires a massive investment in graphic and interaction design that social science labs generally cannot afford.

Releasing explanatory data visualizations is expensive. It requires a specific expertise, and graphic design is a rare resource in most social science labs. But “fancy visualizations”, if by that you mean screenshots featuring massiveness, high granularity, dynamicity or networks, are not rare at all. Exploratory visualizations are a by-product of every scholar’s normal work. So, as a shortcut, some are tempted to repackage those as research outcomes. Or more precisely, as marketing assets. Unfortunately, this fuels big data fantasies for free. My starting anecdote is of that kind. Every time we show a big data visualization without any context, every time we prey upon digital glitter by smuggling in an otherwise pointless “fancy visualization”, we let big data enact its politics. The trick is cheap, but it costs us a lot. If we are to drink the cursed chalice of big data visualization, let us at least get something from it. For instance, reconfiguring our immune system to resist the poison.

Pressure to get digital glitter

I tried to understand the latent incentive to get digital glitter, but I failed. I do not think that the politics of methods in academia are the same as those Ruppert and Scheel observed. It seems reasonable to hypothesize that it provides a form of competitive advantage, or the illusion of it. But I doubt there is a one-size-fits-all answer. Even though I observed multiple situations where digital glitter seemed to have an impact, these glimpses did not help me understand the reasons why one would go after it.

Since this piece is not an academic publication, allow me to end it on a personal note. I believe that I cannot fully understand digital glitter because of my digital privilege. The notion of white male privilege extends to big data more naturally than I am comfortable with: my work inherently secretes digital glitter – lucky me. I do not know what it is like to lack it. I do not need to seek it. I do not have to cheat to get it. Of course I do not regret it, despite a few gatekeepers seeing my work as detrimental to social science. But when I see a fellow social scientist feeling compelled to give up a little bit of their work’s quality in exchange for the trappings of big data, it makes me sad.

