
Interview of Geoffrey Rockwell, maker of Voyant Tools

40 min read or 1 hour video

I was in Dagstuhl for a one-week seminar about visualization and the humanities. Geoffrey Rockwell was also attending, and I jumped on the opportunity to interview him. In short, he is the designer and caretaker of Voyant Tools. The long story, well… read on!

This interview is published with the authorization of Geoffrey. He helped me edit the transcript. We also propose the video version, which is more informal but also more lively. The sound is bad in the beginning, but I think you may enjoy Geoffrey’s sheer friendliness. We release both the transcript and the video under CC-BY-SA license (as anything else on this blog).

Video version of the interview.

For context, although this was quite improvised, I had a set of questions already prepared because I was about to moderate a panel about tool-making organized by the MASSHINE center at Aalborg University (I will post about that later on). This explains why some of the questions look a bit like a survey. See this as part of my long-term project of documenting tool making in the social sciences and humanities.

Mathieu: Can you tell us about yourself and what your background was when you started Voyant?

Geoffrey: Sure! My name is Geoffrey Rockwell. I am professor of Philosophy and Digital Humanities at the University of Alberta. All my degrees are in philosophy, but when I was a graduate student at the University of Toronto, I became a partner in the Apple Research Partnership. This was back in the late eighties, and it was a program that Apple ran partly to fund graduate students, partly to evangelize the Macintosh, support HyperCard, and so on. I became a partner right around the time HyperCard came out, and I ended up becoming the HyperCard trainer for computing services at the University of Toronto.

HyperCard came out in 1987. It was very empowering for a lot of humanists, because it was a multimedia development environment, it was free, you could develop things fairly easily, you did not have to learn C, C++, or anything like that, and you could control interface and code. You had both sort of woven together. I mean, nowadays, I guess, Visual Basic fills somewhat of a similar niche. But I remember, just to give an example, that a year after HyperCard came out, somebody was telling me there were like 170 HyperCard stacks for learning Arabic. All sorts of language instructors, all sorts of people just took to it.

So that is how I got started. And then, in 1989, I started working full time for the University of Toronto Computing Services. I was only in my second year of my Ph.D., I was not getting a lot of funding, and I had a son who was born… Working full time was good for me! That was a tremendous apprenticeship. Initially, I was supporting text applications and word processing, but then I moved into instructional technology. Again, HyperCard and similar multimedia development environments.

I was very lucky, you know, given that I did not have a computer science or technical background, to be embedded in such a unit. One of those groups was running the academic Internet for Canada! When the Web came out, the guy literally next door to me was one of the first people to explore the web in Toronto. He wrote a book that went on to be a bestseller. It was a very generative group. I was very lucky.

I got an academic job running the Humanities Media and Computing Center at McMaster University in 1994. They were looking for someone who had a Ph.D. in the humanities, and I was close to getting mine done, and they wanted significant project management experience in computing. Because we were running a bunch of labs, and we were running software development. To be honest, there were not a lot of people who had four or five years of experience in computing and a PhD in the humanities. And I had risen to a sort of project manager. So again, I was very lucky. It was the first humanities computing job advertised as a humanities computing job in Canada. So that was at McMaster, and there I ran a shop, I had staff, I had programmers. A lot of what we did was run labs at McMaster. People were just discovering the web. So, you know, I brought the web in. We created the first website for the Faculty of Humanities. We started building websites for people, and so on like that…

But when I was at Toronto, my supervisor was John Bradley. John Bradley was the lead designer of TACT. TACT was one of the best text analysis environments of its generation. It was released in 1989 at the first joint ACH/ALLC conference. There had been sort of two separate scholarly associations, the Europeans and the North Americans, and the first joint conference was in Toronto in 1989. I was right there. George Landow, who was a big name in hypertext studies, was supposed to run a workshop on hypertext, and he got sick. I had to take it over. So I taught my HyperCard workshop for a week, teaching students to build hypertext things. And then the conference… Ted Nelson showed up at the conference. Northrop Frye gave a talk. It was fabulous.

There was a software exhibit, and I was there showing HyperCard stacks that I had built for bibliographic management and note taking. I was right across from Elli Mylonas, who was a graduate student at Harvard, building the first versions of the Perseus TLG search tool. The TLG comprised all the important texts in Greek, and normally, to access it, you would have to buy an Ibycus, which was a dedicated workstation with a CD-ROM player, with the Greek fonts built in. Apple had released a CD-ROM player, and they built an extension to HyperCard that could read the CD-ROM. So all of a sudden, instead of having to buy a $20,000 dedicated workstation, if you could get the CD-ROM, a Mac with a CD-ROM player, and the HyperCard stacks, you could search all of Greek literature. This was Big Data. In 1989, Big Data in the humanities! In fact, I published a paper with a colleague about it. In some ways we were experimenting with Big Data questions. What can you ask when you have all of ancient Greek literature?

Anyway, the long and the short of it is that that was sort of my getting started in text analysis tools. I got very interested in text analysis working under John Bradley. He and I started building the environments we were very interested in: visual programming environments. At the time, with a Silicon Graphics workstation, you could get a visual programming environment for scientific visualizations, and we just said, okay, let us build one for the humanities. By the time I got to McMaster, I got a programmer in the high performance computing group interested in this. And we actually developed a prototype, I think using Visual Basic. We published a paper on it. It was a system where you could drag out little boxes. Now there is that great environment, Orange, which allows you to do that sort of visual programming with Python under the hood.

It worked, but it was really just a prototype. What really changed things was when we recruited Stéfan Sinclair, who came to work with me at McMaster. He was a genius. John Bradley and I had built a web-based visualization environment that did correspondence analysis, keywords in context, and so on like that… but you had to index the text using the Makebase tool of TACT separately. And then you had to set it up. The visualizations were interesting, but it was very clunky. Stéfan Sinclair, as part of his Ph.D. thesis, had built a system called HyperPo, because he was very interested in OuLiPo, and hypertext. He was doing a Ph.D. in French, at Queen’s, and so he built this HyperPo where you could upload the text to the web, it would index it, and then it would build a display with different sorts of keyword and context statistics. So we recruited him to McMaster.

At the time, I had gotten one of the first really large CFI grants in the humanities. These were infrastructure grants, so it was about $6.7 million, and a big chunk of it was to build a text analysis portal. Do you remember portals? This idea that you could build an environment that can do everything. So we built this all-singing, all-dancing portal. I mean, to be honest, for that one we contracted with a professional development team, but we were following a sort of agile methodology every week. Stéfan and I were there with the head of programming.

What was the year?

We heard about the grant in 2002. The building was happening in 2003, 2004, 2005. TAPoR, the Text Analysis Portal for Research, is still working. But what happened is that, as part of the portal, we built a bunch of small web services that did specific text analysis things. We called them TAPoRware. TAPoRware, Tupperware… Anyway, simultaneously Stéfan was rethinking HyperPo. And at a certain point, we realized that the portal model was too unwieldy. We would have had to get $1,000,000 a year to keep development going, and this was not going to be sustainable with the type of funding you get in the humanities. So we took the best parts of the portal and we split them into two. TAPoR became the discovery part, how you can find out about tools and document them, and Voyant became the actual set of text analysis tools.

It was sort of inspired by HyperPo, but in some ways we broke it up into smaller pieces that we could support. And even if we did not have a grant, we could still, you know, coast for a year or two. And then Stéfan and I began an agile praxis where the two of us would get together in a room, and we would try to do a project in one day. One of us would be at the keyboard, and the other one would be doing research, checking things out, and then we would swap. Like agile pair programming. And the goal there was to do a lot of projects and see what tools were useful, what was not useful. We just cobbled together what we had, and when we got something that worked, we would implement it for Voyant.

At that time it was not yet called Voyant but Voyeur. At a certain point we realized that the word “voyeur” in English has no positive connotations. In French, it is okay. A little bit better anyway. But in English… So we switched to Voyant. Some good friends of ours actually politely told us: this is not the right word for what you are doing. We love it, but it is not the right word.

At that point, both of us had the academic problem of having to publish. So, to some extent, our solution was this praxis. We would plan a project and do it. We would, on the one hand, update Voyant, and on the other hand write a paper that we would give at a conference. The papers became a book, and Voyant 2.0 was released. Our book came out in 2016, Hermeneutica. This was the compromise. Well, not a compromise! In our case, by making sure that we had these hybrid products of book and software together, where the book illustrated the software and the software made the book possible, people could then experiment with us. That was the praxis that we developed and continued.

Have you been making other tools and prototypes? Are they still alive?

A lot of the tools that I did were really more websites, and like many websites, they have fallen. I spend a certain amount of time trying to maintain them. Actually, nowadays, I spend my time writing grants to get the money to hire people, to keep these things going. But Voyant is the main one. All the TAPoRware tools and HyperPo were replaced by Voyant, and to some extent by TAPoR. TAPoR is not that complicated. I have got some other projects I think are cool, but they are nothing like Voyant. Just to give you some numbers, Voyant has about 200,000 or more unique users a year. That creates its own dynamic. Stéfan and I had to move to a more professional, production-oriented approach. And of course, the academy does not reward production. I cannot sit there and say: we have just released Voyant 2.3, and it might have been as much work as three articles. I got credit for Voyant and Hermeneutica and that was it. You do not get it again and again unless you do something dramatic.

Would you say that the tool is made for social scientists? Humanists? Both?

It is made for humanists. Textual scholars. It is a text analysis and visualization environment. Having said that, I happen to know that all sorts of social scientists and other people use it. You know, I get emails from lawyers who use it. An email from the son of a doctor whose father uses it. It gets used by a lot of different people.

Voyant has a user interface, right?

Yeah.

Does it include visualizations?

Yes, quite a few. In some ways it is a set of tools that can compose a dashboard. It has got about 23 tools. Many of them are visualization tools. Some of them are standard, like word clouds, distribution graphs… Some of them are wacky. It has got a mix. There are a bunch of tools that a lot of people do not know about, because they are not in the default skin. People do not play with them that much, but they are there and they are fun.

Would you say that Voyant Tools is a tool for anyone?

There are two features to the breadth of people that use it. First of all, we built language skins in, and we got volunteer teams. We support something like 13 languages. As a result, people in France can use it in French. And in fact, there is a French infrastructure team that has installed its own Voyant server. It has the ability to handle any language that can be encoded in Unicode. Stéfan did a certain amount of work solving problems with Japanese, Hebrew and Arabic, so that we can have those language skins. Many of the groups that developed the language skins were DH groups that wanted to teach with it. A guy in, I think, Dubai, developed an Arabic language skin, because he wanted to teach it in Arabic, and people wanted it in French and Spanish and Russian, etc.

So my sense is that it gets used a lot as an introductory tool, especially for teaching humanities students. When you think about it, they do not have to install anything. You can get a text into it in all sorts of different ways. You just click away and start playing. It works well for that introduction. Scott Weingart called it “the gateway drug for digital humanities”. It is a great entry-level tool. And so that makes it widely used.

When you think about the pattern of use in the digital humanities, you see that humanists do not do research every day of the week, or every week of the year. They are teaching, and then all of a sudden, they do a bunch of research. They need tools that they can pick up quickly, use intensely, put down and then not use for six months. It is not like email, which they use every day. I think that works well. And it is free, so anyone can download it and run it locally.

Is it open source?

It is open source. That may be great, but I do not know of anybody who has looked at this… Oh! That is not true. I only know of one team that has gone in and done something with the code, where they wanted to change something and then reverted back to us.

Do you see yourself as a professional developer?

No.

Are you self-taught?

I have taken programming courses, but not at a professional level.

But you do code for the tool, right?

I do almost no coding now. I do grant writing. We have a digital humanities student who started working on it, and he does most of the programming now.

Do you see yourself as an academic?

Yes.

And of course, you publish papers.

Yes.

Are you proud of coding or having coded? Or is it something you would rather hide?

No, I love it. First of all, I should say that one of the things we have done with Voyant is that we built a notebook programming environment with it (Spyral). So, I do program in that, but I tend not to touch the underlying infrastructure, because that code is too complex for me. But I tend to really like programming.

One of the things I love to do is to teach humanities students to program. I often teach them data analysis in Python. We have them work in Colab and stuff like that. And there is a sort of Aha moment when somebody who has been told that they were bad at Maths, that they could not program, succeeds. When I teach them programming, I spend a lot more time talking about the culture of programming, and I talk about things like Brainfuck, all the playful programming languages that have been developed. I try to get them to not feel excluded by the culture of programming. And then there is this moment when they get something to work. That is fabulous.

I wish I were a better programmer. I wish I had the time to be a better programmer. I am pretty sure that if I did have the time, if I had six months and spent 2 hours a day, I probably would be a better programmer. But I could learn Japanese in that time too! You know, there are only so many hours in the day.

Was Voyant initially made as part of an academic job? And today?

That is, I think, one of the genius things that we did: by developing this praxis of experiments, we were able to make the development academic. We were able to make sure that we got academic credit for the tool because we were always developing it while writing papers, delivering conference papers, and eventually the book with MIT Press… We always made sure that every summer we had two or three papers at conferences. The papers reported on the tools and often had an academic component. But then we would say that we were also adapting this tool. We are playing with social network analysis. And here you can see how we tackle this problem. That was it.

You know, I think everyone in the digital humanities has to find a way. My colleagues in computer science have some of the same problems. Nobody is going to give them tenure for writing code. On the other hand, they will not give them tenure just for writing theory. Well, maybe they would do that for Maths or something like that. So they are always getting the grant, getting the Ph.D. student to write the code, writing the paper with the Ph.D. student… There are these mixed collaborations.

Was it joyful to make tools? Painful? Both?

The original portal was scary because of the size and the amount of money we were spending on it. The commitments. In some sense, when you get a big grant, you have made a big promise. That was very scary. If you are promising that you can build something that your colleagues will use, and then they do not use it… How many people have built great tools that nobody uses? In some ways Voyant was the second pass: breaking Voyant and TAPoR apart and making smaller devices that did one thing well (or one cluster of things). And then at a certain point, we realized: this is very popular! We do not have the problem of nobody using it anymore. Now we have got the problem of how to maintain it.

Are you proud of making Voyant?

I think so, yes. I do not think anyone will still be reading the books and the articles at some point in the future. Voyant… Well, even Voyant will disappear. But I certainly am known for Voyant. Even just at this retreat, a couple of people have come up to me and said: Oh, I teach with Voyant! You know, nobody comes up to me and says: Oh, I read an article you wrote!

Could we say you are a designer of Voyant?

Stéfan was the designer. I was the vice president, the vice designer, if you will. I was more the theorizer, often. And he was the interface designer. He really had a good visual eye in some ways. And he loved to program. And he hated writing papers. Whereas I like to theorize.

Are you a co-maintainer?

Well, now. Stéfan Sinclair passed in 2020. So now in some ways I am responsible. There is a programmer who does the day-to-day stuff. As I said, I write grants, I answer emails, I test things. I do not touch the code any longer. I touch the code in Spyral, but not the main code.

So what would you say is your role in Voyant?

I would say I am project manager.

How long have you been on the path of making tools?

The very first tool that I made with John Bradley, TACTWeb, was the beginning of the path. When we presented it, I believe it was the first time anyone presented a paper on text visualization on the web. We presented it at ALLC/ACH Paris in 1994. And for what it is worth, I can still remember a very important person in the community basically saying that this is not digital humanities, that there is no argument here, that this is just a pretty visualization, and that it is cute but useless. That is when I realized that John Bradley and I had been immersed in scientific visualization, that we had been looking at all the tools that they had, and their rhetoric, and it was a little bit of a surprise for me to suddenly realize that my own community was going like: Eh! This is not serious. I mean, to be fair, this was just one person.

Does Voyant do what you wanted it to do?

Voyant does not do some things that I would like it to do. It is a moving target: when you are trying to make something that is, in some sense, current, it is a continually moving target. This is one of the problems with us academics; we are not rewarded for production systems. Right now, there is a whole new world of what I am going to call “AI tools”. We have topic modeling. We just redesigned it. That works quite smoothly. We have named entity recognition, but that is very computer-intensive, and Voyant is meant to be fast and interactive. So, computer-intensive things are a problem. What we really need now is to be playing with word embeddings and stuff like that. And that is going to be a challenge to do right.

I have been experimenting with ChatGPT 4 and Code Interpreter, where you can now do text analysis just with a series of prompts. You upload a text and you say: give me a graph of the high frequency words. I have this suspicion that the days of Voyant may be coming to an end, that it might be the return of the command line or the prompt line. That will become the new paradigm, for data analytics. And it will not just be Voyant, it will be Tableau, it will be all sorts of tools that get replaced by these new tools.

So, your feelings about the tool did change over time…

Yes. And I anticipate them continuing to change. To be honest, right now, I am nearing retirement. My feeling right now is that I want to find a way to gracefully pass this on to someone else.

Is it the first time you want to pass it on?

No. I have put in a lot of work and I have gotten some funding, so I probably have enough money for five years, and I am going to use those five years to pass it on; or let it die. I am also going to use it on the underlying infrastructure, which needs to be renewed. It is a stack of things. The underlying text engine is Lucene, and that needs to be replaced. When I consult with people who know more about these things, they say: you should start shifting to ElasticSearch. The JavaScript library that we use is also getting a bit dated.

Do you even know who uses the tool?

No. I mean, I know some people use it. One of the people I just met here says he teaches it all the time. I got rid of that. And I certainly do not know his students.

But we can count 200,000 users from Google Analytics. And that is just the people using our server. You can download the code, and people tell us that they are running a remote server, and that is great. There is a long list of people who do this. When I Google Voyant, I find interesting papers. There is a group of people, I think actually in Denmark; I came across some Ph.D. theses that were using Voyant, and I went: Oh!

Do you interact with the users?

The ones who interact with me, yes. I get a regular flow of emails and I try to answer quickly. When COVID hit, Stéfan and I developed (but this was when he was not well) a set of hands-on, teach-yourself Voyant tutorials, and we released them as a series under CC-BY 4.0. You know, anyone can download them, rewrite them, do whatever they want to these documents. We were trying to anticipate people who teach with Voyant and give them a series of lessons. And they can pick the ones they like, and they can rewrite them, and they can jam them together. They can do whatever they want. They can even sell them. The only thing, for CC-BY-4.0, is that they have to give us credit at some point. So, I interacted with people who use those, and of course I used them myself, and tested them with my students.

Another way is through one of the things I have been willing to do. Any time somebody asks me: Would you Zoom into my class and give an intro to Voyant? I say yes. I can give those in my sleep now. I give them on a regular basis. I just come in and talk about Voyant, or I run a one-hour session. I get a certain amount of feedback from that.

Is making Voyant under-appreciated or over-appreciated? Inside or outside of academia? What is your feeling about that?

It is under-appreciated for the purposes of tenure and promotion. Especially in the humanities, academic credit is given mostly for books. In the social sciences it is more articles. But you know, books are the coin of the realm. Articles also. Grants less so. Building websites and tools, you get some credit for it, but it is not the best. I go up every year with a couple of articles and then maybe I have got a website or something like that, and as far as I can tell, that is what gives me the ticks. And if I did not have the articles, I am pretty sure that they would probably recognize the website work, or the tool work, as an adequate replacement. But it would not warm their hearts. That is just at the university I am at. As I said, in some ways we developed a praxis such that we made sure that we were always getting peer-reviewed articles, grants, papers, books, while illustrating the tools, theorizing them, talking about them, and so on.

In Voyant, which design decisions were made specifically for the humanities?

One of the primary things is that you can always get back to the text. In fact, in the default view, there are five panels, and the central one is the pure text. And if you play with Voyant, you see that they are loosely concatenated. Let us say that you have a distribution graph for a word or pattern, and you click on that. You get the keywords in context. You click on a keyword in context, and you get the full text. They are all loosely concatenated, so that you can always get back to the raw text. It would be really nice if we could support formatted text, but that is just beyond us at the moment. So that is one decision that we made.

The second one is that we do not think that our users have a lot of patience. There are certainly people who will spend an hour installing Anaconda, and then one thing or another, and then play around with the tools, and read the books… And a week later they get something. But our tool is meant to be an entry level. You can either use a text we have already put up there, or you can paste some text in, or you can paste in a URL, or you can upload a text, whatever you can, and we will try to deal with it. If you upload a PDF, we will try to extract the text out of it. We are trying to make it as easy as possible for you to get started and play.

Right here is perhaps a contradiction with some of what we have been talking about. I saw one of the displays up there. There are people who are bothered by the idea of people having visualizations where they do not understand how you got that visualization. And I would certainly agree that people should understand it. But I think that there is an intuitive way in which humanists can partly understand it by kicking the tires of the results. We have correspondence analysis. Who the hell understands correspondence analysis? Well, you are from France; everybody understands it in France. But it is not even a technique that was that popular outside of France. I think it is a great technique. But you give people that display, and they see the word clusters and the documents and so on, and they go: Oh! That is sort of cool. Or you show them topic modeling. Who really understands topic modeling? Our view is that we want people to start playing, and they will start understanding through playing. That is our philosophy of dealing with this, rather than putting them in a wizard situation where you have to make a series of decisions, installations, and then only if you have been a really good boy and passed the test do you get to see anything.

You want users to learn through playing. Do you have a more constructed idea of which other practices you actively support in Voyant?

If we go back to that praxis, one of the things we decided to do, was that we were not going to build a tool based on a needs analysis of other people. We were going to build the tool that we wanted, that we needed to do the experiments that we wanted to do. We tried to do a project in a day, decide what the question was, find the text, grab whatever tools we had at hand, try to answer those things, and write up a summary of it. We did not finish it in a day, but we got a good part of it done. And then we continued that. If you look at the book, Hermeneutica, there are theoretical chapters, and then there are these experimental chapters where we walk through what we did. That is what drove us.

I am going to step back and make a general statement. In my experience, when I talk to colleagues who have never really used computer-assisted text analysis, when they describe what they would like, it is a fantasy. Even if you gave it to them, they would not actually use it. It is what they think they would use. But it is not really what would be useful to them. They do not know enough to be a good, reliable guide. At least we had one grant, a funded project called Just What Do They Do?, in which we did actually interview a bunch of people. We showed them different screens, and asked what they could do with this and that… And by and large, I have to say, we had more fun, and we got further, just by doing what we wanted; what we thought was interesting. So, in that sense, we have not done the canonical needs analysis, user studies, etc. We were not building things for other people. We were building things for us, and people like us. And that worked!

Were you surprised by the users? Do they do unexpected things with the tools?

I was surprised by the amount that this was used for teaching. I should have anticipated it. But I should say that I only teach graduate students; I do not teach undergraduates any longer. And I was surprised by how many colleagues use it for that intro DH course. You know, you are going to have a one-week taste of text analysis, and then a one-week taste of something else, and so on. That surprised me a bit.

I have been surprised by some of the people who use it, who I never expected to use it. Like getting this email from the son of a doctor who is saying: My father loves it! He puts all of his patient records in, or something like that. Wow! Setting aside the confidentiality issues, I am often surprised by how little of it people use.

There is a ton of functionality in there. Every time people come to me and say: Can I do X? I say: Yes, you just go here, you do that… Whoa! I did not even know that was there! And to some extent, this is the problem of featuritis and feature creep of software in general. Voyant has been around long enough. We keep on adding features. We are trying to avoid overloading the interface, so some of the features are more subtly integrated. So, if you are in a list view, for example, if you click in the right place, you can choose which columns you want to show. We have things like Z-score etc. that people do not even realize are there, because we have got a tiny little column that is just showing the word and the count. They do not realize that you can show all sorts of other things. And it is gratifying to have anticipated that.

Conversely, whatever you build, somebody is going to come back and say: Yes, but do you have…? There is always a wrinkle that they want. And this is the beauty of adding the notebook programming environment. Now I can say: OK, Voyant does what it does, but you have a notebook programming environment. Get a grant, hire a JavaScript programmer, and you can add your own thing. And if it is cool enough, we will take it and build it into the main tool.

So we are actually trying to find a way not to overload the interface, to support all the different people. I do not know if you have noticed this: I have been using Microsoft Word since the beginning, at least on the Macintosh; every year they add more crap. The toolbar, and so on like that. Have you ever seen those pictures where, if you load all the toolbars, you have got this tiny little square of text and it is just all buttons… And we are trying to avoid that.

What I have been surprised by is ChatGPT+ Code Interpreter. It is nowhere near replacing Jupyter notebooks, Colab, Voyant… but it is going to get there. Technology makes a promise and then often we have to adapt to the technology. It never quite fulfills the promise. But I think that it is going to promise data analysis without any coding, and people are going to learn prompt engineering in order to get the analysis that they want. And then there is going to be a constant flow of stories of jackasses who did not write the right prompt, a little bit like that lawyer that generated something which they then submitted in court. It turned out that their prompt was bullshit, and the model hallucinated.

Would you say that Voyant empowers people? Who does it empower? And is it a good thing?

I think it empowers, yes. It empowers people. It allows them to get a taste of text analysis and visualization, including things like topic modeling and principal component analysis. It allows them to get a taste, but there is something else. We have a bunch of wacky tools, and the wacky tools send an interesting message: you can play. It empowers the people who have to teach in the digital humanities.

People have all sorts of skills. You probably have all sorts of infrastructure behind you. But what if you are some poorly supported person? You know, you are the one digital humanist, in a small liberal arts college. There are no servers that support you. Voyant allows you to do something really interesting with your students. I think it empowers the students through that, it gives them a taste of what is happening.

It also empowered Stéfan and me to build a career around this, in a way that would have been very difficult otherwise. You look back on your life and you go: it could have gone in very different directions.

How did you fund Voyant? Is it precarious?

Yes. I fund it through a combination of different things. I try to get graduate students or research assistants. They help with certain types of things. I have a programmer, and I try to keep him funded.

One of the main things I do works because Voyant is sufficiently successful. Let us say that you are writing a grant. It is a big grant, and there is a place for Voyant, perhaps for a different tool and Voyant. We do a sort of deal. You write Voyant into your grant and I write a letter of support; or you include me as a co-investigator, but in turn you put a budget line in there. So for five years, a certain amount of funding comes to me to support Voyant, to do what you want to do, for example to add part-of-speech tagging.

Did the work of doing and maintaining Voyant put you in a situation of cultural clash with your colleagues?

I have been part of the DH community for a long time. I went to that first conference in 1989. I have outlasted everybody. Well, not everybody; but I would like to say that Stéfan was a genuinely kind person. He was a really good man. More so than I am. He was very supportive of new scholars, graduate students. I aspire to be as supportive as he was. I do not think that we ever had a problem of jealousy or anything like that. Nobody else was really doing anything similar. And we never tried to lord it over anyone. If you wanted to collaborate with us, we were happy to do that. If you wanted to just use Voyant: go ahead! If you wanted to ignore it: go ahead!

The initial TAPoR project, the text analysis portal, was an imperial project; and that was part of the problem. It was trying to say: we can do everything for everybody! That was obviously unsustainable. Or at least we could not figure out how to sustain it. Whereas saying: look, we have got this thing, it is free, it is open source! You can do what you want with it; you can take the code; we publish papers with our ideas: you can take the ideas or disagree with them. I think that, by and large, we avoided any sort of imperial conflicts. Since then, Voyant has received some awards, like the Zampolli award, which is really the top award in the digital humanities for a project. I think that it reflects the fact that it is appreciated in the community. I hope that it is not seen as silencing anybody. In fact, there are all sorts of cool tools out there. I do not think that we have silenced them. I hope we have not silenced anybody.

Can the tool have currency by itself in academia?

There certainly are people for whom the success of the tool makes a difference, so to speak. But at my university, my annual report is going to be assessed by a bunch of chairs and colleagues in the humanities. I get along fine with them, and if I came out with a brand spanking new tool for that year, most of them would give me credit for it; but not for the next year. It would be just like a book: you get credit that year, not the next year. The fact that you are maintaining it does not count. That has been my experience, and Stéfan’s too.

The one thing you can get from the tool is grants. We were collaborating with people and getting grants to maintain that, but it meant that we were regularly getting grants. Which we would not have gotten if we were not the authors of Voyant. We were either on the grants because we were co-developers of a clearly successful project, or because people wanted Voyant as part of their project, because it would put their project over the hump to get the grant. That was the one direct approach. I have talked about this praxis of combinations.

The thesis of our book, Hermeneutica, is that, in some sense, tools bear theory in a different way than texts do. A hermeneutical text says: here is how you interpret a text, here is how you do interpretation. A tool bears a theory but bears it differently. We were playing with this idea that tools can be theories, and we turned around and started telling people that. Our papers were telling people that tools are theories.

Now, did that make any difference to my chair and the Dean? Probably not. I did not think they read any of this stuff, but we were telling it to our colleagues in the digital humanities. And in some ways, this is one of the things that has come up in the 30 years that I have been in this field on a regular basis. We have conversations about how you get credit for digital work. I was on the MLA committee in which we developed protocols for how to get credit for digital work. A digital work could be somebody creating a hypertext novel, digital artwork… all sorts of different things. We have been fighting this one for a long time. You know, I wish I had a story to tell. Jerome McGann used to say: this is only going to be solved one death at a time. The old guard just has to die. Although that is cynical.

Which questions should I have asked?

Two things that we thought a lot about. One is play. I told you that Stéfan’s Ph.D. thesis was about OuLiPo. We thought a lot about it, we theorized about it and we played with the tools. You asked me before if I got any joy out of it. I do not think that we would have gotten where we were if we were not getting joy out of it. And part of the joy was actually thinking about the play. In English, there is a relevant use of the word play. Let us say you have a knob, you can fiddle with it until it breaks. That sort of play, playing with things that do not quite work right. There is a play between theory and implementation, which was really fun. It is like a rollercoaster. You go way up into the theory; we would build this sort of theory, and most of my colleagues were not lucky enough to be able to then go swooping down, making it work, and then making it work for other people. That roller coaster, for me, was about vertigo. Roger Caillois has that theory of play, and vertigo is one of its elements.

The second thing gets talked about a lot in the digital humanities, and we have been talking about it: collaboration. It was very important for me and Stéfan that, year by year, we developed a partnership. It was not a one-off partnership. It was paper by paper, tool by tool, interface fiddle by interface fiddle. I do not think that the type of work we were doing can be done by one person. I may be a fairly good programmer, but Stéfan was better, and the two of us together could do more than apart. If I had had to do it on my own, or Stéfan had had to do it on his own, we would not have been as responsive. There would not have been the dialog, the interplay, between the two of us. It was a blessing. This is just one of the strokes of luck in life, that he and I were together at the same university at the right time, both with our different backgrounds, building things together. By the time we were separated (I went to Alberta, he went to McGill), we were already collaborating. Every week we had a meeting on Zoom, and then we would get together. That is collaboration, and it does not have to be two people. But I think that a lot more attention needs to be paid to the human elements of care. How do you enjoy working on a project with someone over time?

Geoffrey Rockwell
2023-09-19
Dagstuhl, Germany
Questions by Mathieu Jacomy

Thank you so much Geoffrey for sharing this with me, and allowing me to share it with others.

A scale for visualizing ambiguity

7 min read

I summarize here a short talk I gave in Dagstuhl on 2023-09-21. It offers a scale of how ambiguity is taken into account in a visualization. Here is the scale.

Ambiguity…

  1. …as noise (absent)
  2. …as an accident (present)
  3. …as context (+ significant)
  4. …as a problem (+ in focus)
  5. …as a feature (+ articulated)

That scale is intended as a way to push back against the idea that any form of empirical account is a measurement (why not call that measurism, by the way). Not every description can be reduced to a number, in particular when the phenomenon visualized is inherently ambiguous. I do not mean here that ambiguity cannot be measured if you want, but that visualizing an ambiguous phenomenon can be about accounting for ambiguity as a feature.

The scale is intended as an itching powder to apply to the skin of measurism. It asks: where is the feature of ambiguity? Who discarded it, and why? It aims to remind us that reductionism is a choice, to expose its commitments, and to reopen the doors to other empiricisms.

Explanation

I use a toy dataset to illustrate my point. I curated a list of 243 articles about actresses in the French version of Wikipedia, and I harvested the hyperlinks between them. I made a network where the articles are the nodes and the hyperlinks the edges. Then I computed groups of pages through community detection using the Louvain method (modularity maximization). I did it five times. Since it is not deterministic, I obtained different results. I had 3 groups every time, but they were not exactly the same. The visualizations that follow have different ways of accounting for these discrepancies.
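
For readers who want to reproduce this setup, here is a minimal sketch of that step. It assumes a hypothetical edge list file (actress_links.csv with source and target columns) and uses the Louvain implementation shipped with NetworkX; it is not the exact code behind my figures.

```python
# Minimal sketch (hypothetical input file "actress_links.csv" with
# "source,target" columns): run Louvain community detection five times.
# The algorithm is stochastic, so the groups can differ from run to run.
import csv
import networkx as nx
from networkx.algorithms.community import louvain_communities

G = nx.Graph()
with open("actress_links.csv", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        G.add_edge(row["source"], row["target"])

runs = []
for seed in range(5):
    communities = louvain_communities(G, seed=seed)  # list of sets of nodes
    runs.append(communities)
    print(f"run {seed}: {len(communities)} groups")

# Note: group labels are arbitrary in each run, so to compare runs you
# first have to align them, e.g. by overlap with the groups of run 0.
```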

Setup of the experiment

What is ambiguous?

The categories of actresses are ambiguous by nature. The three groups I found can be labeled as such:

  1. Contemporary Hollywood (ex: Julia Roberts, Meryl Streep, Susan Sarandon)
  2. Golden Age Hollywood (ex: Marilyn Monroe, Katharine Hepburn, Greta Garbo)
  3. French Actresses (ex: Catherine Deneuve, Juliette Binoche, Jeanne Moreau)

This partition can be explained by the fact that the data is from the French Wikipedia. Hollywood is very influential in France, but so is the national cinema. But of course, some French actresses have been acting in Hollywood, both now and in the past. It’s not that those categories don’t exist, it’s that not everyone fits nicely into one and only one. The groups exist, but they are not well demarcated. This is where, in this particular instance, ambiguity lies.

Ambiguity is not uncertainty. You can reduce uncertainty by obtaining more knowledge. Ambiguity works the other way around. The more you know about something ambiguous, the more ambiguous it becomes. The more you know about Marion Cotillard, the more it becomes clear that she’s both French and a Hollywood actress. In that sense, ambiguity is an empirical feature to account for.

I will show one visualization per stage of the scale. These are not complicated; I did not even try to do my best, I went for the simplest option each time. They were made in Tableau, except the last one, which was made in Gephi.

1. Ambiguity as noise: not visualized

Here I just picked the first of the 5 renderings of community detection. I just ignored the discrepancies.

Here, you do not even account for the existence of ambiguity: I proactively assumed that each actress could be mapped to exactly one category. I eliminated ambiguity as if it were noise, a nuisance.

2. Ambiguity as an accident: visual anecdote

Here I assigned each actress to a given category if she was matched to it at least 3 times out of 5. Of all the actresses, only Naomi Watts could not be mapped to a group (she had a 2/2/1 split).
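
As an illustration, here is a minimal sketch of that assignment rule, assuming the group labels have already been aligned across the five runs (the function and example values are hypothetical, not the actual code behind the figures).

```python
# Minimal sketch: assign an actress to a group only if she lands in it
# at least 3 times out of 5; otherwise leave her unassigned.
from collections import Counter

def majority_group(memberships, threshold=3):
    group, count = Counter(memberships).most_common(1)[0]
    return group if count >= threshold else None

print(majority_group(["Hollywood", "Hollywood", "Hollywood", "French", "French"]))   # Hollywood
print(majority_group(["Hollywood", "Hollywood", "French", "French", "Golden Age"]))  # None (a 2/2/1 split)
```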

Here ambiguity is apparent, but it is just a detail. It is set up to help you ignore it. Functionally, the result is not much different from ambiguity as noise, but at least, if you pay attention, you have a chance to become aware that something is going on under the hood.

3. Ambiguity as context: visualized as an afterthought

Here I mapped each actress to a group like previously, but I visualized the “definiteness” of that mapping with a glyph. The dot representing each actress is only full if she was strongly mapped to the group (5/5 times). Otherwise, the glyph is smaller. I used a Herfindahl-Hirschman index, which works in many other settings.
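
To make the “definiteness” measure concrete, here is a small sketch of the Herfindahl-Hirschman index computed over an actress’s group counts across the five runs (again a hypothetical reimplementation, not the formula as I implemented it in Tableau).

```python
# Herfindahl-Hirschman index over the 5-run membership counts:
# 1.0 means the actress always landed in the same group (full dot),
# lower values mean a more ambiguous assignment (smaller glyph).
def hhi(counts):
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)

print(hhi([5]))        # 1.0
print(hhi([4, 1]))     # 0.68
print(hhi([2, 2, 1]))  # 0.36, e.g. Naomi Watts's 2/2/1 split
```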

Here ambiguity is framed as context. The premise of that visualization is that (1) you still get one group per data point, but (2) you are made aware that for some data points, the mapping to that group is less solid. The visualization is built primarily as if ambiguity did not exist, and secondarily ambiguity is painted onto it; which is why I characterize it as an afterthought. It does not fundamentally challenge the existence of demarcations, but at least you can now focus on the ambiguous cases.

4. Ambiguity as a problem: it gets in the visual way

Here I used small multiples to display the group split of each actress (the picture below is just a part of it). We can now see the split of Naomi Watts, and of other actresses. We can read the full information. But as a result, we lose the sense of the distribution of actresses into approximately three groups.

Ambiguity is entirely visualized, but it takes all the room. We cannot see anything else, which is a problem. Ambiguity coexists with other things. In this particular case, the groups do exist even though they cannot be firmly demarcated.

5. Ambiguity as a feature: purpose of the visualization

Here I visualized the network of relations from which the groups were computed. I did not draw the edges, but they are reflected in the node positions, through the effect of the layout. This allowed me to map the groups, but I did it intentionally loosely, using these big circles, refraining from drawing precise demarcations.

In this setting the eye can navigate and tell you how strongly each actress is associated with her group. In the middle, we find quite a few actresses that do not seem firmly attached to any group. We find Marion Cotillard between France and Hollywood, and so on. The eye can grasp both the existence of the groups and the continuum between them.

Beyond the case

Forget about the case; it was there to give you an example. I don’t pretend that network maps are the best solution to visualize ambiguity. I fully acknowledge my bias here, and I don’t have to apologize for it. I became aware of the importance of ambiguity by realizing that it was the reason why network maps were so good, in practice, for exploration. My intuition is that extensionist (non-reductionist) visualizations are good at that in general. But there might be other kinds of visualizations, which I have not thought of, that can represent ambiguity as a feature. It’s not about my particular example.

It’s about the scale. If you try applying it, you will see that the scale makes it clear that some otherwise great types of charts are doomed, by design, to fail at representing ambiguity as a feature. I hope it helps people figure out what part of our visualization apparatus can be rethought to meet non-measurist goals, as we find in the humanities. And there is more to that than ambiguity, but that is for another post.

We all have “dimentia”

12 min read

Dimentia = dimension + dementia
You have dimentia, and that’s a good thing. If you know it. Let me explain.

Dimensions

Our minds are unequipped to apprehend a space with more than 3 dimensions. What would it be like to live in 4 dimensions? Despite our ability to painfully build a few basic intuitions, the question makes no sense. What would a tree or a dog look like? Too hard. Let’s take a simpler case: the cylinder. What does a cylinder look like in 4D? It turns out it could be three different things, called the cubinder, the spherinder, and the duocylinder. Following the same principle, did you imagine the different kinds of dogs that could exist in 4D? If you’re like me, what your mind actually generates when you think of a 4D-dog is just a 3D-dog in a 4D space. Our intuition is powerless, it fails absolutely. For instance, in 4D, two planes can intersect in a point. How is your intuition doing?

But 4D is child’s play. What about 1000D space? It looks like we don’t need to build intuition for such an absurd thing, but it turns out that feature spaces in machine learning are easily that size, if not orders of magnitude bigger. I write “size” but I mean dimension. We even lack the words. When we meet these spaces in computer science, we are naturally drawn to applying our 3D intuition to them, but that would be a huge mistake. A mistake so common that it has a name: the curse of dimensionality.
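
To make this a bit more tangible, here is a small numerical illustration of one facet of the curse of dimensionality (a standard observation, not something specific to this post): as the dimension grows, the distances between random points concentrate, and “near” versus “far” loses its contrast.

```python
# Distance concentration: the relative spread between the closest and the
# farthest pair of random points shrinks as the dimension grows.
import math
import random

def relative_spread(dim, n_points=200, seed=0):
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    dists = [math.dist(a, b) for i, a in enumerate(pts) for b in pts[i + 1:]]
    return (max(dists) - min(dists)) / min(dists)

for dim in (2, 10, 100, 1000):
    print(f"{dim:>4} dimensions: spread = {relative_spread(dim):.2f}")
# The printed spread drops steadily with the dimension: in 1000D, the
# closest and the farthest pairs sit at almost the same distance.
```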

But 1000D space is child’s play, because it is Euclidean. Even in 1000D, you can draw a 2D triangle and it behaves like you expect. In 2D hyperbolic geometry, which is not Euclidean, the triangle breaks your intuition. If you try to build the biggest triangle you can, you end up with one delimited by 3 parallel lines (you read that right) and even so, its area is finite (again, you read that right). In fact, in hyperbolic space, there is a maximum area for a triangle, and more generally for any polygon. And of course, you could have a 1000D hyperbolic space. And even weirder things. Networks are non-Euclidean spaces as well, and hyperbolic space is remarkably useful for network analysis and visualization. Building intuition for network topology is actually so hard that we generally fail to account for the fact that, depending on the network, the challenge might be completely different. A lattice may behave like a 2D plane while a large, real-world, scale-free network may behave like a high-dimensional, heterogeneous, hyperbolic space. Our intuition has been left far behind at this point.

To get an empirical taste of 4D space, take a look at 4D Toys and the making of Miegakure, both by Marc ten Bosch. Also take a look at the upcoming 4D Golf and Hyperbolica for an experience of non-Euclidean spaces, both by CodeParade.

Complexity

I believe that complexity refers to the horizon of our understanding. It looks like complexity can be defined in itself, as an empirical feature that we may one day get to understand; but that is an illusion. Complexity is generally described as entanglement. But there are infinitely many entanglements, and just a few disentangled things. The things we see as disentangled are those we can parse, and the rest we call complex. It follows from a simple alternative: either anything in existence can fit within our limited understanding, or not. The extraordinary claim of our entitlement to understand everything lacks extraordinary evidence; it lacks any evidence at all; and it ultimately boils down to main character syndrome. Once again, we fail to realize that we place ourselves at the center of the cosmos.

Entanglement is tricky because it looks so material. Entanglement is knottiness, and everyone knows a knot, even though there are infinitely many kinds. But entanglement hides a more subtle form of resistance than the frustration of undoing shoelaces. It resists our general strategy to know: divide and conquer. The frustration of complexity comes in its own peculiar flavor, where the phenomenon to know disappears as we divide it, forcing us to hold all the parts at once in our mind. Networks are a good example, because their raison d’être is to care about relations (the links) instead of substance (the nodes). If you can study the social structure of a group from the properties of its members, then you don’t need network analysis; Excel suffices. But the social structure lies in the relations, and your phenomenon disappears if you divide the group into its members, precisely because you lose the relations. The phenomenon lies in the relations. Divide-and-conquer is lossy, and complexity makes you pay that loss back with interest. Entanglement is, in essence, resistance to division. It may come as a surprise, but complexity is not complicated. That’s because our ability to know is not that complicated, and complexity is entirely defined by it.

Our limitation is the fundamental reason why we divide things to know them. We divide into chunks we can apprehend. We trade time for space, which is the fundamental operation of analysis, the essence of computing. We absorb in several passes what we can’t take in all at once. And we have been quite successful at dividing anything. The problem does not lie in the size of the chunks, which we can always divide; the problem is the price of dividing. Division being lossy, when you divide in two you generate a third chunk: the loss, the cost of the cut. This additional chunk is not always small. When it is big, we call the phenomenon complex. Complex phenomena resist by overmultiplying parts as you divide them. Your time too is limited.

You unpack phenomena for a living and you believe that complexity is real, independent of your subjectivity. When a phenomenon resists you, you find more clever ways to divide, and you conquer. You don’t see complexity primarily as entanglement, but as emergence. The shape of dunes is not written in the laws of wind, or in the grains of sand. It emerges from the complex system of sand and wind. The shape of dunes is written in the laws of complexity. Dividing works for a while, because complexity is a ladder. The first steps only challenge the most obvious ways of dividing, and you find clever alternatives. It’s not just the wind and the sand, it’s also the system. The system is the cost of the cut, the supplemental chunk. You divide the entanglement into its pieces plus their relations, and you see that extra chunk as just another chunk. But higher up the ladder of complexity, cutting the supplemental chunk also comes at a price. You need to account for the meta-relations between the parts and their relations. You need to account for the meta-system of the parts and the system-as-a-part. You need an instrument to analyze the results of your analysis. And so on. The self-serving postulate that everything has its laws becomes a liability. Either complexity forces you to accept that knowing is relative to your limited ability to describe phenomena as systems, and therefore subjective, or it forces you to postulate the existence of unknowable laws, which makes you leave empiricism for pseudoscience.

Remember that Leibniz believed that we could one day prove all of mathematics, and that Gödel killed that dream.

Dementia

Facing multi-dimensional spaces, or the topology of complex networks, is facing the abyss of our own finitude. Our instinct makes us look elsewhere. The sense of loss and helplessness that comes with the defeat of our understanding is generally deemed unproductive. We take what we get, we leave what we don’t, we forget it exists, and we turn our backs on the abyss. It is simply more practical to delude ourselves into thinking of any space on the basis of 3D space. This leads to making mistakes on the way, but we fix them once we get there. We give them a name, like “curse of dimensionality”, think of them as a problem, and find solutions. Divide and conquer, always. That’s all we can do. That has always worked, right?

I’m interested in the possibility that loss and helplessness can be productive. My primary motivation is to overcome collective denial about complexity. I see a potential problem in the fact that we, human beings, have basically the same limitations. Intelligence is not like wealth, where the one-percenters hold orders of magnitude more than the rest. What makes our geniuses geniuses is not an ability to think thousands of times faster or to memorize near-infinite knowledge. If we were computers, none of us would be supercomputers; all of us would be consumer laptops. This makes it easy to mistake our own homogeneity for a law of nature. I glimpse the possibility that we have collectively agreed to pretend that anything we can’t divide and conquer simply does not exist. That is why I’m up for trying another path. Maybe living with a higher sense of loss and helplessness could help us see what we might be missing otherwise.

I take seriously the possibility that our confidence gets in the way of the science we do. I am looking for ways to temper that confidence, to reopen the doors to things prematurely understood. I try to factor in the possibility that we might be unknowingly blind. How would you know that you are missing some senses? Missing compared to what?

This leads me to dementia, and acquired disabilities in general. Some people are forced to experience the loss of some of their abilities. I don’t know much about this, beyond having friends in that situation, and having experienced a depressive episode myself. Here I’m just sketching a way forward. I think that we should draw inspiration from people who have experienced that kind of loss. I am particularly interested in dementia, because I believe that the affected person may experience various degrees of awareness of their own condition. I wonder: how do you become aware of your own impairment? How does this awareness change your relation to the world? How can you overcome your limitations? I want to harness the psychedelic nature of dementia and repurpose it to help us touch the walls of the human thought box, so that we can some day break free of it.

Dimentia, with an “i”.

If you have ever tried seriously to learn about the fourth dimension, you have heard about Flatland, the 1884 novel by Edwin Abbott. The main point of the book is to have you experience the 3D world from the standpoint of a character living in the 2D world, Flatland. You realize that the 2D world is quite different from our own, and that the 3D world cannot look the same to a 2D person as it does to us 3D folks. The main protagonist, Square, sees his flat world as a line, the same way we see our 3D world as a plane, even though in both cases we can have a sense of depth. Being pulled out of Flatland into the third dimension does not change Square’s visual system; he still sees the world as a line, even though what he sees is different. He gets to experience the existence of the 3D world, but that does not make it much easier to understand.

An episode about Flatland in Randall Munroe’s webcomic XKCD.

Square has what one could call dimentia: dementia about dimensions. He experiences, to some extent, his own inability to apprehend the third dimension. Flatland, the book, is famous for this precise reason: it gives you a practical idea of the problems you would face experiencing the fourth dimension. It makes you understand why you may never fully get it, and yet what kind of work you would have to do to compensate for your own limitations.

The word dimentia comes from Johanna Drucker. Last week I was in Dagstuhl for a seminar about Visualization and the Humanities; I will say more about it in a few upcoming posts. I had a short presentation to give, and during the questions, I made the same point as above about drawing inspiration from people with dementia to understand complex spaces. As I went back to my seat, Johanna gave me a post-it with dimentia written on it. I understood that it referred to some previous work of hers, but I could not find a reference to it; it may not be published. Dimentia is also a common misspelling of dementia, and has an entry in Wiktionary.

The dimentia post-it passed to me by Johanna Drucker.

The point of dimentia is that we all have it. Dimentia is a natural condition of the human being. If we had the opportunity to meet a being vastly superior to us (alien, AI, angel, pick your favorite) then maybe we would give a name to all the things they have and we don’t. Then it may become clear that we have dimentia. And it would crush our main character syndrome. Until then, we are inclined to believe that we’re perfectly fine people, made in the image of God. That’s why we need a bit of creative help to think outside the box. Let’s take it seriously that we have dimentia, and start wondering how to factor in our inability to deal with a number of things beyond our cognitive abilities.

You have dimentia, so does everyone else, and it’s a great thing as long as you are aware of it. I suspect that we have more to gain by studying understood things as if they were incomprehensible than by studying incomprehensible things as if we were entitled to understand them. Because at the end of the day, I am not so confident that we actually understand what we believe we do. At least for the topology of complex networks, it seems to me that we have left a number of unknown unknowns on the side of the road.

PS: Thank you Johanna!

Quoting data visually

22 min read

There is a sense in which a visualization can quote the data it represents, and not just summarize it. I find the analogy with quoting text useful. In this post, I explain how to spot such visual quotes, how they work, and when to use them in your own visualizations. To do so, I will contrast traditional visualization, which I call reductionist, with visualizations that quote, which I call extensionist.

Why we quote (in text)

Let us forget about visualization for a moment, and think of quotes in text. Quoting divorces a statement from its author and makes it available as an object of discourse (Olson & Oatley, 2014). Quoting is reusing, and therefore, reinterpreting.

It may serve multiple purposes. One quotes a figure of authority to get legitimacy. A researcher quotes their colleague to pay their intellectual debt. In those examples, one could summarize or paraphrase instead of quoting. But I am interested in the situations where we quote precisely instead of paraphrasing or summarizing.

“The fundamental meaning of quote marks is conventionally delineated as based on a distinction between ‘direct’ and ‘indirect’ speech. To surround words by quote signs signifies that we have someone’s exact written or spoken words and excludes the possibility that it might merely be a paraphrase, surmise or reinterpretation. Inside the signs are the direct words of some other voice, marked as of a different status from a second-hand processing or reshaping of them.” (Finnegan, 2011)

The difference between direct and indirect speech is not just about minimizing the distortion inherent to the reinterpretation, it is about providing the reader with a way to double-check that reinterpretation.

Here is an example. During the 2008 US presidential campaign, a claim circulated that vice-presidential candidate Sarah Palin had said she could see Russia from her house. The implication was, for Americans eager to see her ridiculed, that she used the absurd statement as a geopolitical insight. The claim originates in an interview on ABC News. However, Palin’s actual quote is quite different: “you can actually see Russia from land here in Alaska.” An objectively true fact, which Palin did not frame as grounds for any geopolitical insight (Mikkelson, 2011). In this example, the actual quote gives the reader a chance to assess the claim by themselves, critically.

Note that the difference made by quoting is not about the faithfulness of the transcription, but about trust:

  • If I write Palin said she could see Russia from her house, the reader may think that I have misunderstood her or that I am ill-intended.
  • If I write Palin said you can actually see Russia from land in Alaska, the reader may also think that I have misunderstood or bent her words. The reader has no way to know that my statement is more faithful.
  • If I write Palin said: “you can actually see Russia from land here in Alaska”, the reader only needs to trust that I did not manipulate the quote.
  • And if I write Palin said: “I can see Russia from my house”, the reader also just needs to trust that I quoted appropriately; but I would pay a much higher price for a fake quote than for a stretched interpretation (litigation, credibility loss…), so the reader has reasons to trust me more.

Quoting engages the author to a higher degree than paraphrasing. It can be efficiently combined with summarizing, as the reader can skip the quote if they do not want to double-check the faithfulness of the summary. Quoting in visualization works the same.

Quoting data in a visualization: example

Here I showcase an example from my own work where quoting is explicit. It consists of a poster showing a network map of the words used in academic papers about AI and algorithms. I have described the method in a previous post, but you don’t need to read it as I will describe the poster from a reader’s perspective.

If you walk towards the poster, you first see its general shape: red lumps and tendrils, unequally distributed across space, with a big hole more or less in the middle.

General view of the map

If you come closer, you see labels. Big labels for the biggest chunks of the map, in purple and mostly around the edges. Medium labels in red, pointing at clusters or running along the tendrils.

The zoomed-in section above is about social science (on top) and economics (bottom-left). It contains vocabulary from different fields, but always within papers mentioning algorithms, AI or machine learning (as this is how the corpus was delineated). In that sense, it is a semantic map. The clusters consist of words (or multi-word expressions) that appear in the same papers. The red labels form a manual coding of the different clusters: we (Anders Munk, Matilde Ficozzi and I) read samples of the abstracts to summarize what the related papers are about (process in this post).

The purple layer consists of very general annotations. The red layer consists of more precise annotations (manual coding) but is still, in some sense, a summarization. If the visualization only included these layers, the reader would have to trust our ability to represent the underlying data (expressions connected by co-occurrence) as a picture. They would see the image below and wonder: why did they draw the shapes like this?

Annotation layers only

This is why we also featured the underlying data itself. That layer consists of dots representing the expressions, and they have been placed by a network layout algorithm. The labels are very small, and even so, we could not display all of them. A light shadow is highlighting the areas where the dots are densely packed (explanations there). That layer just by itself looks like this:

Underlying data layer only

The underlying data layer is not very readable, but it is useful to the reader who wants to double-check our annotations. This is why we combined the two layers into the final image. The underlying data layer is present in cyan, and you can see where the dots are and read the expressions by getting very close to the poster. The cyan appears black when superimposed with the red. The reader can see that the red shapes follow the contours of the groups of cyan dots, but only imperfectly.

Annotations (red) are superposed with underlying data (cyan)

By design, the annotation layer is easy to see and the underlying data layer requires an effort. The red layer is a summarization, while the cyan layer is a quote. There are different ways to articulate these two layers. In this instance, we used an anaglyphic split of the color spectrum to allow seeing one layer or the other through a cyan or red filter.

As a result, one can navigate through the layers by wearing anaglyphic 3D glasses and blinking from one eye to the other.
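To make the layering more concrete, here is a minimal sketch (not our actual production pipeline) of the same principle: a cyan layer that quotes every data point, and a red layer of annotations drawn on top of it. The positions, cluster outlines, and labels are invented for illustration; only the red/cyan split is taken from the poster.

```python
# Minimal sketch of the two-layer principle: a cyan "quote" layer (one dot per
# data point) under a red "summary" layer (hand-drawn cluster outlines).
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

rng = np.random.default_rng(0)

# Hypothetical data: two clusters of expressions placed by a layout algorithm.
cluster_a = rng.normal(loc=(0, 0), scale=0.4, size=(200, 2))
cluster_b = rng.normal(loc=(2, 1), scale=0.5, size=(150, 2))
points = np.vstack([cluster_a, cluster_b])

fig, ax = plt.subplots(figsize=(6, 4))

# Quote layer: every data point, small and cyan, readable only up close.
ax.scatter(points[:, 0], points[:, 1], s=4, color="cyan", zorder=1)

# Summary layer: red annotations drawn on top of, and only roughly matching,
# the groups of cyan dots.
for center, radius, label in [((0, 0), 1.0, "cluster A"), ((2, 1), 1.2, "cluster B")]:
    ax.add_patch(Circle(center, radius, fill=False, edgecolor="red", lw=2, zorder=2))
    ax.annotate(label, center, color="red", ha="center", fontsize=12, zorder=3)

ax.set_aspect("equal")
plt.show()
```

Because the two layers use near-complementary colors, a colored filter (or the red/cyan lens of anaglyphic glasses) suppresses one layer and reveals the other.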

If you spend time with this visualization and you compare our annotations with the underlying data, you realize that the clusters do not represent the underlying data equally well everywhere. We did our best, but we had to compromise sometimes.

In some cases, the clusters were well delineated, and you can check that there are (almost) no cyan dots around the red clusters. In these situations, the underlying structure could be appropriately visualized as a single thing, a single cluster.

In other cases, the clusters detected (by clique percolation) were partially entangled, overlapping. This situation represents the inherent ambiguity of a semantic space, the continuity of meaning between topics. Our annotations provided distinctions, but the underlying data shows that the distinct topics are interfering with each other: there are no clear gaps in the continuum of dots.
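For readers unfamiliar with clique percolation, here is a toy sketch with networkx showing why the detected clusters can overlap: a term that participates in cliques on both sides belongs to two communities at once. The graph and terms are made up; our actual corpus and parameters were different.

```python
# Toy clique percolation: overlapping communities share the node "model".
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# Hypothetical co-occurrence graph between expressions.
G = nx.Graph([
    ("algorithm", "bias"), ("bias", "fairness"), ("fairness", "algorithm"),
    ("algorithm", "model"), ("bias", "model"),
    ("model", "training"), ("training", "data"), ("data", "model"),
])

# Communities are unions of adjacent 3-cliques; "model" ends up in both.
for community in k_clique_communities(G, 3):
    print(sorted(community))
# -> ['algorithm', 'bias', 'fairness', 'model'] and ['data', 'model', 'training']
```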

In yet other situations, we could define a cluster but with fuzzy borders. Instead of a tightly packed set of expressions that always come together, we had a nebula of loosely co-occurring terms orbiting around an identifiable center. This is visible in the visualization as clouds of dots floating around the red cluster. We then tried to draw fuzzy borders around those clusters. The reader can look at the surrounding expressions and ponder whether they belong to the cluster.

Finally, some of the structural clusters we detected were stretched by the layout algorithm, making them look like bridges. Those clusters can be called bridges, but it is worth stressing that from a structural standpoint, they are no less clustery than the others. We tried to capture that feeling by drawing the bridge on top of the appropriate dots, which you can check by yourself.

If you have tried navigating these images, you should have a sense of how the quoted data layer contextualizes the more readable summarized layer. This way of quoting data is admittedly sophisticated, but a similar effect lies at the heart of what I have called “big data visualization” in a previous blog post and in my PhD thesis; I do not like the name though, so today I will go with extensionist visualization.

Extensionist visualizations do quote

I intend the term extensionist as a counterpoint to reductionist. A reductionist visualization summarizes. There is nothing wrong with reductionism. It is at the heart of science, statistical analysis, and traditional data visualization. Here is a reductionist chart:

Number of recorded deaths of migrants in the Mediterranean Sea from 2014 to 2022 (source)

What makes it reductionist is the act of reducing the phenomenon (migrants drowning in the Mediterranean Sea while trying to reach Europe) to one of its features (evolution over time). As Latour (1999) theorized, reduction loses locality, particularity, materiality… but also amplifies compatibility, computability, universality… Reduction is a productive tradeoff. Basically, it summarizes. We are very used to it, and most visualizations are like this, so it does not stand out. But here is a non-reductionist visualization, an extensionist one (click to enlarge).

The Missing Migrants Map by Valerio Pellegrini and Michele Mauri

This piece by Valerio Pellegrini and Michele Mauri has been produced for the Italian newspaper Il Corriere Della Sera and won the Kantar Information Is Beautiful Award in 2016. It is composed of multiple smaller reductionist visualizations, but the central part is extensionist:

Zoom on the central part

The visualization dedicates space to each and every data point in the source corpus; and by data point, I mean the record of a dead or missing migrant. Those records are not aggregated into statistics, but visually spread out so that we get a sense of where it happened, and of the magnitude it represents. This lack of aggregation is the defining feature of an extensionist visualization.
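For the sake of illustration, here is a minimal sketch (with made-up records, not the actual migration data) of the difference between summarizing a table of incidents and quoting it: the reductionist panel aggregates the records per year, while the extensionist panel gives one mark to each record.

```python
# Reductionist vs extensionist treatment of the same (hypothetical) table.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical records: one row per incident, with year and position.
records = pd.DataFrame({
    "year": [2014, 2015, 2015, 2016, 2016, 2016],
    "lon":  [12.5, 13.1, 14.8, 12.9, 15.2, 11.7],
    "lat":  [34.2, 35.0, 34.6, 33.9, 35.3, 34.1],
})

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Reductionist: aggregate the records into one number per year.
records.groupby("year").size().plot.bar(ax=ax1, title="Summary (per year)")

# Extensionist: one mark per record, placed where it happened.
ax2.scatter(records["lon"], records["lat"], s=20, color="crimson")
ax2.set_title("Quote (one dot per record)")
plt.show()
```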

To circle back to my argument about quoting in text, it is worth noting that the extensionist visualization works in tandem with the reductionist ones, the same way quotes and summaries get along well. If you just want to know how many migrants are missing, you can just read the text (1,700 died and 2,200 were missing in 2015); if you want to know when it happened, you can look at the top-right chart. The map provides additional context to various ways of summarizing the same information; it combines with them efficiently. And of course, it is less abstract. It makes the missing migrants more real to the reader.

Seeing each dot as a missing migrant makes the visualization striking. This is made possible by quoting the underlying data literally. In this case the quoting is not a supplement to the main message, it is the main message.

Extensionist visualizations are made possible by big data, because they need many data points and relatable numeric dimensions (in this case, the geographic position). A visualization is extensionist when it visually quotes the data points in a way that displays recognizable patterns, and lets the reader engage with them without reducing them to a specific message. In that sense, extensionist visualizations differ from traditional visual communication because they offer the reader the possibility to explore.

In other words, extensionist visualizations are the heirs of cartography. Most cartographies are extensionist insofar as they provide knowledge without conveying a message, and let you recognize the patterns relevant to you. But extensionist visualizations do not necessarily depict geographical spaces. Typically, network maps are extensionist visualizations depicting non-geographical spaces. Also note that extensionist visualizations do not necessarily consist of dots. In the wind map below, the data points are essentially lines.

The Wind Map project by Fernanda Bertini Viégas and Martin Wattenberg (featured in the MoMA)

Reductionist visualizations are not extensionist because the aggregation they involve summarizes instead of quoting, so the visual does not refer to the data in a literal way. The point of the reduction being to obtain new insights, the visual patterns are, by design, co-produced by the method. We can summarize a series as an average or as a median, both being equally valid yet different, each giving us a different insight and interpretation. The reduction method is baked into the pattern. So although we always see patterns in a visualization, those are not always from the data. A bar chart has bars, which is a visual pattern, but it comes from the method, not from the data; conversely, the swirly pattern in the wind map above comes from the data.
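A trivial example of the method being baked into the pattern: the same (hypothetical) series reduced with two equally valid statistics, yielding two different stories.

```python
# Two equally valid reductions of the same made-up series tell different stories.
import statistics

durations = [1, 1, 2, 2, 3, 40]  # hypothetical values, with one outlier

print(statistics.mean(durations))    # ~8.17: the outlier dominates
print(statistics.median(durations))  # 2: the outlier barely matters
```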

Note that extensionism is a type of visualization, not a firmly delineated category. A visualization can sit somewhere between extensionism and reductionism. This happens when the data points are not manifested as literally as they could be, which is the case most of the time, because visualization is, in essence, translation. For instance, in a network map, the layout algorithm mediates the structure, and therefore the visual patterns depend on it, in addition to the data. The quote is only as literal as the algorithm is transparent to you, which depends on your expertise. One easily realizes that visual patterns (like clusters) only arise when the network has certain properties (a community structure), but unpacking how that translation works remains hard. The reader may find themselves in the relatively common situation where they trust that the visual pattern comes from the data, and yet they cannot explain how, so they cannot fully interpret the visualization. The edge case is when the reader is uncertain about whether patterns come from the data or something else, like the method, the algorithm employed, or a manual intervention. In that case, one could say that the visualization is partially extensionist.
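To illustrate that co-production in the case of network maps, here is a small sketch using networkx: a spring layout only produces visible lumps when the graph actually has a community structure, and the exact shape of those lumps also depends on the layout itself. The generators, sizes, and probabilities are arbitrary choices, not a recipe.

```python
# Visual clusters depend on both the data (community structure) and the method
# (the spring layout that mediates it).
import networkx as nx
import matplotlib.pyplot as plt

with_communities = nx.planted_partition_graph(4, 25, p_in=0.3, p_out=0.01, seed=1)
without_communities = nx.gnp_random_graph(100, 0.08, seed=1)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, G, title in [(axes[0], with_communities, "community structure"),
                     (axes[1], without_communities, "no community structure")]:
    pos = nx.spring_layout(G, seed=1)  # the layout mediates the structure
    nx.draw_networkx_nodes(G, pos, ax=ax, node_size=15)
    ax.set_title(title)
plt.show()
```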

The noema of extensionist visualization

As the defining feature of extensionist visualization is to offer visual patterns to explore, its noema is: the data have patterns (genesis of this idea in this post).

Noema basically means essence. I borrow the term from French semiotician Roland Barthes, who theorized that the noema of photography was that-has-been. The noema refers to the process through which we attribute meaning to a piece of media. For Barthes, we attribute a meaning to a camera picture on the basis that a mechanical chain of reproduction has taken place between the photographed subject and the person watching the picture. As such, even though what is represented may trick us in various ways, and not be “real”, and even though we know it, we still assume that what we see has, in some way, been. Roland Barthes did not have Stable Diffusion.

Similarly, we make sense of extensionist visualization by assuming that the representation of the data points is sufficiently literal to grant that the patterns we see originate in the data. The visual patterns bear meaning precisely because we make sense of these visualizations as quotes of the data. But importantly, we might not have access to that meaning. We may only recognize that there are patterns in the data, not necessarily what those patterns are and how to interpret them. Look at the wind map shown before, and its massive spiral near Dallas: can you tell whether it is a hurricane or a normal wind pattern? I cannot, and yet I see the pattern, and since the picture quotes the data points, I can trust that the recorded winds are that swirly. The pattern is in the data; yet I lack the knowledge necessary to tell if this is common or remarkable. I cannot interpret it.

The purpose of the noema is to explain the appeal of extensionist visualizations. My motivation for conceptualizing it came from reading the excellent paper The Politics of Method: Taming the New, Making Data Official (Ruppert & Scheel, 2019). The authors analyze the showcasing of a dynamic visualization of the Estonian population to Estonian officials (screenshot reproduced below). They write: “The moving red dots become not only a vehicle for the data, but first and foremost for its claimed self‐evidence. … Through this ‘realist trick’ (Law 2012) mobility is enacted as a reality that exists independently of the methods that are used to describe it. There appears to be a seamless correspondence between the visualization (the moving dots) and the reality (commuting patterns in Estonia) it represents and renders ‘the phenomenal world (as if it) were self‐evident and the apprehension of it a mere mechanical task’ (Drucker 2011)” (emphasis mine). I agree with Ruppert and Scheel that extensionist visualizations are problematic for their self-evidence, which hides the existence of a mediation and ultimately tricks the reader into believing that Big Data (in this case) provides an unfiltered access to reality itself. But I disagree that the self-evidence is merely “claimed” and the “correspondence” between patterns and data just an illusion. The purpose of the noema, as a concept, is to help articulate that extensionist visualizations are dangerous because they are powerful, yes, but that this power can also be used in legitimate (non-misleading) ways, which can be very useful.

Screenshot of a dynamic visualization of the Estonian population over time, obtained from mobile data, whose use in the public sector was analyzed by Evelyn Ruppert and Stephan Scheel in The Politics of Method: Taming the New, Making Data Official (Ruppert & Scheel, 2019)

Deconstructing extensionist visualizations

Extensionist visualizations are worth deconstructing because they are powerful. It is most important to realize that our recognition of visual patterns (if any) must be taken seriously. The visual patterns are as real as it gets to the reader who perceives them; but the patterns are co-produced by the visualization method and may not come from the data, or not entirely.

As extensionist visualizations rely on a form of quoting, the correspondence with data points is generally stated in the legend or somewhere else. The reader has reasons to trust (or not) the authenticity of the quoting, which is external to the visualization itself. Trust is built outside the visualization, but also within it. Indeed, visible patterns reinforce trust in the authenticity of the quote, for at least two reasons. First, the presence of visual patterns suggests that quoting the data points is a design decision aimed at displaying them (the patterns). It provides a justification for the extensionist design. Second, the (assumed) imperfections in the patterns suggest that the quoting was transparent about the method’s limitations. The reader is given the autonomy to double-check the author’s interpretation, which helps build trust. Those two points do not imply that the visual patterns exist in the underlying data, but they have value on their own. Doctoring an extensionist visualization is actually harder than doctoring a reductionist one, because it offers many more opportunities to detect the fraud.

This ability to build trust is the source of the outstanding convincing power of extensionist visualizations. The accountability is real, it is not an illusion, but it does not fully cover the correspondence between the visual and the data. Indeed, visual patterns are co-produced by the data and the method, so that a visual pattern points at the existence of a pattern in the data, but that pattern might be quite different if the method has interfered with it. The reader who perceives patterns is justified in believing that the data have patterns (the noema of extensionist visualization), but they cannot know what the patterns are in the data and how to interpret them without specific knowledge about the method employed. The danger of extensionist visualization lies in the reader’s excessive trust that what they see is an unfiltered representation of the data, or worse, of reality. Despite their trust, the reader should not forget that the visualization is a partial view on the data, and that data only partially capture empirical reality. To mitigate the invisibilization of the mediations involved, it is important to provide adequate context, especially for extensionist visualizations.

How to spot visualizations that quote data

When a visualization quotes data points, it has an overabundance of graphical elements. This is the main tell. Then you should check whether you are given the autonomy to make your own interpretation, or to challenge the author’s interpretation. Finally, you should check how the graphical elements relate to the data points, which should be explained. The simplest case is when each data point is represented as a dot, but there are many other possibilities.

What to do when facing a visual quote?

A visualization quotes the data points for a reason. Check why, because you may not be the audience. Don’t waste your time if you cannot understand the patterns. Exploring takes time, so you probably need a good motivation to invest your energy in it.

If you want to engage, I suggest this approach. Mind the patterns you recognize, and ponder whether they come from the data, the method, or a combination of both. Be methodical about the mediations. What process led to the graphical elements being shaped the way they are? What does it tell about the origin of the visual pattern? What can we take for granted about patterns in the data, and what can we not? Mind that you may be inclined to prematurely conflate the visual elements with the data points. The conflation is plausible, and it is cognitively simpler to navigate, but it will make you miss crucial elements about the validity of the visualization and its context.

More generally, when it comes to exploring data visually, don’t be docile. An intuitive and comfortable visualization is not your friend. When facing an extensionist visualization, you will see the expected patterns first. Your first finding will often be a confirmation of your beliefs, which will inflate your trust in the visualization too early. Slow down, suspend your preference for agreement, and engage with the visualization further. Become active and indocile: look for flaws and failures. You have seen things that are present where they are expected to be. What other expectations are not met? And what are the unexpected things, things that you expected to be absent but are nevertheless present? What are the unexpected patterns? Build your trust on the possibility that you may disagree with the patterns, not on the possibility that you may agree; because you will always agree to some extent. You will always find a mix of agreement and disagreement, and the agreement will always come first. More about this in Thinking through the Databody, a chapter I coauthored (Munk et al, 2019).

Can you quote data in your visualizations?

Here is my best insight: quoting data points only makes sense with relatable spatial dimensions. Not all dimensions give you the opportunity to go the extensionist route, and you may have to be reductionist. You don’t get to decide every time.

I see four situations where you can afford an extensionist design.

1. Your data points are geolocated

Then you can make a map. This is the easiest case, because extensionist visualizations inherit from the cartographic tradition. Example: a dot map representing the US census. The wind map also belongs here.

2. Your data points can be placed in a relatable space

In this case the space is abstract, but can be understood by the audience. The main example is a scatterplot, but there are other possibilities, like the ternary plot below from this paper (Wilson et al, 2018). Obviously, “relatable” depends on the audience (the vertices of the triangle don’t mean anything to me).

3. Relatable spatial relations can be derived

This is the realm of network maps, but also of plots from t-SNE and UMAP. Here it is not the space itself that is relatable, but the distance between the dots. The presence of a layout algorithm thickens the mediation, which is a problem. But in principle, proximity can be interpreted and we can still explore the image. Below, an example from this paper (Mokashi et al, 2021) where each dot is a cell, and two cells are placed closer if they have a similar “expression pattern”.
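As an illustration of this third case, here is a minimal t-SNE sketch with scikit-learn; the digits dataset merely stands in for whatever high-dimensional records you care about.

```python
# Deriving relatable spatial relations from non-spatial data with t-SNE.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

digits = load_digits()
embedding = TSNE(n_components=2, random_state=0).fit_transform(digits.data)

# One dot per data point: proximity, not position, is what can be read.
plt.scatter(embedding[:, 0], embedding[:, 1], c=digits.target, s=5, cmap="tab10")
plt.show()
```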

4. You don’t need spatial dimensions

You may not need your data points to have spatial dimensions, in which case you may just arrange their corresponding graphical elements to get enough clarity. The reader will still be able to explore the visual variations between those elements, which may represent different dimensions. This kind of chart is sometimes called a pictorial chart but that is not a well-defined category. We nevertheless find many good examples, like the piece below by Nadieh Bremer for the Scientific American.

Note that you can combine a pictorial graph with other kinds of charts, like a bar chart. Find a good example by Andrew Van Dam and Renee Lightner for the Wall Street Journal below.
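As a rough sketch of this fourth case, one can simply lay out one marker per data point in a grid and let a visual variable (here, color) carry a category; the records and categories below are invented.

```python
# Pictorial-chart sketch: one square marker per (hypothetical) record,
# arranged in a grid, with color encoding a category.
import matplotlib.pyplot as plt

categories = ["a"] * 37 + ["b"] * 21 + ["c"] * 14  # 72 made-up records
colors = {"a": "tab:blue", "b": "tab:orange", "c": "tab:green"}

cols = 12
for i, cat in enumerate(categories):
    plt.scatter(i % cols, i // cols, s=80, marker="s", color=colors[cat])

plt.gca().invert_yaxis()  # fill the grid top to bottom, in reading order
plt.axis("off")
plt.show()
```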

Should you quote data in your visualizations?

First of all, offering exploration does not engage your users more (Boy et al, 2015). It actually goes the other way around: exploration requires a strong engagement. And engagement is scarce. If your audience is not particularly engaged, then you should not quote the data points, or you should quote them in a non-intrusive way for the few interested, but keep the summarized layer clear for most of your audience.

I do not have a list of situations where quoting makes sense, but one stands out to me: when I produce a visualization for myself. Once in a while, for my own research, I have the time to engage with a large dataset. Extensionist visualization is a fantastic tool for exploration. Note that doing so does not prevent me from being reductionist in parallel: it works in tandem.

Besides exploration, the other strong reason I see to quote the data points is to allow the reader to double-check and challenge your interpretation. This is typically the case when you are exploring a dataset for which you lack expertise. When you meet experts to learn more about the data, it is useful to have a visual support that carries your own tentative conclusions while allowing for other interpretations. You would then mix annotations with a straightforward representation of the data points. More generally, quoting the data visually is useful in collaborative settings, within science and beyond.

Finally, quoting the data can be useful to improve the legitimacy of the visualization, for instance as a data journalist. It allows the reader to be more critical, which is risky but can be rewarding. In that situation, mind that most people will not engage much, so provide a strong summarization, for instance as annotation, for the readers who just want the key insight.

References

BOY, Jeremy, DETIENNE, Francoise, and FEKETE, Jean-Daniel. Storytelling in information visualizations: Does it engage users to explore data? In: Proceedings of the 33rd annual ACM conference on human factors in computing systems. 2015. p. 1449-1458.

DRUCKER, Johanna. Humanities approaches to graphical display. Digital Humanities Quarterly, 2011, vol. 5, no 1.

FINNEGAN, Ruth. Why do we quote? The culture and history of quotation. Open Book Publishers, 2011.

LATOUR, Bruno. Pandora’s hope: Essays on the reality of science studies. Harvard University Press, 1999.

LAW, John. Collateral realities. In: The politics of knowledge. New York: Routledge, 2012. p. 156-178.

MIKKELSON, David. Did Sarah Palin Say: ‘I Can See Russia from My House’? Snopes, 2011. https://www.snopes.com/fact-check/sarah-palin-russia-house/ (accessed 2023-07-12)

MOKASHI, Sneha S., SHANKAR, Vijay, MACPHERSON, Rebecca A., et al. Developmental alcohol exposure in Drosophila: effects on adult phenotypes and gene expression in the brain. Frontiers in Psychiatry, 2021, vol. 12, p. 699033.

MUNK, Anders Kristian, MADSEN, Anders Koed, and JACOMY, Mathieu. Thinking through the Databody. Designs for experimentation and inquiry: Approaching learning and knowing in digital transformation, 2019, p. 110.

OLSON, David R. and OATLEY, Keith. The quotation theory of writing. Written Communication, 2014, vol. 31, no 1, p. 4-26.

RUPPERT, Evelyn and SCHEEL, Stephan. The politics of method: Taming the new, making data official. International Political Sociology, 2019, vol. 13, no 3, p. 233-252.

WILSON, S., COLLINS, F., and LAVERY, R. Using ternary plots for interpretation of ground gas monitoring results. Ground Gas, 2018.

The care of things (and Gephi)

15 min read

I have enjoyed reading Le Soin des Choses, which you may translate as “the care of things”, Jérôme Denis and David Pontille’s book about maintenance. It is not at all about software, but it nevertheless resonates with my experience of taking care of, and caring for, Gephi (a software tool I co-created). This post is not a summary or a review of the book, but a series of ideas I took from it and that helped me understand better what is going on with Gephi’s maintenance. They might be useful to other tool makers. And secretly, to STS academics interested in open-source practices. Or is it the other way around? Anyway, it’s a cross-over episode.

The book is currently in French only, but will be released in English soon. I hope this post will make you want to read it. Also, maintenance and repair studies are a thing, and here is a starting point, by the same authors, in English.

Maintenance does not make an event

Maintenance and repair are two different things, although related. The breaking (and the repairing) of a thing makes an event. Maintenance does not make an event, and goes unnoticed. The main problem of maintenance is its invisibility. I am pretty sure that every single open-source tool maker knows this. Maintenance is expected, yet most often, no resources are explicitly dedicated to it, whether that is as funding or as working time. The book makes it clear that it is because maintenance does not make an event.

From experience, I can tell that Gephi can get funds to add new features; that makes an event. Or to repair something broken; that makes an event too. But we struggle to fund the mundane work of updating the code and keeping the codebase compatible with an ever-evolving software environment. The main takeaway here is that we need to turn maintenance into an event to get it funded.

We kind of did that with the Gephi Weeks, but those events tend to become mini-festivals where we do many other things. Which is absolutely great, but the maintaining effort gets diluted. Another strategy is to disguise maintenance as repair. This happens naturally, as people are willing to get things fixed when they break. Failure is an event that summons a public who cares for maintenance; but that public disperses as soon as everything gets back to working again. Maintainers have an incentive to let things break, because it makes their work visible. But I now understand that we could also invent a maintenance event, as paradoxical as it sounds. The “bug bash” we did during the 2021 Gephi Week might have been that kind of event, although I did not frame it that way at the time. I think of a maintenance event as a deliberately made-up event. I want to own its artificial character by making it explicit, on the grounds that the lack of an event is the main obstacle to funding maintenance. I would go as far as calling it just that, a “maintenance event”, because it allows making the point that we simply need a pretext to make maintenance visible. That is not being deceitful or dishonest, just pragmatic.

A diplomacy of dependencies

As Denis and Pontille write, maintenance is a diplomacy of the interdependence relations of the maintained thing. It is care, in Annemarie Mol’s sense: a broad way of attending to relations and needs. Maintenance takes dependencies into account. It acknowledges fragility without attempting to make the maintained thing unbreakable. Maintenance is not the negation of decay. Its ideal is not the autonomous object that no longer requires maintenance, but the object that allows itself to be maintained, that does not resist care. In that sense, it forces us to question the distinction between intact and broken, and makes us ask what it is to be in “good shape”. There is a diplomacy of wear and tear, degradation, and so on.

Denis and Pontille mean “dependencies” in the general sense of existential attachments, but when it comes to software, it can be understood quite literally as “software dependencies”. Maintaining software entails a diplomacy of code dependencies. Diplomacy means, in this context, that there are tradeoffs everywhere and we need to compromise all the time, as there is absolutely no way we can make our piece of software independent of the rest of the ever-evolving digital infrastructure.

I have two examples. First, Gephi uses an OpenGL code library that is no longer maintained. When new compatibility issues arise, we only get unofficial fixes, with a delay, as unreliable patches from the community released in obscure forums. As time passes, the delay gets longer, the patches get flimsier, and finding them (when they exist) gets more and more difficult. Gephi’s main developers, Eduardo Ramos Ibáñez and Mathieu Bastian, spend more and more time on this. We want to replace this library with another one that is better maintained, to ensure the long-term reliability of Gephi. But this will require rewriting a significant part of the graphic engine into a new paradigm. Unfortunately, this requires more time than we have for now; until Gephi breaks for good and we no longer have a choice, or until we find enough resources.

The second example comes from Antonin Delpeuch, maintainer of Open Refine (another open-source tool). At some point, that project was in discussion with a foundation dedicated to open-source software to get fiscal sponsorship. The foundation benchmarked their code, and found a legal issue with a code library that featured, as an addendum to its open-source license, “don’t do evil things with this package” (or something similar). The foundation’s lawyers had determined that it presented a legal risk, and asked for that code library to be removed. Unfortunately, Open Refine could not replace it with an alternative, as too much code would have to be rewritten in the process. It would break the tool. Ultimately, Open Refine had to find another fiscal sponsor because of this. In these two examples, we see that a negotiation took place that involved not only the codebase and its architecture, but also legal terms, available resources, various human actors, and organizations.

Annemarie Mol, who studies health care and medicine, tells us that care does not necessarily mean curing a disease to return to a healthy state, but being attentive to the needs of a person whose conditions of existence change. Similarly, maintenance differs from repair in that it does not aim to return to an intact state, but rather to be attentive to the dependencies of a thing whose conditions of existence evolve. The “normal” state of an object is to be maintainable, not intact; and maintainable includes a measure of wear and tear, degradation, and so on. Similarly, the health of a codebase boils down to its maintainability. Developers have been paying attention to this for a long time, forging concepts such as technical debt. A well-maintained tool is not one that never fails, but one that allows itself to be maintained. Good maintainability demands a specific kind of compromise, a diplomacy turned towards the practices of developers and other caretakers. At its heart lie these two questions: How do we realize that the conditions of existence of the tool change? And how do we intervene?

Attention to fragility

Taking care of a tool requires being attentive to the way it gets, or could get, changed by its environment. Gephi’s most common kind of fragility is to fall out of compatibility with a new OS, as has happened multiple times with MacOS (but not only); or with new hardware (and its drivers); or with new norms (security issues); etc. We are also aware that some fragility is inherited from code dependencies, as in my previous example. As Denis and Pontille note, attention to fragility requires proximity and movement. Fragility does not present itself as a stimulus to a consciousness that receives it, but rather emerges from bodily interactions with the maintained object.

The examples narrated by the authors typically consist of a technical and material investigation of the maintained object’s condition, where the maintainer must configure their sensitivity. A software tool is material in a different way, because the inscriptions it really consists of are mediated by the computer. But it is no less material. I concede that the bodily interactions of the developer involve no specific sense of touch, smell or sight; those senses are not where attention to fragility takes place. Yet the software maintainer has to roam through their territory like a farmer inspecting their fields to check for anomalies. The maintainer’s work entails an inquiring movement that is not inherent to coding practices.

Gephi does have its own maintenance infrastructure. It consists of multiple things, and I may forget some. It starts from the senses we deploy to detect new problems. We need a certain proximity with the users, so that we can receive warnings. It might be me when I teach Gephi to students, or meet Gephi users. It might be over social media, or in the Facebook group. It might be through community members who know the core developers directly and communicate their observations. It might be through automatic error reports (we use a system named Sentry). But it is also about establishing a dialogue with the users experiencing failures, when necessary. We typically do this via GitHub Issues. Here is an example of an issue we have a hard time replicating. Indeed, it is critical for us to be able to inquire into a given issue (and replicate it) if we want to understand what is wrong, and fix it. We need allies in the community, we need some complicity with the users, we rely on their good will. So we need a space to cultivate this dialogue. This is why our Slack channel is also part of the maintenance infrastructure. Our infrastructure also comprises ways to keep updated on the technologies we use: expert watch on Java, OpenGL… and ways to reach out to more qualified people when we need to (developers networks). A software tool like Gephi needs an apparatus that broad, or it could not be maintained.

The software maintainer must cultivate a proximity to both the users and the technologies. Their attention cannot focus on everything at the same time, which is why their movement is necessary. The maintainer moves their attention from listening to the users, to the error reports, to the codebase (e.g., unit tests), to technological watch, and so on. Fixing an issue or preventing problems entails an inquiry that leads the maintainer to navigate across multiple spaces.

No stable criteria

As Denis and Pontille note, maintenance is not a control operation conducted on the basis of stable criteria. It is not aimed at establishing conformity or non-conformity. Even the software environment changes; there is no status quo. Consequently, maintenance cannot consist of coming back to that fictional status quo, even under comfortable margins of error. Maintenance is about living with change.

Maintaining sometimes deeply transforms the thing, or obliges one to transform it. For example, by replacing a part that is no longer made with another, which sometimes leads to a chain reaction through the dependencies, which in turn can lead to redesigning the object. The authors give the example of a smartphone, but it is also the case for Gephi. As we have seen, the software packages it relies on can also become obsolete (i.e., fail), and that can lead to cascading redesigns.

The internal discussion we have about Gephi’s visualization engine is exactly about cascading consequences. Let me document it briefly. Our discussion has multiple levels. On a first level, we face the cost of removing the code dependency that may break soon. On the one hand, it is a lot of work; but on the other hand, if we are not prepared, it may break Gephi for many users and for a long time. It looks like we have no other choice than to remove that unmaintained library. On a second level, this might be an opportunity. The bad news is that we need to recode our engine in the contemporary paradigm of OpenGL. But the good news is that by doing so, we will be able to do new things and get better performance. That work is not in vain. Maintenance may prompt a real improvement of Gephi with new features (that is not always the case). On a third level, the alternatives to the library we want to get rid of are not great. OpenGL is not used enough in Java to sustain a well-maintained package ecosystem. The best thing to do might be to get ready to move to a new library, in case of emergency, but without actually dropping the unmaintained library we currently use, at least for now. This requires writing an agnostic visualization engine, compatible with multiple OpenGL libraries at a minimal cost. That is feasible, but it would transform Gephi’s architecture even more deeply. At the end of the day, that seemingly simple library issue triggers a broad discussion on how to architect Gephi, with a significant impact on our road map.
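Gephi is written in Java and I will not reproduce its actual code here, but the general idea of an “agnostic” engine can be sketched in a few lines of Python: the engine talks to an abstract rendering backend, so swapping the OpenGL binding means writing one new backend class rather than rewriting the engine. Names and structure below are purely illustrative.

```python
# Sketch of the general idea only, not Gephi's architecture: an engine written
# against an abstract rendering backend so the concrete binding can be swapped.
from abc import ABC, abstractmethod

class RenderBackend(ABC):
    """What the engine needs from any rendering binding."""
    @abstractmethod
    def draw_points(self, positions, colors): ...
    @abstractmethod
    def present(self): ...

class LegacyBackend(RenderBackend):
    def draw_points(self, positions, colors):
        print(f"legacy binding draws {len(positions)} points")
    def present(self):
        print("legacy binding swaps buffers")

class Engine:
    def __init__(self, backend: RenderBackend):
        self.backend = backend  # the only place the binding appears

    def render_frame(self, positions, colors):
        self.backend.draw_points(positions, colors)
        self.backend.present()

# Replacing the unmaintained library then means adding one new backend class.
Engine(LegacyBackend()).render_frame([(0, 0), (1, 1)], ["red", "cyan"])
```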

The ontological share of maintenance

Maintenance calls into question the nature of the thing maintained, which produces discussions or even conflicts. Denis and Pontille’s book is a journey that leads more or less to that point: maintenance necessarily has its ontological share that we need to think of. Maintaining a thing does not turn it upside down (exceptions aside), but it does not leave it untouched either. We must be aware of this change if we want to drive it in the direction we desire. Often, we are also forced to decide between different becomings of the thing maintained. We may want to resist, compromise, or conversely, seize an opportunity to drive change forward.

I can tell it is true, but I had not conceived it clearly until I read the book: maintaining Gephi changes it into something else, that I do not necessarily desire, or even just understand. I presume that the same goes for anyone who takes care of a piece of software, although we do not care for every tool we make. Some just fall out of fashion and into obsolescence and/or oblivion; we let them do so. The ontological share only matters for the tools that we want to last.

The temporality of what is maintained is at stake. What is meant to last? The initial object, or the object that lives and gets altered over time? That question is brutal for any physical object exposed to decay, less so for digital objects. Nevertheless, it remains true for a piece of software, because decay is not the only significant change. Gephi has already changed over time. We added features, but we also removed some, and it did not make everyone happy. What is Gephi? For the community of people who take care of, and care for Gephi, that question is very important even though we rarely talk about it. Is Gephi allowed to deviate from what it was initially intended to be?

I believe that we all want to be true to Gephi, but we cannot take for granted that Gephi means the same thing to all of us. We address this need with things like our road map: we discuss it, we make it public, and we use it to cultivate our alignment. But sometimes, opportunities arise that are not part of the road map and shape Gephi. For instance Gephi Lite, a web version of Gephi (you can try it now).

Gephi Lite came to us as an opportunity. Indeed, the necessary libraries in the web ecosystem were mature enough to support the project, and similar projects were already flourishing in different places, supported by a community of developers who knew and appreciated Gephi. When the OuestWare team proposed to the Gephi community that they take charge of a Gephi-like project for the web, we had a discussion about whether or not this should be part of Gephi. Everyone quickly agreed that it should, as it was a great idea, and it became known as Gephi Lite (let me thank them here for their work!).

I believe that at the time, it did not feel like a dramatic change to most of us. After all, Gephi Lite was mostly developed by a different team, and we were very clear on the fact that it would not replace the original Gephi (they meet different needs). It would not disrupt our road map. That road map would remain the same, yes, but Gephi Lite would disrupt us, the “our” of “our road map”. Indeed, there was an ontological share to that move: Gephi was not a tool anymore, but a project with two tools. Yes, it had always been a project as well, but so far we could conflate the project and the tool. Now the Gephi project has two pieces of software under its umbrella, Gephi and Gephi Lite, each with their own trajectory. Hence the question: what is Gephi now? At the very least, it is unclear whether the name refers to the project or to the Java software. But I think that the ontological change goes beyond that.

When Gephi Lite came into existence, I personally felt that it was a big change. Let me just clarify that I do not imply that I regret what happened; on the contrary, I wished for that change and supported it. But I must highlight this moment to make it clear that it changed Gephi’s time, to point at the change in the temporality of the thing maintained (in the words of Denis and Pontille). Gephi Lite came from the desire to keep Gephi relevant, which implicitly assumed that Gephi was more than the Java implementation. Gephi was understood by us as a certain approach to network visualization, a philosophy, and a set of methodological commitments. Before Gephi Lite, Gephi could be seen as just the Java tool, whatever it was. Now Gephi has two not-so-different but not-quite-the-same ways of existing. That is a change to the nature of Gephi. Gephi has become something else. Making things last is sometimes just making them exist; but that often entails shaping what they represent by selecting what remains of them.

Read the book.