40 min read or 1 hour video
I was in Dagstuhl for a one-week seminar about visualization and the humanities. Geoffrey Rockwell was also attending, and I jumped on the opportunity to interview him. In short, he is the designer and caretaker of Voyant Tools. The long story, well… read on!
This interview is published with Geoffrey's authorization. He helped me edit the transcript. We also offer the video version, which is more informal but also more lively. The sound is bad at the beginning, but I think you may enjoy Geoffrey's sheer friendliness. We release both the transcript and the video under a CC-BY-SA license (like everything else on this blog).
For context, although this was quite improvised, I had a set of questions already prepared because I was about to moderate a panel about tool-making organized by the MASSHINE center at Aalborg University (I will post about that later on). This explains why some of the questions look a bit like a survey. See this as part of my long-term project of documenting tool making in the social sciences and humanities.
Mathieu: Can you tell us about yourself and what your background was when you started Voyant?
Geoffrey: Sure! My name is Geoffrey Rockwell. I am professor of Philosophy and Digital Humanities at the University of Alberta. All my degrees are in philosophy, but when I was a graduate student at the University of Toronto, I became a partner in the Apple Research Partnership Program. This was back in the late eighties, and it was a program that Apple ran partly to fund graduate students, partly to evangelize the Macintosh, support HyperCard, etc. I became a partner right around the time HyperCard came out, and I ended up becoming the HyperCard trainer for computing services at the University of Toronto.
HyperCard came out in 1987. It was very empowering for a lot of humanists, because it was a multimedia development environment, it was free, you could develop things fairly easily without having to learn C, C++, or anything like that, and you could control interface and code. You had both sort of woven together. I mean, nowadays, I guess Visual Basic fills a somewhat similar niche. But I remember, just to give an example, that a year after HyperCard came out, somebody told me there were something like 170 HyperCard stacks for learning Arabic. All sorts of language instructors, all sorts of people just took to it.
So that is how I got started. And then, in 1989, I started working full time for the University of Toronto Computing Services. I was only in my second year of my Ph.D., I was not getting a lot of funding, and I had a son who was born… Working full time was good for me! That was a tremendous apprenticeship. Initially, I was supporting text applications and word processing, but then I moved into instructional technology. Again, HyperCard and similar multimedia development environments.
I was very lucky, you know, given that I did not have a computer science or technical background, to be embedded in such a unit. One of the groups there was running the academic Internet for Canada! When the Web came out, the guy literally next door to me was one of the first people to explore the Web in Toronto. He wrote a book that went on to be a bestseller. It was a very generative group. I was very lucky.
I got an academic job running the Humanities Media and Computing Center at McMaster University in 1994. They were looking for someone who had a Ph.D. in the humanities, and I was close to getting mine done, and who had significant project management experience in computing, because they were running a bunch of labs and software development. To be honest, there were not a lot of people who had four or five years of experience in computing and a Ph.D. in the humanities. And I had risen to a sort of project manager. So again, I was very lucky. It was the first humanities computing job advertised as a humanities computing job in Canada. So that was at McMaster, and there I ran a shop, I had staff, I had programmers. A lot of what we did was run labs at McMaster. People were just discovering the web. So, you know, I brought the web in. We created the first website for the Faculty of Humanities. We started building websites for people, and so on…
But when I was at Toronto, my supervisor was John Bradley. John Bradley was the lead designer of TACT. TACT was one of the best text analysis environments of its generation. It was released in 1989 at the first joint ACH/ALLC conference. There had been two separate scholarly associations, the Europeans and the North Americans, and the first joint conference was in Toronto in 1989. I was right there. George Landow, who was a big name in hypertext studies, was supposed to run a workshop on hypertext, and he got sick. I had to take it over. So I taught my HyperCard workshop for a week, teaching students to build hypertext things. And then the conference… Ted Nelson showed up at the conference. Northrop Frye gave a talk. It was fabulous.
There was a software exhibit, and I was there showing HyperCard stacks that I had built for bibliographic management and note taking. I was right across from Elli Mylonas, who was a graduate student at Harvard, building the first versions of the Perseus TLG search tool. The TLG (Thesaurus Linguae Graecae) comprised all the important texts in Greek, and normally, to access it, you would have to buy an Ibycus, a dedicated workstation with a CD-ROM player and the Greek fonts built in. Apple had released a CD-ROM player, and they built an extension to HyperCard that could read the CD-ROM. So all of a sudden, instead of having to buy a $20,000 dedicated workstation, if you could get the CD-ROM, a Mac with a CD-ROM player, and the HyperCard stacks, you could search all of Greek literature. This was Big Data. In 1989, Big Data in the humanities! In fact, I published a paper with a colleague about it. In some ways we were experimenting with Big Data questions. What can you ask when you have all of ancient Greek literature?
Anyway, the long and the short of it is that that was my start in text analysis tools. I got very interested in text analysis working under John Bradley. He and I started building environments; we were very interested in visual programing environments. At the time, with a Silicon Graphics workstation, you could get a visual programing environment for scientific visualizations, and we just said, okay, let us build one for the humanities. By the time I got to McMaster, I got a programmer in the high performance computing group interested in this. And we actually developed a prototype, I think using Visual Basic. We published a paper on it. It was a system where you could drag out little boxes. Now there is that great environment, Orange, which allows you to do that sort of visual programing with Python under the hood.
It worked, but it was really just a prototype. What really changed things was when we recruited Stéfan Sinclair, who came to work with me at McMaster. He was a genius. John Bradley and I had built a web-based visualization environment that did correspondence analysis, keywords in context, and so on… but you had to index the text separately, using TACT's MakeBase tool, and then set it up. The visualizations were interesting, but it was very clunky. Stéfan Sinclair, as part of his Ph.D. thesis, had built a system called HyperPo, because he was very interested in OuLiPo and hypertext. He was doing a Ph.D. in French at Queen's, and so he built this HyperPo where you could upload a text to the web, it would index it, and then it would build a display with different sorts of keyword and context statistics. So we recruited him to McMaster.
At the time, I had gotten one of the first really large CFI (Canada Foundation for Innovation) grants in the humanities. These were infrastructure grants, so it was about $6.7 million, and a big chunk of it was to build a text analysis portal. Do you remember portals? This idea that you could build an environment that can do everything. So we built this all-singing, all-dancing portal. I mean, to be honest, for that one we contracted with a professional development team, but we were following a sort of agile methodology, every week. Stéfan and I were there with the head of programing.
What was the year?
We heard about the grant in 2002. The building happened in 2003, 2004, 2005. TAPoR, the Text Analysis Portal for Research, is still working. But what happened is that, as part of the portal, we built a bunch of small web services that did specific text analysis things. We called them TAPoRware. TAPoRware, Tupperware… Anyway, simultaneously, Stéfan was rethinking HyperPo. And at a certain point, we realized that the portal model was too unwieldy. We would have needed $1,000,000 a year to keep development going, and that was not going to be sustainable with the type of funding you get in the humanities. So we took the best parts of the portal and we split them in two. TAPoR became the discovery part, how you can find out about tools and document them, and Voyant became the actual set of text analysis tools.
It was sort of inspired by HyperPo, but in some ways we broke it up into smaller pieces that we could support. Even if we did not have a grant, we could still, you know, coast for a year or two. And then Stéfan and I began an agile praxis where the two of us would get together in a room and try to do a project in one day. One of us would be at the keyboard, and the other one would be doing research, checking things out, and then we would swap. Like agile pair programing. The goal there was to do a lot of projects and see what tools were useful and what was not. We just cobbled together what we had, and when we got something that worked, we would implement it for Voyant.
At that time it was not yet called Voyant but Voyeur. At a certain point we realized that the word “voyeur” in English has no positive connotations. In French, it is okay. A little bit better anyway. But in English… So we switched to Voyant. Some good friends of ours actually politely told us: this is not the right word for what you are doing. We love it, but it is not the right word.
At that point, both of us had the academic problem of having to publish. So, to some extent, our solution was this praxis. We would plan a project and do it. We would, on the one hand, update Voyant, and on the other hand, write a paper that we would give at a conference. The papers became a book, and Voyant 2.0 was released. Our book, Hermeneutica, came out in 2016. This was the compromise. Well, not a compromise! In our case, by making sure that we had these hybrid products of book and software together, where the book illustrated the software and the software made the book possible, people could then experiment with us. That was the praxis that we developed and continued.
Have you been making other tools and prototypes? Are they still alive?
A lot of the tools that I made were really more websites, and like many websites, they have fallen. I spend a certain amount of time trying to maintain them. Actually, nowadays, I spend my time writing grants to get the money to hire people to keep these things going. But Voyant is the main one. All the TAPoRware tools and HyperPo were replaced by Voyant, and to some extent TAPoR. TAPoR is not that complicated. I have got some other projects I think are cool, but they are nothing like Voyant. Just to give you some numbers, Voyant has about 200,000 or more unique users a year. That creates its own dynamic. Stéfan and I had to move to a more professional, production-oriented approach. And of course, the academy does not reward production. I cannot sit there and say: we have just released Voyant 2.3, and it might have been as much work as three articles. I got credit for Voyant and Hermeneutica and that was it. You do not get it again and again unless you do something dramatic.
Would you say that the tool is made for social scientists? Humanists? Both?
It is made for humanists. Textual scholars. It is a text analysis and visualization environment. Having said that, I happen to know that all sorts of social scientists and other people use it. You know, I get emails from lawyers who use it, an email from the son of a doctor whose father uses it. It gets used by a lot of different people.
Voyant has a user interface, right?
Yeah.
Does it include visualizations?
Yes, quite a few. In some ways it is a set of tools that can compose a dashboard. It has got about 23 tools. Many of them are visualization tools. Some of them are standard, like word clouds, distribution graphs… Some of them are wacky. It has got a mix. There are a bunch of tools that a lot of people do not know about, because they are not in the default skin. People do not play with them that much, but they are there and they are fun.
Would you say that Voyant Tools is a tool for anyone?
There are two features to the breadth of people that use it. First of all, we built language skins in, and we got volunteer teams. We support something like 13 languages. As a result, people in France can use it in French. And in fact, there is a French infrastructure team that has installed its own Voyant server. Voyant has the ability to handle any language that can be encoded in Unicode. Stéfan did a certain amount of work solving problems with Japanese, Hebrew and Arabic, so that we could have those language skins. Many of the groups that developed the language skins were DH groups that wanted to teach with it. A guy in, I think, Dubai, developed an Arabic language skin because he wanted to teach with it in Arabic, and people wanted it in French and Spanish and Russian, etc.
So my sense is that it gets used a lot as an introductory tool, especially for teaching humanities students. When you think about it, they do not have to install anything. You can get a text into it in all sorts of different ways. You just click away and start playing. It works well for that introduction. Scott Weingart called it “the gateway drug for digital humanities”. It is a great entry-level tool. And so that makes it widely used.
When you think about the pattern of use in the digital humanities, you see that humanists do not do research every day of the week, or every week of the year. They are teaching, and then all of a sudden, they do a bunch of research. They need tools that they can pick up quickly, use intensely, put down, and then not use for six months. It is not like email, which they use every day. I think Voyant works well for that. And it is free, so anyone can download it and run it locally.
Is it open source?
It is open source. That may be great, but I do not know of anybody who has looked at this… Oh! That is not true. I know of one team that went in and did something with the code; they wanted to change something, and then they came back to us with it.
Do you see yourself as a professional developer?
No.
Are you self-taught?
I have taken programing courses, but not at the level of professional.
But you do code for the tool, right?
I do almost no coding now. I do grant writing. We have a digital humanities student who started working on it, and he does most of the programing now.
Do you see yourself as an academic?
Yes.
And of course, you publish papers.
Yes.
Are you proud of coding, or of having coded? Or is it something you tend to hide?
No, I love it. First of all, I should say that one of the things we have done with Voyant is build a notebook programing environment into it (Spyral). So, I do program in that, but I tend not to touch the underlying infrastructure, because that code is too complex for me. But I really like programing.
One of the things I love to do is to teach humanities students to program. I often teach them data analysis in Python. We have them work in Colab and things like that. And there is a sort of aha moment when somebody who has been told that they were bad at maths, that they could not program, succeeds. When I teach them programing, I spend a lot more time talking about the culture of programing, and I talk about things like Brainfuck and all the playful programing languages that have been developed. I try to get them not to feel excluded by the culture of programing. And then there is this moment when they get something to work. That is fabulous.
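To give a sense of what such a first exercise looks like, here is a minimal sketch in Python, the kind of thing that runs in Colab. It is a hypothetical example, not taken from Geoffrey's actual course material:

```python
from collections import Counter

# A typical first exercise: count the words in a short passage.
text = """It was the best of times, it was the worst of times,
it was the age of wisdom, it was the age of foolishness"""

# Lowercase the text, strip commas, and split on whitespace.
words = text.lower().replace(",", "").split()
print("Total words:", len(words))

# Which words appear most often?
for word, count in Counter(words).most_common(5):
    print(word, count)
```

A dozen lines like these, producing a visible result, are often enough for the "aha" moment he describes.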
I wish I were a better programmer. I wish I had the time to be a better programmer. I am pretty sure that if I did have the time, if I had six months and spent two hours a day, I would probably be a better programmer. But I could learn Japanese in that time too! You know, there are only so many hours in the day.
Was Voyant initially made as part of an academic job? And today?
That is, I think, one of the genius things that we did: by developing this praxis of experiments, we were able to make the development academic. We made sure that we got academic credit for the tool because we were always developing it while writing papers, delivering conference papers, and eventually the book with MIT Press… We always made sure that every summer we had two or three papers at conferences. The papers reported on the tools and often had an academic component. But then we would say that we are also adapting this tool. We are playing with social network analysis. And here you can see how we tackle this problem. That was it.
You know, I think everyone in the digital humanities has to find a way. My colleagues in computer science have some of the same problems. Nobody is going to give them tenure for writing code. On the other hand, they will not give them tenure just for writing theory. Well, maybe they would do that for Maths or something like that. So they are always getting the grant, getting the Ph.D. student to write the code, writing the paper with the Ph.D. student… There are these mixed collaborations.
Was it joyful to make tools? Painful? Both?
The original portal was scary because of its size and the amount of money we were spending on it. The commitments. In some sense, when you get a big grant, you have made a big promise. That was very scary. If you promise that you can build something that your colleagues will use, and then they do not use it… How many people have built great tools that nobody uses? In some ways Voyant was the second pass: breaking Voyant and TAPoR apart and making smaller devices that did one thing well (or one cluster of things). And then at a certain point, we realized: this is very popular! We no longer had the problem of nobody using it. Now we have the problem of how to maintain it.
Are you proud of making Voyant?
I think so, yes. I do not think anyone will still be reading the books and the articles at some point in the future. Voyant… Well, even Voyant will disappear. But I certainly am known for Voyant. Even at this retreat, a couple of people have come up to me and said: Oh, I teach with Voyant! You know, nobody comes up to me and says: Oh, I read an article you wrote!
Could we say you are a designer of Voyant?
Stéfan was the designer. I was the vice president, the vice designer, if you will. I was more the theorizer, often. And he was the interface designer. He really had a good visual eye in some ways. And he loved to program. And he hated writing papers. Whereas I like to theorize.
Are you a co-maintainer?
Well, now I am. Stéfan Sinclair passed away in 2020. So now, in some ways, I am responsible. There is a programmer who does the day-to-day stuff. As I said, I write grants, I answer emails, I test things. I do not touch the code any longer. I touch the code in Spyral, but not the main code.
So what would you say is your role in Voyant?
I would say I am project manager.
How long have you been on the path of making tools?
The very first tool that I made with John Bradley, TACTweb, was the beginning of the path. When we presented it, I believe it was the first time anyone presented a paper on text visualization on the web. We presented it at ALLC/ACH in Paris in 1994. And for what it is worth, I can still remember a very important person in the community basically saying that this was not digital humanities, that there was no argument here, that this was just a pretty visualization, cute but useless. That is when I realized that John Bradley and I had been immersed in scientific visualization, looking at all the tools that they had and their rhetoric, and it was a bit of a surprise to suddenly realize that my own community was going: Eh! This is not serious. I mean, to be fair, this was just one person.
Does Voyant do what you wanted it to do?
Voyant does not do some things that I would like it to do. It is a moving target; especially when you are trying to make something that is, in some sense, current, it is a continually moving target. This is one of the problems for us academics; we are not rewarded for production systems. Right now, there is a whole new world of what I am going to call "AI tools". We have topic modeling. We just redesigned it, and it works quite smoothly. We have named entity recognition, but that is very compute-intensive, and Voyant is meant to be fast and interactive. So, compute-intensive things are a problem. What we really need now is to be playing with word embeddings and things like that. And that is going to be a challenge to do right.
I have been experimenting with ChatGPT-4 and Code Interpreter, where you can now do text analysis just with a series of prompts. You upload a text and you say: give me a graph of the high-frequency words. I have this suspicion that the days of Voyant may be coming to an end, that this might be the return of the command line, or the prompt line. That will become the new paradigm for data analytics. And it will not just be Voyant; it will be Tableau, it will be all sorts of tools that get replaced by these new ones.
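For illustration, a prompt like "give me a graph of the high-frequency words" typically gets turned into a short script along these lines. This is a minimal sketch in Python, my reconstruction rather than what Code Interpreter actually generates; the file name, the tokenizer, and the stopword list are all assumptions:

```python
import re
from collections import Counter

import matplotlib.pyplot as plt

# Read a plain-text file (hypothetical file name).
with open("my_text.txt", encoding="utf-8") as f:
    text = f.read().lower()

# Naive tokenization: keep only runs of letters.
tokens = re.findall(r"[a-z]+", text)

# Drop a few very common function words (a toy stopword list).
stopwords = {"the", "and", "of", "to", "a", "in", "is", "it", "that", "was"}
tokens = [t for t in tokens if t not in stopwords]

# Count the tokens and keep the 20 most frequent.
top = Counter(tokens).most_common(20)
words, counts = zip(*top)

# Draw a simple bar chart of the high-frequency words.
plt.figure(figsize=(10, 4))
plt.bar(words, counts)
plt.xticks(rotation=45, ha="right")
plt.title("High-frequency words")
plt.tight_layout()
plt.show()
```

The point of the prompt paradigm is that the user never sees or writes this code; they only see the chart.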
So, your feelings about the tool did change over time…
Yes. And I anticipate them continuing to change. To be honest, right now, I am nearing retirement. My feeling right now is that I want to find a way to gracefully pass this on to someone else.
Is it the first time you want to pass it on?
No. I have put in a lot of work and I have gotten some funding, so I probably have enough money for five years, and I am going to use those five years to pass it on, or let it die. I am also going to use them on the underlying infrastructure, which needs to be renewed. It is a stack of things. The underlying text engine is Lucene, and that needs to be replaced. When I consult with people who know more about these things, they say: you should start shifting to Elasticsearch. The JavaScript library that we use is also getting a bit dated.
Do you even know who uses the tool?
No. I mean, I know some people use it. One of the people I just met here says he teaches with it all the time; I had no idea. And I certainly do not know his students.
But we can count 200,000 users from Google Analytics. And that is just the people using our server. You can download the code, and people tell us that they are running their own server, and that is great. There is a long list of people who do this. When I Google Voyant, I find interesting papers. There is a group of people, I think actually in Denmark; I came across some Ph.D. theses that were using Voyant, and I went: Oh!
Do you interact with the users?
The ones who interact with me, yes. I get a regular flow of emails and I try to answer quickly. When COVID hit, Stéfan and I developed (this was when he was not well) a set of hands-on, teach-yourself Voyant tutorials, and we released them as a series under CC-BY 4.0. You know, anyone can download them, rewrite them, do whatever they want with these documents. We were trying to anticipate people who teach with Voyant and give them a series of lessons. They can pick the ones they like, rewrite them, jam them together. They can do whatever they want. They can even sell them. The only requirement of CC-BY 4.0 is that they give us credit somewhere. So, I interacted with people who use those, and of course I used them myself and tested them with my students.
Another way of interacting is something I have always been willing to do: any time somebody asks me, would you Zoom into my class and give an intro to Voyant, I say yes. I can give those in my sleep now. I give them on a regular basis. I just come in and talk about Voyant, or I run a one-hour session. I get a certain amount of feedback from that.
Is making Voyant under-appreciated or over-appreciated? Inside or outside of academia? What is your feeling about that?
It is under-appreciated for purposes of tenure and promotion. In the humanities especially, academic credit is given mostly for books; in the social sciences it is more articles. But you know, books are the coin of the realm. Articles also. Grants less so. Building websites and tools, you get some credit for it, but it is not the best. I go up every year with a couple of articles, and then maybe I have got a website or something like that, and as far as I can tell, the articles are what give me the ticks. If I did not have them, I am pretty sure that they would probably recognize the website work, or the tool work, as an adequate replacement; but it would not warm their hearts. That is just at the university I am at. As I said, in some ways we developed a praxis such that we made sure we were always getting peer-reviewed articles, grants, papers, books, and doing that while illustrating the tools, theorizing them, talking about them, and so on.
In Voyant, which design decisions were made specifically for the humanities?
One of the primary things is that you can always get back to the text. In fact, in the default view, there are five panels, and the central one is the pure text. And if you play with Voyant, you see that the panels are loosely concatenated. Let us say that you have a distribution graph for a word or pattern, and you click on it: you get the keywords in context. You click on a keyword in context: you get the full text. They are all loosely concatenated, so that you can always get back to the raw text. It would be really nice if we could support formatted text, but that is just beyond us at the moment. So that is one decision that we made.
The second one is that we do not assume our users have a lot of patience. There are certainly people who will spend an hour installing Anaconda, and then one thing or another, and then play around with the tools, and read the books… And a week later they get something. But our tool is meant to be entry-level. You can either use a text we have already put up there, or paste some text in, or paste in a URL, or upload a text, whatever you can, and we will try to deal with it. If you upload a PDF, we will try to extract the text out of it. We are trying to make it as easy as possible for you to get started and play.
Here is perhaps a contradiction with some of what we have been talking about; I saw it in one of the displays up there. There are people who are bothered by the idea of people having visualizations without understanding how that visualization was produced. And I would certainly agree that people should understand it. But I think that there is an intuitive way in which humanists can partly understand it by kicking the tires of the results. We have correspondence analysis. Who the hell understands correspondence analysis? Well, you are from France; everybody understands it in France. But it is not even a technique that was that popular outside of France. I think it is a great technique. But you give people that display, and they see the word clusters and the documents and so on, and they go: Oh! That is sort of cool. Or you show them topic modeling. Who really understands topic modeling? Our view is that we want people to start playing, and they will start understanding through playing. That is our philosophy of dealing with this, rather than putting them in a wizard situation where you have to make a series of decisions and installations, and only if you have been a really good boy and passed the test do you get to see anything.
You want users to learn through playing. Do you have a more constructed idea of which other practices you actively support in Voyant?
If we go back to that praxis, one of the things we decided was that we were not going to build a tool based on a needs analysis of other people. We were going to build the tool that we wanted, that we needed to do the experiments that we wanted to do. We tried to do a project in a day: decide what the question was, find the text, grab whatever tools we had at hand, try to answer the question, and write up a summary. We did not finish it in a day, but we got a good part of it done. And then we continued that. If you look at the book, Hermeneutica, there are theoretical chapters, and then there are these experimental chapters where we walk through what we did. That is what drove us.
I am going to step back and make a general statement. In my experience, when colleagues who have never really used computer-assisted text analysis describe what they would like, it is a fantasy. Even if you gave it to them, they would not actually use it. It is what they think they would use, but it is not really what would be useful to them. They do not know enough to be a good, reliable guide. At least we had one grant, a funded project called Just What Do They Do?, in which we did actually interview a bunch of people. We showed them different screens and asked what they could do with this and that… And by and large, I have to say, we had more fun, and we got further, just by doing what we wanted, what we thought was interesting. So, in that sense, we have not done the canonical needs analysis, user studies, etc. We were not building things for other people. We were building things for us, and people like us. And that worked!
Were you surprised by the users? Do they do unexpected things with the tools?
I was surprised by how much it is used for teaching. I should have anticipated it. But I should say that I only teach graduate students; I do not teach undergraduates any longer. And I was surprised by how many colleagues use it for that intro DH course. You know, you are going to have a one-week taste of text analysis, and then a one-week taste of something else, and so on. That surprised me a bit.
I have been surprised by some of the people who use it, people I never expected. Like getting this email from the son of a doctor saying: My father loves it! He puts all of his patient records in, or something like that. Wow! Setting aside the confidentiality issues, I am often surprised by how little of it people use.
There is a ton of functionality in there. Every time people come to me and say: Can I do X? I say: Yes, you just go here, you do that… Whoa! I did not even know that was there! And to some extent, this is the problem of featuritis and feature creep in software in general. Voyant has been around long enough, and we keep on adding features. We are trying to avoid overloading the interface, so some of the features are more subtly integrated. If you are in a list view, for example, and you click in the right place, you can choose which columns to show. We have things like the z-score that people do not even realize are there, because we have got a tiny little column that just shows the word and the count. They do not realize that you can show all sorts of other things. And it is gratifying to have anticipated those needs.
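For readers wondering what that z-score column shows: it is the standard statistic, how far a term's frequency sits from the mean, measured in standard deviations. Here is a minimal sketch of the idea in Python; it is my own illustration with made-up counts, not Voyant's actual implementation:

```python
import statistics

def term_zscores(counts: dict[str, int]) -> dict[str, float]:
    """Z-score of each term's count relative to all counts in the text."""
    values = list(counts.values())
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)  # needs at least two terms
    return {term: (count - mean) / stdev for term, count in counts.items()}

# Hypothetical term counts for a short text.
counts = {"whale": 120, "sea": 80, "ship": 75, "captain": 40, "harpoon": 12}
for term, z in sorted(term_zscores(counts).items(), key=lambda kv: -kv[1]):
    print(f"{term:10s} z = {z:+.2f}")
```

A strongly positive z-score flags a term that is unusually frequent relative to the rest, which is more informative than the raw count alone.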
Conversely, whatever you build, somebody is going to come back and say: Yes, but do you have…? There is always a wrinkle that they want. And this is the beauty of adding the notebook programing environment. Now I can say: OK, Voyant does what it does, but you have a notebook programing environment. Get a grant, hire a JavaScript programmer, and you can add your own thing. And if it is cool enough, we will take it and build it into the main tool.
So we are actually trying to find a way not to overload the interface while supporting all the different people. I do not know if you have noticed this: I have been using Microsoft Word since the beginning, at least on the Macintosh, and every year they add more crap. The toolbars, and so on. Have you ever seen those pictures where, if you load all the toolbars, you have got this tiny little square of text and the rest is just buttons? We are trying to avoid that.
What I have been surprised by is ChatGPT with Code Interpreter. It is nowhere near replacing Jupyter notebooks, Colab, Voyant… but it is going to get there. Technology makes a promise, and then often we have to adapt to the technology; it never quite fulfills the promise. But I think that it is going to promise data analysis without any coding, and people are going to learn prompt engineering in order to get the analysis that they want. And then there is going to be a constant flow of stories of jackasses who did not write the right prompt, a little bit like that lawyer who generated something which they then submitted in court. It turned out that their prompt was bullshit, and the model hallucinated.
Would you say that Voyant empowers people? Who does it empower? And is it a good thing?
I think it empowers, yes. It empowers people. It allows them to get a taste of text analysis and visualization, including things like topic modeling and principal component analysis. It allows them to get a taste, but there is something else. We have a bunch of wacky tools, and the wacky tools send an interesting message: you can play. It empowers the people who have to teach in the digital humanities.
People have all sorts of skills. You probably have all sorts of infrastructure behind you. But what if you are some poorly supported person? You know, you are the one digital humanist, in a small liberal arts college. There are no servers that support you. Voyant allows you to do something really interesting with your students. I think it empowers the students through that, it gives them a taste of what is happening.
It also empowered Stéfan and me to build a career around this, in a way that would have been very difficult otherwise. You look back on your life and you go: it could have gone in very different directions.
How did you fund Voyant? Is it precarious?
Yes. I fund it through a combination of different things. I try to get graduate students or research assistants. They help with certain types of things. I have a programmer, and I try to keep him funded.
One of the main things I do works because Voyant is sufficiently successful. Let us say that you are writing a grant. It is a big grant, and there is a place for Voyant in it, perhaps for a different tool alongside Voyant. We do a sort of deal: you write Voyant into your grant, and I write a letter of support; or you include me as a co-investigator, but in turn you put a budget line in there. So for five years, a certain amount of funding comes to me to support Voyant and to do what you want done, for example adding part-of-speech tagging.
Did the work of making and maintaining Voyant put you in a situation of cultural clash with your colleagues?
I have been part of the DH community for a long time. I went to that first conference in 1989. I have outlasted everybody. Well, not everybody; but I would like to say that Stéfan was a genuinely kind person. He was a really good man. More so than I am. He was very supportive of new scholars, graduate students. I aspire to be as supportive as he was. I do not think that we ever had a problem of jealousy or anything like that. Nobody else was really doing anything similar. And we never tried to lord it over anyone. If you wanted to collaborate with us, we were happy to do that. If you wanted to just use Voyant: go ahead! If you wanted to ignore it: go ahead!
The initial TAPoR project, the text analysis portal, was an imperial project, and that was part of the problem. It was trying to say: we can do everything for everybody! That was obviously unsustainable; or at least we could not figure out how to sustain it. Whereas now we say: look, we have got this thing, it is free, it is open source! You can do what you want with it; you can take the code; we publish papers with our ideas, and you can take the ideas or disagree with them. I think that, by and large, we avoided any sort of imperial conflicts. Since then, Voyant has received some awards, like the Zampolli Prize, which is really the top award in the digital humanities for a project. I think that reflects the fact that it is appreciated in the community. I hope that it is not seen as silencing anybody. In fact, there are all sorts of cool tools out there. I do not think that we have silenced them. I hope we have not silenced anybody.
Can the tool have currency by itself in academia?
There certainly are people for whom the success of the tool makes a difference, so to speak. But at my university, my annual report is going to be assessed by a bunch of chairs and colleagues in the humanities. I get along fine with them, and if I came out with a brand spanking new tool one year, most of them would give me credit for it; but not the next year. It would be just like a book: you get credit that year, not the next. The fact that you are maintaining it does not count. That has been my experience, and Stéfan's too.
The one thing you can get from the tool is grants. We were collaborating with people and getting grants to maintain Voyant; it meant that we were regularly getting grants that we would not have gotten if we were not the authors of Voyant. We were either on the grants because we were co-developers of a clearly successful project, or because people wanted Voyant as part of their project, because it would put their project over the hump to get funded. That was the one direct approach. And then there is the praxis of combinations I have talked about.
The thesis of our book, Hermeneutica, is that, in some sense, tools bear theory in a different way than texts. A hermeneutical text says: here is how you interpret a text, here is how you do interpretation. A tool bears a theory, but bears it differently. We were playing with this idea that tools can be theories, and we turned around and started telling people that. Our papers were telling people that tools are theories.
Now, did that make any difference to my chair and the Dean? Probably not. I do not think they read any of this stuff, but we were telling it to our colleagues in the digital humanities. And in some ways, this is one of the things that has come up on a regular basis in the 30 years that I have been in this field. We have conversations about how you get credit for digital work. I was on the MLA committee that developed protocols for how to get credit for digital work. A digital work could be somebody creating a hypertext novel, digital artwork… all sorts of different things. We have been fighting this one for a long time. You know, I wish I had a story to tell. Jerome McGann used to say: this is only going to be solved one death at a time. The old guard just has to die. Although that is cynical.
Which questions should I have asked?
Two things that we thought a lot about. One is play. I told you that Stéfan's Ph.D. thesis was about OuLiPo. We thought a lot about play, we theorized about it, and we played with the tools. You asked me before if I got any joy out of it. I do not think that we would have gotten where we did if we were not getting joy out of it. And part of the joy was actually thinking about the play. In English, there is a relevant use of the word play: the play in a knob, say, that you can fiddle with until it breaks. That sort of play, playing with things that do not quite work right. There is a play between theory and implementation, which was really fun. It is like a rollercoaster. You go way up into the theory; we would build this sort of theory, and most of my colleagues were not lucky enough to be able to then go swooping down, making it work, and then making it work for other people. That rollercoaster, for me, was about vertigo. Roger Caillois has a theory of play, and vertigo is one of its elements.
The second thing gets talked about a lot in the digital humanities, and we have been talking about it: collaboration. It was very important for me and Stéfan that, year by year, we developed a partnership. It was not a one-off partnership. It was paper by paper, tool by tool, interface fiddle by interface fiddle. I do not think that the type of work we were doing can be done by one person. I could be a fairly good programmer, but Stéfan was better, and the two of us together could do more than apart. If I had had to do it on my own, or Stéfan had had to do it on his own, we would not have been as responsive. There would not have been the dialog, the enjeu (the interplay), between the two of us. It was a blessing. This is just one of the lucks of life, that he and I were together at the same university at the right time, both with our different backgrounds, building things together. By the time we were separated (I went to Alberta, he went to McGill), we were already collaborating. Every week we had a meeting on Zoom, and then we would get together. That is collaboration, and it does not have to be two people. But I think that a lot more attention needs to be paid to the human elements of care. How do you enjoy working on a project with someone over time?
Geoffrey Rockwell
2023-09-19
Dagstuhl, Germany
Questions by Mathieu Jacomy
Thank you so much Geoffrey for sharing this with me, and allowing me to share it with others.