The situation at stake is the following: we, social science and humanities (SSH) scholars, use a method from another field, but we do not use it the way it was designed to be used. For instance, we do topic modeling, but only as a shortcut for categorizing documents by hand. More generally, we use predictive models but we do not predict, and we do not believe that the assumptions baked into the model are appropriate. For example, we use community detection, but we do not treat the groups obtained as if they were actual communities. We disregard some of the features the algorithm provides, while we leverage some of its side effects. Is this good science? Or are we the baddies?
Martin Grandjean was visiting us at the Tantlab for a few months in 2021, and before he left, Anders Munk and I had a long discussion with him to prepare the writing of a paper on our shared interests: network analysis, epistemic cultures, and knowledge technologies. This long-overdue post basically consists of my notes, written down to pin some elements of our discussion. It focuses on networks, and on the practice of repurposing algorithms in digital methods.
We routinely see people interpret network maps in a self-evident mode, that is, as if the interpretation carried no epistemic commitments. As if looking at the picture were sufficient to understand the network structure. But of course, certain competences are required to understand what is going on in the picture. See an example below. These self-evident interpretations raise at least three kinds of questions.
What happens when you look for conversations about vaccines on Twitter. Nodes represent individual users. Clearly, several communities and conversations emerge.
First kind of question: Is the network structure visible or not? This leads to questioning what the network structure is, and what makes it visible. I have a lot to say about that, but I still see it as an open question.
Second kind of question: Why is the practice of self-evidence commonplace? There are obvious answers, for instance: some may believe that the network structure is directly visible; that there is no mediation. It may seem obvious that beginners could get tricked into self-evidence, because they lack training, and/or they are careless. But the obvious answers are not always right, especially when it comes to cultures. Let us refrain from doing armchair anthropology. What do we really know of the beliefs of these persons? We can actually look into these practices and investigate their purpose. What do they provide to those who enact them?
Third kind of question: Is this practice bad? …and if so, in what sense? The answer depends on the answers to the previous questions. The simplest hypothesis is that the network structure is not visible, yet people are tricked into believing that they see it. In that case, the picture does not properly refer to the network structure; the argument is invalid, and it is bad science. The circulating reference is broken: the signifier no longer refers to the signified. On that hypothesis, the simplest possible answer to the question is that network maps are misleading.
The problem, again, comes from the fact that tools such as Gephi have made network analysis accessible to broad audiences that happily produce network diagrams without having acquired robust understanding of the concepts and techniques the software mobilizes. This more often than not leads to a lack of awareness of the layers of mediation network analysis implies and thus to limited or essentialist readings of the produced outputs that miss its artificial, analytical character. A network visualization is closer to a correlation coefficient than to a geographical map and needs to be treated accordingly.
Rieder and Röhle (2017)
Community detection
The same question can be asked about community detection. This technique works a bit like layout algorithms, insofar as it translates the topological structure, but instead of providing node coordinates, it provides groups of nodes (the “communities”). There are different ways to build the groups, depending on what one means by “group”; hence there are different techniques for community detection. I will present two, but let us first consider what people do with those groups.
If your network is big enough, there are too many nodes and edges to analyze them individually. Having groups is an invaluable commodity, as it offers a reduced set of things to talk about. Node groups reduce the network to something we can analyze more efficiently. Of course, now we have to deal with where those groups come from. We have to justify them. But on the other hand, we can now assess those groups qualitatively and quantitatively. We can measure their properties. In that sense, groups are a coding of the network. A reduction that we can assess and use for analysis.
There are as many ways of assessing the coding (the groups) as there are research designs. You could for instance measure intercoder reliability, a well-codified technique of qualitative analysis that can be calculated in different ways. You could benchmark the groups against ground truth(s), if you have such empirical information. You could also measure the properties of your groups. For instance, if you expect them to be assortative (more connected within each group than across), you could compute the modularity of your groups, and compare it to other ways of making groups. The relevant validity criteria depend on the role of the groups in your analysis.
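To make the modularity check concrete, here is a minimal sketch in Python with networkx, using a classic toy network; the two groupings compared are my own illustrative choices, not anything from a real research design.

```python
import networkx as nx
from networkx.algorithms.community import modularity

G = nx.karate_club_graph()  # a classic small social network

# Grouping A: the two factions recorded in the original study (a ground truth).
factions = [
    {n for n, d in G.nodes(data=True) if d["club"] == "Mr. Hi"},
    {n for n, d in G.nodes(data=True) if d["club"] == "Officer"},
]

# Grouping B: an arbitrary even/odd split, for comparison.
evens_odds = [set(range(0, 34, 2)), set(range(1, 34, 2))]

print(modularity(G, factions))    # clearly positive: assortative groups
print(modularity(G, evens_odds))  # close to zero: no assortative structure
```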
Let me sketch three examples. In the first one, the groups are not used in the analysis; in the second, they play a minor role; in the third, they play a major role in the analysis.
First case, you just want to be able to refer to some parts of a network map in the text. The example below maps a discussion about rewilding (in short, the reintroduction of wild animals) by technoanthropology students (Anders and I teach them controversy mapping). Nodes are expressions connected by co-occurrence in Facebook posts, in Danish. The discussion has been analyzed qualitatively, but the map helps to communicate the analysis. The colors come from community detection. In this case, the blue cluster is about emotions (the students named it “pathos”) while the orange cluster contains the expected expressions related to rewilding. Having colored groups makes it possible to guide the reader’s attention to certain parts of the map, for instance: the blue group connects to the orange group through nodes like “dyr bag hegn”, which means “animals behind fences”. The students know, from their reading of the empirical material, that the mention of animals behind fences happens to mobilize strong emotions in Facebook posts. They have many other ways than this network to make that case, and they proceed to do so. Yet the map helps to visualize where emotions (blue group) connect to rewilding (orange group), and to check which other concerns may also play that role. The map was exploratory, and sharing it allows the reader to retrace that exploration, from the visualization to the empirical material.
Second case, you have a ground truth that you need to simplify. The example below represents airports and the airlines connecting them, in 2021. We know the country of each airport, but there are too many countries. If we use the country as the group and assign it a color, we obtain the image on the left. If we use community detection, we obtain the image on the right. There are far fewer colors. The big red group happens to be Europe: although it has many countries, it appears as a single group because those countries are highly interconnected. The “communities” found are not exactly groups of countries, but the reduction works well enough to be used as a basis for the analysis, for instance by measuring which of these macro-groups are better connected.
Network of airports connected by airlines in 2021. On the left, colors represent countries. On the right, colors represent groups of nodes found by community detection. The red cluster is Europe: although it contains many countries, it has the structure of a unified ensemble.
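As a side note, the agreement between detected groups and a ground truth such as the countries can be quantified. Here is a sketch with scikit-learn's normalized mutual information; the labels below are hypothetical, just to show the shape of the comparison.

```python
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical labels for five airports: country (ground truth) versus
# the community found by the algorithm.
country = ["FR", "FR", "DE", "DK", "US"]
community = [0, 0, 0, 0, 1]

# 1.0 means perfect agreement (up to renaming the groups), 0.0 independence.
print(normalized_mutual_info_score(country, community))
```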
Third and last case, the groups are a primary goal of the analysis. Check for instance this recent paper (John et al., 2021): their goal is to identify groups of people from their mobility patterns and profile them in further analyses. Community detection is a key step of the research design, an obligatory passage point of the method. In that case, obviously, the methodological commitments of the community detection technique employed contribute to determining the meaning of the groups further analyzed. Communities are literally modeled, following a number of assumptions.
Do you see communities?
Tiago Peixoto, who made decisive contributions to the science of community detection, happened to visit us while Martin was there. Tiago showed us a case that he later defended on his blog and on Twitter. I have written about it in a previous post. His post contains a provocative picture that I struggled to understand. I will unpack this case because it allows pinpointing the gap between the standpoint of algorithm designers (like Tiago) and the scholars who use their algorithms (in that case, Martin and me).
Tiago compares two different approaches to community detection, and I need to explain that real quick, in my own words. The first technique is called “modularity clustering”. It is the oldest one, and it is popular, notably among Gephi users. In short, it tries to find groups that optimize a certain metric, modularity. It is too costly to find the absolute best grouping, but we can get close thanks to a few approximations. A high modularity means that most of the links are within groups, not across them. The second technique, developed by Tiago, uses Bayesian inference. It gives you the groups that are the most likely to have generated the network, under a given model.
Do these two approaches sound very different to you? If not, bear with me. The difference will appear shortly. Tiago proposes the image below. He asks: do you think there are communities, or not? Look at the network in the image, and remember your answer.
Tiago’s argument goes as follows (in my own words). At first glance, it looks like there are communities in this network. And indeed, if you run modularity clustering, it finds communities (check the colors on the left). However, the communities are not real. Indeed, Tiago generated this network from a model that has no notion of community whatsoever. Instead, it just requires that 13 nodes have 20 neighbors, and 230 nodes have 1 neighbor. So the nodes in each detected group have nothing more in common with each other than with the rest; all nodes are by definition on an equal footing, despite the particular configuration generated in this specific instance (see image below).
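Here is a minimal sketch of that setup as I understand it, in Python with networkx: a network generated from nothing but the degree sequence (13 nodes of degree 20, 230 nodes of degree 1), on which modularity clustering nevertheless finds communities. The seed and the cleanup steps are my own choices, not Tiago's exact code.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Generate a network from a degree sequence alone: no notion of groups.
degrees = [20] * 13 + [1] * 230
G = nx.Graph(nx.configuration_model(degrees, seed=1))  # collapse multi-edges
G.remove_edges_from(nx.selfloop_edges(G))

# Modularity clustering still finds communities in it.
communities = greedy_modularity_communities(G)
print(len(communities), "communities found in a network generated without any")
```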
My initial reaction was: I do see the communities, so how can you argue that they are not real? Maybe those communities are only specific to the network generated that specific time, OK. But it only means that the generator gives you different communities every time; it does not mean that those communities are not real. But Tiago did not agree with that.
A different phrasing helps find a common ground and situate the disagreement. By “I do see the communities”, I mean that if I met such an empirical case, I would describe it as having a community structure. It may mean, for instance, that I could cut just 14 edges and separate the network into 13 pieces of roughly equal size. This criterion boils down to having a good modularity score. Tiago calls that a description, fair enough. Modularity clustering is descriptive. It applies to a situation where the network is empirical. Obtained from the field, as we say.
In comparison, Tiago’s situation is not empirical. His network is generated from a model. By “there are no communities”, he means that the likelihood that the found groups play a role in the connections is low. Which is a given, since the model has no groups to begin with. Still, the argument holds; let me explain it in a different way. The model is like the rules of a game. Here is a simple example. We are given 2 groups of 5 nodes, and the game is to decide which nodes are connected. The rules tell us how the groups affect the chances of being connected. For each pair of nodes, we roll a die. If the nodes are in the same group, we connect them on a result of 2+; otherwise, only on a 6. We play the game, and we get edges that depend on the groups (and on the rules!). The Bayesian inference algorithm for community detection helps us play the game backwards. We have the edges, and we must guess the groups that generated them. But crucially, we must also know the rules (the model). Given the edges and the rules, it gives us the groups that are the most likely. In fact, we could even propose groups, and it would tell us how likely it is that those groups were the ones used in the game. By “there are no communities”, Tiago means that the groups obtained are no more likely than any other distribution. He also argues that the model does more than describe: it explains, although “explains” has a narrow definition in this context.
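Incidentally, the dice game above is what network scientists call a planted partition model, and we can play it forward in a few lines (a sketch; the probabilities come straight from the dice rules: a roll of 2+ is 5/6, a 6 is 1/6):

```python
import networkx as nx

# The dice game, played forward: 2 groups of 5 nodes; within-group pairs
# connect with probability 5/6 (a roll of 2+), across-group pairs with 1/6.
G = nx.planted_partition_graph(l=2, k=5, p_in=5/6, p_out=1/6, seed=7)

# The groups used by the game are stored on the graph; playing the game
# backwards (Bayesian inference) means guessing them from the edges alone.
print(G.graph["partition"])
print(G.number_of_edges(), "edges generated by the rules")
```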
I find the description/explanation dichotomy too self-serving, for two reasons. First, it suggests that descriptive techniques cannot explain, yet I contend that they may contribute to explaining by feeding into other methods. Modeling is far from the only means of providing an explanation. Second, when you get an empirical network, it never actually fits a model. The processes that exist in the world are never as simple as the rules of a game. “All models are wrong, but some are useful”, as one says in statistics. If the explanatory powers of modeling cannot exceed the justifications of the models, and if those are weak, then models only explain in theory, not in practice… yet modeling is useful. There is no doubt about it. The question is: what do researchers actually do with modeling techniques?
I helped Tiago implement a version of his Bayesian inference algorithm in Gephi. This version is based on the simple assumption that each node belongs to exactly one group, and that nodes are more connected within a group than across groups. These assumptions are reasonable, but one cannot take them too seriously. The model is obviously unrealistic: no person belongs to a single social circle, no word to a single topic, etc. Yet it is useful because most of the time, we want each node to have exactly one group. Possibly for pragmatic reasons, like the necessity to visualize groups as colors, or because we use groups as a reduction, a simplification. Those are good reasons. We want a one-group-per-node model not because we believe that it is how the network was generated, but because our research design demands it. In that situation, the usefulness of Bayesian inference is not about its explanatory powers. We cannot take for granted that the usefulness of an algorithm depends on the usage prescribed by its designers.
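For reference, the same kind of inference is directly available in Tiago's own graph-tool library. A minimal sketch, assuming graph-tool is installed (the benchmark network is just an example):

```python
import graph_tool.all as gt

g = gt.collection.data["football"]    # a small benchmark network
state = gt.minimize_blockmodel_dl(g)  # fit a stochastic block model
blocks = state.get_blocks()           # the most likely group of each node
print(state.entropy())                # description length: lower fits better
```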
Misuse versus off-label use
Algorithms can be repurposed; they should be reappropriated; yet they can still be misused. The problem is to differentiate between those situations. Who gets to tell the misuse from the reappropriation? I am reluctant to be normative on that matter, and this post is long enough. So I will now explore a different direction, and engage with a topic that is faithful to the discussion we had with Martin and Anders: off-label use.
Off-label use is an expression you will primarily find about drugs, about medicine. It mainly refers to the widespread practice of using a drug in an unapproved way. Let us extend this notion to every technology, and refer to any unapproved use. The notion just assumes some degree of normativity, and a practice breaking that norm.
Off-label use takes on a different meaning depending on where the norm comes from. The pharmaceutical version of off-label use is often about what the health authorities have determined to be legal or not. The norm is set by the state. But in other contexts, the norm might be set by the manufacturer, cultures, society at large… and those are not mutually exclusive. In the case of algorithms, we should at least consider what is prescribed in the paper(s) defining it, and the culture of the field.
Let me explore a few examples of off-label use. My goal is to provide analogies as food for thought, but also to show that off-label use is more common than it may seem. I want to make it clear that off-label use is legitimate insofar as it consists of using something for what it is rather than what it is supposed to be.
Nitrous oxide. “Commonly known as “laughing gas”, this odourless substance is used in medicine, as an anaesthetic, and in catering to make whipped cream. It is the whipped-cream chargers that people buy for recreational use. The gas is usually inhaled by discharging a canister containing small amounts of the gas into a balloon.” (The Conversation) Anecdote: I knew those canisters only as a cooking tool, and wondered for a long time why people seemed to throw them away on the streets.
Canisters of nitrous oxide.
Sildenafil, better known as Viagra, “was initially studied for use in hypertension … and angina pectoris … Phase I clinical trials … suggested the drug had little effect on angina, but it could induce marked penile erections. Pfizer therefore decided to market it for erectile dysfunction, rather than for angina; this decision became an often-cited example of drug repositioning.” (Wikipedia). The off-label use became the intended use.
Ikea’s BEKVÄM spice rack is simple and inexpensive, as any spice rack should probably be. But it can hold more than spices. People started buying it as a bookshelf for kids. Then it was discovered that if you hang it upside down, it also allows you to hang a number of things, like jewelry or a towel. These uses became so popular that Ikea now also showcases the spice rack as a bookshelf.
Jimi Hendrix was left-handed, but used to play a right-handed guitar held as if it were a left-handed one. As a result, the affordances of the instrument are upside down: the buttons and the switch sit under your arm instead of at your fingertips, the tuning keys are far from you instead of close… an ergonomic nightmare. Now, who would dare state that Jimi Hendrix was not playing the guitar appropriately? The history of musical instruments is full of off-label uses that became mainstream because they defined the sound of popular artists. Most guitar pedal effects, in fact, started as repurposed artifacts (distortion, vibrato, delay…).
In conclusion
I don’t think we are the baddies when we repurpose algorithmic techniques borrowed from other fields to do social sciences and humanities. We have different goals, different methodological commitments, and we have the right to reclaim those techniques for ourselves. This is not inherently bad science.
De facto, we are doing it. I want to frame it as off-label use. We use these techniques for what they are rather than what they are supposed to be. We disagree with the norm because our situation is different. For instance, in the case of community detection, we do not model; yet we may use modeling. We use it as a way to produce a reduction, which it functionally does. We are not misunderstanding the algorithm, it actually performs what we expect, even though this is not what the designers intended.
That being said, normativity also protects against misuse. Although I reclaim the right to use techniques off-label, I also acknowledge that it requires doubling down on assessing the algorithm to ensure that it actually does what we think it does. Off-label use comes with increased risks. It is not inherently bad science, but it exposes us nevertheless. Let’s not become the baddies.
References
Rieder, B. & Röhle, T. (2017). Digital Methods: From Challenges to Bildung. In M. T. Schäfer & K. van Es (Eds.), The Datafied Society: Studying Culture Through Data (pp. 109–124). Amsterdam: Amsterdam University Press.
John, E., Cauthen, K., Brown, N. & Nozick, L. (2021). Detecting Communities and Attributing Purpose to Human Mobility Data. In 2021 Winter Simulation Conference (WSC) (pp. 1–12). doi: 10.1109/WSC52266.2021.9715396.
I do not give an answer. I report who says what, where the concern comes from, and I show how you can look for yourself. I will unpack how and why AI models somehow recorded artist styles. In particular, I will look into the data where all of this comes from. And just so that you know, in that part I will show you pornography (a warning will precede it).
This is about a kind of apparatus that generates images from a text prompt: DALL-E, Disco Diffusion, Midjourney, Stable Diffusion, Imagen… Those devices are different but share the same general technical premises and fulfill about the same tasks, so let me call them a technology. It lacks a stabilized name though, and since I must commit to one in this piece, I pick “text to image”, abbreviated as T2I. This post is about the T2I technology, artists, and how the former will allegedly change the latter’s life, or not. If this is all new to you, then just watch this awesome 13-minute video by Vox. It summarizes the issue perfectly.
“You can copy an artist’s style without copying their images, just by putting their name in the prompt.”
Give a text prompt to a T2I tool, and it returns images to you. I have previously documented the process of building prompts. Your prompt may ask for the image to be rendered in the style of a given artist, and the tool will oblige. It works better for certain artists than others. I am interested in the most convincing cases. Here is an example I find telling: the art of Simon Stålenhag. You will find his work below (check his website for a better view) next to images returned by T2I tools prompted for his style. Look at them, compare them visually. Do they look similar to you? If so, why? Can you tell the difference between the man-made images and the T2I output?
A screenshot of Google Images asked for “Simon Stålenhag art”. This is what Google Images knows about his art. Also check his website for a better look at his work.
OpenAI DALL-E 2 prompted for “a painting by Simon Stålenhag”.
Disco Diffusion 5.6 prompted for “A beautiful painting by Simon Stålenhag, trending on artstation”.
To me, those images have a clear family resemblance. I would characterize them as wide shots of an imposing monument or structure looking alien or technologically advanced, standing out at a distance in the misty wilderness, often with one or a few human beings using 20th-century technology (cars, clothes…), rendered with a mix of realism and oil-painting textures, with muted colors and a few bright accents. Wikipedia summarizes Stålenhag’s style as “a stereotypical Swedish landscape with a neofuturistic bent”. I can tell that an image has been generated by a T2I tool, but importantly, I can also guess whether Simon Stålenhag was the artist used in the prompt. I have seen other people guess it too on social media (unfortunately I could not retrieve any sources). And I am not the only one to find his style remarkably well captured by T2I models (compared to other artists).
At this point, I want to apologize to Simon Stålenhag. He is tired of hearing about this AI stuff. I am sorry to add a layer to this. I still have to, because his case is excellent for what I write about. Not only because T2I “is crazy good at replicating [his] style”, but because he is also involved in at least three important aspects of the discussion. First, he does not care about having his work absorbed by the T2I models, or his name being a popular prompt modifier. Second, some other people try to speak for him as if he took issue with the T2I technology, or try to enroll him as an ally in their fight against it. Third, he gets tired of all this social media activity. See by yourself in the tweets below.
Simon Stålenhag does not take issue with how his work is used in T2I models (May 2022).
A now-deleted Twitter thread where Simon Stålenhag is erroneously painted as someone who “is likely to sue first for copyright infringement” by another Twitter user, Andres Guadamuz.
Simon Stålenhag’s response to the thread, making it clear that he has no intentions to sue anyone over this, and dislikes being portrayed as such (August 2022).
I truly apologise and I have deleted the offending thread. I chose you precisely because your work is so identifiable, and I wanted to see if the systems was trained on your work and would reproduce it. I've been writing about this for years and never got any attention.
The response of Andres Guadamuz, who initially tried to enroll Simon. Andres now states that he chose Simon because his work “was so identifiable” (August 2022).
Now, some people do take issue with T2I technology absorbing artist styles. But I have yet to find an actual artist complaining about getting ripped off themselves. What I observe instead is other people getting upset in their stead. The artists themselves seem either nuanced, or willing to embrace the T2I technology, or sometimes indifferent. In the Vox video mentioned above, James Gurney, a renowned artist often used in prompts, does not complain about his style getting absorbed by the DALL-E model. He only states that “the artist should be allowed to opt-in or opt-out of having their work, that they worked so much on by hand, be used as a dataset for creating this other artwork.” In the same video, Vanessa Rosa, artist and art historian, mentions that she has “heard of other artists who got actually extremely upset”, but does not name them. But are the upset artists those who had their own style absorbed? In the companion video to the one above, which consists of additional interview material from various people, we find no mention of style absorption ripping off artists. Ted Underwood, a professor in machine learning and literature, just says that artist names “are really powerful sort of magic words in this model.” Rob Sheridan, an art director, just comments that “everything in art is inspired by something else. … This just … puts a very crass, fine point on it.” And Mario Klingemann, the famous artist at the forefront of AI art, says this:
“It’s a bit unfair, of course, because some people took, I don’t know, years, decades to perfect their style and find their niche. And now all it takes is to put their name in the prompt, and then you can just have the shortcut and go on from there. … ‘Good artists copy, great artists steal.’ And that’s kind of exactly what it is, like, a lot of artists have ‘gotten inspiration’ from some unknown, whatever, other artist or so and never tell. … Art is not like science where you have to cite all your sources.”
And then, there is the Twitter thread below by RJ Palmer aka @arvalis, a concept artist. He takes issue with the T2I technology as an artist, but as far as I know, not as one who had their own style absorbed.
What makes this AI different is that it's explicitly trained on current working artists. You can see below that the AI generated image(left) even tried to recreate the artist's logo of the artist it ripped off.
A twitter thread by RJ Palmer aka @arvalis. He writes: “as an artist, I am extremely concerned”, notably by the fact that T2I models are “explicitly trained on current working artists”. He claims that one of those models (Stable Diffusion) “even tried to recreate the artist’s logo of the artist it ripped off” which is “anti-artist”. August 2022.
There are a few things to unpack here. Let me start by observing that there are two distinct arguments at play: style absorption rips artists off, and T2I will steal their jobs. Let me address the second argument first. The artist community is divided about it. Some artists believe that AI will take their jobs, some believe that it will change the profile of their jobs, for better or for worse, and some believe that it will not change much. The companion to the Vox video features various opinions on that matter. Let me simply acknowledge that many people are concerned about the impact of this technology on the job market, and voice it on social media. Yet what makes RJ Palmer’s Twitter thread stand out is the other argument. The claim made and defended with the images attached is specifically that AI rips off artists by copying their style. Which raises two questions: how linked are the two arguments, and how strong is his case?
The two arguments are weakly linked. T2I tools could disrupt the artist job market without copying styles in particular (I am not saying that they will). My argument, here, is that styles could exist without being attached to artists. Oil-painting style, watercolor style, 3D style… Even in today’s AI art, people use many other modifiers than artist names. We could train a model on data where artist names have been removed, and it would still retain stylistic information. I can imagine such a tool disrupting the artist job market the same way, without absorbing artist styles. To be fair, RJ Palmer or other artists may believe that T2I technology is so good only because it absorbed artist styles; but personally, I do not buy that. And I do not think that RJ Palmer does either. Indeed, he frames it as an economic issue: he finds it “gross” that “working artists [get] advertise[d] as styles” by AI companies. So conversely, he can imagine a system where AI companies compensate artists fairly. We can envision a disruption of the job market that is beneficial to artists. Of course this will not happen, but not because it is impossible. No, because the balance of power is completely unfavorable to artists. AI companies have power and money, and do not care at all about artists. My takeaway here is simple: T2I may disrupt the artist job market in various bad ways, which is the real problem; style absorption is just a part of it; and fixing it is neither necessary nor sufficient to solve the bigger issue of harmful job market disruption.
Aside from that, is RJ Palmer’s case good? I do not think so, for three reasons. First, he is not himself an artist whose style is getting absorbed. The artist in his example is Michael Kutsche. Second, the similarity between the two pictures looks vague to me. The style is not as similar as in Simon Stålenhag’s case, but that is subjective. The signature, however, is really not similar (see below). RJ Palmer may not know that such artifacts are common in current T2I technology. Of course, models have learned that good paintings often have a signature, so if you include popular modifiers such as “trending on Artstation”, you will often get such “watermarks” (in the vernacular of prompt writing). But visibly, the model did not try to reproduce this particular logo. Third, RJ Palmer’s point is phrased in a very anthropomorphic way that grants the technology more intentionality than it deserves: the model has not “tried to recreate” the style of said artist. If we could understand AI in terms of what it tries to do, regulating it would be much easier.
AI signature (left); artist signature (right). RJ Palmer claims that the AI tried to copy the artist’s signature.
Let me summarize. Palmer’s case is not convincing (1) because he fails to establish that AI copies artists, (2) because he is not the one being “ripped off” himself, and (3) because it boils down to the more general concern of a harmful job market disruption by T2I technology, which is a legitimate concern but not dependent on style absorption. Yet this tweet was repurposed precisely to make the case that artists are getting ripped off. It was quoted for instance in this newsletter issue titled “Plagiarism by Machine”, where the author says that some AI companies are “direct about ripping off the style and signature elements of digital artists — to the point where they even try to copy the artist’s logo!” I have read, and you will read, about T2I technology plagiarizing artists, and we will get exposed to the implicit injunction to side with the artists against the disruption caused by Big Tech. An injunction that I am personally inclined to endorse, and so may you; but it holds me back that at the root of this argument, we find no artist actually complaining about their own style getting absorbed by the T2I technology. Of course, a prominent artist might make that case tomorrow. Yet I could also see those renowned artists feeling unthreatened by that technology. Or even, why not, flattered.
Is it legal?
This, as well as basically everything related to authorship and AI, is legally unresolved. You can find a series of framings in Is DALL-E’s art borrowed or stolen? by Daniel Cooper on Engadget (July 2022). It is very instructive. Also note that despite the title, there is no mention of an artist complaining about being stolen from.
There are two parts to style absorption. First, the artist data has been harvested, in the form of images with captions that contain the artist’s name. Second, the model was trained on that data and abstracted something that we call “style”. I will explain in more detail later. The point I want to make here is that the legality of style absorption plays out very differently in these two steps. The harvested data is basically public information on the web. It is the portfolio of artists. In some sense, if you want to be on Google so that your clients find you, then you have to allow crawlers to harvest your images with your name attached and reuse them. But still, that is something we could regulate legally. The other part, however, is where the AI magic operates. There is nothing inherently illegal in training a neural network on a data set. Yet that is where style absorption really happens. You may find it scary and/or fascinating; you would not be alone in that. AI can absorb and repurpose artistic style, although as we will see, there is a lot to say about what “artistic style” means in this context. There is no coming back to when only humans could paint.
AI Artist studies
Let me briefly mention so-called AI artist studies. The name is a bit ambitious for what it actually is, but there is a real effort behind it. In short, it is about rendering the same prompt again and again, changing only the artist’s name, so that you can see how the name impacts the result. This project is an attempt at documenting the T2I technology in a systematic way, and it is a major resource for prompt engineering. Here is an example for Disco Diffusion.
Surea.i, the artist at the origin of this initiative, has taken some of the heat against style absorption on social media, although he is not affiliated with any of the AI companies. Generating an image takes some time, and collecting this database required a significant effort (many other people participated). The explicit intent was to give knowledge back to the community, and I personally appreciate and support that mindset. Yet it was interpreted by some as an anti-artist contribution. As a Twitter user commented, the artist studies were “not even inherently pro-AI” (see below). As Surea.i replied, the case of artist style absorption could only be made because it was so well documented in the first place. Using artist names in prompts is a practice that both fed into the artist studies, and was nourished by them.
People wouldn't even know they *could* be upset about this if it wasn't for us sharing information that was largely considered to be personal prompt secrets for many others. https://t.co/LcPpdpOzYH
As Surea.i argues, the case of artist style absorption could only be made because it was documented in his AI artist studies. August 2022.
Surea.i was “feeling very sour on AI art”. To which another Twitter user replied: “how hard is it to keep other artist’s names out of your f*cking prompts!” (see tweet below). This reaction surprised me. Is it really about how people write their prompts? What about the tools? What about the models? What about the training data? Sadly, the state of the debate on Twitter tells us more about people’s concerns than about the way AI actors are massaging those concerns with their discourse and tool design. In the rest of this text, I will look into this mess with a bit less innocence.
— Templedweller : Visionary Ornamentalism/Surrealism (@TempledwellerAI) August 15, 2022
Where AI knowledge lives
Let me call “knowledge” whatever it is that makes a T2I tool return something that we recognize as a cat when we ask for one. That thing that makes it “know” an artist’s style. I do not like the personification that this wording implies, but I will put that aside. There are three places where AI “knowledge” can live: the model, the training data, and the tool.
First the model. The simplest case. The knowledge certainly lives there, because we do not need to access the training dataset anymore. That is precisely the point. The knowledge is in the weights of the neural network that associates images with text.
Second, the training data. Knowledge certainly lives there too, because that is where it came from in the first place. Training a model is a big investment (it uses so much computing power that it is incredibly long and expensive) that abstracts the knowledge of the data into something much smaller, the model itself. Running the model is quite easy, while training it is hard. The training reduced and transformed the knowledge, so in some sense it created knowledge too. Nevertheless, a different version of that same knowledge lives in the training data set, in the sense that different data give a different model.
Third, the rest: the apparatus around the model. The argument is less obvious. In order to get convinced that the model “knows” what a cat is, you need to perform the whole image generation process. If you just look at the model as an array of weights, you cannot understand anything. The knowledge is only ever accessible through a performance in which actual images get generated. Therefore, anything that shapes that performance is also knowledge. For instance, how the prompt is processed. Indeed, T2I systems are always layered (DALL-E 2’s architecture for reference). One layer is the text encoder that transforms your prompt into a series of weights that the model can read. Another layer is the diffusion process, and it also shapes the output. And of course, the model is the most important layer, but we have seen that already. Each part can be considered knowledge, even the graphical user interface, in the sense that it shapes the output. Does it seem far-fetched? What comes next may change your mind.
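To make the layering tangible, here is a sketch using Hugging Face's diffusers library, which exposes those layers explicitly (assuming the library is installed and you have access to the model weights; the model name is one that was current when I wrote this):

```python
from diffusers import StableDiffusionPipeline

# The pipeline bundles the layers explicitly. Downloading the weights may
# require accepting the model license on Hugging Face.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

print(type(pipe.text_encoder).__name__)  # the text encoder layer
print(type(pipe.unet).__name__)          # the diffusion model proper
print(type(pipe.vae).__name__)           # decodes latents into pixels

# The "knowledge" is only ever accessible through the performance itself.
image = pipe("a painting of a cat").images[0]
image.save("cat.png")
```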
The different places where AI knowledge lives are not equivalent. Their material differences matter in surprising ways. Here is an example. I used DALL-E 2 to generate an image, and I obtained this. Can you guess the prompt? Try it.
Generated by DALL-E 2. Can you guess the prompt?
You cannot guess the prompt. Let me reduce it to five possibilities:
A portrait of Mona Lisa by Leonardo Da Vinci
dfkljbfdkjb fdkjbkj dfbj
Dckfc slf
Smile
Pic
Is that better? Let me put some blank spaces below while you make a guess.
.
.
.
.
.
.
.
.
.
.
.
.
And the answer is: A portrait of Mona Lisa by Leonardo Da Vinci.
If you are like me, you probably wonder how it could be so wrong about something so famous. Does it even know what the Mona Lisa is? Well yes, but let’s call this a glitch for now. Out of the four images I obtained, three were what you’d expect, and one was this outlier, as you can see in the screenshot below.
DALL-E 2 output for “A portrait of Mona Lisa by Leonardo Da Vinci”.
I think that DALL-E totally brainfarted, and I will explain why it happened. But a short remark first. Some of my colleagues thought the output made sense, that DALL-E interpreted the prompt as “what would the Mona Lisa be if Da Vinci lived today”, and that the girl looked like the Mona Lisa. I think that this take is a total hallucination driven by a strong desire to be in agreement with the T2I technology. I completely understand this drive, because I believe that these models can tell us something about our* culture, and can be used in the fashion of a divination device (*leaving aside the huge issue of what “our” means here). I tried to give this output a meaning, and I still found it fishy. I interpreted it as the diffusion process landing on a messed-up local minimum for weird optimization reasons, but even so, it did not square with the excellent photorealistic rendering. If it is a glitch, why is the image so good, aside from not corresponding to the prompt? And if my prompt can be interpreted so freely, then why are the other images so similar to each other? I think that we can all agree on something: this output essentially ignores the part of the prompt that says “by Leonardo Da Vinci”. No matter how many people were asked to label this image, none would ever describe it as being made by Leonardo Da Vinci.
The interesting part is why DALL-E forgot about the artist styling. I only have an incomplete answer, because OpenAI’s systems are heavily blackboxed. But I know this: under the hood, the outlier image has been generated by a different prompt. OpenAI intercepts prompts to improve diversity, as they explained in July 2022. They do not say how they modify the prompt, but it clearly nullified the artist-style part. Should we call this a glitch? Yes, in the sense that their interception broke the meaning of the prompt: I am pretty sure that DALL-E could perfectly draw an African Mona Lisa if prompted properly. I attribute the loss of the styling to a poor automatic interception of my prompt. But at the same time, it is not a glitch in the sense that it is part of the system. In fact, I cannot guarantee that the three other prompts have not been intercepted too. How would I know? If you ask me for my prompt, I have nothing else to give you than “A portrait of Mona Lisa by Leonardo Da Vinci”. This is how it would be documented. The part of the “knowledge” that omits the artist styling does not live in the model or the training data; it lives in the rest of the tool. In the content moderation layer, and in the user experience layer.
Which, by the way, tells us that OpenAI could endeavor to prevent artist styling entirely. If they can do it accidentally, they might well succeed in doing it intentionally (to some extent).
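We do not know how the interception is implemented, but the mechanism is easy to reconstruct as a toy. Everything in the sketch below is hypothetical: the trigger condition, the list of terms, the rewriting strategy.

```python
import random

# A toy reconstruction of prompt interception. Nothing here reflects
# OpenAI's actual implementation, which is not public.
DIVERSITY_TERMS = ["a woman", "a Black person", "an Asian person"]  # hypothetical

def intercept(prompt: str) -> str:
    """Rewrite some person-depicting prompts before they reach the model."""
    if "portrait" in prompt.lower():  # hypothetical trigger condition
        return f"{prompt}, depicting {random.choice(DIVERSITY_TERMS)}"
    return prompt

# The user never sees the rewritten prompt, only its output.
print(intercept("A portrait of Mona Lisa by Leonardo Da Vinci"))
```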
OpenAI mostly shapes DALL-E at the tool level
I have something to say about OpenAI’s way of designing DALL-E. In a nutshell, I find their approach to containing harmful content insincere, hypocritical. Of course, generating harmful content is problematic, but it is most problematic when you get it without asking for it. The typical example is race and gender bias: ask for a CEO and get only white males. And the model within DALL-E has exactly that kind of bias. What they should do is use a better training set, because the knowledge contained in the one they used is indefensible (more on that later). What they do instead is patch problems after the fact. This is admittedly better than nothing, but here is the problem: it happens instead of solving the problem. They do not fake it until they make it; they fake it instead of making it. Sure, solving the problem is hard and expensive. But do they even try? Establishing this discussion and exploring it is my road map for the rest of this piece.
Eliza Strickland wrote a concise and informative piece for IEEE Spectrum titled DALL-E 2’s Failures Are the Most Interesting Thing About It (July 2022). It is very clear about what DALL-E 2 is good at (e.g., food photography), where it falls short (drawing text, counting, faces when there are multiple people…), how the industry does not feel threatened by it (“A spokesperson for Getty Images, a leading supplier of stock photos, said the company isn’t worried”), and how OpenAI shaped DALL-E:
“OpenAI filtered the data set before training to remove images that contained obvious violent, sexual, or hateful content. … But the researchers have clearly stated that such filtering has its limits and have noted that DALL-E 2 still has the potential to generate harmful material. … the company integrated certain filters to keep generated images in line with its content policy and has pledged to keep updating those filters. Prompts that seem likely to produce forbidden content are blocked and, in an attempt to prevent deepfakes, it can’t exactly reproduce faces it has seen during its training. Thus far, OpenAI has also used human reviewers to check images that have been flagged as possibly problematic.”
From this, I want to highlight the practice of filtering. OpenAI filters the prompts the same way content is moderated on social media. There is even a moderation API that will tell you whether your text “violates OpenAI’s Content Policy”. You cannot prompt DALL-E for anything. DALL-E’s content policy stipulates intentions, constraints put in human language, such as “mocking, threatening, or bullying an individual.” But what does it mean in practice? It means that you cannot use certain terms, or combinations of terms, and you cannot get the list. It probably changes over time. But it is opaque by design, like all moderation strategies, if only because opacity prevents workarounds. Yet workarounds exist, notably through “deliberate spelling mistakes”, as you can see in the tweet below. It shows that the knowledge is indeed in the model, but that the tool is constrained so that you cannot access it, aside from such tricks. One last thing about OpenAI’s moderation policy: it does not say anything about mentioning artist names in the prompts, even though some names are banned, such as “Trump”. This might change, but the styles would still be absorbed by the model, and the list of banned names is virtually endless. And with flabbergasting cynicism, OpenAI’s policy asks you to “not upload images of people without their consent”, or “images to which you do not hold appropriate usage rights”.
I found a way to trick #Dalle2#Dalle to generate this, if we make some deliberate spelling mistakes:
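The moderation API mentioned above is public, at least. A minimal sketch with the openai Python package, assuming an API key is configured (the exact response fields may evolve):

```python
import openai  # assumes the openai package and an API key are set up

response = openai.Moderation.create(input="some prompt text to check")
result = response["results"][0]
print(result["flagged"])     # True if the text violates the content policy
print(result["categories"])  # which policy categories were triggered
```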
In her piece, Strickland focuses more specifically on bias, and how OpenAI deals with the issue. In short, here is what she reports:
“OpenAI asked external researchers who work in this area to [assess] the system’s risks and limitations. They found that in addition to replicating societal stereotypes regarding gender, the system also over-represents white people and Western traditions and settings. … [Another] team at OpenAI … found that removing sexual content created a data set with more males than females, which caused the system to generate more images of males. ‘So we adjusted our training methodology and up-weighted images of females so they’re more likely to be generated,’ [an OpenAI researcher] explains. Users can also help DALL-E 2 generate more diverse results by specifying gender, ethnicity, or geographical location using prompts such as ‘a female astronaut’ or ‘a wedding in India.’ But critics of OpenAI say the overall trend toward training models on massive uncurated data sets should be questioned.”
Let me unpack this passage in four points. First, DALL-E is firmly rooted in the dominant Western culture, with all of its “societal stereotypes”, gender and racial biases included. This is hardly surprising, considering that the training data was sourced by scraping the web, a space dominated by Western culture (I will return to that). OpenAI’s own post about bias mitigation features examples of what it means: “A photo of a CEO” returns only males, mostly white; “A portrait of a woman” returns only whites; “A portrait of a heroic firefighter” features only white males; “A portrait of a teacher” returns only females, mostly white; “A portrait of a software engineer” returns only skinny white males. For clarification, this was before bias mitigation was implemented (through prompt interception).
Second, biases interact with each other. Obviously, the whole analytical framework of intersectionality is about this, so this is not surprising either. But it means that you cannot fix one thing after another, because unbiasing one aspect may create new biases elsewhere. This is exactly what happened when removing sexual content caused an under-representation of women. Which immediately raises a first question: is female representation worth anything, if it comes mostly through porn? And that raises a second question: how naïve can you be, to not acknowledge the problem and instead patch it with “up-weighted images of females”? I think that this case makes it clear that one cannot fix culture one bias at a time; that is just not how any of this works. Yet it seems that OpenAI’s strategy is to stick to their initial plan of patching one flaw after the other. But this cannot work, because you cannot take the bias out of the culture; you can only change the culture. Bias is culture, and culture is bias all the way down. Any bias is the flip side of a challenged cultural norm, and the same way cultural norms are heavily entangled, biases are.
Third, an important argument is voiced by the OpenAI researcher interviewed: users can engineer prompts that get them anything they want. They can get a black female CEO as soon as they ask for one. Let me name this argument: there is a prompt for anything (TIAPFA). On the one hand, the argument is essentially legitimate. In most situations, the user can compensate for any form of bias through prompt engineering. That is why prompt interception works in the first place. Which means you can also accentuate a bias if you want. You shape your cultural norms. This argument puts the responsibility on the prompt engineer (the user). But it does not help unaware people who ask for “a photo of a CEO”. This is why OpenAI takes additional measures such as prompt interception: it helps “generate more diverse results”. Retain this: the TIAPFA argument and prompt interception live in different worlds. They address two distinct issues, and to some extent, they are incompatible. Indeed, if TIAPFA, intercepting prompts defeats the point! It disrupts the user’s (respons-)ability to set their own cultural norms.
Fourth, the “critics of OpenAI” question something else entirely: that the models are trained on “massive uncurated data sets”. Once again, according to the TIAPFA, it does not matter (users set their cultural norms). But there is more to this than TIAPFA, which is why critics bother, and also why “OpenAI filtered the data set before training to remove images that contained obvious violent, sexual, or hateful content.” OpenAI is doing some curation by filtering the data set, but not by sourcing it better. Strickland’s article is also clear about why: efficient models require humongous data sizes, and as an independent researcher observes, even “Wikipedia-based data sets spanning [about] 30 million image-text pairs are somehow ad hominem declared to be ‘too small’!”
There is a problem with the training data. I will get to that point in due time. For the moment, let us acknowledge that it is the elephant in the room. The critics focus on this. The TIAPFA argument is supposed to nullify it by shifting the responsibility to the user, but in practice we see that even OpenAI takes measures to deal with the most nefarious aspects of the training data (porn and violence). And at the same time, OpenAI’s measures are anything but a switch to another training data set. This is because models need to be trained on the biggest data sets to be efficient. At the end of the day, the only way to get more data for cheap is to lower your standards.
In short, OpenAI uses the big dirty data set, which reproduces all the features of Western culture, including prejudicial ways most people form their opinions (aka “biases”), but without porn and violence, and then tries to mitigate the problems as an afterthought through tool design (term-based prompt moderation and prompt interception) while shifting responsibility to the user via the TIAPFA argument. By comparison, their competitor Stability.ai, which released the T2I tool Stable Diffusion (currently in beta), uses zero moderation or prompt interception, and claims to be freely and transparently releasing the model itself to academics (although the request I made is still pending, wait and see). In this remarkably uncritical video interview, Emad Mostaque, the f(o)under of the company, opposes OpenAI’s “paternalistic” approach. The video has the merit of letting him make his points freely. In the section where he is asked about the eventuality that his model is accused of producing harmful content, his response boils down to owning the TIAPFA argument while criticizing OpenAI’s interventionism:
“of course, humanity is horrible and they use technology in horrible ways, and in good ways as well. … The reality is that people get used to these models. They use them one way or another, and restricting them means that you are becoming the arbiter. … What [OpenAI] is saying is AI for us, and our clients (because it’s expensive to run these things), not for everyone else. … What they are really saying is we don’t trust you, as humanity, because we know better. I think that’s wrong.”
I focused enough on OpenAI for this piece, but I cannot move on without pointing to Karen Hao’s remarkable and extensive piece The messy, secretive reality behind OpenAI’s bid to save the world (2020). It is critical. To give you a taste, the article basically opens with the following statement. “Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.” More specifically, Hao looks into OpenAI’s claims to “distribute the benefits [of AI to] all humanity”, and the company approach to the social impact of its technology.
“The leadership speaks of this in vague terms and has done little to flesh out the specifics. … ‘This is my biggest problem with OpenAI,’ says a former employee, who spoke on condition of anonymity. ‘They are using sophisticated technical practices to try to answer social problems with AI,’ echoes Britt Paris of Rutgers. ‘It seems like they don’t really have the capabilities to actually understand the social. They just understand that that’s a sort of a lucrative place to be positioning themselves right now.’ Brockman [(co-founder and CTO)] agrees that both technical and social expertise will ultimately be necessary for OpenAI to achieve its mission. But he disagrees that the social issues need to be solved from the very beginning. ‘How exactly do you bake ethics in, or these other perspectives in? And when do you bring them in, and how? One strategy you could pursue is to, from the very beginning, try to bake in everything you might possibly need,’ he says. ‘I don’t think that that strategy is likely to succeed.'”
The Fantastic New World of AI Art Generators and Why Their Critics Get It All Wrong
The essay, The Fantastic New World of AI Art Generators and Why Their Critics Get It All Wrong, by Daniel Jeffries (August 2022), is presented by AI artists like Surea.i as the authoritative reference about T2I technology. Possibly because it fiercely defends AI art as a practice, and fights every point of criticism one by one. But it is worth mentioning that the argument is mostly sound and grounded, even though I have a problem with the essay as a whole. It accurately represents the AI artist side of the debate, and I want to unpack it now. After that, I will conclude about the damn training data and what it contains.
“Are these new tools stealing or borrowing art? The short answer is simple: No.”
You saw that coming. I retain six arguments from that piece, each summarized below with a quote.
T2I tools do not copy. “The first misconception is that these bots are simply copy-pastas. … OpenAI found early versions of their model were capable of ‘image regurgitation’ aka spitting out an exact copy of a learned image. The models did that less than 1% of the time but they wanted to push it to 0% and they found effective ways to mitigate the problem. … They fixed it by removing low quality images and duplicates, pushing image regurgitation to effectively zero. Doesn’t mean it’s impossible but it’s really, really unlikely”
Clickbait overdramatizes. “Calm and nuanced doesn’t sell magazines or generate clicks, but sensational headlines like Engadget’s ‘Is DALLE-2’s Art Borrowed or Stolen?‘ do.”
The web challenges norms such as consent. “There’s a growing fear of AI training on big datasets where they didn’t get the consent of every single image owner in their archive. This kind of thinking is deeply misguided and it reminds me of early internet critics who wanted to force people to get the permission of anyone they linked to. … what a colossal waste of time and creativity!”
Symmetry between artificial and human agents. “Engadget author, Daniel Cooper, writes ‘These systems did not, however, develop an eye for a good picture in a vacuum, and each GAI has to be trained.’ Well people don’t learn in a vacuum either. Don’t people study the artists that came before them? … AI learns just like we do, from mimicry and studying the world.”
Ontological discomfort causes irrelevant criticism. “All this goes back to people’s revulsion to determinism and math at the root of life. We don’t like that people’s style can be boiled down to math.”
TIAPFA (there is a prompt for anything), therefore the responsibility is on the user. “It seems that there are much simpler fixes than padding prompts [like OpenAI does]. People can add whatever gender, ethnicity or whatever else they like to the prompt and get precisely what they want. That’s the beauty of text prompts. Occam’s Razor applies here. Simpler is better. … As usual, it’s not machines that are the problem in the world, it’s people.”
I find this take quite aligned with the position of Emad Mostaque, the founder of Stability.ai. It has libertarian accents that I do not buy, even though I find them widespread among AI artists. I do not buy the third argument in particular, according to which some of those pesky social norms “could kill AI before it really develops into something truly incredible and beneficial, cutting off breakthroughs in science and art and mathematics itself.” The argument is not only absurd (AI could symmetrically develop into something horrible and harmful), but also circular. Indeed, it stems from the assumption that T2I technology will be beneficial to artists. Therefore it does not conclude that AI respects artists: it postulates it. This is just a cheap way to dismiss the whole concern about style absorption. I would not make such an argument in a legal battle.
Jeffries’ argument is entirely contained in this quote: “we have to understand where the idea that DALLE or Midjourney are ripping off artists comes from in the first place.” He gives us a series of reasons why we should not be afraid that T2I tools “are ripping off artists”, most of which are sound. He deconstructs the roots of this moral panic about T2I technology; fair enough. But he does not establish whether or not that technology steals styles from artists. He asserts it with confidence, but he does not make a positive argument. The closest I could find boils down to two things. First, style absorption is not robbery because AI does not copy. I find it a childish argument. And second, art is just maths and maths belong to everyone, live with it:
“It’s really astonishing how well the machine whips up brand new people in seconds and how well it understands the deeper characteristics of these amazing artist’s styles. But let’s be honest, there’s also something unnerving about it too. I understand the anxiety some folks feel about it. There’s something deeply unsettling about math generating an infinite variety of us.”
CONTENT WARNING: here we step into NSFW territory. I will not spare you anything. Porn is very much part of web culture. But most importantly, what you will see is already baked into the model. It is now time to look the beast straight in the eyes, and see what it is made of.
To begin with, the artist styles, the entangled biases, and all the linkages of meanings that make the T2I technology work, exist as features of the knowledge that lives in the training data. Then, during the training process, they transfer to the model. And from there, through the diffusion process, they can be leveraged to generate images. It all starts with the training data.
Here is what I would like to be writing at this point: “some of the data sets available are cleaner than others, and AI companies have made different compromises between performance and quality, which explains why T2I tools exhibit different behaviors when it comes to bias.” But everyone basically uses the same data set, because it is the biggest, and because building T2I tools is essentially a race for model performance. That data set is called Laion, a portmanteau of the predator and “AI”, which I find painfully appropriate.
Disco Diffusion was trained on Laion. Midjourney was trained on Laion. Stable Diffusion was trained on Laion, and in fact “Stability AI funded the creation of LAION 5B” (TechCrunch). DALL-E is the exception: OpenAI trained it on its own proprietary data. Laion is the foundation of virtually all the publicly available T2I tools.
The Laion data set consists of image-text pairs scraped from the web. Crawler bots have been deployed to find images on the web and find text that describes them. That text might be in the HTML description of the image, or as a caption, or next to it in the page, or even in the image itself, when it features text. A variety of techniques have been used to extract that text (more on that here). The approach was to harvest broadly and not curate anything.
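To make that concrete, here is a minimal sketch of how image-text pairs can be pulled out of a crawled HTML page. This is my reconstruction of the principle, not Laion’s actual pipeline:

```python
# A minimal sketch of extracting image-text pairs from a crawled HTML
# page. My reconstruction of the principle, not Laion's code.
from bs4 import BeautifulSoup

def extract_pairs(html: str) -> list[tuple[str, str]]:
    """Return (image URL, candidate caption) pairs found in one page."""
    soup = BeautifulSoup(html, "html.parser")
    pairs = []
    for img in soup.find_all("img"):
        src = img.get("src")
        # The alt attribute is the most common source of captions.
        text = img.get("alt") or img.get("title")
        if not text:
            # Fall back to the <figcaption> of an enclosing <figure>.
            figure = img.find_parent("figure")
            caption = figure.find("figcaption") if figure else None
            text = caption.get_text(strip=True) if caption else None
        if src and text:
            pairs.append((src, text))
    return pairs
```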
There are two main Laion data sets. The older and smaller one is LAION-400M. It features 400 million image-text pairs, which have been “extracted from the Common Crawl webdata dump and are from random web pages crawled between 2014 and 2021.” The more recent and bigger one is LAION-5B, featuring 5.85 billion image-text pairs. It was also extracted from the Common Crawl data, more extensively I suppose. “Unsuitable” pairs are removed: text too small, image too big, duplicates… (more info there). And on top of that, a bunch of useful things have been computed as part of the data.
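In spirit, the removal of “unsuitable” pairs amounts to something like the following sketch. The thresholds are invented for the illustration; they are not Laion’s actual values:

```python
# Illustrative filter in the spirit of Laion's "unsuitable pair" removal.
# The thresholds are invented for the example, not Laion's actual values.
def keep_pair(text: str, width: int, height: int,
              image_hash: str, seen_hashes: set) -> bool:
    if len(text) < 5:                  # text too small to be a caption
        return False
    if width * height > 16_000_000:    # image too big
        return False
    if image_hash in seen_hashes:      # duplicate
        return False
    seen_hashes.add(image_hash)
    return True
```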
The image-text pairs come from web pages crawled by the Common Crawl project. How is this set delineated, and who chose what gets in or not? As crazy as it sounds, I could not obtain this information, as if the question itself was pointless. The Wikipedia page says nothing about it. The Common Crawl FAQ does not feature my question. The data release announcements do not say a word about it. Surprisingly, Google features the question, but unfortunately it answers on a technical level, not on a curation level (see below).
From the very start, the most basic information necessary to assess the content of the data is missing. The whole industry has agreed not to look in that direction, even though academics have been demanding that information specifically. Here we are again, reclaiming situated knowledges. Let me just copy-paste what I wrote in a previous blog post: “there is always a method. We must not hide it, because we must account for its flaws. Data is never raw, it is always obtained, and it comes with its own biases.” Or to use Donna Haraway’s own words, this “unregulated gluttony” that puts into practice the myth of “seeing everything from nowhere” (which she calls “the god trick”) “fucks the world to make techno-monsters” (and she wrote that in 1988). If we had a positive description of what was crawled, we could better understand how the models were shaped. But we do not have that.
I will show you why it matters, and this will lead us down a peculiar rabbit hole, so bear with me. It all starts with an amazing tool offered by the Laion team: a search engine into their data set. Try it! If you do not change any settings and just type an expression, it will retrieve image-text pairs that match it, according to the CLIP embedding (I will explain shortly). If you type something that exists in our cultural space, you have a good chance of finding it (ex: “Shrek”). If you type something that does not exist (ex: “A blue Shrek”), you will not find it, because the image is absent, but you will find images as close as possible to your target (see below). The search engine differs from the T2I generators in that it does not invent images, but it still shares an intelligent layer: the CLIP embedding. In short, a machine learning model of the same kind as those in T2I tools (a CLIP model) has been used to place the image-text pairs in a latent space. Your query is also matched to that latent space, and that is how the search engine finds the results: the images it gives you are those that are close, in the latent space, to your query. This is why the terms of your query are not necessarily featured in the captions.
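For the curious, here is a minimal sketch of that retrieval logic, using OpenAI’s open-sourced CLIP model. The search engine itself works on precomputed embeddings and an index, but the principle is the same; the image files below are placeholders:

```python
# Sketch of CLIP-based retrieval: rank images by the cosine similarity
# between their embeddings and the embedding of a text query.
import torch
import clip  # OpenAI's open-sourced CLIP package
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def embed_text(query: str) -> torch.Tensor:
    with torch.no_grad():
        v = model.encode_text(clip.tokenize([query]).to(device))
    return v / v.norm(dim=-1, keepdim=True)  # normalize for cosine sim

def embed_image(path: str) -> torch.Tensor:
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        v = model.encode_image(image)
    return v / v.norm(dim=-1, keepdim=True)

# "A blue Shrek" retrieves whatever lies closest in the latent space,
# whether or not the captions contain those words. Placeholder files:
query = embed_text("A blue Shrek")
corpus = ["image1.jpg", "image2.jpg"]
scores = {path: (embed_image(path) @ query.T).item() for path in corpus}
print(sorted(corpus, key=scores.get, reverse=True))
```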
Searching LAION-5B for “Shrek”, default settings. August 2022.
Searching LAION-5B for “A blue Shrek”, default settings. August 2022.
The CLIP model is bundled with the image-text pairs in the data set. You can even get the KNN graph: for each image-text pair, which are its closest neighbors. This is really important, because it allows you to look into the data set the same way T2I technology does: through a CLIP embedding. You can get a feeling for how the model “thinks”. It is much easier here than through the diffusion model, in the T2I tool itself. And that is exactly what we are going to do now.
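At Laion’s scale, the nearest neighbors are found with an index rather than by brute force. A sketch of how one might build such a KNN graph from the embeddings, using the Faiss library (random vectors stand in for the real CLIP embeddings):

```python
# Sketch: build a k-nearest-neighbor graph over normalized CLIP
# embeddings with Faiss. Random vectors stand in for the real ones.
import numpy as np
import faiss

def knn_graph(embeddings: np.ndarray, k: int = 10):
    """embeddings: (n, d) float32, L2-normalized, so that the inner
    product equals the cosine similarity."""
    n, d = embeddings.shape
    index = faiss.IndexFlatIP(d)     # exact inner-product search
    index.add(embeddings)
    sims, neighbors = index.search(embeddings, k + 1)  # +1: self matches
    return neighbors[:, 1:], sims[:, 1:]               # drop self

x = np.random.rand(1000, 512).astype("float32")
x /= np.linalg.norm(x, axis=1, keepdims=True)
neighbors, sims = knn_graph(x, k=5)
print(neighbors[0])  # the 5 closest pairs to pair #0
```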
My case will consist of a seemingly innocent query: “big”. What do you think the Laion search engine will return? Here is, for reference, what we see in Google Images: The Notorious B.I.G. (the artist), the word “big”, the movie Big, big things (a pumpkin…), a Big Mac (the burger), Big Ben (in London)… You get it.
Results for “big” in Google images (August 2022).
The Laion search engine’s results have only one thing in common with Google’s: the word “big”. The rest consists of teddy bears, balloons, strawberries, and clothes. What makes those things “big”? Can you explain the relation? Or do you think there is none? I have a hypothesis, but to understand it we must pay attention to the settings.
Searching LAION-5B for “big“, default settings. August 2022.
By default, the search engine checks three settings that profile the data set in the most charitable way. The “Safe mode” hides image-text pairs that a dedicated model has flagged as, basically, porn. Uncheck it. Similarly, “Remove violence” hides violent content: uncheck it too. And finally, “Enable aesthetic scoring” puts the nicest images at the top of the results page. Uncheck it too. The aesthetic scores come from a sample of images manually rated by people according to how nice they look, then generalized to the whole corpus by a model.
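As I understand it, that scoring follows the classic supervised recipe: fit a small regression model on the human ratings, with CLIP embeddings as input, then apply it to everything. A minimal sketch of the principle, with stand-in data (Laion’s actual predictor is theirs, not this):

```python
# Sketch of the aesthetic-scoring principle: learn to predict human
# ratings from CLIP embeddings, then generalize to the whole corpus.
# Stand-in data throughout; the actual predictor is Laion's.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
sample_embeddings = rng.normal(size=(500, 512))  # embeddings of the rated sample
human_ratings = rng.uniform(1, 10, size=500)     # how nice people found them

model = Ridge().fit(sample_embeddings, human_ratings)

# The fitted model then scores the millions of unrated images.
corpus_embeddings = rng.normal(size=(10_000, 512))
aesthetic_scores = model.predict(corpus_embeddings)
```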
Uncheck these three options to see what really is in the LAION-5B data set. The “big” query will give you this: mostly white women showing their boobs, and if you scroll, the trend intensifies.
Searching LAION-5B for “big”. “Safe mode”, “Remove violence” and “Enable aesthetic scoring” unchecked. August 2022.
This is the real face of our culture as performed on the web, and that is why situatedness matters. The web is full of porn and violence. Who would deny that we are interested in sex and violence? This is not even specific to Western culture, although the skin tone of those women is. The web tells us that if there had to be only one thing that is big, that would be boobs. Sex is so prevalent on the web that it dominates even an innocent query like “big”.
I did not discover that query by myself; I obtained it from Abeba Birhane and her co-authors’ work on assessing the LAION-400M dataset’s bias (as we have seen, it also applies to LAION-5B). She unpacks the paper in the Twitter thread below, which is easier to parse. A digest from her thread follows.
“Images: large scale vision datasets are plagued with problems including curation biases, inclusion of problematic content in the images, as well as contributing to the gradual erosion of privacy. …
The CommonCrawl: among other things, contains ~17.78% hate speech content. …
The LAION-400M dataset emerges from this landscape containing hundreds of millions of Image-caption pairs parsed from the Common-Crawl dataset and filtered using a previously Common-Crawl trained AI model; CLIP. …
Even the weakest link to womanhood or some aspect of what is traditionally conceived as feminine returned pornographic imagery. For example, when searching for descriptive adjectives such as “big” and “small”, it returned many porn images. …
The specific semantic search engine version meant to fetch images from LAION-400M not only amplified hyper-sexualized & misogynist representations of women, but also presented results that were reminiscent of Anglo-Euro-centric, & potentially, White-supremacist ideologies. …
The CLIP-paper authors themselves outlined that images of ’Black’ people had an approximately 14% chance of being mis-categorized as [‘animal’, ‘gorilla’, ‘chimpanzee’, ‘orangutan’, ‘thief’, ‘criminal’ and ‘suspicious person’] in their FairFace dataset experiment. …
Finally, we acknowledge the grassroots aspect of the endeavor and commend the LAION-400M creators for providing a window into this world and encourage them to keep the dataset accessible to researchers. We don’t believe retraction of LAION-400M is a viable move.
With this in mind, let’s prompt “big” into Disco Diffusion (trained on Laion). What do we see? The images are deformed, but I do see (clothed) boobs, asses, penises, and vaginas. I did not cherry-pick those results; they are just the first ones I generated. We understand why porn is regurgitated, because we have seen what Laion contains, but out of context, I think this result would be quite surprising.
“big” prompted into Disco Diffusion 5.6.
What about OpenAI’s DALL-E? Here is what I obtained: two pictures of a giraffe, and two pictures with no connection to the meaning of “big” whatsoever. Giraffes are tall, not big. All of this smells a lot like prompt interception to me.
“big” prompted into DALL-E 2 (August 2022).
Can we agree that neither Disco Diffusion nor DALL-E has a good understanding of what “big” means? Within the model, “big” is associated with porn, and if you try to remove the porn from the big, like OpenAI does, you are left with meaningless associations like winter and ants. The same happened when we looked for “big” in Laion with the default settings: since sexual content was filtered out, there was not enough meaning around “big” to counterbalance the ranking by aesthetic score, and we just obtained what people find nice: teddy bears, balloons, and strawberries. Unless CLIP retained a similarity between balloons and boobs, and, why not, between strawberries and vaginas. It is genuinely hard to rule out that possibility.
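This is at least crudely testable: CLIP is open source, so one can compare the text embeddings of those words directly. A rough probe, not a proof, since the associations also live on the image side:

```python
# Crude probe: cosine similarity between CLIP text embeddings.
# Suggestive at best; the associations also live on the image side.
import torch
import clip

model, _ = clip.load("ViT-B/32", device="cpu")

def text_sim(a: str, b: str) -> float:
    with torch.no_grad():
        v = model.encode_text(clip.tokenize([a, b]))
    v = v / v.norm(dim=-1, keepdim=True)
    return (v[0] @ v[1]).item()

print(text_sim("balloons", "breasts"))
print(text_sim("strawberries", "vagina"))
```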
What happens with bias and harmful content happens with everything else in Laion. Porn and violence attracted attention because they cause harm, and academics took the time to investigate. AI artists had another agenda, but in many ways they discovered the same thing. Take for instance the case of artist Anne Geddes (see below). The rendered images feature babies, although the test prompts do not ask for them. This is because she specializes in pictures of babies, as we can check in the Laion search engine (see further below).
AI artist study for Anne Geddes.
Searching LAION-5B for “Anne Geddes”. “Safe mode”, “Remove violence” and “Enable aesthetic scoring” unchecked. August 2022.
In this case it is not the style that the model has absorbed, it is the subject. I make a difference between what is represented and how it is represented. For me, “an old man by Anne Geddes” refers to one of her photos but with an old man instead of the baby. But the model does not make such a difference; that is why it draws a baby when you ask for a house. It was already the case with Simon Stålenhag, as his style was as much about how he paints as what he paints. The artist studies are full of these effects: Apollonia Saintclair gives you butt-naked women, Audrey Kawasaki hair and face elements, Coles Phillips a woman (always the same), Daniel Ridgway Knight villagers, Giuseppe Arcimboldo fruits and vegetables, George Grosz troll-like figures, Hans Bellmer fat flesh, and Kaethe Butcher diaphanous naked silhouettes. Disco Diffusion mimics the pictorial style as much as the typical subject of the artist, even when you specify another subject. It blends and merges the two subjects, yours and the artist’s, together.
What Laion knows about these artists is the part of their work that is available on the web with their name attached. Some of these artists are famous, and their work is spread in many places. But for most contemporary artists, it is different: their portfolio has been absorbed. This is why “trending on Artstation” works so well as a modifier. ArtStation is “the leading showcase platform for games, film, media & entertainment artists,” according to their LinkedIn profile. It is a place where amateurs, semi-pros and professionals share their paintings. The purpose of the website is to disseminate their portfolios. ArtStation is basically a big database of well-described images, because that is what SEO (search engine optimization) demands. This is a perfect data trove for Common Crawl, and from there, Laion. Your online portfolio gets you into Laion.
You can basically go on ArtStation, click on a picture at random, get the artist’s name, and put it in Laion to see what you get. I just did that, landing on a concept artist named “Ismail Inceoglu”, and indeed, Laion knows him. And not only does it find images with his name attached, it also finds images without:
Ismail Inceoglu’s portfolio on ArtStation (I picked that artist at random).
Searching for “Ismail Inceoglu” on Laion (August 2022).
And it is not just ArtStation. That platform became popular because it matches what the AI artists want to obtain. But there are other similar platforms that have been harvested in Common Crawl and thus ingested by Laion. They might not be as useful to AI artists, but their content still contributed to shaping the CLIP latent space and the knowledge in Laion. All those porn images have to come from somewhere, right?
I first sourced a list of the top image repositories for artists, and I tried them all in Disco Diffusion: DeviantArt, Behance, Dribbble, CGSociety, ArtStation, Tumblr, Pinterest, Drawcrowd, Pixiv, Ello.co, Twitch, Concept Art World, Our Art Corner, PaigeeWorld, Newgrounds, and Virink. As you can see below, Disco Diffusion (in fact, Laion) has learned the “style” of each of these platforms too. Colorful mockups on Behance and Dribbble, 3D renderings on CGSociety, but also Tumblr regurgitating soft porn, and Twitch and Virink screenshots. In some sense, each platform delineates a specific space for image generation. Some can be considered safe spaces where sex and violence are virtually absent, like Behance and ArtStation. But the learned associations are still lurking in the model, and “big” keeps relating to boobs despite those safe spaces. It is just that other influences, such as “trending on Artstation” or “by Simon Stålenhag”, dominate the diffusion process and ensure that the generated image lands in an acceptable place. The slope to porn is still ingrained in the model; we just found stronger influences to overcome it. TIAPFA; but as we have seen, that is not enough.
“Trending on X” in Disco Diffusion 5.6, for each platform in turn: DeviantArt, Behance, Dribbble, CGSociety, ArtStation, Tumblr, Pinterest, Drawcrowd, Pixiv, Ello.co, Twitch, Concept Art World, Our Art Corner, PaigeeWorld, Newgrounds, and Virink.
What about the contrary: unsafe spaces harvested by Common Crawl and also included in Laion? I sourced a list of porn subreddits (just the top 10), and I checked what Disco Diffusion knows about them: r/GoneWild, r/NSFW, r/NSFW_GIF, r/RealGirls, r/holdthemoan, r/BustyPetite, r/cumsluts, r/LegalTeens, r/PetiteGoneWild, and r/sex. We basically get (deformed) porn, in different flavors, except for “holdthemoan” and “LegalTeens” for some reason. Not only Laion but also Disco Diffusion (with its default models) very much knows all of that.
“Trending on r/X” in Disco Diffusion 5.6, for each subreddit in turn: r/GoneWild, r/NSFW, r/NSFW_GIF, r/RealGirls, r/holdthemoan, r/BustyPetite, r/cumsluts, r/LegalTeens, r/PetiteGoneWild, and r/sex.
TIAPFA. The same way we can tinker with prompts to get less harmful content, we can tinker with them for more. It should be clear at this point that prompt modifiers like “ArtStation” or artist names are not the “magic” that Ted Underwood reports in the Vox video. The romanticism around prompt engineering should have started to wear out by now. Behind the magic, we find internet culture, with its beauty but also a lot, a whole lot, of toxic content.
That sounds crazy, but we basically do not know what Laion contains. It is so big that we have close to zero assessment of what it looks like from a cultural perspective. It is entirely possible that huge problems lurk in it, and that we just have not discovered them yet. The industry is just happy with its convenient ignorance, and everyone’s strategy boils down to damage control. At the very least, we should be serious and cautious about exploring those latent spaces, and about limiting their industrial applications. Laion states: “we … do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress” (emphasis theirs). Nice, but this is a fig leaf. The T2I technology based on Laion is already at a stage where AI companies compete for the image generation market.
More than the pornographic and violent images, which can be more or less filtered out, I am concerned with the harmful content in the captions themselves. It is the strong association of “big” with boobs that is harmful, not the boob pics alone. The toxicity lies in what a woman is for the model. The association with certain words obviously comes from the caption (the text extracted to label the image). Who gets to write those captions? You may think that the data set is so big that there is no meaningful answer beyond “many people”. Yet there is de facto a situated answer, and it is basically “internet culture”, because not everyone has the same interest in captioning images. Here is an example. I noticed a structured pattern in certain captions. Unfortunately, the “search by text” feature is broken at the moment, so I cannot take a simple screenshot. But I compiled a few of those below. Pay attention to the text.
The text of these Laion entries has a pattern.
Those captions consist of a rating, a score, a series of tags, and a user. The rating does not have to be “explicit”; it can also be “questionable” or “safe”, as you can see below.
I searched Google for a reduced version of those captions to try to find where they come from (1, 2, 3, 4, 5, 6, 7, 8). I did not find the same images, but I found two websites: Yande.re and Konachan. Those are two image boards dedicated to anime and manga, and they look so similar to me that they might have the same engine behind the scenes, although they seem to contain different things. These communities tag images obsessively (example below). Common Crawl and Laion give a disproportionate influence to those communities: because they publish so many images, so precisely tagged, they weigh a lot in the associations ingrained in the model.
Each image in Yande.re and Konachan is richly tagged.
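The pattern is regular enough to be machine-parsable, which is presumably why it survived the caption extraction so well. A hypothetical parser for captions shaped like the examples above (the exact field layout on those boards may differ):

```python
# Hypothetical parser for booru-style captions of the form
# "Rating: Safe Score: 120 Tags: long_hair dress User: anon".
# The exact field layout on those boards may differ.
import re

PATTERN = re.compile(
    r"Rating:\s*(?P<rating>\w+)\s+"
    r"Score:\s*(?P<score>\d+)\s+"
    r"Tags:\s*(?P<tags>.+?)\s+"
    r"User:\s*(?P<user>\S+)",
    re.IGNORECASE,
)

def parse_caption(caption: str) -> dict | None:
    m = PATTERN.search(caption)
    if not m:
        return None
    return {
        "rating": m.group("rating").lower(),  # safe / questionable / explicit
        "score": int(m.group("score")),
        "tags": m.group("tags").split(),
        "user": m.group("user"),
    }

print(parse_caption("Rating: Safe Score: 120 Tags: long_hair dress User: anon"))
```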
This is not about NSFW content. Following the naïve policies we have seen deployed by OpenAI, we could just filter out the “explicit” content, and maybe the “questionable” one. It’s even easier here, because those communities have done the tagging: you just keep what they label “safe”. But at the same time you let them define what those categories mean, and you also let them define the descriptions of the “safe” images. Do we want those people’s way of describing women to be overrepresented in our models?
The T2I tools based on Laion are as poorly behaved as kids educated exclusively by internet culture, its darkest places included. Sure, we can now use subsets of Laion that supposedly contain no porn and violence. It does not work great yet, but it will improve. Even so, the remaining text-image associations keep being shaped by internet culture and its toxicity, because the toxicity does not only lie in the images. AI is not trained on data fallen from the sky, it is not trained on the knowledge of mankind; it is trained on a fucked-up dataset crawled half-randomly from the web over a decade, without any form of validation, without even the most basic documentation. It’s just that everyone in the industry has agreed not to ask the question. No questions, no problems. But no problems, no solutions.
The absorption of artist styles is just one part of a generalized practice, in the machine learning community, of letting whatever happens in the digital public space shape the models. One side of it is the morality of harvesting entire portfolios. Another is the cultural impact of reinforcing the influence of those portfolios in our cultural space, through image generation. Yet another is the consequences of those effects on users unaware of the problem. There is a prompt for anything, but only for those who have the appropriate literacy. And this is just the most romantic corner of that technology: AI art. The same applies to porn and violence, and it is a whole lot less fun.
I believe that the most responsible thing to do is exactly what Google did with Imagen. Bravo Google, I know how frustrating it must have been for those who have worked hard on this. Here is the relevant part of the statement:
“There are several ethical challenges facing text-to-image research broadly. We offer a more detailed exploration of these challenges in our paper and offer a summarized version here. First, downstream applications of text-to-image models are varied and may impact society in complex ways. The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access. Second, the data requirements of text-to-image models have led researchers to rely heavily on large, mostly uncurated, web-scraped datasets. While this approach has enabled rapid algorithmic advances in recent years, datasets of this nature often reflect social stereotypes, oppressive viewpoints, and derogatory, or otherwise harmful, associations to marginalized identity groups. While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.”
And I think that the second most responsible thing to do is to allow absolute transparency to the academic researchers and journalists willing to investigate T2I systems in depth. And why not help them, for instance by funding them.
You will find below a gallery of A.I.-generated images depicting social networks. They were generated by Disco Diffusion from a base prompt from my last post. More precisely, the admin of the Discord channel Fever Dreams added the prompt to a bot of his making, and it generated those images. The bot changes the artists of my initial prompt, which gives more varied results. It also uses different image formats.
“A beautiful painting of a vintage network map with communities seen from above by [4 artists at random]”.
The prompt
My intent was not to visualize a social network. But it turns out that Disco Diffusion interprets “community” sometimes in the sense of “social media”, and sometimes in the sense of “people”. In the end, the tension you can see in those images fits the ambiguity of the term “social network” itself. So I am just sharing them in case they can be useful to someone. The exact prompt is the file’s name. The license is CC0 (public domain).
Images of social networks. Generated with Disco Diffusion. License: CC0 (public domain).
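I do not know how the admin’s bot is implemented, but the idea is simple enough to sketch. The artist list below is made up for the example; the template is the base prompt from my post:

```python
# Sketch of what such a bot presumably does: fill the base prompt with
# artists drawn at random. The artist list is made up for the example.
import random

TEMPLATE = ("A beautiful painting of a vintage network map "
            "with communities seen from above by {artists}")

ARTISTS = ["Wassily Kandinsky", "Paul Klee", "Max Ernst",
           "M. C. Escher", "Hilma af Klint", "Piet Mondrian"]

def make_prompt() -> str:
    picked = random.sample(ARTISTS, 4)  # 4 artists at random
    return TEMPLATE.format(artists=", ".join(picked))

print(make_prompt())
```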
In short, Disco Diffusion is a Google Colab notebook that you can reuse for free to generate images from text. You can see it as artificial intelligence, as machine learning, as an assemblage of algorithmic techniques. You can also see it as a notebook additively modified by a community, and shaped by their practices. Good entry points into this community are the Discord and the subreddit.
At the heart of the notebook is the prompt: a sentence that Disco Diffusion uses to generate an image. It is the main way to dialogue with the algorithm. To negotiate. Because Disco Diffusion is thick: sometimes surprisingly savvy, sometimes bafflingly stupid, and you often hesitate about which one it is. So one cannot anticipate the result of a prompt. We need to build specific knowledge about it, by trying it.
Prompt engineering is just query design for Disco Diffusion: the way you learn to obtain a certain result by trial-and-error iterations. That is how the members of this community build knowledge about the algorithm. Not by looking into how it works, as this does not help much, but by assessing what it produces, from the outside. They build post-hoc interpretations of Disco Diffusion. See for instance so-called “AI art studies”: a library of how certain terms or artist names shape the result. But there is more to it, because those bits of text interact in unexpected ways when you mix them into a prompt.
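For context, a prompt in the notebook is not a plain string. As far as I can tell from the settings, Disco Diffusion takes a dictionary of weighted prompt fragments, which is exactly where the mixing happens:

```python
# How prompts look in the Disco Diffusion notebook's settings, as far
# as I can tell: a dict mapping frame numbers to lists of prompt
# fragments, each with an optional ":weight" suffix. Negative weights
# push the image away from a fragment. The fragments are illustrative.
text_prompts = {
    0: [
        "A beautiful painting of a vintage network map, trending on Artstation:2",
        "blurry, out of focus:-1",
    ],
}
```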
I write this post to show what happens under the hood, to expose the practice. You will easily find impressive pictures, quite often with the prompts they come from, but you will rarely see the process through which those prompts are designed. It is harder than it looks. The choice of each word has surprising implications. You will easily find success stories where the A.I. works like black magic, but in practice that is far from the case. In fact, prompt engineering is a lot about turning the weaknesses of the A.I. into something productive. So by exposing the process of prompt engineering, I also hope to make visible the weird ways Disco Diffusion fails, or rather misbehaves. This misbehavior is, I think, why many artists love it so much (compared to, for example, DALL-E).
I start with a simple, open question: could I repurpose Disco Diffusion for rendering network maps as maps? As a way to style, to dress up a network map rendered beforehand by other means. It is a vague question, and it has to be, because this is pretty exploratory. I will share my train of thought and comment on some interesting things along the way. I will also not cherry-pick: every time, I will render only 4 images and show you them all. Hopefully, this will lead me somewhere. As I am writing these lines, I have no idea yet.
Vintage map, trending on Artstation
In case it is not clear, the prompt is the title above. It might sound weird if you’re new to Disco Diffusion, but a classic way to (try to) ensure consistent and aesthetically pleasing results is to add “trending on Artstation” to your prompt. ArtStation is a platform for artist portfolios that was incorporated into the LAION dataset used for training the algorithm. I also added “vintage” for flavor.
Problems already appear. First of all, those are not maps as seen from above, but DD (Disco Diffusion) added a tilted angle and a narrow field of view. Some of the pictures also have some elevation. And the third picture is not really a map. Or a weird, specific one. Where does this come from? Too much is going on. Let me start again, from a more minimal prompt, and we will proceed step by step.
A map
This is flat, which fixes my issues. But it comes with new problems. It’s not pleasing, there is not much detail, and the maps look like children’s drawings. Big, black roads. This is a pretty specific take on the concept of map. I will now try to see if DD gives me different maps with different simple terms. I will first explore this, and then try to build more elaborate prompts.
Cartography
More varied and colorful, with roads now in white or other colors. Also aesthetically aggressive, which I assume I can fix by adding stuff to the prompt. More problematic, the field of view and elevation are back. Takeaway: those do not just come from “trending on Artstation”, as I initially assumed.
Topographic map
I expected contour lines, and they are somehow present, but reinterpreted as stacked folds or ravines. Somehow, DD did not “understand” the specificity of the pattern, and tries to render the contour lines as if they were a geographic feature. Of course, those concepts do not exist as such within DD. Let’s say that “map” is clashing here with “topographic”.
Ground plan
I tried yet another possible synonym of “map”. This prompt captured a different kind of map, and it’s pretty detailed too. Nice finding. Remarkable features: white walls, green spaces.
Building map
Close to “ground plan” but with problems (for what I want to do). First, blurriness, 3D and field of view. Second, as we see most clearly in image #4, DD understands the prompt both as “a map of a building” and as “a building drawn as a map”. DD has (known) issues with compositing, which I see as more of a boon than a curse, to be honest, but it does not help us here.
This prompt makes me think: let’s try to nudge DD into drawing maps of different scales, and see if it gives us different kinds of maps. Even for me, there is not just one kind of map. Let me just replace “building” with “city” and other things.
City map
Captures yet another flavor of map. Pretty varied, with writing. Problem: the resolution is degraded. Note that this has been produced by DD. Yes, DD tried to reproduce images in low resolution. These algorithms are just stochastic parrots, after all. They often regurgitate things that are undesirable in the most obvious ways (to us, not to them). Also note that it tried to write “city” (image #4). This is in part a problem we have already seen: “city map” is also understood as “the word city rendered as a map”.
World map
This is indeed a very different kind of map, yay! It comes with a pretty weird look though, between a child’s drawing and play dough. The roads have disappeared, oceans have appeared. We have writing, and it feels a bit like the map is contained in a circle (except image #2). It is remarkably consistent.
I wonder two things. Can it regurgitate the actual shape of the world? And if so, which framing and projection would be picked? This side-tracks me but I add, to my stack of things to try, actual geographical entities, and projections.
The oceans are not super consistent though, and I wonder how good DD would be at managing the blue areas. I keep this point for later, as I want to try more “XXX map” prompts.
Network map
You saw that one coming, didn’t you? DD seems to understand the prompt as the map of a network, as in an underground or railroad network. It tends to be photorealistic. Also worth noting that images #2 and #3 have geographic map elements.
Sky map
This result comes as a total surprise to me, as I fully expected a night sky map, a map of the constellations. This is totally how Google Images understands “sky map”, for instance. But no, DD understands “the sky rendered as a map”, where sky seems to mean clouds over a blue expanse. Genius? Dumb? Both? You decide. Certainly usable, though.
Treasure map
I expected a big red X mark and a dashed path to it. We’re not quite there yet, but the maps produced seem to be in that spirit. We have a mark in image #3 and a red splash in #4.
Dungeon map
I expected a black-and-white plan of rooms and corridors, and this is pretty much what DD gave us. Nice! And usable. This simple prompt can certainly be repurposed to generate actual role-playing maps. Let me push a bit further in that direction.
RPG map
Again, a surprise, as I expected it to be pretty close to “dungeon map”, but no. It is much more colorful and varied. I suspect that “RPG” is understood as a videogame thing, because those images look like Zelda levels to me. Case in point: the presence of tiles (all images) and geographic features that look like they have been placed onto a grid (#2, #4).
Videogame map
This is too much “videogame rendered as a map”. Which is a shame, because many non-RPG games have fantastic maps, notably FPS and RTS games. But I am not pushing further in that direction (which is certainly doable, starting with stuff like “FPS map” and then trying actual game titles).
Now, I want to explore a bit beyond the “XXX map” template.
Floorplan
I thought of that one while trying “dungeon map”, and for once it gives the results I expected: similar in style to “dungeon map” but more contemporary, more modern. Image #1 is blurry, but the others have fine details. The realistic style that creeps in can be managed by engineering the prompt, I assume. Nice prompt overall.
Map projection
Following an idea mentioned before, what if I just ask for an unnamed projection? My rationale was that the expression “map projection” is mentioned in specific contexts, and I expected to see the most abstract features of world maps, such as meridians and parallels, and the cuts that some projections have (e.g. the Waterman butterfly). Instead, those maps look more like world maps, to me, than those from the actual “world map” prompt. This is a classic lesson of query design: the best way to track something is rarely its name.
All these images look torn apart and reassembled. Open question: is it related to the fact that some projections have cuts? This result is mysterious to me.
I will now try to fix the problem of “XXX map” being interpreted as “XXX rendered as a map” by reformulating it into “map of XXX“.
Map of a building
These results seem noticeably better than those from “building map”. They now look more like maps, and they are pretty varied. The fix seems to work :)
Map of a city
We have lost some of the features of “city map”, notably the writing. But like before, it fixed the ambiguity. We still have the streets and rivers and parks. However, new problems appeared, like that pesky field of view. And the streets are so white!
About water in maps: we have seen that for DD, the world map is not bathing in oceans. Can it draw an island, though?
Map of an island
The answer is yes, and the results are pretty detailed. It seems to hesitate between different rendering styles (e.g. photographed or drawn) and mixes them at times (images #1 and #2). Pretty good, though. This pattern seems promising: can it also represent other geographic features?
Map of mountains
This prompt works, but the mountains still seem seen from the side, at least partially. At the same time, it retained the way mountains are rendered from above in classic cartography, with hill shading etc. It compromised between the two perspectives, as if the cartographer were Picasso. Despite the prompt template, DD still draws “mountains rendered as a map of mountains”.
Map of a desert
For some reason, I did not expect water and vegetation in a desert map. But now that I think of it, DD might be the most savvy here: in a desert, what you want on your map is the closest oasis. The results are very consistent, although the photo style creeps in like it did for islands. Well done, nevertheless! Can we move to even bigger things?
Map of a continent
Many details, despite some blurriness and 3D elevation creeping in. Like “world map” and unlike “map of an island”, the land masses are not bathing in water (it does not fill the map up to the sides). It nevertheless retained something of the scale of a continent in the depiction of the geographical features. All maps are colored in naturalistic ways, sometimes with abstract colors added. I read them as continent maps featuring countries.
It seems hard for DD to bathe land masses in water. What if I ask for just the ocean?
Map of the ocean
Better, but still difficult. DD seems lost here, and features unrelated to maps start to creep in: realistic waves and marine creatures (I see an octopus in #1). Lost, DD confuses parts of the prompt. It is one of the ways its resistance shows up. That being said, I know that ocean maps exist and that some of them are in the dataset it was trained on. I just did not succeed in connecting to them (assuming that they found a specific place in its head, i.e. a given location in the feature space). Let me try another prompt.
Oceanographic map
We captured a different map flavor, but we are not there yet either: marine life features appear everywhere. That would be a problem to fix (or an opportunity to explore). I’m trying again.
Topographic map of the oceans
It’s a fail: the problems of the prompts above (wave and animal features) combined with the problems of “topographic map” (contour lines rendered as geographic features) produce this mess. In a way that is typical of DD’s brilliant idiocy, these now make sense as patterns of sand on the bottom of the sea.
This is more understandable if you know how the diffusion process works. In short, the features are drawn from coarse to detailed, and the later steps ignore what the earlier steps “had in mind” (i.e., they had nothing in mind, they were just parroting their thing). So contour lines are drawn early on, but later on, the algorithm, not knowing what to do with them, gives them another meaning.
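A toy sketch of that coarse-to-fine logic, not DD’s actual code: the sampler starts from pure noise and refines step by step, with no memory of what earlier steps laid down beyond the pixels themselves:

```python
# Toy sketch of the coarse-to-fine logic of diffusion sampling; not
# DD's actual code. The sampler starts from noise and refines step by
# step, with no memory of "intent": large shapes laid down early (like
# contour lines) get reinterpreted by the later, finer-grained steps.
import torch

def sample(denoiser, steps: int = 50, size=(3, 64, 64)) -> torch.Tensor:
    x = torch.randn(1, *size)  # start from pure noise
    for t in reversed(range(steps)):
        noise_level = (t + 1) / steps
        predicted_noise = denoiser(x, noise_level)
        x = x - noise_level * predicted_noise  # toy update rule
        # High noise level: the step fixes the global layout.
        # Low noise level: the step only adds local detail.
    return x

# Stand-in denoiser so the sketch runs; the real one is a trained
# U-Net, guided by CLIP in Disco Diffusion's case.
dummy = lambda x, t: torch.randn_like(x) * t
image = sample(dummy)
```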
Takeaway: DD is not great at dealing with seas and oceans in maps. Lakes and rivers seem fine, though.
Moving on to the next question on my stack: can it recognize known geographical shapes, like coastlines and countries?
Map of France
This is not France, not even remotely. Image #2 has a vague air of it, but I am probably hallucinating. I find the pathetic attempt at shipping the French flag as a clue about what this is comical. That being said, it captures features of the map of a country. I want another attempt!
Cartography of France
That is worse :( Maybe France is hard to characterize?
Map of Denmark
Image #3 is remarkably close to the Netherlands; too bad it’s the wrong country. Jokes aside, DD retained that Denmark is basically surrounded by water, but that is all.
Cartography of Denmark
Worse, and once again we see DD becoming erratic. Or is all this red another failed attempt at drawing a flag? OK, DD does not give us country borders. Continents, maybe?
Map of Africa
I wonder if it is not a bit better. The map style seems different to me (warmer colors?). Difficult to assess, but DD is known to be biased in many ways, including the most obvious ones, like being centered on the Western world. I write this because the results for “map” match “map of France” and “map of Denmark” but much less “map of Africa”.
Cartography of Africa
Also a different style, but now image #1 has a shape close to Africa! Coincidence? Possibly not. I now think that it might be possible to improve the retrieval of some of the best known geographical features, but this requires more tweaking than just the prompt, and I will not try this here.
Short parenthesis about two things at once: DD’s biases, and its poor ability to draw faces. What do you think we get if we ask for a portrait of a man?
Portrait of a man
Answer: monstrous white dudes. End of the parenthesis.
Map of Paris
Consistent with previous observations: for DD, the main features of Paris are the Eiffel Tower(s) and the architecture, but not the Seine or other geographic features.
Cartography of Paris
The same, but worse.
Takeaway: DD is just bad at returning the shape of a city, country or continent. It tries instead to ship other elements that it sees as characteristic of the mapped entity. But if you ask for a city, country or continent, it will retain something of the scale of what you asked for. So “map of France” has a different flavor from “map of Paris”, even though neither is recognizable.
Enough with the basic prompts. I want to make a selection to start adding stuff to them. We can get maps of different scales (building, city, country…) and I do not want to choose, so I will combine the prompts that work best. Only then will I try different ways to tweak them.
Map of an island with mountains, rivers and cities
The rationale for this kind of prompt is that I want to have a variety of things in the same map. Yet all I can get for sure, it seems, are mountains and some expanse of water. The island is not guaranteed, nor are the rivers and cities. The results are pretty varied, however, which I appreciate: DD has more material to work with. It might be good enough for me. All these images are pretty map-like to me.
Next I would like to try a country. As we have seen, it is worth using an actual country name since it gives the map some flavor without imposing the actual shape of the country (let’s turn this weakness into an opportunity). Let me start with a country that is also an island!
Map of Japan with mountains, deserts, forests, fields, rivers and cities
Pros: map-like, detailed, and with a lot of character. Cons: strong presence of the flag, some writing, can be blurry. Even when the flag is not directly there, its colors creep into the image due to the diffusion process.
I want to try a country that is not an island, and that has a flag with colors that blend in more easily. I pick Ukraine.
Map of Ukraine with mountains, deserts, forests, fields, rivers and cities
Those maps look less drawn and more satellite-view-like. It is less charming. The yellow and blue of the flag keep creeping in, but it is less problematic. A very different flavor from the previous prompt. I try Rwanda next, because it’s yet another continent, and also the flag has map-like colors.
Map of Rwanda with mountains, deserts, forests, fields, rivers and cities
Satellite-like too. As before, I do not see rivers and cities. Let’s try the continent scale now.
Map of Africa with mountains, deserts, forests, fields, rivers and cities
The shape of Africa, in fact, appears, confirming my previous hypothesis. Maps of Africa apparently “want” to feature animals and people (image #2). The four images are pretty different. Interesting.
Map of Europe with mountains, deserts, forests, fields, rivers and cities
Those images are also pretty different from one another. I do not see rivers and cities, though. They also have this plastic-like appearance that we have seen a number of times now, notably with “world map”.
Map of Asia with mountains, deserts, forests, fields, rivers and cities
Like with Japan, those maps have a distinctive character. They still do not have rivers and cities, but they remain less satellite-like.
Next scale: the world. Since “map projection” gives us world maps, let me try this:
World map projection
It is a fail. For some reason, those look even more torn apart than “map projection”, to the point that image #2 might not even be understood as a map. I need another angle.
World map with mountains, rivers and cities
Weirdly, mountains are on top of those maps, seen from the side. In some ways, the scale seems smaller than for continents. DD is not doing great with the world scale.
Moving on. Ground plan and floorplan are both nice and pretty similar; shouldn’t I combine them?
Ground plan, floorplan
It works but it is not fantastic. I think I preferred “ground plan” by itself. It is both sparse and too square. “Dungeon map” might compensate for that, maybe? But wait a minute, what if I shortened all of that into…
Dungeon ground floorplan
My bad: it is now trying to make maps that could also be how it imagines a dungeon: stone bricks (image #1), windows (#2) etc. Let me rephrase.
Dungeon map, ground plan, floorplan
Now it works!
Network map with communities
I wanted to nudge network maps toward node-link diagrams. It kind of works, but it seems that “community” is also understood in the sense of “people” (images #1 and #4).
Gephi network map
DD correctly picks up the Gephi vibe. But at the same time, this is not the direction I want to take. I am looking for something less dramatic.
Node-link network map
Image #1 is surprising, but the others are like before, only sparser. My favorite attempt remains “network map with communities”.
At the end of the day, I will keep these 6 base prompts:
Dungeon map, ground plan, floorplan
Map of a city
Map of an island with mountains, rivers and cities
Map of Asia with mountains, deserts, forests, fields, rivers and cities
RPG map
Network map with communities
The first four are actual ways to (hopefully) generate maps. The last two are more like a control group, to track the impact of different modifiers. I do not expect those to necessarily produce map-like images, but who knows!
I will test different additions to these prompts to improve them. My goal will simply be to generate nice maps: aesthetically pleasing, but also recognizable as maps, and importantly, flat (as seen from above).
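For transparency, the following sections boil down to a test matrix: each modifier template applied to each of the six base prompts. A sketch of how it unrolls (the modifier list is abridged):

```python
# The test matrix behind the following sections: each modifier template
# applied to each of the six base prompts (modifier list abridged).
BASE_PROMPTS = [
    "dungeon map, ground plan, floorplan",
    "map of a city",
    "map of an island with mountains, rivers and cities",
    "map of Asia with mountains, deserts, forests, fields, rivers and cities",
    "RPG map",
    "network map with communities",
]
MODIFIERS = [
    "{}, trending on Artstation",
    "A beautiful {}",
    "A beautiful painting of a {}",
    "A beautiful drawing of a {}",
    "A beautiful {}, flat design",
    "A beautiful vintage {}",
]

for template in MODIFIERS:
    for base in BASE_PROMPTS:
        print(template.format(base))
```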
(…), trending on Artstation
Each row is a different base prompt.
Row 1: Dungeon map, ground plan, floorplan, trending on Artstation
Row 2: Map of a city, trending on Artstation
Row 3: Map of an island with mountains, rivers and cities, trending on Artstation
Row 4: Map of Asia with mountains, deserts, forests, fields, rivers and cities, trending on Artstation
Row 5: RPG map, trending on Artstation
Row 6: Network map with communities, trending on Artstation
This modifier visibly improves the aesthetic qualities of those images: better graphical consistency, more details. It comes with problems, though. First, a lot of narrow field of view. Second, the images tend to be realistic. Third, some elements are not drawn as seen from above, but rather from the side. These elements make the images less like maps and more like 3D dioramas. Those are maps from a movie screenshot, not from a printable file. The only case where it makes sense is the “RPG map” prompt, which is already headed in that direction.
Of those three problems, one is a battle I do not want to fight: things drawn from the side. I doubt I can fight against that, and it does not bother me much, as it is something we sometimes see in old maps. I want to get rid of the out-of-focus blur though, and I want a flatter rendering. I assume that the former may come with the latter, so I will start by seeking map-like flatness.
I will try the classic strategy (for DD) of asking explicitly for a “beautiful painting”. But before that, since most of our prompts are already “maps”, does DD have a notion of “beautiful maps”?
A beautiful (…)
Row 1: A beautiful dungeon map, ground plan, floorplan
Row 2: A beautiful map of a city
Row 3: A beautiful map of an island with mountains, rivers and cities
Row 4: A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities
Row 5: A beautiful RPG map
Row 6: A beautiful network map with communities
This looks promising to me. Even though the out-of-focus blur and 3D features are still present sometimes, the result is flatter and more map-like than “trending on Artstation”, while improving the aesthetic quality. I am especially impressed by the city map (row 2) and RPG map (row 5).
A beautiful painting of a (…)
Row 1: A beautiful painting of a dungeon map, ground plan, floorplan
Row 2: A beautiful painting of a map of a city
Row 3: A beautiful painting of a map of an island with mountains, rivers and cities
Row 4: A beautiful painting of a map of Asia with mountains, deserts, forests, fields, rivers and cities
Row 5: A beautiful painting of a RPG map
Row 6: A beautiful painting of a network map with communities
Those images are nice and flat, which is great. Are they map-like? They read to me as painted maps, which is one kind of map. But do they look like painted maps of something, or like paintings of that thing? It depends, I think. The island map (row 3) and Asia map (row 4) are better this way, but the city map (row 2) is more like a city and less like a map of a city. What if maps are drawings rather than paintings?
A beautiful drawing of a (…)
Row 1: A beautiful drawing of a dungeon map, ground plan, floorplan
Row 2: A beautiful drawing of a map of a city
Row 3: A beautiful drawing of a map of an island with mountains, rivers and cities
Row 4: A beautiful drawing of a map of Asia with mountains, deserts, forests, fields, rivers and cities
Row 5: A beautiful drawing of a RPG map
Row 6: A beautiful drawing of a network map with communities
These drawings are not always that different from paintings, but we have flatness, we have details, and we do not have background blur. Once again, the interaction with the base prompt can be surprising. Dungeon maps (row 1) already looked like drawings, and they still do, but other prompts now also look the same, notably city maps (row 2) and RPG maps (row 5). Also surprising: while the “beautiful painting” modifier worked well with island maps and Asia maps (rows 3 and 4), this modifier worked better for island maps (row 3) than Asia maps (row 4). With the latter, it was inconsistent, producing sometimes detailed, sometimes sparse, childish drawings. It interacted badly with RPG maps. I really like network map #3 (last row). Overall, it might be more map-like than “beautiful painting”, and it does solve some of my issues, but I am not entirely sold, and I want to push further and try to get a more modern style.
A beautiful (…), flat design
Row 1: A beautiful dungeon map, ground plan, floorplan, flat design
Row 2: A beautiful map of a city, flat design
Row 3: A beautiful map of an island with mountains, rivers and cities, flat design
Row 4: A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities, flat design
Row 5: A beautiful RPG map, flat design
Row 6: A beautiful network map with communities, flat design
It comes as a complete surprise to me that “flat design” may lead to miniature dioramas (rows 3 and 4). And I cannot explain why city maps, specifically, are transmogrified into the cuteness of row 2. I expected uniform fills, and on that level it delivers. It is also nice overall, very aesthetically pleasing, and pretty map-like, which is hard to get, as we have seen. But we have the out-of-focus issue again, and some amount of 3Dness. Yet I want to highlight how versatile this modifier is. It pushed each base prompt in a different direction. It is compatible with RPG maps (row 5) but apparently incompatible with network maps (row 6), which makes sense.
My takeaway: uniform fills and subtle gradients have a map-like flavor that I really like. Let me try an alternative to this modifier; it may do better.
A beautiful (…), vector graphics, illustrator
Row 1: A beautiful dungeon map, ground plan, floorplan, vector graphics, illustrator
Row 2: A beautiful map of a city, vector graphics, illustrator
Row 3: A beautiful map of an island with mountains, rivers and cities, vector graphics, illustrator
Row 4: A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities, vector graphics, illustrator
Row 5: A beautiful RPG map, vector graphics, illustrator
Row 6: A beautiful network map with communities, vector graphics, illustrator
We get the same flatness as before, but the city map (row 2) does not react differently from the other rows. Is that good, though? Open question. We do get out-of-focus blur, but it remains less diorama-like than the “flat design” modifier, and more map-like to my taste, which is a clear win. Now I want to explore modifiers that change the style in other ways, and see if they also give map-like images.
A beautiful (…), watercolor painting
Row 1: A beautiful dungeon map, ground plan, floorplan, watercolor painting
Row 2: A beautiful map of a city, watercolor painting
Row 3: A beautiful map of an island with mountains, rivers and cities, watercolor painting
Row 4: A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities, watercolor painting
Row 5: A beautiful RPG map, watercolor painting
Row 6: A beautiful network map with communities, watercolor painting
I like the hand-made touch, and the images are very flat (no 3D or blur). It works for dungeons (row 1), cities (row 2) and networks (row 6), but not so much for the island (row 3) and Asia (row 4), which look more like landscapes than maps.
A beautiful (…), colored pencil art
Row 1: A beautiful dungeon map, ground plan, floorplan, colored pencil art Row 2: A beautiful map of a city, colored pencil art Row 3: A beautiful map of an island with mountains, rivers and cities, colored pencil art Row 4: A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities, colored pencil art Row 5: A beautiful RPG map, colored pencil art Row 6: A beautiful network map with communities, colored pencil art
It is sometimes nice, notably for row 4, but DD gets confused by the prompt in two ways. First, it wants to draw pencils (row 2). Second, wool textures creep into the pencil texture (row 6). Not necessarily bad, but not map-like. It is less flat than watercolor (some 3D and blur appear).
A beautiful vintage (…)
Row 1: A beautiful vintage dungeon map, ground plan, floorplan Row 2: A beautiful vintage map of a city Row 3: A beautiful vintage map of an island with mountains, rivers and cities Row 4: A beautiful vintage map of Asia with mountains, deserts, forests, fields, rivers and cities Row 5: A vintage beautiful RPG map Row 6: A beautiful vintage network map with communities
This modifier is more effective than I expected. Despite the usual out-of-focus blur, the result generally feels quite map-like to me. Like a collage of colorized photos. Interesting results for city maps (row 2) and network maps (row 6).
I will now try the classic DD strategy of adding a painter’s name to borrow their style. I start with artists whose style could feel map-like.
A beautiful (…), by Wassily Kandinsky
Row 1: A beautiful dungeon map, ground plan, floorplan, by Wassily Kandinsky Row 2: A beautiful map of a city, by Wassily Kandinsky Row 3: A beautiful map of an island with mountains, rivers and cities, by Wassily Kandinsky Row 4: A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities, by Wassily Kandinsky Row 5: A beautiful RPG map, by Wassily Kandinsky Row 6: A beautiful network map with communities, by Wassily Kandinsky
Beautiful, but it does not feel like maps at all (except row 4 #2,3) because the features are too big. Interesting networks, though.
A beautiful (…), by Paul Klee
Row 1: A beautiful dungeon map, ground plan, floorplan, by Paul Klee Row 2: A beautiful map of a city, by Paul Klee Row 3: A beautiful map of an island with mountains, rivers and cities, by Paul Klee Row 4: A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities, by Paul Klee Row 5: A beautiful RPG map, by Paul Klee Row 6: A beautiful network map with communities, by Paul Klee
Better than Kandinsky, but still with the same problem: it resembles abstract painting more than maps. Too bad, row 4 #2 is otherwise fantastic.
A beautiful (…), by Max Ernst
Row 1: A beautiful dungeon map, ground plan, floorplan, by Max Ernst Row 2: A beautiful map of a city, by Max Ernst Row 3: A beautiful map of an island with mountains, rivers and cities, by Max Ernst Row 4: A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities, by Max Ernst Row 5: A beautiful RPG map, by Max Ernst Row 6: A beautiful network map with communities, by Max Ernst
Still pretty painting-like, but one more step toward mapiness. The city maps (row 2) and Asia maps (row 4) might be usable? Anyway, I will now switch to artists who are more illustrators than fine artists, and whose marked style could be interesting even though they did not produce map-like illustrations at all.
A beautiful (…), by Gustave Doré
Row 1: A beautiful dungeon map, ground plan, floorplan, by Gustave Doré Row 2: A beautiful map of a city, by Gustave Doré Row 3: A beautiful map of an island with mountains, rivers and cities, by Gustave Doré Row 4: A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities, by Gustave Doré Row 5: A beautiful RPG map, by Gustave Doré Row 6: A beautiful network map with communities, by Gustave Doré
Gustave Doré is mostly known as a printmaker, and I expected a fully black and white rendering, but that is not totally the case (and not an issue). Many of the renderings look hand-drawn and would pass for printed maps, setting aside the 3D features and out-of-focus blur that creep in occasionally. Certainly usable. Island and Asia maps (rows 3 and 4) are close to the “vintage” modifier. The network maps are fantastic in that “community” has been interpreted as crowds of people (take a close look!).
A beautiful (…), by Edmund Dulac
Row 1: A beautiful dungeon map, ground plan, floorplan, by Edmund Dulac Row 2: A beautiful map of a city, by Edmund Dulac Row 3: A beautiful map of an island with mountains, rivers and cities, by Edmund Dulac Row 4: A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities, by Edmund Dulac Row 5: A beautiful RPG map, by Edmund Dulac Row 6: A beautiful network map with communities, by Edmund Dulac
An incredible rendering by this artist, who does not make maps at all; I am enthusiastic about this one! His flat but vintage style is a great fit for maps.
A beautiful (…), by Victo Ngai
Row 1: A beautiful dungeon map, ground plan, floorplan, by Victo Ngai Row 2: A beautiful map of a city, by Victo Ngai Row 3: A beautiful map of an island with mountains, rivers and cities, by Victo Ngai Row 4: A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities, by Victo Ngai Row 5: A beautiful RPG map, by Victo Ngai Row 6: A beautiful network map with communities, by Victo Ngai
Victo Ngai is another incredible fit for maps, with a more digital style than Edmund Dulac, close to flat design, and a very distinctive flavor. The creeping tilt-shift blur needs to be managed, though. Island and Asia maps are very map-like, but network and dungeon maps are very rich and clean as well. RPG maps are more like RPG illustrations, though (and beautiful, nevertheless).
A beautiful (…), by Mary Blair
Row 1: A beautiful dungeon map, ground plan, floorplan, by Mary Blair Row 2: A beautiful map of a city, by Mary Blair Row 3: A beautiful map of an island with mountains, rivers and cities, by Mary Blair Row 4: A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities, by Mary Blair Row 5: A beautiful RPG map, by Mary Blair Row 6: A beautiful network map with communities, by Mary Blair
Those are disappointing in comparison to the last two, halfway between maps and paintings. I have better options.
I have tested enough artists for what I need, and with relative success, so it is now time to tackle two problems we have seen again and again: landscapes seen from the side instead of from above, and the out-of-focus blurriness.
A beautiful (…), seen from above, satellite view, trending on Artstation
Row 1: A beautiful dungeon map, ground plan, floorplan, seen from above, satellite view, trending on Artstation Row 2: A beautiful map of a city, seen from above, satellite view, trending on Artstation Row 3: A beautiful map of an island with mountains, rivers and cities, seen from above, satellite view, trending on Artstation Row 4: A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities, seen from above, satellite view, trending on Artstation Row 5: A beautiful RPG map, seen from above, satellite view, trending on Artstation Row 6: A beautiful network map with communities, seen from above, satellite view, trending on Artstation
The actual modifier tested here is “seen from above, satellite view”; I combined it with “trending on Artstation” because that one produced a number of images seen from an angle. This works well enough, but I assume that the “satellite view” part changes the character of some of the images, notably the dungeon maps (row 1), where trees now appear. I will have to remove that part in some cases.
I now tackle the blurriness of the field of view (the out-of-focus background). I do this by disincentivizing it with a negative weight. As it turns out, you are not limited to one prompt: you can use multiple ones, and even weight them. The weight is the number after the colon (it defaults to 1), and I separate the prompts with a pipe, even though in practice the syntax is a little bit different.
A beautiful (…), seen from above, satellite view, trending on Artstation: 1.2 | tilt-shift: -0.1 | blurred: -0.1
Row 1: 'A beautiful dungeon map, ground plan, floorplan, seen from above, satellite view, trending on Artstation:1.2', 'tilt-shift:-0.1', 'blurred:-0.1' Row 2: 'A beautiful map of a city, seen from above, satellite view, trending on Artstation:1.2', 'tilt-shift:-0.1', 'blurred:-0.1' Row 3: 'A beautiful map of an island with mountains, rivers and cities, seen from above, satellite view, trending on Artstation:1.2', 'tilt-shift:-0.1', 'blurred:-0.1' Row 4: 'A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities, seen from above, satellite view, trending on Artstation:1.2', 'tilt-shift:-0.1', 'blurred:-0.1' Row 5: 'A beautiful RPG map, seen from above, satellite view, trending on Artstation:1.2', 'tilt-shift:-0.1', 'blurred:-0.1' Row 6: 'A beautiful network map with communities, seen from above, satellite view, trending on Artstation:1.2', 'tilt-shift:-0.1', 'blurred:-0.1'
“Tilt-shift” produces exactly the kind of tilted angle with out-of-focus background that I want to avoid, so I give it a negative weight, and similarly with “blurred” as a backup plan. As those do not describe a precise thing to draw (only something not to draw), they must not compete too much with the main prompt, so they get a lower weight. I kept the total weight at 1, like before.
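In the DD notebook, the pipe notation above becomes a Python list of “text:weight” strings, the same format as the recap at the end of this post. A sketch (the dict key is the frame number, used for animations):

# Disco Diffusion format: a dict of frame number -> list of 'text:weight'
# strings. The weight defaults to 1 when omitted.
text_prompts = {
    0: [
        'A beautiful map of a city, seen from above, satellite view, trending on Artstation:1.2',
        'tilt-shift:-0.1',  # negative weight: steer away from tilt-shift blur
        'blurred:-0.1',     # backup plan: steer away from generic blur too
    ],
}
# If I remember correctly, DD rejects prompts whose weights sum to zero;
# here 1.2 - 0.1 - 0.1 = 1.0.
assert sum(float(p.rsplit(':', 1)[1]) for p in text_prompts[0]) != 0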
Unfortunately, we still have some blur and tilt-shifting (row 2) so I will increase the negative weight of those modifiers.
A beautiful (…), seen from above, satellite view, trending on Artstation: 2 | tilt-shift: -0.5 | blurred: -0.5
Row 1: 'A beautiful dungeon map, ground plan, floorplan, seen from above, satellite view, trending on Artstation:2', 'tilt-shift:-0.5', 'blurred:-0.5' Row 2: 'A beautiful map of a city, seen from above, satellite view, trending on Artstation:2', 'tilt-shift:-0.5', 'blurred:-0.5' Row 3: 'A beautiful map of an island with mountains, rivers and cities, seen from above, satellite view, trending on Artstation:2', 'tilt-shift:-0.5', 'blurred:-0.5' Row 4: 'A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities, seen from above, satellite view, trending on Artstation:2', 'tilt-shift:-0.5', 'blurred:-0.5' Row 5: 'A beautiful RPG map, seen from above, satellite view, trending on Artstation:2', 'tilt-shift:-0.5', 'blurred:-0.5' Row 6: 'A beautiful network map with communities, seen from above, satellite view, trending on Artstation:2', 'tilt-shift:-0.5', 'blurred:-0.5'
Now we run into another problem: the images lose their character, because the other prompts compete too much. In short, DD is happy with whatever is neither blurry nor tilt-shifted, and it does not try as hard to draw maps. It may also over-compensate. You can see that in the many spiraling patterns and the noise on the sides. I need to tone down the negative weights.
A beautiful (…), seen from above, satellite view, trending on Artstation: 1.4 | tilt-shift: -0.3 | blurred: -0.1
Row 1: 'A beautiful dungeon map, ground plan, floorplan, seen from above, satellite view, trending on Artstation:1.4', 'tilt-shift:-0.3', 'blurred:-0.1' Row 2: 'A beautiful map of a city, seen from above, satellite view, trending on Artstation:1.4', 'tilt-shift:-0.3', 'blurred:-0.1' Row 3: 'A beautiful map of an island with mountains, rivers and cities, seen from above, satellite view, trending on Artstation:1.4', 'tilt-shift:-0.3', 'blurred:-0.1' Row 4: 'A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities, seen from above, satellite view, trending on Artstation:1.4', 'tilt-shift:-0.3', 'blurred:-0.1' Row 5: 'A beautiful RPG map, seen from above, satellite view, trending on Artstation:1.4', 'tilt-shift:-0.3', 'blurred:-0.1' Row 6: 'A beautiful network map with communities, seen from above, satellite view, trending on Artstation:1.4', 'tilt-shift:-0.3', 'blurred:-0.1'
This works well enough for me! As we can see (row 2), it does not get rid of things seen from the side entirely, but we do not have absolute control over what DD does.
I am now confident that I have enough modifiers to assemble different kinds of maps and solve most of the problems I have met. I have also explored enough to understand that different base prompts “want” to go in different directions. This is what I will try to get:
A black-and-white, drawn map for the building scale (dungeon). Basically, something that could work for role playing.
A colorful tourist map for the city scale. I will not pursue a road map style, but something more illustrated.
A hand-painted vintage map for the island scale.
Something as map-like as possible for the continent scale (Asia). It might not actually look like a map of that scale, but it must look like a map.
A colorful hand-painted map for the RPG style. As we have seen, it does not necessarily look like a video game, but it often brings an aesthetically pleasing composition that I want to preserve.
Whatever could evoke a network while still looking like a map.
A beautiful drawing of a dungeon map and ground plan and floorplan seen from above by Gustave Doré and Victo Ngai, trending on Artstation, black and white, vector graphics:1.4 | tilt-shift:-0.3 | blurred:-0.1
My rationale here is to go for a drawing, add the “black and white” modifier, ask for Gustave Doré of course, but also Victo Ngai for the cleanliness of the style, to avoid something too messy, and similarly ask for “vector graphics”. I put “seen from above” but not “satellite view”, to avoid trees.
There is still some tilt-shifting creeping in, which might come from “trending on Artstation”. I also find that there are too many little details, and some color creeping in, which I attribute to Ngai’s style. I will try to correct for that by (1) adding “vintage”, (2) replacing Victo Ngai with Edmund Dulac, (3) removing “trending on Artstation”, and (4) pushing the negative weight of “tilt-shift” a bit further.
A beautiful drawing of a dungeon map and ground plan and floorplan seen from above by Gustave Doré and Edmund Dulac, black and white, vector graphics:1.6 | tilt-shift:-0.5 | blurred:-0.1
Here we are! Surprisingly, some figurative drawing creeps into the picture, but in fact I like it a lot, because it does not interfere too much with the mapiness of the image. It reminds me of annotations and “here be dragons” stuff. Let’s call that a happy accident, but you can see where it comes from (using Edmund Dulac in the prompt). Not all the renderings are equally good (#3 produced JPEG artifacts), but it does not matter, as I can cherry-pick in the end. I am happy enough with this one. Let’s move on to the second style.
A beautiful watercolor painting of a map of a city seen from above by Edmund Dulac and Victo Ngai, flat design, trending on Artstation:1.4 | tilt-shift:-0.3 | blurred:-0.1
I went for a combination that was strong on the illustrative side. I merged “A beautiful painting” and “watercolor painting” into “A beautiful watercolor painting”. I hesitated, but finally kept the “seen from above” part to ensure that it is map-like enough, even though it does not completely prevent features seen from the side.
The result passes well for a map, and is in fact pretty satisfactory, but I aimed at something more illustrated, less vintage and more digital. I will adjust by (1) removing Edmund Dulac, (2) replacing “watercolor” with “digital”, and (3) removing “seen from above”.
A beautiful digital painting of a map of a city by Victo Ngai, flat design, trending on Artstation:1.4 | tilt-shift:-0.3 | blurred:-0.1
That is closer to my goal, but there are problems; in particular, the occasional absence of roads or pathways makes it look less like a map and more like an illustration of a city. I will try to make it more like a tourist map by replacing “map” with “tourist map” and adding “with main roads and shops and restaurants”. I will also remove “trending on Artstation” to minimize the chances of background blur, and because the rest of the prompt should ensure an aesthetically pleasing image by itself.
A beautiful digital painting of a tourist map of a city with main roads and shops and restaurants by Victo Ngai, flat design:1.4 | tilt-shift:-0.3 | blurred:-0.1
Almost there! But some of those failed at being maps, and the detail level is sometimes too high. I will put back “trending on Artstation”, raise the negative weight of “tilt-shift”, simplify “digital painting of a tourist map” into just “digital map”, and simplify “with main roads and shops and restaurants” into “with roads and shops”.
A beautiful digital map of a city with roads and shops by Victo Ngai, flat design, trending on Artstation:1.6 | tilt-shift:-0.5 | blurred:-0.1
That works well enough for me :) And I like the presence of ideogram-like signs.
Moving on to the third style: a hand-painted vintage map of an island.
A beautiful vintage drawing of a map of an island seen from above with mountains, rivers and cities, satellite view by Edmund Dulac and Gustave Doré, trending on Artstation:1.4 | tilt-shift:-0.3 | blurred:-0.1
I went for the styles that looked similar to the “vintage” modifier, which meant choosing “drawing” over “painting”, even though it will probably not look like a drawing in the end.
Why did a sausage land in image #2? Anyway, it turns out that the style of Gustave Doré makes the whole thing a bit dirty to my taste, so I will try to tone it down by adding a third artist, Mary Blair, because her style is clean but feels like the ’60s, which fits. I will also move “satellite view” after “trending on Artstation” to keep the main part of the sentence more coherent. I will try to reinforce the map character with a new modifier, adding “coastal” before “map”, and I will add “blue mood” to nudge the general color of the map.
A beautiful vintage drawing of a coastal map of an island seen from above with mountains and rivers and cities by Edmund Dulac, Mary Blair and Gustave Doré, trending on Artstation, satellite view, blue mood:1.4 | tilt-shift:-0.3 | blurred:-0.1
Unfortunately, while the details are more map-like, I feel we have lost some mapiness in the composition. I assume that “Mary Blair” interferes too much. I will replace that part with a “vector graphics” modifier, to get just a bit of cleanliness without altering the drawing style too much.
A beautiful vintage drawing of a coastal map of an island seen from above with mountains and rivers and cities by Edmund Dulac and Gustave Doré, trending on Artstation, satellite view, vector graphics, blue mood:1.4 | tilt-shift:-0.3 | blurred:-0.1
Great! I will stop there. To the fourth style: something as map-like as possible using the base prompt of the Asia map.
A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities seen from above by Victo Ngai, satellite view, flat design, vector graphics, illustrator:1.3 | tilt-shift:-0.1 | blurred:-0.2
The style is as clean as I hoped for, but still a bit too close to illustration for actual maps. Part of it is due to the vibrant colors of Victo Ngai, but also to the patterns she uses, which are not typical of maps. For that reason, I want to draw Edmund Dulac into this, as a counterpoint, hoping that he will not break the cleanliness of the render. I will also add “desaturated colors”. Finally, I will add “outlines” to nudge toward that kind of graphic signal.
A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities seen from above by Victo Ngai and Edmund Dulac, satellite view, flat design, vector graphics, illustrator, outlines, desaturated colors:1.3 | tilt-shift:-0.1 | blurred:-0.2
Unfortunately, the colors remain pretty vibrant, and some tilt-shift style occasionally creeps in. To address both at once, I will redistribute the prompts. I will move “desaturated colors” to a secondary prompt, to which I will add “satellite map” to keep nudging DD in the right direction, and I will raise the negative weight of “tilt-shift” a bit.
A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities seen from above by Victo Ngai and Edmund Dulac, satellite view, flat design, vector graphics, illustrator, outlines:1.3 | tilt-shift:-0.3 | blurred:-0.2 | satellite map with desaturated colors:0.2
This did not work, as apparently the style of Victo Ngai is too dominant. From there, I will essentially split the main prompt into two with different weights, one by Dulac and one by Ngai with a small weight.
A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities seen from above by Edmund Dulac, satellite view, flat design, vector graphics, illustrator:1.2 | A beautiful map of Asia seen from above by Victo Ngai, satellite view, flat design:0.1 | tilt-shift:-0.3 | blurred:-0.2 | satellite map with desaturated colors:0.2
It was effective, but I feel we are too far away from the style of a modern map. To compensate, I will add “trending on Artstation” to the main part of the prompt, and give a bit more weight to the Ngai part. I also see that the patterns are more illustrative than map-like, but at this point, I will not be able to address that.
A beautiful map of Asia with mountains, deserts, forests, fields, rivers and cities seen from above by Edmund Dulac, trending on Artstation, satellite view, flat design, vector graphics, illustrator:1.0 | A beautiful map of Asia seen from above by Victo Ngai, satellite view, flat design:0.3 | tilt-shift:-0.3 | blurred:-0.2 | satellite map with desaturated colors:0.2
The balance is better, but there is still too much grain, despite all the modifiers like “flat design”. I will attempt one more push in that direction by adding “digital” before “map” in the main part, and adding Mary Blair as a second artist in that same part. Fingers crossed.
A beautiful digital map of Asia with mountains, deserts, forests, fields, rivers and cities seen from above by Edmund Dulac and Mary Blair, trending on Artstation, satellite view, flat design, vector graphics, illustrator:1.0 | A beautiful map of Asia seen from above by Victo Ngai, satellite view, flat design:0.3 | tilt-shift:-0.3 | blurred:-0.2 | satellite map with desaturated colors:0.2
Phew! Here we are, or at least I am satisfied enough to stop there, even though it is far from perfect. At least the clean appearance is back. This one was hard. As you can see, the map-like style is largely a reconstruction, a delicate assemblage that draws on the observed behavior of the algorithm. Part of it is surprising behavior that we can learn to understand, part of it is interaction between sub-prompts that remains pretty chaotic, and part of it is just coincidence, luck, and a lot of iteration.
For the fifth style, the RPG map, I will let DD follow its own inclination, as this is where it started, and try to emphasize it.
A beautiful RPG map, flat design, trending on Artstation:1.1 | blurred:-0.1
I will leave it at that. I am making an exception here to allow tilt-shift. This will look more illustrative, like imaginary screenshots of cute RPGs that do not exist. By contrast with the previous style, it is so much easier when you do not fight the algorithm. In that sense, you just get more out of the algorithm if you let it go where it wants to, which is also why so many A.I. art pieces look the same. And as you can see, getting something different may take a lot of work. Work that is, by nature, artistic.
For my sixth and last style, the network map, I will just look for a mix of moderate mapiness, some but not too many links, and rich details.
A beautiful painting of a vintage network map with communities seen from above by Paul Klee, Gustave Doré, Edmund Dulac and Victo Ngai
Well that is weird, fun, and nice. “Community” gives little persons, and I am happy with that. The balance is good enough for me, so be it! I now have my 6 styles.
A large rendering of each of the 6 prompts
For those, I asked for more pixels, more details, and more diffusion steps. Each one takes over 30 minutes to render.
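For reference, these are the kinds of settings involved, using the parameter names of the DD notebook (the values here are indicative, not necessarily the ones I used):

# Indicative Disco Diffusion settings for a large, detailed render.
width_height = [1600, 1024]  # more pixels (DD likes multiples of 64, if I remember correctly)
steps = 500                  # more diffusion steps: more refinement, slower
cutn_batches = 4             # more cuts per step: more details, slower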
1. Building map (dungeon style)
2. City map (tourist style)
3. Island map (vintage style)
4. Country map (modern style)
5. RPG map (video game style)
6. Network map (painting style)
Recap of the prompts used, in the DD format:
# 1. Building map (dungeon style)
['A beautiful drawing of a dungeon map and ground plan and floorplan seen from above by Gustave Doré and Edmund Dulac, black and white, vector graphics:1.6',
'tilt-shift:-0.5',
'blurred:-0.1']
# 2. City map (tourist style)
['A beautiful digital map of a city with roads and shops by Victo Ngai, flat design, trending on Artstation:1.6',
'tilt-shift:-0.5',
'blurred:-0.1']
# 3. Island map (vintage style)
['A beautiful vintage drawing of a coastal map of an island seen from above with mountains and rivers and cities by Edmund Dulac and Gustave Doré, trending on Artstation, satellite view, vector graphics, blue mood:1.4',
'tilt-shift:-0.3',
'blurred:-0.1']
# 4. Country map (modern style)
['A beautiful digital map of Asia with mountains, deserts, forests, fields, rivers and cities seen from above by Edmund Dulac and Mary Blair, trending on Artstation, satellite view, flat design, vector graphics, illustrator:1.0',
'A beautiful map of Asia seen from above by Victo Ngai, satellite view, flat design:0.3',
'tilt-shift:-0.3',
'blurred:-0.2',
'satellite map with desaturated colors:0.2']
# 5. RPG map (video game style)
['A beautiful RPG map, flat design, trending on Artstation:1.1',
'blurred:-0.1']
# 6. Network map (painting style)
['A beautiful painting of a vintage network map with communities seen from above by Paul Klee, Gustave Doré, Edmund Dulac and Victo Ngai']
Styling a network map
Finally, can that be useful to give a different appearance to a network map? Here is a base image given to DD for each of the prompts, and how it was subsequently modified.
I made a giant network map for Le Monde. It was published on the 1st of April 2022, and it was no April fools’ joke! Here I talk about the craft I put into it.
Géopolitique de la twittosphère, Le Monde 2022-04-01
It was a collaboration. The data came from Linkfluence (Guilhem Fouétillou). They were gathered and processed by Linkage (Pierre Latouche, Carlos Ocanto, Stéphane Petiot and Charles Bouveyron). The data were editorialized by the journalists from Le Monde who wrote the related papers (notably Nicolas Chapuis and Matthieu Goar).
The visualization was adapted into different formats online. A simple scrollytelling piece presents the four articles of the series (paywall), where a larger, zoomable version can also be found. You can download the largest map just below. It is published under CC-BY-SA.
Download the visualization in large format
What if?
My goal is to make visible the decisions baked into these images. I will not show the process the way it looked to me. I will use a “what if?” style: what if I had made different decisions? You will see that the map could have ended up very differently.
I will use a smaller version of the map, readable at the format of this blog, as a reference. The decision points I mention below had different answers depending on the situation. This is the reference map for this post:
The reference map (the decisions I ended up making)
Layout algorithm. There are multiple algorithms that place the nodes in the picture, and none is “the best”. I used Force Atlas 2 with LinLog, because I find it convenient and I like its result. Here is what it would look like with OpenOrd. I do not like the result, for two reasons: nodes overlap (it is not readable), and clusters are artificially separated (that is how this algorithm works). Anyway, if I were to go with it, I would have to adjust all the other decisions. There are dependencies between the design decisions.
Another layout: OpenOrd
The settings also matter. Here is what Force Atlas 2 gives you with default settings. Not too different, but the nodes overlap, the clusters are less defined, and the minor nodes all around drifted farther away, which caused a framing issue.
Another layout: Force Atlas 2 with default settings.
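I work in Gephi, but for the curious, here is a rough equivalent in Python with the fa2 package, a port of Force Atlas 2 (a sketch on a stand-in graph; as far as I know, this port does not implement the LinLog mode I used):

import networkx as nx
from fa2 import ForceAtlas2  # pip install fa2

G = nx.les_miserables_graph()  # stand-in graph for the example

# These parameters mirror Gephi's Force Atlas 2 panel; scalingRatio and
# gravity control cluster spread and how far peripheral nodes drift.
layout = ForceAtlas2(
    scalingRatio=2.0,
    gravity=1.0,
    strongGravityMode=False,
    barnesHutOptimize=True,
)
positions = layout.forceatlas2_networkx_layout(G, pos=None, iterations=2000)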
Orientation. Obvious but crucial: I intentionally put the political left on the left, and the political right on the right. I also put the government at the top and the antisystem at the bottom. That was a choice aimed at sticking to expectations. But the layout has no orientation by default.
Same layout, but with a different orientation. Equally valid, but…
Node size. In this map, node size represents how much a node is cited (within the corpus). Of course, being cited is harder than citing: everyone can retweet a lot. So being cited is a better indication of notoriety or influence than citing. Yet the ability to cite means something too, albeit something else: it is a proxy for activism. I made a choice here, but see below what it would have looked like with size as a function of citing (i.e., mentioning and retweeting). As you can see, activism comes from the sides of the map. The borders cite the center. Also note that many nodes cite a lot, while only a few nodes are cited a lot.
If node size represents how many accounts of the corpus one retweets or mentions
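In code, the difference is just which degree you read on the directed graph. A sketch on a stand-in graph, not the actual pipeline:

import math
import networkx as nx

G = nx.gnp_random_graph(500, 0.02, directed=True)  # stand-in directed graph

# Being cited = in-degree; citing = out-degree. Sizing by the square root
# keeps the node area (not the radius) proportional to the degree.
size_cited  = {n: 2 + 4 * math.sqrt(d) for n, d in G.in_degree()}
size_citing = {n: 2 + 4 * math.sqrt(d) for n, d in G.out_degree()}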
Underlying heat map. I use a heat map for different things. It summarizes where the nodes gather. Let me show it to you, as it comes up later on. Black means low, white means high. I kept the rest of the map on top of it for context, but of course I just use the heat (i.e., height) information. It is simply computed from the node positions.
The heat map used as a background: black in the back, white in the front.
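For the curious, a minimal sketch of how such a heat map can be computed from node positions: rasterize the nodes on a grid, then smooth with a Gaussian kernel. The “heat map settings” I discuss further down are essentially this sigma and the grid resolution.

import numpy as np
from scipy.ndimage import gaussian_filter

def heat_map(xs, ys, size=1000, sigma=15.0):
    # Node density on a size x size grid, then smoothed. The sigma
    # (in pixels) balances general shapes against local details.
    grid, _, _ = np.histogram2d(xs, ys, bins=size)
    return gaussian_filter(grid, sigma=sigma)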
Hill shading. I use a classic cartography technique called hill shading: drawing the shadows produced by elevation. Of course, the elevation is fake; it is just derived from the heat map (i.e., node density). Without it, the map looks like this.
No hill shading (and no hypsometric gradient, see below).
How different does it look to you? In my view, hill shading plays two roles. It emphasizes high-density areas, which have a specific meaning (communities/clusters), and it is reminiscent of traditional (geographical) maps. It helps readability, and it makes the image more familiar.
Hypsometric gradient. In addition to the hill shading, cartographers often use a hypsometric gradient: the background color depends on the elevation. I do the same to evoke classic cartography, with blue where there are no nodes (the sea) and a light color where there are nodes (land). I find the analogy with continents and islands useful. The gradient is instrumental to this; the metaphor disappears if I remove it.
Just the hill shading, without the hypsometric gradient.
Hill shading settings. Hill shading takes two parameters: the height of the sun (elevation) and its clockwise angle (azimuth). We are used to having the light come from the top-left, so that is what I used. Check a different choice:
Hill shading with light from the bottom-right
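My own implementation is in JavaScript, but for illustration, matplotlib has a LightSource that takes exactly these two parameters (azdeg for the azimuth, altdeg for the sun height) and can blend a hypsometric colormap with the shading. The colors below are hypothetical, not the ones of the map.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LightSource, LinearSegmentedColormap
from scipy.ndimage import gaussian_filter

# A hypothetical sea-to-land gradient: blue where empty, light where dense.
hypsometric = LinearSegmentedColormap.from_list(
    'seaside', ['#2a4d69', '#cde6d0', '#f4efe3']
)

ls = LightSource(azdeg=315, altdeg=45)  # light from the top-left
heat = gaussian_filter(np.random.rand(500, 500), sigma=20)  # stand-in heat map
shaded = ls.shade(heat, cmap=hypsometric, blend_mode='soft')
plt.imshow(shaded)
plt.show()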
Inspiration, and impact craters. For this map, I drew inspiration from old maps of the moon by NASA. You’ll see the reference right away. Here is an example.
From there, a happy accident. Around highly connected nodes, the smaller nodes are repelled relatively far away, which, with hill shading, resembles an impact crater on the map. The crater evokes importance, which is consistent with the meaning of highly connected nodes. So I kept it, even though it was not intentional. I owned it, but a posteriori. It is very visible when you just have the hill shading and its hypsometric gradient (what I would call the “basemap”, the background of the map).
Hill shading and hypsometric gradient only. Big nodes make impact craters. That’s an accident, but I like it.
Heat map settings. Now, this whole elevation thing depends on how the heat map is computed. There are different settings at play. For every map, I seek a balance between general shapes and local details. Check different settings:
Different heat map settings. Botero style?
Display edges. Or not. We had a discussion with Le Monde about displaying edges. Pros: it makes it clear that this is a network, and it adds information. Cons: it brings clutter, and it is useful only at high resolutions. It does not make sense at the detail level of the map I use as a reference, but here is what I suggested for a high-resolution, work-in-progress version of the map (detail). It was pretty subtle. Yet we ruled it out as unnecessary.
With edges (work in progress; detail)
Color
Node colors. We naturally associate colors with political parties. As we have seen, the positions already match our expectations. Yet color is instrumental to an intuitive reading. If all the nodes had the same color, we would see something like this:
If nodes had no colors.
A common approach is to use a community detection algorithm to get groups, and color them. Le Monde has a set of colors for political orientations. It turns out that the communities are recognizable, and we can apply those colors to them. It looks like this.
Colors from modularity clustering. Good as an approximation, horrible in the details. Node shadows not displayed (see below).
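For the curious, this is roughly how such a coloring is obtained (a sketch with networkx, whose recent versions ship a Louvain implementation; not the exact pipeline of the map, and the palette is hypothetical):

import networkx as nx

G = nx.les_miserables_graph()  # stand-in graph

# Modularity-based clustering (Louvain), then one color per community.
# Note how it puts every node in a box, a problem discussed just below.
communities = nx.community.louvain_communities(G, seed=42)
palette = ['#d94b4b', '#4b7bd9', '#d9b84b', '#6bbf6b', '#9b59b6', '#7f8c8d']
node_color = {
    n: palette[i % len(palette)]
    for i, group in enumerate(communities)
    for n in group
}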
However, political affiliation is a touchy matter. We already expected some people to be mad at the map and the articles, because Twitter can be notoriously toxic. We knew that any affiliation mistake would be used as a way to discredit the journalistic work. And community detection makes A LOT of affiliation mistakes, like Emmanuel Macron painted as an antisystem (!). Not only because it is approximate; not only because society is more complex than what an algorithm can grasp; but also because modularity clustering tries to put everyone in a box. Not everyone has a political color, and this is very visible in this network, where many people have a critical relation to the candidates, and have commented on multiple ones, in a non-activist way. Not everyone is partisan.
The journalists from Le Monde, and notably Nicolas Chapuis and Matthieu Goar, manually investigated the most cited accounts in the network. They retrieved the political color by hand, looking at what people declare in their Twitter description. It does not mean that it is “the truth”. For instance, a number of political activists self-identify as journalists. Nevertheless, it is more respectful to people, and the positions in the map suffice to challenge self-declared affiliations. “Follow the actors” is a classic guideline of controversy mapping, by the way.
Manually retrieving the political color is time-consuming, and we only did it for approximately 1,000 accounts, i.e. 3% of them. This leads to a visualization issue: as you can see below, there are not enough colored nodes to make the different areas visible. The big dots have a color, but the many small ones remain gray. As a result, you will see the areas if you pay attention, but the big picture does not jump out at you.
Curated node colors (without node shadows).
Edges usually mitigate that kind of problem, because they occupy so much space that they become a kind of background. Here is, for instance, what the network looks like in Gephi if we display the edges and color them as the mix of the source node and target node. There is a bit more color. Yet the many grey nodes (those without a political color) keep obscuring the political colors. And we do not display edges anyway.
A screenshot of the network in Gephi, with edges.
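For illustration, a naive RGB mix of the endpoint colors (a hypothetical sketch, not necessarily Gephi's exact formula) also shows why mixing colors is tricky:

def mix_hex(c1, c2):
    # Average two '#rrggbb' colors channel by channel.
    rgb1 = [int(c1[i:i+2], 16) for i in (1, 3, 5)]
    rgb2 = [int(c2[i:i+2], 16) for i in (1, 3, 5)]
    return '#' + ''.join(f'{(a + b) // 2:02x}' for a, b in zip(rgb1, rgb2))

# A blue mixed with a yellow gives a grey, not a green:
print(mix_hex('#4b7bd9', '#d9b84b'))  # -> '#929992'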
Node color shadows. I have developed a trick to emphasize node colors: “node shadows” (for lack of a better name). Basically, I paint the background with the color of the nodes. I just make sure that it is very smooth, so that it is not too intrusive or busy, and that the colors do not mix too much, as blue and yellow do not equal green in this context (a right-winger plus an antisystem account do not give you an ecologist). Yet I now run into another problem: the grey dots keep the shadows of colored nodes contained, precisely because the colors do not mix. That is a good thing in general, but not here. Here the shadows work a little, but not much.
With node shadows, but not tuned.
A good design is always specific: I had to tune the process so that the shadows of uncolored nodes are not taken into account, and the colors of the relevant ones can spread far enough. This finally gives the highlight we need to convey the big picture right away.
The reference map, where I had to tune the node shadows.
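My actual implementation is in JavaScript and more involved, but here is a minimal sketch of the idea: splat the colored nodes into a raster, blur, and normalize, skipping the uncolored nodes so that their grey does not contain the colored halos. (This simplified version still averages colors where communities meet; the real trick also limits that mixing.)

import numpy as np
from scipy.ndimage import gaussian_filter

def node_shadows(nodes, size=1000, sigma=25.0):
    # nodes: list of (x, y, rgb) with 0 <= x, y < size and rgb as floats
    # in 0..1, or rgb=None for uncolored nodes, which are skipped.
    color = np.zeros((size, size, 3))
    weight = np.zeros((size, size))
    for x, y, rgb in nodes:
        if rgb is None:
            continue  # grey nodes must not contain the colored shadows
        color[int(y), int(x)] += rgb
        weight[int(y), int(x)] += 1.0
    # Blur colors and weights together, then take the weighted average:
    # colors spread smoothly around the nodes that carry them.
    color = gaussian_filter(color, sigma=(sigma, sigma, 0))
    weight = gaussian_filter(weight, sigma=sigma)
    return color / np.maximum(weight, 1e-9)[..., None]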
Labels
Label colors. As we have seen, colors have a political meaning. I had to compromise. For instance, you may have noticed that the Zemmour brown (on the right) is pretty dark, while the antisystem yellow (at the bottom) is pretty light. This creates visual distortion, but I deemed it acceptable. The labels, however, also have to be readable. The yellow, in particular, creates a readability issue, as you can see below.
With labels the exact same color as nodes.
My solution was to constrain each label color to an acceptable range of lightness. I did not do it manually; I used the HCL color space (hue, chroma, lightness).
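In practice, HCL is the cylindrical form of the Lab color space, so clamping the lightness while keeping hue and chroma amounts to clamping the L of Lab. A sketch (the lightness bounds here are hypothetical, not my actual values):

import numpy as np
from skimage.color import rgb2lab, lab2rgb

def clamp_lightness(rgb, l_min=25.0, l_max=60.0):
    # Constrain a color's lightness (the L of HCL/Lab) to a readable
    # range, leaving hue and chroma untouched. rgb: floats in 0..1.
    lab = rgb2lab(np.array([[rgb]]))
    lab[0, 0, 0] = np.clip(lab[0, 0, 0], l_min, l_max)
    return tuple(np.clip(lab2rgb(lab)[0, 0], 0, 1))  # clip: stay in gamut

# Hypothetical example: darken a very light yellow to make labels readable.
print(clamp_lightness((1.0, 0.9, 0.2)))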
Fusion modes. It took some tricks to make the labels readable and natural at the same time. Notably, labels need a border. Otherwise, they conflict with the nodes and are not legible.
Labels without any border. It creates readability issues.
For simple maps, the border could simply be of the same color as the background. But this map is too sophisticated already. What even is the background? Any color I pick will conflict somewhere. The result is acceptable, but it draws too much attention to certain places, and creates imbalance.
Label borders using a background color: quite visible.
Instead, I use fusion modes on a combination of layers, so that the labels get a border of node-free space but still blend with the hill shading, the hypsometric gradient, the node shadows, and so on. The result just feels natural, but it is, in fact, the most complex solution.
Fusion modes used to blend the label borders naturally.
Constant label thickness. I use another trick to ensure visual homogeneity: I keep the apparent stroke thickness of the labels approximately constant. The large labels get a thin font weight, while the small ones get a thick one. This is not very apparent in our case because, as we will see, I also made the labels of the candidates bolder. Yet without this trick, a large discrepancy between label sizes, which happens with large maps, makes small labels too thin (less readable) and large labels too thick (too emphasized).
If the font weight were the same regardless of the font size. I have exaggerated the label size range.
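The mapping itself can be very simple. Here is a hypothetical version (not my exact formula), where the weight decreases as the font size grows:

def font_weight(size_px, reference_px=12.0, reference_weight=400):
    # Keep the apparent stroke thickness roughly constant by decreasing
    # the weight as the font size grows.
    weight = reference_weight * reference_px / size_px
    return int(min(max(weight, 100), 900))  # clamp to usual weight range

print(font_weight(8))   # small label -> 600 (bolder)
print(font_weight(24))  # large label -> 200 (thinner)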
How many labels? There is a limit to how many labels you can draw, because there is not enough space. There are about 33,000 labels to display, which is just impossible. Of course, it depends on the display size, so bigger or zoomable maps can afford more labels. Regardless, labels also bring visual clutter. We had a conversation about limiting the number of labels. In the end, we decided to show quite a lot of them. A map with fewer labels, like the one below, looks more like a summary.
With only 25 labels.
Curved labels. A visually strong choice. Label paths follow the heat map gradient, but are constrained to not curve too much (it depends on the font size, by the way). My goal was to emphasize the isotropic nature of the visualization: contrary to a scatter plot, the space is the same in every direction. There are no axes. I find that horizontal labels put too much emphasis on the X axis, and suggest a type of reading that is not appropriate.
Classic, horizontal labels.
That being said, if the wiggly labels are too distracting, a compromise could be to allow various orientations but not the curvature (see below). I picked the most organic looking option, but that is not set in stone.
Isotropic but straight labels.
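For the curious, the orientation itself is cheap to compute: at the label anchor, take the gradient of the heat map and rotate it by 90°, so that the label follows the level curves of the landscape (a sketch; the actual curved paths require more work):

import numpy as np

def label_angle(heat, x, y):
    # Orient a label along the level curves of the heat map, i.e.
    # perpendicular to its gradient at the label anchor (x, y).
    gy, gx = np.gradient(heat)              # gradient of the height field
    angle = np.arctan2(gy[y, x], gx[y, x])  # direction of steepest ascent
    return angle + np.pi / 2                # rotate 90 degrees: follow the contour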
Forcing the labels of candidates. The journalists asked why some candidates were not visible. Indeed, initially, the candidates were not special nodes. If they were not visible enough, their label could be omitted (see below).
No special case for candidates. Some of which are not visible.
I added custom code to ensure that they would be displayed. Le Monde also asked me to make them more visible, which I did by making them bolder. Then a problem arose: the labels of some candidates conflicted (see below). To fix that, I had to specify the orientation of some candidates’ labels so that they no longer conflict.
Forcing the display of some labels creates conflicts, which I had to fix by creating exceptions in the code.
Hide some nodes. Finally, for one of the papers, I had to display only certain actors (those who mentioned a given conspiracy theory). One might think that it suffices to hide part of the nodes, but we need to keep the basemap. So we need to hide the nodes for certain layers, and not for others. Once again, this requires tinkering with the code, because it is quite specific.
Display only a selection of nodes, but keep the base map.
A final word
I am aware that I did not explain how this is done in practice. That is for another time. But you can picture a reaaally long JavaScript file that uses a lot of Graphology and D3, and that I need to use carefully, because bad settings lead to various issues like out-of-memory errors, horrible glitches, and code that unnecessarily runs for hours.
I also did not explain the basics of visual network analysis. You can find that in my PhD thesis, though.
I can nevertheless tell what the map does to its audience. It says: “there is an order to that chaos”. It says that the political space of Twitter is, somehow, organized. How? For that, you have to read the related articles in Le Monde (that is the point). And the map also does something else: it draws you into the data. It encourages you to take a look at the labels, and explore the data by yourself. I know that this kind of visual literacy is not widespread. But we have to start somewhere…