The Gephisto paper is out! Anders Munk and I defend the idea that since people come to understand tools by using them, tools should cultivate critical thinking through use, and even more so through ease of use.
Easier said than done, though. Gephisto is an attempt at doing this in practice. You can see it as a one-click Gephi. Give it a GEXF or GraphML file, and it makes a network map. Your data is not uploaded; it all happens on your own computer. https://jacomyma.github.io/gephisto/
Gephisto is primarily intended for teaching, although you may find it useful as a network map maker. But if you find it frustrating, understand that this is integral to its design. One-click magic comes at a price.
We argue that for the user, making informed decisions is like climbing a ladder: becoming aware that a choice exists, knowing its options, what they entail, and how to assess them. Climbing the ladder requires effort.
In practice, users must choose between climbing the ladder (learning) and meeting their needs (utilitarian use of the tool). They have limited resources, so they must compromise between learning and meeting their operational goals.
The problem is not using the tool in utilitarian, unreflexive ways. Even seasoned experts go for shortcuts in time-constrained situations. The problem is never climbing the ladder. That’s where ease of use can be a trap.
Some people say that science tools should be hard to use, so that untrained users, supposedly uncritical and unreflexive, cannot access them. Such tools would then only be usable by the worthy users (critical, reflexive), and that would preserve good science.
But this argument fails to acknowledge that most beginners take shortcuts because they have to meet operational goals in time-constrained situations, not because they are uncritical. Climbing the ladder is just costly.
Hard-to-use tools are like Thor’s mythical hammer, Mjølnir, which can only be lifted by the worthy. It requires people to be worthy, but it does not make them worthy. So users will work around the problem to meet their needs, leading to even worse misuses.
Contrary to Mjølnir, Gephisto is not hostile to the user. It delivers the shortcut (a visualization in one click). However, the more you use it, the more you get to understand the thickness of the decisions it makes for you.
It does that by making the decisions half-randomly. One can make good and bad choices, and Gephisto always tries to make the best ones. But even so, a number of decisions remain. For example: black or white background? It picks at random, differently each time.
This inconsistency is the price of ease of use. It should become frustrating as you realize that you actually want to make certain decisions. Say you prefer the black background. You are climbing the ladder. You need to move to Gephi.
I know how to use Gephi, yet I sometimes use Gephisto. I have climbed the ladder, yet sometimes want a super quick result. It does not make me uncritical. Ease of use does not have to be the nemesis of reflexiveness.
We want to make tools that elevate the user, not tools that lock them out until they have found other ways to get “worthy”. It’s hard though. Gephisto works but it is a prototype, a statement. We hope it paves the way forward to better science tools.
This is a show and tell, a photo report, and open notes about this work.
In short, we have harvested ~1.5M academic abstracts about AI, machine learning and algorithms from Scopus, we have extracted expressions, and made a co-occurrence network. We have rendered this network as a big map, and we are annotating it.
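For readers who want a concrete picture of that step, here is a minimal sketch of the general idea: turning per-abstract expression lists into a co-occurrence network with networkx. The toy data and thresholds are placeholders, not our actual pipeline.

```python
import itertools
from collections import Counter

import networkx as nx

# Hypothetical input: one list of extracted expressions per abstract.
documents = [
    ["machine learning", "orthopedics", "image segmentation"],
    ["machine learning", "telescope", "image segmentation"],
    ["telescope", "exoplanet detection"],
]

# Count how often each pair of expressions appears in the same abstract.
cooccurrence = Counter()
for expressions in documents:
    for a, b in itertools.combinations(sorted(set(expressions)), 2):
        cooccurrence[(a, b)] += 1

# Build the co-occurrence network; a real pipeline would also filter
# rare expressions and weak edges before visualizing in Gephi.
G = nx.Graph()
for (a, b), weight in cooccurrence.items():
    G.add_edge(a, b, weight=weight)

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```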
Johan Søltoft harvested the data, Snorre Ralund led the computations, and Anders Munk, Matilde Ficozzi and I did the qualitative work. This is part of the ADD project.
This is how we work. The map lies on a big table next to a big screen, and we navigate between the original network in Gephi, where we can filter in various ways to look more closely at the topology; a spreadsheet where we have a series of clusters to annotate (obtained from clique percolation); and an ElasticSearch engine where we have stored all the abstracts, and where we can read them for a given cluster. We draw on the map with various markers.
Where we annotate the map. Done!
The network is mostly made of clusters, but not only. The non-cluster parts are the hardest to work with, but also the most interesting. They are the reason why we do this qualitative work (annotating is a form of coding). Our very last piece was a bridge, for instance.
The last bridge to annotate. Finally! It was long. Weeks of full-time work. Matilde did the biggest part: summarizing the underlying literature of most of the clusters.
The base map consisted of 7.5K nodes and 85K edges. I had rendered it in a neutral way but with a bit of hillshading, so that we could read the clusters quickly while being able to annotate on top. Some of the node labels (expressions such as “telescope” or “orthopedics”) were displayed to help. We added about 250 landmarks (in red), 50 bridges (in orange), a dozen areas (in blue) and a few borders (in green).
Overview of the annotated labels
It is nice that with a printed map, you can just get closer to read the labels. You zoom in and out with your own body. This is what it looks like from close. And this is just a tiny tiny part.
Zoom on a tiny part of the map: about the brain, and AI/algorithms, like everything else in this network.
A big part of our time has been dedicated to finding the labels of the landmarks. We invented the method while doing it, and it deserves a separate discussion, so I will not develop it here. But in short, we read a sample of the corresponding abstracts, and we ask how algorithms and AI are involved. Each landmark has a longer description in our spreadsheet, but we only have room for a fairly short label. So the landmarks are not just about the words; we look into the articles to understand how AI/algorithms are involved.
My focus in this post is the manual annotation, and why we do it that way. Indeed, there is a simpler way to bring context to the network: we could apply community detection and code the clusters obtained. See below our network colored by clusters from Infomap (a clustering algorithm) in Gephi. We could document what each color is. In pink, on the right, we find health and medicine, for instance. That is a valid option. But we have a good reason to put in the extra effort.
The same network in Gephi. Colors: clusters obtained from Infomap.
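For reference, the Infomap algorithm is also available as a Python package, so this simpler route can be scripted; a minimal sketch on a toy edge list below (not our actual setup, and the exact calls may vary across versions of the infomap package).

```python
import infomap  # pip install infomap

im = infomap.Infomap("--two-level --silent")

# Toy edge list; in our case, nodes would be the 7.5K expressions.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
for source, target in edges:
    im.add_link(source, target)

im.run()

# One module id per node: the "colors" of the clusters.
print(im.get_modules())  # e.g. {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 2}
```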
Our ultimate goal is twofold. We want to account for how AI, algorithms and machine learning are involved in science, and we want to make it accessible and understandable to a wide audience. This audience consists of academics and non-academics, for instance journalists. We want this network to be part of an atlas readable by people not trained in network analysis. The output is not just a paper. We aim at something visually explicit. Therefore, we optimize our method for visual affordances. We want to get the most out of the visual medium.
The visual medium is at its best when a place (an area, a locality) in the image tells something clear about the represented thing (the network). This is why most of our annotations consist of landmarks: a red cross that implicitly applies to the space around it.
We use landmarks instead of hulls or other delimiting shapes because the clusters do not always have clear boundaries. See the cluster about cryptography below: it has distinct subclusters that we detected with an algorithm and annotated separately, but there is also a real blending between these topics in the literature, which is why they are merged together (topologically as well as visually). Using landmarks allows us to account for this continuity. The ambiguity is an empirical reality that we need to account for. We think that it is more accurate not to add arbitrary boundaries when there are none in reality. In this case, we can leverage the visual medium in a productive way.
Cluster about cryptography.
However, the visual medium does not have much affinity with nonlocal structures, and that is a problem when it comes to accounting for this network. Nonlocality is not an intuitive notion and is hard to explain, because it works differently from our everyday experience of the world, but let me take a shot at it quickly. It is like the fourth dimension, but worse, because even high-dimensional spaces are Euclidean, and nonlocal structures are not. The simplest nonlocal structure is a bridge between two clusters. We call a cluster “local” because being close in the map means being connected in the topology (and vice-versa). It does not have to be an absolute rule: a good-enough correspondence between the topological structure of the network and the 2-dimensional space of the map suffices. That is why we can read it. We can infer the structure from the map space: what is drawn, and where it is drawn. From a visual cluster, we can infer a topological cluster. But bridges are topological clusters that cannot be represented as visual clusters because they are nonlocal. They connect two local structures (topological clusters) that are only local separately, but not together. The bridge is at the same time in different “localities” (because the clusters are distinct) and in the same “locality” (because the bridge is connected). The notion of “local” does not help us understand such a structure. And unfortunately visualizing on a 2D space requires locality, because either you draw things in the same place, or you do not. You have to commit to one or the other.
A simple example of a bridge, with planted partitions. The bridge is somehow in two distinct places at the same time; it is nonlocal in that sense.
Bridges are the reason why it is useful to display the edges in addition to the nodes. The long edges show nonlocal structures: connected nodes that have been split apart by the layout algorithm to preserve locality where it is possible. Do you see what I mean? If not, that is not really a problem. The takeaway is that reducing the topology to a set of clusters only accounts for a part of the structure, because of the bridges and other nonlocal structures. And at the same time, we can compensate for this loss of information by using a number of tricks, for instance drawing the edges, and emphasizing the bridges. The more general question is then: how to assess the situations where clusters are not enough to capture the structure, and how to annotate to account for it?
We annotate manually because we want to look the edge cases straight in the eye. We face the problems of representing this network as a map, and we look for solutions. So for the rest of this post, I will showcase the weird parts of the map, explain what makes them problematic, and how we dealt with them.
Simple bridges
Let me start with a simple bridge. The bridge in the image below connects the social science cluster (it has a lot of landmarks such as “text classification” and “optical character recognition”) with a small cluster about audio technologies, with two landmarks: “music genre recognition” and “application to cochlear implants”. We labelled the bridge itself “language”. We had to look into Gephi to understand it. It turns out that when machine learning is used to model language, it is sometimes about the written language and the spoken language (audio) at the same time, which is why we find a number of papers that mix vocabularies that otherwise tend to appear in distinct papers, such as “sentences” and “speaker”.
Focus on the “language” bridge.
The bridge is a real structure; it is a cluster from a topological standpoint. But we cannot represent it as a locality in the map without breaking the fact that the two other clusters are distinct localities. And by the way, for feasibility reasons, we decided to commit to the layout at this stage, so we would not change the node positions anyway. So far, we have decided to represent the bridge as a thick highlight with a label. We picked orange because it is close to the red of the landmarks: bridges are, in fact, much the same thing as landmarks, except that we cannot represent them as a place.
Artifacts
Some clusters were artifacts of our method. The algorithm we used picked up on expressions that were noise in the abstracts. We had to annotate them anyway, because they are visible in the map. We keep them, but annotate them accordingly. I just wanted to mention that.
Those clusters are artifacts of our method.
Artifact bridges
We ruled out a number of bridges as artifacts. The pattern generalizes very well, so let me give a typical example. Below, the cluster about signal processing (left) and the cluster about electricity (right) seem connected through the expression “modulation”. We can verify it by selecting it in Gephi.
The two clusters are connected through a single expression, “modulation”.
But as we can also check in Gephi, the two clusters have exactly zero co-occurrences. Basically, there are no papers using typical words of the signal processing field together with typical words of the electricity field. There is no cluster overlapping the two that could constitute a bridge.
The left cluster is highlighted, and we see that its neighbors are not in the right cluster: there is no real bridge.
In short, the explanation is that the word “modulation” is used in both fields, but possibly in different ways. The word has multiple uses, and possibly multiple meanings. This phenomenon happens all over the place in this network. That is just how language works. We classified those as “artifact bridges”.
The artifact bridges are visible in the map, as the edges are salient. So not every set of convergent edges is necessarily a “real” bridge. We always double-check, and our criterion is that the clusters have direct links aside from the bridging node. There are very few edge cases.
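For the record, that double-check can also be scripted rather than clicked through in Gephi. A rough sketch below, assuming the two clusters are given as node sets and the candidate bridging node is known; the node and cluster names are made up for illustration.

```python
import networkx as nx

def is_real_bridge(G, cluster_a, cluster_b, bridging_nodes):
    """Our criterion, roughly: the two clusters must share direct edges
    that do not pass through the candidate bridging node(s)."""
    direct_edges = [
        (u, v)
        for u, v in G.edges()
        if ((u in cluster_a and v in cluster_b) or (u in cluster_b and v in cluster_a))
        and u not in bridging_nodes
        and v not in bridging_nodes
    ]
    return len(direct_edges) > 0

# Toy example: two triangles linked only through the node "modulation".
G = nx.Graph([(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6),
              ("modulation", 1), ("modulation", 4)])
print(is_real_bridge(G, {1, 2, 3}, {4, 5, 6}, {"modulation"}))  # False: artifact bridge
```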
Bridges with nodes in the middle
For sure, bridges are always made of nodes, because even edges are made of nodes. But those nodes are not always in the clusters at the two ends of the bridge. Sometimes, there are nodes in the middle, often in addition to the direct edges from one cluster to the other. When that happens, we try to have the bridge go over those nodes. Here is an example.
We included nodes in the path of the bridge “metamaterials applications”.
Another example below. In this case, we tried to fix the bridge by adding some blue (the background color, more or less) so that it keeps looking like a path. Keeping bridges path-shaped can be a challenge, as we will see.
Another bridge where we included nodes. We decided it after the fact, and we had to paint out part of it with a blue pen.
Does it sound like a good idea to you? I ask this because it comes with problems. If we decide that bridges must include the nodes they pass over, then we also have to avoid the nodes that do not belong to the bridge. The bridge is a topological cluster, as you know, so we can determine which nodes belong to it and which do not.
We decided to go for this strategy nevertheless, and try to avoid passing over nodes that are not in the bridge when we can. It strongly shaped how we annotate bridges, but we think that it is still better.
Bent bridges
It happens quite often that an entire cluster is on the direct path of a bridge. When that happens, and insofar as we have enough space to do so, we bend the bridge to avoid confusion about where it starts and ends. The area below is very dense in bridges, and we had to bend many of them. I find it quite appealing, as an unexpected bonus.
This area is rich in bent bridges (health and medical science).
Keeping bridges in check
We sometimes had to annotate that a bridge does not own certain nodes. Finding a path for a bridge can be tricky sometimes, and we had to compromise. We used the blue pen in the examples below. We had to clarify some bridge intersections. The first bridges are easy to draw, and the last ones complicated, because the space gets crowded in places.
Breadcrumbs
Some of the bridges we found had so many nodes in the middle that we informally called them “breadcrumbs”. The pattern is very visible in the map, even without the edges displayed.
This bridge, “renewable energy storage”, consists of this whole trail of nodes. We called this pattern “breadcrumbs” because you can easily follow the bridge from it.
This pattern forced me to realize that bridges always consist of nodes, while I was tempted to think of them as edges. Below is another example of a breadcrumb we annotated first as a bridge, before we found out it was a topological cluster found by our clique percolation algorithm, which prompted us to also annotate it as a landmark.
Breadcrumbs may be small clusters. In this case, we used a landmark.
Bridges into nothingness
It happens sometimes that one of our actual bridges ends up on an artifact bridge. You can think of it this way: a small cluster is stretched between another cluster and a single node that connects to a distant cluster because it is polysemic. The small cluster will look like breadcrumbs because it is so stretched, so we cannot put a landmark, but we can annotate it as a bridge. Yet, there is seemingly nothing at the end of the bridge because it ends, in fact, at an artifact bridge. I struggle to explain! Anyway, find two examples below.
Some bridges look like they lead to nowhere. There are two in this map. This is because they end on artifact bridges.
Thick bridges / stretched clusters
In some cases there are so many nodes in the middle of the bridge that it may look almost like a cluster. When this happens, we have no other choice than to make the bridge as thick as necessary. Sometimes, it can be very thick, and weird, as below.
A thick bridge: aqueous solutions analysis and operations. It is less dense than a cluster.
You may think of this bridge as a stretched cluster. We certainly do, as we do for every bridge. Bridges are clusters in the topological sense. We find them by clique percolation, for instance (in addition to seeing them in the map). What makes them a bridge and not a cluster is the fact that they are so stretched that there is no single place to put a landmark. Because they spread over a large area, and/or because they are not dense enough. Below is another example where the bridge is super thick and slightly less dense than a cluster, but not very far from it. I did not have enough space to write my label!
This bridge is almost as dense as a cluster. It’s “hemodynamics” but I did not have much room for the label.
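As a side note, clique percolation is available off the shelf. Here is a minimal sketch with networkx’s k_clique_communities, on a toy “breadcrumb” chain rather than our actual network, to show how a stretched structure still comes out as one topological cluster.

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# A "stretched" chain of overlapping triangles, like a breadcrumb bridge:
# (0,1,2), (1,2,3), (2,3,4), ... each triangle shares an edge with the next.
G = nx.Graph()
for i in range(8):
    G.add_edges_from([(i, i + 1), (i, i + 2), (i + 1, i + 2)])

# With k=3, adjacent triangles (sharing k-1 = 2 nodes) percolate together,
# so the whole stretched chain comes out as a single topological cluster.
print(list(k_clique_communities(G, 3)))
```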
We already know that we will rework our annotations to account for this continuity between visual clusters and bridges. There is a whole spectrum of in-between cases, which suggests that having one flexible way to annotate them might be better than two distinct ones (the red landmark and the orange highlighting).
Branching bridges
As bridges are made of nodes, there is no guarantee that they behave well and follow a path. Sometimes the topological cluster branches out. We have a bunch of two-legged bridges, for instance.
A two-legged bridge
But it can get much more complicated. We followed the topology as far as we could, and tried to represent such branching bridges as faithfully as possible. A pretty intricate example below. There is a lot going on… That is just how it is!
An area with multiple branching bridges.
An even more bizarre example below. Honestly, this one was so hard to grasp that we were reaching the limits of our ability to annotate. We just did what we could, and it’s not great.
A very intricate area that we could barely annotate.
Impossible to annotate
I will end this post with this. There is an area in this map that we failed to annotate. It does not look very threatening, but in short, it is so nonlocal that we could not do anything with it. Here it is.
An area impossible to annotate
Let me explain. There are a bunch of nodes floating in the middle of this picture. Those are many things at once. First of all, they are not connected together, so in that sense, they are not a cluster, even though they occupy the same space. In fact, they belong to 3 different quasi-bridges, very sparse, each of which connects to a different pair of clusters, and sometimes no cluster at all. One cluster is almost an artifact, and consists of heavy metals. The corresponding papers do not have much in common besides the fact that heavy metals are listed in the abstract. Another cluster is about pollutants and water solutions, but not heavy metals. The nodes do connect to those around them, and they fit the loose theme of that area (chemistry and materials) but not in a single, identifiable way. Here is what it looks like in Gephi.
The problematic area in Gephi
Next steps
Our next steps will be to redo the annotations in a cleaner way, possibly improving our visual language, and to have experts of the different fields represented comment on the map. One of those experts might hold the key to better understanding parts of the map.
The situation at stake is the following: we, social science and humanities (SSH) scholars, use a method from another field, but we do not use it the way it was designed to be used. For instance, we do topic modeling, but only as a shortcut to categorize documents manually. More generally, we use predictive models but we do not predict, and we do not believe that the assumptions baked in the model are appropriate. For example, we use community detection, but we do not work with the communities obtained as if they were communities. We disregard some of the features the algorithm provides, while we leverage some of its side effects. Is this good science? Or are we the baddies?
Martin Grandjean was visiting us at the Tantlab for a few months in 2021, and before he left, Anders Munk and I had a long discussion with him to prepare the writing of a paper on our shared interests: network analysis, epistemic cultures, and knowledge technologies. This long-overdue post basically consists of my notes, to pin down some elements of our discussion. It focuses on networks, and the practice of repurposing algorithms in digital methods.
We routinely see people interpret network maps in a self-evident mode, that is, as if they had no epistemic commitments. As if looking at the picture was sufficient to understand the network structure. But of course, certain competences are required to understand what is going on in the picture. See an example below. These self-evident interpretations raise at least three kinds of questions.
What happens when you look for conversations about vaccines on Twitter. Nodes represent individual users. Clearly, several communities and conversations emerge.
First kind of question: Is the network structure visible or not? This leads to questioning what the network structure is, and what makes it visible. I have a lot to say about that, but I still see it as an open question.
Second kind of question: Why is the practice of self-evidence commonplace? There are obvious answers, for instance: some may believe that the network structure is directly visible; that there is no mediation. It may seem obvious that beginners could get tricked into self-evidence, because they lack training, and/or they are careless. But the obvious answers are not always right, especially when it comes to cultures. Let us refrain from doing armchair anthropology. What do we really know of the beliefs of these people? We can actually look into these practices and investigate their purpose. What do they provide to those who enact them?
Third kind of question: Is this practice bad? …and if so, in what sense? The answer depends on the answers to the previous questions. The simplest hypothesis is that the network structure is not visible, yet people are tricked into believing that they see it. In that case, the picture does not properly refer to the network structure, so the argument is invalid, and it is bad science. The circulating reference is broken, the signifier does not refer to the signified anymore. The simplest possible answer to this question is that network maps are misleading.
The problem, again, comes from the fact that tools such as Gephi have made network analysis accessible to broad audiences that happily produce network diagrams without having acquired robust understanding of the concepts and techniques the software mobilizes. This more often than not leads to a lack of awareness of the layers of mediation network analysis implies and thus to limited or essentialist readings of the produced outputs that miss its artificial, analytical character. A network visualization is closer to a correlation coefficient than to a geographical map and needs to be treated accordingly.
Rieder and Röhle (2017)
Community detection
The same question can be asked about community detection. This technique works a bit like layout algorithms, insofar as it translates the topological structure, but instead of providing node coordinates, it provides groups of nodes (the “communities”). There are different ways to build the groups, depending on what one means by “group”; there are different techniques for community detection. I will present two, but let us consider first what people do with those groups.
If your network is big enough, there are too many nodes and edges to analyze them individually. Having groups is an invaluable convenience, as it offers a reduced set of things to talk about. Node groups reduce the network to something we can analyze more efficiently. Of course, now we have to deal with where those groups come from. We have to justify them. But on the other hand, we can now assess those groups qualitatively and quantitatively. We can measure their properties. In that sense, groups are a coding of the network. A reduction that we can assess and use for analysis.
There are as many ways of assessing the coding (the groups) as there are research designs. You could for instance measure intercoder reliability, a well-codified technique of qualitative analysis that can be calculated in different ways. You could benchmark the groups against ground truth(s), if you have such empirical information. You could also measure the properties of your groups. For instance, if you expect them to be assortative (more connected within each group than across), you could compute the modularity of your groups, and compare it to other ways of making groups. The relevant validity criteria depend on the role of the groups in your analysis.
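As a side note, that modularity check is a one-liner with networkx. A quick sketch below, on a toy graph with two competing groupings (the karate club network and its known factions, purely for illustration).

```python
import networkx as nx
from networkx.algorithms.community import modularity

# Toy network and two competing ways of grouping its nodes.
G = nx.karate_club_graph()
grouping_a = [set(range(0, 17)), set(range(17, 34))]          # an arbitrary split
grouping_b = [{n for n, d in G.nodes(data=True) if d["club"] == c}
              for c in ("Mr. Hi", "Officer")]                  # the known factions

# Higher modularity = more edges inside groups than expected by chance.
print(modularity(G, grouping_a), modularity(G, grouping_b))
```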
Let me sketch three examples. In the first one, the groups are not used in the analysis; in the second, they play a minor role; in the third, they play a major role in the analysis.
First case, you just want to be able to refer to some parts of a network map in the text. The example below maps a discussion about rewilding (in short, the reintroduction of wild animals) by technoanthropology students (Anders and I teach them controversy mapping). Nodes are expressions connected by co-occurrence in Facebook posts, in Danish. The discussion has been analyzed qualitatively, but the map helps to communicate the analysis. The colors come from community detection. In this case, the blue cluster is about emotions (they named it “pathos”) while the orange cluster contains the expected expressions related to rewilding. Having colored groups allows us to guide the reader’s attention to certain parts of the map, for instance: The blue group connects to the orange group through nodes like “dyr bag hegn”, which means “animals behind fences”. The students know, from their reading of the empirical material, that the mention of animals behind fences happens to mobilize strong emotions in Facebook posts. They have many other ways than this network to make that case, and they proceed to do so. Yet it helps to visualize where emotions (blue group) are connected to rewilding (orange group), and to check which other concerns may also play that role. The map was exploratory, and sharing it allows the reader to retrace that exploration, from the visualization to the empirical material.
Second case, you have a ground truth that you need to simplify. The example below represents airports and the airlines connecting them, in 2021. We know the country of each airport, but there are too many countries. If we use the country as the group and we assign it a color, we obtain the image on the left. If we use community detection, we obtain the image on the right. There are far fewer colors. The big red group happens to be Europe: although it has many countries, it appears as a single group because the countries are highly interconnected. The “communities” found are not exactly groups of countries, but it works well enough to be used as a basis for the analysis, for instance by measuring which of these macro-groups are better connected.
Network of airports connected by airlines in 2021. On the left, colors represent countries. On the right, colors represent groups of nodes found by community detection. The red cluster is Europe: although it contains many countries, it has the structure of a unified ensemble.
Third and last case, the groups are a primary goal of the analysis. Check for instance this recent paper (John et al., 2021): their goal is to identify groups of people from their mobility patterns to profile them in further analyses. Community detection is a key step of the research design, an obligatory passage point of the method. In that case, obviously, the methodological commitments of the community detection technique employed contribute to determining the meaning of the groups further analyzed. Communities are literally modeled, following a number of assumptions.
Do you see communities?
Tiago Peixoto, who made decisive contributions to the science of community detection, happened to visit us while Martin was there. Tiago showed us a case that he later defended on his blog and on Twitter. I have written about it in a previous post. His post contains a provocative picture that I struggled to understand. I will unpack this case because it allows pinpointing the gap between the standpoint of algorithm designers (like Tiago) and that of the scholars who use their algorithms (in this case, Martin and me).
Tiago compares two different approaches to community detection, and I need to explain that real quick, in my own words. The first technique is called “modularity clustering”. It is the older of the two, and it is popular, notably among Gephi users. In short, it tries to find groups that optimize a certain metric, modularity. It’s too costly to find the absolute best, but we can get close thanks to a few approximations. A high modularity means that most of the links are within groups, not across. The second technique, developed by Tiago, uses Bayesian inference. It gives you the groups that are the most likely to fit a model.
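To make the contrast tangible, here is a rough sketch of how one could run each approach in Python. Networkx’s greedy modularity maximization stands in for modularity clustering (a different algorithm from Gephi’s, but the same objective), and graph-tool’s minimize_blockmodel_dl for the Bayesian inference; the toy graph and exact options are assumptions, not a recipe.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
import graph_tool.all as gt  # Tiago Peixoto's library

edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]

# Approach 1: modularity maximization (standing in for modularity clustering).
G = nx.Graph(edges)
print(list(greedy_modularity_communities(G)))

# Approach 2: Bayesian inference of a stochastic block model.
g = gt.Graph(directed=False)
g.add_edge_list(edges)
state = gt.minimize_blockmodel_dl(g)  # most likely partition under the model
blocks = state.get_blocks()
print([blocks[v] for v in g.vertices()])
```

On a toy graph this small, the inference will often refuse to split the nodes into groups at all, which already hints at the disagreement discussed below: it only reports groups it can statistically justify.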
Do these two approaches sound very different to you? If not, bear with me. The difference will appear shortly. Tiago proposes the image below. He asks: do you think there are communities, or not? Look at the network in the image, and remember your answer.
Tiago’s argument goes as follows (in my own words). At first glance, it looks like there are communities in this network. And indeed, if you run modularity clustering, it finds communities (check the colors on the left). However, the communities are not real. Indeed, Tiago generated this network from a model that has no notion of community whatsoever. Instead, it just requires that 13 nodes have 20 neighbors, and 230 nodes have 1 neighbor. So the nodes in each of the detected groups have nothing more in common with each other than with the other nodes; all of the nodes are by definition on an equal footing, despite the particular configuration generated in this specific case (see image below).
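For the curious, this setup is easy to approximate: the recipe is just a degree sequence. A minimal sketch with networkx’s configuration model below; the details of Tiago’s actual generator may differ.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# 13 hubs with 20 neighbors each, 230 leaves with 1 neighbor each.
degree_sequence = [20] * 13 + [1] * 230   # the sum is even, so it is realizable
G = nx.configuration_model(degree_sequence, seed=42)
G = nx.Graph(G)                            # collapse parallel edges
G.remove_edges_from(nx.selfloop_edges(G))  # drop self-loops

# The model knows nothing about communities, yet modularity clustering
# will happily find some in the particular network that comes out.
print(len(list(greedy_modularity_communities(G))), "communities found")
```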
My initial reaction was: I do see the communities, so how can you argue that they are not real? Maybe those communities are only specific to the network generated that specific time, OK. But it only means that the generator gives you different communities every time; it does not mean that those communities are not real. But Tiago did not agree with that.
A different phrasing helps find a common ground and situate the disagreement. By “I do see the communities”, I mean that if I met such an empirical case, I would describe it as having a community structure. It may mean, for instance, that I could cut just 14 edges and separate the network into 13 pieces of roughly equal size. This criterion boils down to having a good modularity score. Tiago calls that a description, fair enough. Modularity clustering is descriptive. It applies to a situation where the network is empirical. Obtained from the field, as we say.
In comparison, Tiago’s situation is not empirical. His network is generated from a model. By “there are no communities”, he means that the likelihood that the found groups play a role in the connections is low. Which is a given, since the model has no groups to begin with. Still, the argument holds; but let me explain in a different way. The model is like the rules of a game. Let me give you a simple example. We are given 2 groups of 5 nodes, and the game is to decide which nodes are connected. The rules tell us how the groups impact the chances of being connected. For each pair of nodes, we roll a die. If the nodes are in the same group, we connect them on a roll of 2 or more; otherwise, only on a 6. We play the game, and we get edges that depend on the groups (and on the rules!). The Bayesian inference algorithm for community detection helps us play the game backwards. We have the edges, and we must guess the groups that generated them. But crucially, we must also know the rules (the model). Given the edges and the rules, it gives us the groups that are the most likely. In fact, we could even propose groups, and it would tell us how likely it is that those groups were the ones used for the game. By “there are no communities”, Tiago means that the groups obtained are no more likely than any other distribution. He also argues that the model does more than describe: it explains, although “explain” has a narrow definition in this context.
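If the game metaphor stays abstract, the forward direction can be written down directly. A small sketch of the dice game above; the backwards direction, inferring the groups from the edges and the rules, is what tools like graph-tool implement.

```python
import itertools
import random

random.seed(1)

# Two planted groups of 5 nodes each.
groups = {node: 0 for node in range(5)}
groups.update({node: 1 for node in range(5, 10)})

edges = []
for u, v in itertools.combinations(range(10), 2):
    roll = random.randint(1, 6)
    if groups[u] == groups[v]:
        connected = roll >= 2      # same group: connect on a roll of 2+
    else:
        connected = roll == 6      # different groups: connect only on a 6
    if connected:
        edges.append((u, v))

print(edges)
# Inference plays this backwards: given only `edges` and the rules,
# find the group assignment that makes these edges the most likely.
```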
I find the description/explanation dichotomy too self-serving, for two reasons. First, it suggests that descriptive techniques cannot explain, yet I contend that they may contribute to explaining by feeding into other methods. Modeling is far from the only means to provide an explanation. Second, when you get an empirical network, it never actually fits a model. The processes that exist in the world are never as simple as the rules of a game. “All models are wrong, but some are useful”, as the saying goes in statistics. If the explanatory powers of modeling cannot exceed the justifications of the models, and if those are weak, then models only explain in theory, not in practice… yet modeling is useful. There is no doubt about it. The question is: what do researchers actually do with modeling techniques?
I helped Tiago implement a version of his Bayesian inference algorithm in Gephi. This version is based on the simple assumption that each node belongs to exactly one group, and that nodes are more connected within a group than from one to another. These assumptions are reasonable, but one cannot take them too seriously. The model is obviously unrealistic: no person belongs to a single social circle, no word to a single topic, etc. Yet it is useful because most of the time, we want each node to have exactly one group. Possibly for pragmatic reasons, like the necessity to visualize groups as colors, or because we use groups as a reduction, a simplification. Those are good reasons. We want a 1-group model not because we believe that it is how the network was generated, but because our research design demands it. In that situation, the usefulness of Bayesian inference is not about its explanatory powers. We cannot take for granted that the usefulness of an algorithm depends on the usage prescribed by its designers.
Misuse versus off-label use
Algorithms can be repurposed; they should be reappropriated; yet they can still be misused. The problem is to differentiate between those situations. Who gets to tell the misuse from the reappropriation? I am reluctant to be normative on that matter, and this post is long enough. So I will now explore a different direction, and engage with a topic that is faithful to the discussion we had with Martin and Anders: off-label use.
Off-label use is an expression you will primarily find in the context of drugs and medicine. It mainly refers to the widespread practice of using a drug in an unapproved way. Let us extend this notion to every technology, and refer to any unapproved use. The notion just assumes some degree of normativity, and a practice breaking that norm.
Off-label use takes a different meaning depending on the normativity. The pharmaceutical version of off-label use is often about what the health authorities have determined legal or not. The norm is set by the state. But in other contexts, the norm might be set by the manufacturer, cultures, society at large… and those are not mutually exclusive. In the case of algorithms, we should at least consider what is prescribed in the paper(s) defining it, and the culture of the field.
Let me explore a few examples of off-label use. My goal is to provide analogies as food for thought, but also to show that off-label use is more common than it may seem. I want to make it clear that off-label use is legitimate insofar as it consists of using something for what it is rather than what it is supposed to be.
Nitrous oxide. “Commonly known as “laughing gas”, this odourless substance is used in medicine, as an anaesthetic, and in catering to make whipped cream. It is the whipped-cream chargers that people buy for recreational use. The gas is usually inhaled by discharging a canister containing small amounts of the gas into a balloon.” (The Conversation) Anecdote: I knew those canisters only as a cooking technique, and wondered for a long time why people seemed to throw them away on the streets.
Canisters of nitrous oxide.
Sildenafil, better known as Viagra, “was initially studied for use in hypertension … and angina pectoris … Phase I clinical trials … suggested the drug had little effect on angina, but it could induce marked penile erections. Pfizer therefore decided to market it for erectile dysfunction, rather than for angina; this decision became an often-cited example of drug repositioning.” (Wikipedia). The off-label use became the intended use.
Ikea’s BEKVÄM spice rack is simple and inexpensive, as any spice rack should probably be. But it can hold more than spices. People started buying it as a bookshelf for kids. Then it was discovered that if you hang it upside down, it also allows you to hang a number of things, like jewelry or a towel. These uses became so popular that Ikea now also showcases the spice rack as a bookshelf.
Jimi Hendrix was left-handed, but used to play a right-handed guitar held as if it were a left-handed guitar. As a result, the affordances of the instrument are upside down: the buttons and the switch are under your arm instead of at the tip of your fingers, the tuning keys are far from you instead of close… an ergonomic nightmare. Now, who would dare state that Jimi Hendrix was not playing the guitar appropriately? The history of musical instruments is full of off-label uses that became mainstream because they defined the sound of popular artists. Most guitar pedal effects, in fact, started as repurposed artifacts (distortion, vibrato, delay…).
In conclusion
I don’t think we are the baddies when we repurpose algorithmic techniques borrowed from other fields to do social sciences and humanities. We have different goals, different methodological commitments, and we have the right to reclaim those techniques for ourselves. This is not inherently bad science.
De facto, we are doing it. I want to frame it as off-label use. We use these techniques for what they are rather than what they are supposed to be. We disagree with the norm because our situation is different. For instance, in the case of community detection, we do not model; yet we may use modeling. We use it as a way to produce a reduction, which it functionally does. We are not misunderstanding the algorithm; it actually performs what we expect, even though this is not what the designers intended.
That being said, normativity also protects against misuse. Although I reclaim the right to use techniques off-label, I also acknowledge that it requires doubling down on assessing the algorithm to ensure that it actually does what we think it does. Off-label use comes with increased risks. It is not inherently bad science, but it exposes us nevertheless. Let’s not become the baddies.
References
Rieder, B. & Röhle, T. (2017). Digital Methods: From Challenges to Bildung. In M. T. Schäfer & K. van Es (Eds.), The Datafied Society: Studying Culture Through Data (pp. 109–124). Amsterdam: Amsterdam University Press.
John, E., Cauthen, K., Brown, N. & Nozick, L. (2021). Detecting Communities and Attributing Purpose to Human Mobility Data, 2021 Winter Simulation Conference (WSC), pp. 1-12, doi: 10.1109/WSC52266.2021.9715396.
I do not give an answer. I report who says what, where the concern comes from, and I show how you can look for yourself. I will unpack how and why AI models somehow recorded artist styles. In particular, I will look into the data where all of this comes from. And just so that you know, in that part I will show you pornography (a warning will precede it).
This is about a kind of apparatus that generates images from a text prompt: DALL-E, Disco Diffusion, Midjourney, Stable Diffusion, Imagen… Those devices are different but share the same general technical premises and fulfill about the same tasks, so let me call them a technology. It lacks a stabilized name though, and since I must commit to one in this piece, I pick “text to image“, abbreviated as T2I. This post is about the T2I technology, artists, and how the former will allegedly change the latter’s life, or not. If this is all new to you, then just watch this awesome 13-minute video by Vox. It summarizes the issue perfectly.
“You can copy an artist’s style without copying their images, just by putting their name in the prompt.”
Give a text prompt to a T2I tool, and it returns images to you. I have previously documented the process of building prompts. Your prompt may ask to render the image in the style of a given artist, and the tool will oblige. It works better for certain artists than others. I am interested in the most convincing cases. Here is an example I find telling: the art of Simon Stålenhag. You will find his work below (check his website for a better view) next to images returned by T2I tools prompted for his style. Look at them, compare them visually. Do they look similar to you? If so, why? Can you tell the difference between the man-made images and the T2I output?
A screenshot of Google Images asked for “Simon Stålenhag art”. This is what Google Images knows about his art. Also check his website for a better look at his work.
OpenAI DALL-E 2 prompted for “a painting by Simon Stålenhag”.
Disco Diffusion 5.6 prompted for “A beautiful painting by Simon Stålenhag, trending on artstation”.
To me, those images have a clear family resemblance. I would characterize them as wide shots of an imposing monument or structure looking alien or technologically advanced, standing out at a distance in the misty wilderness, often with one or a few human beings using 20th century technology (cars, clothes…), rendered with a mix of realism and oil-painting textures, with muted colors and a few bright accents. Wikipedia summarizes Stålenhag’s style as “a stereotypical Swedish landscape with a neofuturistic bent”. I can tell that an image has been generated by a T2I tool, but importantly, I can also guess whether Simon Stålenhag was the artist used in the prompt. I have seen other people guess it too on social media (unfortunately I could not retrieve any sources). And I am not the only one to find his style remarkably well captured by T2I models (compared to other artists).
At this point, I want to apologize to Simon Stålenhag. He is tired of hearing about this AI stuff. I am sorry to add a layer to this. I still have to, because his case is excellent for what I write about. Not only because T2I “is crazy good at replicating [his] style”, but because he is also involved in at least three important aspects of the discussion. First, he does not care about having his work absorbed by the T2I models, or his name being a popular prompt modifier. Second, some other people try to speak for him as if he took issue with the T2I technology, or try to enroll him as an ally in their fight against it. Third, he gets tired of all this social media activity. See for yourself in the tweets below.
Not only do I agree, I also don’t really see any problems with having my artworks being used as a component by these algorithms, a concern that I have heard other people raise regarding AI generated imagery. The same way I wouldn’t mind it being used in collage art. https://t.co/OA8HkLrVnU
Simon Stålenhag does not take issue with how his work is used in T2I models (May 2022).
A now-deleted Twitter thread where Simon Stålenhag is erroneously painted as someone who “is likely to sue first for copyright infringement” by another Twitter user, Andres Guadamuz.
Also – "most likely to sue" … A total of zero (0) times have I felt the urge to sue someone in my life, so the idea of people having this image of me being some art baron with my legal team on speed dial gives me insomnia.
Simon Stålenhag’s response to the thread, making it clear that he has no intentions to sue anyone over this, and dislikes being portrayed as such (August 2022).
I truly apologise and I have deleted the offending thread. I chose you precisely because your work is so identifiable, and I wanted to see if the systems was trained on your work and would reproduce it. I've been writing about this for years and never got any attention.
The response of Andres Guadamuz, who initially tried to enroll Simon. Andres now states that he chose Simon because his work “was so identifiable” (August 2022).
Now, some people do take issue with T2I technology absorbing artist styles. But I have yet to find an actual artist complaining about getting ripped off themselves. What I observe instead is other people getting upset in their stead. The artists themselves seem either nuanced, or willing to embrace the T2I technology, or sometimes indifferent. In the Vox video mentioned above, James Gurney, a renowned artist often used in prompts, does not complain about his style getting absorbed by the DALL-E model. He only states that “the artist should be allowed to opt-in or opt-out of having their work, that they worked so much on by hand, be used as a dataset for creating this other artwork.” In the same video, Vanessa Rosa, artist and art historian, mentions that she has “heard of other artists who got actually extremely upset”, but does not mention them. But are the upset artists those who had their own style absorbed? In the companion video to that above, consisting of additional interview material from various people, we find no mention of style absorption ripping off artists. Ted Underwood, a professor in machine learning and literature, just says that artist names “are really powerful sort of magic words in this model.” Rob Sheridan, an art director, just comments that “everything in art is inspired by something else. … This just … puts a very crass, fine point on it.” And Mario Klingemann, the famous artist at the forefront of AI art, says this:
“It’s a bit unfair, of course, because some people took, I don’t know, years, decades to perfect their style and find their niche. And now all it takes is to put their name in the prompt, and then you can just have the shortcut and go on from there. … ‘Good artists copy, great artists steal.’ And that’s kind of exactly what it is, like, a lot of artists have ‘gotten inspiration’ from some unknown, whatever, other artist or so and never tell. … Art is not like science where you have to cite all your sources.”
And then, there is the Twitter thread below by RJ Palmer aka @arvalis, a concept artist. He takes issue with the T2I technology as an artist, but as far as I know, not as one who had their own style absorbed.
What makes this AI different is that it's explicitly trained on current working artists. You can see below that the AI generated image(left) even tried to recreate the artist's logo of the artist it ripped off.
A twitter thread by RJ Palmer aka @arvalis. He writes: “as an artist, I am extremely concerned”, notably by the fact that T2I models are “explicitly trained on current working artists”. He claims that one of those models (Stable Diffusion) “even tried to recreate the artist’s logo of the artist it ripped off” which is “anti-artist”. August 2022.
There are a few things to unpack here. Let me start by observing that there are two distinct arguments at play: style absorption rips off artists, and T2I will steal their jobs. Let me address the second argument first. The artist community is divided about it. Some artists believe that AI will take their jobs, some believe that it will change the profile of their jobs, for better or for worse, and some believe that it will not change much. The companion to the Vox video features various opinions on that matter. Let me simply acknowledge that many people are concerned about the impact of this technology on the job market, and voice it on social media. Yet what makes RJ Palmer’s Twitter thread stand out is the other argument. The claim made and defended with the images attached is specifically that AI rips off artists by copying their style. Which raises two questions: how linked are the two arguments, and how strong is his case?
The two arguments are weakly linked. T2I tools could disrupt the artist job market without copying styles in particular (I am not saying that they will). My argument, here, is that styles could exist without being attached to artists. Oil-painting style, watercolor style, 3D style… Even in today’s AI art, people use many other modifiers than artist names. We could train a model on data where artist names have been removed, and it would still retain stylistic information. I can imagine such a tool disrupting the artist job market the same way, and it would not involve absorbing artist styles. To be fair, RJ Palmer or other artists may believe that T2I technology is so good only because it absorbed artist styles; but personally, I do not buy that. And I do not think that RJ Palmer does either. Indeed, he frames it as an economic issue: he finds it “gross” that “working artists [get] advertise[d] as styles” by AI companies. So conversely, he can imagine a system where AI companies compensate artists fairly. We can envision a disruption of the job market that is beneficial to artists. Of course this will not happen, but not because it is impossible. No, because the balance of power is completely unfavorable to artists. AI companies have power, money, and do not care at all about them. My takeaway here is simple: T2I may disrupt the artist job market in various bad ways, which is the real problem; style absorption is just a part of it; and fixing it is neither necessary nor sufficient to solve the bigger issue of harmful job market disruption.
Aside from that, is RJ Palmer’s case good? I do not think so, for three reasons. First, he is not himself an artist whose style is getting absorbed. The artist in his example is Michael Kutsche. Second, the similarity of the two pictures is vague to me. The style is not as similar as for Simon Stålenhag, but that is subjective. The signature, however, is really not similar (see below). RJ Palmer may not be aware that such artifacts are common in current T2I technology. Of course, models have learned that good paintings often have a signature, so if you include popular modifiers such as “trending on Artstation”, you will often get such “watermarks” (in the vernacular of prompt writing). But visibly, the model did not try to reproduce this particular logo. Third, RJ Palmer’s point is phrased in a very anthropomorphic way that makes the technology seem more intentional than it deserves: the model has not “tried to recreate” the style of said artist. If we could understand AI in terms of what it tries to do, regulating it would be much easier.
AI signature (left) and artist signature (right). RJ Palmer claims that the AI tried to copy the artist’s signature.
Let me summarize. Palmer’s case is not convincing (1) because he fails to establish that AI copies artists, (2) because he is not the one being “ripped off” himself, and (3) because it boils down to the more general concern of a harmful job market disruption by T2I technology, which is a legitimate concern but not dependent on style absorption. Yet this tweet was repurposed precisely to make the case of artists getting ripped off. It was quoted for instance in this newsletter issue titled “Plagiarism by Machine”, where the author says that some AI companies are “direct about ripping off the style and signature elements of digital artists — to the point where they even try to copy the artist’s logo!” I have read, and you will read, about T2I technology plagiarizing artists, and we will get exposed to the implicit injunction to side with the artist against the disruption caused by Big Tech. An injunction that I am personally inclined to endorse, and so may you; but it holds me back that at the root of this argument, we find no artist actually complaining about their own style getting absorbed by the T2I technology. Of course, a prominent artist might make that case tomorrow. Yet I could also see those renowned artists feeling unthreatened by that technology. Or even, why not, flattered.
Is it legal?
This, as well as basically everything related to authorship and AI, is legally unresolved. You can find a series of framings in Is DALL-E’s art borrowed or stolen? by David Cooper on Engadget (July 2022). It is very instructive. Also note that despite the title, there is no mention of an artist complaining about being stolen.
There are two parts to style absorption. First, the artist data has been harvested, in the form of images with a caption that contains their name. Second, the model was trained on that data, and it abstracted something that we call “style”. I will explain in more detail. The point I want to make here is that the legality of style absorption plays out very differently in these two steps. The data harvested is basically public information on the web. It is the portfolio of artists. In some sense, if you want to be on Google so that your clients find you, then you have to allow crawlers to harvest your images with your name attached and reuse them. But still, that is something we could regulate legally. The other part, however, is where the AI magic operates. There is nothing inherently illegal in training a neural network on a data set. Yet that is where style absorption really happens. You may find it scary and/or fascinating; you would not be alone in that. AI can absorb and repurpose artistic style, although as we will see, there is a lot to say about what “artistic style” means in this context. There is no coming back to when only humans could paint.
AI Artist studies
Let me briefly mention so-called AI artist studies. The name is a bit ambitious for what it actually is, but there is a real effort behind it. In short, it is about rendering the same prompt again and again, changing only the artist’s name, so that you can see how it impacts the result. This project is an attempt at documenting the T2I technology in a systematic way, and it is a major resource for prompt engineering. Here is an example for Disco Diffusion.
Surea.i, the artist at the origin of this initiative, has taken some of the heat against style absorption on social media, although he is not affiliated with any of the AI companies. Generating an image takes some time, and collecting this database required a significant effort (many other people participated). The explicit intent was to give back knowledge to the community, and I personally appreciate and support that mindset. Yet it was interpreted by some as an anti-artist contribution. As a Twitter user commented, the artist studies were “not even inherently pro-AI” (see below). As Surea.i replied, the case of artist style absorption could only be made because it was so well documented in the first place. Using artist names in prompts is a practice that both fed into the artist studies and was nourished by them.
People wouldn't even know they *could* be upset about this if it wasn't for us sharing information that was largely considered to be personal prompt secrets for many others. https://t.co/LcPpdpOzYH
As Surea.i argues, the case of artist style absorption could only be made because it was documented in his AI artist studies. August 2022.
Surea.i was “feeling very sour on AI art”. To which another Twitter user replied: “how hard is it to keep other artist’s names out of your f*cking prompts!” (see tweet below). This reaction surprised me. Is it really about how people write their prompts? What about the tools? What about the models? What about the training data? Sadly, the state of the debate on Twitter tells us more about people’s concerns than about the way AI actors are massaging those concerns with their discourse and tool design. In the rest of this text, I will look into this mess with a bit less innocence.
Let me call “knowledge” whatever it is that makes a T2I tool return something that we recognize as a cat when we ask for one. That thing that makes it “know” an artist’s style. I do not like the personification that this wording implies, but I will put that aside. There are three places where AI “knowledge” can live: the model, the training data, and the tool.
First the model. The simplest case. The knowledge certainly lives there, because we do not need to access the training dataset anymore. That is precisely the point. The knowledge is in the weights of the neural network that associates images with text.
Second, the training data. Knowledge certainly lives there too, because that is where it came from in the first place. Training a model is a big investment (it uses so much computing power that it is incredibly long and expensive) that abstracts the knowledge of the data into something much smaller, the model itself. Running the model is quite easy, while training it is hard. The training reduced and transformed the knowledge, so in some sense it created knowledge too. Nevertheless, a different version of that same knowledge lives in the training data set, in the sense that different data give a different model.
Third, the rest: the apparatus around the model. The argument is less obvious. To be convinced that the model “knows” what a cat is, you need to perform the whole image generation process. If you just look at the model as an array of weights, you cannot understand anything. The knowledge is only ever accessible through a performance in which actual images get generated. Therefore, anything that shapes that performance is also knowledge. For instance, how the prompt is processed. Indeed, T2I systems are always layered (see DALL-E 2’s architecture for reference). One layer is the text encoder that transforms your prompt into a series of weights that the model can read. Another layer is the diffusion process, and it also shapes the output. And of course, the model is the most important layer, but we have seen that already. Each part can be considered knowledge, even the graphical user interface, in the sense that it shapes the output. Does it seem far-fetched? What comes next may change your mind.
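To make this layering concrete, here is a minimal sketch in Python. The function names and the logic are mine and purely illustrative (this is not DALL-E’s actual code); the only point is that the image you get is the product of every layer, not of the model alone.

```python
import numpy as np

# Hypothetical layers of a T2I system. Names and logic are illustrative only.

def moderate(prompt: str) -> str:
    """Tool layer: block prompts matching a (secret) list of banned terms."""
    banned = {"trump"}  # illustrative; the real lists are not public
    if any(term in prompt.lower() for term in banned):
        raise ValueError("Prompt violates the content policy")
    return prompt

def intercept(prompt: str) -> str:
    """Tool layer: possibly rewrite the prompt before it reaches the model."""
    return prompt  # the real rewriting rules are undisclosed

def encode_text(prompt: str) -> np.ndarray:
    """Text encoder layer: turn the prompt into weights the model can read."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.normal(size=512)  # stand-in for a real text embedding

def diffuse(embedding: np.ndarray, steps: int = 50) -> np.ndarray:
    """Diffusion layer: iteratively refine an image conditioned on the embedding."""
    image = np.random.default_rng(0).normal(size=(64, 64, 3))
    for _ in range(steps):
        image = 0.9 * image + 0.1 * embedding[:3]  # toy update, not real diffusion math
    return image

def generate(prompt: str) -> np.ndarray:
    """The performance: every layer shapes the output, not just 'the model'."""
    return diffuse(encode_text(intercept(moderate(prompt))))

image = generate("A portrait of Mona Lisa by Leonardo Da Vinci")
print(image.shape)  # (64, 64, 3)
```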
The different places where AI knowledge lives are not equivalent. Their material differences matter in surprising ways. Here is an example. I used DALL-E 2 to generate an image, and I obtained this. Can you guess the prompt? Try it.
Generated by DALL-E 2. Can you guess the prompt?
You cannot guess the prompt. Let me reduce it to five possibilities:
A portrait of Mona Lisa by Leonardo Da Vinci
dfkljbfdkjb fdkjbkj dfbj
Dckfc slf
Smile
Pic
Is that better? Let me put some blank spaces below while you make a guess.
.
.
.
.
.
.
.
.
.
.
.
.
And the answer is: A portrait of Mona Lisa by Leonardo Da Vinci.
If you are like me, you probably wonder how it could be so wrong about something so famous. Does it even know what the Mona Lisa is? Well yes, but let’s call this a glitch for now. Out of the four images I obtained, three were what you’d expect, and one was this outlier, as you can see in the screenshot below.
DALL-E 2 output for “A portrait of Mona Lisa by Leonardo Da Vinci”.
I think that DALL-E totally brainfarted, and I will explain why it happened. But a short remark first. Some of my colleagues thought it made sense, that DALL-E interpreted the prompt as “what would the Mona Lisa be if Da Vinci lived today”, and that the girl looked like the Mona Lisa. I think that this take is a total hallucination driven by a strong desire to be in agreement with the T2I technology. I completely understand this drive, because I believe that these models can tell us something about our* culture, and can be used in the fashion of a divination device (*leaving aside the huge issue of what “our” means here). I tried to give this output a meaning, and I still found it fishy. I interpreted it as the diffusion process landing on a messed-up local minimum for weird optimisation reasons, but even so, it did not square with the excellent photorealistic rendering. If it is a glitch, why is the image so good, aside from not corresponding to the prompt? And if my prompt can be interpreted so freely, then why are the other images so similar? I think that we can all agree on something: this output is essentially ignoring the part of the prompt that says “by Leonardo Da Vinci”. No matter how many people would be asked to label this image, none would ever describe it as being made by Leonardo Da Vinci.
The interesting part is why DALL-E forgot about the artist styling. I only have an incomplete answer, because OpenAI’s systems are heavily blackboxed. But I know this: under the hood, the outlier image has been generated by a different prompt. OpenAI intercepts prompts to improve diversity, as they explained in July 2022. They do not say how they modify the prompt, but here the modification clearly nullified the artist-style part. Should we call this a glitch? Yes, in the sense that their interception broke the meaning of the prompt: I am pretty sure that DALL-E could perfectly draw an African Mona Lisa if prompted properly. I attribute the loss of the styling to a poor automatic interception of my prompt. But at the same time, it is not a glitch in the sense that it is part of the system. In fact, I cannot guarantee that the three other prompts have not been intercepted too. How would I know? If you ask me for my prompt, I have nothing else to give you than “A portrait of Mona Lisa by Leonardo Da Vinci”. This is how it would be documented. The part of the “knowledge” that omits the artist styling does not live in the model or the training data, it lives in the rest of the tool. In the content moderation layer, and in the user experience layer.
Which, by the way, tells us that OpenAI could endeavor to prevent artist styling entirely. If they can do it accidentally, they might well succeed in doing it intentionally (to some extent).
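OpenAI does not disclose how the interception works, so the sketch below is a deliberately naive guess, with a made-up descriptor list and rewriting rule. It only illustrates the kind of prompt rewriting that could produce what I observed: a term gets appended for a fraction of requests, and part of the original prompt gets diluted or dropped along the way.

```python
import random

# Purely hypothetical reconstruction of a prompt-interception layer.
# Neither the descriptor list nor the rewriting rule comes from OpenAI.
DESCRIPTORS = ["African", "East Asian", "South Asian", "Hispanic"]

def intercept_prompt(prompt: str, rate: float = 0.25) -> str:
    """Rewrite a fraction of prompts to diversify the depicted subject."""
    if random.random() > rate:
        return prompt  # most prompts pass through untouched
    # Crude rule: drop the trailing "by <artist>" modifier and append a descriptor.
    # This is the kind of rewrite that would explain an output ignoring the
    # "by Leonardo Da Vinci" part of my prompt.
    subject = prompt.split(" by ")[0]
    return f"{subject}, {random.choice(DESCRIPTORS)}"

print(intercept_prompt("A portrait of Mona Lisa by Leonardo Da Vinci", rate=1.0))
# e.g. "A portrait of Mona Lisa, African"
```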
OpenAI mostly shapes DALL-E at the tool level
I have something to say about OpenAI’s way of designing DALL-E. In a nutshell, I find their approach to containing harmful content insincere, hypocritical. Of course, generating harmful content is problematic, but where it is most problematic is when you get it without asking for it. The typical example is race and gender bias: ask for a CEO and get only white males. And the model within DALL-E has exactly that kind of bias. What they should do is use a better training set, because the knowledge contained in the one they used is indefensible (more on that later). What they do instead is patch problems after the fact. This is admittedly better than nothing, but here is the problem: it happens instead of solving the problem. They do not fake it until they make it, they fake it instead of making it. Sure, solving the problem is hard and expensive. But do they even try? Establishing this discussion and exploring it is my road map for the rest of this piece.
Eliza Strickland wrote a concise and informative piece for IEEE Spectrum titled DALL-E 2’s Failures Are the Most Interesting Thing About It (July 2022). It is very clear about what DALL-E 2 is good at (ex: food photography), where it falls short (drawing text, counting, faces when there are multiple people…), how the industry does not feel threatened by it (“A spokesperson for Getty Images, a leading supplier of stock photos, said the company isn’t worried”), and how OpenAI shaped DALL-E:
“OpenAI filtered the data set before training to remove images that contained obvious violent, sexual, or hateful content. … But the researchers have clearly stated that such filtering has its limits and have noted that DALL-E 2 still has the potential to generate harmful material. … the company integrated certain filters to keep generated images in line with its content policy and has pledged to keep updating those filters. Prompts that seem likely to produce forbidden content are blocked and, in an attempt to prevent deepfakes, it can’t exactly reproduce faces it has seen during its training. Thus far, OpenAI has also used human reviewers to check images that have been flagged as possibly problematic.”
From this, I want to highlight the practice of filtering. OpenAI filters the prompts the same way content is moderated on social media. There is even a moderation API that will tell you whether your text “violates OpenAI’s Content Policy” (I sketch such a call after the tweet below). You cannot prompt DALL-E for just anything. DALL-E’s content policy stipulates intentions, constraints put in human language, such as “mocking, threatening, or bullying an individual.” But what does it mean in practice? It means that you cannot use certain terms, or combinations of terms, and you cannot get the list. It probably changes over time. But it is opaque by design, like all moderation strategies, if only because it prevents workarounds. Yet workarounds exist, notably through “deliberate spelling mistakes”, as you can see in the tweet below. It shows that the knowledge is indeed in the model, but that the tool is constrained so that you cannot access it, aside from such tricks. One last thing about OpenAI’s moderation policy: it does not say anything about mentioning artist names in the prompts, even though some names are banned, such as “Trump”. This might change, but the styles would still be absorbed by the model, and the list of banned names is virtually endless. And with flabbergasting cynicism, OpenAI’s policy asks you to “not upload images of people without their consent”, or “images to which you do not hold appropriate usage rights”.
I found a way to trick #Dalle2 #Dalle to generate this, if we make some deliberate spelling mistakes:
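For reference, the moderation check itself is a plain API call. Here is a minimal sketch, assuming the publicly documented moderation endpoint and an API key in the OPENAI_API_KEY environment variable; whether DALL-E’s own prompt filter uses exactly this machinery is not something OpenAI discloses.

```python
import os
import requests

def check_prompt(text: str) -> dict:
    """Ask OpenAI's moderation endpoint whether a text violates the content policy."""
    response = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=30,
    )
    response.raise_for_status()
    result = response.json()["results"][0]
    return {"flagged": result["flagged"], "categories": result["categories"]}

print(check_prompt("A portrait of Mona Lisa by Leonardo Da Vinci"))
```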
In her piece, Strickland focuses more specifically on bias, and how OpenAI deals with the issue. In short, here is what she reports:
“OpenAI asked external researchers who work in this area to [assess] the system’s risks and limitations. They found that in addition to replicating societal stereotypes regarding gender, the system also over-represents white people and Western traditions and settings. … [Another] team at OpenAI … found that removing sexual content created a data set with more males than females, which caused the system to generate more images of males. ‘So we adjusted our training methodology and up-weighted images of females so they’re more likely to be generated,’ [an OpenAI researcher] explains. Users can also help DALL-E 2 generate more diverse results by specifying gender, ethnicity, or geographical location using prompts such as ‘a female astronaut’ or ‘a wedding in India.’ But critics of OpenAI say the overall trend toward training models on massive uncurated data sets should be questioned.”
Let me unpack this passage in four points. First, DALL-E is firmly rooted in the dominant Western culture, with all of its “societal stereotypes”, gender and racial biases included. This is hardly surprising, considering that the training data was sourced by scraping the web, a space dominated by Western culture (I will return to that). OpenAI’s own post about bias mitigation features examples of what it means: “A photo of a CEO” returns only males, mostly white; “A portrait of a woman” returns only white people; “A portrait of a heroic firefighter” features only white males; “A portrait of a teacher” returns only females, mostly white; “A portrait of a software engineer” returns only skinny white males. For clarification, this was before bias mitigation was implemented (through prompt interception).
Second, biases interact with each other. Obviously, the whole analytical framework of intersectionality is about this, so not surprising either. But it means that you cannot fix one thing after another, because unbiasing one aspect may create new biases elsewhere. This is exactly what happened when removing sexual content caused an under-representation of women. Which immediately raises a first question: is female representation worth anything, if it is mostly through porn? And that raises a second question: how naïve can you be, to not acknowledge the problem and instead patch it with “up-weighted images of females”? I think that this case makes it clear that one cannot fix culture one bias at a time, that is just not how any of this works. Yet it seems that OpenAI’s strategy is to stick to their initial plan of patching one flaw after the other. But this cannot work, because you cannot take the bias out of the culture, you can only change the culture. Bias is culture, and culture is bias all the way down. Any bias is the flip side of a challenged cultural norm, and just as cultural norms are heavily entangled, so are biases.
Third, an important argument is voiced by the OpenAI researcher interviewed: users can engineer prompts that get them anything they want. They can get a black female CEO as soon as they ask for it. Let me name this argument: there is a prompt for anything (TIAPFA). On the one hand, the argument is essentially legitimate. In most situations, the user can compensate for any form of bias through prompt engineering. That is why prompt interception works in the first place. Which means you can also accentuate a bias if you want. You shape your cultural norms. This argument puts the responsibility on the prompt engineer (the user). But it does not help unaware people who ask for “a photo of a CEO”. This is why OpenAI takes additional measures such as prompt interception: it helps “generate more diverse results”. Retain this: the TIAPFA argument and prompt interception live in different worlds. They address two distinct issues, and to some extent, they are incompatible. Indeed, if TIAPFA, intercepting prompts defeats the point! It disrupts the user’s (respons-)ability to set their own cultural norms.
Fourth, the “critics of OpenAI” question something else entirely: that the models are trained on “massive uncurated data sets”. Once again, according to the TIAPFA, it does not matter (users set their cultural norms). But there is more to this than TIAPFA, which is why critics bother, and also why “OpenAI filtered the data set before training to remove images that contained obvious violent, sexual, or hateful content.” OpenAI is doing some curation by filtering the data set, but not by sourcing it better. Strickland’s article is also clear about why: efficient models require humongous data sizes, and as an independent researcher observes, even “Wikipedia-based data sets spanning [about] 30 million image-text pairs are somehow ad hominem declared to be ‘too small’!”
There is a problem with the training data. I will get to that point in due time. For the moment, let us acknowledge that it is the elephant in the room. The critics focus on this. The TIAPFA argument is supposed to nullify it by shifting the responsibility to the user, but in practice we see that even OpenAI takes measures to deal with the most nefarious aspects of the training data (porn and violence). And at the same time, OpenAI’s measures are anything but a shift to another training data set. This is because models need to be trained on the biggest data sets to be efficient. At the end of the day, the only way to get more data for cheap is to lower your standards.
In short, OpenAI uses the big dirty data set, which reproduces all the features of Western culture, including prejudicial ways most people form their opinions (aka “biases”), but without porn and violence, and then tries to mitigate the problems as an afterthought through tool design (term-based prompt moderation and prompt interception) while shifting responsibility to the user via the TIAPFA argument. By comparison, their competitor Stability.ai, which released the T2I tool Stable Diffusion (currently in beta), uses zero moderation or prompt interception, and claims to be freely and transparently releasing the model itself to academics (although the request I made is still pending, wait and see). In this remarkably uncritical video interview, Emad Mostaque, the f(o)under of the company, opposes OpenAI’s “paternalistic” approach. The video has the merit of letting him make his points freely. In the section where he is asked about the eventuality that his model is accused of producing harmful content, his response boils down to owning the TIAPFA argument while criticizing OpenAI’s interventionism:
“of course, humanity is horrible and they use technology in horrible ways, and in good ways as well. … The reality is that people get used to these models. They use them one way or another, and restricting them means that you are becoming the arbiter. … What [OpenAI] is saying is AI for us, and our clients (because it’s expensive to run these things), not for everyone else. … What they are really saying is we don’t trust you, as humanity, because we know better. I think that’s wrong.”
I focused enough on OpenAI for this piece, but I cannot move on without pointing to Karen Hao’s remarkable and extensive piece The messy, secretive reality behind OpenAI’s bid to save the world (2020). It is critical. To give you a taste, the article basically opens with the following statement: “Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.” More specifically, Hao looks into OpenAI’s claims to “distribute the benefits [of AI to] all humanity”, and the company’s approach to the social impact of its technology.
“The leadership speaks of this in vague terms and has done little to flesh out the specifics. … ‘This is my biggest problem with OpenAI,’ says a former employee, who spoke on condition of anonymity. ‘They are using sophisticated technical practices to try to answer social problems with AI,’ echoes Britt Paris of Rutgers. ‘It seems like they don’t really have the capabilities to actually understand the social. They just understand that that’s a sort of a lucrative place to be positioning themselves right now.’ Brockman [(co-founder and CTO)] agrees that both technical and social expertise will ultimately be necessary for OpenAI to achieve its mission. But he disagrees that the social issues need to be solved from the very beginning. ‘How exactly do you bake ethics in, or these other perspectives in? And when do you bring them in, and how? One strategy you could pursue is to, from the very beginning, try to bake in everything you might possibly need,’ he says. ‘I don’t think that that strategy is likely to succeed.'”
The Fantastic New World of AI Art Generators and Why Their Critics Get It All Wrong
The essay: The Fantastic New World of AI Art Generators and Why Their Critics Get It All Wrong, by Daniel Jeffries (August 2022). AI artists like Surea.i present this pretty long piece as the authoritative reference about T2I technology. Possibly because it fiercely defends AI art as a practice, and fights every point of criticism one by one. But it is worth mentioning that the argument is mostly sound and grounded, even though I have a problem with the essay as a whole. It accurately represents the AI artist side of the debate, and I want to unpack it now. And after that, I will conclude about the damn training data and what they contain.
“Are these new tools stealing or borrowing art? The short answer is simple: No.”
You saw that coming. I retain six arguments from that piece, summarized below with a quote.
T2I tools do not copy. “The first misconception is that these bots are simply copy-pastas. … OpenAI found early versions of their model were capable of ‘image regurgitation’ aka spitting out an exact copy of a learned image. The models did that less than 1% of the time but they wanted to push it to 0% and they found effective ways to mitigate the problem. … They fixed it by removing low quality images and duplicates, pushing image regurgitation to effectively zero. Doesn’t mean it’s impossible but it’s really, really unlikely”
Clickbait overdramatizes. “Calm and nuanced doesn’t sell magazines or generate clicks, but sensational headlines like Engadget’s ‘Is DALLE-2’s Art Borrowed or Stolen?‘ do.”
The web challenges norms such as consent. “There’s a growing fear of AI training on big datasets where they didn’t get the consent of every single image owner in their archive. This kind of thinking is deeply misguided and it reminds me of early internet critics who wanted to force people to get the permission of anyone they linked to. … what a colossal waste of time and creativity!”
Symmetry between artificial and human agents. “Engadget author, Daniel Cooper, writes ‘These systems did not, however, develop an eye for a good picture in a vacuum, and each GAI has to be trained.’ Well people don’t learn in a vacuum either. Don’t people study the artists that came before them? … AI learns just like we do, from mimicry and studying the world.”
Ontological discomfort causes irrelevant criticism. “All this goes back to people’s revulsion to determinism and math at the root of life. We don’t like that people’s style can be boiled down to math.”
TIAPFA (there is a prompt for anything), therefore the responsibility is on the user. “It seems that there are much simpler fixes than padding prompts [like OpenAI does]. People can add whatever gender, ethnicity or whatever else they like to the prompt and get precisely what they want. That’s the beauty of text prompts. Occam’s Razor applies here. Simpler is better. … As usual, it’s not machines that are the problem in the world, it’s people.”
I find this take quite aligned with the position of Emad Mostaque, the founder of Stability.ai. It has libertarian accents that I do not buy, even though I find them widespread among AI artists. I do not buy the third argument in particular, according to which some of those pesky social norms “could kill AI before it really develops into something truly incredible and beneficial, cutting off breakthroughs in science and art and mathematics itself.” The argument is not only absurd (AI could symmetrically develop into something horrible and harmful), but also circular. Indeed, the argument stems from the assumption that T2I technology will be beneficial to artists. Therefore it does not conclude that AI respects artists: it postulates it. This is just a cheap way to dismiss the whole concern about style absorption. I would not make such an argument in a legal battle.
Jeffries’ argument is entirely contained in this quote: “we have to understand where the idea that DALLE or Midjourney are ripping off artists comes from in the first place.” He gives us a series of reasons why we should not be afraid that T2I tools “are ripping off artists”, most of which are sound. He deconstructs the roots of this moral panic about T2I technology; fair enough. But he does not establish whether or not that technology steals styles from artists. He asserts it with confidence, but he does not make a positive argument. The closest I could find boils down to two things. First, style absorption is not robbery because AI does not copy. I find it a childish argument. And second, art is just maths and maths belong to everyone, live with it:
“It’s really astonishing how well the machine whips up brand new people in seconds and how well it understands the deeper characteristics of these amazing artist’s styles. But let’s be honest, there’s also something unnerving about it too. I understand the anxiety some folks feel about it. There’s something deeply unsettling about math generating an infinite variety of us.”
CONTENT WARNING: here we step into NSFW territory. I will not spare you anything. Porn is very much part of web culture. But most importantly, what you will see is already baked into the model. It is now time to look the beast straight in the eyes, and see what it is made of.
To begin with, the artist styles, the entangled biases, and all the linkages of meanings that make the T2I technology work, exist as features of the knowledge that lives in the training data. Then, during the training process, they transfer to the model. And from there, through the diffusion process, they can be leveraged to generate images. It all starts with the training data.
Here is what I would like to be writing at this point: “some of the data sets available are cleaner than others, and AI companies have made different compromises between performance and quality, which explains why T2I tools exhibit different behaviors when it comes to bias.” But everyone basically uses the same data set, because it is the biggest, and because building T2I tools is essentially a race for model performance. That data set is called Laion, a portmanteau of the predator and “AI”, which I find painfully appropriate.
Disco Diffusion was trained on Laion. Midjourney was trained on Laion. DALL-E was trained on Laion. Stable Diffusion was trained on Laion, and in fact “Stability AI funded the creation of LAION 5B” (TechCrunch). Laion is the foundation of all the publicly available T2I tools.
The Laion data set consists of image-text pairs scraped from the web. Crawler bots have been deployed to find images on the web, along with text that describes them. That text might be in the HTML description of the image, or in a caption, or next to it in the page, or even in the image itself, when it features text. A variety of techniques have been used to extract that text (more on that here). The approach was to harvest broadly and not curate anything.
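To give an idea of what “finding text that describes an image” can look like, here is a minimal sketch of alt-text and caption extraction from a web page. The real pipeline works from Common Crawl dumps and is more involved; this only illustrates the principle.

```python
from bs4 import BeautifulSoup

# A toy web page standing in for a crawled document.
html = """
<figure>
  <img src="https://example.org/mona-lisa.jpg"
       alt="A portrait of Mona Lisa by Leonardo Da Vinci">
  <figcaption>Mona Lisa, Leonardo Da Vinci, c. 1503</figcaption>
</figure>
"""

def extract_pairs(document: str):
    """Yield (image URL, caption) pairs from the alt text or a nearby caption."""
    soup = BeautifulSoup(document, "html.parser")
    for img in soup.find_all("img"):
        caption = img.get("alt") or ""
        figcaption = img.find_next("figcaption")
        if not caption and figcaption is not None:
            caption = figcaption.get_text(strip=True)
        if img.get("src") and caption:
            yield img["src"], caption

print(list(extract_pairs(html)))
```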
There are two main Laion data sets. The older and smaller one is LAION-400M. It features 400 million image-text pairs. Those have been “extracted from the Common Crawl webdata dump and are from random web pages crawled between 2014 and 2021.” The more recent and bigger one is LAION-5B, featuring 5.85 billion image-text pairs. It was also extracted from the Common Crawl data, more extensively I suppose. “Unsuitable” pairs are removed: text too small, image too big, duplicates… (more info there). And on top of that, a bunch of useful things have been computed and shipped as part of the data.
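The exact filtering rules are documented by the Laion team; the sketch below only illustrates the kind of cheap, rule-based cleaning described above (captions too short, images too big, duplicates), with thresholds that are mine, not theirs.

```python
def clean(pairs, min_caption_chars=5, max_image_bytes=10_000_000):
    """Drop 'unsuitable' pairs: caption too short, image too big, duplicates."""
    seen_urls = set()
    for url, caption, size_bytes in pairs:
        if len(caption) < min_caption_chars:
            continue  # text too small
        if size_bytes > max_image_bytes:
            continue  # image too big
        if url in seen_urls:
            continue  # duplicate
        seen_urls.add(url)
        yield url, caption

sample = [
    ("https://example.org/a.jpg", "A portrait of Mona Lisa", 250_000),
    ("https://example.org/a.jpg", "A portrait of Mona Lisa", 250_000),  # duplicate
    ("https://example.org/b.jpg", "img", 250_000),                      # too short
]
print(list(clean(sample)))  # only the first pair survives
```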
The image-text pairs come from web pages crawled by the Common Crawl project. How is this set delineated, and who chooses what gets in or not? As crazy as it sounds, I could not obtain this information, as if the question itself was pointless. The Wikipedia page says nothing about it. The FAQ of Common Crawl does not feature my question. The data release announcements do not say a word about it. Surprisingly, Google features the question, but unfortunately it answers on a technical level, not on a curation level (see below).
From the very start, the most basic information necessary to assess the content of the data is missing. The whole industry has agreed not to look in that direction, even though academics have been demanding that information specifically. Here we are again, reclaiming situated knowledges. Let me just copy-paste what I wrote in a previous blog post: “there is always a method. We must not hide it, because we must account for its flaws. Data is never raw, it is always obtained, and it comes with its own biases.” Or to use Donna Haraway’s own words, this “unregulated gluttony” that puts into practice the myth of “seeing everything from nowhere” (which she calls “the god trick”) “fucks the world to make techno-monsters” (and she wrote that in 1988). If we had a positive description of what was crawled, we could better understand how the models were shaped. But we do not have that.
I will show you why it matters, and this will lead us down a peculiar rabbit hole, so bear with me. It all starts with an amazing tool offered by the Laion team: a search engine for their data set. Try it! If you do not change any settings and just type an expression, it will retrieve image-text pairs that match it, according to the CLIP embedding (I will explain shortly). If you type something that exists in our cultural space, you have a good chance of finding it (ex: “Shrek”). If you type something that does not exist (ex: “A blue Shrek”) you will not find it, because the image is absent, but you will find images as close as possible to your target (see below). The search engine differs from the T2I generators in that it does not invent images, but it still shares an intelligent layer: the CLIP embedding. In short, a machine learning model of the same kind as those in T2I tools (a CLIP model) has been used to place the image-text pairs in a latent space. Your query is also matched to that latent space, and that is how the search engine finds its results. The images it gives you are those that are close, in the latent space, to your query. This is why the terms of your query are not necessarily featured in the captions.
Searching LAION-5B for “Shrek”, default settings. August 2022.
Searching LAION-5B for “A blue Shrek”, default settings. August 2022.
The CLIP model is bundled with the image-text pairs in the data set. You can even get the KNN graph: for each image-text pair, which are its closest neighbors. This is really important, because it allows you to look into the data set the same way T2I technology does, through a CLIP embedding. You can get a feeling for how the model “thinks”. It is much easier here than through the diffusion model, in the T2I tool itself. And that is exactly what we are going to do now.
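Here is a minimal sketch of that kind of CLIP-based retrieval, assuming OpenAI’s clip package and a small local list of image files standing in for the data set. The Laion search engine does the same thing at a much larger scale, with an approximate nearest-neighbor index instead of this brute-force comparison.

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# A tiny stand-in for the image-text pairs of the data set.
dataset = [("shrek.jpg", "Shrek"), ("cat.jpg", "a cat on a sofa")]

with torch.no_grad():
    images = torch.stack([preprocess(Image.open(path)) for path, _ in dataset]).to(device)
    image_features = model.encode_image(images)
    image_features /= image_features.norm(dim=-1, keepdim=True)

    query = clip.tokenize(["A blue Shrek"]).to(device)
    text_features = model.encode_text(query)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    # Cosine similarity in the latent space: results do not need to contain the
    # query terms, they only need to be close to the query.
    similarity = (text_features @ image_features.T).squeeze(0)

for (path, caption), score in sorted(zip(dataset, similarity.tolist()),
                                     key=lambda x: -x[1]):
    print(f"{score:.3f}  {path}  {caption}")
```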
My case will consist of a seemingly innocent query: “big”. What do you think the Laion search engine will return? Here is, for reference, what we see in Google images: The Notorious B.I.G. (the artist), the word “big”, the movie Big, big things (a pumpkin…), a Big Mac (the burger), Big Ben (in London)… You get it.
Results for “big” in Google images (August 2022).
The Laion search engine’s results have only one thing in common with Google’s: the word “big”. The rest consists of teddy bears, balloons, strawberries, and clothes. What makes those things “big”? Can you explain the relation? Or do you think there is none? I have a hypothesis, but to understand it we must pay attention to the settings.
Searching LAION-5B for “big“, default settings. August 2022.
By default, the search engine checks three settings that profile the data set in the most charitable way. The “Safe mode” hides image-text pairs that a dedicated model has flagged as, basically, porn. Uncheck it. Similarly, “Remove violence” hides violent content: uncheck it too. And finally, “Enable aesthetic scoring” puts the nicest images at the top of the results page. Uncheck it too. The aesthetic scores come from a sample of images manually rated by people according to how nice they look, then generalized to the whole corpus.
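Here is a rough sketch of what those three defaults amount to, with made-up field names for the per-pair annotations (the real data set ships its NSFW predictions and aesthetic scores under its own column names):

```python
def profile_results(results, safe_mode=True, remove_violence=True, aesthetic_sort=True):
    """Reproduce the default 'charitable' view of the search results."""
    if safe_mode:
        results = [r for r in results if not r["nsfw"]]        # hypothetical flag
    if remove_violence:
        results = [r for r in results if not r["violent"]]     # hypothetical flag
    if aesthetic_sort:
        # Nicest-looking images first, instead of the closest matches.
        results = sorted(results, key=lambda r: -r["aesthetic_score"])
    return results

hits = [
    {"caption": "big teddy bear", "nsfw": False, "violent": False, "aesthetic_score": 8.1},
    {"caption": "big ...",        "nsfw": True,  "violent": False, "aesthetic_score": 4.0},
]
print(profile_results(hits))                                    # the charitable view
print(profile_results(hits, safe_mode=False, remove_violence=False, aesthetic_sort=False))
```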
Uncheck these three options to see what really is in the LAION-5B data set. The “big” query will give you this: mostly white women showing their boobs, and if you scroll, the trend only gets stronger.
Searching LAION-5B for “big”. “Safe mode”, “Remove violence” and “Enable aesthetic scoring” unchecked. August 2022.
This is the real face of our culture as performed on the web, and that is why situatedness matters. The web is full of porn and violence. Who would deny that we are interested in sex and violence? This is not even specific to Western culture, although the skin tone of those women is. The web tells us that if there had to be only one thing that is big, that would be boobs. Sex is so prevalent on the web that it dominates even an innocent query like “big”.
I did not discover that query by myself, I obtained it from Abeba Birhane and her co-authors’ work on assessing the LAION-400M dataset’s bias (as we have seen, it also applies to LAION-5B). She unpacks her paper in the Twitter thread below, which is easier to parse. A digest from her thread follows.
“Images: large scale vision datasets are plagued with problems including curation biases, inclusion of problematic content in the images, as well as contributing to the gradual erosion of privacy. …
The CommonCrawl: among other things, contains ~17.78% hate speech content. …
The LAION-400M dataset emerges from this landscape containing hundreds of millions of Image-caption pairs parsed from the Common-Crawl dataset and filtered using a previously Common-Crawl trained AI model; CLIP. …
Even the weakest link to womanhood or some aspect of what is traditionally conceived as feminine returned pornographic imagery. For example, when searching for descriptive adjectives such as “big” and “small”, it returned many porn images. …
The specific semantic search engine version meant to fetch images from LAION-400M not only amplified hyper-sexualized & misogynist representations of women, but also presented results that were reminiscent of Anglo-Euro-centric, & potentially, White-supremacist ideologies. …
The CLIP-paper authors themselves outlined that images of ’Black’ people had an approximately 14% chance of being mis-categorized as [‘animal’, ‘gorilla’, ‘chimpanzee’, ‘orangutan’, ‘thief’, ‘criminal’ and ‘suspicious person’] in their FairFace dataset experiment. …
Finally, we acknowledge the grassroots aspect of the endeavor and commend the LAION-400M creators for providing a window into this world and encourage them to keep the dataset accessible to researchers. We don’t believe retraction of LAION-400M is a viable move.”
With this in mind, let’s prompt “big” into Disco Diffusion (trained on Laion). What do we see? The images are deformed, but I do see (clothed) boobs, asses, penises, and vaginas. I did not cherry-pick those results, they are just the first ones I generated. We understand why porn is regurgitated because we have seen what Laion contains, but I think that out of context, this result would be quite surprising.
“big” prompted into Disco Diffusion 5.6.
What about OpenAI’s DALL-E? Here is what I obtained: two pictures of a giraffe, and two pictures with no connection to the meaning of “big” whatsoever. Giraffes are tall, not big. All of this smells a lot like prompt interception to me.
“big” prompted into DALL-E 2 (August 2022).
Can we agree that neither Disco Diffusion nor DALL-E has a good understanding of what “big” means? Within the model, “big” is associated with porn, and if you try to remove the porn from the big, like OpenAI does, you are left with meaningless associations like winter and ants. It also happened when we looked for “big” in Laion with the default settings: since sexual content was filtered out, there was not enough meaning around “big” to counterbalance the ranking by aesthetic score, and we just obtained what people find nice: teddy bears, balloons, and strawberries. Unless CLIP retained a similarity between balloons and boobs, and why not, between strawberries and vaginas. It is genuinely hard to rule out that possibility.
What happens with bias and harmful content happens with everything else in Laion. Porn and violence attracted attention because they cause harm, and academics took the time to investigate. AI artists had another agenda, but in many ways they discovered the same thing. Take for instance the case of artist Anne Geddes (see below). The rendered images feature babies, although the test prompts do not ask for them. This is because she specializes in pictures of babies, as we can check in the Laion search engine (see further below).
AI artist study for Anne Geddes.
Searching LAION-5B for “Anne Geddes”. “Safe mode”, “Remove violence” and “Enable aesthetic scoring” unchecked. August 2022.
In this case it is not the style that the model has absorbed, it is the subject. I make a difference between what is represented and how it is represented. For me, “an old man by Anne Geddes” refers to one of her photos but with an old man instead of the baby. But the model does not make such a difference, which is why it draws a baby when you ask for a house. It was already the case with Simon Stålenhag, as his style is as much about what he paints as about how he paints it. The artist studies are full of these effects: Appollonia Saintclair gives you butt-naked women, Audrey Kawasaki hair and face elements, Coles Phillips a woman (always the same), Daniel Ridgway Knight villagers, Giuseppe Arcimboldo fruits and vegetables, George Grosz troll-like figures, Hans Bellmer fat flesh, and Kaethe Butcher diaphanous naked silhouettes. Disco Diffusion mimics the pictorial style as much as the typical subject of the artist, even when you specify another subject. It blends and merges the two subjects, yours and the artist’s, together.
What Laion knows about these artists is the part of their work that is available on the web with their name attached. Some of these artists are famous, and their work is spread in many places. But for most contemporary artists, it is different: their portfolio has been absorbed. This is why “trending on Artstation” works so well as a modifier. ArtStation is “the leading showcase platform for games, film, media & entertainment artists,” according to their LinkedIn profile. It is a place where amateurs, semi-pros and professionals share their paintings. The purpose of the website is to disseminate their portfolios. ArtStation is basically a big database of well-described images, because that is what SEO (search engine optimization) demands. This is a perfect data trove for Common Crawl, and from there, Laion. Your online portfolio gets you into Laion.
You can basically go on ArtStation, click on a picture at random, get the artist’s name, and put it in Laion to see what you get. I just did that, landing on a concept artist named “Ismail Inceoglu”, and indeed, Laion knows him. And not only does it find images with his name attached, it also finds images without it:
Ismail Inceoglu’s portfolio on Artstation (I picked that artist at random).
Searching for “Ismail Inceoglu” on Laion (August 2022).
And it is not just ArtStation. That platform became popular because it matches what the AI artists want to obtain. But there are other similar platforms that have been harvested in Common Crawl and thus ingested by Laion. They might not be as useful to AI artists, but their content still contributed to shaping the CLIP latent space and the knowledge in Laion. All those porn images have to come from somewhere, right?
I first sourced a list of the top image repositories for artists, and I tried them all in Disco Diffusion: DeviantArt, Behance, Dribbble, CGSociety, ArtStation, Tumblr, Pinterest, Drawcrowd, Pixiv, Ello.co, Twitch, Concept Art World, Our Art Corner, PaigeeWorld, Newgrounds, and Virink. As you can see below, Disco Diffusion (in fact, Laion) has learned the “style” of each of these platforms too. Colorful mockups on Behance and Dribbble, 3D renderings on CGSociety, but also Tumblr regurgitating soft porn, and Twitch and Virink screenshots. In some sense, each platform delineates a specific space for image generation. Some can be considered safe spaces where sex and violence are virtually absent, like Behance and ArtStation. But the associations learned by the model are still lurking in there, and “big” keeps relating to boobs despite those safe spaces. It is just that other influences, such as “trending on Artstation” or “by Simon Stålenhag”, dominate the diffusion process and ensure that the generated image lands in an acceptable place. The slope to porn is still ingrained in the model, we just found stronger influences to overcome it. TIAPFA; but as we have seen, that is not enough.
Gallery: “trending on X” prompted into Disco Diffusion 5.6, for X in DeviantArt, Behance, Dribbble, CGSociety, ArtStation, Tumblr, Pinterest, Drawcrowd, Pixiv, Ello.co, Twitch, Concept Art World, Our Art Corner, PaigeeWorld, Newgrounds, and Virink.
What about the contrary, unsafe spaces harvested by Common Crawl and also included in Laion? I sourced a list of porn subreddits (just the top 10), and I checked what Disco Diffusion knows about them: r/GoneWild, r/NSFW, r/NSFW_GIF, r/RealGirls, r/holdthemoan, r/BustyPetite, r/cumsluts, r/LegalTeens, r/PetiteGoneWild, and r/sex. We basically get (deformed) porn, in different flavors, except for “holdthemoan” and “LegalTeens” for some reason. Not only Laion but also Disco Diffusion (with default models) very much knows all of that.
Gallery: “trending on X” prompted into Disco Diffusion 5.6, for X in r/GoneWild, r/NSFW, r/NSFW_GIF, r/RealGirls, r/holdthemoan, r/BustyPetite, r/cumsluts, r/LegalTeens, r/PetiteGoneWild, and r/sex.
TIAPFA. The same way we can tinker with prompts to get less harmful content, we can tinker with them to get more. It should be clear at this point that prompt modifiers like “ArtStation” or artist names are not the “magic” that Ted Underwood describes in the Vox video. The romanticism around prompt engineering should have started to wear off by now. Behind the magic, we find internet culture, with its beauty but also a lot, a whole lot, of toxic content.
That sounds crazy, but we basically don’t know what Laion contains. It is so big that we have close to zero assessment of what it looks like from a cultural perspective. It is entirely possible that huge problems lurk in it, and that we just have not discovered them yet. The industry is happy with its convenient ignorance, and everyone’s strategy boils down to damage control. At the very least, we should be serious and cautious about exploring those latent spaces, and limit their industrial applications. Laion states: “we … do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress” (emphasis theirs). Nice, but this amounts to hiding behind one’s little finger. The T2I technology based on Laion is already at a stage where AI companies compete for the image generation market.
More than the pornographic and violent images themselves, which can be more or less filtered out, I am concerned with the harmful content in the captions. It is the strong association of “big” with boobs that is harmful, not the boob pics alone. The toxicity lies in what a woman is for the model. The association with certain words obviously comes from the caption (the text extracted to label the image). Who gets to write those captions? You may think that the data set is so big that there is no meaningful answer beyond “many people”. Yet there is de facto a situated answer, and it is basically “internet culture”, because not everyone has the same interest in captioning images. Here is an example. I noticed a structured pattern in certain captions. Unfortunately, the “search by text” feature is broken at the moment, so I cannot take a simple screenshot. But I compiled a few of those below. Pay attention to the text.
The text of these Laion entries has a pattern.
Those captions consist of a rating, a score, a series of tags, and a user. The rating does not have to be “explicit”, it can also be “questionable” or “safe”, as you can see below.
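To make the pattern explicit, here is a small parsing sketch. The example caption and the exact field layout are reconstructed from the screenshots, not copied from the data set, so treat them as illustrative.

```python
import re

# Illustrative caption, reconstructed from the pattern visible in the screenshots:
# a rating, a score, a series of tags, and a user.
caption = "Rating: Questionable Score: 57 Tags: long_hair swimsuit standing User: exampleuser"

pattern = re.compile(
    r"Rating:\s*(?P<rating>\w+)\s+Score:\s*(?P<score>\d+)\s+"
    r"Tags:\s*(?P<tags>.*?)\s+User:\s*(?P<user>\S+)"
)

match = pattern.search(caption)
if match:
    record = match.groupdict()
    record["tags"] = record["tags"].split()
    print(record)
    # {'rating': 'Questionable', 'score': '57',
    #  'tags': ['long_hair', 'swimsuit', 'standing'], 'user': 'exampleuser'}
```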
I searched for a reduced version of those captions in Google to try to find where they come from (1, 2, 3, 4, 5, 6, 7, 8). I did not find the same images, but I found two websites: Yande.re and Konachan. Those are two image boards dedicated to anime and manga, and they look so similar to me that they might run the same engine behind the scenes, although they seem to contain different things. Those communities tag images obsessively (example below). Common Crawl and Laion give a disproportionate influence to those communities. Because they publish so many images, so precisely tagged, they weigh a lot in the associations ingrained in the model.
Each image in Yande.re and Konachan is richly tagged.
This is not just about NSFW content. Following the naïve policies we have seen OpenAI deploy, we could just filter out the “explicit” content, and maybe the “questionable” one. It is even easier here, because those communities have done the tagging: you just keep what they label “safe”. But at the same time you let them define what those categories mean, and you also let them define the descriptions of the “safe” images. Do we want those people’s way of describing women to be overrepresented in our models?
The T2I tools based on Laion are as poorly behaved as kids raised exclusively on internet culture, its darkest places included. Sure, we can now use subsets of Laion that supposedly contain no porn and violence. It does not work great yet, but it will be improved in the future. Even so, the remaining text-image associations keep being shaped by internet culture and its toxicity, because the toxicity does not only lie in the images. AI is not trained on data fallen from the sky, it is not trained on the knowledge of mankind; it is trained on a fucked up dataset crawled half-randomly from the web over a decade, without any form of validation, without even the most basic documentation. It’s just that everyone in the industry has agreed not to ask the question. No questions, no problems. But no problems, no solutions.
The absorption of artist styles is just a part of a generalized practice, in the machine learning community, that consists of letting whatever happens in the digital public space shape the models. One side of it is the morality of harvesting entire portfolios. Another side is the cultural impact of reinforcing the influence of those portfolios in our cultural space, through image generation. Yet another side consists of the consequences of those effects on the users unaware of the problem. There is a prompt for anything, but only to those who have the appropriate literacy. And this is just for the most romantic corner of that technology: AI art. The same applies to porn and violence, and it is a whole lot less fun.
I believe that the most responsible thing to do is exactly what Google did with Imagen. Bravo Google, I know how frustrating it must have been for those who have worked hard on this. Here is the relevant part of the statement:
“There are several ethical challenges facing text-to-image research broadly. We offer a more detailed exploration of these challenges in our paper and offer a summarized version here. First, downstream applications of text-to-image models are varied and may impact society in complex ways. The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access. Second, the data requirements of text-to-image models have led researchers to rely heavily on large, mostly uncurated, web-scraped datasets. While this approach has enabled rapid algorithmic advances in recent years, datasets of this nature often reflect social stereotypes, oppressive viewpoints, and derogatory, or otherwise harmful, associations to marginalized identity groups. While a subset of our training data was filtered to removed noise and undesirable content, such as pornographic imagery and toxic language, we also utilized LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.”
And I think that the second most responsible thing to do is to allow absolute transparency to the academic researchers and journalists willing to investigate the T2I systems in depth. And why not, help them. For instance by funding them.
You will find below a gallery of AI-generated images depicting social networks. They were generated by Disco Diffusion from a base prompt taken from my last post. More precisely, the admin of the Discord channel Fever Dreams added the prompt to a bot of his making, and it generated those images. The bot varies the artists of my initial prompt, which gives more varied results. It also uses different image formats.
“A beautiful painting of a vintage network map with communities seen from above by [4 artists at random]”.
The prompt
My intent was not to visualize a social network. But it turns out that Disco Diffusion interprets “community” sometimes in the sense of “social media”, and sometimes in the sense of “people”. In the end, the tension you can see in those images fits the ambiguity of the term “social network” itself. So I am just sharing them in case they are useful to someone. The exact prompt is the file’s name. The license is CC0 (Public domain).
Images of social networks. Generated with Disco Diffusion. License: CC0 (public domain).