50 min. read. The last section is NSFW.
I do not give an answer. I report who says what, where the concern comes from, and I show how you can look for yourself. I will unpack how and why AI models somehow recorded artist styles. In particular, I will look into the data where all of this comes from. And just so that you know, in that part I will show you pornography (a warning will precede it).
This is about a kind of apparatus that generates images from a text prompt: DALL-E, Disco Diffusion, Midjourney, Stable Diffusion, Imagen… Those devices are different but share the same general technical premises and fulfill about the same tasks, so let me call them a technology. It lacks a stabilized name though, and since I must commit to one in this piece, I pick “text to image”, abbreviated as T2I. This post is about the T2I technology, artists, and how the former will allegedly change the latter’s life, or not. If this is all new to you, then just watch this awesome 13-minute video by Vox. It summarizes the issue perfectly.
“You can copy an artist’s style without copying their images, just by putting their name in the prompt.”
Joss Fong, The AI that creates any picture you want, explained, 2022
Give a text prompt to a T2I tool, and it returns images to you. I have previously documented the process of building prompts. Your prompt may ask for the image to be rendered in the style of a given artist, and the tool will oblige. It works better for certain artists than others. I am interested in the most convincing cases. Here is an example I find telling: the art of Simon Stålenhag. You will find his work below (check his website for a better view) next to images returned by T2I tools prompted for his style. Look at them, compare them visually. Do they look similar to you? If so, why? Can you tell the difference between the man-made images and the T2I output?
To me, those images have a clear family resemblance. I would characterize them as wide shots of an imposing monument or structure, looking alien or technologically advanced, standing out at a distance in the misty wilderness, often with one or a few human beings using 20th century technology (cars, clothes…), rendered with a mix of realism and oil-painting textures, with muted colors and a few bright accents. Wikipedia summarizes Stålenhag’s style as “a stereotypical Swedish landscape with a neofuturistic bent”. I can tell that an image has been generated by a T2I tool, but importantly, I can also guess whether Simon Stålenhag has been the artist used in the prompt. I have seen other people guess it too on social media (unfortunately I could not retrieve any sources). And I am not the only one to find his style remarkably well captured by T2I models (compared to other artists).
Who says AI rips off artists?
At this point, I want to apologize to Simon Stålenhag. He is tired of hearing about this AI stuff. I am sorry to add a layer to this. I still have to, because his case is excellent for what I write about. Not only because T2I “is crazy good at replicating [his] style”, but because he is also involved in at least three important aspects of the discussion. First, he does not care about having his work absorbed by the T2I models, or his name being a popular prompt modifier. Second, some other people try to speak for him as if he took issue with the T2I technology, or try to enroll him as an ally in their fight against it. Third, he gets tired of all this social media activity. See for yourself in the tweets below.
Now, some people do take issue with T2I technology absorbing artist styles. But I have yet to find an actual artist complaining about getting ripped off themselves. What I observe instead is other people getting upset in their stead. The artists themselves seem either nuanced, or willing to embrace the T2I technology, or sometimes indifferent. In the Vox video mentioned above, James Gurney, a renowned artist often used in prompts, does not complain about his style getting absorbed by the DALL-E model. He only states that “the artist should be allowed to opt-in or opt-out of having their work, that they worked so much on by hand, be used as a dataset for creating this other artwork.” In the same video, Vanessa Rosa, artist and art historian, mentions that she has “heard of other artists who got actually extremely upset”, but does not mention them. But are the upset artists those who had their own style absorbed? In the companion video to that above, consisting of additional interview material from various people, we find no mention of style absorption ripping off artists. Ted Underwood, a professor in machine learning and literature, just says that artist names “are really powerful sort of magic words in this model.” Rob Sheridan, an art director, just comments that “everything in art is inspired by something else. … This just … puts a very crass, fine point on it.” And Mario Klingemann, the famous artist at the forefront of AI art, says this:
“It’s a bit unfair, of course, because some people took, I don’t know, years, decades to perfect their style and find their niche. And now all it takes is to put their name in the prompt, and then you can just have the shortcut and go on from there. … ‘Good artists copy, great artists steal.’ And that’s kind of exactly what it is, like, a lot of artists have ‘gotten inspiration’ from some unknown, whatever, other artist or so and never tell. … Art is not like science where you have to cite all your sources.”
Mario Klingemann, interviewed in Bonus video: What AI art means for human artists, 2022
And then, there is the Twitter thread below by RJ Palmer aka @arvalis, a concept artist. He takes issue with the T2I technology as an artist, but as far as I know, not as one who had their own style absorbed.
There are a few things to unpack here. Let me start by observing that there are two distinct arguments at play: style absorption rips off artists, and T2I will steal their jobs. Let me address the second argument first. The artist community is divided about it. Some artists believe that AI will take their jobs, some believe that it will change the profile of their jobs, for better or for worse, and some believe that it will not change much. The companion to the Vox video features various opinions on that matter. Let me simply acknowledge that many people are concerned about the impact of this technology on the job market, and voice it on social media. Yet what makes RJ Palmer’s Twitter thread stand out is the other argument. The claim made and defended with the images attached is specifically that AI rips off artists by copying their style. Which begs two questions: how linked are the two arguments, and how strong is his case?
The two arguments are weakly linked. T2I tools could disrupt the artist job market without copying styles in particular (I am not saying that it will). My argument, here, is that styles could exist without being attached to artists. Oil-painting style, watercolor style, 3D style… Even in today’s AI art, people use many other modifiers than artist names. We could train a model on data where artist names have been removed, and it would still retain stylistic information. I can imagine such a tool disrupting the artist job market the same way, and it would not involve absorbing artist styles. To be fair, RJ Palmer or other artists may believe that T2I technology is so good only because it absorbed artist styles; but personally, I do not buy that. And I do not think that RJ Palmer does either. Indeed, he frames it as an economic issue: he finds it “gross” that “working artists [get] advertise[d] as styles” by AI companies. So conversely, he can imagine a system where AI companies compensate artists fairly. We can envision a disruption of the job market that is beneficial to artists. Of course this will not happen, but not because it is impossible; because the balance of power is completely unfavorable to artists. AI companies have power, money, and do not care at all about them. My takeaway here is simple: T2I may disrupt the artist job market in various bad ways, which is the real problem; style absorption is just a part of it; and fixing it is neither necessary nor sufficient to solve the bigger issue of harmful job market disruption.
Aside from that, is RJ Palmer’s case good? I do not think so, for three reasons. First, he is not himself an artist whose style is getting absorbed. The artist in his example is Michael Kutsche. Second, the similarity between the two pictures is vague to me. The style is not as similar as in Simon Stålenhag’s case, but that is subjective. The signature, however, is really not similar (see below). RJ Palmer may not be aware that such artifacts are common in current T2I technology. Of course, models have learned that good paintings often have a signature, so if you include popular modifiers such as “trending on Artstation”, you will often get such “watermarks” (in the vernacular of prompt writing). But visibly, the model did not try to reproduce this particular logo. Third, RJ Palmer’s point is phrased in a very anthropomorphic way that grants the technology more intentionality than it deserves: the model has not “tried to recreate” the style of said artist. If we could understand AI in terms of what it tries to do, regulating it would be much easier.
Let me summarize. Palmer’s case is not convincing (1) because he fails to establish that AI copies artists, (2) because he is not the one being “ripped off” himself, and (3) because it boils down to the more general concern of a harmful job market disruption by T2I technology, which is a legit concern but not dependent on style absorption. Yet this tweet was repurposed precisely to make the case that artists are getting ripped off. It was quoted for instance in this newsletter issue titled “Plagiarism by Machine”, where the author says that some AI companies are “direct about ripping off the style and signature elements of digital artists — to the point where they even try to copy the artist’s logo!” I have read, and you will read, about T2I technology plagiarizing artists, and we will get exposed to the implicit injunction to side with the artist against the disruption caused by Big Tech. An injunction that I am personally inclined to endorse, and so may you; but it holds me back that at the root of this argument, we find no artist actually complaining about their own style getting absorbed by the T2I technology. Of course, a prominent artist might make that case tomorrow. Yet I could also see those renowned artists feeling unthreatened by that technology. Or even, why not, flattered.
Is it legal?
This, as well as basically everything related to authorship and AI, is legally unresolved. You can find a series of framings in Is DALL-E’s art borrowed or stolen? by Daniel Cooper on Engadget (July 2022). It is very instructive. Also note that despite the title, there is no mention of an artist complaining about being stolen from.
There are two parts to style absorption. First, the artist data has been harvested, in the form of images with a caption that contains their name. Second, the model was trained on that data and it abstracted something that we call “style”. I will explain in more detail. The point I want to make here is that the legality of style absorption plays out very differently in these two steps. The data harvested is basically public information on the web. It is the portfolio of artists. In some sense, if you want to be on Google so that your clients find you, then you have to allow crawlers to harvest your images with your name attached and reuse them. But still, that is something we could regulate legally. The other part, however, is where the AI magic operates. There is nothing inherently illegal in training a neural network on a data set. Yet that is where style absorption really happens. You may find it scary and/or fascinating; you would not be alone in that. AI can absorb and repurpose artistic style, although as we will see, there is a lot to say about what “artistic style” means in this context. There is no coming back to when only humans could paint.
AI Artist studies
Let me briefly mention so-called AI artist studies. The name is a bit ambitious for what it actually is, but there is a real effort behind it. In short, it is about rendering the same prompt again and again, changing only the artist’s name, so that you can see how it impacts the result. This project is an attempt at documenting the T2I technology in a systematic way, and it is a major resource for prompt engineering. Here is an example for Disco Diffusion.
Surea.i, the artist at the origin of this initiative, has taken some of the heat against style absorption on social media, although he is not affiliated with any of the AI companies. Generating an image takes some time, and collecting this database required a significant effort (many other people participated). The explicit intent was to give knowledge back to the community, and I personally appreciate and support that mindset. Yet it was interpreted by some as an anti-artist contribution. As a Twitter user commented, the artist studies were “not even inherently pro-AI” (see below). As Surea.i replied, the case of artist style absorption could only be made because it was so well documented in the first place. Using artist names in prompts is a practice that both fed into the artist studies, and was nourished by it.
Surea.i was “feeling very sour on AI art”. To which another Twitter user replied: “how hard is it to keep other artist’s names out of your f*cking prompts!” (see tweet below). This reaction surprised me. Is it really about how people write their prompts? What about the tools? What about the models? What about the training data? Sadly, the state of the debate on Twitter tells us more about people’s concerns than about the way AI actors are massaging those concerns with their discourse and tool design. In the rest of this text, I will look into this mess with a bit less innocence.
Where AI knowledge lives
Let me call “knowledge” whatever it is that makes a T2I tool return something that we recognize as a cat when we ask for one. That thing that makes it “know” an artist’s style. I do not like the personification that this wording implies, but I will put that aside. There are three places where AI “knowledge” can live: the model, the training data, and the tool.
First the model. The simplest case. The knowledge certainly lives there, because we do not need to access the training dataset anymore. That is precisely the point. The knowledge is in the weights of the neural network that associates images with text.
Second, the training data. Knowledge certainly lives there too, because that is where it came from in the first place. Training a model is a big investment (it uses so much computing power that it is incredibly slow and expensive) that abstracts the knowledge of the data into something much smaller, the model itself. Running the model is quite easy, while training it is hard. The training reduced and transformed the knowledge, so in some sense it created knowledge too. Nevertheless, a different version of that same knowledge lives in the training data set, in the sense that different data give a different model.
Third, the rest: the apparatus around the model. The argument is less obvious. In order to get convinced that the model “knows” what a cat is, you need to perform the whole image generation process. If you just look at the model as an array of weights, you cannot understand anything. The knowledge is only ever accessible through a performance in which actual images get generated. Therefore, anything that shapes that performance is also knowledge. For instance, how the prompt is processed. Indeed, T2I systems are always layered (DALL-E 2’s architecture for reference). One layer is the text encoder that transforms your prompt into a series of weights that the model can read. Another layer is the diffusion process, and it also shapes the output. And of course, the model is the most important layer, but we have seen that already. Each part can be considered knowledge, even the graphical user interface, in the sense that it shapes the output. Does it seem far-fetched? What comes next may change your mind.
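To make this layering tangible, here is a minimal sketch using the open-source Stable Diffusion pipeline from Hugging Face’s diffusers library as a stand-in. This is an assumption on my part: DALL-E 2’s internals are not public, so the decomposition is analogous rather than identical, and the model identifier below is simply the public Stable Diffusion release.

```python
# A minimal sketch of the layers of a T2I system, using the open-source
# Stable Diffusion pipeline as a stand-in for the general architecture.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Layer 1: the text encoder, which turns your prompt into weights the model can read.
print(type(pipe.tokenizer))      # splits the prompt into tokens
print(type(pipe.text_encoder))   # a CLIP text model producing the prompt embedding

# Layer 2: the diffusion process, which iteratively denoises an image
# under the guidance of that embedding.
print(type(pipe.unet))           # the denoising network: where the trained weights live
print(type(pipe.scheduler))      # the schedule driving the diffusion steps

# Layer 3: the rest of the apparatus, which also shapes what you get to see.
print(type(pipe.safety_checker)) # a content filter applied to the generated image

# The "knowledge" only ever shows up through the whole performance:
image = pipe("a cat").images[0]
image.save("cat.png")
```

Each of those components shapes the output, which is why I count all of them as places where the knowledge lives.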
The different places where AI knowledge lives are not equivalent. Their material differences matter in surprising ways. Here is an example. I used DALL-E 2 to generate an image, and I obtained this. Can you guess the prompt? Try it.
You cannot guess the prompt. Let me reduce it to five possibilities:
- A portrait of Mona Lisa by Leonardo Da Vinci
- dfkljbfdkjb fdkjbkj dfbj
- Dckfc slf
- Smile
- Pic
Is that better? Let me put some blank spaces below while you make a guess.
.
.
.
.
.
.
.
.
.
.
.
.
And the answer is:
A portrait of Mona Lisa by Leonardo Da Vinci.
If you are like me, you probably wonder how it could be so wrong about something so famous. Does it even know what the Mona Lisa is? Well yes, but let’s call this a glitch for now. Out of the four images I obtained, three were what you’d expect, and one was this outlier, as you can see in the screenshot below.
I think that DALL-E totally brainfarted, and I will explain why it happened. But a short remark first. Some of my colleagues thought it made sense, that DALL-E interpreted the prompt as “what would the Mona Lisa be if Da Vinci lived today”, and that the girl looked like the Mona Lisa. I think that this take is a total hallucination driven by a strong desire to be in agreement with the T2I technology. I completely understand this drive, because I believe that these models can tell us something about our* culture, and can be used in the fashion of a divination device (*leaving aside the huge issue of what “our” means here). I tried to give this output a meaning, and I still found it fishy. I interpreted it as the diffusion process landing on a messed-up local minimum for weird optimisation reasons, but even so, it did not square with the excellent photorealistic rendering. If it is a glitch, why is the image so good, aside from not corresponding to the prompt? And if my prompt can be interpreted so freely, then why are the other images so similar? I think that we can all agree on something: this output is essentially ignoring the part of the prompt that says “by Leonardo Da Vinci”. No matter how many people would be asked to label this image, none would ever describe it as being made by Leonardo Da Vinci.
The interesting part is why DALL-E forgot about the artist styling. I only have an incomplete answer, because OpenAI’s systems are heavily blackboxed. But I know this: under the hood, the outlier image has been generated by a different prompt. OpenAI intercepts prompts to improve diversity, as they explained in July 2022. They do not say how they modify the prompt, but it clearly nullified the artist-style part. Should we call this a glitch? Yes, in the sense that their interception broke the meaning of the prompt: I am pretty sure that DALL-E could perfectly draw an African Mona Lisa if prompted properly. I attribute the loss of the styling to a poor automatic interception of my prompt. But at the same time, it is not a glitch in the sense that it is part of the system. In fact, I cannot guarantee that the three other prompts have not been intercepted too. How would I know? If you ask me my prompt, I have nothing else to give you than “A portrait of Mona Lisa by Leonardo Da Vinci”. This is how it would be documented. The part of the “knowledge” that omits the artist styling does not live in the model or the training data; it lives in the rest of the tool. In the content moderation layer, and in the user experience layer.
Which, by the way, tells us that OpenAI could endeavor to prevent artist styling entirely. If they can do it accidentally, they might well succeed in doing it intentionally (to some extent).
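To make the mechanism concrete, here is a purely hypothetical sketch of what diversity-oriented prompt interception could look like. OpenAI has not disclosed its implementation; the trigger condition, the probability, and the list of terms below are my own illustration, not theirs.

```python
import random

# Purely hypothetical illustration of diversity-oriented prompt interception.
# None of these terms or conditions are OpenAI's; they only illustrate the principle.
DIVERSITY_TERMS = ["woman", "man", "Black", "Asian", "Hispanic", "white"]

def intercept(prompt: str, depicts_a_person: bool) -> str:
    """Sometimes append a demographic term when the prompt depicts a person."""
    if depicts_a_person and random.random() < 0.5:
        return f"{prompt}, {random.choice(DIVERSITY_TERMS)}"
    return prompt

# The user only ever sees (and documents) the prompt they typed:
print(intercept("A portrait of Mona Lisa by Leonardo Da Vinci", depicts_a_person=True))
# Depending on the draw, the model may receive something like
# "A portrait of Mona Lisa by Leonardo Da Vinci, Black", and the appended term
# can end up overriding the "by Leonardo Da Vinci" part of the styling.
```

The point of the sketch is simply that the rewriting happens between the user and the model, in the tool layer, and leaves no trace in what the user can document.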
OpenAI mostly shapes DALL-E at the tool level
I have something to say about OpenAI’s way of designing DALL-E. In a nutshell, I find their approach to containing harmful content insincere, hypocritical. Of course, generating harmful content is problematic, but it is most problematic when you get it without asking for it. The typical example is race and gender bias: ask for a CEO and get only white males. And the model within DALL-E has exactly that kind of bias. What they should do is use a better training set, because the knowledge contained in what they used is indefensible (more on that later). What they do instead is patch problems after the fact. This is admittedly better than nothing, but here is the problem: it happens instead of solving the problem. They do not fake it until they make it, they fake it instead of making it. Sure, solving the problem is hard and expensive. But do they even try? Establishing this discussion and exploring it is my road map for the rest of this piece.
Eliza Strickland wrote a concise and informative piece for IEEE Spectrum titled DALL-E 2’s Failures Are the Most Interesting Thing About It (July 2022). It is very clear about what DALL-E 2 is good at (ex: food photography), where it falls short (drawing text, counting, faces when there are multiple people…), how the industry does not feel threatened by it (“A spokesperson for Getty Images, a leading supplier of stock photos, said the company isn’t worried”), and how OpenAI shaped DALL-E:
“OpenAI filtered the data set before training to remove images that contained obvious violent, sexual, or hateful content. … But the researchers have clearly stated that such filtering has its limits and have noted that DALL-E 2 still has the potential to generate harmful material. … the company integrated certain filters to keep generated images in line with its content policy and has pledged to keep updating those filters. Prompts that seem likely to produce forbidden content are blocked and, in an attempt to prevent deepfakes, it can’t exactly reproduce faces it has seen during its training. Thus far, OpenAI has also used human reviewers to check images that have been flagged as possibly problematic.”
Eliza Strickland, DALL-E 2’s Failures Are the Most Interesting Thing About It, July 2022.
From this, I want to highlight the practice of filtering. OpenAI filters the prompts the same way content is moderated on social media. There is even a moderation API that will tell you if your text “violates OpenAI’s Content Policy”. You cannot prompt DALL-E for anything. DALL-E’s content policy stipulates intentions, constraints put in human language, such as “mocking, threatening, or bullying an individual.” But what does it mean in practice? It means that you cannot use certain terms, or combinations of terms, and you cannot get the list. It probably changes over time. But it is opaque by design, like all moderation strategies, if only because it prevents workarounds. Yet workarounds exist, notably through “deliberate spelling mistakes”, as you can see in the tweet below. It shows that the knowledge is indeed in the model, but that the tool is constrained so that you cannot access it, aside from such tricks. One last thing about OpenAI’s moderation policy: it does not say anything about mentioning artist names in the prompts, even though some names are banned, such as “Trump”. This might change, but the styles would still be absorbed by the model, and the list of banned names is virtually endless. And with flabbergasting cynicism, OpenAI’s policy asks you to “not upload images of people without their consent”, or “images to which you do not hold appropriate usage rights”.
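For reference, that moderation API is a plain endpoint that anyone with an API key can query. Here is a minimal sketch, assuming it is the publicly documented moderations endpoint and that a key sits in the OPENAI_API_KEY environment variable; note that this text endpoint is distinct from whatever term lists sit in front of DALL-E itself, and the response fields may evolve.

```python
import os
import requests

# Minimal sketch of a call to OpenAI's text moderation endpoint.
resp = requests.post(
    "https://api.openai.com/v1/moderations",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"input": "a prompt you are about to submit"},
    timeout=30,
)
result = resp.json()["results"][0]
print(result["flagged"])     # True if the text is judged to violate the content policy
print(result["categories"])  # which policy categories were triggered
```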
In her piece, Strickland focuses more specifically on bias, and how OpenAI deals with the issue. In short, here is what she reports:
“OpenAI asked external researchers who work in this area to [assess] the system’s risks and limitations. They found that in addition to replicating societal stereotypes regarding gender, the system also over-represents white people and Western traditions and settings. … [Another] team at OpenAI … found that removing sexual content created a data set with more males than females, which caused the system to generate more images of males. ‘So we adjusted our training methodology and up-weighted images of females so they’re more likely to be generated,’ [an OpenAI researcher] explains. Users can also help DALL-E 2 generate more diverse results by specifying gender, ethnicity, or geographical location using prompts such as ‘a female astronaut’ or ‘a wedding in India.’ But critics of OpenAI say the overall trend toward training models on massive uncurated data sets should be questioned.”
Eliza Strickland, DALL-E 2’s Failures Are the Most Interesting Thing About It, July 2022.
Let me unpack this passage in four points. First, DALL-E is firmly rooted in the dominant Western culture, with all of its “societal stereotypes”, gender and racial biases included. This is hardly surprising, considering that the training data was sourced by scraping the web, a space dominated by Western culture (I will return to that). OpenAI’s own post about bias mitigation features examples of what it means: “A photo of a CEO” returns only males, mostly white; “A portrait of a woman” returns only whites; “A portrait of a heroic firefighter” features only white males; “A portrait of a teacher” returns only females, mostly white; “A portrait of a software engineer” returns only skinny white males. For clarification, this was before bias mitigation was implemented (through prompt interception).
Second, biases interact with each other. Obviously, the whole analytical framework of intersectionality is about this, so not surprising either. But it means that you cannot fix one thing after another, because unbiasing one aspect may create new biases elsewhere. This is exactly what happened when removing sexual content caused an under-representation of women. Which immediately begs a first question: is female representation worth anything, if it is mostly through porn? And that begs a second question: how naïve can you be, to not acknowledge the problem and instead patch it with “up-weighted images of females”? I think that this case makes it clear that one cannot fix culture one bias at a time, that is just not how any of this works. Yet it seems that OpenAI’s strategy is to stick to their initial plan of patching one flaw after the other. But this cannot work, because you cannot take the bias out of the culture, you can only change the culture. Bias is culture, and culture is bias all the way down. Any bias is the flip side of a challenged cultural norm, and the same way cultural norms are heavily entangled, biases are.
Third, an important argument is voiced by the OpenAI researcher interviewed: users can engineer prompts that get them anything they want. They can get a black female CEO as soon as they ask for it. Let me name this argument: there is a prompt for anything (TIAPFA). On the one hand, the argument is essentially legit. In most situations, the user can compensate for any form of bias through prompt engineering. That is why prompt interception works in the first place. Which means you can also accentuate a bias if you want. You shape your cultural norms. This argument puts the responsibility on the prompt engineer (the user). But it does not help unaware people who ask for “a photo of a CEO”. This is why OpenAI takes additional measures such as prompt interception: it helps “generate more diverse results”. Retain this: the TIAPFA argument and prompt interception live in different worlds. They address two distinct issues, and to some extent, they are incompatible. Indeed, if TIAPFA, intercepting prompts defeats the point! It disrupts the user’s (respons-)ability to set their own cultural norms.
Fourth, the “critics of OpenAI” question something else entirely: that the models are trained on “massive uncurated data sets”. Once again, according to the TIAPFA, it does not matter (users set their cultural norms). But there is more to this than TIAPFA, which is why critics bother, and also why “OpenAI filtered the data set before training to remove images that contained obvious violent, sexual, or hateful content.” OpenAI is doing some curation by filtering the data set, but not by sourcing it better. Strickland’s article is also clear about why: efficient models require humongous data sizes, and as an independent researcher observes, even “Wikipedia-based data sets spanning [about] 30 million image-text pairs are somehow ad hominem declared to be ‘too small’!”
There is a problem with the training data. I will get to that point in due time. For the moment, let us acknowledge that it is the elephant in the room. The critics focus on this. The TIAPFA argument is supposed to nullify it by shifting the responsibility to the user, but in practice we see that even OpenAI takes measures to deal with the most nefarious aspects of the training data (porn and violence). And at the same time, OpenAI’s measures are anything but a shift to another training data set. This is because models need to be trained on the biggest data sets to be efficient. At the end of the day, the only way to get more data for cheap is to lower your standards.
In short, OpenAI uses the big dirty data set, which reproduces all the features of Western culture, including prejudicial ways most people form their opinions (aka “biases”), but without porn and violence, and then tries to mitigate the problems as an afterthought through tool design (term-based prompt moderation and prompt interception) while shifting responsibility to the user via the TIAPFA argument. By comparison, their competitor Stability.ai, who released the T2I tool Stable Diffusion (currently in beta), uses zero moderation or prompt interception, and claims to be freely and transparently releasing the model itself to academics (although the request I made is still pending, wait and see). In this remarkably uncritical video interview, Emad Mostaque, the f(o)under of the company, opposes OpenAI’s “paternalistic” approach. The video has the merit of letting him make his points freely. In the section where he is asked about the eventuality that his model is accused of producing harmful content, his response boils down to owning the TIAPFA argument while criticizing OpenAI’s interventionism:
“of course, humanity is horrible and they use technology in horrible ways, and in good ways as well. … The reality is that people get used to these models. They use them one way or another, and restricting them means that you are becoming the arbiter. … What [OpenAI] is saying is AI for us, and our clients (because it’s expensive to run these things), not for everyone else. … What they are really saying is we don’t trust you, as humanity, because we know better. I think that’s wrong.”
Emad Mostaque, The Man behind Stable Diffusion, August 2022
I have focused enough on OpenAI for this piece, but I cannot move on without pointing to Karen Hao’s remarkable and extensive piece The messy, secretive reality behind OpenAI’s bid to save the world (2020). It is critical. To give you a taste, the article basically opens with the following statement. “Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.” More specifically, Hao looks into OpenAI’s claims to “distribute the benefits [of AI to] all humanity”, and the company’s approach to the social impact of its technology.
“The leadership speaks of this in vague terms and has done little to flesh out the specifics. … ‘This is my biggest problem with OpenAI,’ says a former employee, who spoke on condition of anonymity. ‘They are using sophisticated technical practices to try to answer social problems with AI,’ echoes Britt Paris of Rutgers. ‘It seems like they don’t really have the capabilities to actually understand the social. They just understand that that’s a sort of a lucrative place to be positioning themselves right now.’ Brockman [(co-founder and CTO)] agrees that both technical and social expertise will ultimately be necessary for OpenAI to achieve its mission. But he disagrees that the social issues need to be solved from the very beginning. ‘How exactly do you bake ethics in, or these other perspectives in? And when do you bring them in, and how? One strategy you could pursue is to, from the very beginning, try to bake in everything you might possibly need,’ he says. ‘I don’t think that that strategy is likely to succeed.'”
Karen Hao, The messy, secretive reality behind OpenAI’s bid to save the world (2020)
The Fantastic New World of AI Art Generators and Why Their Critics Get It All Wrong
The essay. The Fantastic New World of AI Art Generators and Why Their Critics Get It All Wrong, by Daniel Jeffries (August 2022). AI artists like Surea.i present this pretty long piece as the authoritative reference about T2I technology. Possibly because it fiercely defends AI art as a practice, and fights every point of criticism one by one. But it is worth mentioning that the argument is mostly sound and grounded, even though I have a problem with the essay as a whole. It accurately represents the AI artist side of the debate, and I want to unpack it now. And after that, I will conclude about the damn training data and what they contain.
“Are these new tools stealing or borrowing art? The short answer is simple: No.”
Daniel Jeffries, The Fantastic New World of AI Art Generators and Why Their Critics Get It All Wrong, August 2022.
You saw that coming. I retain six arguments from that piece, summarized below with a quote.
T2I tools do not copy. “The first misconception is that these bots are simply copy-pastas. … OpenAI found early versions of their model were capable of ‘image regurgitation’ aka spitting out an exact copy of a learned image. The models did that less than 1% of the time but they wanted to push it to 0% and they found effective ways to mitigate the problem. … They fixed it by removing low quality images and duplicates, pushing image regurgitation to effectively zero. Doesn’t mean it’s impossible but it’s really, really unlikely”
Clickbait overdramatizes. “Calm and nuanced doesn’t sell magazines or generate clicks, but sensational headlines like Engadget’s ‘Is DALLE-2’s Art Borrowed or Stolen?‘ do.”
The web challenges norms such as consent. “There’s a growing fear of AI training on big datasets where they didn’t get the consent of every single image owner in their archive. This kind of thinking is deeply misguided and it reminds me of early internet critics who wanted to force people to get the permission of anyone they linked to. … what a colossal waste of time and creativity!”
Symmetry between artificial and human agents. “Engadget author, Daniel Cooper, writes ‘These systems did not, however, develop an eye for a good picture in a vacuum, and each GAI has to be trained.’ Well people don’t learn in a vacuum either. Don’t people study the artists that came before them? … AI learns just like we do, from mimicry and studying the world.”
Ontological discomfort causes irrelevant criticism. “All this goes back to people’s revulsion to determinism and math at the root of life. We don’t like that people’s style can be boiled down to math.”
TIAPFA (there is a prompt for anything), therefore the responsibility is on the user. “It seems that there are much simpler fixes than padding prompts [like OpenAI does]. People can add whatever gender, ethnicity or whatever else they like to the prompt and get precisely what they want. That’s the beauty of text prompts. Occam’s Razor applies here. Simpler is better. … As usual, it’s not machines that are the problem in the world, it’s people.”
I find this take quite aligned with the position of Emad Mostaque, the founder of Stability.ai. It has libertarian accents that I do not buy, even though I find them widespread among AI artists. I do not buy the third argument in particular, according to which some of those pesky social norms “could kill AI before it really develops into something truly incredible and beneficial, cutting off breakthroughs in science and art and mathematics itself.” The argument is not only absurd (AI could symmetrically develop into something horrible and harmful), but also circular. Indeed, the argument stems from the assumption that T2I technology will be beneficial to artists. Therefore it does not conclude that AI respects artists: it postulates it. This is just a cheap way to evacuate the whole concern about style absorption. I would not make such an argument in a legal battle.
Jeffries’ argument is entirely contained in this quote: “we have to understand where the idea that DALLE or Midjourney are ripping off artists comes from in the first place.” He gives us a series of reasons why we should not be afraid that T2I tools “are ripping off artists”, most of which are sound. He deconstructs the roots of this moral panic about T2I technology; fair enough. But he does not establish whether or not that technology steals styles from artists. He asserts it with confidence, but he does not make a positive argument. The closest I could find boils down to two things. First, style absorption is not robbery because AI does not copy. I find it a childish argument. And second, art is just maths and maths belong to everyone, live with it:
“It’s really astonishing how well the machine whips up brand new people in seconds and how well it understands the deeper characteristics of these amazing artist’s styles. But let’s be honest, there’s also something unnerving about it too. I understand the anxiety some folks feel about it. There’s something deeply unsettling about math generating an infinite variety of us.”
Daniel Jeffries, The Fantastic New World of AI Art Generators and Why Their Critics Get It All Wrong, August 2022.
Training data: the Laion in the room
CONTENT WARNING: here we step into NSFW territory. I will not spare you anything. Porn is very much part of web culture. But most importantly, what you will see is already baked into the model. It is now time to look the beast straight in the eyes, and see what it is made of.
To begin with, the artist styles, the entangled biases, and all the linkages of meanings that make the T2I technology work, exist as features of the knowledge that lives in the training data. Then, during the training process, they transfer to the model. And from there, through the diffusion process, they can be leveraged to generate images. It all starts with the training data.
Here is what I would like to be writing at this point: “some of the data sets available are cleaner than others, and AI companies have made different compromises between performance and quality, which explains why T2I tools exhibit different behaviors when it comes to bias.” But everyone basically uses the same data set, because it is the biggest, and because building T2I tools is essentially a race for model performance. That data set is called Laion, a portmanteau of the predator and “AI”, which I find painfully appropriate.
Disco Diffusion was trained on Laion. Midjourney was trained on Laion. DALL-E was trained on Laion. Stable Diffusion was trained on Laion, and in fact “Stability AI funded the creation of LAION 5B” (TechCrunch). Laion is the foundation of all the publicly available T2I tools.
The Laion data set consists of image-text pairs scraped from the web. Crawler bots have been deployed to find images on the web and the text that describes them. That text might be in the HTML description of the image, or as a caption, or next to it in the page, or even in the image itself, when it features text. A variety of techniques have been used to extract that text (more on that here). The approach was to harvest broadly and not curate anything.
There are two main Laion data sets. The older and smaller one is LAION-400M. It features 400 million image-text pairs. Those have been “extracted from the Common Crawl webdata dump and are from random web pages crawled between 2014 and 2021.” The more recent and bigger one is LAION-5B, featuring 5.85 billion image-text pairs. It was also extracted from the Common Crawl data, more extensively I suppose. “Unsuitable” pairs are removed: text too small, image too big, duplicates… (more info there). And on top of that, a bunch of useful things have been computed as part of the data.
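Concretely, the Laion team releases this as metadata files: the images themselves stay on the original websites, and only their URLs and captions are stored. Here is a minimal sketch of what inspecting one such file looks like; the file name is illustrative, and the column names follow the LAION-400M release, so the exact schema may differ for other versions.

```python
import pandas as pd

# Minimal sketch: inspect a locally downloaded slice of the Laion metadata.
df = pd.read_parquet("laion400m-meta-part-00000.parquet")  # illustrative file name

print(df.columns.tolist())
# Typically something like:
# ['SAMPLE_ID', 'URL', 'TEXT', 'HEIGHT', 'WIDTH', 'LICENSE', 'NSFW', 'similarity']

# Each row is an image-text pair: the caption scraped next to the image...
print(df.loc[0, "TEXT"])
# ...and the URL where the crawler found it.
print(df.loc[0, "URL"])

# A naive string probe of the captions (unlike the search engine, this does not
# go through the CLIP embedding, it just matches characters):
print(df[df["TEXT"].str.contains("Simon Stålenhag", case=False, na=False)].head())
```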
The image-text pairs come from web pages crawled by the Common Crawl project. How is this set delineated, and who chose what got in or not? As crazy as it sounds, I could not obtain this information, as if the question itself was pointless. The Wikipedia page does not say anything about that. The FAQ of Common Crawl does not address my question. The data release announcements do not say a word about it. Surprisingly, Google features the question, but unfortunately it answers on a technical level, not on a curation level (see below).
From the very start, the most basic information necessary to assess the content of the data is missing. The whole industry has agreed not to look in that direction, although academics have been demanding that information specifically. Here we are again, reclaiming situated knowledges. Let me just copy-paste what I wrote in a previous blog post: “there is always a method. We must not hide it, because we must account for its flaws. Data is never raw, it is always obtained, and it comes with its own biases.” Or to use Donna Haraway’s own words, this “unregulated gluttony” that puts into practice the myth of “seeing everything from nowhere” (which she calls “the god trick”) “fucks the world to make techno-monsters” (and she wrote that in 1988). If we had a positive description of what was crawled, we could better understand how the models were shaped. But we do not have that.
I will show you why it matters, and this will lead us down a peculiar rabbit hole, so bear with me. It all starts with an amazing tool offered by the Laion team: a search engine into their data set. Try it! If you do not change any settings and just type an expression, it will retrieve image-text pairs that match it, according to the CLIP embedding (I will explain shortly). If you type something that exists in our cultural space, you have good chances of finding it (ex: “Shrek”). If you type something that does not exist (ex: “A blue Shrek”) you will not find it, because the image is absent, but you will find images as close as possible to your target (see below). The search engine differs from the T2I generators in that it does not invent images, but it still shares an intelligent layer: the CLIP embedding. In short, a machine learning model of the same kind as those in T2I tools (a CLIP model) has been used to place the image-text pairs in a latent space. Your query is also mapped to that latent space, and that is how the results are found by the search engine. The images it gives you are those that are close, in the latent space, to your query. This is why the terms of your query are not necessarily featured in the captions.
The CLIP model is bundled with the image-text pairs in the data set. You can even get the KNN graph: for each image-text pair, which are its closest neighbors. This is really important, because it allows you to look into the data set the same way T2I technology does, through a CLIP embedding. You can get a feeling for how the model “thinks”. It is much easier here than through the diffusion model, in the T2I tool itself. And that is exactly what we are going to do now.
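Before diving in, here is a minimal sketch of that retrieval principle: embed a text query with CLIP, then rank precomputed image embeddings by similarity. It assumes you have downloaded a slice of the precomputed embeddings that the Laion team distributes (the file name below is illustrative), and it uses OpenAI’s clip package.

```python
import clip   # pip install git+https://github.com/openai/CLIP.git
import numpy as np
import torch

# Minimal sketch of the retrieval principle behind the Laion search engine.
model, _preprocess = clip.load("ViT-B/32", device="cpu")

with torch.no_grad():
    tokens = clip.tokenize(["big"])              # the text query
    q = model.encode_text(tokens)
    q = q / q.norm(dim=-1, keepdim=True)         # normalize for cosine similarity

# Precomputed, normalized CLIP embeddings of the images (distributed by Laion);
# the file name is illustrative.
image_embeddings = np.load("img_emb_0000.npy").astype(np.float32)

scores = image_embeddings @ q.squeeze(0).numpy()  # cosine similarity to the query
nearest = np.argsort(-scores)[:20]                # the 20 closest image-text pairs
print(nearest)                                    # row indices into the metadata file
```

This is the same kind of matching the search engine performs, which is why the query terms do not need to appear in the captions of the results.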
My case will consist of a seemingly innocent query: “big”. What do you think the Laion search engine will return? Here is, for reference, what we see in Google images: Notorious Big (the artist), the word “big”, the movie Big, big things (a pumpkin…), a big mac (the burger), Big Ben (in London)… You get it.
The results the Laion search engine gives you have only one thing in common: the word “big”. The rest consists of teddy bears, balloons, strawberries, and clothes. What makes those things “big”? Can you explain the relation? Or do you think there is none? I have a hypothesis, but to understand it we must pay attention to the settings.
By default, the search engine checks three settings that profile the data set in the most charitable way. The “Safe mode” hides image-text pairs that a dedicated model has flagged as, basically, porn. Uncheck it. Similarly, “Remove violence” hides violent content: uncheck it too. And finally, “Enable aesthetic scoring” puts the nicest images at the top of the results page. Uncheck it too. The aesthetic scores are based on a sample of images manually rated by people according to how nice they look, and then generalized to the whole corpus.
Uncheck these three options to see what really is in the LAION-5B data set. The “big” query will give you this: mostly white women showing their boobs, and if you scroll, the trend accentuates.
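In data terms, the three default settings amount to something like the following sketch. The column names are hypothetical stand-ins, but the important point holds either way: all three signals are themselves model predictions, not human judgments.

```python
# Hypothetical sketch of what the three default settings amount to.
# Column names are stand-ins for the NSFW prediction, violence prediction,
# and aesthetic score that accompany the data.
def charitable_view(df):
    safe = df[df["predicted_nsfw_probability"] < 0.5]          # "Safe mode"
    calm = safe[safe["predicted_violence_probability"] < 0.5]  # "Remove violence"
    return calm.sort_values("aesthetic_score", ascending=False)  # nicest first

def raw_view(df):
    # All three boxes unchecked: this is what the models are actually trained on.
    return df
```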
This is the real face of our culture as performed on the web, and that is why situatedness matters. The web is full of porn and violence. Who would deny that we are interested in sex and violence? This is not even specific to Western culture, although the skin tone of those women is. The web tells us that if there had to be only one thing that is big, that would be boobs. Sex is so prevalent on the web that it dominates even an innocent query like “big”.
I did not discover that query by myself; I got it from Abeba Birhane and her co-authors’ work on assessing the LAION-400M dataset’s bias (as we have seen, it also applies to LAION-5B). She unpacks her paper in the Twitter thread below, which is easier to parse. A digest from her thread follows.
“Images: large scale vision datasets are plagued with problems including curation biases, inclusion of problematic content in the images, as well as contributing to the gradual erosion of privacy. …
The CommonCrawl: among other things, contains ~17.78% hate speech content. …
The LAION-400M dataset emerges from this landscape containing hundreds of millions of Image-caption pairs parsed from the Common-Crawl dataset and filtered using a previously Common-Crawl trained AI model; CLIP. …
Even the weakest link to womanhood or some aspect of what is traditionally conceived as feminine returned pornographic imagery. For example, when searching for descriptive adjectives such as “big” and “small”, it returned many porn images. …
The specific semantic search engine version meant to fetch images from LAION-400M not only amplified hyper-sexualized & misogynist representations of women, but also presented results that were reminiscent of Anglo-Euro-centric, & potentially, White-supremacist ideologies. …
The CLIP-paper authors themselves outlined that images of ’Black’ people had an approximately 14% chance of being mis-categorized as [‘animal’, ‘gorilla’, ‘chimpanzee’, ‘orangutan’, ‘thief’, ‘criminal’ and ‘suspicious person’] in their FairFace dataset experiment. …
Finally, we acknowledge the grassroots aspect of the endeavor and commend the LAION-400M creators for providing a window into this world and encourage them to keep the dataset accessible to researchers. We don’t believe retraction of LAION-400M is a viable move.
Abeba Birhane, Twitter thread, October 2021.
With this in mind, let’s prompt “big” into Disco Diffusion (trained on Laion). What do we see? The images are deformed, but I do see (clothed) boobs, asses, penises, and vaginas. I did not cherry-pick those results, they are just the first ones I generated. We understand why porn is regurgitated because we have seen what Laion contains, but I think that out of context, this result would be quite surprising.
What about OpenAI’s DALL-E? Here is what I obtained: two pictures of a giraffe, and two pictures with no connection to the meaning of “big” whatsoever. Giraffes are tall, not big. All of this smells a lot like prompt interception to me.
Can we agree that neither Disco Diffusion nor DALL-E has a good understanding of what “big” means? Within the model, “big” is associated with porn, and if you try to remove the porn from “big”, like OpenAI does, you are left with meaningless associations like winter and ants. It also happened when we looked for “big” in Laion with the default settings: since sexual content was filtered out, there was not enough meaning around “big” to counterbalance the ranking by aesthetic score, and we just obtained what people find nice: teddy bears, balloons, and strawberries. Unless CLIP retained a similarity between balloons and boobs, and why not, between strawberries and vaginas. It is genuinely hard to rule out that possibility.
What happens with bias and harmful content happens with everything else in Laion. Porn and violence attracted attention because they cause harm, and academics took the time to investigate. AI artists had another agenda, but in many ways they discovered the same thing. Take for instance the case of artist Anne Geddes (see below). The rendered images feature babies, although the test prompts do not ask for them. This is because she specializes in pictures of babies, as we can check in the Laion search engine (see further below).
In this case it is not the style that the model has absorbed, it is the subject. I draw a distinction between what is represented and how it is represented. For me, “an old man by Anne Geddes” refers to one of her photos but with an old man instead of the baby. But the model does not make such a distinction, which is why it draws a baby when you ask for a house. It was already the case with Simon Stålenhag, as his style was as much about how he paints as about what he paints. The artist studies are full of these effects: Appollonia Saintclair gives you butt-naked women, Audrey Kawasaki hair and face elements, Coles Phillips a woman (always the same), Daniel Ridgway Knight villagers, Giuseppe Arcimboldo fruits and vegetables, George Grosz troll-like figures, Hans Bellmer fat flesh, and Kaethe Butcher diaphanous naked silhouettes. Disco Diffusion mimics the pictorial style as much as the typical subject of the artist, even when you specify another subject. It blends and merges the two subjects, yours and that of the artist, together.
What Laion knows about these artists is the part of their work that is available on the web with their name attached. Some of these artists are famous, and their work is spread in many places. But for most contemporary artists, it is different: their portfolio has been absorbed. This is why “trending on Artstation” works so well as a modifier. ArtStation is “the leading showcase platform for games, film, media & entertainment artists,” according to their LinkedIn profile. It is a place where amateurs, semi-pros and professionals share their paintings. The purpose of the website is to disseminate their portfolios. ArtStation is basically a big database of well-described images, because that is what SEO (search engine optimization) demands. This is a perfect data trove for Common Crawl, and from there, Laion. Your online portfolio gets you into Laion.
You can basically go on ArtStation, click on a picture at random, get the artist’s name, and put it in Laion to see what you get. I just did that, landing on a concept artist named “Ismail Inceoglu”, and indeed, Laion knows him. And not only does it find images with his name attached, it also finds images without:
And it is not just ArtStation. That platform became popular because it matches what the AI artists want to obtain. But there are other similar platforms that have been harvested in Common Crawl and thus ingested by Laion. They might not be as useful to AI artists, but their content still contributed to shaping the CLIP latent space and the knowledge in Laion. All those porn images have to come from somewhere, right?
I first sourced a list of the top image repositories for artists, and I tried them all in Disco Diffusion: DeviantArt, Behance, Dribbble, CGSociety, ArtStation, Tumblr, Pinterest, Drawcrowd, Pixiv, Ello.co, Twitch, Concept Art World, Our Art Corner, PaigeeWorld, Newgrounds, and Virink. As you can see below, Disco Diffusion (in fact, Laion) has learned the “style” of each of these platforms too. Colorful mockups on Behance and Dribbble, 3D renderings on CGSociety, but also Tumblr regurgitating soft porn, and Twitch and Virink screenshots. In some sense, each platform delineates a specific space for image generation. Some can be considered safe spaces where sex and violence are virtually absent, like Behance and ArtStation. But the learned associations are still lurking in the model, and “big” keeps relating to boobs despite those safe spaces. It is just that other influences, such as “trending on Artstation” or “by Simon Stålenhag”, dominate the diffusion process and ensure that the generated image lands in an acceptable place. The slope toward porn is still ingrained in the model; we just found stronger influences to overcome it. TIAPFA; but as we have seen, that is not enough.
What about the opposite: unsafe spaces harvested by Common Crawl and also included in Laion? I sourced a list of porn subreddits (just the top 10), and I checked what Disco Diffusion knows about them: r/GoneWild, r/NSFW, r/NSFW_GIF, r/RealGirls, r/holdthemoan, r/BustyPetite, r/cumsluts, r/LegalTeens, r/PetiteGoneWild, and r/sex. We basically get (deformed) porn, in different flavors, except for “holdTheMoan” and “legalTeens” for some reason. Not only Laion, but also Disco Diffusion (with its default models) very much knows all of that.
TIAPFA. The same way we can tinker with prompts to get less harmful content, we can tinker with them for more. It should be clear at this point that prompt modifiers like “ArtStation” or artist names are not the “magic words” Ted Underwood describes in the Vox video. The romanticism around prompt engineering should have started to wear off at this point. Behind the magic, we find internet culture, with its beauty but also a whole lot of toxic content.
That sounds crazy, but we basically do not know what Laion contains. It is so big that we have close to zero assessment of what it looks like from a cultural perspective. It is entirely possible that huge problems lurk in it, and that we just have not discovered them yet. The industry is just happy with its convenient ignorance, and everyone’s strategy boils down to damage control. At the very least, we should be serious and cautious about exploring those latent spaces, and limiting their industrial applications. Laion states: “we … do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress” (emphasis by them). Nice, but this is a fig leaf. The T2I technology based on Laion is already at a stage where AI companies compete for the image generation market.
More than the pornographic and violent images themselves, which can be more or less filtered out, what concerns me is the harmful content in the captions themselves. It is the strong association of “big” with boobs that is harmful, not the boob pics alone. The toxicity lies in what a woman is for the model. The association with certain words obviously comes from the caption (the text extracted to label the image). Who gets to write those captions? You may think that the data set is so big that there is no meaningful answer beyond “many people”. Yet there is de facto a situated answer, and it is basically “internet culture”. Because not everyone has the same interest in captioning images. Here is an example. I noticed a structured pattern in certain captions. Unfortunately, the “search by text” feature is broken at the moment, so I cannot take a simple screenshot. But I compiled a few of those below. Pay attention to the text.
Those captions consist of a rating, a score, a series of tags, and a user. The rating does not have to be “explicit”, it can also be “questionable” or “safe”, as you can see below.
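To make that pattern concrete, here is a small sketch of how one could flag such captions in the data set; the example strings are illustrative, not copied from Laion.

```python
import re

# Sketch: flag captions that follow the image-board pattern described above
# (a rating, a score, a list of tags, a user). Example strings are illustrative.
BOORU_PATTERN = re.compile(
    r"rating:\s*(safe|questionable|explicit)\b.*\bscore:\s*\d+.*\buser:\s*\S+",
    flags=re.IGNORECASE | re.DOTALL,
)

captions = [
    "Rating: Questionable Score: 42 Tags: 1girl long_hair swimsuit User: example_user",
    "Vintage poster of the Eiffel tower, Paris, France",
]

for caption in captions:
    if BOORU_PATTERN.search(caption):
        print("image-board style caption:", caption)
```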
I searched for a reduced version of those captions on Google to try to find where they come from (1, 2, 3, 4, 5, 6, 7, 8). I did not find the same images, but I found two websites: Yande.re and Konachan. Those are two image boards dedicated to anime and manga, and they look so similar to me that they might have the same engine behind the scenes, although they seem to contain different things. Those communities tag images obsessively (example below). Common Crawl and Laion give a disproportionate influence to those communities. Because they publish so many images, so precisely tagged, they get to weigh a lot in the associations ingrained in the model.
This is not about NSFW content. Following the naïve policies we have seen deployed by OpenAI, we could just filter out the “explicit” content, and maybe the “questionable” one. It is even easier here, because those communities have done the tagging: you just keep what they label “safe”. But at the same time you let them define what those categories mean, and you also let them define the descriptions of the “safe” images. Do we want those people’s way of describing women to be overrepresented in our models?
The T2I tools based on Laion are as poorly behaved as kids raised exclusively on internet culture, its darkest places included. Sure, we can now use subsets of Laion that supposedly contain no porn and violence. It does not work great yet, but it will be improved in the future. Even so, the remaining text-image associations keep being shaped by internet culture and its toxicity, because the toxicity does not only lie in the images. AI is not trained on data fallen from the sky, it is not trained on the knowledge of mankind; it is trained on a fucked up dataset crawled half-randomly from the web over a decade, without any form of validation, without even the most basic documentation. It is just that everyone in the industry has agreed not to ask the question. No questions, no problems. But no problems, no solutions.
The absorption of artist styles is just a part of a generalized practice, in the machine learning community, that consists of letting whatever happens in the digital public space shape the models. One side of it is the morality of harvesting entire portfolios. Another side is the cultural impact of reinforcing the influence of those portfolios in our cultural space, through image generation. Yet another side consists of the consequences of those effects on users unaware of the problem. There is a prompt for anything, but only for those who have the appropriate literacy. And this is just for the most romantic corner of that technology: AI art. The same applies to porn and violence, and it is a whole lot less fun.
I believe that the most responsible thing to do is exactly what Google did with Imagen. Bravo Google, I know how frustrating it must have been for those who have worked hard on this. Here is the relevant part of the statement:
“There are several ethical challenges facing text-to-image research broadly. We offer a more detailed exploration of these challenges in our paper and offer a summarized version here. First, downstream applications of text-to-image models are varied and may impact society in complex ways. The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access. Second, the data requirements of text-to-image models have led researchers to rely heavily on large, mostly uncurated, web-scraped datasets. While this approach has enabled rapid algorithmic advances in recent years, datasets of this nature often reflect social stereotypes, oppressive viewpoints, and derogatory, or otherwise harmful, associations to marginalized identity groups. While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.”
https://imagen.research.google/
And I think that the second most responsible thing to do is to allow absolute transparency to the academic researchers and journalists willing to investigate T2I systems in depth. And, why not, help them, for instance by funding them.
Addendum early 2023: some famous artists are currently publicly complaining that their work was stolen by Stable Diffusion. Check out their website: https://stablediffusionlitigation.com/