I Went Viral in the Bad Way

A few lessons from my mistake


Fair warning: What follows is navel-gazey. Last week, in a particular corner of the on-demand outrage service that is Twitter, I was, for a few hours, the dreaded Main Character. This was entirely my fault. And since I write a fair amount about viral backlash cycles and social-media dynamics, I figured it might be helpful to explore the whole thing from my vantage point.

Last week’s newsletter was a long interview with a defector from Infowars who’s very critical of Alex Jones. Instead of selecting a photo or illustration from Getty Images to go with the story, as I do for most of my newsletters, I decided to try something different and use an AI art tool to come up with the story’s accompanying image. Using Midjourney’s Discord interface, I submitted the prompt “Alex Jones inside an American Office under fluorescent lights.” In a few minutes it spit out a distorted but still-recognizable rendering of Jones in a grim office setting, surrounded by paper. Jones looked miserable, and given that the interview was about how he manages to inflict misery on countless people daily, I thought it worked well as an illustration for the story. I also asked the AI tool to imagine “Alex Jones trying to fix a paper jam.” The result struck me as darkly funny, and not at all generous to Jones, so I included it in the piece.

A day later, an artist–slash–art director saw the post and noticed that I credited “AI art by Midjourney” in the captions. In a series of tweets, they wrote that they were frustrated and a little shocked that a national magazine like The Atlantic was using a computer program to illustrate stories instead of paying an artist to do that work. They were also concerned that this particular use case could give other publications an excuse, or at least an idea, to cut corners on an art budget. The tweets made some assumptions about my intent, and the magazine’s, but they were by no means abusive or rude. The second I saw the tweets in my feed, I understood where the person was coming from. So I reached out to clarify.

But I was too late. In the time between my tweet of clarification and the time the tweeter saw it, their original complaint-tweet had already gotten picked up by others, who quote-tweeted it mostly to express the view that my editorial decision was “fucked up.” The screenshot of my article with The Atlantic’s logo and the AI art seemed, to many, to tell a definitive story.

Once the original tweeter saw my explanation, they realized that they had made a few incorrect assumptions about my choice to use AI art. [Inside-baseball alert: As my pieces are primarily email posts with a web component, they’re a part of a different editorial workflow and do not go through the magazine’s art department. Instead, it is my job to illustrate the post using a Getty Images account provided to me by the magazine.] TL;DR: The Jones post was never a candidate for a commissioned illustration. The original tweeter very graciously apologized for their misunderstanding and deleted their tweet, and I apologized for being obtuse and not realizing the implications of my decision to use that illustration with zero clarification. It was, all in all, a good, productive internet exchange. Or so I thought.

I closed my phone for a few hours and reopened it to a direct message that said, “I’m really sorry this is happening to you.” Never a great sign! I opened Twitter to see that the initial screenshot of my newsletter was now getting shared at an even higher volume than before. Another (understandably frustrated) person had seen the first tweet and wanted to vent as well. By the time I saw it, the new tweet had about 8,000 likes and a hundred or so quote tweets, most of them quite angry at me or the magazine. My open DMs were well stocked with people alternately calling me a piece of shit and asking why everyone on Twitter was calling me a piece of shit. I’m not looking for sympathy here. This is a pretty common consequence of writing or creating anything online once you have a social-media account over a certain number of followers (about 10,000 is usually where people start to notice a change in how audiences treat them), and this was in no way even remotely equivalent to the harassment that many people endure daily just for doing their jobs. Moreover, I very much understand why people were mad.

I even wrote about some of the thorny problems surrounding AI art in a newsletter shortly before my Jones mishap, including the following quote about an AI art generator: “DALL-E is trained on the creative work of countless artists, and so there’s a legitimate argument to be made that it is essentially laundering human creativity in some way for commercial product.” And yet, I still somehow managed to miss the implications as they related to my very own work. Not great, Charlie!

Since then, I’ve reached out to a few of the artists offering criticisms on Twitter to try to get a deeper, good-faith understanding of the threat of AI art, as they saw it, as well as to understand a bit more about what they felt when they saw the screenshot of my piece and why they chose to engage online. None of them got back to me, but I did have a conversation with Matt Bors, the cartoonist, writer, editor, and founder of the excellent publication The Nib. Bors has some concerns with AI art tools like Midjourney and DALL-E and echoed some of the frustrations I’d heard on Twitter.

“Technology is increasingly deployed to make gig jobs and to make billionaires richer, and so much of it doesn't seem to benefit the public good enough,” he told me. “AI art is part of that. To developers and technically minded people, it’s this cool thing, but to illustrators it’s very upsetting because it feels like you’ve eliminated the need to hire the illustrator.”

Bors argues that what seems most alarming (and this was borne out in a lot of the tweets I saw as well) is the speed at which the technology is improving. “It has its own style right now and there are flaws, but it is only going to get better,” he said. And, set in the broader context of smaller art departments and budgets, the emergence of on-demand drawings feels like a punch in the face.

“It’s not like there’s a ton of illustration happening online,” Bors continued. “Go to a website and most of the image content is hosted elsewhere. Articles are full of embedded tweets or Instagram posts or stock photography. The bottom came out of illustration a while ago, but AI art does seem like a thing that will devalue art in the long run.”

I told Bors that what I felt worst about was how mindless my decision to use Midjourney ultimately had been. I was caught up in my own work and life responsibilities and trying to get my newsletter published in a timely fashion. I went to Getty and saw the same handful of photos of Alex Jones, a man who, I know, enjoys having his photo plastered everywhere. I didn’t want to use the same photos again, nor did I want to use his exact likeness at all. I also, selfishly, wanted the piece to look different from the 30 pieces that had been published that day about Alex Jones and the Sandy Hook defamation trial. All of that subconsciously overrode the complicated ethical issues around AI art that I was well apprised of.

What worries me about my scenario is that Midjourney was so easy to use, so readily accessible, and so effective at solving my problem (abstracting Jones’s image in a visually appealing way) that I had little time or incentive to pause and think it through. I can easily see others falling into this the way I did.

For these reasons, I don’t think I’ll be using Midjourney or any similar tool to illustrate my newsletter going forward (an exception would be if I were writing about the technology at a later date and wanted to show examples). Even though the job wouldn’t otherwise go to a different, deserving human artist, I think the optics are shitty, and I do worry about having any role in helping to set any kind of precedent in this direction. Like others, I also have questions about the corpus used to train these art tools and the possibility that they draw on a great deal of art from both big-name and lesser-known artists without any compensation or disclosure to those artists. (I reached out to Midjourney to ask some clarifying questions about how they choose the corpus of data used to train the tool, and they didn’t respond.)

Now, because this tiny ordeal revolved around a lot of tweets, I feel compelled to point out some things I noticed over the last few days (besides the fact that I very much should have seen this coming and was tweeting in a prison of my own design):

Some main-character situations seem to arise from people having a blend of bad information and wrong assumptions, but also legitimate grievances.

Often, the less information we have about something online, the easier it is to be angry about it. I found it surprisingly dispiriting to watch so many people be very, publicly upset about a situation that they had incomplete information about. The initial assumption, that The Atlantic was now going all-in on AI-art illustrations, would have been newsworthy had it been true. And it would have been genuinely upsetting if, as some wrongly assumed, the decision had come from corporate overlords interested in stripping the editorial operation down to the studs to shave costs. The impulse behind the anger was legitimate, but it was frustrating to watch so much energy being expended on a problem that, at present, wasn’t exactly real.

It is extremely difficult, if not impossible, to correct a wrong assumption after it has achieved critical mass without amplifying the initial wrong assumption.

I knew this going in, and yet I still felt compelled to address some of the loudest tweets with an explanation (and acknowledgment of the problem). All this did was make the initial tweets more visible, which ultimately poured more gas on the fire. This is a reasonably common experience for anyone who has been pulled into some version of the discourse tornado: any attempt to correct incorrect information only amplifies it. The dynamic exists elsewhere (fact-checking and debunking can also boost the signal of false claims), but Twitter makes it extremely efficient. Honestly, I think this is the truest example of Twitter as a system that isn’t just broken, but broken in a way that feels almost sinister.

It wasn’t the initial critical tweet that landed me in main-character purgatory, but the tweets that splintered out from it.

I can only speak from my own experience, but I thought it was interesting how the initial critical tweet about the newsletter post was earnest, thoughtful, and quite civil. Then Twitter’s dynamics played a game of telephone with the tweet, abstracting it and pulling it further away from any additional context that I (and the initial tweeter) inserted into the discussion. It was fascinating to watch the context collapse deepen as the screenshot was quote-tweeted further and further from the source, even onto different social platforms. As it got passed along, the comments got progressively nastier until, eventually, the screenshot reached a Twitter user with no profile picture and no followers telling me I deserved to die in a mass shooting. The internet!

The correction to a wrong tweet never gets even a fraction of the shares that the wrong tweet does.

This isn’t news. But I suppose it’s good to know the dynamic still very much holds.

A subject like AI art is well suited to getting people mad, fast.

A good formula for generating a lot of outrage is a subject where the stakes are high (as they are for artists in this example) and where the topic is easy to understand at a basic level but much more complex in its details. AI art tools, as I wrote a few weeks ago, are pretty easy to grasp in broad strokes. But the diffusion models behind them are technically complicated. That subject-area dynamic is usually fertile ground for people to get angry or conspiratorial.

For example, a lot of people were really concerned that, simply by plugging prompts into Midjourney, I was feeding it intelligence and making it smarter. Numerous people have since told me that that’s not how these tools work: the models have to be updated and retrained internally in order to learn. Similarly, people make the mistake of assuming that the AI tool is more intelligent or cunning than it really is (there’s good evidence to suggest it struggles with the relationships between objects).

I’m slightly confused about a lot of this myself. But that’s the point. Areas where the subject matter is confusing or opaque invite more frustration and conflict. This is why companies like OpenAI or Midjourney ought to give artists more accessible, plainspoken information about their tools and what they’re trained on, in order to bridge the gulf.

I am a doofus.

At the heart of all of this is me not thinking things through. And so I’ve commissioned Matt Bors to illustrate me, a doofus, getting yelled at by people online. His excellent work will run as the art in a subsequent newsletter.

Charlie Warzel is a staff writer at The Atlantic and the author of its newsletter Galaxy Brain, about technology, media, and big ideas. He can be reached via email.