One Year In, ChatGPT’s Legacy Is Clear

The technology is less important than the ideas it represents.

A pixelated hand lights a candle. Illustration by The Atlantic. Source: Getty.

ChatGPT is one year old today, and it’s accomplished a lot in its first trip around the sun. The chatbot has upended or outright killed high-school and college essay writing and thoroughly scrambled the brains of academics, creating an on-campus arms race that professors have already lost. It has been used to write books, article summaries, and political content, and it has flooded online marketplaces with computer-generated slop.

As we’ve gotten to know ChatGPT, we’ve noticed how malleable it is. The li’l bot loves clichés. Its underlying technology has been integrated into internet search. ChatGPT is a time waster, a toy—but also, potentially, a labor-force destroyer and a way for machines to leech the remaining humanity out of our jobs. It may even be the harbinger of an unrecognizable world and a “textpocalypse” to come.

Even for a large language model with billions of parameters, trained on perhaps terabytes of potentially questionable and opaquely scraped data … that’s quite a year.

As evidenced above, my colleagues and I have spent a great deal of time and many words trying to understand exactly what ChatGPT means to the world. It’s been a long process, in part because its launch took everyone by surprise. As Karen Hao and I reported earlier this month, OpenAI didn’t expect ChatGPT to amount to much more than a passing curiosity among AI obsessives on Twitter: In a company pool wagering how many people might use ChatGPT during its first week, the highest guess was 100,000 users. (The tool hit 1 million within the first five days.) The product was intended to be the software equivalent of a concept car. Instead, it became one of the most popular applications in the history of the internet.

The reason for its success is obvious. ChatGPT is not sentient or intelligent in any way—I still love the early description that it’s just “spicy autocomplete”—but it frequently offers a decent simulation of talking with a vaguely boring person or, say, a polite customer-service representative. Sure, ChatGPT can “hallucinate,” delivering misinformation as though it’s fact, but it can also write ballads, pass an M.B.A. exam, and debug code. Enough of these little interactions felt to a lot of people like magic, or at the very least, the beginning of a bona fide technological breakthrough.

Evangelists have told me that they employ ChatGPT like they would an enthusiastic intern, or a copy editor, or a debate partner. Others have said they use it to automate the drudgery of small tasks such as writing emails, and those who program or work with databases seem to find ChatGPT and its extensions akin to having an extra arm to work with, maybe even an extra brain. A ChatGPT screen is open on their computer at all times.

And then there are those who make a bolder case: that ChatGPT is “a thinking companion,” a way to summarize large texts into digestible nuggets, to brainstorm and generate ideas, to help build and execute business plans, and, most important, to get a machine with computational power to do tasks quickly that might take a human countless hours. It’s not just that ChatGPT can solve a problem. As the programmer James Somers wrote in The New Yorker recently, it’s that “from a deep well of knowledge, it could suggest ways of approaching a problem” altogether. It offers a way to unlock a new mode of thinking.

When I hear these descriptions, I feel a sense of panic, as if a technical revolution has passed me by. I find ChatGPT too untrustworthy for research tasks. (I don’t particularly need a research assistant who may, out of nowhere, imagine misinformation and present it as fact.) I’ve spent time refining prompts and even building my own bot to try to edit my own writing, and I’ve found the output lacking in almost every way when it comes to replicating or even streamlining the job I get paid to do. My brain, I’ve come to realize, is bad at constructing prompts, a skill that seems to have more in common with programming than with prose writing. The experience can feel akin to being present for the invention of the iPod but hating music.

A good ChatGPT whisperer understands how to sequence commands in order to get a machine to do its bidding. That’s a genuine skill, but one that eludes me, as well as some other humanities types I know. The best ChatGPT prompters I know tend to be good systems thinkers or at least well-organized people—the kind who might create a series of automated protocols and smart-home integrations to turn their lights on and off. I’m the guy who sees romance in wandering around in the dark, bumping into a coffee table on the way to the switch.

This is all to say that I both recognize and respect ChatGPT’s impact over the past year while also feeling a bit gaslit by it. Academically, I understand what’s happening, how the tool may unlock productivity and creativity that leaves me behind. It feels perfectly reasonable to me that chatbots will automate tasks and jobs and change the way a lot of companies handle workflows behind the scenes. I believe that it will continue to flood the internet with text of varying quality and, in many cases, be used by greedy, amoral people to generate vapid, large-language-model lorem ipsum to turn a quick profit at the expense of whatever humanity exists on the web.

The legacy of ChatGPT may not have much to do with its utility at all. ChatGPT, the tool, is likely less important than ChatGPT, the cultural object. ChatGPT was actually what OpenAI intended to create all along: proof of concept for the bigger idea of a breakthrough in generative artificial intelligence. Even if you can’t get the bot to spit out Faulkner, ChatGPT still feels like a paradigm shift—a glimpse at a technology that had been teased in movies and popular culture for decades but never really seemed to arrive in a way that was functional for the general public. Now it’s here: proof that the generative-AI era has arrived, even if the conversations we’ve all had about the technology seem more immediately consequential than the product itself.

Whatever its limitations, ChatGPT is still most valuable as a symbol and a placeholder—a stepping stone to an age when technologists might someday replicate human intelligence. ChatGPT gave true believers and hype cyclists a prototype to gesture toward. This year, we got excited about the concept car, even if, deep down, we know that most models never see the open road.

Charlie Warzel is a staff writer at The Atlantic and the author of its newsletter Galaxy Brain, about technology, media, and big ideas. He can be reached via email.