Plagiarism Is the Next ‘Fake News’

Something much simpler than generative AI is driving the new culture war.

A cannon pointed at icons from a plagiarism detector
Illustration by The Atlantic. Source: Getty.


The 2024 culture wars have begun in earnest, coalescing around the unexpected and extraordinarily messy topic of academic integrity.

Last week, Harvard’s president, Claudine Gay, resigned following accusations that she had plagiarized parts of her dissertation. Though Gay, Harvard’s first Black president, admitted to copying text without attribution, she identified the accusations as part of an ideological campaign by right-wing political activists to “unravel public faith in pillars of American society.”

The allegations against Gay wouldn’t be the last. The same week, Business Insider published a pair of articles reporting that Neri Oxman, a former professor at MIT, had plagiarized some of her academic work. Oxman is the wife of Bill Ackman, a billionaire hedge-fund manager who helped lead the public campaign to oust Gay from Harvard; these stories highlighted the apparent hypocrisy of his plagiarism fixation. In retaliation, Ackman published a series of lengthy posts on X, saying he intends to launch a plagiarism probe into the work of MIT faculty (whom he believes to be behind the allegations against his wife, although he recently tweeted, “We do not know for a certainty that @MIT is behind this”) as well as Business Insider’s journalists. “No body of written work in academia can survive the power of AI searching for missing quotation marks, failures to paraphrase appropriately, and/or the failure to properly credit the work of others,” Ackman wrote. Elsewhere, the conservative activist Christopher Rufo, who played a central role in engineering the allegations against Gay, pledged $10,000 toward a “‘plagiarism hunting’ fund.”

This is ugly, messy business—the product of legitimate questions of academic integrity, long-standing grudges, thin-skinned billionaires, and a well-documented ideological campaign to subvert elite institutions. My colleague Ian Bogost captured this well in a recent headline: “The Plagiarism War Has Begun.”

If a plagiarism war is truly afoot, it is certainly a political effort, but also a technological one. To conduct plagiarism reviews of entire universities and newsrooms, one needs a process that can scale. That will inevitably mean enlisting detection software such as iThenticate, which partners with publishing organizations for access to libraries of scholarly work that it can check text against. iThenticate, which costs $100 or $300 depending on the size of the project, is powered by an algorithm that identifies keywords and matches suspect text against those libraries; as Ian wrote last week, the software is powerful, but it also demands diligent human review to make sense of the results and exclude false positives.
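To make the scale argument concrete, here is a minimal sketch of word-shingle overlap, a textbook approach to flagging copied passages. It is only an intuition pump: iThenticate’s actual matching is proprietary, and the five-word window and every name below are my own illustrative choices.

```python
# Toy illustration of word-shingle (n-gram) matching, the general idea
# behind text-similarity scanners. This is NOT iThenticate's algorithm;
# the five-word window and all names here are illustrative assumptions.

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Return the set of overlapping n-word sequences in `text`."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(suspect: str, source: str, n: int = 5) -> float:
    """Fraction of the suspect text's shingles that also appear in the source."""
    suspect_grams = shingles(suspect, n)
    if not suspect_grams:
        return 0.0
    return len(suspect_grams & shingles(source, n)) / len(suspect_grams)

if __name__ == "__main__":
    source = "the quick brown fox jumps over the lazy dog near the old mill"
    suspect = "as critics note the quick brown fox jumps over the lazy dog"
    # Prints the share of five-word runs in `suspect` found in `source`.
    print(f"overlap: {overlap_score(suspect, source):.0%}")
```

Note what the script returns: a single percentage, stripped of context. That is precisely the kind of score that, as described below, can be screenshotted and circulated before any human reviews the underlying matches.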

Anti-plagiarism software is much faster at returning an initial scan than any human review board could possibly be. Rather than doing the manual labor of checking every citation, a theoretical propagandist could simply screenshot a score and post it on social media with a defamatory allegation, which could be enough to damage a reputation before the truth catches up, if it ever does.

There’s a term for this kind of scenario: the liar’s dividend. Coined by the scholars Robert Chesney and Danielle Citron, it describes a dystopian information environment where synthetic media such as deepfakes become prevalent enough that anyone accused of bad behavior can simply cast doubt on whatever genuine evidence is used against them. The liar’s dividend has been at the forefront of fears—from technologists, politicians, and journalists—that the generative-AI revolution will throw the world further into disinformation chaos. ChatGPT and audio- and video-generation tools could make it easier than ever to generate fake content, they reason. In America, news outlets have already declared that 2024 will feature the first “AI election”; prominent technologists such as the former Google CEO Eric Schmidt argue that the threat is so severe, “you can’t trust anything that you see or hear.”

But, so far, AI-generated information has yet to dupe voters or sow discord on a wide scale. It is ironic, then, that the lines of reality are being blurred not by generative AI but by an imperfect, relatively low-tech algorithm poised to be deployed along political lines. Perhaps the AI dystopia will soon arrive, but in a different form than many imagined.

What would an AI-powered plagiarism war look like? A helpful analogue might be the trajectory of the term fake news during the 2016 presidential election and the early days of the Trump administration. Initially, the term was coined to describe a fledgling information-industrial complex of websites and hyper-partisan social-media pages that were disguised to look like legitimate news sites. These outlets churned out fabricated political stories in order to gain traction on platforms such as Facebook. The most famous example of actual fake news, revealed by BuzzFeed News in 2016, was a network of more than 100 websites run by teenagers in Macedonia. They published made-up, pro-Trump stories (such as the false claims that he had been endorsed by the pope, and that an indictment of Hillary Clinton was imminent) to make money.

Fake news gained purchase in the news media in the days after Donald Trump’s victory, and it was offered as a potential explanation for his surprise win. But it took just over a month for Trump to seize the term for his own use. In a January 2017 news conference, the president-elect told the CNN correspondent Jim Acosta, “You are fake news,” and never looked back. The term became a rallying cry for Republican politicians, shock jocks, and voters, who used it to dismiss reporting they didn’t like. By turning fake news into a personal catchphrase, Trump made it harder for genuine critics to talk about the actual stream of lies and fabricated information polluting our information ecosystem. What started as a legitimate effort to uncover a digital misinformation operation was co-opted by political actors in bad faith to discredit a news media they saw as a threatening, oppositional force.

There are obvious parallels between the application of the term fake news and the plagiarism culture war of the past few weeks. Anti-plagiarism measures, much like fact-checking, allow academic institutions to police norms and maintain trust and integrity. But, just as happened in 2017, the very policies meant to act as a bulwark against dishonesty have been used by people who wish to dismantle or enact revenge upon those organizations.

An overlay of plagiarized text shares well on social media because it is blunt and obvious. But, as the Gay example shows, even plagiarism cases can be more nuanced than screenshots would suggest. Yes, she appears to have violated Harvard’s strict academic policy. But there has been disagreement about the severity and intent of the breaches, and some have argued that they demonstrate a need to reconsider plagiarism policy altogether. In any event, policies differ between institutions. And anti-plagiarism tools are hardly bulletproof: A slew of false-positive incidents has caused problems for educators who have used such tools to check students’ work.

These thorny details make plagiarism a rich topic for a culture war. It is an egregious violation of norms in elite institutions and also a subject over which ideological opponents can argue endlessly. Thanks to technology, accusations can be leveled at scale, either until they destroy the institutions they were supposed to protect or until the allegations lose their meaning altogether. Again, think of the fake news fixation in the media and the phrase’s subsequent deployment on the right, which eventually rendered the term meaningless. Credible news organizations pointed out lies and unearthed Trump-administration scandals, while hyper-partisan ideologues and politicians on the right spun up a parallel universe of information armed with its own “alternative facts.” Ultimately, the more people see the word plagiarism, the less they’ll care about it, to the detriment of academia and our greater discourse.

Should this new culture war break out in earnest, it would be more evidence of how technologies bring about expected upheaval in unexpected ways. For now, AI’s cultural damage has been the result not of a deepfake video of Joe Biden or a flood of hallucinated bullshit, but of a simple algorithm that automates busywork at scale.

Perhaps there’s a lesson in this mess that might help us better understand the consequences of a world infused with artificial intelligence. In this particular instance, AI’s power derives not from what it creates, but from the inhuman efficiency with which it works. AI’s true killer application is scale, especially when combined with the network effects of social media. Scholars, politicians, technologists, and writers like me rightly worry about emergent technologies poisoning our discourse, but we would do well not to overlook the bluntest, most reliable instruments of change, such as money, power, and influence.

It seems that, when predicting our impending dystopia, our imaginations often fail us on two fronts: We dream too big, imagining extravagant and dramatic modes of futuristic disruption, while simultaneously failing to see the more prosaic ways that our existing tools may upend our world.

Charlie Warzel is a staff writer at The Atlantic and the author of its newsletter Galaxy Brain, about technology, media, and big ideas. He can be reached via email.