While a shot has yet to be fired, some of the nation’s largest newsrooms are actively taking defensive measures to safeguard their content from ChatGPT, the groundbreaking artificial intelligence chatbot that is seen as a potential threat to an already struggling news industry.
A multitude of leading newsrooms have recently added a directive to their websites’ robots.txt files that blocks OpenAI’s web crawler, GPTBot, from scanning their platforms for content. The Guardian’s Ariel Bogle reported last week that CNN, The New York Times, and Reuters had blocked GPTBot. But a Reliable Sources review has found several additional news and media giants have also quietly taken this step, including Disney, Bloomberg, The Washington Post, The Atlantic, Axios, Insider, ABC News, ESPN, and Gothamist, among others. Publishers such as Condé Nast, Hearst, and Vox Media, which all house several prominent publications, have also taken the defensive measure.
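Per OpenAI’s published guidance, the opt-out amounts to two lines naming the GPTBot user agent in a site’s robots.txt file. The sketch below is illustrative (the exact files vary by publisher, and `example.com` is a placeholder); it uses Python’s standard-library `urllib.robotparser` to show how the rule shuts out GPTBot while leaving other crawlers untouched:

```python
from urllib import robotparser

# Illustrative robots.txt rules of the kind publishers have reportedly
# adopted; a real file would typically carry rules for other bots too.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# GPTBot is denied everywhere on the site...
print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False
# ...while crawlers with no matching rule, such as Googlebot, are not.
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```

Notably, robots.txt is a voluntary convention: it asks well-behaved crawlers to stay out rather than technically preventing access, which is part of why publishers are also weighing legal and licensing remedies.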
The deep archives and intellectual property rights of these news organizations are immensely valuable — arguably crucial — to training A.I. models such as ChatGPT in their efforts to provide users with accurate information. As one news executive, who requested anonymity because he was not authorized to speak publicly on behalf of his company, told me on Monday: “Most of the internet is garbage. Traditional media publishers, on the other hand, are fact-driven and offer quality content.”
Despite the posturing behind the scenes, none of the outlets that have taken the preventive measure of blocking GPTBot offered an on-the-record response when I reached out for comment on Monday. But the move to insert code disallowing OpenAI from drawing on their large libraries of content to train its ever-learning ChatGPT bot reflects the degree to which news organizations are spooked by the company’s technology and are quietly working to address it.
Danielle Coffey, president and chief executive of the News Media Alliance, told me on Monday that news organizations are indeed alarmed by the rapidly advancing technology. Coffey said that the News Media Alliance, which represents nearly 2,000 publishers in the US, believes newsrooms “are on solid legal ground when it comes to copyright protections.” Nevertheless, they’re apprehensive about how companies like OpenAI might further upend the already embattled news sector.
“I see a heightened sense of urgency when it comes to addressing the use, and misuse, of our content,” Coffey said. “One publisher told me it is an existential threat. Another publisher told me there isn’t a business model with certain uses of A.I. … there is a sense of urgency to address this.”
What exactly these media giants do next, however, remains to be seen. News organizations might feel they’re on solid legal ground, as Coffey told me, but there has yet to be any serious action taken against OpenAI. Barry Diller has likely gone the furthest by taking a notably aggressive stance and signaling a future lawsuit. The NYT is also reportedly weighing whether to sue OpenAI. Meanwhile, the Associated Press went a different route, hammering out its own licensing deal with the A.I. developer, though it notably did not share key terms of the agreement.
If the issue is not resolved, enormous damage could be inflicted on the publishing industry, further imperiling the already fragile information environment in the US and around the world. It’s not difficult to imagine how A.I. bots integrated into search, apps, and now-ubiquitous smart devices might put many newsrooms out of business, ironically by using the very information derived from those newsrooms. And if those outlets are wiped from existence, the authoritative sources needed to train A.I. models would vanish with them, leaving confused bots to authoritatively pass along misinformation gleaned from a diet of bad information.
“If there is nothing left of quality to feed on,” Coffey said, “then we are all going to end up with a very bleak future.”
Despite the stakes being so high, the vast majority of news organizations are declining for now to publicly address the matter. Instead, they’re simply opting to discreetly lock their content in a protective vault until a more concrete battle plan can be hammered out. The news executive I spoke with on Monday said that, at the very least, blocking GPTBot makes an unmistakable point.
“It sends a signal,” the executive said. “Talk to us.”