NYT v. OpenAI: Who Hacked Whom?

OpenAI has accused The New York Times of employing deceptive tactics to generate evidence for a copyright lawsuit against the AI company. In a legal filing in Manhattan federal court, OpenAI alleges that The New York Times used “deceptive prompts” to make ChatGPT reproduce the newspaper’s content, which OpenAI argues violates its terms of use and undermines the integrity of the legal process.

The lawsuit filed by The New York Times focuses on the alleged unauthorized use of the newspaper’s copyrighted material by OpenAI for training its AI systems. OpenAI’s defense is that the Old Gray Lady hacked ChatGPT to get it to reproduce NYT content, and its filing criticizes the Times for not adhering to its own journalistic standards.

This legal confrontation is part of a larger debate over whether the training of AI on copyrighted materials constitutes fair use – a principle that allows limited use of copyrighted material without permission for purposes such as news reporting, teaching, and research. Tech companies say scraping the public web for training data is fair use; copyright owners (including The New York Times) say it isn’t.

The outcome of this lawsuit may have profound implications for the future of AI. A ruling in favor of OpenAI could solidify the position that training AI on copyrighted material qualifies as fair use, potentially accelerating the growth of AI technologies. Conversely, a decision favoring The New York Times could impose new limitations on how AI can be trained, impacting the evolution of AI capabilities and the tech industry’s trajectory.

I’m not a lawyer, but it seems to me that it doesn’t matter what the NYT did to get ChatGPT to display its copyrighted content (if it actually did display it). What matters is whether obtaining the content from the public web and displaying it constitutes fair use. But… that’s just me.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.

About Shelly Palmer

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and writes a popular daily business blog. He's a bestselling author, and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.
