AI Detectors Unreliable

OpenAI, the creator of the popular generative AI platforms ChatGPT and DALL-E 2, says the tool it developed to detect AI-written content correctly identifies it only 26% of the time: “While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that AI-generated text was written by a human.” I would not classify a classifier that fails roughly three-quarters of the time as a “good classifier.” Just sayin’.

Considering what OpenAI had to say, I thought I’d ask ChatGPT a question: “What are the consequences of not being able to detect if a piece of content was written by ChatGPT or another generative AI platform?” Here’s what ChatGPT “thinks”:

If it’s not possible to detect whether a piece of content was written by ChatGPT or another generative AI platform, there could be several consequences, including:

  1. Misinformation and fake news: If people can’t distinguish between AI-generated content and human-generated content, they may be misled by false information and fake news.
  2. Confusion and distrust: If it becomes increasingly difficult to determine the source and authenticity of information, it could lead to confusion and distrust in the accuracy of information being shared.
  3. Impact on authorship and originality: AI-generated content could be passed off as original work, which could impact the recognition and compensation of human authors.
  4. Ethical considerations: There are ethical concerns related to the use of AI-generated content, such as accountability for the content generated and the potential for AI systems to perpetuate biases and stereotypes.

It’s important for society to address these consequences and to develop methods for detecting AI-generated content to ensure the accuracy, authenticity, and ethical use of information.

Do you agree with ChatGPT?

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.

About Shelly Palmer

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media, and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN, and writes a popular daily business blog. He is a bestselling author and the creator of the popular free online course Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.

