A recent study reveals that tweets generated by large language models, such as OpenAI’s GPT, are perceived as more convincing than human-written tweets. This finding underscores the potential of AI as a powerful tool for information dissemination, but also highlights a risk: the spread of disinformation.
Participants in the study struggled to distinguish between human and AI-generated tweets. More alarmingly, they found it harder to recognize disinformation when it was crafted by the AI model.
The study’s lead author, Giovanni Spitale, emphasized that AI is an “amplifier of human intentionality”: it reflects the intentions of whoever uses it. He suggested that improving the training datasets used to develop language models could mitigate the risk of misuse. I’m not sure how this would help.
Additionally, Spitale suggested that promoting critical thinking skills among the public could help people discern fact from fiction. I don’t have much hope for this suggestion either; America has chosen a particularly unfortunate decade to make it cool to be stupid.
If you’re wondering how a tweet generated by AI can be more convincing than one written by a human, sign up for my free online course, Generative AI for Execs. It will show you how to get the most out of generative AI.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.