I read this study before bed last night: “Conversational AI Powered by Large Language Models Amplifies False Memories in Witness Interviews.” It scared me more than I thought it would. The study explores how chatbots powered by LLMs can influence the formation of false memories in users; in other words, it’s not just the models that hallucinate — they can induce something like hallucination in the people who use them.
The researchers conducted an experiment in which participants watched a video of a crime and then either interacted with one of several types of AI systems or completed a survey. They found that AI chatbots were significantly more likely to induce false memories in participants than traditional survey methods or no intervention at all.
It gets worse. Not only did AI chatbots create more false memories, but they also increased participants’ confidence in these inaccurate recollections. The study revealed that these false memories (and the associated high confidence levels) persisted even after a week.
It’s just one study, and more work will be needed to verify the findings. Misremembering, however, is a well-documented phenomenon: decades of research highlight the fallibility of human memory and the unreliability of eyewitness testimony. This new study on AI-induced false memories adds another layer of complexity to our understanding of memory formation. It suggests that as AI becomes more integrated into our daily lives, we will need to be even more vigilant about the accuracy of our recollections and the sources of our information.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.