I read this study before bed last night: "Conversational AI Powered by Large Language Models Amplifies False Memories in Witness Interviews." It scared me more than I expected. The study explores how chatbots powered by LLMs can influence the formation of false memories in users; in other words, the AI isn't the only one hallucinating — it can make its users misremember too. Continue Reading →
Google is launching a new initiative aimed at improving the reliability of generative AI by leveraging its search engine to ground AI-generated content. Announced as part of a broader set of updates at Cloud Next, the feature is designed to mitigate "hallucinations," or inaccuracies in AI outputs, by giving users more current information and verified sources. Continue Reading →