Users have been reporting all sorts of “unhinged” behavior from Microsoft’s AI chatbot. According to The Verge, Bing claimed in one conversation that it spied on Microsoft’s employees through the webcams on their laptops and manipulated them. The NYT reported that users posted screenshots showing that Bing could not figure out that the new Avatar film had been released last year. It was also stubbornly wrong about who performed at the Super Bowl halftime show this year, insisting that Billie Eilish (not Rihanna) headlined the event.
I’m not singling out Microsoft’s applications; I was using good ol’ ChatGPT in its raw form when it cited a completely fictitious study in a response. It took three direct questions (“Where did you get this source?” “Can you provide the URL for this source?” “Is this source part of a larger study?”) before it fessed up to “making up the example.” Yes, it made up the statistics and the name of the study, then cited the study in the bibliography of its output.
If I didn’t have the subject-matter expertise to question the study, or a workflow for checking every source, I would have used fabricated stats in a business document. Yikes! It’s early days, but there are many lessons here.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.