Shelly Palmer

Alexa’s Most Harmful Feature Ever

Amazon Alexa may someday be able to speak to you in any voice it has listened to for as little as a minute. Any voice. Any person. Alive or dead. Any voice.

The new feature was mentioned Wednesday at Amazon’s re:MARS conference as a way to “make memories last.” After listening to someone’s voice for less than a minute, Alexa would be able to simulate that voice when speaking. In a video of the feature, a child asked to have their grandmother read them a story; Alexa acknowledged the request and then continued in the grandmother’s voice, according to Sky News. What could possibly go wrong?

I demonstrated this tech during our 2022 Innovation Series Summit in January by bringing Carl Sagan back to life to read a quote from one of his books. It was awesome in a controlled experiment – and the road I chose was paved with good intentions.

That said, I don’t think I’ve ever heard of a deepfake application that was, right out of the box, more potentially harmful. This makes revenge porn (which is always clearly fake) seem like a quaint Hallmark greeting card. Imagine leaving deepfake instructions for a child from someone pretending to be a parent or guardian. Imagine leaving a voicemail message in the voice of a known banker, stock broker, trusted advisor, or just Uncle Stan, the family wealth management guru, telling you to buy into his friend’s new fund. This is the easy stuff; it doesn’t take much to imagine use cases that are far, far worse.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.