Shelly Palmer

Should You Fear Killer Drones?

There’s a news story (that will not go away) about a simulated “killer” drone that turns on its operator. You can read about it here. The short answer is that the drone was inadvertently programmed to optimize for an unexpected or unwanted result. In this case, the drone was programmed to maximize its score by destroying SAM sites. It perceived the operator’s occasional “no-go” decisions as hindrances, leading to the AI “killing” the operator in the simulation. Oops!

This is not how AI is programmed today. While AI bias and unintended consequences are a constant threat to every project, engineers are aware of the need to align outcomes with human values. The question this simulation raises may be the most important question we can ask about AI: Is it possible to align outcomes with human values?

I talked about the alignment problem on this week’s Shelly Palmer LIVE (you can watch it here). Or, if you want to dig deeper into the subject, consider taking our free online course, Generative AI for Execs. It will help you frame the issue.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.