OpenAI Employees Don’t Get to Choose Wars

OpenAI CEO Sam Altman told employees Tuesday they don’t get to make “operational decisions” about how the military uses their AI technology: “Maybe you think the Iran strike was good and the Venezuela invasion was bad,” Altman said in an all-hands meeting. “You don’t get to weigh in on that.”

The meeting happened four days after OpenAI announced its Department of War (DoW) deal, which landed hours before U.S. and Israeli strikes against Iran began. Altman later posted internal clarifications promising the AI won’t be used for domestic surveillance of Americans and won’t serve NSA-type intelligence agencies without contract modifications. He also admitted rushing the Friday announcement was wrong and made the company look “opportunistic and sloppy.”

There is a disconnect between OpenAI’s safety-first branding and its defense-revenue reality: employees who joined to “benefit humanity” are now coming to understand that they are building tools for battlefield intelligence. Altman can add constitutional language and exclude domestic surveillance, but the core product still enables lethal autonomous weapons.

After Anthropic refused similar terms over surveillance and autonomous-weapons concerns, U.S. Secretary of Defense Pete Hegseth directed the DoW to designate Anthropic as a Supply-Chain Risk to National Security. I don’t know if the DoW has issued the official notice yet.

Can AI companies maintain top engineering talent while taking defense contracts? Google faced engineer resignations over Project Maven in 2018. Altman has positioned OpenAI as a utility provider, not a moral arbiter, saying he’d “rather go to jail” than follow an unconstitutional order. Let’s hope it never comes to that.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.

About Shelly Palmer

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and writes a popular daily business blog. He's a bestselling author, and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.
