OpenAI just launched Frontier, an enterprise platform for building, deploying, and managing AI agents across an organization. The company calls them “AI coworkers,” a framing designed to shift the budget conversation: if enterprises treat agents like employees, AI spending moves from IT to operations, giving OpenAI access to a different budget holder and a larger addressable market.
Frontier gives agents shared context across siloed systems, the ability to learn from experience, and explicit permissions that enterprises can audit. OpenAI cites a manufacturer that reduced production optimization from six weeks to one day using these capabilities.
OpenAI is embedding “Forward Deployed Engineers” directly with customer teams. This is the consulting-wrapped-in-software approach that built Palantir’s enterprise business, and it positions OpenAI as the semantic layer sitting between enterprise data and applications.
This puts OpenAI in competition with its own customers. Microsoft pays billions for OpenAI’s models and sells Copilot; Salesforce embeds those same models and sells Agentforce. Both now face a supplier selling directly to their enterprise customers. OpenAI is betting that Frontier can serve as the neutral layer for companies that want best-in-class models without vendor lock-in.
Human/machine partnerships are as old as tool use itself. Now big tech is packaging tools that think for themselves. The biggest roadblocks will be cultural debt, bureaucracy, and the sheer inertia of large enterprises. (This is, at bottom, a leadership challenge.) Once those barriers fall, we will be living in a different world.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.