As we start to deploy enterprise-grade AI platforms at scale, leadership teams are coming face to face with two formidable challenges. The first is Technical Debt, Ward Cunningham’s enduring metaphor for the compounding cost of past engineering shortcuts and the aging infrastructure they leave behind. The second is Cultural Debt, the sum of every unresolved habit, unexamined process, and unspoken assumption your organization carries forward because, well, this is the (your company name goes here) way.
Every company has cultural debt. The weekly status meeting that exists because a VP got blindsided in 2019 and demanded a recurring update. The approval chain that routes through a department that no longer does the work it once reviewed. The hiring rubric that optimizes for credentials that mattered when the company was solving a different problem. None of these are malicious. All of them are expensive. And they get dramatically more expensive when they are encoded into the DNA of your hybrid human/AI workforce.
Agents Do What You Tell Them, Not What You Think You Told Them
AI agents are ruthlessly literal, which makes them experts at maintaining and deepening cultural debt. An agent that automates your onboarding workflow will automate the three unnecessary sign-offs embedded in it. An agent that drafts client communications will replicate the hedging language your team adopted after a complaint in 2021 that no one remembers. Agents do exactly what you tell them to do, not what you think you told them to do.
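To make that literalness concrete, here is a minimal sketch, assuming a workflow the agent inherits as plain data. The step names and the runner are hypothetical, invented for illustration; the sign-offs echo the examples above.

```python
# Hypothetical onboarding workflow, as an agent would inherit it.
# The three sign-off steps are cultural debt: nobody remembers why
# they exist, but the agent will execute them faithfully, forever.
ONBOARDING_STEPS = [
    "collect_new_hire_paperwork",
    "provision_laptop_and_accounts",
    "signoff_hr",           # demanded after an incident years ago
    "signoff_legacy_dept",  # routes through a department that no longer does this work
    "signoff_facilities",   # duplicates the provisioning step above
    "schedule_orientation",
]

def run_onboarding(steps: list[str]) -> None:
    """An agent does not ask whether a step still makes sense."""
    for step in steps:
        print(f"Executing: {step}")  # stand-in for the real automation

run_onboarding(ONBOARDING_STEPS)
```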
This is where most AI deployment strategies break down. Leaders frame the challenge as technical: choose the right model, build the right integrations, get the data pipeline clean. Those are real problems, but technical debt is an engineering problem. It can (and will) be solved by engineers. The harder challenge is organizational. If you are asking a 2026 technology to operate inside a 2019 culture, the culture will win every time.
Bureaucratic Antibodies
I’ve started calling the forces that ensure this outcome “antibodies.” They are the middle management mafia (aka “The Deep State”). They are the process owners whose institutional authority depends on the current workflow. They are not saboteurs; their behavior is entirely rational given their incentives. If your job is to review every outbound proposal, and an AI agent can draft proposals in four minutes, you have a choice between embracing a tool that eliminates your review function or finding seventeen reasons the tool needs more testing. Most people choose the testing.
The antibody response follows a predictable pattern. First, the pilot gets scoped so narrowly that it cannot demonstrate value. Then the evaluation criteria expand until the AI is being measured against standards no human employee has ever met. Finally, the project is declared “promising but premature,” and everyone returns to the process they already know. I have watched this sequence play out at companies ranging from twelve people to two hundred thousand.
Evolving Leadership Techniques
The first step to unwinding cultural debt is forcing the organization to confront its own defaults. Most cultural debt is invisible because it masquerades as “how we do things.” The only way to surface it is to start with desired outcomes and innovate from first principles.
I ran an agentic scoping workshop last month where I asked a leadership team to design their ideal content production pipeline. Every version they drew had the same architecture: a person writes, another person reviews, a third person approves. I asked them to try again, this time imagining the workflow three years from now. Same architecture: person writes, person reviews, person approves, with AI “assisting” at each stage. It took three rounds before anyone proposed a system where AI was the primary producer and humans evaluated output quality rather than doing the work themselves.
That’s the gravitational pull of cultural debt. People will redesign everything except the assumption that they belong at the center of the process.
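For readers who think in code, the shift the room finally made looks something like the sketch below. Every function is a hypothetical stand-in of my own naming, not a reference to any real model or library; the point is that the agent produces the work and the human evaluates it.

```python
# A minimal sketch of the inverted pipeline: AI is the primary producer,
# humans grade output quality. All functions are illustrative stand-ins.

def agent_draft(brief: str, n: int = 5) -> list[str]:
    """Stand-in for a model call; overproducing candidates is cheap."""
    return [f"Candidate {i}: a draft responding to '{brief}'" for i in range(n)]

def passes_objective_checks(draft: str) -> bool:
    """Stand-in for automated evals (style rules, length, required facts)."""
    return len(draft) > 20

def human_grade(candidates: list[str]) -> str:
    """The human's new role: evaluate quality rather than do the work."""
    return max(candidates, key=len)  # stand-in for expert judgment

def inverted_pipeline(brief: str) -> str:
    survivors = [d for d in agent_draft(brief) if passes_objective_checks(d)]
    return human_grade(survivors)

print(inverted_pipeline("Q3 customer newsletter"))
```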
One of the hardest parts of this work is helping people think about workflows they are not at the center of. We are self-aware creatures, almost always present in our own mind’s eye, so human-centric systems are what we naturally imagine. Most of us, myself included, default to thinking about the future through the lens of the present or the past. Getting a room full of executives to imagine the future through the lens of the future takes deliberate effort, and it is the most valuable hour of every engagement I do.
Once they make that shift, they confront something genuinely unsettling: nothing in our experience has prepared us for working with intelligence decoupled from consciousness. It’s truly an alien intelligence, and it’s going to take time for all of us to get comfortable working alongside it.
Evals
Every agentic workflow requires an eval (evaluation framework): a rubric that grades the quality of an agent’s output. You need both objective and subjective grading. Objective criteria, such as “Is the topic sentence in active voice?”, can be checked automatically. Subjective criteria, “Does this feel right?”, must be graded with a human in the loop (HITL). This is where leadership and cultural debt become the bottleneck.
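As a concrete illustration, a minimal eval might look like the sketch below. The passive-voice heuristic is deliberately naive and the names are my own invention, not any particular eval library; a production eval would use a real parser and a proper review queue.

```python
import re
from dataclasses import dataclass

# Naive passive-voice heuristic: a form of "to be" followed by a word
# that looks like a past participle. A production eval would use a real
# NLP parser; this only shows the shape of an objective check.
PASSIVE = re.compile(r"\b(is|are|was|were|been|being|be)\s+\w+(ed|en)\b", re.I)

def topic_sentence_is_active(text: str) -> bool:
    """Objective criterion: the first sentence avoids passive voice."""
    first_sentence = text.split(".")[0]
    return not PASSIVE.search(first_sentence)

@dataclass
class EvalResult:
    passed_objective: bool
    needs_human_review: bool          # subjective grading cannot be automated
    human_score: float | None = None  # filled in later by the HITL reviewer

def run_eval(agent_output: str) -> EvalResult:
    # Objective checks run automatically; subjective checks ("does this
    # feel right?") are routed to a human grader.
    return EvalResult(
        passed_objective=topic_sentence_is_active(agent_output),
        needs_human_review=True,
    )

print(run_eval("The proposal was written by the team. It covers pricing."))
```

Note that in this sketch every output is flagged for human review; staffing that review queue is exactly where the incentive problem below shows up.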
Leaders must give subject matter experts the time and resources (direction and compensation) to interact with and improve evals. This sounds easy, but it requires a massive cultural change. Workers are not incentivized to correct an agentic workflow that has the potential to eliminate their jobs. Upton Sinclair said it best: “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”
You need HITL, it needs to be part of people’s job descriptions, and people need to be paid both to learn the new work and to do it. Otherwise, your agentic workflows are doomed.
What History Teaches
Electrification was demonstrated in 1880. It did not show up in factory productivity statistics until the 1920s. The lag was not technological. Factories had to be physically rebuilt around electric motors instead of steam-driven belt systems, and the foremen who understood the old layout had to be retrained or replaced. Historians call this the “reorganization gap.” We are in an identical gap right now, except the reorganization is cultural rather than architectural, and the foremen are your direct reports.
Jevons’ Paradox suggests the outcome, if leadership gets this right, is not the mass displacement that dominates the headlines. When something becomes dramatically cheaper to use, you do not use less of it. You use catastrophically more of it. More efficient steam engines did not reduce coal consumption; they made energy cheap enough to power the Industrial Revolution. Cheap intelligence is unlikely to reduce work. It is far more likely to create new industries as it makes previously unimaginable productivity available to everyone.
The Leadership Challenge
Your org chart, your incentive structures, your approval chains, your unexamined assumptions about who does what and why: these are the real constraints on your AI ROI. Technical problems get solved. Cultural problems get inherited.
Like it or not, we are all now leaders (and managers) of AI systems that have both permission to work on our behalf and the agency to do that work. Productivity is the key driver of business success, and AI can already do the work. Whether your organization will let it depends entirely on how much cultural debt you’re willing to unwind.
I’ve spent the past few years helping organizations navigate this process. Leaders who treat cultural debt with the same rigor and urgency they bring to any other strategic liability succeed. The work is unglamorous: meetings, incentive redesigns, difficult conversations with process owners, and a willingness to rebuild workflows from the outcome backward. There are no shortcuts. ROI on AI will always reflect exactly what an organization’s cultural debt allows.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.