I hope you enjoyed the long weekend. At a Sunday party, I heard a guest gush that ChatGPT understands his business better than his staff. The enthusiasm is common; the reality is more nuanced.
There’s not a lot of AI news this morning, so instead here are five tasks that LLMs and reasoning engines appear to handle, but that should be left strictly to humans:
Original insight. AI can synthesize research, but drawing fresh conclusions, making lateral connections, and deciding what matters most remains human work.
Taste. AI can mimic a style guide, yet it cannot judge elegance, know when less is more, or sense when a bold stroke beats a safe one.
Emotional context. AI does not feel. It lacks the intuition to choose restraint or celebration, irony or sincerity, in the moment.
Strategic judgment. AI will forecast scenarios, but it will not bet the brand, kill a beloved project, or abandon sunk costs. Strategy is a choice, not a prompt.
Ethics. AI can debate principles, but it has no moral compass. Only people decide what “right” means for their organizations.
These gaps will narrow over time, but they currently mark the boundary between automation and leadership. Knowing what AI can’t do is just as important as knowing what it can do.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.