On November 30, 2022, OpenAI released ChatGPT as a “research preview.” Five days later, one million people had signed up. By January 2023, analysts estimated 100 million monthly active users, making it one of the fastest-growing consumer applications in history. Today, ChatGPT serves more than 800 million weekly users. More than 90 percent of Fortune 500 companies report using OpenAI’s technology in some form. And after a recent secondary share sale that valued the company at around $500 billion, OpenAI is widely described as the most valuable startup in history.
This is all true and worth noting. But three years into this transformation, the obvious milestones matter less than the questions we have not yet answered. We are living through something unprecedented, and we may not yet have the intellectual framework to understand what comes next.
Technology is meaningless unless it changes the way we behave. By that measure, ChatGPT represents one of the most significant technology releases in a generation. It is also profoundly underhyped. The popular narrative focuses on chatbots, academic cheating, and meme generation. The actual story is about an alien intelligence integrating itself into human systems at a speed and scale we have never experienced. We do not know how to deal with this. Not yet.
The Framework Problem
Current large language models excel at pattern recognition and generation. They produce fluent text, write working code, and summarize documents with remarkable facility. They also hallucinate confidently, lack true understanding of causality, and cannot reliably reason about the physical world. These are not bugs to be patched. They are structural limitations of systems trained to predict the next token in a sequence.
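To make the "predict the next token" point concrete, here is a deliberately tiny sketch: a bigram model that, like an LLM at vastly greater scale, learns only which token tends to follow which. The corpus, function names, and greedy decoding here are illustrative inventions, not how any production model actually works:

```python
from collections import defaultdict, Counter

# Toy illustration (not a real LLM): a bigram model that learns only
# which token tends to follow which. It has no notion of meaning,
# causality, or the physical world -- just observed co-occurrence.
corpus = "the cat sat on the mat because the cat was tired".split()

# Count how often each token follows each other token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent next token seen in training, or None."""
    following = counts.get(token)
    return following.most_common(1)[0][0] if following else None

def generate(start, max_tokens=6):
    """Greedily extend a sequence one predicted token at a time."""
    out = [start]
    for _ in range(max_tokens):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # fluent-looking, but pure pattern continuation
```

The output reads like language because the statistics of language are all the model has; nothing in the loop checks whether the sequence is true, consistent, or physically possible, which is the structural point above.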
The AI research community is already working on what comes next. Some believe agentic AI (systems that can plan, execute multi-step tasks, and operate with minimal supervision) will define the next phase. Nvidia CEO Jensen Huang has called AI agents a multi-trillion dollar opportunity. Recent McKinsey research finds that nearly eight in ten companies now report using generative AI in at least one business function, yet roughly the same share say they have seen no significant bottom-line impact. The gap between deployment and results suggests we are still learning how to integrate these tools into work that matters.
Others believe the real breakthrough will come from world models: systems that understand physical reality, can simulate outcomes, and reason about cause and effect. Yann LeCun, who is departing as Meta’s chief AI scientist, has argued that language is a “low-bandwidth” and imperfect channel for representing a multidimensional world and that text-only LLMs will not reach human-like reasoning without richer internal models. Fei-Fei Li’s World Labs is building systems designed to perceive, generate, and interact with 3D environments. Google, Meta, Nvidia, and others are investing heavily in similar efforts. The premise is that intelligence requires more than language. It requires a model of how reality works.
The Resource Questions
Then there are the constraints. AI development depends on three scarce resources: data, compute, and energy.
The Epoch AI research group estimates that the available stock of high-quality, human-generated public text for training could be effectively exhausted between 2026 and 2032 if current trends continue. Elon Musk recently claimed in a public interview that “the cumulative sum of human knowledge has been exhausted in AI training.” That is likely an overstatement, but the underlying concern is real. Synthetic data (AI training on AI-generated content) is the obvious workaround, but it introduces its own problems: reduced output diversity, amplified biases, and a risk of models converging toward homogeneity.
Energy is another pressing issue. The International Energy Agency projects that global electricity demand from data centers will more than double by 2030 to around 945 terawatt-hours, roughly equivalent to Japan’s current electricity consumption. In the United States, data centers could account for nearly half of the growth in electricity demand through the end of the decade. Goldman Sachs Research forecasts that global power demand from data centers could rise by more than 160 percent by 2030 compared with 2023 levels. Multiple independent estimates suggest that training GPT-4 alone consumed on the order of 50 gigawatt-hours of electricity, enough to power a major city such as San Francisco for several days. The Stargate initiative, a joint venture led by OpenAI, Oracle, SoftBank, and others, has been announced as a $500 billion project to build AI infrastructure delivering on the order of 10 to 15 gigawatts of capacity, comparable to the electricity demand of a small country.
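The city-days comparison is easy to sanity-check. Both figures below are illustrative assumptions (a 50 gigawatt-hour training run, and roughly 6 terawatt-hours of annual citywide consumption), not official data:

```python
# Back-of-envelope check on the "several days" comparison.
# Assumptions (illustrative, not official figures):
#   - GPT-4 training energy: ~50 GWh (midpoint of public estimates)
#   - San Francisco citywide electricity use: ~6 TWh per year
training_energy_gwh = 50.0
city_annual_gwh = 6_000.0  # 6 TWh expressed in GWh

city_daily_gwh = city_annual_gwh / 365
days_powered = training_energy_gwh / city_daily_gwh
print(f"~{days_powered:.1f} days of citywide electricity demand")
```

Under these assumed figures the answer lands around three days, consistent with the “several days” framing.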
These are not technical footnotes. They are strategic realities that will shape what AI can and cannot become.
The Harder Questions
Beyond resources, there are questions about how AI integrates into human systems in ways we have barely begun to address.
The militarization of AI is accelerating. The Pentagon has awarded contracts worth up to $200 million each to Anthropic, Google, OpenAI, and xAI to accelerate AI adoption for national security. The U.S. Replicator initiative aims to deploy thousands of autonomous and uncrewed systems. Russian strategists have articulated a vision of robotizing roughly 30 percent of the country’s military equipment. The European Commission’s White Paper on European Defence Readiness 2030 explicitly lists AI among the critical technologies for future defense capability. AI is already speeding up what militaries call the “kill chain,” the process of identifying, tracking, and engaging targets. The Pentagon insists humans remain in the loop for lethal decisions. Whether that remains true as systems become more capable is an open question.
The economic disruption is real but unevenly distributed. Pew Research reports that 34 percent of American adults have used ChatGPT, with usage rates far higher among those under 30 than those over 65. Multiple analyses suggest that traffic and question volume on Stack Overflow, once the dominant Q&A site for software developers, have fallen by roughly 50 percent or more since ChatGPT’s release. Yet unemployment remains low, and the predicted wave of AI-driven layoffs has not materialized at scale. At the moment, AI is a skills amplifier. The displacement of workers whose jobs are purely executional has been slower than many expected. Perhaps we are in an interregnum before larger shifts arrive. We do not know.
An Alien Intelligence
The deepest question is epistemological. We have never shared our world with an alien intelligence before. We do not know how to do it.
I am using the term “alien” deliberately. Large language models do not think the way humans think. They do not have experiences, intentions, or goals in any recognizable sense. They process tokens and generate outputs based on statistical patterns in training data. And yet those outputs are often indistinguishable from human-generated content. They pass tests. They write code that works. They produce analysis that executives use to make decisions. They also do an almost magical job with still images, video, voice cloning, sound effects, and music. The gap between what these systems are and what they produce is philosophically disorienting.
Today, we are building new workflows around tools we do not fully understand. We are training the next generation of workers on systems that will be obsolete within months. We are making policy decisions based on assumptions about AI capabilities that change daily. This is not a criticism. It is simply where we are.
Things to Think About
Three years into the ChatGPT era, the questions are harder: How much autonomy should we grant to systems that cannot explain their reasoning? How do we maintain human agency in workflows increasingly optimized by machines? How do we govern technologies that evolve faster than our institutions can adapt? How do we ensure the benefits and risks are distributed equitably across society?
I do not have confident answers. But here is how I am thinking about it:
First, the technology will improve faster than we expect. Research from METR suggests that the length of tasks AI agents can complete with 50 percent reliability has been doubling approximately every seven months since 2019, with some evidence that this pace may have accelerated in 2024 and 2025. Plan accordingly.
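That doubling trend compounds quickly. Here is the arithmetic, where only the seven-month doubling period comes from the cited METR finding; the one-hour starting horizon is a hypothetical round number for scale:

```python
# Illustrative projection of a METR-style doubling trend.
# Only the 7-month doubling period comes from the cited finding;
# the 60-minute starting horizon is a hypothetical round number.
def projected_horizon(minutes_now, months_ahead, doubling_months=7):
    """Task length (minutes) after compounding the doubling trend."""
    return minutes_now * 2 ** (months_ahead / doubling_months)

for months in (0, 12, 24, 36):
    print(f"{months:2d} months out: ~{projected_horizon(60, months):,.0f} min")
```

Under these assumptions, a one-hour task horizon today grows to roughly 35 hours of task length within three years, which is why "plan accordingly" is not a throwaway line.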
Second, resource constraints are real but not necessarily binding. Energy and data limitations will shape development, but they are engineering problems that capital and innovation can address. I would not assume the current trajectory is the permanent trajectory.
Third, the institutional questions are harder than the technical ones. AI governance, workforce adaptation, and the distribution of AI’s economic benefits will require sustained attention from leaders who understand both the technology and the human systems it transforms. This is a leadership challenge, not a technological one.
Fourth, stay curious and stay humble. We are three years into a transformation that will unfold over decades. The right posture is engaged skepticism: serious enough to prepare, humble enough to adapt when we get things wrong.
Three years ago, ChatGPT was a research preview. Today it is infrastructure. Three years from now, it will be something else entirely. Our task is to build the intellectual and institutional frameworks to navigate what comes next.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.