OpenAI CEO Sam Altman has signaled a potential shift in AI research strategy, suggesting that the era of ever-larger large language models (LLMs) may be coming to an end. While OpenAI's GPT-4 was developed by scaling up — trained on trillions of words at a cost exceeding $100 million — Altman believes that future advances won't come from sheer size but from improving models in other ways.
Could GPT-4 be the last major breakthrough to emerge from the current approach of scaling up models? Both Altman and Cohere co-founder Nick Frosst believe that further progress in AI lies beyond simply scaling transformers. New model designs and architectures, along with fine-tuning based on human feedback, are considered promising directions for future research.
The idea that ever-larger LLMs may not be the path to greater capabilities raises some interesting questions. Where will the value of generative AI be created? If not at the big-tech infrastructure level, then where — the model level, the app level, or some combination of all three? If you're trying to formulate an investment thesis for your business (or your own portfolio), this is certainly food for thought.
If you want to go deeper into the tools and techniques of generative AI, consider taking our free online course, Generative AI for Executives. It will help you develop a solid understanding of this particular problem set.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.