OpenAI made its last big breakthrough in artificial intelligence by increasing the size of its models to dizzying proportions when it introduced GPT-4 last year. Today the company announced a new advance that signals a shift in approach: a model that can “reason” logically through many difficult problems and is significantly smarter than existing AI without a major scale-up.
The new model, dubbed OpenAI-o1, can solve problems that stump existing AI models, including OpenAI’s most powerful current model, GPT-4o. Rather than summoning up an answer in one step, as a large language model (LLM) normally does, it reasons through the problem, effectively thinking out loud as a person might, before arriving at the right result.
“This is what we consider the new paradigm in these models,” Mira Murati, OpenAI’s chief technology officer, tells WIRED. “It is much better at tackling very complex reasoning tasks.”
The new model was code-named Strawberry within OpenAI, and it is not a successor to GPT-4o but rather a complement to it, the company says.
Murati says that OpenAI is currently building its next flagship model, GPT-5, which will be considerably larger than its predecessor. But while the company still believes that scale will help wring new abilities out of AI, GPT-5 is likely to also include the reasoning technology introduced today. “There are two paradigms,” Murati says. “The scaling paradigm and this new paradigm. We expect that we will bring them together.”
LLMs typically conjure their answers from huge neural networks fed vast quantities of training data. They can exhibit remarkable linguistic and logical abilities, but traditionally struggle with surprisingly simple problems such as rudimentary math questions that involve reasoning.
Murati says OpenAI-o1 uses reinforcement learning, which involves giving a model positive feedback when it gets answers right and negative feedback when it does not, in order to improve its reasoning process. “The model sharpens its thinking and fine-tunes the strategies that it uses to get to the answer,” she says. Reinforcement learning has enabled computers to play games with superhuman skill and do useful tasks like designing computer chips. The technique is also a key ingredient for turning an LLM into a useful and well-behaved chatbot.
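To make that feedback loop concrete, here is a toy sketch in Python of the reinforcement-learning idea Murati describes: a learner earns +1 for a correct answer and -1 for a wrong one, and gradually shifts toward whichever strategy collects more reward. The task and the two “strategies” are invented for illustration; this is not OpenAI’s training code.

```python
# Toy illustration of reward-driven learning: the learner tries candidate
# strategies, receives +1 when its answer is right and -1 when it is wrong,
# and shifts toward the strategy with the higher average reward.
# Pedagogical sketch only; the task and strategies are hypothetical.
import random

STRATEGIES = ["guess", "step_by_step"]   # hypothetical reasoning strategies
values = {s: 0.0 for s in STRATEGIES}    # running value estimate per strategy
counts = {s: 0 for s in STRATEGIES}

def attempt(strategy: str) -> bool:
    """Simulated task: careful step-by-step work succeeds more often."""
    p_correct = 0.9 if strategy == "step_by_step" else 0.3
    return random.random() < p_correct

for episode in range(1000):
    # Epsilon-greedy choice: mostly exploit the best-valued strategy,
    # but occasionally explore at random.
    if random.random() < 0.1:
        s = random.choice(STRATEGIES)
    else:
        s = max(values, key=values.get)
    reward = 1.0 if attempt(s) else -1.0   # positive/negative feedback
    counts[s] += 1
    values[s] += (reward - values[s]) / counts[s]  # incremental mean reward

print(values)  # "step_by_step" ends up with the higher estimated value
```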
Mark Chen, vice president of research at OpenAI, demonstrated the new model to WIRED, using it to solve several problems that its prior model, GPT-4o, cannot. These included an advanced chemistry question and the following mind-bending mathematical puzzle: “A princess is as old as the prince will be when the princess is twice as old as the prince was when the princess’s age was half the sum of their present ages. What is the age of the prince and princess?” (The correct answer is that the prince is 30 and the princess is 40.)
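For readers who want to check the arithmetic, here is one way to set up the riddle algebraically (a sketch of one possible reading, not taken from the article):

```latex
% Let the princess's present age be x and the prince's be y.
% "...when the princess's age was half the sum of their present ages":
% at that past moment the princess was (x+y)/2, which was
% x - (x+y)/2 = (x-y)/2 years ago, so the prince was then
% y - (x-y)/2 = (3y-x)/2.
% "...when the princess is twice as old as the prince was [then]":
% the princess will be 3y - x, which happens 3y - 2x years from now,
% so the prince will then be y + (3y - 2x) = 4y - 2x.
% "A princess is as old as the prince will be [then]":
\[
x = 4y - 2x \quad\Longrightarrow\quad 3x = 4y
\quad\Longrightarrow\quad x : y = 4 : 3,
\]
% which the stated answer (princess 40, prince 30) satisfies.
```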
“The [new] model is learning to think for itself, rather than kind of trying to imitate the way humans would think,” as a conventional LLM does, Chen says.
OpenAI says its new model performs markedly better on a number of problem sets, including ones focused on coding, math, physics, biology, and chemistry. On the American Invitational Mathematics Examination (AIME), a test for math students, GPT-4o solved on average 12 percent of the problems while o1 got 83 percent right, according to the company.
The new model is slower than GPT-4o, and OpenAI says it does not always perform better—in part because, unlike GPT-4o, it cannot search the web and it is not multimodal, meaning it cannot parse images or audio.
Improving the reasoning capabilities of LLMs has been a hot topic in research circles for some time. Indeed, rivals are pursuing similar lines of research. In July, Google announced AlphaProof, a project that combines language models with reinforcement learning to solve difficult math problems.
AlphaProof was able to learn how to reason over math problems by looking at correct answers. A key challenge in broadening this kind of learning is that correct answers are not available for everything a model might encounter. Chen says OpenAI has succeeded in building a reasoning system that is much more general. “I do think we have made some breakthroughs there; I think it is part of our edge,” Chen says. “It’s actually fairly good at reasoning across all domains.”
Noah Goodman, a professor at Stanford who has published work on improving the reasoning abilities of LLMs, says the key to more generalized training may be a “carefully prompted language model and handcrafted data.” He adds that being able to consistently trade the speed of results for greater accuracy would be a “nice advance.”
Yoon Kim, an assistant professor at MIT, says how LLMs solve problems currently remains somewhat mysterious, and even if they perform step-by-step reasoning there may be key differences from human intelligence. This could be crucial as the technology becomes more widely used. “These are systems that would be potentially making decisions that affect many, many people,” he says. “The larger question is, do we need to be confident about how a computational model is arriving at the decisions?”
The technique introduced by OpenAI today may also help ensure that AI models behave well. Murati says the new model has shown itself to be better at avoiding unpleasant or potentially harmful output by reasoning about the outcome of its actions. “If you think about teaching children, they learn much better to align to certain norms, behaviors, and values once they can reason about why they’re doing a certain thing,” she says.
Oren Etzioni, a professor emeritus at the University of Washington and a prominent AI expert, says it’s “essential to enable LLMs to engage in multi-step problem solving, use tools, and solve complex problems.” He adds, “Pure scale up will not deliver this.” Etzioni says, however, that there are further challenges ahead. “Even if reasoning were solved, we would still have the challenge of hallucination and factuality.”
OpenAI’s Chen says that the new reasoning approach developed by the company shows that advancing AI need not cost ungodly amounts of compute power. “One of the exciting things about the paradigm is we believe that it’ll allow us to ship intelligence cheaper,” he says, “and I think that really is the core mission of our company.”
Source: Wired