OpenAI research lead Noam Brown thinks certain AI “reasoning” models could have arrived decades ago
Noam Brown, who leads AI reasoning research at OpenAI, says certain forms of “reasoning” AI models could have arrived 20 years earlier had researchers “known [the right] approach” and algorithms.
“There were various reasons why this research direction was neglected,” Brown said during a panel at Nvidia’s GTC conference in San Jose on Wednesday. “I noticed over the course of my research that, OK, there’s something missing. Humans spend a lot of time thinking before they act in a tough situation. Maybe this would be very useful [in AI].”
Brown was referring to his work on game-playing AI at Carnegie Mellon University, including Pluribus, which defeated elite human professionals at poker. The AI Brown helped create was unique at the time in that it “reasoned” through problems rather than attempting a more brute-force approach.
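For a flavor of what “reasoning” at decision time means, here is a minimal Python sketch of decision-time search: instead of acting from a fixed, precomputed policy, the agent simulates playouts for each candidate move before choosing. This is an illustration of the general idea only, not Pluribus’s actual method (which combined counterfactual regret minimization with real-time search); the `simulate` function here is a hypothetical toy.

```python
import random

def playout_value(state, move, simulate, n_rollouts=200):
    """Estimate a move's value by averaging the payoffs of random playouts.
    `simulate(state, move)` is a hypothetical, game-specific function that
    plays one randomized game to completion and returns the final payoff."""
    return sum(simulate(state, move) for _ in range(n_rollouts)) / n_rollouts

def choose_move(state, legal_moves, simulate):
    # Spend compute at decision time: search before acting, rather than
    # looking up a response from a brute-force precomputed table.
    return max(legal_moves, key=lambda m: playout_value(state, m, simulate))

if __name__ == "__main__":
    random.seed(0)
    # Toy stand-in game: a move's payoff is the move itself plus noise.
    toy_simulate = lambda state, move: move + random.gauss(0, 1)
    print(choose_move(state=None, legal_moves=[1, 2, 3], simulate=toy_simulate))  # -> 3
```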
Brown is one of the architects behind o1, an OpenAI model that uses a technique called test-time inference to “think” before responding to prompts. Test-time inference involves applying additional compute to running models to drive a form of “reasoning.” In general, so-called reasoning models are more accurate and reliable than traditional models, particularly in domains like mathematics and science.
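To make the idea concrete, here is a minimal sketch of one well-known way to spend extra compute at inference time: self-consistency sampling, where the model is queried several times and the most common answer wins. The `query_model` function below is a hypothetical stand-in for any text-generation call; it is not OpenAI’s API, and o1’s actual mechanism has not been published.

```python
import random
from collections import Counter

def query_model(prompt: str, temperature: float) -> str:
    # Hypothetical stand-in for a real model call; simulates a model
    # that answers correctly about 60% of the time.
    return "42" if random.random() < 0.6 else str(random.randint(0, 99))

def answer_with_test_time_compute(prompt: str, n_samples: int = 16) -> str:
    """Spend extra compute at inference time: sample several candidate
    answers and return the most common one (majority vote)."""
    candidates = [query_model(prompt, temperature=0.8) for _ in range(n_samples)]
    return Counter(candidates).most_common(1)[0][0]

if __name__ == "__main__":
    random.seed(0)
    print(answer_with_test_time_compute("What is 6 * 7?"))  # usually "42"
```

The design point is simply that accuracy is bought with more inference-time samples rather than a bigger model, which is the trade-off test-time techniques exploit.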
Brown was asked during the panel whether academia could ever hope to run experiments at the scale of AI labs like OpenAI, given institutions’ general lack of access to computing resources. He admitted that it has become harder in recent years as models have grown more compute-intensive, but said academics can still make an impact by exploring areas that require less compute, such as model architecture design.
“[T]here is an opportunity for collaboration between the frontier labs [and academia],” Brown said. “Certainly, the frontier labs are looking at academic publications and thinking carefully about, OK, does this make a compelling argument that, if this were scaled up further, it would be very effective. If there is that compelling argument from the paper, you know, we will investigate that in these labs.”
Brown’s comments come at a time when the Trump administration is making deep cuts to scientific grant-making. AI experts, including Nobel laureate Geoffrey Hinton, have criticized these cuts, saying they could threaten AI research efforts both at home and abroad.
Brown pointed to AI benchmarking as an area where academics could have a significant impact. “The state of AI benchmarks is really bad, and that doesn’t require a lot of compute to do,” he said.
As we’ve written before, popular AI benchmarks today tend to test for esoteric knowledge and produce scores that correlate poorly with proficiency on the tasks most people care about. That has led to widespread confusion about models’ capabilities and improvements.
Updated 4:06 p.m. Pacific: A previous version of this piece implied that Brown was referring to reasoning models like o1 in his initial remarks. In fact, he was referring to his work on game-playing AI prior to his time at OpenAI. We regret the error.