AI reasoning models were supposed to be the industry’s next leap, promising smarter systems able to tackle more complex problems and a path to superintelligence.
The latest releases from the major players in artificial intelligence, including OpenAI, Anthropic, Alphabet and DeepSeek, have been models with reasoning capabilities. These reasoning models can handle tougher tasks by “thinking” — breaking problems into logical steps and showing their work.
Now, a string of recent studies is calling that into question. CNBC’s Deirdre Bosa reports.