The Illusion of Thinking: What Frontier Language Models Are Really Doing When They “Reason”
Over the past year, a new breed of large language models has emerged, designed not just to mimic speech but to simulate something deeper: thinking. These so-called Large Reasoning Models (LRMs) come equipped with "chain-of-thought" capabilities: long, deliberative traces of inference, logic, and self-correction. They