Furthermore, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite an ample token budget. By evaluating LRMs against their standard LLM counterparts under equivalent inference compute, we identify several distinct performance regimes: https://illusion-of-kundun-mu-onl77776.blogsuperapp.com/36343191/illusion-of-kundun-mu-online-for-dummies
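
To make the comparison concrete, here is a minimal sketch of what a matched-compute evaluation might look like: two models are scored at each complexity level under the same token budget, and each level is bucketed into a regime depending on which model leads or whether both collapse. The `results` numbers, the `COLLAPSE_THRESHOLD` value, and the `regime` function are illustrative assumptions, not the evaluation harness described in the source.

```python
"""Minimal sketch of a matched-compute comparison; all numbers are placeholders."""

# Hypothetical accuracy-by-complexity results for two models evaluated
# under the same per-problem token budget (illustrative values only).
results = {
    # complexity level: (standard LLM accuracy, LRM accuracy)
    1: (0.95, 0.90),
    2: (0.90, 0.88),
    3: (0.70, 0.85),
    4: (0.45, 0.75),
    5: (0.05, 0.10),
    6: (0.00, 0.02),
}

# Assumed cutoff: below this accuracy, treat a model as having collapsed.
COLLAPSE_THRESHOLD = 0.15


def regime(llm_acc: float, lrm_acc: float) -> str:
    """Bucket a complexity level into one of three illustrative regimes."""
    if llm_acc < COLLAPSE_THRESHOLD and lrm_acc < COLLAPSE_THRESHOLD:
        return "both collapse"
    if lrm_acc > llm_acc:
        return "LRM ahead"
    return "standard LLM ahead"


for level, (llm_acc, lrm_acc) in sorted(results.items()):
    print(f"complexity {level}: LLM={llm_acc:.2f} LRM={lrm_acc:.2f} "
          f"-> {regime(llm_acc, lrm_acc)}")
```

Run as-is, the sketch prints one regime label per complexity level, mirroring the qualitative pattern described above: the standard model leads at low complexity, the LRM pulls ahead at moderate complexity, and both degrade once problems become hard enough.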