In addition, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks