arXiv (Cornell University)
ThinkSwitcher: When to Think Hard, When to Think Fast
May 2025 • Guosheng Liang, Liang Zhong, Ziyi Yang, Xiaojun Quan
Large reasoning models (LRMs) excel at solving complex tasks by leveraging long chain-of-thought (CoT) reasoning. However, this often leads to overthinking on simple tasks, resulting in unnecessary computational overhead. We observe that LRMs inherently possess the capability for efficient short CoT reasoning, which can be reliably elicited through prompt design. To leverage this capability, we propose ThinkSwitcher, a framework that enables a single LRM to dynamically switch between short and long CoT modes based…
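The switching idea can be illustrated with a minimal sketch. This is not the authors' implementation: the difficulty scorer below is a toy heuristic stand-in for the paper's learned switching module, and the prompt strings are assumptions, not the prompts used in ThinkSwitcher.

```python
# Hedged sketch: route a query to a short- or long-CoT prompt based on a
# predicted difficulty score. All names and thresholds here are illustrative.

SHORT_COT_PROMPT = "Answer concisely with brief reasoning:"          # assumed
LONG_COT_PROMPT = "Think step by step in detail before answering:"   # assumed

def difficulty_score(query: str) -> float:
    """Toy stand-in for a learned difficulty predictor.

    Longer queries and math-flavored keywords score as harder;
    the result is clipped to [0, 1].
    """
    keywords = ("prove", "integral", "derive", "optimize")
    score = min(len(query) / 200.0, 1.0)
    score += 0.5 * sum(k in query.lower() for k in keywords)
    return min(score, 1.0)

def choose_prompt(query: str, threshold: float = 0.5) -> str:
    """Select long CoT only when the query looks hard enough."""
    use_long = difficulty_score(query) >= threshold
    prompt = LONG_COT_PROMPT if use_long else SHORT_COT_PROMPT
    return f"{prompt}\n{query}"

# Easy query -> short mode; hard-looking query -> long mode.
print(choose_prompt("What is 2 + 2?").splitlines()[0])
print(choose_prompt("Prove that the integral of 1/x diverges.").splitlines()[0])
```

In the paper's setting, the heuristic scorer would be replaced by a trained module that predicts the better mode per query, so that easy inputs avoid the overhead of long reasoning traces.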