
I always disable reasoning when I can. It got overhyped after DeepSeek, when the short one-sentence chain of thought most conversational models were already trained to do seemed to be enough.


That's not what I mean. By "questions that require reasoning" I mean indirect questions that require picking a fact out of the context and processing it somehow, not necessarily the reasoning chains models are natively trained to produce. That's the kind of thing GP is talking about.

A built-in reasoning chain certainly helps in long-context tasks, especially when it's largely trained to summarize the context and deconstruct the problem, as in Gemini 2.5 (you can easily jailbreak it to see the native reasoning chain that is normally hidden behind system delimiters) and DeepSeek R1-0528, or when you force a summary with a custom prompt/prefill. The article seems to agree.
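
For reference, forcing a context summary via prefill looks roughly like this. A minimal sketch assuming an OpenAI-compatible chat endpoint that honors an assistant-message prefill; the URL, model name, and question are placeholders, and the exact prefill mechanism varies by provider:

    import requests

    long_document = "..."  # the long context you want the model to reason over

    # Seed the assistant turn so the model begins by summarizing the
    # context before answering. Prefill support is provider-specific:
    # e.g. DeepSeek's beta API takes a "prefix" flag on the trailing
    # assistant message, while some other servers simply continue it.
    resp = requests.post(
        "https://api.example.com/v1/chat/completions",
        headers={"Authorization": "Bearer <API_KEY>"},
        json={
            "model": "some-reasoning-model",
            "messages": [
                {"role": "user",
                 "content": long_document + "\n\nWho approved the budget, and when?"},
                {"role": "assistant",
                 "content": "Summary of the relevant context:",
                 "prefix": True},
            ],
        },
    )
    print(resp.json()["choices"][0]["message"]["content"])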



