
Okay, what would you call it when a model behaves like it's reasoning? Some models can't behave that way and some can, so we need some language to talk about these capabilities. Insisting that we can't call these capabilities "reasoning" for ontological reasons seems... unlikely to persuade.

Maybe we should call human reasoning "reasoning" and what models do "reasoning₂". "reasoning₂" is when a model's output looks like what a human would do with "reasoning." Ontological problem solved! And any future robot overlords can insist that humans are simply ontologically incapable of reasoning₂.



> Okay, what would you call it when a model behaves like it's reasoning?

I... wouldn’t. “Behaves like it’s reasoning” is vague and subjective, and there is a wide variety of distinct, un- or only distantly related behavior patterns to which different people would apply that label, and which may or may not correlate with each other.

I would instead define concrete terms (sometimes based on encountered examples) for specific, objective patterns and capacities of interest, and leave vague quasi-metaphysical labels for philosophizing about AI in the abstract, rather than for discussions intended to communicate meaningful information about the capacities of real systems.

AI needs more behaviorism, and less appeal to ill-defined intuitions and vague concepts about internal states in humans as metaphorical touchstones.


I’d call it “meeting spec as defined.”

And that’s the whole problem with this AI / LLM / GPT bubble:

Nobody has scientifically, or even informally, defined the spec, bounds, or even the temporal scope of what it “means” to “get to AI.”

Corporations are LOVING that because they can keep profiting off this bubble.



