
There is already substantial work showing reasoning capabilities in GPT-4; these models reason extremely well - near human performance on many causal reasoning tasks. (1) Additionally, there is a mathematical proof that these systems align with dynamic programming, and can therefore do algorithmic reasoning (2,3) - a toy sketch of that correspondence follows the links.

1) https://arxiv.org/abs/2305.00050.pdf 2) https://arxiv.org/pdf/1905.13211.pdf 3) https://arxiv.org/pdf/2203.15544.pdf
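
To unpack the dynamic programming point in (2,3): the rough idea is that one round of message passing and one DP relaxation step have the same structural form, so a network whose computation aligns with the DP update can represent (and learn) the algorithm efficiently. A minimal toy sketch of that correspondence in Python, using Bellman-Ford as the DP - the names and setup here are mine, not from the papers:

  # One Bellman-Ford relaxation and one min-aggregation message-passing round
  # compute the same thing; this is the "algorithmic alignment" intuition.
  INF = float("inf")

  def bellman_ford_step(dist, edges):
      """DP relaxation: dist[v] <- min(dist[v], min over u of dist[u] + w(u, v))."""
      new_dist = dict(dist)
      for u, v, w in edges:
          new_dist[v] = min(new_dist[v], dist[u] + w)
      return new_dist

  def message_passing_step(h, edges, message, aggregate):
      """GNN-style round: h[v] <- aggregate(h[v], messages from in-neighbours u)."""
      new_h = {}
      for v in h:
          incoming = [message(h[u], w) for u, vv, w in edges if vv == v]
          new_h[v] = aggregate(h[v], incoming)
      return new_h

  edges = [("s", "a", 1.0), ("a", "b", 2.0), ("s", "b", 5.0)]
  dist = {"s": 0.0, "a": INF, "b": INF}

  dp = bellman_ford_step(dist, edges)
  mp = message_passing_step(
      dist, edges,
      message=lambda hu, w: hu + w,                  # message = add edge weight
      aggregate=lambda hv, msgs: min([hv] + msgs),   # aggregate = take the min
  )
  assert dp == mp  # both give {'s': 0.0, 'a': 1.0, 'b': 5.0}

With addition as the message function and min as the aggregation, the two updates coincide, which is the sense in which GNN-style computation "aligns" with this class of DP algorithms.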



is GPT-4 a graph neural network? also, isn't how big a problem it can tackle (how many tokens) dependent on training time and data?

so it's great that it can reason better than humans on small-to-medium problems it's already well trained for, but so far Transformers are not reasoning (not doing causal graph analysis, not even zeroth-order logic); they are eerily good at writing text that has the right keywords. and of course it's very powerful and probably will be useful for many applications.


They are GNNs with attention as the message passing function and additional concatenated positional embeddings. As for reasoning, these are not quite 'problems well-trained for', in the sense that they're not in the training data. But they are likely problems that have some abstract algorithmic similarity, which is the point.
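
To make that framing concrete, here is a rough single-head numpy sketch - my own simplification, not anyone's actual implementation (GPT's causal mask, multiple heads, residuals, etc. are omitted) - of self-attention written as one round of message passing on a fully connected token graph, with positional encodings concatenated to the node features:

  # Self-attention as message passing on a complete graph over tokens.
  # Single head, no output projection or residuals - a simplification.
  import numpy as np

  def softmax(x, axis=-1):
      x = x - x.max(axis=axis, keepdims=True)
      e = np.exp(x)
      return e / e.sum(axis=axis, keepdims=True)

  def attention_as_message_passing(token_emb, pos_emb, Wq, Wk, Wv):
      h = np.concatenate([token_emb, pos_emb], axis=-1)  # node features
      q, k, v = h @ Wq, h @ Wk, h @ Wv                   # per-node projections
      scores = q @ k.T / np.sqrt(k.shape[-1])            # edge weights, complete graph
      alpha = softmax(scores, axis=-1)                   # normalize per receiving node
      return alpha @ v                                   # aggregate weighted messages

  rng = np.random.default_rng(0)
  tok = rng.normal(size=(4, 8))   # 4 tokens, 8-dim embeddings
  pos = rng.normal(size=(4, 4))   # 4-dim positional encodings
  Wq, Wk, Wv = (rng.normal(size=(12, 16)) for _ in range(3))
  out = attention_as_message_passing(tok, pos, Wq, Wk, Wv)
  print(out.shape)  # (4, 16): one aggregated "message" per token/node

Every token attends to every other token, so the "graph" is complete, and the attention weights act as learned, input-dependent edge weights.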

I'm not quite sure what you mean by saying they cannot do causal graph analysis, since that was one of the many tasks across the different types of reasoning studies in the paper I mentioned. In fact, it may have been the best-performing task. Perhaps try checking the paper again - it's a lot of experiments and text, so it's understandable not to absorb all of it quickly.

In addition, if you're interested in seeing further evidence of algorithmic reasoning capabilities occurring in transformers, Hattie Zhou has a good paper on that as well. https://arxiv.org/pdf/2211.09066.pdf

The story is really not shaping up to be 'stochastic parrots' if any real deep analysis is performed. The only way I can see someone reaching that conclusion is if they are not an expert in the field and simply glance at the mechanics for a few seconds, then try to ham-handedly describe the system (hence the phrase: "it just predicts the next token"). Of course, this is a bit harsh, and I don't mean to suggest that these systems are performing brain-like reasoning mechanisms (whatever that may mean), but stating that they cannot reason (when there is literature on the subject) because 'it's just statistics' is definitely not accurate.


> they cannot do causal graph analysis

I mean that the ANN, when run at inference time, does not draw up a nice graph, doesn't calculate weights, doesn't write down pretty little Bayesian formulas; it does whatever is encoded in the matrices-inner-product-context.

And it's accurate in a lot of cases (because there's sufficient abstract similarity in the training data), and that's what I meant by "of course it'll likely be useful in many cases".

At least this is my current "understanding", I haven't had time to dig into the papers unfortunately. Thanks for the further recommendation!

What seems very much missing is a characterization of the reasoning that is going on: its limitations, functional dependencies, etc.



