Hacker News

> ... in the case of physics one could imagine working through very high quality course materials together with Feynman ... with recent progress in generative AI, this learning experience feels tractable.

Actually, this seems to be absurdly beyond any of the recent progress in generative AI. This sounds like the kind of thing people say when their only deep knowledge is in the field of AI engineering.



It's optimistic, but given that the OP is one of the best-informed technical generative AI researchers, and has been passing on that knowledge to the rest of us for a decade-plus, I don't think we can just dismiss it as unfounded hype :)


My point is that he's a world expert on the engineering of AI systems. That shouldn't be mistaken for expertise, or even general knowledge, about anything else.

It's a good principle to bear in mind for people from any profession, but top AI engineers in particular seem to have an unusually significant habit of not being able to recognize where their expertise ends and expertise from another field (such as, say, education) begins. They also seem very prone to unfounded hype - which isn't to say they're not also good researchers.

Maybe Karpathy happens to be better on this than his peers, I wouldn't know.


incidentally, feynman would laugh pretty hard at this


Would any hypothetical training data corpus even be sufficient to emulate Feynman? Could any AI have a sufficient grasp of the material being taught, have enough surety to avoid errors, mimic Feynman's writing+teaching style, and accomplish this feat in a reasonable budget and timeframe?

The example is obvious marketing hyperbole, of course, but it's just not going to happen beyond a superficial level unless we somehow create some kind of time-travelling panopticon. It's marred by lack of data (Feynman died in 1988), bad data (hagiographies of Feynman, this instance included), flawed assumptions (would Feynman even be an appropriate teaching assistant for everyone?), etc.

I wonder if AI fans keep doing this thing in hopes that the "wow factor" of having the greats being emulated by AI (Feynman, Bill Gates, Socrates, etc.) will paper over their fundamental insecurities about their investment in AI. Like, c'mon, this kind of thing is a bit silly https://www.youtube.com/watch?v=og2ehY5QXSc


> Feynman, Bill Gates, Socrates, etc.

One of these doesn't quite belong ;)

But these AI researchers don't even understand these figures except as advertising reference points. The Socratic dialogue in the "sparks of AGI" paper https://arxiv.org/abs/2303.12712 has nothing whatsoever to do with Socrates or the way he argued.

Fourteen authors and not a single one seemed to realize there's any possible difference between a Socratic dialogue and a standard hack conversation where one person is named "Socrates."


> Prompt: Can you compare the two outputs above as if you were a teacher? [to GPT-4, the "two outputs" being GPT-4's and ChatGPT's attempts at a Socratic dialogue]

Okay, that's kinda funny lol.

It's a bit worrying how much the AI industry seems to be focusing on the superficial appearance of success (grandiose marketing claims, AI art that looks fine on first glance, AI mimicking peoples' appearances and speech patterns, etc.). I'm just your random layperson in the comment section, but it really seems like the field needed to be stuck in academia for a decade or two more. It hadn't quite finished baking yet.


As far as I can see there are pretty much zero incentives in the AI research arena for being careful or intellectually rigorous, or being at all cautious in proclaiming success (or imminent success), with industry incentives having well invaded elite academia (Stanford, Berkeley, MIT, etc) as well. And culturally speaking, the top researchers seem to uniformly overestimate, by orders of magnitude, their own intelligence or perceptiveness. Looking in from the outside, it's a very curious field.


> there are pretty much zero incentives in ____ for being careful or intellectually rigorous

I would venture most industries with foundations in other research fields are likely the same. Oil & Gas, Pharma, manufacturing, WW2, going to the moon... the world is full of examples where people put progress or profits above safety.

It's human nature


> I would venture most industries, with foundations on other research fields, are likely the same.

"Industries" is a key word though. Academic research, though hardly without its own major problems, doesn't have the same set of corrupting incentives. Although the lines are blurred, one kind of research shouldn't be confused with another. I do think it's exactly right to think of AI researchers the same way we think of R&D people in oil & gas, not the same way we think of algebraic topologists.


Andrej Karpathy (the one behind the OP project) has been in both academia and industry; he's far more than a researcher, he also teaches and builds products.


> > Feynman, Bill Gates, Socrates, etc.

> One of these doesn't quite belong ;)

I asked GPT to find which one:

"The one that doesn't fit in is Bill Gates.

Richard Feynman and Socrates were primarily known for their contributions to science and philosophy, respectively. Feynman was a renowned theoretical physicist, and Socrates was a foundational philosopher.

Bill Gates, on the other hand, is primarily known as a businessman and co-founder of Microsoft, a leading software corporation. While he also has made contributions to technology and philanthropy, his primary domain is different from the scientific and philosophical realms of Feynman and Socrates."


Thank you for this AI slop. It's the right answer but incoherent reasoning. It could have equally reasonably said:

"The one that doesn't fit in is Socrates.

Richard Feynman and Bill Gates are primarily known for their contributions to science and philanthropy, respectively. Feynman was a renowned theoretical physicist, and Gates is a world-famous philanthropist.

Socrates, on the other hand, is primarily known for foundational contributions to philosophy. His primary domain is thus distinct from the scientific and philanthropic realms of Feynman and Gates."


True, but I bet 90% of HN readers would have answered "Bill Gates", with reasoning similar to GPT's. So I can't exactly fault GPT too much.


Ok. Thanks for the contribution.


>> feels tractable

I mean, the guy isn't saying that it's going to 100% happen. He's saying that the problem feels like it might be doable at all. As Andrej has a background in physics, the phrase 'feels tractable' would mean that he thinks a path might exist, possibly, but only a lot of work will reveal that.


> As Andrej has a background in physics

This seems rather generous given that he was just a physics major. There's lots of physics majors who understand very little about physics and, crucially, nothing about physics education.


I was talking about how physicists will understate how difficult things can be.

'Feels tractable' is physics-speak for: a possibility exists to get there though it may require more teachers than there exist on the earth and more compute time per student than there are transistors in the solar system for the next 1000 years.

Anti-gravity would be 'tractable' too as we can see there must exist some form of it via Hubble expansion and then it's only a matter of time until a physicist figures it out. Making it commercially viable is left to icky engineers.

Things that a physicist doesn't think are 'tractable' would be time travel and completing your thesis.

To be very very clear: I am somewhat poking fun at physicists. Due to the golden age of physics, the whole field is kinda warped a bit about what they think is a doable thing/tractable. They think that a path may exist, and suddenly the problem is no longer really all that much of a problem, or rather 'real' problems are in figuring out if you can even do something at all. 'Tractable' problems are just all the 'stamp collecting' that goes into actually doing it.


"Just" a physics major. I'm sorry but you're being ridiculous.

There's nothing "just" about that, especially when the commenter only said he had a background in physics.


It's legitimate to call it a background in physics, but given the particular level of background and the context of this particular issue, its relevance is indistinguishable from zero.

Has he ever demonstrated any particular insight or understanding of physics or - more importantly - of physics education? As far as I've been able to find, the answer is no. Not that there's anything wrong with that. At worst it just makes him a typical physics major.


One of the things Karpathy is most famous for, perhaps the thing depending on who you ask, is his instructional material on Deep Learning and Neural Networks, from which hundreds of thousands of people have benefited.

That's far more tangible than whatever "background" it is you're looking for. He's a good teacher. He stands out for that and that's not an easy thing to do.

Of all the things background doesn't mean much in, being a good educator is at the top of the list. Most educators, including those who've been at it for years, are mediocre at best. The people who educate at the highest level (college/university professors) are not even remotely hired based on their ability to educate, so this really isn't a surprise.

Genuinely, and I mean no offense, your expectations just feel rather comical. People like Sal Khan and Luis von Ahn would be laughed out of the room looking for your "background".

Sure, Sal is an educator now but he quit being a financial analyst to pursue Khan Academy full time.

The real problem here is that you don't believe what Karpathy has in mind is tractable and not that there's some elusive background he needs to have. His background is as good as any to take this on.


I think you've misread this conversation. I was responding to someone who suggested that Karpathy's "background in physics" indicated some insight into whether this venture, particularly as regards physics education, will effectively provide guidance comparable to subject-matter experts like Feynman.

If they had cited some other background, like his courses on AI, I would have responded differently.


Ah Fair Enough



