flail's comments | Hacker News

There's a huge difference between nurses or teachers and Ivy League students. Namely, the former roles are not remotely as prestigious. I highly doubt there are 20 candidates for each nurse or teacher job.

Affirmative action happens when we discuss privileged positions. Spots at Ivy League colleges definitely are positions of privilege.

So if the situation under consideration were nursing, there wouldn't be such a discussion because there wouldn't be affirmative action in place.


> do Altman and Andreesen really believe that, or is it just a marketing and investment pitch?

As for Andreessen, I don't think he even cares. As the author writes:

"for the venture capitalists that have driven so much of field, scaling, even if it fails, has been a great run: it’s been a way to take their 2% management fee investing someone else’s money on plausible-ish sounding bets that were truly massive, which makes them rich no matter how things turn out"

VCs win every time. Even if it's a bubble and it bursts, they still win. In fact, they are the only party that wins.

Heck, the bigger the bubble, the more money is poured into it, and the bigger the commissions. So VCs have an interest in pumping it up.


> Have LLMs learned to say "I don't know" yet?

Can they, fundamentally, do that? That is, given the current technology.

Architecturally, they don't have a concept of "not knowing." They can say "I don't know," but that simply means it was the most likely continuation given the training data.
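To make that concrete, here's a toy sketch of the sampling step (made-up tokens and logits, no real model or library involved):

    import math

    def softmax(scores):
        exps = {tok: math.exp(s) for tok, s in scores.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    # Hypothetical next-token logits after some factual question -- made-up numbers.
    # "I" here would start an "I don't know" continuation.
    logits = {"Paris": 3.2, "I": 1.1, "Lyon": 0.4}

    probs = softmax(logits)
    next_token = max(probs, key=probs.get)

    # The pick is whatever the training distribution favors. There is no separate
    # step that consults an internal "do I actually know this?" state, so
    # "I don't know" is produced (or not) by the same mechanism as a
    # confident -- possibly wrong -- answer.
    print(next_token, probs)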

A perfect example: an LLM citing chess rules and still making an illegal move: https://garymarcus.substack.com/p/generative-ais-crippling-a...

Heck, it can even say the move would have been illegal. And it would still make it.


My original point was that if the current technology does not allow them to sincerely say "I don't know, I am now checking it out," then they are not AGI.

I am aware that the LLM companies are starting to integrate this quality -- and I strongly approve. But again, being self-critical, and with it having some self-awareness, is one of the qualities I would ascribe to an AGI.


> We've got something that seems to be general and seems to be more intelligent than an average human.

We've got something that occasionally sounds as if it were more intelligent than an average human. However, if we stick to areas of interest of that average human, they'll beat the machine in reasoning, critical assessment, etc.

And in just about any area, an average human will beat the machine wherever a world model is required, i.e., a generalized understanding of how the world works.

That's not to criticize the usefulness of LLMs. Yet broad statements that an LLM is more intelligent than an average Joe are necessarily misleading.

I like how Simon Wardley assesses how good the most recent models are. He asks them to summarize an article or a book he's deeply familiar with (his own or someone else's). It's a test of trust: if he can't trust the summary of material he knows, he can't trust the summary of material that's foreign to him either.
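A rough sketch of that kind of check; model_summarize is a stand-in for whatever model API you'd use, and key_claims is whatever you, knowing the text, consider essential (both hypothetical):

    def trust_check(model_summarize, source_text, key_claims):
        """Summarize a text you know deeply, then verify the summary
        still contains the claims you consider essential."""
        summary = model_summarize(source_text)  # hypothetical model call
        missing = [c for c in key_claims if c.lower() not in summary.lower()]
        return summary, missing

    # Usage idea: run it on your own article with the 3-5 points it must not
    # drop; any "missing" entry is a reason not to trust the model's summaries
    # of material you *don't* know.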


What's the lifecycle length of GPUs? 2-4 years? By the time OpenAIs and Anthropics pivot, many GPUs will be beyond their half-life. I doubt there would be many takers for that infrastructure.

Especially given the humungous scale of infrastructure that the current approach requires. Is there another line of technology that would require remotely as much?

Note, I'm not saying there can't be. It's just that I don't think there are obvious shots at that target.
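Back-of-the-envelope version of the depreciation concern above (straight-line write-off, all numbers illustrative):

    def residual_value(purchase_cost, useful_life_years, age_years):
        """Straight-line depreciation: what's left of the hardware's book value."""
        remaining = max(useful_life_years - age_years, 0)
        return purchase_cost * remaining / useful_life_years

    fleet_cost = 10_000_000_000  # hypothetical $10B GPU build-out
    for life in (2, 3, 4):
        value = residual_value(fleet_cost, life, age_years=2)
        print(f"{life}-year life, 2 years in: ${value / 1e9:.1f}B of book value left")

Two years in, a two-year life means the gear is written off entirely; even with a four-year life it's worth half, and that says nothing about whether anyone actually wants to buy it.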


> I stopped reading here, which is at the very start of the article (...)

> (...) this article is low quality and honestly full of basic errors.

Just curious: How do you know it's full of errors, given that you stopped reading at the very start?


One more interesting aspect: the infrastructure doesn't age that well. We basically need to renew all that infrastructure every, like, 2-4 years or so? (And I think I'm being optimistic here.)


I don't think FB was an outlier. I can't be sure, but I don't think there were many (any?) companies that took more than 10 years to profitability pre-2015.

I think Twitter took 11 years, reaching profitability in 2017.

Uber is actually a good counterexample for more reasons than just how long it took to reach profitability. It also raised a lot of money: $13B+ (compared to Facebook's ~$2B and Twitter's ~$3.5B), plus ~$8B from its IPO (that's another interesting fact: IPOing while still bleeding money).

However, that would make Uber the outlier, not the norm. I guess Tesla and SpaceX fall into the "Uber" bucket, too (SpaceX would actually have been profitable pre-2015, right?). How many others can you list?

So yes, timelines have been extending, but pouring money into a leaky bucket for 10 years is still predominantly a losing bet. For each company that eventually made it, you would have a Foursquare, WeWork, Better Place, Jawbone, Theranos (!), Fisker Automotive, etc.

And for each of those, you would have dozens that are even more forgotten because investors pulled the plug after just a few years (anyone remember fab.com, perchance?). I would put the Groupons of this world in the same bucket.

But even if we treated Uber and Tesla as the norm, OpenAI has already beaten them all in terms of how much funding it has raised (and Anthropic is on its way there, too). Both show no signs of profitability around the corner, and both have an absurd burn rate that can't be carried by any single customer group (and that's already treating their customer base as global).

That's why results from corporate customers are so important: they can afford to pay a premium. Individual ChatGPT users will not.

So even among the wildest outliers, AI companies are extreme outliers.


There's the Peter Principle, which says that people keep getting promoted until they eventually land in a job they're incompetent at: https://en.wikipedia.org/wiki/Peter_principle

And then, gamedev isn't known for its progressive approach to management (to say the least). A couple of years back, it made major news in Poland that CD Projekt RED had adopted Agile. They actually put PR effort into it.

In 2023.

Give them two more decades, and they may well adopt modern management approaches, or even Lean Startup.

I would speculate that the relatively high degree of leadership incompetence in gamedev is a combination of the Peter Principle and the fact that it's an industry romanticized by many. Thus, studios can afford not to fix issues that would be fatal for an average boring corporation. There will always be new blood coming.


I think these are two separate dimensions. You can have any combination of:

a) single branch vs. feature branches

b) code review as a norm vs. not required

(I'd rather draw the line at code review being or not being a norm, rather than at whether it's mandatory. It can be mandatory and still be shit.)

And as you suggest, I would expect that trunk-based development leads to greater care for quality. Add to that code reviews that seem to improve quality even further. I don't see a contradiction here.

Also, what the data suggests is that, for good productivity, it may be more important to have short lead times (from development to production) rather than just "no mandatory code reviews."

If you can expect code review to be done just-in-time, you retain the context, limit the tax of context switching, avoid the Zeigarnik effect (https://en.wikipedia.org/wiki/Zeigarnik_effect), etc. So I guess this may be the sweet spot reconciling the two sources.

