Hacker News | hagbarth's comments

How so?

I guess because gold isn't created by decree?

Supply is only one half of value. The demand for gold is almost entirely speculative, whereas dollars can be directly used for almost anything.

This isn’t a political move. It’s a pension fund derisking a small part of their investments by moving them out of a riskier asset.

For rentals I get that. We own 2 EVs and a charger at home. Easiest driving experience ever. We just plug it in.

In terms of upgrading your daily life, never going to a petrol station is a great upgrade.

We haven't quite managed that in our house; we went once or twice last year to charge on a long trip, but didn't go in.


Graphite predates AI code review. It obviously includes that now, but the original selling point was support for stacking PRs.


I've seen this statement before. I'm not sure why my nation winning AI, whatever that means (first to AGI?), would be better for me than some other nation winning it.


Ah yes, proving a negative. What makes you sure a stone is not capable of cognition?


An LLM is an algorithm. You can obtain the same result as a SOTA LLM via pen and paper; it will just take a lot of long, laborious effort. That's ONE reason why LLMs do not have cognition.

Also, they don't reason, or think, or do any of the other myriad things attributed to LLMs. I hate the platitudes given to LLMs: "it's at PhD level", "it can now answer Math Olympiad questions". It answers them by statistical pattern recognition!
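To make the "it's just an algorithm" point concrete, here is a toy sketch with a hypothetical 3-token vocabulary and made-up weights (not any real model): greedy next-token selection reduces to multiplications, additions, exponentials, and a comparison, all of which you could, in principle, carry out with pen and paper:

```python
import math

# Hypothetical toy "language model": made-up weights, 3-token vocabulary.
VOCAB = ["the", "cat", "sat"]
# One made-up weight row per vocabulary token (hidden size = 2).
W = [[0.1, -0.4], [0.7, 0.2], [-0.3, 0.9]]

def next_token(hidden):
    # Logits are plain dot products: multiply and add.
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in W]
    # Softmax is just exponentials and a division; greedy decoding is argmax.
    exps = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]
    return VOCAB[probs.index(max(probs))]

print(next_token([1.0, 1.0]))  # prints "cat": the argmax of the toy logits
```

A real LLM differs only in scale (billions of weights, many layers), not in kind: every step is still ordinary arithmetic.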


A brain is an algorithm. Given an unreasonably precise graph of neurons, neurotransmitter levels at each junction, and so on and so forth, one could obtain the same result via pen and paper. It will just take a lot of long laborious effort. That’s ONE reason why brains do not have cognition.


There is a whole branch of AI trying to do this, but they are still at the very initial stages. LLMs are not the same thing at all.


Revenue is up across the board, and as far as I can see it's beating expectations.


> People think that because AI cannot replace a senior dev, it's a worthless con.

Quite the strawman. There are many points between "worthless" and "worth hundreds of billions to trillions in investment".


If you read a little further in the article, the main point is _not_ that AI is useless, but rather that it is not AGI god-building, just a regular technology. A valuable one, but not a source of infinite growth.


> But rather that it is not AGI god-building, just a regular technology. A valuable one, but not a source of infinite growth.

AGI is a lot of things, a lot of ever-moving targets, but it's never (under any sane definition) "infinite growth". That's already ASI territory / the singularity and all that stuff. I see more and more people mixing the two, arguing against ASI being a thing when they are talking about AGI. "Human-level competence" is AGI. Super-human, ever-improving, infinite growth - that's ASI.

If and when we reach AGI is left for everyone to decide. I sometimes like to think about it this way: how many decades would you have to go back before people from that time would say that what we have today is "AGI"?


Sam Altman has been beating[1] the ASI drum for a while now. I don't think it's a stretch to say that this is the vision he is selling.

[1] - https://ia.samaltman.com/#:~:text=we%20will%20have-,superint...


Once you have AGI, you can presumably automate AI R&D, and it seems to me that the recursive self-improvement that begets ASI isn't that far away from that point.


We already have AGI - it's called humans - and frankly it's no magic bullet for AI progress.

Meta just laid 600 of them off.

All this talk of AGI, ASI, super-intelligence, recursive self-improvement etc. is just undefined, masturbatory pipe dreams.

For now it's all about LLMs and agents, and you will not see anything fundamentally new until this approach has been accepted as having reached the point of diminishing returns.

The snake oil salesmen will soon tell you that they've cracked continual learning, but it'll just be memory, and still won't be the AI intern that learns on the job.

Maybe in 5 years we'll see "AlphaThought" that does a better job of reasoning.


Humans aren't really being put to work upgrading the underlying design of their own brains, though. And 5 years is a blink of an eye. My five-year-old will barely even be turning ten years old by then.


Assuming the recursing self-improvement doesn't run into physical hardware limits.

Like how we can theoretically build a spaceship that accelerates to 99.9999% of c: just a constant 1 g engine with "enough fuel".

Of course the problem is that "enough fuel" = more mass than is available in our solar system.
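For the curious, the scale of "enough fuel" falls out of the relativistic rocket equation, M0/M1 = ((1+β)/(1-β))^(c/(2·v_ex)). A quick sketch, assuming a chemical-rocket exhaust velocity of ~4.5 km/s (the numbers are illustrative, not from the comment above):

```python
import math

C = 299_792_458.0   # speed of light, m/s
V_EX = 4_500.0      # assumed chemical exhaust velocity, m/s
BETA = 0.999999     # target fraction of c

# Relativistic rocket equation: M0/M1 = ((1+beta)/(1-beta)) ** (c / (2*v_ex)).
# Work in log10 because the mass ratio itself overflows a float.
log10_mass_ratio = (C / (2 * V_EX)) * math.log10((1 + BETA) / (1 - BETA))

print(f"log10(initial mass / final mass) ~ {log10_mass_ratio:,.0f}")
```

With chemical exhaust velocities the mass ratio comes out as a 1 followed by roughly 210,000 zeros, which is absurdly more than the solar system's mass. Even a perfect photon rocket (v_ex = c) would need a mass ratio of about sqrt((1+β)/(1-β)) ≈ 1,400.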

ASI might have a similar problem.


I believe AlphaFold, AlphaEvolve etc are _not_ looking to get to AGI. The whole article is a case against AGI chasing, not ML or LLM overall.


AlphaEvolve is a general system which works in many domains. How is that not a step towards general intelligence?

And it is effectively a loop around an LLM.

But my point is that we have evidence that Demis Hassabis knows his shit. Doubting him on a general vibe is not smart.


AlphaEvolve is a system for evolving symbolic computer programs.

Not everything that DeepMind works on (such as AlphaGo or AlphaFold) is directly, or even indirectly, part of a push towards AGI. They seem to genuinely want to accelerate scientific research, and for Hassabis personally this seems to be his primary goal, and might have remained his only goal if Google hadn't merged Google Brain with DeepMind and forced more of a product/profit focus.

DeepMind do appear to be defining, and approaching, "AGI" differently than the rest of the pack, who are LLM-scaling true believers, but exactly what their vision is for an AGI architecture, at varying timescales, remains to be seen.


Has he, his team, or DeepMind used any AGI rhetoric, even just as advertising?


Hassabis has talked about AGI in a lot of interviews. So have members of his DeepMind team, and of course current and former Alphabet employees, the most prominent being Schmidt. He definitely thinks it is coming and has said we should prepare for it. Just search for his interviews on AI and you'll get a bunch of them.


This entire thread: people who are too lazy to read basic info about AI companies but have an opinion about "AGI rhetoric".

What do you think OpenAI was founded in a response to?


Musk’s wild-eyed AGI visions and hysteria towards the sober-minded, research-focused efforts of DeepMind / Demis Hassabis?

