I personally scoff at this kind of stuff because it's asking for something no one has the power to give. It's like asking to stop the development of nukes in the 40s and 50s; it's just not gonna happen.


Preventing the development of nukes entirely was obviously not going to happen. But delaying the first detonations by a few years, and moving the Partial Nuclear Test Ban Treaty up a few years, was quite achievable.

Whether delaying AI development a little matters depends on whether you think AI alignment, applied to future superintelligence, is overdetermined to succeed, overdetermined to fail, or close to borderline. Personally I think it looks borderline, so I'm glad to see things like this.


I'm firmly in the camp that delaying its development could make a difference; I just don't see how that's possible. These models are relatively simple and the equipment necessary to develop them is publicly available (and relatively cheap if we're talking about corporate or national scales). At least with nukes there was a raw-material bottleneck, but there really isn't a limiting factor here that any "good guys" could use as a choke point. It's out there, it's going to get worked on, and the only people the "good guys" can limit are themselves.


And during that period, and later during the cold war, the decision to make (or stop making) nukes was in the hands of maybe 5 people total. Today there are thousands of companies and tens/hundreds of thousands of people who can legitimately compete in the space. Best of luck trying to resolve a prisoner's dilemma between all of them.


It actually very easily could have happened, and almost did, but the Russians decided to go back on their effort to do it. People act like it's hard. Stopping a huge asteroid is hard; there might not be enough physical resources to do it. Stopping AI or nukes is definitely easy.


> Stopping AI or nukes is definitely easy.

Under what definition of easy? If it's easy to stop, then why don't the people signing the letter just do it, rather than trying to appeal to others to do it instead?

Aligning thousands of people (all of the people with the knowledge and resources to move forward quickly) to a common goal, with no dissenters (since a single dissenter could move things forward), is not easy. It's effectively impossible.


Well, AGI is detrimental to literally all humans. If everyone understood the facts, then everyone would vote for solutions. As that becomes more and more obvious, we are getting closer to this.

If one of the many close calls had gone a little differently and a city had been nuked accidentally, it would have caused a global outcry, and there would have been yet another international effort to reduce the global nuclear stockpile to zero. And there’s a very good chance it would have succeeded. At the very start there was actually an agreement not to initiate a nuclear arms race, but the Russians went back on it. So, for something that is “impossible”, we seem to see signs of it all the time.

The reason you think it’s impossible is because most things aren’t like this. Most things benefit some people and harm other people. This harms all people. Be a part of the solution instead of dismissing real solutions when you have no logical reason for doing so.


> Well, AGI is detrimental to literally all humans

That's up for debate. I personally think AGI will be good, though not with a level of certainty that would allow me to use it as an axiom in a conversation.

> The reason you think it’s impossible is because most things aren’t like this

The reason I think it's impossible is because there are several other things like this, and we failed in the same way each time. When there is massive benefit to dissenters, getting everyone on board fails. Nukes, fossil fuels, deforestation, industrial chemical controls, overfishing, etc. are all examples of how we continue to fail at the exact same task.



