
I'm not sure you understand what disingenuous means. Nobody is saying that sub-human-level AIs will suddenly surpass humans 'the next day'. That's totally absurd, and I really don't know where you got that from.

The supposition is that a general-purpose AI that is better than a human mind could be better than a human at designing and optimising AIs; the next-generation AI it designs will then be even better, and that's what sets off the exponential AI intelligence cascade. Improvements in algorithms can provide startling advances. In some respects the improved efficiency of algorithms has even outstripped the gains from Moore's Law.
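To make the claimed dynamic concrete, here is a toy numerical sketch. The update rule (each generation's improvement is proportional to the designer's own capability) and the constants are my assumptions for illustration, not anything established:

    # Toy model of the recursive self-improvement claim. The rule
    # c' = c + k*c^2 is an illustrative assumption, not a known law.
    def cascade(c0, k, generations):
        caps = [c0]
        for _ in range(generations):
            c = caps[-1]
            caps.append(c + k * c * c)  # smarter designers make bigger jumps
        return caps

    print([round(c, 2) for c in cascade(1.0, 0.1, 10)])
    # The per-generation growth rate itself keeps rising (10%, 11%, 12%,
    # ... 43%), which is the intuition behind calling it a "cascade".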

Personally I think strong AI like this is pretty far in the future, more than a few decades at least and more likely several generations away, but I do think it is eventually likely to happen.



Indeed it does not mean what I thought it meant; I was thinking of the opposite of ingenious.

My claim is that, for fixed hardware, this exponential cascade cannot happen as rapidly as some claim, if it ever happens at all (whatever 'exponential intelligence' means). We're already improving AI using an intelligence that won't be available to the AIs themselves for a very long time, and yet this rate of improvement is not a scary doomsday rate.

Look at hardware, for example. We've been using computers to design better computers for a very long time, and yet the use of computers in this design is effectively limited to non-decisive, local optimizations, like achieving good routing and good electromagnetic compatibility. The software that designs today's computers could still run on supercomputers from 30 years ago -- whereas if you followed this "self-improvement" logic, we should be using almost all of our computational power right now to achieve more computational power. The problem is that we are still vastly more capable than computers at building theories and doing design.

By the time an AI gets much better than we currently are at independently improving its own software, it will likely already be seeing diminishing returns; to achieve notable improvement it will need to improve hardware as well, which has the same problem, plus an additional one: it's very hard for an AI to independently improve its own hardware, since that requires a global manufacturing supply chain.
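As a contrast to the runaway sketch above, here is the same kind of toy loop with software gains damped by the remaining headroom on fixed hardware. The logistic form is standard; using it to model AI progress, and the numbers, are my assumptions, purely for illustration:

    # Self-improvement with gains damped by fixed-hardware headroom
    # (a logistic-style model; an illustrative assumption only).
    def fixed_hardware(c0, k, ceiling, generations):
        caps = [c0]
        for _ in range(generations):
            c = caps[-1]
            headroom = 1.0 - c / ceiling
            caps.append(c + k * c * headroom)  # gains shrink near the ceiling
        return caps

    print([round(c, 1) for c in fixed_hardware(1.0, 0.3, 50.0, 40)])
    # Growth looks exponential at first (~30% per generation), then
    # flattens as capability approaches what the hardware supports --
    # the "diminishing returns" point.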


I think you're missing a couple of things.

* An increase in hardware for an AI wouldn't require an increase in the theoretical capacity of hardware. It would just require more stuff to be put on the machine running the AI. Even if the AI were running on the world's largest supercomputer, the amount of RAM, processors, etc. on the machine could still be increased substantially with the resources available (for a rough sense of what that buys, see the sketch after this list).

* What a hypothetical general AI would be emulating is not simply more machines. It would, theoretically, be able to quickly emulate many people working with many dumb machines over a long period of time.
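A standard back-of-the-envelope for what "more stuff on the machine" buys is Amdahl's law. The law itself is textbook; applying it to a hypothetical AI workload, and the 95%-parallel figure, are my assumptions:

    # Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
    # fraction of the workload that parallelizes and n the processor
    # count. The p = 0.95 below is an assumption for illustration.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for n in (2, 16, 1024):
        print(n, round(amdahl_speedup(0.95, n), 1))
    # 2 -> 1.9, 16 -> 9.1, 1024 -> 19.6: adding processors helps a lot
    # at first, but even a 95%-parallel workload tops out near 20x.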

The only thing your argument proves is that current machines can't improve themselves - which I think everyone agrees with.



