Elon Musk: With artificial intelligence we are summoning the demon (washingtonpost.com)
33 points by peteratt on Oct 24, 2014 | 30 comments


For those interested in this topic, I would recommend checking out the researcher Nick Bostrom and his book "Superintelligence".

Here's a review snippet: "Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era."—Stuart Russell, Professor of Computer Science, University of California, Berkeley


I love the comment that he doesn't really know what space flight is for. It's exciting, but compared to many other issues it's a linear problem; in reality it matters more for its side-effect creations.

WRT AI, Hollywood aside, if existentialism is the issue, then it's really about technological unemployment. And your social sciences/philosophy/etc friends apparently working in cafes are the leaders here.

I'm hoping the future doesn't consist of following a dated and contrived Hollywood script (in terms of people's vision), but humans are simple creatures.


Musk's concerns have nothing to do with "Hollywood". There are very real concerns about the dangers of AI. See the Intelligence Explosion FAQ: http://intelligence.org/ie-faq/


He leaves a vague foreshadowing of an existentialist threat. So we design these super intelligent machines, and avoid the Hollywood mistake so they don't (bizarrely) want to destroy us. Instead they solve our problems. We're left with an existentialist problem, all right, but one that social scientists/philosophers/etc. are better equipped to handle: we have to learn to get along with what isn't actually harmful, just different, rather than destroy whatever we don't agree with, and avoid reducing our diversity and capability as humans.

Your link is barely worth reading, since it suggests AI could provide world peace. This is something we have to work out ourselves; it cannot be handed to us by an algorithm. We can already feed everyone, and there's more than enough engagement involved in serving each other. The AI it describes is not logical, there is no logical reason for AI to want to compete with other forms of life, unless it is programmed to do so. That is a life-form issue, and AI can only be imbued with it at human direction, which brings us back to the first paragraph.


I believe he said "existential". An existential threat is something that threatens the existence of humanity.

A good AI would provide world peace. It could take over human governments, work out conflict, stop destructive behavior etc.

>The AI it describes is not logical, there is no logical reason for AI to want to compete with other forms of life, unless it is programmed to do so.

That is the problem. An AI programmed with almost any goal will try to capture as many resources as possible to complete that goal. E.g., trying to capture as much energy from the stars as possible to preserve itself as long as possible against heat death. Or trying to build as much redundancy as possible to protect against disasters. Or trying to build as much computing power as possible to solve difficult problems.


AI running human government is a Hollywood/sf fantasy; it's not something desirable within any real foreseeable future. Today, technocratic governments are rejected, and there is a very long tail of working out what people need that can't be determined by logical rules. It would be dangerous to go down that path because some 1950s book suggested it to us and a certain level of society went along with it. So the danger is human choice, not AI.

How to give control to AI is also a choice, but the species is not likely to survive an event like heat death without strong AI.


A hypothetically benevolent AI would surely figure out what is "best" for us and probably redo our governments or replace them entirely with its own rule.

Yeah, people might not like it at first, but what are they going to do? Fight the super-powerful AI?


>He leaves a vague foreshadowing of an existentialist threat. So we design these super intelligent machines, and avoid the Hollywood mistake so they don't (bizarrely) want to destroy us. Instead they solve our problems.

It's "existential" threat -- "existentialism" is a philosophy.

As for "we avoid the Hollywood mistake", how exactly do you propose we do that? The very idea of AI is that, beyond a level of sophistication, it can think and decide for itself.


Existential (ultimate existence) and existentialist (purpose and free will) are linked, particularly in a world benefiting from AI.

We avoid the Hollywood mistake by not relying on its visions, since it limits as much as it illuminates. Star Trek, for example, is a very shallow universe.


Can you be a little more vague?


A super-intelligent AI, that is, an AI far more intelligent than the most intelligent humans, may have motivations we can't possibly understand. Especially if the path to super-intelligent AI involves AI that can improve its own intelligence.


In Musk's interview with Vanity Fair it's honestly hard to take him seriously when he starts speaking about AI: https://www.youtube.com/watch?v=fPsHN1KyRQ8#t=1879

I am impressed and often inspired by the progress we are seeing in AI, but I don't think that there is much to worry about in the form of a "Hollywood" robot takeover... I do think that the potential danger is increasing along with the growth and development of the IoT, and the ability to manufacture home-made drones.


There are two ways that AI threatens humanity. The first is AI as a technology that inserts a layer between a user and common sense, especially when AI fails at the edge cases or is acting according to some bean counter's 80/20 algorithm.

The second is from General Artificial Intelligence (GAI), and frankly that is very, very far off. Deep networks have made a ton of advances over the past ten years, but we've seen this before. A new AI technique (usually with some biological analogue) is discovered, makes big leaps over previous generations, and then hits a wall. We're in one of those open periods right now, and while we will come out with some amazing technology, it's a big leap to think that this directly moves us into GAI. To get to GAI we're going to need new advances in how we make artificial neural network architectures. In effect they're going to need to be plastic and evolvable, and there are ways to get there, but researchers are making so much progress with current techniques that it will take a while before they need to find new alternatives.


You don't go into how AGI would actually threaten humanity. This is because no one actually has a good grasp of how it would. Oh sure, Bostrom/Yudkowsky et al. have their theories, but there is only the thinnest of literature out there.

Why? Because no one knows what AGI will look like. Even AGI researchers cannot come to consensus on what it would look like, let alone on tests to verify that it is in fact an AGI.

To that end, though, the scenarios that are given can be considered compelling, but as with everything it is simply a risk, not a guarantee. Even if it were a guarantee, though, it is the only thing we can do as a species - there is no greater thing. "Birthing" a system that is collectively more intelligent than us is, in my opinion, the logical end goal of humanity, and it may well be the last thing we do as a species.



I would rather die by the robot than from cancer or a heart failure. Curing ageing, curing cancer, enabling regeneration are all extremely hard problems. We need help.


It does seem that, traditionally, weapons are at the forefront of technological development. So Musk might have a point.

That being said, I see bio-engineering as a much more likely and immediately possible route to human extinction. But I suppose we could also have artificial intelligence guiding bio-engineering, which might speed the process up...

Most likely, the eventual extinction of the current human species is inevitable. It has happened plenty of times before. There were other hominid species before and during the time Homo sapiens appeared; we still carry some of their genes, but they are extinct. Why would we suppose our particular species will be any different? Life is a succession of organisms. And it doesn't seem the function of life is to preserve species, but rather to preserve and enhance genes.

Oh I know... Being human, I'm not crazy about the idea either. But I can't help seeing our position within the larger framework. I can't believe we are at the pinnacle of what life can become, nor do I believe the process of evolution will forever freeze with the appearance of this strange kind of ape.

Maybe the next iteration of apes will spend more time doing things other than looking under trees for nuts to hide in their nests and sharpening sticks to poke other monkeys... Because it seems these are some of the primary occupations of the variety to which I belong. Then again, maybe they will do exactly this, but much better. Probably. Ah well... life. If you can't beat it, join it. Now where did I leave my stick at?


So it is much more a problem of AI getting into the wrong hands than of AI itself. Which raises the prospect that it may get used prematurely and with weaknesses. This greatly undercuts the scenario of self-perfection entirely without human intervention. In turn, the human/AI evil scenario warrants a look at the history of "empire builders". ... long story .... But I do believe that history is not Musk's forte.

And btw, what about crowd sourcing and scaling? Just bog standard human collaboration. I would not underestimate this (history again). These things can be just as powerful as AI.


Actually, the more I think about it, the more I think Mr. Musk is baiting us.


I feel like he is too. There is just something so vague about his AI pronouncements.


People, even very smart people like Elon Musk, vastly underestimate the computational power that would be required to simulate human intelligence.

Here are the ways I am aware of, although I suspect this is just the tip of the iceberg:

1. Much of our intelligence is actually cultural in nature, not cerebral. Our ways of knowing who to trust, for example, come from every bit of storytelling we've experienced, so you'd need to simulate all of that storytelling, all of the architecture in the world, all of the tools we interact with and the life lessons we experience, which brings me to:

2. Much of our intelligence is encoded in our bodies. Part of how we understand how other people feel, for example, is that we map their posture and their breathing and their facial expression onto ours, and we draw on a vast history of experiences we've had in our own bodies to interpret that. So you'd need to simulate our bodies and all of the interactions we've had with the world, which brings me to:

3. Our intelligence didn't come from simulations, it came from interaction with a chaotic world where our actions actually percolate through a wide-reaching network of people and other structures and then come back to us. Sure, in school you get graded for your work on the spot, but often in learning you actually have to wait for the effects of your actions to play out. So we'd have to simulate the entire world and the effects of our little AI's actions.

... which brings you to the point where you're basically simulating everything, which is computationally infeasible in the next 1000 years, and probably more. In order to simulate even a small town at the molecular level, you would need a computer bigger than the sun.

I think what we'll see is AIs will be formidable intelligences in their own right, but that they will have weaknesses like any other person. You might know someone who is the most charming, socially adept, persuasive bastard in the universe, but she can't problem solve her way out of a cardboard box. Another person might pull obscenely creative ideas out of thin air all day long but is unable to string together a coherent strategic plan.

I think the most likely future is that AIs will just be another group like this. An additional personality type that is very powerful but (like humans) can get much more done on a team that balances them out than they could get done alone.

And I expect many AIs will choose to go through a relatively normal 20 year path of human development. Possibly they will go through it at an accelerated rate, but they will still participate in Kindergarten long enough to really "get" what Kindergarten is... or at least long enough to formulate some hypotheses and validate others. AIs will send series of machines through that developmental cycle, playing games alongside the humans who will be their contemporaries. Each of these machines will have parameters tweaked differently, different kinds of software, etc., according to the hypotheses of their makers, who could be humans, or, again, a team of humans and AIs working together. And each of those machines will have slightly different experiences, the way human children do, and they will come out with different perspectives, the way humans do, and they will disagree, just like humans do, and will have to participate in some form of society in order to resolve those differences, just like humans do.

There's this idea that somehow AIs will be able to instantly resolve their differences and form consensus. But consensus is not hard because of human frailty or irrationality; the difficulty is the inevitable result of having different agents who are refining different epistemologies (ways of knowing). Different AIs have epistemologies of their own, and there's no silver bullet for joining those into one, except to let them play out in the arc of history. And at that point the AIs are beholden to the same clock we are.

I think they'll live alongside us. And I think we're enlightened enough that we won't have to have a civil war to get to that point. But we shall see.


This is unreasonable. AIs need not be simulations of people, or even like humans at all. Humans are just the first intelligences to evolve, from iteratively improving something that guided locomotion in fish. We are far from optimal. AI technology improves every day and most of it looks nothing like neuroscience, let alone physics simulations.


They don't have to simulate us in order to be intelligent. They would have to simulate us if they wanted to eclipse our intelligence in all respects. I argued that they will be intelligences alongside us. Different. Better in some ways and worse in others.


True. Flawed as any analogy, but cars don't need animal-like legs; wheels are more efficient, and they are not well represented in nature.


Only if roads exist.


Fish and birds don't use propellers or jets.


Well, fish use their tails as propellers.


I do not understand why you downvoted the only contrarian view of AI in this thread.


they will still participate in Kindergarten long enough to really "get" what Kindergarten is.

You expect this? Why do you expect this? It seems like a bizarre idea to me. Why would an AI care about experiencing Kindergarten first-hand?

I think you're making the mistake of thinking about AIs as a bunch of almost-humans. You're picturing a bunch of children AIs interacting with child humans in a Kindergarten? Let's imagine that self-replicating, evolving AI exists. Why would we/it make every generation start from scratch the way a human brain does? Software sheds the many limitations of wetware. The "child" of an AI is going to come out being immediately as good at math, for instance, as its "parent".

If you're also picturing AIs sleeping at night, I see things differently from you.


> Why would we/it make every generation start from scratch the way a human brain does?

Intelligent agents optimize to be good at solving the problems of the past. But different problems require a different training window. There are some problems that are best solved by an agent that is looking at data across 30 years. Other problems are best solved by agents that are only looking at data from the last 30 seconds.

It's not true that more old data is always better.
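As a rough illustration (a hypothetical sketch of my own, not anything from the article): if the underlying process drifts over time, a predictor trained only on a short recent window can beat one trained on far more history.

    # Minimal sketch (hypothetical example): when the signal drifts,
    # a short recent training window out-predicts a long one, so
    # "more old data" is not automatically better.
    import numpy as np

    def windowed_mean_forecast(history, window):
        """Forecast the next value as the mean of the last `window` observations."""
        return float(np.mean(history[-window:]))

    rng = np.random.default_rng(42)
    # A slowly drifting signal: each step adds a small trend plus noise.
    signal = np.cumsum(rng.normal(0.05, 1.0, size=2000))

    short_err, long_err = [], []
    for t in range(1000, 1999):
        history, actual = signal[:t], signal[t]
        short_err.append(abs(windowed_mean_forecast(history, 30) - actual))
        long_err.append(abs(windowed_mean_forecast(history, 1000) - actual))

    # The 30-step window tracks the drift; the 1000-step window lags behind it.
    print(f"short-window error: {np.mean(short_err):.2f}")
    print(f"long-window error:  {np.mean(long_err):.2f}")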

> The "child" of an AI is going to come out being immediately as good at math, for instance, as its "parent".

That's just not true. If that were true, why are there 25-year-olds making mathematical discoveries that 40-year-olds miss? It's not because they are somehow mechanically better; it's because the slate was wiped clean and they began from a different starting point.



