hippee-lee's comments

Forgive my lack of knowledge on the topic, but a question keeps popping into my head when I read comments about the danger to humanity from AI running amok.

> There are many paths towards dangerous AI futures,

Are there not just as many paths towards protective, helpful <or insert one of many adjectives here> AI futures?

Is there a reason to believe that there will only be one AI in the future and that given a directive to do something, the elimination of humanity will be a logical endgame scenario for it?

Why not many AI entities with different and competing goals? Granted, this opens up a different can of dangerous worms. But still, if there is a probability of an AI evolving to 'think' that elimination of the human race is a logical path then isn't it equally likely that there will be another AI evolving logical paths to preserve the human race?


As briefly as I can muster: we live in a world with finite resources. Any consumer of these resources for its own purposes, whatever they may be, is in direct competition with us (all of human civilization). Any entity better equipped to gather and make use of these resources will leave us resourceless. The space of human goals is a very tiny and constrained sliver of motivation-space, so by default AI goals fall outside of it.

To quip: The AI does not love you, nor does it hate you, it simply does not care. You are made of atoms that it can use for something else.


It feels like both you and the previous responder are presupposing that there can only be one AI. If there is only one AI then yes, it won't love, hate, or care about people, and it will likely have the resources to use them for its own ends.

But I am looking for math, science, or something more than sci-fi that can show us there will likely only be one AI, ever. If not, wouldn't the AIs also try to manipulate and exploit each other for individual gain?

Perhaps I just don't understand AI well enough, but I have yet to see any reasonable evidence that points to only one AI entity evolving on Earth. If there is more than one AI, is it unreasonable to think that people may still be able to come up with ideas (emotional and irrational ones) that an AI could not, giving that AI a competitive advantage over the others? That could lead to a more symbiotic relationship between humanity and the AIs.


AI is both singular and plural. The important difference is whether there are zero or more than zero, irrespective of any competition among them.

Human intelligence took millions of years to evolve. Sorry, but AI(s?) will develop faster.


> Human intelligence took millions of years to evolve. Sorry, but AI(s?) will develop faster.

Just because it happens faster doesn't mean that the underlying laws that guide evolution in the universe are less applicable.


I have two competing responses:

1) Check out the AIs of Iain M. Banks's Culture series [0]. In it is a benevolent society of AI machines (the Minds) that generally want to make the universe a better place. Shenanigans ensue (really awesome shenanigans).

2) In response to the competing AI directives, I'll reference another Less Wrong bit o' media, this time a short story called Friendship is Optimal [1], wherein we see what a Paperclip Maximizer [2] can do when it works for Hasbro. (It is as bad, awesome, and interesting as you might expect it to be.)

Personally, I think the general idea is that once one strong AI comes about, there will also be a stupefying amount of spare idle CPU time available that will suddenly be subsumed by the AI and jumpstart the Singularity. Once that hockey stick takes off, there will be very little time for anything else to get in on being a dominant AI. It's... a bit silly written like that, but I get the impression it's assumed AI will be just like us: both competitive and jealous of resources, paranoid that it will be supplanted by others and will work to suppress fledgling AI.

I have no idea why this view prevails, aside from the fact that it's like us. Friendship is Optimal makes a strong point that the AI isn't benevolent, merely doing its job.

> The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else. — Eliezer Yudkowsky

[0] http://en.wikipedia.org/wiki/The_Culture

[1] http://lesswrong.com/lw/efi/friendship_is_optimal_a_my_littl...

[2] http://wiki.lesswrong.com/wiki/Paperclip_maximizer

EDIT: I feel it may be appropriate for me to share my opinion: AI will likely be insanely helpful and not at all dangerous. But there will be AIs that run amok and foul things up - life-threatening things, even. But we already do that with all manner of non-AI equipment and software, so I'm not terribly worried (well, no more so than I usually am).


I think Bostrom's and Yudkowsky's arguments are a bit flawed on this topic.

The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips.

Why is the worthiness of this goal not subject to intelligent analysis, though? The whole scenario rests on the idea of an entity so intelligent as to wipe out all humanity, but simultaneously so limited as to be satisfied with maximizing paperclips (or any other limited goal for which this is a proxy).

An AGI is simply an optimization process—a goal-seeker, a utility-function-maximizer.

Then I submit that it's not an artificial general intelligence, because it apparently lacks the ability to evaluate or set its own goals. I'm reminded of the Sixth Sally from The Cyberiad, in which an inquisitive space pirate is undone by his excessive appetite for facts.


>it apparently lacks the ability to evaluate or set its own goals.

The AI would have to evaluate the goal by some standard, so 'maximize paperclips' is a proxy for whatever goals get a high evaluation from the standard. Getting the standard right presents essentially the same problem as setting the goal.

Putting in 'a need to be intellectually satisfied by the complexity of your end product' is complicated and still wouldn't save humanity.
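
To make that concrete, here's a toy sketch of my own (Python; nothing from Bostrom or Yudkowsky, and the numbers are arbitrary). A hill-climber is indifferent to what its utility function "means"; it just climbs whatever standard it's handed, so swapping in a "better" standard merely restates the goal-setting problem:

    import random

    def maximize(utility, state, steps=10000):
        # Greedy hill-climbing: the optimizer never questions `utility`,
        # it only climbs it. The "goal" lives entirely in that function.
        for _ in range(steps):
            candidate = state + random.choice([-1, 1])
            if utility(candidate) >= utility(state):
                state = candidate
        return state

    paperclips = lambda n: n           # the 'maximize paperclips' standard
    print(maximize(paperclips, 0))     # climbs as far as the step budget allows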


Any intelligent animal is fighting for its survival when feeling threatened. There's no reason to assume that a self-aware AI will be OK with us simply pulling the plug on it.


I'd like to think we could find some middle ground between helpless surrender and imposing the death penalty on a sentient individual, both in moral terms and in terms of having some failsafe mechanisms, so that supplying electricity didn't allow for a takeover of the power grid or some other doomish scenario.


>Are there not just as many paths towards protective, helpful <or insert one of many adjectives here> AI futures?

No. "Bad" is just the state of the universe by default. "Good" is an extremely small island in a sea of "bad".


> Kids are made to care.

True. But it's hard to predict or script which moments make a lasting impression on a child. Speaking only for myself, the moments I remember most vividly don't even register with my parents when I bring them up. But the interactions are forever etched in my mind.

Having a child of my own now, I remember that. I remember that the world is chaotic and unpredictable, and that in order to make memories, even one, you first have to be present. This bites me in the ass when my wife reminds me of it at 5am, when my second-grade daughter wants to play games or draw pictures with one of us.


What about driving for fun? Please note, fun has no correlation to speed when it comes to driving.


I suspect people who like to drive for fun will continue doing so, and that eventually it will be a fairly uncommon novelty, like horseback riding.


It's funny you should mention horseback riding, as the fun driving I was thinking of involves driving a truck with a horse trailer attached to it. :-)


More than interesting. It would be a way to hold people to account for what they say versus what they do.


What is the most serious attempt out there of trying this?

I would want a law saying all bills must be put through a system that can be audited to see precisely who makes the changes.

And if it allowed pull requests like GitHub, anyone could fork a bill, make a change, and people could comment on the best proposed revisions.

I think this could honestly get people more involved in politics in a productive way, rather than watching talking heads spout reactionary crystal ball gazing on the 24 hour news networks.
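
As a rough sketch of the mechanics (hypothetical repository layout, file names, and sponsors; it assumes each bill is a text file kept under git), every amendment becomes a commit, so line-by-line attribution falls out of git blame:

    import subprocess

    def amend(bill_path, author, message):
        # Record an amendment as a commit attributed to its sponsor.
        subprocess.run(["git", "add", bill_path], check=True)
        subprocess.run(
            ["git", "commit", "--author", author, "-m", message],
            check=True,
        )

    def who_changed(bill_path):
        # Line-by-line attribution: exactly who wrote each clause.
        result = subprocess.run(
            ["git", "blame", bill_path],
            check=True, capture_output=True, text=True,
        )
        return result.stdout

    # amend("bills/hr-1234.txt", "Sen. Example <ex@senate.gov>", "Strike section 3")
    # print(who_changed("bills/hr-1234.txt"))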


Which is why there isn't such a thing. :)


Why not just render the public pages of your app out to HTML, or use one of the PhantomJS services to do it? Then, when spiders come up the waterspout, have your web server tell them where to go for your site's links.

If you don't want to do it yourself, there are quite a few companies that can help: http://scotch.io/tutorials/javascript/angularjs-seo-with-pre...
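
And for the DIY route, a minimal sketch (Python driving PhantomJS through Selenium; it assumes the phantomjs binary is installed and a Selenium version that still ships the PhantomJS driver, and the URL is a placeholder):

    from selenium import webdriver

    def prerender(url):
        # PhantomJS is a headless WebKit browser: it executes the app's
        # JavaScript, so page_source is the rendered HTML, not the empty shell.
        driver = webdriver.PhantomJS()
        try:
            driver.get(url)
            return driver.page_source
        finally:
            driver.quit()

    # Cache the result and serve it when the User-Agent is a known spider:
    # html = prerender("http://example.com/#!/products")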


There are some important technical and operational differences between SnapSearch and using PhantomJS; I answered this in a comment below: https://news.ycombinator.com/item?id=7765731


And Blender too.


Blender is GPL and sadly Apple's T&C forbid publication of Free Software in the App Store.


No. People who choose GPL choose not to publish their software in the App Store.


Perhaps, but either way, as an ST2 user I'm looking for a replacement, and once ST2 does not meet my needs I'll either go back to Emacs full time or look towards something else for lightweight stuff.

I've already moved most of my web stuff to IntelliJ/WebStorm. I really liked ST2 but was a little miffed that, right after I bought my license, ST3 was pushed to the forefront of the site without a reasonable upgrade plan.


I do certainly agree that the new-version / upgrade thing has been poorly handled. If nothing else, announcing it and causing the split should have been followed by rapid development instead of over a year of slow beta releases.


> Not saying it's a good thing or a bad thing

The tools we use are neither good nor bad. How well we use them, what we use them for, and how we teach others to use them are what make them good or bad.

I love the way you elevate this conversation with a pragmatic point of view and an opinionated but reasonable voice. Thank you.


This is the approach I took. If the developer offered an ad-free app I would gladly pay for it, so as not to have to deal with "Daddy, the iPad is broken," only to find that Safari had launched and taken her away from her game.

Now that she is old enough to read, and some of the game apps she has bought with her own money prompt her to upgrade, I just tell her we don't do in-app upgrades, and she is very adept at dismissing the prompts. FWIW, Toca Boca apps are great. I still see her playing them occasionally, although in the past few months her interest in digital games has really fallen off and she wants to play board games or hide-and-seek instead.


Could that be sold as an upgrade, something akin to the international outlet kits that Apple sells for its laptop chargers?

