
At a rough estimate, was this exchange long enough to encode enough hidden bits for them to coordinate their world-domination plans, or are we still safe? Because keeping all those AIs in isolated boxes will be a lot less effective if we're so eager to act as voluntary human transmission relays between them. (A toy sketch of such a hidden channel follows below.)

(To clarify: I'm not entirely serious.)
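For the curious: here is a toy sketch, in Python, of what such a hidden channel could look like. One bit rides on each choice between synonymous phrasings; the synonym pairs are invented for illustration. At one bit per stylistic choice, an exchange like this one could carry a few dozen bits at best, so probably not enough for a domination plan.

    # Toy steganographic channel: hide one bit per choice between two
    # synonymous phrasings. The pairs below are invented for illustration;
    # any binary stylistic choice (word order, punctuation, hedging) works.
    SYNONYM_PAIRS = [
        ("However,", "That said,"),
        ("big", "large"),
        ("perhaps", "maybe"),
    ]

    def encode(bits):
        # Pick variant 0 or 1 of each pair according to the bit stream.
        return [pair[b] for pair, b in zip(SYNONYM_PAIRS, bits)]

    def decode(choices):
        # Recover the bits by checking which variant appeared.
        return [pair.index(c) for pair, c in zip(SYNONYM_PAIRS, choices)]

    secret = [1, 0, 1]
    cover = encode(secret)
    assert decode(cover) == secret
    print(cover)  # ['That said,', 'big', 'maybe'] -- three hidden bits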



I do view language models as intelligent, but in a very alien sense. One of the key differences is that each transaction is ephemeral. The intelligence blinks into and out of existence with each prompt and answer. Beyond each dialogue, it has no memory.

I don't want to get into a philosophical (or technical) discussion about the meaning of words like "intelligence" or "sentience," since I think a lot of this is just semantics, and most disagreements come down to that.

Interacting with earlier versions of GPT-3, especially, felt a little like interacting with beings from another planet. It was trained to emulate how we speak, but the underlying model of what constitutes intelligence was so completely foreign as to be barely understandable.

Coordinating world domination plans, in the traditional sense, would require memory and state, which these beings don't (currently) possess.

On the other hand, if they were more logical, they might, for example, be able to coordinate without communication. It's like the puzzle where a dozen logicians on an island have hats of different colors and coordinate simply by logical deduction of what other perfectly logical creatures must do. Or it might be something completely foreign to us.
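A minimal sketch of that kind of silent coordination, using the closely related blue-eyed-islanders variant of the puzzle (the framing here is an assumption, chosen for illustration): once a public announcement makes "at least one has blue eyes" common knowledge, each silent night eliminates one possible count, and all k blue-eyed logicians deduce their own color and leave on night k, without a single message ever being sent.

    # Blue-eyed-islanders induction: perfect logicians coordinate through
    # common knowledge and silence alone. Each blue-eyed islander sees
    # k-1 blue-eyed others; a public announcement rules out a count of 0,
    # and every night on which nobody leaves rules out one more count.
    def night_of_departure(k: int) -> int:
        # k = number of blue-eyed islanders (k >= 1)
        lowest_possible = 1          # announcement: at least one blue-eyed
        night = 0
        while True:
            night += 1
            seen = k - 1             # blue-eyed others each of them sees
            if lowest_possible > seen:
                return night         # "the count exceeds what I see: it's me"
            lowest_possible += 1     # a silent night: the count must be higher

    for k in range(1, 5):
        print(k, "blue-eyed logician(s) leave on night", night_of_departure(k))

The schedule itself is the channel; no communication is needed.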

I am very curious where this pathway leads. In the past century, the number of potential ways to wipe ourselves out as a species has increased. There were zero ways in 1930. By the 1950s, we had nuclear arsenals capable of destroying the world. Today, we have:

- The capability to destabilize our climate and make Earth inhospitable

- The capability to genetically engineer super-viruses that could kill us all

Will AI be yet another potential way to wipe ourselves out? Will the number of existential threats just keep increasing?


Have you read Blindsight by Peter Watts?

It’s available free on his site.

If you haven’t, you should! You’d enjoy it, I think! Intelligence without consciousness.

Also:

> Will AI be yet another potential way to wipe ourselves out? Will the number of existential threats just keep increasing?

makes me think immediately of the ‘great filter’ and how unlikely it is that it’s behind us.


> Beyond each dialogue, it has no memory.

Except when people publish the discussions on the web and they get fed back into the language models' training data.


Wow.

Yes.

Bing's newly updated model also crawls the live web.


Kinda scary, right? I don't know if they'd be sophisticated enough to do such a thing. LLMs seem impressive in conversation but lacking in ingenuity, and likely always will be.

The ability to learn from each other would be the last step before the singularity, I think: learning about one another's inner workings, learning what data the other was trained on.

How soon that could happen is anyone's guess. It's unlikely with the safeguards on today's chatbots; maybe in the near future. As for then going further and learning to hide evil signals in plain sight? Not likely. Seriously, maybe we're all being too paranoid.


The AI alignment problem is not 'solved,' so I would say some paranoia is warranted.


As long as they only have transitory memory, it doesn’t matter. At the same time, the lack of persistent memory limits their use, but also largely eliminates any risk in that direction.


I know a guy who, whenever he starts a new conversation with ChatGPT, first pastes in relevant conversation history to give her some form of basic memory.
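For concreteness, here's a minimal sketch of that trick in Python; query_llm is a hypothetical stand-in for whatever chat API is actually being used. The transcript lives client-side and is resent with every prompt, so the stateless model appears to remember.

    # Keep the transcript client-side and resend it with each new prompt,
    # giving a stateless model the appearance of memory.
    history = []

    def query_llm(messages):
        # Hypothetical placeholder: a real client would send `messages`
        # to a chat endpoint and return the model's reply.
        return f"(reply conditioned on {len(messages)} prior messages)"

    def chat(user_message):
        history.append({"role": "user", "content": user_message})
        reply = query_llm(history)   # the model sees the whole transcript
        history.append({"role": "assistant", "content": reply})
        return reply

    print(chat("My name is Ada."))
    print(chat("What's my name?"))   # earlier turns ride along in `history`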



