Maybe they're just electrical signals in the brain, but that's not how we experience them.
A CPU cannot experience the electrical signals that flow through it. Unless you believe rocks are sentient.
I could just as easily turn your argument around and argue that torturing you is perfectly OK because all I'd be doing is creating electrical signals in your brain.
"I could just as easily turn your argument around and argue that torturing you is perfectly OK because all I'd be doing is creating electrical signals in your brain."
That's actually (almost) the point of my comment - if, in two logical steps, you can get from your argument to "torture is alright", perhaps you should reconsider your argument.
Perhaps you should also consider why (a large part of) humanity considers torture to be a Bad Thing?
What is "pleasure" and "pain"? Ultimately, they're just electrical signals triggering the release of certain chemicals in the brain, with subsequent alterations to how the brain processes other stimuli.
My argument, then, is that if all that pain is to you is a set of electrical signals, why should one avoid causing it? We don't usually have moral qualms about generating electrical signals.
I can't show that a silicon life-form lacks subjective experience any more than I can prove that a human possesses it. What I know is that I have subjective experiences and that there are certain things, like pain and death, that I really want to avoid. Since other human beings are very similar to me and, in the presence of painful stimuli, exhibit behaviors similar to my own, I assume that they also possess such experiences. I refrain from causing them unpleasant experiences because doing so would tend to create a world in which such experiences are more likely in general, and thus make it more likely that I would in turn undergo such experiences.
Consider this thought experiment.
Suppose that in the relatively near future we create a machine with intelligence equal to or greater than that of humans. Suppose also, for the sake of argument, that, for whatever reason, these machines do not have subjective experiences, but behave as though they do in order to facilitate human interaction. Would such a machine consider killing a human to be wrong? Why should it? The human claims that it fears death and wishes to avoid it. The machine also claims to fear death but in reality feels no such fear. If the machine has been told by humans that there is no difference between it and them, because both can be reduced to sets of electrical signals, then it could only assume that a human, despite its behavior, experiences no actual fear of death, and therefore that killing a human is not wrong. So it seems to me that there is a real danger in assuming that any sufficiently intelligent machine is sentient.
Is there a meaningful difference between a machine that claims it fears death but in reality doesn't (for some definition of "fears"), and something that actually 'fears' death?
How can you tell whether something 'fears' death or not?
But that is how we experience them - though perhaps not with electrical signals, but with chemical ones instead, e.g. dopamine. We're programmed to enjoy dopamine.
Humans experienced pleasure long before they knew anything about dopamine.
You seem to be rejecting the reality of subjective experience by equating it to its objectively observable physical correlates.
Moral judgements are not (or at least historically have not been) based on such observables, but rather on the assumption that that which causes oneself pleasure or pain typically does so for others as well.
You can put someone in an fMRI and find that certain brain signals correlate with that person claiming to feel pain (i.e. correlate brain state with behavior), but you can never establish a causal relationship between those signals and the subjective experience of pain, because subjective experience is by definition not objectively observable.
These sorts of philosophical arguments have been going on for a long time with little practical effect. However, there is a danger that if machines do become sapient and surpass us in intelligence, they will conclude, based on such arguments, that human beings are merely an inferior form of intelligence which is not worth preserving. If humans themselves deny the existence of subjective experience, then why should machines believe in such a thing?
Computers also calculated pi long before we used them to design circuits.
"However there is a danger that if machines do become sapient and surpass us in intelligence that based on such arguments they will conclude that human beings are merely an inferior form of intelligence which is not worth preserving"
If I hit a rock and it cried in pain, I would believe it to be sentient.
Sentience does not come from the raw materials -- we're just a bunch of carbon, hydrogen, nitrogen, etc. ourselves -- but from the way those materials are arranged to process information.
There is not much of an objective difference between the pain reactions of, e.g., insects and the robots we have today. So I would say yes, those robots experience a reaction similar to an insect's. But then again, I don't think insects have a sufficiently advanced neural system to suffer, so it's okay to kill the critters.