
Then you probably don't know what 'consciousness' means, because none of these processes give an explanation as to why or how qualia arise.

Also, "trial-and-error stateful processing" is so vague and broad that I don't feel that it meaningfully describes anything more than 'computation'.



There's no evidence that qualia are anything more than clusters within some vector space which is useful to describe sensory inputs. They arise because they are useful for making sense of the external world.

I know that philosophers like to believe in some mystical bullshit about a whole different category of things, but if you believe in evolution, things are simple. An animal receives a sensory input and tries to use it to improve its survival chances. It usually makes sense to tell signal from noise by clustering and transforming the input. You cannot derive any useful information from a single vibration of air, but if you transform from the time domain to the frequency domain, you can observe that certain frequencies carry information about the outside world, such as the presence of predators. Stack several layers of such transformations and you arrive at qualia.

If you study signal processing and NNs, these things become obvious. Give a signal processing guy the task of detecting human speech, for example, and he will filter frequencies, then estimate loudness and compare it to background noise. If you train an NN, it will do the same -- you will likely get a neuron which represents "loudness in human speech". Same if you train a computational process using an evolutionary process: loudness in a specific frequency range carries useful information, so no matter what process you use, you will end up with some representation of this quality.
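To make this concrete, here's a minimal sketch of the hand-engineered detector described above (assuming NumPy/SciPy; the function name, band edges and thresholds are invented for illustration):

  import numpy as np
  from scipy.signal import butter, sosfilt

  def speech_present(signal, sample_rate, noise_floor_db=-40.0):
      # Keep roughly the 300-3400 Hz band where most speech energy lives.
      sos = butter(4, [300, 3400], btype="bandpass", fs=sample_rate, output="sos")
      band = sosfilt(sos, signal)
      # Loudness estimate: RMS energy of the filtered band, in dB.
      rms = np.sqrt(np.mean(band ** 2))
      loudness_db = 20 * np.log10(rms + 1e-12)
      # Declare "speech" when band loudness clears the noise floor by a margin.
      return loudness_db > noise_floor_db + 10.0

A trained NN ends up computing something functionally similar, just with learned rather than hand-picked filters.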

Same with "color red" or whatever other qualia you can think of -- it's just a region in a space which arises from useful transformations of the incoming signals which maximize useful information.


>There's no evidence that qualia are anything more than clusters within some vector space which is useful to describe sensory inputs.

I'm sorry, but this just betrays that you have no clue what qualia refers to. It's about the _subjective experience_ of interpreting data. Why don't we have 'smell experience' for visual data and a 'visual experience' for smell data? Why aren't our interpretations of red and blue switched (so that we interpret red visual data as a blue visual experience and blue visual data as a red visual experience)? Hell, why do we need visual experience to act on it at all?

Saying "they arose because they're useful' or trying to reduce it down to cluster analysis does absolutely nothing to explain qualia. There is absolutely no evidence that a NN neuron (or set of them) that can detect a certain trait such as loudness also has subjective experience. Frankly, we essentially know absolutely nothing about subjective experiences except that we ourselves have them.

>I know that philosophers like to believe in some mystical bullshit about a whole different category of things, but if you believe in evolution, things are simple

I genuinely recommend reflecting on this statement. You're essentially saying that you can solve, in a single paragraph and using only knowledge that a CS undergrad might have (!), a problem that professional philosophers have grappled with for decades. Do you really think that is more likely than you simply not understanding the problem?


> Why don't we have 'smell experience' for visual data and a 'visual experience' for smell data?

Many people do. It's called synesthesia.

But usually, vectors which have different meanings are not mixed together. While you can describe a sound with a vector and a picture with a vector, adding these vectors makes no sense; it's not useful. So we just don't have pathways which mix data of different kinds. Except that sometimes there is mixing on some level, so one can talk about a blue sound or a red word, for example.

> There is absolutely no evidence that an NN neuron (or set of them) that can detect a certain trait such as loudness also has subjective experience.

Not true. If we observe a correlation between a human's interpretation of a signal and an ANN neuron, then we can say that this neuron _represents_ the quality the person can talk about. That is, there's a correspondence between a neuron and the experience a human has. One implies the other, so they must represent the same thing.
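To sketch what I mean (with made-up numbers): if a unit's activation correlates with human reports of a quality, we call the unit a representation of that quality:

  import numpy as np

  # Hypothetical per-stimulus data: one hidden unit's activation, and
  # whether a human listener reported the same clip as "loud".
  unit_activation = np.array([0.10, 0.92, 0.85, 0.15, 0.95, 0.05])
  human_says_loud = np.array([0, 1, 1, 0, 1, 0])

  r = np.corrcoef(unit_activation, human_says_loud)[0, 1]
  print(f"unit/report correlation: {r:.2f}")  # near 1.0 here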

> You're essentially saying that you can solve, in a single paragraph and using only knowledge that a CS undergrad might have (!), a problem that professional philosophers have grappled with for decades.

I do. What I'm saying is that professional philosophers specialize in creating various fictional concepts which do not exist in the real world and in describing the problems this stuff poses. So the thing is, there's no real problem with qualia -- it's a fictional concept.

Suppose I tell you that I've been working on the problem of relating foobla to brantoblaze for decades. If I cannot describe what these things correspond to in the real world, it's a fictional problem which exists only within my brain. It cannot be solved, but it also doesn't need to be solved.

Another example: philosophers have been thinking about the nature of language for decades. Wittgenstein came up with the revolutionary idea that language is used to communicate useful information between individuals. Any CS undergrad can design a system and a language which communicates useful information between instances of the system: he doesn't need decades of philosophical thought because he already understands concepts such as 'state' and 'information', which are sufficient.
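A minimal sketch of such a "language" (the message format and field names are invented for illustration): two instances share observations through an agreed vocabulary, and the receiver's state -- its information about the world -- changes as a result:

  import json

  def encode(state):
      # The "language": a fixed vocabulary both sides understand.
      return json.dumps({"food_seen": state["food_seen"],
                         "predator_seen": state["predator_seen"]})

  def receive(message, state):
      # The receiver folds the sender's observations into its own state.
      info = json.loads(message)
      state["food_seen"] = state["food_seen"] or info["food_seen"]
      state["predator_seen"] = state["predator_seen"] or info["predator_seen"]
      return state

  a = {"food_seen": True, "predator_seen": False}
  b = {"food_seen": False, "predator_seen": False}
  b = receive(encode(a), b)  # b now "knows" what a saw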

> Do you really think that is more likely than you simply not understanding the problem?

It's far more likely that philosophers do not understand signal processing, vector spaces, ANNs and so on.


> Why aren't our interpretations of red and blue switched (so that we interpret red visual data as a blue visual experience and blue visual data as a red visual experience)?

What would that even mean? Is it based on some underlying assumption that we have a hardwired, a priori subjective experience of red and blue, and only later associate these colors with blood, roses and ripe apples, or sky and water, accordingly?

That might be just an illusion and our color perception is learned, so blue is just the color of sky and water, and nothing more.

> Why don't we have 'smell experience' for visual data and a 'visual experience' for smell data?

Some people do (synesthesia), but generally the lack of such mixed experiences can be explained by different parts of the brain getting different inputs, and by the impossibility of, e.g., an auditory stimulus generating the same response that seeing the color red would.


>Is it based on some underlying assumption that we have a hardwired, a priori subjective experience of red and blue, and only later associate these colors with blood, roses and ripe apples, or sky and water, accordingly?

The point is that we don't know. We know that we have a subjective experience of an image, where 'red' parts correspond to visual red light stimuli, but it's impossible for me as an individual to know what your subjective experience of an image looks like. If someone 'smelled' subjective images it would be impossible to tell the difference as long as they still 'smell' red light as red.

>That might be just an illusion and our color perception is learned, so blue is just the color of sky and water, and nothing more.

But where does that 'blue' experience come from?

>Some people do (synesthesia), but generally the lack of such mixed experiences can be explained by different parts of the brain getting different inputs, and by the impossibility of, e.g., an auditory stimulus generating the same response that seeing the color red would.

Sure, I'm just asking why there are different experiences to begin with. Why do we experience smell, sound, etc. the way we do?


Sure. My answer to that is that subjective experiences are learned in a chaotic process, so the neurons representing the same color might sit in different parts of our visual cortices. At the same time, we learn patterns that exist in the real world, so our experiences will most probably create similar associations, e.g. associating mint taste with green or strawberry smell with red.

> But where does that 'blue' experience come from?

It is learned by having interactions with blue things and trying to find patterns, common qualities among them, and such an association of possible outcomes creates a subjective concept, a quale of "blueness".

> Sure, I'm just asking why there are different experiences to begin with.

Here I don't have good explanations. Maybe it's a matter of different wiring in our brains; maybe it comes from the fact that you can predict how things are going to look to your left eye by looking at them with your right eye, or how things are going to feel touching your face after touching them with your hand - allowing you to create abstractions between such perceptions that are not transferable across senses.


> but it's impossible for me as an individual to know what your subjective experience of an image looks like

This smells like a god of the gaps or argument from ignorance type of argument to me. And it's not impossible, just difficult to execute. All we would have to do is digitize your brain and swap out the responsible part, then you can form a memory of that, swap the original back in and do a comparison. Or something like that. All handwavy science-fiction of course, since we currently lack detailed knowledge of how this works, but that does not imply there's anything special about it, only that one human might be processing the data slightly differently than another human. The same way that one human may assign a different most-common-meaning to an ambiguous word than another.


>This smells like a god of the gaps or argument from ignorance type of argument to me

huh? Are you saying that 'we don't know how subjective experience works' is an argument from ignorance?

>All we would have to do is digitize your brain and swap out the responsible part, then you can form a memory of that, swap the original back in and do a comparison.

I'm not sure what that is supposed to mean. What is 'the responsible part' and what would swapping it out achieve? I'm still only going to have my subjective experience.


> huh? Are you saying that 'we don't know how subjective experience works' is an argument from ignorance?

I'm saying that the lack of knowledge does not imply that there's anything special about it that wouldn't also arise naturally in an NN approaching even animal intelligence.

> What is 'the responsible part' and what would swapping it out achieve? I'm still only going to have my subjective experience.

I assume that "subjective experience" has some observable consequences, of which you can form memories. Being able to swap out parts of a brain will allow you to have a different subjective experience and then compare them. It is an experimental tool. I don't know what you will observe since that experiment has not been performed.


>I'm saying that the lack of knowledge does not imply that there's anything special about it that wouldn't also arise naturally in an NN approaching even animal intelligence.

That isn't at all what I've said. I'm saying that 'qualia' exist and that we have no clue how they arise. Maybe they arise from complicated enough systems, maybe they don't. Hell, maybe panpsychists are right and even a rock has some sort of consciousness. My issue is with people who are confident that a big enough NN necessarily has consciousness.

>I assume that "subjective experience" has some observable consequences, of which you can form memories. Being able to swap out parts of a brain will allow you to have a different subjective experience and then compare them. It is an experimental tool. I don't know what you will observe since that experiment has not been performed.

Unless you presuppose that there is some part that completely determines subjective experience (I don't think it'd even be possible to identify such a part if it existed), I don't see how that would work. Yes, you can swap out a part and see that your subjective experience changes, but this tells you nothing about the subjective experience of others.


> I'm saying that 'qualia' exist

If by qualia you mean slight differences in information processing in human brains, then sure. If you mean anything more than that I would like a) a better definition than the one I have given b) some observational evidence for its existence.

> My issue is with people who are confident that a big enough NN necessarily has consciousness.

Not necessarily, just potentially. After all, there will be many inefficient/barely-better-than-previous/outright defective big NNs on the path to AGI.

If you're asking whether an intelligent NN will automatically be conscious then it depends on what we mean by "intelligent" and "conscious". A mathematical theorem prover may not need many facilities that a human mind has even though it still has to find many highly abstract and novel approaches to do its work. On the other hand an agent interacting with the physical world and other humans will probably benefit from many of the same principles and the mix of them is what we call consciousness. One problem with "consciousness" is that it's such an overloaded term. I recommend decomposing it into smaller features that we care about and then we can talk about whether another system has them.

> Hell, maybe panpsychists are right and even a rock has some sort of consciousness.

If we twist words far enough then of course they do. They are following the laws of physics, after all, which is information processing, going from one state to another. But then all physical systems do that, and it's usually not the kind of information processing we care that much about when talking about intelligences. Technically correct given the premise, but useless.

> I don't think it'd even be possible to identify such a part if it existed

We're already making the assumption that we have the technology to simulate a brain. If you have that ability, you can also implement any debugging/observational tooling you need. AI research is not blind; co-developing such tooling together with the networks is happening today. https://openai.com/blog/introducing-activation-atlases/
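As a toy sketch of what such tooling looks like (a tiny NumPy net with an invented probe; purely illustrative): unlike a brain, the network lets you record every intermediate state during a forward pass:

  import numpy as np

  rng = np.random.default_rng(0)
  W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))

  def forward(x, probe):
      h = np.tanh(x @ W1)          # hidden layer
      probe["hidden"] = h.copy()   # observational hook: dump internal state
      return h @ W2

  probe = {}
  y = forward(rng.normal(size=(1, 4)), probe)
  # probe["hidden"] now holds the intermediate representation, fully
  # inspectable from outside -- exactly the access a brain doesn't give us.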


>If by qualia you mean slight differences in information processing in human brains, then sure. If you mean anything more than that I would like a) a better definition than the one I have given b) some observational evidence for its existence.

Subjective experiences, i.e. how I actually experience sense data. There is no real, objective observational evidence and there can't be. How would you describe taste to a species of aliens that understands the processes that happen during tasting but doesn't taste anything itself? It's simply impossible. I know that I have personal, subjective experiences (the 'images I see' are not directly the sense data that I perceive), but I can only appeal to you emotionally to try to make you believe that they exist, operating under the assumption that you too must have these experiences.

>One problem with "consciousness" is that it's such an overloaded term. I recommend decomposing it into smaller features that we care about and then we can talk about whether another system has them.

This entire discussion has been about consciousness in the philosophical meaning i.e. the ability to have some form of subjective experiences.

>If we twist words far enough then of course they do. They are following the laws of physics, after all, which is information processing, going from one state to another. But then all physical systems do that, and it's usually not the kind of information processing we care that much about when talking about intelligences. Technically correct given the premise, but useless.

This isn't about twisting words; some people genuinely believe that everything is conscious, with more complex systems being more conscious.

>We're already making the assumption that we have the technology to simulate a brain. If you have that ability, you can also implement any debugging/observational tooling you need. AI research is not blind; co-developing such tooling together with the networks is happening today

The point is that it's about _subjective_ experiences.


Fish tastes like fish because the taste is a categorizing representation of that sensory input.

What you can do today is start with a feature map. We can do that with colors https://imgs.xkcd.com/blag/satfaces_map_1024.png (do you perceive this color as red?) and we can do that with smells https://jameskennedymonash.files.wordpress.com/2014/01/table... That's a fairly limited representation, but words are an incredibly low-bandwidth interface not suitable for exporting this kind of information in high fidelity, so we can't. That does not mean it's conceptually impossible. If you wanted to export subjective experience itself, then you'd need the previously mentioned debugging interface. Our brains don't have that built-in, but software does. I.e. a program can dump its entire own state and make it available to others.
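To illustrate that last point with a sketch (class and field names invented): a program's internal representation can be exported losslessly, which words can't do for a brain's:

  import json

  class Perceiver:
      def __init__(self):
          # Stand-in for an internal representation of a stimulus.
          self.features = {"redness": 0.92, "brightness": 0.40}

      def dump_state(self):
          # Export the complete internal state, not a lossy verbal summary.
          return json.dumps(self.__dict__)

  other_copy = json.loads(Perceiver().dump_state())  # exact transfer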

To me subjective experience seems to be an intermediate representation, deep between inputs and outputs, and due to the various limitations we're bad at communicating it. That doesn't mean there's anything special about it. It is a consequence of compressing inputs into smaller spaces in ways that are useful to that entity.
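For example, a sketch of that compression using plain PCA on made-up data: project high-dimensional input onto the few directions that retain the most variance, and the low-dimensional code is the "intermediate representation":

  import numpy as np

  rng = np.random.default_rng(0)
  inputs = rng.normal(size=(500, 50))      # raw high-dimensional stimuli
  centered = inputs - inputs.mean(axis=0)
  # Principal directions via SVD; keep only the top 3 components.
  _, _, vt = np.linalg.svd(centered, full_matrices=False)
  code = centered @ vt[:3].T               # the compressed inner code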

> This isn't about twisting words; some people genuinely believe that everything is conscious, with more complex systems being more conscious.

Anything that interacts with the world will have an internal, idiosyncratic representation of that interaction. Even a rock will have momentary vibrations traveling through it that carry some information about the world. One of today's NNs will have feature layers that roughly correspond to concepts that are of human interest. They're often crude approximations, but that's good enough for some use-cases. Animal brains just have more of that.

So in that sense, sure, it's a continuum. But there's nothing mysterious about it.


>Fish tastes like fish because the taste is a categorizing representation of that sensory input.

Yes, but why does fish have the taste it does? Hell, try explaining what fish tastes like without invoking similar tastes.

>What you can do today is start with a feature map. We can do that with colors https://imgs.xkcd.com/blag/satfaces_map_1024.png (do you perceive this color as red?) and we can do that with smells https://jameskennedymonash.files.wordpress.com/2014/01/table.... That's a fairly limited representation, but words are an incredibly low-bandwidth interface not suitable for exporting this kind of information in high fidelity, so we can't. That does not mean it's conceptually impossible. If you wanted to export subjective experience itself, then you'd need the previously mentioned debugging interface. Our brains don't have that built-in, but software does. I.e. a program can dump its entire own state and make it available to others.

But a feature map doesn't tell you anything about how the space itself works. If you look at that smell chart, you'll see that it uses comparisons, because it's literally impossible for us to explain what smelling is like without saying "well, it's similar to smelling x". Someone born without smell could memorize that chart, understand everything there is to know about smelling, and still not actually know what it's like to smell.

>To me subjective experience seems to be an intermediate representation, deep between inputs and outputs, and due to the various limitations we're bad at communicating it. That doesn't mean there's anything special about it. It is a consequence of compressing inputs into smaller spaces in ways that are useful to that entity.

We're not just bad at communicating it, we're bad at understanding it, because our conventional means of measuring things don't really work for subjectivity. I'm not saying it's "magical", but it's not certain that we even can potentially build tools to interact with it.


> But a feature map doesn't tell you anything about how the space itself works.

The space is what is doing the work. Of course it's vastly more complex than a simple image with a few regions painted onto it. There are only implementation details below it. The issue is that we cannot import and export them. With software that is a wholly different matter: they can be transplanted, fine-tuned, probed and so on.

> but it's not certain that we even can potentially build tools to interact with it.

I agree that this is all very speculative; we don't have the technology, and it may take a long time until we can actually inspect a human brain. But we may be able to do the same much more easily with artificial intelligences, once created.


"I know that philosophers like to believe in some mystical bullshit about a whole different category of things"

I've found that it's the people who are most contemptuous of philosophy that could benefit most from taking a few good introductory philosophy courses, so they can finally realize that an enormous part of their own (often unconscious) understanding of and attitude towards the world derives from long established philosophy.


I didn't mean to disparage philosophy, but just to point out that thinking purely in terms of abstract concepts is often unproductive.


Yeah I think we need to maintain this distinction between a scientific understanding of the world and what human beings actually are, and the abstractions we invent in order to ensure we don’t destroy ourselves as a species. I firmly believe that without the latter we are doomed, which is why I intentionally deceive myself with ideas like the existence of a singular benevolent unifying presence (often referred to as “God”) in order to shape my own behavior.


What we need psychologically to survive is not the same as what is really there. Qualia is a bullshit word like trying to say there is a difference between the numeral '1' and the word 'one'. I do not dispute the fact that we are better people when we recognize that there are intelligences/forces greater than us. Presuming that our current science is 'good enough' reminds me of the time when a prominent physicist said that 'all that remains is to add decimal points of precision to our measurements'.


>Qualia is a bullshit word like trying to say there is a difference between the numeral '1' and the word 'one'

I'm not sure what this is supposed to mean. Are you saying that sense data is the same as the subjective experience of it?


Data comes in, response is determined by the associations built up in the brain both over the subject’s own life time and all of evolution.

I don’t see the need to add some mystical third component, sounds to me like philosophers just made up something to talk about because they don’t get paid unless they have something to talk about.

From wiki:

Examples of qualia include the perceived sensation of pain of a headache, the taste of wine, as well as the redness of an evening sky.

Nothing here seems to be above and beyond a simple developed response to stimuli, same as in a mealworm.


Yes, we collect sense data, process it and then respond to it, but we also have a subjective experience. We 'see an image in our minds' as part of processing it, but why and how? Something like 'the taste of wine' isn't a response but a subjective feeling. Why do we have a subjective experience of pain instead of just reacting to it?

To be honest though, if you're actually interested in discussing this, your comment really isn't conducive to fruitful discussion (especially the part denigrating philosophers). If not, why even respond?


Regarding “the taste of wine”. I have for uninteresting reasons never gotten drunk, never trained my brain to associate positive feelings with wine or alcohol. As a result, wine for me subjectively is elaborate fruit juice. When I am given a fancy wine, I can taste the components people describe, but the subjective experience they have of the amazing taste I do not share, because I simply don’t have those associations. Similarly for beer my subjective experience is an association of disgust.

Those of us who believe neurons are all there is are simply arguing everything is just abstractions and associations, just neurons firing. What you call a subjective experience is a particular set of neurons firing as a response to stimuli. There is nothing more than that, and while you can form an abstraction in your brain that describes something more than that, this would be an example of an improperly trained neural net.


> especially the part denigrating philosophers

Quite the opposite, really. I simply refuse to put them on a pedestal by making arguments like you did saying, essentially, "philosophers have been saying this is complicated for hundreds of years, therefore it must be". Which, as others have stated, is not a logical argument but a faith-based one. It's really quite the same as my friend's HS theology class, in which a correct answer to "what evidence do we have of the existence of god" was "thousands of years of belief and study by renowned theologians", aka bullshit.


Isn’t he just saying that qualia is in the same category as God? It seems like an abstraction we created to reason about and hopefully solve some social coordination problem, but not an essential part of reality.


Because when you are sensing something, your brain tries to find associations with past experiences in your memory and builds an "image" from that, and because people have different experiences in their memories, it becomes subjective and a little different for everybody. Also, when it's something very different from your past experience, the mix of the closest memories can be weird and seemingly unrelated.



