Proof That Other People Are Conscious

Something like this…

1. Person A can’t teach person B what word W means, if A doesn’t know what W means.
2. One can’t know what ‘consciousness’ means if one is not conscious.

From 1 and 2 =>

3. Person A can’t teach person B what ‘consciousness’ means, if A isn’t conscious.

4. I learned the word ‘consciousness’ from people in the linguistic community.

From 4 and 3 =>

5. People in the linguistic community are conscious.

OK, now you know… you are conscious.
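
As an aside, for anyone who likes seeing the bare skeleton of an argument, here is a minimal sketch of its logical form in Lean. All the names here (`Person`, `Word`, `teaches`, `knows`, `conscious`, `me`) are placeholder predicates of my own choosing, not anything the argument commits us to, and the conclusion comes out existentially quantified (‘some person is conscious’), which is all the premises strictly deliver.

```lean
-- A minimal sketch of the argument's logical form. All names are
-- hypothetical placeholders, not commitments about how to model the concepts.

variable {Person Word : Type}
variable (teaches : Person → Person → Word → Prop) -- `teaches a b w`: A teaches B what word W means
variable (knows : Person → Word → Prop)            -- `knows a w`: A knows what W means
variable (conscious : Person → Prop)
variable (consciousness : Word)                    -- the word 'consciousness'
variable (me : Person)

/-- Premises 1, 2, and 4 yield the conclusion: some person is conscious. -/
theorem someone_is_conscious
    (p1 : ∀ a b w, teaches a b w → knows a w)        -- premise 1
    (p2 : ∀ a, knows a consciousness → conscious a)  -- premise 2
    (p4 : ∃ a, teaches a me consciousness) :         -- premise 4
    ∃ a, conscious a :=
  -- step 3 (the composition of p1 and p2) is inlined in the proof term
  match p4 with
  | ⟨a, h⟩ => ⟨a, p2 a (p1 a me consciousness h)⟩
```

Note that the conclusion here is ‘some person is conscious’ rather than ‘people in the linguistic community are conscious’, which matches the weakening discussed in point (a) of the comments below.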

8 thoughts on “Proof That Other People Are Conscious”

  1. OK, but what’s the argument for 2? There are all sorts of words for things, including mental states, that I can know the meaning of without having ever experienced them. I know what “phantom limb” means, despite having all my limbs, and if I lost a limb and experienced phantom limb sensations, I’d probably know right away that they were, in fact, phantom limb sensations, because I’d learned the meaning of “phantom limb” prior to having lost the limb. I know what color blindness, aphasia, etc. mean, too, and again, if I experienced them at some point in the future, would know that I was experiencing them because I’d previously learned the meanings of the words. I know what echolocation is, despite the impossibility of ever experiencing echolocation. Granted, I don’t know what it’s like to experience echolocation, but then teaching the word “consciousness” doesn’t involve teaching what it’s like to be conscious any more than teaching “phantom limb” involves teaching what it’s like to have one of those experiences, or “echolocation” involves teaching what it’s like to see with sound.

  2. Hey Chris, thanks for the comment!

    Teaching those terms (‘phantom limb’, ‘color blindness’, ‘aphasia’, etc.) might not involve teaching what it is like to have those concrete experiences, but I think their meaning involves the idea that there is something that-it-is-like to have those particular experiences. ‘Consciousness’, I think, is different in that it doesn’t refer to any particular what-it-is-likeness of an experience, but to what-it-is-likeness in general. So, we can learn that there are different what-it-is-likenesses related to different conditions even if we don’t know exactly what it is like to have those conditions, but we can’t learn about what-it-is-likeness in general without having any experience whatsoever.

    There are a few additional things I was considering:

    a) ‘People in the linguistic community’ is, I guess, too strong, because I didn’t learn the word ‘consciousness’ from all the people in the linguistic community. So maybe the conclusion should be ‘Some people in the linguistic community are conscious’.

    b) Maybe instead of 2, one can argue for something more general: a person can’t know what any word means if he/she is not conscious. (To me this seems right too, and it points to the relation between meaning and consciousness.)

    c) Meaning as use. It could be that ‘consciousness’ as used in the linguistic community doesn’t refer to any what-it-is-likeness of having an experience, but is a word which has a certain use in certain language-games, or something like that. It is imaginable that a linguistic community is then a bunch of unconscious robots which have a specific use for “consciousness”. But this, I think, can’t help the skeptic, because if he is denying (to us robots) that we have consciousness, he isn’t using ‘consciousness’ as it is used in our robot linguistic community.

    And a little aside:
    For all those terms you mentioned (‘phantom limb’, ‘color blindness’, ‘aphasia’), we can easily form a guess of what it is like to have them by comparing them to other experiences (which seems natural given the “what it is LIKE” question: it is LIKE this experience, or LIKE that experience. “Aha, now this experience is nothing like any of the previous experiences I had!”). ‘Echolocation’ might be a little different, but still, having the general idea of what-it-is-likeness, we may have the idea that the experience of echolocating has its own what-it-is-likeness.

  3. “1. Person A can’t teach person B what word W means, if A doesn’t know what W means.”

    Why not? Couldn’t I learn the meaning of the word from a dictionary? Or from a nanny-bot? I don’t see why not… all that is required is that I am able to introspect and determine that I am conscious, and be told by the dictionary that that is called ‘consciousness’. So maybe other people are just robots or hallucinations; I am conscious, and so I will know what they are talking about even though they do not. What is wrong with that?

  4. Hey Richard,

    Yeah, I think there is that issue. Let me try to see what can be saved :)

    Would you consult a dictionary written by a person who doesn’t know what the words described in it mean? It seems to me that a skeptic can’t say ‘I learned the word consciousness from the dictionary’, because his learning only makes sense if he accepts that the description was written by somebody who knew what the word means. The same would go for a nanny-bot, as long as the nanny-bot contains dictionary entries. The nanny-bot can be seen just as a way to bring the kid a definition that was written by somebody else. (Same as when we search for the definition of a word on the internet: it isn’t the internet that writes the definition; it just provides us a definition written by someone who understood the word.)

    Given this, one might argue that at least one other sentient being (the one who wrote such a definition) is/was conscious.

  5. Regarding Richard Brown’s objection:

    Suppose the nanny-bot is programmed not with dictionary entries, but with a sort of response mechanism that emits all of the right sounds involved in teaching someone to deploy the concept ‘consciousness.’ In any case where someone doesn’t know (I mean, can’t employ correctly) some concept, he has got to pick up the concept through some enumerable series of events, perhaps involving talking or ostensive pointing. Nanny-bots might conceivably be equipped to do whatever it is that they’ve got to do to teach the concept, whether by a conscious being or otherwise.

    My point is to remove the idea of a dictionary definition from the equation, to defuse your point about not wanting to trust a dictionary not written by someone who knows the meanings of the words in the dictionary. Now you might mount a similar objection to Richard and me by claiming that a similar consideration applies to the reliability of concepts taught by nanny-bots. I can see this working in two ways:

    1) A non-conscious nanny-bot can’t possibly teach a concept, because she can’t really apply it, in virtue of being, well, a robot: she can only push symbols around in her machine brain, or whatever. I take my description of the robot, as programmed to respond in the ways a normal nanny might when teaching a concept, to defeat the idea that teachability requires the teaching apparatus, where not conscious, to know the meaning of what it teaches. Still, we would not say that a human teaches someone a concept if she didn’t know the concept herself, except under very strange circumstances.

    2) A non-conscious nanny-bot, in order to be able to teach a concept, has to have been programmed by somebody, some thinking being, who does know the concept, which you brought up above. Now, as solipsism is the view under consideration, I don’t consider insane replies to be out of bounds, and am thus inclined to say: “might such a nanny-bot be born out of a random conflagration of metal particles?” Then, the robot might be some sort of automatic research machine, which ‘came’ to an ‘understanding’ of the concept to be taught without having been programmed with it. This is a little hard to imagine for the concept ‘consciousness,’ but I think it is essentially possible. Criteria for someone’s knowing the concept involve that person applying it to things which are conceivably conscious, according to the criteria for the application of the concept consciousness. We don’t make knowledge-ascriptions regarding the nanny-bot, but certainly it’s got to be possible for some robot to decide to label all and only the bipeds with blabbing mouths with some special label, say, ‘suoicsnoc,’ and then display the contents of its memory banks in such a way that the learner learns to apply a concept to the same individuals.

    This might seem odd, because it amounts to a seeming instance of concepts-from-nowhere. Well, is there a way to come by a concept besides being taught the concept? Now your objection may run: well, fine, but someone had to program the robot to ‘learn’ from its environment such that it could ‘come up’ with such a novel concept to apply to the talking hairless apes. I grant this, hesitantly; it is no less than the question, “did someone have to program us, such that we can develop new concepts in response to our environments, and go on in teaching their application to others?”

  6. Hi Jon,

    Thanks for the comment, and sorry for being late with the response.

    I think the “response mechanism” case is not much different from dictionary entries. At least if the nanny-bot is programmed by someone to give a specific response, isn’t that in principle how my computer ends up firing specific pixels on my monitor, which, after I search for a definition online, I recognize as a description of the meaning of some word?

    On the other side, it seems to me that you are right that such a nanny-bot with that specific kind of response might appear due to a random conflagration of particles. I don’t have any objection to that. I guess if such a swamp-nanny-bot came into being, it wouldn’t in fact be true that I *learned* the meaning of ‘consciousness’ from it, as ‘consciousness’ wouldn’t be a word in a conscious community with a meaning to be learned.

    There are two things I want to say here:

    1. I think this swamp-nanny-bot scenario is possible, and the situation is interesting to analyze. If there isn’t actually a linguistic community in which words have meanings, can “other people are not conscious” have meaning? Can I think that the swamp-robots scenario is possible, and still use those words to express my thoughts? Maybe there is a post in there somewhere.

    2. The ‘proof’ in the post wasn’t intended just as an argument against solipsism. I wanted to use it to lay out a few thoughts on how language and consciousness relate in an interesting way, which might shine some light on the ‘other minds’ problem.
