I will try to sketch how I think the transcendence of intentional matter (or, if you like it better, the objectivity of intentional content) can be grounded in something else. First, let me say that I don't think it can be grounded in language, because this kind of transcendence is needed for the learning of language to occur in the first place. In order to learn language we must figure out what other people mean by their words, so what the words mean (be it a concrete thing, a category, an abstract concept, etc.) must be accessible to us in order for us to learn the word.
To allow for the possibility of learning what those words mean, we must also admit beforehand the possibility that multiple people can access the same thing which is the meaning of the word (the signified).
So, the transcendence of intentional matter must be grounded in the public accessibility of intentional matter; it can't be grounded in language itself, but is a necessary condition for language.
From a phenomenological standpoint, we don't have to search very far for this public accessibility of intentional content. If we reflect on our experiences, we experience ourselves as a subject in the world, in fact a subject which exists along with other subjects in the same world. The things we see around us are there, publicly accessible in the form they are, and their being the way they are is not dependent on us.
So, if we are looking at a strawberry, it is there in the world, in front of us; it has that specific color, shape, taste and smell. And none of those is seen as belonging to something inside our head. My head is <here>, and it is the strawberry which is of that form, of that color, and with that particular taste.
Because the strawberry is seen as existing in the world and as having all those qualities by itself, it is seen as publicly accessible along with its shape, color, taste, etc. It can be accessed by me and by other subjects in the world through seeing, tasting, touching and so on. Its color, taste and shape are in this way seen as independent of my sensing them.
This accessibility covers more than direct acquaintance with the shape, color and other sensible qualities of a thing; we are also aware of the changes things undergo in the world around us, and of the possibilities of how things can change.
Allowing that kind of awareness to other people (and animals) is what the problem of other minds is about: allowing that other people notice things, notice changes, and are aware of the possibilities of those changes. In this way other minds are different from the other things we observe around us: while mindless things simply undergo changes, creatures with minds are additionally aware of those changes and of the possibilities of how things may change. Our awareness of this awareness constitutes our awareness of other minds.
But how do we become aware of other minds? For the "mindness" to be accessible it must also be placed in the world, and one possibility that comes to mind is that we observe goal-directed behavior: other people (and animals) act upon things, accomplishing goals. Their acts are of an intentional nature; they show awareness of the things and their changes, and awareness of the possibilities, and in their acting they choose to bring about one of those possibilities rather than another.
Of course, with the technological know-how we have at this historical moment, it is easy for us to imagine a machine which might show the same kind of behavior and still be "mindless". Does that mean that observing intentional acting is not enough for grounding awareness of other minds? I would say no, as imagining a mindless machine with goal-directed behavior is an imagining of a more abstract type. In our direct awareness of the world, intentional acts are necessarily first seen as connected to awareness of things, their changes, and the possibilities of change. The possibility of a mindless machine which merely undergoes changes, but which looks as if it is acting intentionally (without being aware), can only come later in our knowledge. So to say, we don't need theories in order to see a creature as having a mind (e.g. Folk Psychology as a Theory); awareness and intentional behavior are transparent in our being-along with others. In fact, we need a theory in order to see a creature who appears to act intentionally as not having a mind.
I use "awareness" here and don't qualify it specifically as "conscious awareness", as I think that this distinction between conscious and some ordinary awareness is a product of a confused way of looking at things. Namely, it is only when we give primacy to the reductionist view of the natural sciences and classify ourselves as just another type of machine that we come around to a need to distinguish ourselves from other machines. Having abstracted from the distinction which is already there in our living in the world (namely the distinction between creatures aware of things, changes and possibilities, and mechanical things which merely undergo changes), we try to reintroduce the distinction in a "mechanical" way by adding another abstract notion, "consciousness", to the picture of the mechanical body (the human body which mechanically undergoes changes).
Some connected posts:
Qualia and Natural Science
Two things about colors worth considering (the first thing)
Intra-Subjective vs. Inter-Subjective Transcendence