I have noted many times that, in my view, everyday concepts are incommensurable with physical concepts, and that it isn’t possible to reduce the former to the latter.
But sometimes (probably professional deformation) I think about what it would take for a machine/computer program (be it a “normal” program or a neural network) to implement concepts and have some kind of semantics, and how the workings of such a machine might relate to our own thinking. So, here are some of my thoughts on this.
A Simple Picture
Start with a sensory input which can receive data of a predetermined form. For example, we can think of a matrix of artificial sensory neurons, each with the possibility of being excited and giving out a signal of some kind.
The signals from the input then go into some processing part of the machine, together with the signals from whatever previous processing there was. The possible concepts are then limited by the capacity of the sensory input and by whatever capacity there is for combining this input with the previous state.
In this picture a simple “learning entity” with two off/on sensory neurons (A and B) in its “eye” might be able to learn concepts like (1) AonBon, (2) AoffBoff, (3) AonBoff, (4) AoffBon, (5) A=B, (6) A<>B. If additionally the processing part has the possibility of memory, we would have the possibility of learning a bunch of new concepts that depend on how the concepts (1)–(6) change over time, and also new, more abstract concepts like T1=T2, T1<>T2, etc.
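A minimal sketch of this entity in Python, treating each concept as a predicate over the current input (and, for the memory-based temporal concepts, over the previous and current input). The names and the three-step input stream are my own illustrative choices:

```python
# The six concepts from the example, as predicates over the two
# off/on sensory neurons A and B.
concepts = {
    "AonBon":   lambda a, b: a and b,          # (1)
    "AoffBoff": lambda a, b: not a and not b,  # (2)
    "AonBoff":  lambda a, b: a and not b,      # (3)
    "AoffBon":  lambda a, b: not a and b,      # (4)
    "A=B":      lambda a, b: a == b,           # (5) abstract
    "A<>B":     lambda a, b: a != b,           # (6) abstract
}

# With memory (the previous input kept around), temporal concepts
# like T1=T2 become possible: predicates over (previous, current).
temporal = {
    "T1=T2":  lambda prev, cur: prev == cur,
    "T1<>T2": lambda prev, cur: prev != cur,
}

# A hypothetical input stream seen by the "eye".
stream = [(True, True), (True, True), (False, True)]
for prev, cur in zip(stream, stream[1:]):
    active = [name for name, p in concepts.items() if p(*cur)]
    change = [name for name, p in temporal.items() if p(prev, cur)]
    print(cur, active, change)
```

Each tick, the entity “has” exactly the concepts whose predicates fire on the current (and remembered) input; nothing here implies the system means anything by them.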
The possible concepts in this picture are limited and defined by the structure of the sensory input and by the capacity of the “processing part” to combine that input in different ways. In the example this results in combinatory concepts (like 1–4), and on top of that combination abstract concepts become possible (like 5 and 6). We can also note that we can talk about an “is” relation between some of the concepts: for example, (1) and (2) are (5), and (3) and (4) are (6).
In the example there are just two sensory input neurons, each with a simple off/on state. If we imagine how many input neurons we actually have (e.g. in the eye), and how many states each can take (not a simple off/on), it is clear that there are a lot of combinations to account for, and also a lot of abstractions that can be taken out of them. Also, it is not clear for how long the sensory information is retained (for example in some kind of buffer), and thus how many additional combinations one can get.
In the example there is not much semantics to this information, but in the case of an eye, for example, the neurons also carry semantic information in the sense that neuron A is closer to neuron B than to neuron C; that neuron A is “more intense” than neuron B; combinations of all this; additionally temporal relations of all kinds; abstractions on all of these; and so on.
BTW, when I say semantics, I’m talking about sensory semantics, which is based on the morphological information from the sensory input that can be used in the “further processing”. For example, every neuron in a two-dimensional matrix of neurons is characterized by its “position”; two neurons can be “more or less close” to one another; a neuron can be “inside” or “outside” a set of other sensory neurons which form some “closed area”; and so on. It should be clear, however, that the existence of sensory semantics (i.e. information which is based on the morphology of the sensory input, and not on the contingent information it transmits at a certain moment) doesn’t necessarily imply that there will be some “meaning” in such a system in the sense we have in us. So, in such a system there is nothing problematic about forming a concept of “circle” (e.g. by differentiating the dx/dy movement of a sensory motor which follows a form across activated neighboring neurons; think of our eyes following a form, and there being information about how the eyes were moving), “point within a circle”, “big/small circle”, and so on. However, those words refer to the sensory semantics, and for all we know it might be only us who are able to relate that sensory semantics to the meaning of our words “circle”, “point within a circle”, “big/small”, etc. That goes for every word I put in quotes within this post.
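A rough Python sketch of such morphological facts, under my own simplifying assumptions: positions are (row, col) pairs, “inside” is just membership in a given area, and “circle-like” is read off a sensory motor’s (dx, dy) trace by checking only that the trace closes and turns consistently in one sense. This is a crude stand-in, not a real shape detector:

```python
import math

# A hypothetical 4x4 "eye": each sensory neuron is identified by its
# fixed (row, col) position -- morphological information that exists
# regardless of what the neurons currently transmit.
GRID = [(r, c) for r in range(4) for c in range(4)]

def closeness(n1, n2):
    """Two neurons can be "more or less close" to one another."""
    return math.dist(n1, n2)

def inside(neuron, closed_area):
    """A neuron can be "inside" a set of neurons forming a closed
    area.  Here the area is simply given as the set of positions it
    encloses -- a stand-in for a real topological test."""
    return neuron in closed_area

def is_circle_like(steps):
    """"Circle" from a motor trace: the (dx, dy) steps return to the
    start (closed) and keep turning in one sense (convex)."""
    closed = (sum(dx for dx, _ in steps) == 0 and
              sum(dy for _, dy in steps) == 0)
    # cross product of consecutive steps: same sign = consistent turn
    crosses = [s1[0]*s2[1] - s1[1]*s2[0]
               for s1, s2 in zip(steps, steps[1:] + steps[:1])]
    turning = all(c >= 0 for c in crosses) or all(c <= 0 for c in crosses)
    return closed and turning

square_trace = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # closed, turns one way
zigzag_trace = [(1, 0), (0, 1), (1, 0), (0, -1)]   # does not close
print(is_circle_like(square_trace))  # True
print(is_circle_like(zigzag_trace))  # False
```

Note that nothing here depends on what the neurons are currently reporting about the “external” world; the predicates work purely on the morphology of positions and movements.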
We Can Have “A Priori Synthetic Judgments” in Such a Model
I said that in this picture one can talk about an “is” relation between concepts. I should also say, repeating the note in the previous paragraph, that it isn’t clear whether our “is” is the “is” of such a system. We can see that there is something which we might understand as is, but whether there is any meaning in the machine, any awareness of the relation between the concepts, and so on: that doesn’t follow at all.
Besides “is”, what makes this kind of system compatible with the way we learn concepts is this: while it is necessary to have some input from the neurons in order to form concepts, concepts are possible which are based on the morphological information, and which don’t depend on information about the “external” world. Such is the case for the mentioned concept of “circle”, or the mentioned concept of “inside/outside of a shape”, and so on. Those are based on the morphological semantics.
So, we can have some kind of a priori synthetic judgments in this picture, as far as we can put to work some kind of reverse-activation process, call it “imagination”, in which a concept can somehow activate the possible input space from which it can be abstracted, and then the machine can figure out that some other concept will also be activated in that case. (Of course, how far the reverse-activation process goes is another issue; it might go just one level back, or very close to the sensory input. One might imagine “fixing in the imagination” the position of the “circle”, so that one can “imagine a circle in a specific place”.)
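For the tiny two-neuron entity, this reverse-activation process can be sketched directly: “imagination” enumerates the inputs a concept could have been abstracted from, and the judgment checks the other concept against every imagined input, with no external input involved. The function names are mine:

```python
from itertools import product

# The two-neuron input space from the earlier example.
INPUT_SPACE = list(product([False, True], repeat=2))

def imagine(concept):
    """Reverse activation ("imagination"): recover the possible
    inputs from which the concept could have been abstracted."""
    return [inp for inp in INPUT_SPACE if concept(*inp)]

def a_priori(concept, other):
    """An "a priori synthetic judgment": with no external input,
    check that every imagined input for one concept also
    activates the other."""
    return all(other(*inp) for inp in imagine(concept))

a_on_b_on   = lambda a, b: a and b   # concept (1)
a_equals_b  = lambda a, b: a == b    # concept (5)
a_differs_b = lambda a, b: a != b    # concept (6)

print(a_priori(a_on_b_on, a_equals_b))   # True: (1) is (5)
print(a_priori(a_on_b_on, a_differs_b))  # False: (1) is not (6)
```

Here the exhaustive enumeration is only feasible because the input space is tiny; for a realistic sensory matrix, “imagination” would have to be some much more selective generative process.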
Cross-Modal Concepts
What we have imagined thus far is a two-dimensional matrix of sensors. If we compare this with “eyes”, we can also imagine different sensors: “ears”, “skin”, and so on. Each of these can have its own semantics, but if we want to compare this system to us, we need cross-modal semantics, a semantics which will “integrate” eyes, skin, ears, and so on into one. We should be able to say that a voice comes “from the left”, and it should be the same concept “left” as in the visual information, etc. To do that, we need to imagine that there is “integration” of the semantics from the different sensory inputs, which means that one set of abstractions “works” upon information which comes from different sensory modalities.
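One way to picture that integration, under the strong simplifying assumption that every modality already delivers events in one shared spatial coordinate system: a single abstraction “left” applies regardless of which sense the event came from. The event format is purely my own invention:

```python
def left(event):
    """One concept "left", applied regardless of sensory modality,
    assuming events already share one spatial coordinate system."""
    return event["x"] < 0

visual_event   = {"modality": "eye",  "x": -2.0}  # shape seen on the left
auditory_event = {"modality": "ear",  "x": -1.5}  # voice heard from the left
touch_event    = {"modality": "skin", "x": 3.0}   # touch on the right side

for e in (visual_event, auditory_event, touch_event):
    print(e["modality"], "-> left" if left(e) else "-> not left")
```

Of course, the hard part that this sketch hides is exactly the integration step: how separate sensory matrices come to report into one coordinate system in the first place.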
One interesting thing to think about is that at a sufficient level of abstraction we can have a central “less/more” comparison semantics. In that case, by learning “more/less” once, we would be able to use it across all kinds of different concepts.
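The point can be made almost trivially in code: one comparator, learned once, reused over magnitudes coming from entirely different concepts (the particular magnitudes below are made up):

```python
def more(x, y):
    """A single "more" abstraction, applied to any pair of
    magnitudes, whatever concepts they belong to."""
    return x > y

brightness_a, brightness_b = 0.8, 0.3   # visual magnitudes
loudness_a, loudness_b     = 40, 65     # auditory magnitudes

print(more(brightness_a, brightness_b))  # True: a is brighter
print(more(loudness_a, loudness_b))      # False: a is quieter
```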
One can also speak about “metaphors” in this model. Because in the hierarchy of abstractions the same abstractions might appear in different things (that is, the same abstract structure might appear), together with the backward activation mentioned in the case of “synthetic a priori judgments”, plus some further abstraction (ignoring some things about the concept), one can imagine “getting” from one concept to another more or less easily, depending on the similarity of the abstractions which constitute the concepts.
Connected to this, and to the cross-modal working of some of the concepts, one can speak about e.g. the “time is length in space” metaphor, by pointing out that there is a cross-modal abstraction which is applied in both cases. Of course, that is not the only way to “implement” it.