At This is the Name of This Blog, Trent is asking whether there is such a thing as a conscious inference. He reports that he can't notice himself doing anything in the cases that would usually be called "conscious inferences", besides maybe wondering about the premise and then becoming aware of the conclusion.
Trent seems to connect this to a theory on which a separate cognitive module does the inference when we wonder, and returns the result to us, as if on a screen.
I left a comment there saying that we are actually doing something (at least in the cases where we are not doing the inference mechanically): we are understanding the premise.
I would further argue that it is this understanding (or comprehension) that we in fact call "conscious inference", and not some separate process.
There are two arguments I want to give for this:
I want to point out how we sometimes use "what it means" and "understand what it means". Often we use these phrases to connect the inference B with the original statement A. So we say "A means B", and this fits with my position: we can equate a person's failing to draw the inference B with their failing to understand what A means. I think that in fact we are more likely to say "he doesn't understand what it means" than "he fails to do the inferences". It seems to me that this second way of talking might be connected to the mistaken assumption that our mind works like a formal logical machine.
In the cases described as "consciously doing the inferences", we understand the necessity of the inference, or how the conclusion is related to the assumption. If the work were done by some separate module, and we were merely handed a ready-made result, it wouldn't matter to us whether we got this result or that one; all we would have is blind faith in whatever is returning the results. But that is not the case. When I understand that (A v B) & ~A means B, it is because I fully understand what is going on, not because I'm dependent on some cognitive module.
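Incidentally, the inference pattern (A v B) & ~A, therefore B, is the one logicians call disjunctive syllogism, and its validity *can* be checked purely mechanically, by enumerating truth values. Here is a minimal Python sketch of such a brute-force check (just to illustrate what a blind "module" procedure would look like, in contrast to understanding; the function names are mine):

```python
from itertools import product

def entails(premise, conclusion):
    """True if the conclusion holds in every truth-value
    assignment in which the premise holds."""
    return all(
        conclusion(a, b)
        for a, b in product([True, False], repeat=2)
        if premise(a, b)
    )

# Disjunctive syllogism: (A or B) and not A, therefore B.
valid = entails(lambda a, b: (a or b) and not a,
                lambda a, b: b)
print(valid)  # True: the conclusion holds in every case
```

Note that the check itself delivers only a verdict; nothing in the enumeration amounts to grasping *why* B must follow once A is ruled out, which is exactly the difference I am pointing at.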
Connected to this, I want to point out that things are similar in other cases. For example, when we compare two shades of a color and say that one is brighter, if we aren't doing it mechanically, we are in fact aware that the one color looks brighter to us; we are not merely reporting an answer that came to us from some separate "comparator". So to say, the relation between the two shades is there in the shades as we are aware of them. The same goes for when we compare two shades of color as being the same, or when we say that 1+1=2 (though this is risky to say on a philosophy blog, because I guess philosophers don't understand 1+1=2 as all "normal" people do), etc…