Why a neural network can’t be conscious (2)
Posted by Tanas Gjorgoski on April 13, 2006
A shorter, and hopefully cleaner and better, argument is given in more recent posts: Can we digitize the brain and retain consciousness and Consciousness and Special Relativity
A few posts ago, I gave a record/replay thought experiment (included in this post as well), which I think shows that a specific kind of neural network can’t be conscious. After I talked to a friend, she told me that my argument seems very similar (if not identical) to The Story of a Brain by A. Zuboff. I will first compare the two arguments, and after that try to give additional explanation, and possible variations, of my argument against the possibility of (a certain kind of) artificial neural network being conscious.
The Story of a Brain
A man’s brain is put in a nutrient bath. By stimulating the sensory inputs to the brain, scientists are able to create corresponding conscious experiences. But at some point there is an accident, and the brain is split into its two hemispheres. Instead of the original connections, the hemispheres are connected first by wires, then by radio transmitters and receivers. After some time a new method is used: impulse cartridges are connected to each of the hemispheres, which compute the signals that would have been produced by the other hemisphere. In this way each hemisphere gets exactly the signals it would have gotten, even though it is no longer communicating with the other side. In the story, the brain is then separated into more and more parts, until finally each neuron is connected to its own impulse cartridge.
There is more to the story, but I told just the part that is similar. The Record/Replay argument, on the other hand, went as follows:
Let’s say that the system is composed of “digital” neurons, each of them connected to other neurons. Each neuron has inputs and outputs. The outputs of each neuron are connected to the inputs of other neurons, or go outside of the neural network. Additionally, some neurons have inputs that come from outside of the neural network.
Let’s additionally suppose that this system is conscious for a certain amount of time (e.g. two minutes), so that we can do a reductio ad absurdum later. We measure each neuron’s activity (the neuron’s input and output signals) for those two minutes in which the system is conscious (maybe we ask it whether it is conscious; it does some introspection and answers that it is). We store those inputs and outputs as functions of time. Once we have all that, we have enough information to replay what was happening in the neural network by:
* Resetting each neuron’s internal state to the starting state, and replaying the inputs which come from outside the neural net, plus the first inputs which come from inside the neural net (the starting state). As the function is deterministic, everything will come out again as it did the first time. Would this system be conscious?
* Resetting each neuron’s internal state to the starting state, then disconnecting all the neurons from each other, and replaying the saved inputs to each of them. Each neuron would calculate the same outputs it did before, but as nobody would “read” them, they would serve no function in the working of the system; actually, they wouldn’t matter! Would this system be conscious too?
* Shutting down the calculations in each neuron (they are not important, as seen in the second scenario, because the outputs of each neuron are also not important for the functioning of the system during the replay). We would feed the inputs to each of the “dead” neurons (and probably wonder what we are doing). Would this system be conscious?
* As the input we give to each neuron actually doesn’t matter, we would just shut down the whole neural net and read the numbers aloud. Would this system be conscious? Which system?
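The first replay step can be sketched in code. This is a minimal, hypothetical model: the neuron class, its update rule, and all the numbers are illustrative assumptions, chosen only to show that a deterministic neuron, reset to its starting state and fed the same inputs, must reproduce the same recorded outputs.

```python
# Hypothetical sketch of the record/replay idea: a tiny deterministic
# "digital" neuron. The class, update rule, and values are illustrative
# assumptions, not part of the original argument.

class DigitalNeuron:
    """A deterministic neuron: output depends only on inputs and state."""
    def __init__(self, weight, state=0):
        self.weight = weight
        self.state = state

    def step(self, inputs):
        # Deterministic update: same inputs + same state => same output.
        self.state = (self.state + sum(inputs)) % 100
        return (self.state * self.weight) % 100

def run_and_record(neuron, input_trace):
    """First run: record every input and output as a function of time."""
    log = []
    for t, inputs in enumerate(input_trace):
        out = neuron.step(inputs)
        log.append((t, inputs, out))
    return log

# Run once while the system is (assumed) conscious, recording everything.
neuron = DigitalNeuron(weight=3)
trace = [[1, 2], [4], [0, 5]]
log = run_and_record(neuron, trace)

# Replay: reset the internal state and feed the same inputs.
# Determinism guarantees identical outputs, as the argument notes.
replayed = DigitalNeuron(weight=3)
replay_log = run_and_record(replayed, trace)
assert replay_log == log
```

Nothing about the replay depends on the particular update rule; any deterministic function of state and inputs behaves the same way under reset-and-replay.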
In which step did the assumed system change from conscious to unconscious? Or maybe such a system can’t be conscious in the first place? In this post I have added one variation of this argument, which might show even more intuitively that there is a problem with the idea that such an artificial neural network can be conscious: through similar steps we change it into a system which can’t possibly be conscious, and yet it is impossible to say that in any of the steps something important for consciousness is lost. But first let me point out the differences between The Story of a Brain and the Record/Replay argument…
So, it seems to me that there are those important differences:
- The Story of a Brain is a story about a brain; the Record/Replay argument is a story about an artificial neural network.
- The brain in the story is an evolving brain, which in a way continues with its life; the artificial neural network is such that its internal state can be reset to the starting state.
- The artificial neural network is in principle divisible into its neurons (or at least it can be said that the Record/Replay argument holds just for those kinds of artificial neural networks). We don’t know if that holds for the human brain.
Isn’t the Record/Replay argument saying that our brains can’t be conscious?
My friend brought a different line of attack against the Record/Replay argument. She said that
- our brain is a neural network (I will mark this proposition as A)
- and our brain is conscious (B), hence it can’t be true that
- the Record/Replay argument shows that a neural network can’t be conscious (C)
But this reasoning is slightly problematic: in order for assumptions A and B to show that C is wrong, we don’t need to assume A, but a more precise proposition, namely that the brain is such a neural network that the scenario in the Record/Replay argument can be applied to it. This assumes more things, like the possibility of dividing it, the possibility of resetting internal states, and so on.
So, instead of A, we should talk about the proposition A2: that our brain is such-and-such a neural network, which makes it possible for the Record/Replay scenario to be applied to it. This is surely not an obvious fact (as A is), so I don’t think that A2 and B can be used as facts to attack C. (I analyzed what A2 would mean in the paper Replay Argument, given on the papers page of this weblog.)
Alternative Scenario: Replay Neurons Scenario
While talking yesterday, I figured out that the Record/Replay scenario can be modified a little, thus (I will call this the Replay Neurons scenario):
- We record the internal signals of the artificial neural network which we assume is conscious for a certain time.
- We construct replay neuron clones, which, when started, fire exactly the same outputs as the original neurons did, with the same timing.
- We replace one of the original neurons with a replay neuron, reset the other original neurons, and replay the inputs to the network, starting the replay neuron so that it acts as the original would have acted.
- We repeat the previous step, replacing more and more original neurons with replay neurons.
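The key property of a replay neuron is that its inputs play no causal role. A minimal sketch, under the assumption that time is divided into discrete steps and the recorded schedule is just a list of outputs:

```python
# Hypothetical sketch of a "replay neuron": it ignores its inputs and
# simply re-emits the recorded outputs at the recorded time steps.
# The recording below is an illustrative assumption.

class ReplayNeuron:
    def __init__(self, recorded_outputs):
        # recorded_outputs: outputs of the original neuron, indexed by time.
        self.recorded_outputs = recorded_outputs
        self.t = 0

    def step(self, inputs):
        # Inputs are received but play no causal role: the output is
        # read straight off the recording, exactly as in the scenario.
        out = self.recorded_outputs[self.t]
        self.t += 1
        return out

recording = [9, 21, 36]
replay = ReplayNeuron(recording)

# Whatever inputs we feed in, the outputs match the recording.
outputs = [replay.step(inputs) for inputs in ([7], [], [1, 1, 1])]
assert outputs == recording
```

Because `step` never reads `inputs`, swapping such a neuron into the network changes nothing about the signals the other neurons receive, which is what lets the replacement proceed neuron by neuron.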
The issue is now this: after some time we end up with a system composed entirely of replay neurons. Surely we won’t argue that this system is conscious. So if we assume that it is possible for the artificial neural network to be conscious, we need to specify where, in the course of replacing original neurons with replay neurons, the system became unconscious.
Now, by changing the Record/Replay scenario this way, it is no longer so similar to Zuboff’s story, but is much more like Searle’s story that I wrote about in a previous post. To remind you, in that scenario the neurons in a person’s brain are replaced one by one with artificial neurons.
And now the same question appears as in Searle’s book… the system will function as it did (through all the replay sessions), but what will happen with consciousness? Will there be less and less consciousness as we put in more and more replay neurons? That seems to me the only plausible answer for those who think that the original neural network can be conscious, since it is obvious, I think, that we end up with something that can’t be conscious, and I don’t think anyone would say that after replacing some number X of neurons the consciousness is suddenly cut off.
Let me now discuss the only place where someone might argue that consciousness is lost in the Replay Neurons scenario. The argument would go like this:
The original neurons are such that their inputs are causally effective: the inputs affect processes in the neurons which then result in certain outputs, which in turn are causally effective toward the happenings in other neurons, and so on. The replay neurons, on the other hand, ignore any input signals that come to them, and just emit output signals on schedule (as programmed).
Which brings us to a new scenario…
Before getting into it, let me repeat the assumed fact: we know what kinds of input signals each neuron receives, with what timing, and what kinds of outputs it produces.
So now, instead of making a simple replay neuron, we can create a more complex replay neuron in the following way. Instead of simply reproducing the outputs on schedule, because we know what inputs the neuron will get, we create this kind of list in the replay neuron: wait for input signal A1, and after a given time X1, send the output signal B1; then wait for the next input signal A2, and so on. Now the replay neurons are causally effective: their outputs matter for how the connected replay neurons work. But does this mean that the “neural net” created from this kind of replay neuron will be conscious?
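This “wait for A1, then after X1 send B1” list can be sketched as follows. The class name, the tuple format, and the collapse of real time into discrete steps are all illustrative assumptions:

```python
# Hypothetical sketch of the "causally effective" replay neuron: it holds a
# script of (expected_input, delay, output) entries and only fires after the
# expected input actually arrives. Timing is collapsed to discrete steps;
# all names and values are illustrative assumptions.

class ListReplayNeuron:
    def __init__(self, script):
        self.script = list(script)   # [(expected_input, delay, output), ...]
        self.pending = []            # [[steps_left, output], ...]

    def step(self, signal):
        # First, count down outputs already scheduled; emit any whose
        # delay has elapsed.
        fired = []
        for entry in self.pending:
            entry[0] -= 1
            if entry[0] <= 0:
                fired.append(entry[1])
        self.pending = [e for e in self.pending if e[0] > 0]
        # An arriving signal is causally effective: it schedules the next
        # scripted output, but only if it matches the expected input.
        if self.script and signal == self.script[0][0]:
            _, delay, out = self.script.pop(0)
            self.pending.append([delay, out])
        return fired

# "Wait for A1, then after 2 steps send B1; wait for A2, after 1 step send B2."
neuron = ListReplayNeuron([("A1", 2, "B1"), ("A2", 1, "B2")])
assert neuron.step("A1") == []        # A1 arrives; B1 is scheduled
assert neuron.step(None) == []        # one step of the delay passes
assert neuron.step(None) == ["B1"]    # delay of 2 elapsed; B1 fires
assert neuron.step("A2") == []        # A2 arrives; B2 is scheduled
assert neuron.step(None) == ["B2"]    # delay of 1 elapsed; B2 fires
```

Unlike the simple replay neuron, removing this neuron’s inputs would change its behavior, so its inputs genuinely are causally effective; yet it is still only walking through a recorded list.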
I take that the answer is no.
But what went wrong here? If in the original Replay Neurons argument we could say that, because the parts are not causally effective, we have lost the system, and without a system we can’t talk about a conscious system… we now have parts which are causally effective toward each other. (This part reminds me of Block’s argument in Psychologism and Behaviorism, except that there the system didn’t know what kinds of questions to expect over time. Here we have a situation where we know exactly what inputs and outputs the original system had.)
Now, we can work toward more complex replay neurons. We can, for example, take the timed outputs and inputs and, using mathematical methods for compression, develop algorithms in each replay neuron which generate the outputs not just by looking into the list, but through some more complex mathematical formula, including some internal state. (What makes the situation more interesting is that artificial neural networks are usually trained by giving them certain inputs, and the wanted outputs are used to adjust different internal variables of the network.)
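To make the compression step concrete, here is a minimal sketch under an invented assumption: the recorded outputs happen to double at each step, so the whole lookup table can be replaced by a one-line rule plus internal state. Whether that rule is the computation the original neuron actually implemented is exactly the open question the post raises.

```python
# Hypothetical sketch of "compressing" a replay table into a formula with
# internal state. The recorded outputs and the fitted rule are illustrative
# assumptions, not the original neuron's actual computation.

recorded = [3, 6, 12, 24, 48]   # outputs the original neuron produced over time

class TableReplayNeuron:
    """Replays by looking up the stored list."""
    def __init__(self, table):
        self.table = list(table)
        self.t = 0
    def step(self):
        out = self.table[self.t]
        self.t += 1
        return out

class CompressedReplayNeuron:
    """Generates the same outputs from a rule plus internal state: here,
    'double the previous output', which may or may not be the computation
    the original neuron actually implemented."""
    def __init__(self, start):
        self.state = start
    def step(self):
        out = self.state
        self.state *= 2   # the "compressed" rule replacing the whole table
        return out

table = TableReplayNeuron(recorded)
compressed = CompressedReplayNeuron(3)
# Both produce identical behavior over the recorded interval.
assert [table.step() for _ in recorded] == [compressed.step() for _ in recorded]
```

Both neurons are behaviorally indistinguishable over the recorded interval, which is why behavior alone cannot settle which computation is being implemented.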
Maybe in this process we will arrive at the computation that was implemented in the original neuron, maybe not. However, it seems clear that a mere causal connection between the individual neurons is not enough; we will need to argue that what is important is that the physical system implements a certain computation too, and not just any computation that would do for practical purposes! See Chalmers’ paper on what it means for a physical system to implement a given computation.
But why would consciousness appear if this specific computation is implemented, and not another one? Would this mean that we can never know whether we got the computational algorithm right, that we chose a wrong one which doesn’t produce consciousness?
For an analysis of how this argument can be related to the connection between brain and consciousness, you can further check this pdf document I wrote.
Zuboff, A. (1981). The story of a brain. In D. R. Hofstadter & D. C. Dennett (Eds.), The Mind’s I. London: Penguin.
Block, N. (1981). Psychologism and behaviorism. Philosophical Review, 90, 5–43.
Searle, J. R. (1992). The Rediscovery of the Mind (Representation and Mind). Cambridge, MA: The MIT Press.
Chalmers, D. J. A Computational Foundation for the Study of Cognition.