Here is a simple refutation of the idea that a neural network produces consciousness. The attack applies to a much more general class of systems, and I hope to post a short paper on the issue in the next few weeks.
Here is the simple argument:
Let’s say the system is composed of “digital” neurons, each of which is fully determined by: the inputs it receives from other neurons, its internal state, the calculation it performs, and the output it gives to other neurons. Because we assume it doesn’t matter how the calculation is carried out, every neuron pk can be replaced by any system that computes the same set of output functions yi=f(x1..xj). Let’s additionally suppose that this system is conscious, so that we can run a reductio ad absurdum.
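To make the setup concrete, here is a minimal sketch of such a “digital” neuron in Python. The class name and the particular update rule are my own illustrative choices, not anything from the argument; all the argument requires is that the output is a deterministic function of the inputs and the internal state.

```python
class DigitalNeuron:
    """A deterministic neuron: output = f(inputs, internal state)."""

    def __init__(self, weights, state=0.0):
        self.weights = weights   # one weight per input neuron
        self.state = state       # internal state carried between steps

    def step(self, inputs):
        # The particular f chosen here (a leaky weighted sum) is
        # arbitrary; the argument only requires that f is deterministic.
        total = sum(w * x for w, x in zip(self.weights, inputs))
        self.state = 0.5 * self.state + total
        return self.state

n = DigitalNeuron([1.0, -2.0])
print(n.step([3.0, 1.0]))  # 1.0
print(n.step([3.0, 1.0]))  # 1.5 -- same inputs, different state
```

Because `step` is a pure function of `(inputs, state)`, any other implementation computing the same function could be swapped in, which is exactly the substitution the argument relies on.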
Now, let’s say we measure each neuron’s activity and internal state for 2 (or 10, or 20) minutes during which the system is conscious (perhaps we ask it whether it is conscious; it does some introspection and answers that it is). We store each neuron’s inputs and outputs as functions of time. Once we have all that, we can replay what happened by:
- Resetting each neuron’s internal state to its starting state, then replaying the inputs that come from outside the neural net (the first internal inputs are fixed by the starting state). As every function is deterministic, everything plays out exactly as it did the first time. Would this system be conscious?
- Resetting each neuron’s internal state to its starting state, then disconnecting all the neurons from each other and replaying the saved inputs to each of them. Each neuron would compute the same outputs it did before, but since nobody would “read” them, they would serve no function in the operation of the system; they simply wouldn’t matter! Would this system be conscious too?
- Shutting down the calculations inside each neuron (the second scenario shows they are not important, because during the replay each neuron’s outputs are irrelevant to the functioning of the system). We would feed the inputs to each of the “dead” neurons (and probably wonder what we are doing). Would this system be conscious?
- Since the inputs we feed to each neuron don’t actually matter either, we could shut down the whole neural net and just read the recorded numbers aloud. Would this system be conscious? Which system?
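The first two replay steps can be sketched in a few lines. This is a toy, assumed model (two neurons, an arbitrary deterministic update rule, hypothetical names): we run a tiny connected net while recording every input each neuron receives, then reset the states, “disconnect” the neurons, and feed each one its recorded inputs in isolation. The outputs come out identical, which is the point of the second scenario.

```python
def neuron_step(state, inputs):
    # Arbitrary deterministic update rule; returns (next state, output).
    new_state = 0.5 * state + sum(inputs)
    return new_state, new_state

external = [1.0, 0.0, 2.0, 0.0]        # input stream from outside the net

# --- First run: neurons A and B connected (A reads B, B reads A) ---
state_a = state_b = 0.0
out_a = out_b = 0.0
recorded_a, recorded_b = [], []        # each neuron's inputs over time
run1 = []
for x in external:
    in_a, in_b = [x, out_b], [out_a]   # synchronous update
    recorded_a.append(in_a)
    recorded_b.append(in_b)
    state_a, out_a = neuron_step(state_a, in_a)
    state_b, out_b = neuron_step(state_b, in_b)
    run1.append((out_a, out_b))

# --- Replay: states reset, neurons disconnected, fed recorded inputs ---
state_a = state_b = 0.0
run2 = []
for in_a, in_b in zip(recorded_a, recorded_b):
    state_a, out_a = neuron_step(state_a, in_a)   # nobody reads out_a
    state_b, out_b = neuron_step(state_b, in_b)   # nobody reads out_b
    run2.append((out_a, out_b))

print(run1 == run2)  # True: the disconnected replay reproduces the run
```

The third and fourth steps then follow by noting that in the replay nothing downstream ever consumes `out_a` or `out_b`, so the calls to `neuron_step`, and finally the inputs themselves, can be dropped without changing anything the rest of the world can observe.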
UPDATE: I have finished the paper based on this argument; it is listed on the papers page.
4 thoughts on “Playback argument (why a neural network can’t be conscious)”
:) An interesting argument. But let’s assume the system is programmed to remember all input and to activate specific behaviors in response, with each local circuit programmed as a result of the initial programming; then to link all local circuits to initiate even more behaviors; and then to link all links to 11,000,000,000,000 other links that are programmed as more and more information is fed into the system.
Now there is no resetting of the neurons, since each neuron remembers both its connectivity and the behavior that results when it is activated. No matter what you do, short of wiping a magnet over the system (the theoretical sledgehammer to the brain), the system will consistently produce behavior according to the stimulation of the neurons.
Now, if that behavior has to contemplate the behavior of the neurons, it can do so only as a result of some activation of some neurons and the resulting interactivity of neurons, using the behavior of the system as indicative of whether it is conscious or not. Is such a system conscious?