Here is a simple refutation of the idea that neural networks produce consciousness. It can be used as an attack on a much more general class of systems, and I hope to post a short paper on this issue in the next few weeks.
Here is the simple argument:
Let’s say the system is composed of “digital” neurons, each of which is determined by: its inputs from other neurons, its internal state, the calculation it performs, and the output it gives to other neurons. Because we assume it is not important how the calculation is made, every neuron pk can be replaced by any system which computes the same set of output functions yi=f(x1..xj). Let’s additionally suppose that this system is conscious, so we can do a reductio ad absurdum later.
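To make the substitution claim concrete, here is a minimal sketch (assuming a neuron is just a deterministic mapping from inputs and internal state to an output and a new state; both implementations below are hypothetical, not from the text):

```python
# A toy "digital" neuron: output and next state depend only on
# the inputs and the current internal state.

def neuron_a(inputs, state):
    # one implementation: accumulate, then threshold
    total = state
    for x in inputs:
        total += x
    return (1 if total > 10 else 0), total % 16

def neuron_b(inputs, state):
    # completely different internals, same input/output behaviour
    total = sum(inputs) + state
    return int(total > 10), total & 15  # % 16 done via a bitmask

# Since only the function matters, the two are interchangeable:
for inputs, state in [([1, 2, 3], 0), ([7, 7], 5), ([], 12)]:
    assert neuron_a(inputs, state) == neuron_b(inputs, state)
```

Nothing about the physical realization appears in the function signature, which is exactly the premise the argument exploits.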
Now, let’s say we measure each neuron’s activity and internal state for 2 (or 10, or 20) minutes, during which the system is conscious (maybe we ask it if it is conscious; it does some introspection and answers that it is). We store their inputs and outputs as functions of time. Once we have all that, we can replay what happened by:
- Resetting each neuron’s internal state to its starting state, then replaying the inputs which come from outside the neural net, together with the first inputs which come from inside the neural net (the starting state). As the functions are deterministic, everything will come out again exactly as it did the first time. Would this system be conscious?
- Resetting each neuron’s internal state to its starting state, then disconnecting all the neurons from each other and replaying the saved inputs to each of them. Each neuron would calculate the same outputs it did before, but as nobody would “read” them, they would serve no function in the operation of the system; they actually wouldn’t matter! Would this system be conscious too?
- Shutting down the calculations in each neuron (they are not important, as the second scenario shows, because the outputs of each neuron are also not important to the functioning of the system during the replay). We would feed the inputs to each of the “dead” neurons (and probably wonder what we are doing). Would this system be conscious?
- As the input we give to each of the neurons actually doesn’t matter either, we would just shut down the whole neural net and read the recorded numbers aloud. Would this system be conscious? Which system?
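The record-and-replay steps above can be sketched in code. This is only an illustration of the thought experiment on a toy network of three deterministic neurons (all names and the update rule are invented for the sketch); the point is that each successively more degraded "replay" reproduces exactly the same outputs as the original run:

```python
class Neuron:
    """A deterministic 'digital' neuron: new state and output
    depend only on the inputs and the current internal state."""
    def __init__(self, state):
        self.state = state

    def step(self, inputs):
        self.state = (self.state + sum(inputs)) % 97
        return self.state

def run_connected(neurons, external, steps):
    """Normal run (or scenario 1: replay after resetting states).
    Records each neuron's inputs and outputs as functions of time."""
    outputs = [n.state for n in neurons]  # starting state
    trace = []
    for t in range(steps):
        recorded_inputs, new_outputs = [], []
        for i, n in enumerate(neurons):
            # each neuron reads the others' last outputs plus an external input
            ins = [o for j, o in enumerate(outputs) if j != i] + [external[t]]
            recorded_inputs.append(ins)
            new_outputs.append(n.step(ins))
        outputs = new_outputs
        trace.append((recorded_inputs, list(outputs)))
    return trace

def replay_disconnected(neurons, trace):
    """Scenario 2: neurons disconnected; feed each its recorded inputs."""
    return [[n.step(ins) for n, ins in zip(neurons, recorded_inputs)]
            for recorded_inputs, _ in trace]

def replay_dead(trace):
    """Scenarios 3 and 4: no calculation at all; just read out
    the stored numbers."""
    return [outs for _, outs in trace]

external = [3, 1, 4, 1, 5]
trace = run_connected([Neuron(s) for s in (1, 2, 3)], external, len(external))

# Scenario 1: reset states, replay external inputs -> identical trace.
assert run_connected([Neuron(s) for s in (1, 2, 3)], external, len(external)) == trace

# Scenario 2: disconnected replay reproduces every output...
assert replay_disconnected([Neuron(s) for s in (1, 2, 3)], trace) == [o for _, o in trace]

# Scenarios 3-4: ...and so does simply reading the stored numbers aloud.
assert replay_dead(trace) == [o for _, o in trace]
```

The asserts make the escalation explicit: at each step less and less machinery is doing any work, yet the observable record is indistinguishable, which is what the reductio turns on.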
UPDATE: I have finished the paper based on this argument, and it is listed on the papers page.