A shorter and, I hope, cleaner and better argument is given in more recent posts: Can we digitize the brain and retain consciousness and Consciousness and Special Relativity
A few posts ago, I gave a record/replay thought experiment (which I include in this post as well), which I think shows that a specific kind of neural network can’t be conscious. After talking to a friend, she told me that my argument seems very similar (if not identical) to The Story of a Brain by A. Zuboff. I will first compare the two arguments, and then try to give additional explanation and possible variations of my argument against the possibility that (a certain kind of) artificial neural network can be conscious.
The Story of a Brain
A man’s brain is put in a nutrient bath. By stimulating the sensory inputs to the brain, scientists are able to create corresponding conscious experiences. But at some point there is an accident and the brain is split into its two hemispheres. At first, instead of the original connections, the hemispheres are connected by wires, then by radio transmitters and receivers. After some time a new method is used: impulse cartridges are connected to each of the hemispheres, which compute the signals that would have been produced by the other hemisphere. In this way each hemisphere gets exactly the signals it would have gotten, even though it is no longer communicating with the other side. In the story the brain is then separated into more and more parts, until finally each neuron is connected to its own impulse cartridge.
There is more to the story, but I have retold just the part which is similar. The Record/Replay argument, on the other hand, went as follows:
Let’s say that the system is composed of “digital” neurons, each of which is connected to other neurons. Each neuron has inputs and outputs. The outputs of each neuron are connected to the inputs of other neurons, or go outside of the neural network. Additionally, some neurons have inputs that come from outside of the neural network.
Let’s additionally suppose that this system is conscious for a certain amount of time (e.g. two minutes), so that we can do a reductio ad absurdum later. We measure each neuron’s activity (the input and output signals of the neuron) for those two minutes in which the system is conscious (maybe we ask it if it is conscious, it does some introspection, and answers that it is). We store those inputs and outputs as functions of time. Once we have all that, we have enough information to replay what was happening in the neural network by:
* Resetting each neuron’s internal state to the starting state, and replaying the inputs that come from outside of the neural net, together with the first inputs that come from inside the neural net (the starting state). As the functions are deterministic, everything will come out again as it did the first time. Would this system be conscious?
* Resetting each neuron’s internal state to the starting state, then disconnecting all the neurons from each other, and replaying the saved inputs to each of them. Each neuron would calculate the outputs it did, but as nobody would “read” them, they would serve no function in the working of the system; actually, they wouldn’t matter! Would this system be conscious too?
* Shutting down the calculations in each neuron (they are not important, as seen in the second scenario, because the outputs of each neuron are also not important for the functioning of the system during the replay). We would give the inputs to each of the “dead” neurons (and probably we would wonder what we are doing). Would this system be conscious?
* As the inputs we would be giving to each of the neurons actually don’t matter, we would just shut down the whole neural net and read the numbers aloud. Would this system be conscious? Which system?
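The first two steps can be sketched in code. This is a minimal toy illustration, not anything from the argument itself: the neuron model, weights, and update rule are all made up for the example. The point is only that deterministic neurons, once reset, reproduce their recorded outputs from their recorded inputs even when fully disconnected from each other.

```python
import math

class DigitalNeuron:
    """A deterministic toy 'digital' neuron; the weights and update rule
    are made up for illustration."""
    def __init__(self, weights):
        self.weights = list(weights)
        self.state = 0.0
    def reset(self):
        self.state = 0.0
    def step(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs))
        self.state = 0.5 * self.state + total      # deterministic update
        return math.tanh(self.state)

# A tiny two-neuron net: each neuron receives the external input plus
# the other neuron's previous output.
n1, n2 = DigitalNeuron([1.0, 0.5]), DigitalNeuron([0.8, -0.3])
external = [0.1, 0.4, -0.2, 0.9]                   # inputs from outside the net

# Run 1: connected network; record every neuron's inputs and outputs.
trace = []                                         # (in1, out1, in2, out2) per tick
o1, o2 = 0.0, 0.0                                  # starting internal signals
for x in external:
    in1, in2 = (x, o2), (x, o1)
    o1, o2 = n1.step(in1), n2.step(in2)
    trace.append((in1, o1, in2, o2))

# Run 2: neurons reset and fully DISCONNECTED; each just gets its own
# recorded inputs played back. Being deterministic, each reproduces
# exactly the outputs it produced the first time.
n1.reset(); n2.reset()
for in1, rec1, in2, rec2 in trace:
    assert n1.step(in1) == rec1
    assert n2.step(in2) == rec2
```

In the second run nothing reads the outputs, which is exactly the situation in the second bullet above: the calculations happen, but they play no role in the system.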
In which step did the assumed system change from conscious to unconscious? Or maybe such a system can’t be conscious in the first place? In this post I have added one variation of this argument, which might show even more intuitively that there is a problem with the idea that such an artificial neural network can be conscious: through similar steps we change it into a system which cannot possibly be conscious, and yet it is hard to say that in any of the steps something important for consciousness is lost. But first let me point to the differences between the Story of a Brain and the Record/Replay argument…
So, it seems to me that there are these important differences:
- The Story of a Brain is a story about a brain; the Record/Replay argument is a story about an artificial neural network.
- The brain in the story is an evolving brain, which in a way continues with its life; the artificial neural network is such that its internal state can be reset to the starting state.
- The artificial neural network is in principle divisible into its neurons (or at least it can be said that the Record/Replay argument holds just for that kind of artificial neural network). We don’t know whether that holds for the human brain.
Isn’t the Record/Replay argument saying that our brains can’t be conscious?
My friend brought a different line of attack against the Record/Replay argument. She said that:
- our brain is a neural network (I will mark this proposition as A)
- and our brain is conscious (B), hence it can’t be true that
- the Record/Replay argument shows that a neural network can’t be conscious (C)
But this reasoning is slightly problematic: in order for it to follow from assumptions A and B that C is wrong, we need to assume not A, but a more precise proposition, namely that the brain is the kind of neural network to which the scenario in the Record/Replay argument can be applied, and this assumes more things, like the possibility of dividing it, the possibility of resetting internal states, and so on.
So, instead of A, we should talk about the proposition A2: that our brain is the kind of neural network which makes it possible for the Record/Replay scenario to be applied to it. This is surely not an obvious fact (as A is), so I don’t think that A2 and B can be used as facts to attack C. (What A2 would mean I analyzed in the paper Replay Argument, given on the papers page of this weblog.)
Alternative Scenario: Replay Neurons Scenario
While talking yesterday, I figured out that the Record/Replay scenario can be slightly modified as follows. (I will call this the Replay Neurons scenario.)
1. We record the internal signals of the artificial neural network which we assume is conscious for a certain time.
2. We construct replay neuron clones, which when started fire exactly the same outputs as the original neurons did, in the same timely manner.
3. We replace one of the original neurons with a replay neuron, reset the other original neurons, and replay the inputs to the network, starting the replay neuron so that it acts as the original would have acted.
4. We repeat (3), replacing more and more original neurons with replay neurons.
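The replacement procedure can be sketched in code. Everything here is an illustrative assumption of mine (a trivially simple neuron model, with no cross-connections, just to keep the sketch short); what it shows is that at every stage of the replacement the system’s observable behaviour is identical to the recorded run.

```python
class OriginalNeuron:
    """Deterministic toy neuron (hypothetical, for illustration only)."""
    def __init__(self, gain):
        self.gain = gain
        self.state = 0.0
    def reset(self):
        self.state = 0.0
    def step(self, x):
        self.state = self.state + self.gain * x   # deterministic update
        return self.state

class ReplayNeuron:
    """A 'clone' that ignores its inputs and just fires the recorded
    outputs of the original neuron, in the same order."""
    def __init__(self, recorded):
        self.recorded = list(recorded)
        self.t = 0
    def reset(self):
        self.t = 0
    def step(self, _x):
        out = self.recorded[self.t]               # the input is ignored
        self.t += 1
        return out

inputs = [1.0, -0.5, 2.0]
originals = [OriginalNeuron(0.5), OriginalNeuron(1.5)]

# Step 1: record each original neuron's outputs over the conscious run.
tapes = [[n.step(x) for x in inputs] for n in originals]

# Steps 3-4: replace the originals one by one with replay clones.
# At every stage the replayed run is indistinguishable from the record.
for k in range(len(originals) + 1):
    net = [ReplayNeuron(tapes[i]) if i < k else originals[i]
           for i in range(len(originals))]
    for n in net:
        n.reset()
    outs = [[n.step(x) for x in inputs] for n in net]
    assert outs == tapes                          # same behaviour at each stage
```

The assertion holds whether zero, one, or all of the neurons have been replaced, which is exactly what makes it hard to point to the stage at which anything was lost.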
The issue is now this: after some time we end up with a system composed entirely of replay neurons. Surely we won’t argue that this system is conscious. So if we assume that it is possible for the artificial neural network to be conscious, we need to specify where, in the course of replacing original neurons with replay neurons, the system became unconscious.
Now, by changing the Record/Replay scenario in this way, it is not so similar to Zuboff’s story, but is much more like Searle’s story that I wrote about in a previous post. To remind you, in that scenario the neurons in a person’s brain are replaced one by one with artificial neurons.
And now the same question appears as in Searle’s book: the system will function as it did (through all the replay sessions), but what will happen to the consciousness? Will there be less and less consciousness as we put in more and more replay neurons? That seems to me the only plausible answer for those who think that the original neural network can be conscious, since it is obvious, I think, that we end up with something that can’t be conscious, and I don’t think anyone would say that after replacing some number X of neurons the consciousness is suddenly cut off.
Let me now discuss the only place where someone might argue that consciousness is lost in the Replay Neurons scenario. The argument would go like this:
The original neurons are such that their inputs are causally effective: the inputs affect some processes in the neurons, which then result in certain outputs, which in turn are causally effective on what happens in other neurons, and so on. The replay neurons, on the other hand, ignore any input signals that come to them, and just emit output signals in a timely manner (as programmed).
Which brings us to a new scenario…
Before getting into it, let me repeat the assumed fact: we know what input signals each neuron gets and with what timing, and what outputs it produces.
So now, instead of making a simple replay neuron, we can create a more complex replay neuron in the following way: instead of simply reproducing the outputs in a timely manner, because we know what inputs it will get, we put this kind of list into the replay neuron… wait for input signal A1, and after a given time X1, send the output signal B1; then wait for the next input signal, and so on and so on. Now the replay neurons are causally effective; their outputs matter for how the connected replay neurons will work. But does that mean that the “neural net” made of this kind of replay neurons will be conscious?
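Such a causally coupled replay neuron can be sketched as follows. The class name, the script format, and the tick-based timing are my own illustrative assumptions; the point is that the neuron reacts to its inputs (so it is causally effective), yet it is still only reading a pre-recorded script.

```python
class TriggeredReplayNeuron:
    """Replay neuron that is causally coupled to its inputs: it waits
    for the recorded input signal, then after the recorded delay emits
    the recorded output. Names and the script format are hypothetical."""
    def __init__(self, script):
        # script: list of (expected_input, delay_in_ticks, output)
        self.script = list(script)
        self.pending = []                 # [ticks_remaining, output]
    def step(self, observed_input):
        # React to the input only if it is the one the script expects next.
        if self.script and observed_input == self.script[0][0]:
            _, delay, out = self.script.pop(0)
            self.pending.append([delay, out])
        fired = [out for d, out in self.pending if d == 0]
        self.pending = [[d - 1, out] for d, out in self.pending if d > 0]
        return fired

# "Wait for input signal A1, and after time X1 send output B1; then
# wait for the next input signal..." expressed as a script:
n = TriggeredReplayNeuron([("A1", 1, "B1"), ("A2", 0, "B2")])
assert n.step("A1") == []                 # B1 scheduled one tick later
assert n.step("A2") == ["B1", "B2"]       # B1 fires; B2 fires immediately
```

Unlike the simple replay neuron, removing this neuron’s inputs would change its behaviour, so the causal topology of the original network is preserved in a thin sense.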
I take that the answer is no.
But what went wrong here? If in the original Replay Neurons argument we could say that because the parts are not causally effective we have lost the system, and without a system we can’t talk about a conscious system… now we have parts which are causally effective on each other. (This part reminds me of Block’s argument in Psychologism and Behaviorism, except that there the system didn’t know what questions to expect through time, while here we know exactly what inputs and outputs there were in the original system.)
Now we can work towards more complex replay neurons. We can, for example, take the timed inputs and outputs and, using mathematical methods of compression, develop algorithms in each replay neuron which generate the outputs not by just looking into the list, but through some more complex mathematical formula, including some internal state. (What makes the situation more interesting is that artificial neural networks are usually trained by giving them certain inputs, and the wanted outputs are used to adjust the internal variables of the network.)
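A toy illustration of this compression step, with made-up values: if the recorded tape of outputs happens to follow a simple rule, the replay neuron can store the rule plus an internal state instead of the table. The outputs over the run are identical, yet the implemented computation may well be different from the original neuron’s.

```python
# The recorded output tape of one neuron during the conscious run
# (the values are invented for illustration):
tape = [1, 2, 4, 8, 16, 32]

# A "compressed" replay neuron: instead of storing the whole list, we
# notice the recorded outputs satisfy out[t] = 2 * out[t-1], and store
# only that rule plus an internal state. Same outputs over the run,
# but a different implemented computation, possibly not the one the
# original neuron performed.
class CompressedReplayNeuron:
    def __init__(self):
        self.state = 1                    # internal state replaces the table
    def step(self, _inputs):
        out = self.state
        self.state *= 2
        return out

n = CompressedReplayNeuron()
assert [n.step(None) for _ in tape] == tape
```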
Maybe in this process we will arrive at the computation that was implemented in the original neuron, maybe not. However, it seems clear that mere causal connection between the individual neurons is not enough; we will need to argue that what is important is that the physical system implements a certain computation, and not just any computation that would do for practical purposes! See Chalmers’ paper on what it means for a physical system to implement a given computation.
But why would consciousness appear if this specific computation is implemented, and not another one? Would this mean that we can never know whether we got the computational algorithm right, or whether we chose a wrong one which doesn’t produce consciousness?
For an analysis of how this argument can be related to the connection between brain and consciousness, you can further check this PDF document I wrote.
Zuboff, A. (1981). The story of a brain. In D. R. Hofstadter & D. C. Dennett (Eds.), The Mind’s I. London: Penguin.
Block, N. (1981). Psychologism and behaviorism. Philosophical Review, 90, 5-43.
Searle, J. R. (1992). The Rediscovery of the Mind (Representation and Mind). Cambridge, MA: The MIT Press.
Chalmers, D. J. A Computational Foundation for the Study of Cognition.
30 thoughts on “Why a neural network can’t be conscious (2)”
Ok, you somehow divide a brain in parts and simulate the inputs and outputs: if the inputs and outputs really are equivalent, then you have constructed an artificial part-brain around a part of the original brain, which then results in a normally conscious brain.
I admit: at least theoretically you wouldn’t need to run a full brain simulation on the new artificial part. You could somehow guess or look up the signals it feeds to the original brain part. This means no real distributed processing, no consciousness. I would argue that for consciousness to arise there has to be fine-grained distributed processing; using shortcut functions or lookup tables won’t do it.
Thanks for the comment Epinesh.
The argument is not supposed to work on a real brain, but on an artificial neural network of a certain type (if you are talking about my argument, and not the Story of a Brain).
In this kind of artificial neural network, all inputs of all the artificial neurons can be recorded, including the inputs from “outside” of the neural network.
So, if we can reset the artificial neurons to their starting states and replay the inputs as they were, the neurons (if we suppose that they are deterministic) will function exactly the same. There would be no difference from the time we recorded the signals; hence, if the neural network was conscious the first time, it should be conscious again.
Also, as we know the inputs and outputs of each of those neurons through the time in which the neural network was conscious, we can replace any neuron with another artificial neuron which has the same input/output function.
We can do it with pure “replaying of outputs”, or with some lookup table; from the point of view of the neurons connected to it, this artificial neuron will “function” the same.
But surely, after replacing all the neurons with replay or lookup-table neurons, this artificial neural network wouldn’t be conscious, EVEN though each of the neurons has the same inputs/outputs as the neuron it replaced.
So the question for people who support the thesis that this kind of artificial neural network can be conscious is: what is present in the original case, but not in this final case, that produces consciousness?
You say “no distributed processing”, but what does it mean? What’s the difference between a neuron executing its original function (e.g. some mathematical formula) and executing another one which, for this limited time, gives the same results?
How are two implemented mathematical functions different if they give the same results? One can say that the other function *would* potentially give different results in different cases, but this potential is never actualized in this case!
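The point can be made concrete with a toy example (mine, not from the discussion above): two different implementations that agree on every input that actually occurred during the recorded run.

```python
# Two different mathematical functions, as implementations:
def f(x):
    return 2 * x                    # the "original" function

def g(x):
    return 2 * x if x < 100 else 0  # differs only on inputs never seen

# The inputs that actually occurred during the recorded run
# (invented values, for illustration):
observed_inputs = [1, 5, 7, 42]

# Over the actual run the two are indistinguishable; the potential
# difference (at x >= 100) is never actualized.
assert [f(x) for x in observed_inputs] == [g(x) for x in observed_inputs]
```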
Surely, it would only be a particular definition of consciousness that would disallow a neural network from being considered conscious? I think it is more practical to strive for an understanding of the perspectival and the multi-meaning/contextual, rather than betray ourselves in the search for ‘mono-meaning’ or ‘mono-definition’. :)
Hey Fraser, thanks for the comment.
I’m very suspicious of talking of definition(s) of consciousness. As for the sense in which I use the word here… I think Searle talks about a “working definition” of consciousness, as something we have from the moment we wake up until the moment we go back to sleep. In this sense, the definition doesn’t serve as a formal definition, where the defined is fully reducible to the definition, but more as a pointing, so the correspondents can agree that they are talking about the same thing.
Though this is not specific to the case of consciousness. If we are in some place and there is a tree before us, I might want to start discussing the tree, and I will point to it while I say something about it. There is no need for a definition there; what is needed is just that you and I think and talk about the same thing.
Now back to consciousness. Why require a definition if we have no reason to doubt that we are talking about the same thing? While I don’t know what consciousness is, I know what I’m talking about when I use the word, and I guess you do too.
So to speak, knowledge of the thing of which we talk is not required in order to talk about it. In fact, if that were a requirement, it would hardly be possible to discuss anything.
And… I’m not even sure that this post adds anything to the comprehension/understanding of consciousness. I tend to agree with you that if we are to give a positive account of consciousness, being more precise and delimiting the senses in which “conscious” is used would be much more important.
UPDATE: Ah, didn’t recognize you at first (didn’t check the email field). Nice to see you here. Loved the musical piece. Waiting for the MySpace URL when you get around to publishing samples there. :)
I think my broadest definition of consciousness may be considered crazy by some, but let me try and explain it, so as to perhaps arrive at a conclusion that everything within nature, whether man-made etc, has to be a consciousness.
I am no scientist, but let’s imagine some substance. Now, that a substance may repel within certain contexts, or relationships, or undergo some kind of fusion in other contexts, or relationships, to me displays, dictates, a fundamental and rudimentary consciousness in operation. Why? Because the substance, by its very existence within reality, is therefore bound to relationships with other ‘phenomena’ within nature; and its attributes vary according to varying context etc. It is these relationships of existing phenomena that constitute consciousness, in my opinion. And I think we may be being short-sighted if we simply attach consciousness to certain organic phenomena.
This definition may require some meditating upon for it to ‘click’ into place for some; but coming at it from this definition, as humans, we still sit within it. A glass jar also sits within this definition of consciousness; and certainly a neural network! :)
I don’t consider that definition crazy, but I see problems with it.
To apply the attribute of consciousness to so wide a range of phenomena seems to me to render that attribute (consciousness) useless and meaningless.
If some definition or sense of “consciousness” renders the word useless, it might be a good idea to return to the common usage of the word (in ordinary language), in order to see where the need to use that word appeared, and to see if we really can “do away” with it.
For example, you would need to dispose of such common-language sentences as: “He lost consciousness”, “He regained consciousness”, “He did it unconsciously”, etc.
BTW, I’m not saying that there is a fundamental distinction between organic things and artifacts; but I think there is a distinction to which the word “conscious” applies. We observe two types of things: things that undergo changes, and things which, like us, are aware of the changes going on in the world (what is happening/what was happening) and of the possibilities open for acting. I wrote on this topic (also in connection with the issue of other minds) here.
Hey, first off, thanks for your kind comments on my music – glad you liked it!
Ok, where are we on the consciousness front? With my previous definition of consciousness, I didn’t mean to do away with the meanings of that word within its other various contextual usages in common language, but rather to add to the meanings – the perspectives. What I am trying to get at, really, is that the answer to the question posed by this thread – ‘Why a Neural Network Can’t Be Conscious’ – has to be a yes and no, depending on the premise. However, considering the particular premise of the thread – the particular definition of consciousness that is assumed – then I would have to say, ‘No, a neural network is not conscious’ (at least I hope I have read the thread properly and got that right :))
Are we really aware, also, of the changes that are going on in the world to any greater extent than molecules in some substance reacting to an increased temperature etc.? These are the kinds of links I was also hoping to make with the previous definition of consciousness. An idea that I am playing with, and am simply putting across for consideration. :)
Cheers for now,
PS: When I said, ‘Are we really aware, also, of the changes that are going on in the world to any greater extent than molecules’, I didn’t mean to say ‘extent’, I was just making a parallel in ‘the nature of things’.
Consciousness involves Simultaneous events
The brain is an area of neurophysiological activity. Neurophysiological activity consists of electrochemical reactions. Thus at any given time, the brain state is defined by a subset of electrochemical reactions, drawn from a large set of possible reactions.
Consider the phenomenon of a conscious thought. As at any given time the brain’s physical state consists of a collection of electrochemical reactions (events), it can be inferred that they are collectively responsible for the conscious thought. This means that, at least in part, simultaneous events are responsible for thought. In other words, thought creates a connection between simultaneous events. This contradicts the consequences of special relativity, which states that the fastest connection between events is the speed of light and thus excludes the possibility of a connection between simultaneous events.
Consider the memorizing of, say, the value 5. This would necessarily involve more than 1 point in space. Suppose, say, a single electron records 5 by taking a particular potential. Then by itself it cannot define (or know) 5, as its magnitude would be defined only with respect to another datum or event defined as a unit potential, thus involving at least 2 simultaneous events.
Consider the experience of vision. While we focus our attention on an object of vision, we are still aware of a background and, thus, of a whole collection of events. This would mean that at least an equally large collection of physical events in the brain is involved.
Consciousness is 4 Dimensional
Take the experience of listening to music. It would mean being aware of what went before. Like vision, it would probably mean that while our attention at any given time is focused at that point in time, we are aware of what went before and what is to follow. In other words, it spans the time axis. Many great composers have stated that they are able to hear their whole composition. Thus their acoustic experience is probably like the average person’s visual experience. While focusing on a particular point in time of their composition, they are nevertheless aware of what went before and what is to come. The rest of the composition is like the background of a visual experience. Experiencing the composition in this way, they are able to traverse it in a similar fashion to the way a painting is observed. In this sense, an average person in comparison can be seen as having tunnel hearing (like tunnel vision) when it comes to music, thus making it very difficult for him or her to reproduce or create new music. It can be seen that consciousness is a 4-D phenomenon.
Hi Frank, interesting thoughts!
First two issues that I have with your solution of the problem:
You point to the idea that it is the spatiotemporal arrangement of the events within the brain which is responsible for the conscious thought. But, if I understand you right, it seems to me that this alone implies epiphenomenalism about consciousness. At least, unless we figure out some further metaphysical principle which would make it necessary for the spatiotemporal arrangement to be related to consciousness, the presence or absence of consciousness doesn’t seem to change anything in the actual functioning of the brain.
I also have problems with giving the idea of simultaneity such importance. That is because I believe that time itself is merely an abstraction from changing things. But then, I buy special relativity, and you don’t seem to.
Having said that, I agree with you on the importance of the phenomena to which you point through phenomenological description (the examples of memorizing, of listening to music, etc.), and that a good theory of mind should give an account of them. Also, it seems to me that you don’t draw the real time vs. phenomenological (experienced) time distinction, and there I agree as well.
I don’t agree with epiphenomenalism for the following reason.
Acquisition of knowledge by humanity is dependent on the consciousness of the individual. When a person makes an observation and comes to an understanding, this understanding is that person’s subjective knowledge. If another person, on making a similar observation, arrives at a similar subjective understanding, the knowledge they share can be taken to be part of humanity’s objective knowledge. Thus, all of humanity’s objective knowledge is a subset of all of humanity’s subjective knowledge; that is, there can be no objective knowledge that has not been some person’s (dead or alive) subjective knowledge. Thus, an intrinsic assumption behind all of humanity’s objective knowledge is the similarity of the axioms of consciousness of the individuals.
This is where I think consciousness is.
We know the universe is expanding.
This expansion takes place over a 5th dimension. Consider any influence on the expansion from this 5th dimension. It will not be detectable from within the 4D of the universe. Given that through our subjective experience of music we may be viewing the 4D universe, then if this is the case, we must be in the 5th dimension. As such, free will is possible, as it can be an influence via the 5th dimension.
So I read your “replay-argument.pdf”. I think you draw the wrong conclusion.
The correct conclusion, in my opinion, is that computation/calculation is not important in generating consciousness. Only information is important.
So when you record the outcomes of the first run, you have all of the necessary information. This information contains the consciousness.
Subsequent runs where you do the replay just bring the consciousness into step with your perceptions, but to the consciousness itself the replay isn’t necessary.
Your “reductio ad absurdum” isn’t absurd at all; I think this example is evidence in favor of platonic realism or idealistic monism, or some such.
I’m not sure how to understand the stance that the information is important.
Usually when we talk about information, it is information about something; that is, we talk about information in the context of some communication, where different symbols carry information about something else.
So, when you say that the information is important, what kind of information are you pointing to?
Basically I’m sort of combining your idea with what Chalmers had to say here: http://consc.net/papers/computation.html.
So the information is contained in the discrete states of the CSA (combinatorial state automata). Chalmers says that computation provides the “causal connections” between these discrete states.
However in the replay argument you present, these computations/calculations are removed…replaced by lookups into precomputed results. And looking at your example, it would seem that consciousness would still be the result, certainly if you only replaced one neuron’s calculations with lookups, but I’d say even if you replaced nearly all (or even all) of the neural calculations with the simple lookups.
So, calculation doesn’t seem to be necessary to producing consciousness.
But, the precomputed results DO contain the information for the discrete states. Basically you’re just indexing into this information when you do lookups.
So it seems to me that consciousness is in the information, not in the calculations/computations that originally generated the information.
Time, in my view, is an artifact of consciousness. If you are conscious, you have a perception of time passing and things changing. But time is generated by consciousness. Not the other way around.
I’d say what you are doing when you calculate the neural outputs, or do lookups to find the neural outputs for a single timeslice of a simulation, is you are bringing the consciousness being represented into your perceptual time frame…you are just synching up with it.
Obviously you don’t need an outside observer to make sense of your neural firing patterns, to find the information… the consciousness emerges from the information represented by those patterns, independently of any outside observer. Any suitable set of patterns (spatially or temporally organized) will have a consciousness that emerges from the represented information, independent of any outside observer.
Which is basically platonic realism, correct?
Thank you for the explanation Fubaris,
Yes, I would say that if we allow that there is a consciousness even when we have simple lookups, it would seem that we have Platonic realism about information and its relation to consciousness.
I don’t know how to think of this though, in relation to the notion of information we usually have. That is, usually when we talk about information, we are talking about information about something, that is, we are talking about signals traveling through communication channels, where they can take one of possible states, and where each of those states has some predetermined meaning. For example, if we have symbols 1 or 0, one can use 1 to transfer information e.g. that the light is on, and 0 to transfer the information that the light is off.
But the ones and the zeros taken by themselves are not enough to speak about information. That is, only if they mean something can we talk about information. So, back to the lookups: if we want to give some platonic existence to information, what kind of information would this be? Maybe you are saying that we should take it as information about what kind of neural patterns there were in the brain?
That is an interesting concept, but I’m having troubles forming some reasonable idea in my mind in relation to it.
So the nature of the information in the lookup table is clear to me. It’s a representation of someone’s mental state (as represented by neural states) at a given moment in time. All of the information is there regardless of whether it makes sense to you, THOUGH if you want to be able to access that person’s mental state then you do need to know how to use the lookup table.
So I think you’re looking at the “simulated consciousness” from the outside, and saying “this information has to mean something to me for it to be real”.
But to the consciousness that’s being simulated, it doesn’t matter whether you can interpret it or not, right?
If certain patterns are represented by the data, then a consciousness will emerge.
Now, Chalmers, in the paper I referenced, says that data about mental states isn’t enough. He says that causal relationships (as implemented by computations) between “brain states” are also important to generate consciousness. He says this to avoid the “dust theory” problem (http://interweave-consulting.blogspot.com/2007/08/dust-theory-for-beginners.html).
BUT, I think that your record/replay example goes a long way towards saying that causal relationships are NOT important, because the neural computations aren’t done, yet consciousness STILL should be generated (especially for the cases where only a small subset of neurons are in replay mode).
Now, I suppose that you could say that there is still some computation being done in the form of doing the lookups, and that this would be sufficient to preserve the causal relationship topology that Chalmers talks about. But lookups are a pretty slim reed, calculation-wise, and I think some variations on the replay idea could whittle it down even further.
> Yes, I would say that if we allow that there is a consciousness
If neural networks are sufficient to generate consciousness, then I think we have to allow it, don’t we?
And I think that all of the available evidence points to the idea that neural networks ARE sufficient.
What I’m saying is that the information has to mean something in order to be information. Take for example the saved outputs of the neurons. They will be represented by some state of affairs in some system, maybe by a bunch of switches of some kind. But surely the state of affairs by itself isn’t information about anything in particular. Take one switch, for example, and say it is turned on. It can be used to represent an infinity of things. Or take a bunch of switches and their states; the same goes for those. They might represent something, or they might not represent anything at all. We might be looking at a pile of discarded switches, where people dispose of their switches without caring about their state.
So, when you talk about the information in the lookup table, I want to point out that the lookup table is some physical system in some physical state. But if we take it in isolation, I don’t see any reason to treat it as information in the first place. Why would we treat it as information about consciousness, and not as information that, e.g., there is a lion, in a context of communication where people agree to represent “there is a lion” with that specific state of affairs?
So, I think if we want to talk about the information, we probably need to bring in a wider context… to relate that state of the system (which we call a lookup table) to its function, or maybe its causal history. Would you agree with this?
On the issue of whether neural networks are sufficient to generate consciousness, I’m not sure that we know that, as we don’t know what in fact correlates with consciousness, and WHY it does. Maybe it depends on the specific chemical make-up of the neurons, for example, and so on.
> I’m not sure that we know that as we don’t know
> what in fact correlates with the consciousness
I think it’s very close to a settled question. Changes to the physical structure of the brain affect conscious experience in very repeatable and predictable ways. The changes can be via damage to the brain, or via use of drugs, and the results are pretty unambiguous.
There may be some unexpected twist yet to be discovered, but the idea that the material brain’s state is directly related to and responsible for conscious experience has to be BY FAR the best explanation for the evidence currently available.
> Maybe it depends on the specific chemical make-up of the neurons for example, and so on.
The presence and concentration of various chemicals would be incorporated into the brain’s “state” at a given moment. So I don’t see this as a problem.
> What I’m saying is that the information has
> to mean something in order to be information.
It has to mean something, but it doesn’t have to mean anything to YOU. It only has to mean something to the consciousness being represented by the information.
If I give you a DVD with 4 GB of raw binary data on it, is it information or not? To you, it’s not, because you don’t know how to interpret it. To me, it is because I know what it is.
If I’m the only one who knows what it is, and I die, is it still information? Now no one can interpret it! But it is still information. It’s just not ACCESSIBLE information.
So I think you need to look at the question from the viewpoint of someone being simulated, NOT from the viewpoint of someone RUNNING the simulation.
> I want to point that the lookup table is some
> physical system and it is in some physical state.
Okay, so we start with a big set of numbers that represent the brain’s state at a given moment (state A). We need to do a bunch of calculations to transform this set of numbers into a NEW set of numbers that represent the brain’s state at the NEXT given moment (state B).
But your record/replay example shows that we can skip doing the actual calculations and use precomputed values… up to and including the point where we just take state B and say, here’s the output of the calculations that we did on state A.
You start with the idea that we could just replay the output from 1 neuron and do the rest of the calculations. And then say, let’s do replays for 10 neurons. 10000 neurons. Up to ALL of the neurons.
When we do replay for ALL the neurons, we’re basically just jumping directly to state B from state A by doing one single lookup. “Given input state A, output state B”. And moving step by step up to this (1 neuron, 10 neurons, 10000 neurons, etc.), nowhere do we see a point where consciousness would stop being produced.
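To make the step-by-step replay idea concrete, here is a small sketch of my own (a toy model, not anything from the original post): each “neuron” in a tiny network either computes its output from its inputs or replays a value recorded from an earlier, fully computed run. When every neuron is in replay mode, the whole transition collapses into a single lookup from state A to state B.

```python
def step(state, weights, replay=None):
    """Advance a toy 3-neuron threshold network one tick.

    state   : tuple of neuron outputs (0 or 1) at time t
    weights : weights[i][j] = influence of neuron j on neuron i
    replay  : optional dict {neuron_index: recorded_output}; those
              neurons skip computation and just emit the recorded value.
    """
    replay = replay or {}
    new_state = []
    for i, row in enumerate(weights):
        if i in replay:                      # replay mode: no computation
            new_state.append(replay[i])
        else:                                # normal mode: compute output
            total = sum(w * s for w, s in zip(row, state))
            new_state.append(1 if total > 0.5 else 0)
    return tuple(new_state)

# Toy wiring and starting state (hypothetical values).
weights = [[0, 1, 0], [0.6, 0, 0.6], [0, 1, 0]]
state_a = (1, 0, 1)

# 1) Fully computed step: record the result.
state_b = step(state_a, weights)

# 2) Replay SOME neurons, compute the rest: same result.
partial = step(state_a, weights, replay={0: state_b[0]})

# 3) Replay ALL neurons: the "computation" is now one table lookup.
lookup = {state_a: state_b}
full_replay = step(state_a, weights, replay=dict(enumerate(state_b)))
assert partial == full_replay == lookup[state_a]
```

The point of the sketch is that from the outside, steps 1, 2, and 3 are indistinguishable; the question in the discussion is whether anything relevant to consciousness changes between them.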
> or maybe it’s causal history. Would you agree with this?
So even in the “full replay” mode, there is still a causal link between state A and state B… the lookup. But this lookup is so computationally simple, and so dissimilar from the original neural calculations, that I’m inclined to say that it has NO involvement in the production of consciousness. The lookup only has meaning to you, the observer, not to the consciousness being produced by “full replay” mode.
> So, I think if we want to talk about the information,
> we probably need to get to wider context
Again, I think you need to look at it from the viewpoint that YOU are the one being simulated. What could the people running the simulation do that would cause your consciousness to disappear?
You know what it is, and I certainly don’t say that if something doesn’t have meaning to me, it doesn’t have meaning at all. What I’m saying is that for something to be information it needs to have meaning. Consider this… what if you give me a brand new external memory, EM1, where the bits are not initialized to any particular values but hold random ones. Is there information on it or not? Say you give me another external memory, EM2, on which you wrote some information (that is, its contents mean something to you). We would agree that EM2 contains information. But what if EM1, by pure chance, happens to be in the same state as EM2? If we agree that EM1 doesn’t contain information, but EM2 does, it has to be something about the wider context that makes the state of EM2 information, and not the state of EM1.
So, I’m not saying that something has to have meaning for me in order to be information, but it has to have meaning in general. The same state of a few bits of memory can be used to mean all different kinds of things: it could be ASCII, it could be UTF-8, it could be the binary representation of a number, and so on… The state on its own, as a sign, doesn’t determine what is meant by the sign.
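The point that the same bits underdetermine their meaning can be shown in a couple of lines (my own illustration, not the commenter’s): the very same two bytes are a greeting under one interpretation and a number under another.

```python
# The same two bytes, read under two different conventions.
raw = bytes([0x48, 0x69])

as_text   = raw.decode("ascii")                    # text interpretation
as_number = int.from_bytes(raw, byteorder="big")   # integer interpretation

print(as_text)     # "Hi"
print(as_number)   # 18537
```

Nothing in the bytes themselves says which reading is “the” meaning; that is fixed only by the surrounding convention, which is exactly the wider context being argued for.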
Now, you say that this information in the lookup tables doesn’t mean something to me, but that it means something to the “consciousness represented by that information”. But that seems like a problem, no? It can’t be that the information only means something to the consciousness, when the existence of that consciousness depends on the meaning of the information. So, we need the meaning of the information to be grounded in something else.
That’s why I mentioned the causal/historical element. Maybe we can say that information has meaning because of where it was recorded, like the information in EM2 has meaning because of where, how and why it was recorded.
Though, to me this sounds kind of weird too – that whether the replay system is conscious or not will depend on how it was created.
As for the mind/body issue, we do know that if we do something to the brain it also affects consciousness, but we don’t know what it is about the brain that causes or is correlated with consciousness. So, I’m just saying that we don’t know whether it is the fact that it is a neural network that has something to do with consciousness; maybe other systems which are not neural networks can be conscious, maybe it is specific chemicals, some specific physical processes, maybe something else, etc…
I like some of your concepts; indeed, what is data or information without meaning? In truth you can look at a red rose and I can look at the same red rose and we both can say it is red. But one of us could be seeing a different color and simply associate that color within our mind with “red”, so as to call it red. And by that we can both agree it is red, even if one of us sees it entirely as another color. I can word it differently to perhaps give a better understanding. If a child were born that saw green as red and red as green, but was taught that the green was red, then everything it saw as green it would say was red. And even though in truth the child saw green instead of red, it would not know the difference, since its green is what we would see as red and since it was taught that that color is red. Do you understand?

Another strange thing about color is that it’s an illusion of the mind: the eye, being sensitive to frequencies, associates each frequency with a color that is perceived/imagined internally, according to its programming. But science states that there is no real color externally, just varying frequencies.

The questions associated with mind and reality and consciousness border on something called imagination. There is always a mix of such. Ever had a dream where in the dream you saw something, or heard something, or even felt something? Most everyone has, but you know that in the dream your real eyes saw nothing, your real ears heard nothing, nor did your real hand feel anything. Yet you can still see and hear and even feel in a dream.
Some scientists have suggested a dream to be nothing more than random neuronal firing in the brain. But if this were true, have you ever asked why you always have “yourself”, or should I say your imagined self or body, in the dream? What and where is the fine line between real and imagined? If reality is just data in the brain, then where are you really, and what are you? What if the brain were merely a relay station of data that transmits and receives between the body and the soul? What if this thing you call reality was not created or experienced in full within the brain, but within the soul? Perhaps the external world is not even real at all, at least not in the sense that it seems. Perhaps it is really just a web of data, so to speak. Perhaps we are all interconnected into this data stream we call the external world or life. And perhaps death is when we disconnect or unplug from this data stream called life, and we plug into another data stream. This does not mean it’s not all real; it’s all very real, awake or asleep in a dream, right? Just that it may all be quite different from what you thought it was.