A brood comb

…philosophical and other notes…

Can We Digitalize The Brain and Retain Consciousness?

Posted by Tanas Gjorgoski on October 15, 2007

I say – no. Here is my Record/Replay argument against it…

Premise 1: Consciousness is an occurrent property of the brain.

We can divide the properties of a system into two groups: occurrent properties and dispositional properties. The dispositional properties of a system characterize its behavior in different potential circumstances. On the other side, the system doesn't exist in potential circumstances but in concrete ones, and at a specific time it has specific properties. Those are the occurrent ones. To give an example, an electron has the dispositional property of repelling and being repelled by other negatively charged particles, but at a specific time it also has the occurrent properties of position, momentum, and so on.

Consciousness is something that a system has or lacks at a specific time. It is not a disposition.


Say that we have a simple system of two units, A and B, connected with an information channel, so that the information from the output of A is transferred to the input of B: A → B. Let's call this system 'DC' (for direct connection).

Let's modify this system by adding another unit 'X' between A and B, so that the connection becomes A → X → B.

Let’s call this system ‘IC‘ (indirect connection).
Let's define an occurrent connection (OCC) as: for the given span of time t1<t<t2, whatever data is on the output of A, the same data is at the input of B.

Premise 2: If A and B start from the same state in the IC and DC cases, if A gets the same data from "outside", and if 'X' doesn't affect the OCC in any way in the time span t1<t<t2, then IC and DC will have the same occurrent properties.
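A toy simulation may make this premise concrete (the units, update rules, and names below are my own hypothetical illustration, not part of the argument itself):

```python
# Toy illustration of DC vs IC: unit B behaves identically whether it
# receives A's output directly or through a pass-through unit X that
# does not alter the data on the channel (the "occurrent connection").

def unit_a(inputs):
    """Unit A: emits one output per input (here, a trivial transform)."""
    return [x + 1 for x in inputs]

def unit_b(stream):
    """Unit B: accumulates whatever arrives on its input channel."""
    state = 0
    history = []
    for x in stream:
        state += x
        history.append(state)
    return history

def pass_through_x(stream):
    """Unit X: forwards the data unchanged, so the OCC is unaffected."""
    return list(stream)

outside_data = [3, 1, 4, 1, 5]

dc_result = unit_b(unit_a(outside_data))                  # direct connection
ic_result = unit_b(pass_through_x(unit_a(outside_data)))  # indirect connection

assert dc_result == ic_result  # same occurrent history in both systems
```

Since X only forwards the data, B sees exactly the same input stream in both setups, so both systems trace out the same occurrent history.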


Step 1:
We take each neuron of a conscious person and replace it with a digital neuron. The brain (as it is now) is still conscious. The digital neurons are such that we can save their inputs and outputs (with the time of occurrence) and can also reset them to the state they had at a given time t.

Step 2:
For some span of time t1<t<t2, we save the inputs and outputs of each digital neuron, including the data from the senses that goes into the neurons. Consciousness occurs between t1 and t2.

Step 3:
We reset the neurons to the state they had at t1 and start reproducing the data from the senses. The system functions the same as it did between t1 and t2. It has the same occurrent properties, so it also has consciousness.

Step 4:
In each connection between digital neurons we put a small Replay Box. It outputs, with the same timing, the inputs that were saved in the t1<t<t2 period for the neuron on its output side. The neuron thus receives the information on the given input just as it received it in the t1<t<t2 period. So for every connection we have: neuron → Replay Box → neuron.

We reset each neuron to the state it had at t1 and reproduce the data that was coming to the neurons from the senses. At the precise time we also start the Replay Boxes, so that between the neurons we have a case of IC with the same OCC as the DC had in the time span t1<t<t2.
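The record/replay construction of Steps 2–4 can be sketched in the same toy spirit (again a hypothetical model; the neuron update rule and names are mine, not the post's):

```python
# Record/replay sketch: first run the connected system and record each
# neuron's inputs; then reset the neurons, cut the connections, and let
# Replay Boxes feed each neuron its recorded input stream. The neurons'
# state trajectories come out identical.

class DigitalNeuron:
    def __init__(self):
        self.state = 0

    def reset(self, state):
        self.state = state

    def step(self, signal):
        self.state = (self.state + signal) % 97  # toy update rule
        return self.state

# --- Steps 1-2: connected run, recording every neuron's input ---
n1, n2 = DigitalNeuron(), DigitalNeuron()
senses = [5, 2, 7, 1]
recorded_inputs = {"n1": [], "n2": []}
trajectory_connected = []
for s in senses:
    recorded_inputs["n1"].append(s)
    out1 = n1.step(s)        # n1 is driven by the senses
    recorded_inputs["n2"].append(out1)
    n2.step(out1)            # n2 is driven by n1's output
    trajectory_connected.append((n1.state, n2.state))

# --- Step 4: disconnected run, each neuron fed by its Replay Box ---
n1.reset(0)
n2.reset(0)
trajectory_replayed = []
for in1, in2 in zip(recorded_inputs["n1"], recorded_inputs["n2"]):
    n1.step(in1)  # replay box for n1
    n2.step(in2)  # replay box for n2; no longer wired to n1
    trajectory_replayed.append((n1.state, n2.state))

assert trajectory_connected == trajectory_replayed
```

In the second run the neurons never talk to each other, yet each one passes through exactly the same sequence of states as in the connected run, which is the point the argument turns on.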

From Premise 2, it follows that the system in Step 4 will have the same occurrent properties through time as the system in Step 1.
From this and Premise 1, it follows that consciousness will occur in the system in Step 4.

But consciousness can't occur in the system from Step 4, as what we have there is just a bunch of disconnected neurons.

Hence, by reductio, it can't be that the system was conscious in Step 1. Hence we can't replace the neurons with digital neurons and have the system retain consciousness in the process.

———————————————-

(I had this argument in a post I wrote some time ago, but I have tried to systematize it a little here as a response to a post at Philosophy, Et Cetera.)


15 Responses to “Can We Digitalize The Brain and Retain Consciousness?”

  1. Hi Tanasije,

    Interesting post!

    Why should we trust your intuition that the system in step 4 doesn't have consciousness? True, it is a messed-up scenario, a bunch of disconnected neurons, but I think you need more than that to generate a contradiction. If your premises are right, then why shouldn't we just accept that the system in step 4 would have consciousness? Sounds strange to us, but then again you purposely designed it to be an extremely strange situation…

  2. Hey Richard,

    That is a valid point. I haven't thought much about the issue of whether consciousness can occur in a bunch of disconnected neurons. Part of why I didn't go there is that I guess I don't expect that "brain is a computer" people will actually go so far and bite that bullet. I'm guessing it is more likely that they will attack the second premise.

    But now that you mention it… I guess it is worth going there. Who knows, maybe some reader of the blog will have an idea of how to approach such an argument. :-/

  3. Roko said

    Tanasije said: “I don’t expect that “brain is a computer” people will actually go so far, and bite that bullet.”

    As I said over on Philosophy etc, I am going that far. I am actually going even further. I think that a conscious mind can, without any loss, be replaced by an input/output table, that is, a set of pairs (I, O) of sensory inputs I over some time interval and motor outputs O over that time interval.

    Furthermore, I think that you can digitalize space, time and the values of physical fields (in a sufficiently fine way) without losing anything essential, so that the sets of all possible inputs and outputs are actually finite sets.

    Thus I am saying that a conscious mind is just some (large) table of inputs and outputs; if you consider the set of all such tables, you can say “that one is conscious, that one isn’t”, etc.

    So where does this leave the subjective feeling of consciousness that you and I experience? How can a list of pairs of inputs and outputs have qualia? I don’t feel like a list of inputs and outputs…

    I think that the answer to these kinds of questions is that our intuition is just misleading us. When someone actually works out how the human mind works, our intuitions will get adjusted and we’ll see that physicalism really does make sense.

    Anyway, happy blogging ;-0

  4. Roko said

    I forgot to emphasize in the above: I think that a mind is a (large) *finite* input/output table.

  5. Hi Roko, thanks for the comment.

    I’m not sure how input/output tables are important here.

    In the Step 4 scenario, the digital neurons are left as they are, and it is not important whether their functioning is implemented by input/output tables or in some different way.

    The replay boxes aren't input/output tables; they are just replaying machines… they just produce a sequence of outputs in a timely manner.

    So it seems to me you are biting a different bullet, as the issue here is not whether the neurons (or the mind) are input/output tables or something else, but whether a disconnected bunch of neurons can have the property of being conscious. Or, in the input/output analogy: can there be consciousness even if the inputs and outputs are disconnected?

  6. nooprocess said

    We are still stuck with the fact that we are talking about COMPUTATION, not CONSCIOUSNESS.

  7. Roko said

    “We are still stuck with the fact that we are talking about COMPUTATION, not CONSCIOUSNESS.”

    I think these are fundamentally the same thing.

  8. Roko said

    “Or, in the input/output analogy, can there be consciousness even the inputs and outputs are disconnected.”

    Yes, I think that there can. As I see it, consciousness is a property that a mind can have. A mind is a piece of information, like a genome or a story. Thus if you disconnect a conscious mind from reality it is still a conscious mind.

    What you did with the replay boxes was rather odd, and I think it is a somewhat confusing thing to do. It's rather like taking a storybook – say "Alice in Wonderland" – cutting all the pages up into the individual letters, but carefully labeling the bits so that you could put them back together again, and then asking if you've still got the story you started with. Yes, you have got the story, but that doesn't mean that a story is a disconnected set of letters!

  9. Roko,

    Well, if it wasn’t confusing, it wouldn’t be very interesting :)

    Anyway, in the scenario we end up with a bunch of physically disconnected neurons, each working as a little system with a "replay box".

    Intuitively, it seems to me that since they are not physically connected, we can separate them by any distance in space. We can take the digital brain apart to distant corners of the universe, and the overall sum of all of them should still be conscious (if the assumptions and the argument are right).

    And, as Richard said, it might be just my intuition, but I don't believe that such a bunch of disconnected replaybox-neuron pairs thrown into distant corners of the universe will have the property of consciousness.

  10. John said

    What if EVERYTHING is a modification of Conscious Light?

    For instance:

    1. http://www.dabase.org/dht6.htm
    2. http://www.dabase.org/broken.htm
    3. http://www.aboutadidam.org/readings/transcending_the_camera/index.html

    4. http://www.adidabienale.org
    5. http://www.adidamla.org/newsletters/newsletter-aprilmay2006.pdf

  11. Hey John,

    I like the idea that light has some relation to the basic principle of all things (if there is such a principle). After all, we know now that for light time 'doesn't pass', and that energy and mass can be converted into each other.

    But I'm not sure that this kind of claim that there is one simple principle (light), with just the adjective "conscious" added, can do anything to explain the world, subjectivity, and everything.

    How does it differ from ancient claims, for example that of Thales that everything is water, or that of Anaximenes that everything is air?

  12. Anton Shepelev said

    Hello, Tanas

    Why is consciousness an occurrent property? I assume that it manifests itself as the capacity for "intelligent" (whatever that means) reaction to external influences. Different influences will naturally provoke different reactions, but the capacity itself is dispositional (Aristotle's essential vs. accidental). Influences that will destroy the system do not count, because you won't expect consciousness from a broken (dead) system…

    To put it more briefly: consciousness is a dispositional property as long as the system is undamaged and working.

    • Hi Anton,

      In this case, the type of consciousness I was thinking about is occurrent consciousness, the I'm-existing-NOW-and-I'm-aware-of-my-being-and-other-things-NOW consciousness. This kind of consciousness is not something that can be dispositional, as it is about being-and-being-aware-of-my-being-NOW. It is something which I (as a human) either do have now (that kind of awareness of my own existence) or don't have (we can also call that state being unconscious).

      I don't want to say that this is the sole thing that the word "consciousness" can refer to, nor do I want to claim that it is best used to refer to this thing, but hopefully with this explanation it is clear what I'm talking about.

  13. Anton Shepelev said

    Thank you for the reply, Tanas.

    You treat occurrent and dispositional as mutually exclusive features, but your descriptions of them:

    a) at the beginning of the post: "The dispositional properties of a system characterize its behavior in different potential circumstances. On the other side, the system doesn't exist in potential circumstances but in concrete ones, and at a specific time it has specific properties. Those are the occurrent ones."

    b) and in your comment: “…the I’m-existing-NOW-and-I’m-aware-of-my-being-and-other-things-NOW consciousness”

    seem to define occurrent as a special case of dispositional, i.e.: occurrent is dispositional but not the other way round.

    You say that "the system doesn't exist in potential circumstances", but actually it does exist in one specific realization of all the potential circumstances that were possible at some previous moment. If at some moment t0 there are four sets of potential circumstances c1..c4, then at the next moment t1 only one of the four (say, c2) can be realized. If we have a dispositional property Pd describing some aspect of the system's behaviour in c1..c4, then the same property will describe it in c2.

    Referring to your example with an electron: it has a fixed charge NOW (and ALWAYS), which makes this property both occurrent and dispositional according to your definition.

    Of course, you could modify it by adding that anything that depends on the circumstances is not dispositional, but what about the magnetic field created by a moving electron? The location of this field depends on the electron's coordinates, and its strength on the electron's speed. Yet the fundamental ability to create a magnetic field, and the mathematical dependency of the field's parameters on the electron's x(t), y(t) and z(t), is no doubt a dispositional property. This law covers all possible circumstances and is therefore dispositional. It can be used to calculate the field at some given point of time. Therefore, the calculated strength of the magnetic field is but a manifestation of the electron's dispositional property.

    You might want to read this article on essential vs. accidental:
    http://plato.stanford.edu/entries/essential-accidental/

    The way you define occurrent consciousness, it is indeed indistinguishable from a mechanical recording, but I do not see any reason for dealing with this concept, because in retrospect everything can be looked upon as a recording.

    * * *

    My problem with explaining consciousness is the origin of feelings and emotions. As a friend told me, from the physical laws that govern the behaviour of matter we can deduce only that: the behaviour of matter.

    * * *

    In a Russian-Orthodox Christian book, consciousness is connected with free will, and free will with the ability to violate the law of causation by acting out of itself and without a cause. This is an interesting point of view, because without free will in this sense (i.e. with determinism) there is no reason for emotions and feelings to exist, because all men are machines anyway…

    • I guess it is hard to distinguish them formally in some way, but I'm thinking of the difference between glass being broken vs. glass being fragile. Glass at a certain point will either be broken or not; that is an occurrent property. It being fragile is, on the other hand, a dispositional property, because it is *about the potential* of it being broken. Maybe every dispositional property IS in one way or another nothing but the potential of the object to get some occurrent property, but really, those formal conundrums do not interest me much.

      In the case of consciousness, think about some person being unconscious and what that means. We aren't saying that the person doesn't have the potential of thinking, of becoming aware of things, learning, acting, etc… So what I was saying is that *lacking consciousness*, or *not being conscious*, is not in that sense a dispositional property: it tells us something about the system now. That it doesn't think, is not aware of things, isn't acting based on the input of the senses, NOW.
