Everybody knows that one part of the computer is its memory. It is made of small parts that can take one state or the other (e.g. on and off), like switches, and we say that one such part can hold one bit of information. Eight of them hold one byte, 1024 bytes make a kilobyte, 1024 kilobytes make a megabyte, and 1024 megabytes make a gigabyte. So if you have 1 gigabyte of memory in your computer, you have 1024*1024*1024*8 of those bits, of those switches – around 9 billion of them. If you had started flipping those switches from one state to the other on the day you were born, 15 of them per second, without ever stopping, it would take you around 18 years to finish.
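If you want to check that arithmetic yourself, a few lines of Python will do it (a small sketch, just multiplying out the units from the paragraph above):

```python
# How many bit "switches" are in 1 gigabyte of memory?
bits_per_byte = 8
bytes_per_gigabyte = 1024 * 1024 * 1024  # KB -> MB -> GB, each step times 1024

bits = bytes_per_gigabyte * bits_per_byte
print(bits)  # 8589934592 -- around 9 billion switches
```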
Anyway, those switches happen to be of a very particular kind: when you put one in a state, it stays in that state until you change it again, and, more importantly, the switches don’t disappear.
So we can look at this kind of computer memory as based on switches that persist, and whose state you can set to “On” now and read later. If the state of a switch isn’t stable, or if the switch disappears, this can’t be done. What is called RAM in a computer keeps its state only while the computer is turned on; it needs electrical power to maintain it. When you turn the computer off and then on again, the states of the switches in RAM are random, so you can’t read what was written to them before the restart. That is why we use a hard disk to keep state when there is no power; the state of the “switches” in the hard disk is maintained even when there is no power.
But we can ignore the technical details of the switches, and of how their state is set or changed. From our point of view, as long as those things don’t suddenly disappear and they maintain their state through time (state that can be set, and read later), they are switches… like those in the picture:
If I’m forgetful, I can use those to store information. For example, I ask my wife whether she would like eggs for breakfast tomorrow, and she says yes. Instead of remembering that, I put the switch in the on position if she does, and in the off position if she doesn’t. The next day, when I get up, I just take a look at the switch, and there it is: she did want eggs for breakfast. Of course, if the switches were such that they disappear, this wouldn’t be possible.
But wait, you say: what if you also need to remember whether she would like bacon, whether you need to take out the garbage, to go buy something, etc.? You would flip all those switches on and off, but then you would need to remember which switch is for what!
OK, I must admit it would be impractical, especially since these switches are such that they can’t be tagged or written on. But there are lots of ways in which they can still be useful. First, I can learn once what each switch is for, and then reuse the switches to hold information for each new day. Second, I can store bigger pieces of information by taking a few consecutive switches. E.g. I can store 4 different states by using two switches: “Off”/”Off”, “Off”/”On”, “On”/”Off” and “On”/”On”. In that case, I need to remember just the starting switch, yet I can distinguish four different things I had to remember. For 8 different states I need three consecutive switches, four for 16 states, five for 32 (e.g. remembering a number between 1 and 32), etc. So those switches are usable: by remembering just the starting switch, we can remember very big things. The American mathematician and electrical engineer Claude Shannon is known as the father of information theory, which deals with this kind of transfer of information. In our case it is a transfer of information from the past to the future, from me today to me tomorrow, but usually the sender and receiver are abstract: it is not important whether they are the same or different persons, where they are, or in what time they are.
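The doubling pattern above is easy to state in code. Here is a small sketch (the function names are mine, just for illustration) of the two directions: how many states a row of switches can hold, and how many switches you need for a given number of states:

```python
import math

def states(num_switches):
    """How many distinct states a row of on/off switches can hold."""
    return 2 ** num_switches

def switches_needed(num_states):
    """Fewest consecutive switches needed to tell num_states things apart."""
    return math.ceil(math.log2(num_states))

print(states(2))            # 4 -- Off/Off, Off/On, On/Off, On/On
print(switches_needed(16))  # 4
print(switches_needed(32))  # 5 -- enough for a number between 1 and 32
```

Each extra switch doubles the number of distinguishable states, which is why only five switches already cover 32 possibilities.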
I could even make codes for letters, so that I use 5 of those switches to remember one letter (there are standard codes like ASCII, which connects each Latin letter to a number, though ASCII uses 7 bits per character; 5 switches are enough for the 26 letters alone). Used like that, I can, in a way, write by flipping those switches. For example I can write: EGGS (each letter being 5 switches), then the next switch “On”/”Off”, then BACON, then the next switch “On”/”Off” depending on what my wife says, and so on. So we can see that we can use those switches to store information, and that’s why it is called memory. Note, though, that I still need a way to understand what the states of particular switches mean, and that the information can’t mean anything by itself. If I encountered several switches in somebody’s front yard, some in the “On” position and some in the “Off” position, there would be no way to figure out from the positions of the switches alone what they mean. They might encode what somebody’s wife wanted for breakfast, or maybe a sequence of letters, or any information that can be reduced to a certain number of states. For example, my wife could one day say: “Instead of telling you what I want for breakfast, I set the first switch myself”, and I would still be missing the information of what “On” means: does it mean eggs, or no eggs?
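To make the letter encoding concrete, here is a sketch using a made-up 5-bit code (A=0 … Z=25 — not real ASCII, which uses 7 bits), where each '0' or '1' stands for one switch in the “Off” or “On” position:

```python
def letter_to_switches(letter):
    """Encode one capital Latin letter as 5 on/off switches (made-up code: A=0 ... Z=25)."""
    code = ord(letter) - ord('A')
    return format(code, '05b')  # 5 binary digits, e.g. 'E' -> '00100'

def word_to_switches(word):
    """Encode a whole word as one run of consecutive switches."""
    return ''.join(letter_to_switches(c) for c in word)

print(word_to_switches('EGGS'))  # 20 switches: '00100001100011010010'
```

Reading it back, of course, requires already knowing the code — which is exactly the point the paragraph above makes.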
So those states can’t mean anything by themselves, even if we are smart enough to figure out that they are supposed to mean something. After all, maybe we are in a switch factory. Shannon himself said: “Frequently the messages have meaning… These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages.” If you have millions of switches, each in some random state, Off/On, then in Shannon’s terms you have a system with a lot of information. Whether it also has some meaning (e.g. whether somebody used it to communicate something) is a separate question.
To be continued…
Technorati Tags: information, shannon, meaning