[sci.psychology] The Bandwidth of the Brain

mmm@cup.portal.com (Mark Robert Thorson) (12/18/90)

There is a common myth that the brain is capable of enormous computational
bandwidth -- for example that the retina sends gigabauds worth of data to
the brain.  I believe the computational bandwidth of the brain is quite low,
low enough that we could simulate a brain on today's computers if only we knew
how to do it.

As the first piece of evidence, consider the results of applying information
theory to animal behavior.  It may come as a surprise to hear that zoologists
have developed statistical techniques for measuring the bandwidth of
communication between animals.  This is possible because animals (especially the
lower species, such as fish and insects) have very stereotyped behavior
patterns.  In experiments I performed using crickets, these behaviors consisted
of waving the antennae, fluttering the wings, chirping, and a few other things.
I would put a cricket in a box with a little matchbox house.  Crickets like
little enclosed spaces, so the cricket would go into the house and make it its
home.  After some time, I would introduce a second cricket and record the
interaction as the first cricket defended its nest against the interloper.

The interactions were very consistent.  First there would be an antenna wave
followed by an antenna wave from the intruder, then there would be a wing
flutter or a chirp, etc., finally resulting in the defender chasing off the
intruder.

I would compile data for dozens of such interactions.  To convert this data
to an estimate of the bits exchanged between the animals, I organized the
data in a matrix, with stimulus events in the columns and the response
events in the rows.  Then there was some statistical technique I've forgotten
for boiling down the matrix to a single number representing the number of
bits exchanged.
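The statistic was presumably Shannon's mutual information computed over that
stimulus/response matrix.  A rough sketch of the calculation, in Python, with
counts invented purely for illustration:

import math

# rows = stimulus events, columns = response events (hypothetical counts)
counts = [
    [12,  3,  1],   # antenna wave
    [ 2, 10,  4],   # wing flutter
    [ 1,  2,  9],   # chirp
]

total = sum(sum(row) for row in counts)
p_joint = [[c / total for c in row] for row in counts]
p_stim = [sum(row) for row in p_joint]             # row marginals
p_resp = [sum(col) for col in zip(*p_joint)]       # column marginals

bits = 0.0
for i, row in enumerate(p_joint):
    for j, p in enumerate(row):
        if p > 0:
            bits += p * math.log2(p / (p_stim[i] * p_resp[j]))

print(f"estimated information transfer: {bits:.2f} bits per interaction")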

What was surprising was just how few bits are exchanged when animals interact.
In my experiments, only about 2 or 3 bits were being transmitted per interaction.
The professor of the course had a table summarizing many experiments with
other species, showing a rise in information transfer as you go up the scale
to humans, who (by this measure) can assimilate hundreds of bits per second.
This seems to jibe with reading speed -- I can almost read text blasted at me
at 1200 baud, which seems about the highest-bandwidth input that I have.

Another piece of evidence comes from reaction time tests.  These are performed
using an instrument called a tachistoscope, which is a rear-projection screen
upon which images can be flashed.  By simply asking you to respond when you
see a number flashed on the screen, we can get a figure for the speed of the
path from the eyes through the brain to the muscles.  Then, by changing the
experimental paradigm -- for example, by flashing simple math problems -- we
get a longer reaction time.  The difference between the two times is the
amount of time the brain needs to do the additional work.  By dividing this
number by the speed of nerve cells, it's possible to make an estimate for
the number of stages of nerve cells which were involved in performing the
task.  (See _Human_Learning_and_Memory_ by Roberta Klatzky for a good
introduction to the topic.)
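A worked example makes the subtraction concrete.  All the numbers below are
invented for illustration, not measured values:

# The extra reaction time divided by an assumed per-stage neural delay
# gives a rough count of additional processing stages.
simple_rt = 0.250          # seconds: respond to any flashed number
arithmetic_rt = 0.330      # seconds: respond after a simple math problem
per_stage_delay = 0.010    # seconds: assumed delay per layer of nerve cells

extra_time = arithmetic_rt - simple_rt
stages = extra_time / per_stage_delay
print(f"extra processing time: {extra_time * 1000:.0f} ms, "
      f"about {stages:.0f} additional neural stages")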

What was surprising was how few layers are involved.  Even fairly complex
math or word-association tests seemed to correspond to 10 layers or less.

So it seems like the whole brain, engaged in a task which undeniably involves
thinking, might be modeled as a pipeline of 10 stages with no more than
1200 baud bandwidth each -- an astoundingly low amount of computational bandwidth.
Of course, this doesn't mean the same 10 stages are used for every problem,
merely that most sorts of thinking don't involve many layers of cells or
much bandwidth.

I think the reason people believe the brain has enormous computational bandwidth
is that people see bundles of nerve fibers, and assume they are like wires
in a computer or a communications network.  They falsely assume that each fiber
is an independent channel, and that the total channel capacity is the product
of multiplying the capacity of an individual fiber by the number of fibers.
This is clearly not true -- you can't have all the fibers in your spinal cord
jumping simultaneously.  Likewise, when I view a bit-mapped graphics display
with my retinas, I cannot simultaneously perceive all the dots on the screen
and I certainly can't remember or interpret them if they are changing 15 times
a second.  Presented with a single frame of random dots, I might be able to
memorize some small 10 x 10 grid subset of the image, given enough time
(like an hour).

I think it is obvious that the brain consists of many agencies which are
"on call", but very few agencies which are simultaneously active.  Our 
remarkable ability to remember millions of minor facts, or recall some
minor event which occurred many years ago (and which one hasn't thought
about for many years) is no more remarkable than the ability of a phone book
to record the names and addresses of millions of people or the ability of
the disk drives at TRW to store millions of credit histories.

The evidence suggests that the parts of the brain that are active
during any short span of time provide very low computational bandwidth;  their
power comes from the fact that many such parts are available to be used.
I don't use my math parts while making dinner, I don't use my cooking parts
while writing this Usenet posting.  And I haven't used my Morse code parts or
my German-language parts much at all in the last 20 years.

Existing computers have far more computational bandwidth than is needed to
simulate a human consciousness.  What is needed is a model which allows the
parts that are "on call" to reside on disk and the parts which are active
to be accessible in semiconductor memory.

ins_atge@jhunix.HCF.JHU.EDU (Thomas G Edwards) (12/18/90)

In article <37034@cup.portal.com> mmm@cup.portal.com (Mark Robert Thorson) writes:
>There is a common myth that the brain is capable of enormous computational
>bandwidth -- for example that the retina sends gigabauds worth of data to
>the brain.  I believe the computational bandwidth of the brain is quite low,
>low enough that we could simulate a brain on today's computers if only we knew
>how to do it.

As far as visual processing goes, we know that a huge amount of the
information which arrives in the retina is filtered out before we
do serious cognitive processing.  But there still is a huge bandwidth of
visual information which is available to pre-attentive areas in the brain,
though most of the pre-attentive visual processing is highly parallel.

Also I think we are finding that what looked like near random nerve 
pulses are actually frequency and phase modulated (i.e. more bits per
baud, just like a 9600 BPS modem). 
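To make the modem comparison concrete (these figures describe a 9600 bit/s
modem of the V.32 class, not nerve fibers):

symbol_rate = 2400        # symbols per second (baud) on the phone line
bits_per_symbol = 4       # needs at least 2**4 = 16 distinguishable states
print(symbol_rate * bits_per_symbol)   # 9600 bits per second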

I will most strongly agree that post-attentional areas in many types
of perception probably deal with a lot less information than one would
naively think, but the real challenge to artificial neural systems
is performing the massive attentional task at hand.

-Tom

curve@blake.u.washington.edu (12/18/90)

An article which appeared in a recent issue of Scientific American defines
a human being as:

"An anaolg processing and storage device with a bandwidth of about 50 bits
 per second. Human beings excel at pattern recognition, but are notoriously
 slow at sequential calculations."

erich@near.cs.caltech.edu (Erich Schneider) (12/18/90)

I heartily suggest you go read Winograd and Flores' _Understanding
Computers and Cognition_, published by Ablex Publishing Corp. (ISBN
0-89391-050-3). It might open you to some new views on some (I think)
common misconceptions:

1) That the purpose of language is to explicitly transmit information.
2) That "thinking" is about the brain modeling a "real world".
3) That decisions are made by formulating alternatives, assigning
   benefits to the results of actions, and selecting the best alternative 
   as a result.

Sure, computers can do algorithmic processes really fast. Sure,
they've got a really high bandwidth. _Human brains don't work like
computers._ And that's the problem.

We've given the AI paradigm 50 years and a lot of grant money. What
have they given us back? Let's go on to something new and useful.
--
erich@tybalt.caltech.edu  or try erich@through.caltech.edu

"The Hierophant is Disguised and Confused."

schiebel@cs.wvu.wvnet.edu (Darrell Schiebel) (12/19/90)

In article <37034@cup.portal.com>, mmm@cup.portal.com (Mark Robert Thorson) writes:

	<...interesting behaviorial studies deleted...>

> I think the reason people believe the brain has enormous computational bandwidth
> is that people see bundles of nerve fibers, and assume they are like wires
> in a computer or a communications network.  They falsely assume that each fiber
> is an independent channel, and that the total channel capacity is the product
> of multiplying the capacity of an individual fiber by the number of fibers.
> This is clearly not true -- you can't have all the fibers in your spinal cord
> jumping simultaneously.  Likewise, when I view a bit-mapped graphics display
> with my retinas, I cannot simultaneously perceive all the dots on the screen
> and I certainly can't remember or interpret them if they are changing 15 times
> a second.  Presented with a single frame of random dots, I might be able to
> memorize some small 10 x 10 grid subset of the image if given enough time
> to memorize them (like an hour).

Although you might not be able to memorize (or "perceive") all of the pixels on
a screen, I would bet most people could RECOGNIZE a huge variety of images
displayed on the screen, including images of President Bush's face, each with a
different expression, in different lighting, with combinations of hats,
sunglasses, etc.  My point is that this is the sort of task that humans excel
at, and it involves processing huge amounts of information quickly, and
most likely involves the interactions of many neurons which are not 
"processing" the "same" information, i.e. interactions in the lateral 
geniculate nucleus (LGN), striate cortex, prestriate cortex, and the 
inferior temporal lobe.

As for the communication studies, perhaps the lack of bandwidth is not due
to inefficiencies in the processing mechanisms, but rather inefficiencies in
the communication medium.


						Darrell Schiebel
						Computer Science
						West Virginia University
						(schiebel@a.cs.wvu.wvnet.edu)

fremack@violet.uwaterloo.ca (Randy Mack) (12/19/90)

I think I would have to disagree with your conclusions. It would appear that
you are measuring the bandwidth of the input device (the eyes), not what
the brain is theoretically capable of. Cyberspace generally presumes the
existence of some form of direct neural connection, and I don't see why this
has to be as limited as our biological forms of input.

Beyond that, what about individuals with "photographic memories"? That would
appear to indicate a much higher bandwidth. They could memorize your 10x10
grid in much less than an hour. I realize that these are abnormal cases, but
they show that the brain has more potential than you appear to be taking
into consideration.
--
Randy Mack aka Archbishop Heron        |            fremack@violet.uwaterloo.ca
---------------------------------------+---------------------------------------
"Perspective -- use it or lose it. If you have turned to this page you're
 forgetting that what is going on around you is not reality ..." - Richard Bach

cpshelley@violet.uwaterloo.ca (cameron shelley) (12/19/90)

In article <37034@cup.portal.com> mmm@cup.portal.com (Mark Robert Thorson) writes:
[...]
>
>I think it is obvious that the brain consists of many agencies which are
>"on call", but very few agencies which are simultaneously active.  Our 
>remarkable ability to remember millions of minor facts, or recall some
>minor event which occurred many years ago (and which one hasn't thought
>about for many years) is no more remarkable than the ability of a phone book
>to record the names and addresses of millions of people or the ability of
>the disk drives at TRW to store millions of credit histories.
>

A very interesting post!  How integral do you see these "agencies" as being,
ie. how distinct from one another are they?  What evidence exists for the
answer?

Questions, questions... :>
--
      Cameron Shelley        | "Logic, n.  The art of thinking and reasoning
cpshelley@violet.waterloo.edu|  in strict accordance with the limitations and
    Davis Centre Rm 2136     |  incapacities of the human misunderstanding..."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce

awessels@ccwf.cc.utexas.edu (Allen Wessels) (12/19/90)

In article <ERICH.90Dec18072845@near.cs.caltech.edu> erich@near.cs.caltech.edu (Erich Schneider) writes:

>We've given the AI paradigm 50 years and a lot of grant money. What
>have they given us back? Let's go on to something new and useful.

Yeah, geez, what have these people been thinking?  Why, anyone could have 
predicted from the computer simulations being run in the 40's that AI was 
going nowhere.

jwtlai@watcgl.waterloo.edu (Jim W Lai) (12/19/90)

In article <37034@cup.portal.com> mmm@cup.portal.com (Mark Robert Thorson)
writes:
>There is a common myth that the brain is capable of enormous computational
>bandwidth -- for example that the retina sends gigabauds worth of data to
>the brain.  I believe the computational bandwidth of the brain is quite low,
>low enough that we could simulate a brain on today's computers if only we knew
>how to do it.

[text of experiments deleted]

>So it seems like the whole brain, engaged in a task which undeniably involves
>thinking, might be modeled as a pipeline of 10 stages with no more than
>1200 baud bandwidth each -- an astoundingly low amount of computational
>bandwidth.  Of course, this doesn't mean the same 10 stages are used for every
>problem, merely that most sorts of thinking don't involve many layers of cells
>or much bandwidth.

Not so.  If it is a pipeline of "agents", you only know the maximum speed of
any individual filter along the pipeline.  (A chain is only as strong as its
weakest link.)  Each agent can encode data minimally for compactness in
theory, but that doesn't mean that the brain is therefore optimal and actually
only uses 50 bits per second bandwidth.  There's nothing wrong with a model
that assumes gigabits per second are sent down the pipeline.  In the agent
paradigm, each agent can feed output to many agents.  A tree ten levels deep
can still be quite wide, and be particularly nasty when cycles are added.
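To put rough numbers on the fan-out point (the fan-out values are arbitrary,
purely for illustration):

# A tree of depth 10 with fan-out f has on the order of f**10 leaf agents.
for fanout in (2, 10, 100):
    print(fanout, "->", fanout ** 10, "agents at the leaves of a 10-level tree")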

My gripe with experiments that claim to measure the bandwidth of the brain
is the validity of their measurement criteria, which may have already
assumed a fair amount of preprocessing to have taken place.

mmm@cup.portal.com (Mark Robert Thorson) (12/21/90)

> Beyond that, what about individuals with "photographic memories"? That would
> appear to indicate a much higher bandwidth. They could memorize your 10x10
> grid in much less than an hour. I realize that these are abnormal cases, but
> they show that the brain has more potential than you appear to be taking
> into consideration.

Excerpted from _Cybernetics_, Transactions of the Eighth Conference,
March 15-16, 1951, New York.  My comments at the end.

-----------------------------------------------------------------------

WARREN McCULLOCH:  May I abuse for a moment the privilege of the
Chairman and ask two or three questions?  I spent a considerable
amount of time and effort during my senior year in Yale, back in
1921, attempting to evaluate how much can be recalled under
hypnosis.  We were by no means convinced that an infinite amount
could be recalled, but only an amount which was vastly in excess
of what is available in the waking state could be recalled--I would
say perhaps a thousand times as much;  it still has an upper limit.

WALTER PITTS:  Did you try simple material such as a string of
nonsense syllables?

McCULLOCH:  We tried this sort of trick:  We took master bricklayers
who laid face brick and had them recall the seventh brick in the row,
or something else like that, in a given year.  They were able to recall
any one such brick--thirty or forty items at the most.  That was a
brick that had been through their hands some ten years before.  It
is still not an infinite amount.  A master bricklayer can lay only a
certain number of bricks per diem even when his entire attention
is riveted on laying bricks.  The amount is not infinite.

LEONARD SAVAGE:  What would you call an infinite amount?

McCULLOCH:  A thousand bits per minute at the most, or something
like that, that you can get back.  It is not infinite.  I think that is
critical--

SAVAGE:  Well, who could believe it could be literally infinite?  And
if it is not literally infinite, who could believe it would be as much as
you say?

McCULLOCH:  There are two things here which are equally important:
first, that it is vastly more than one can remember in the waking
state; and, second, that it is not infinite.

PITTS:  You could take nonsense syllables or present a board filled
with letters, and determine how many could be remembered.

McCULLOCH:  It is extremely difficult to get a man to attend to this,
but a master bricklayer looks at the face of the brick when he is
laying it.

MARGARET MEAD:  Did you find differences between two master
bricklayers in which thousand items they would notice?

McCULLOCH:  No, not significantly.  He mostly noticed three things--

ARTURO ROSENBLUTH:  How do you go back to check?

McCULLOCH:  These things are verified by checking the bricks.  They
are master bricklayers.  That means they are laying face bricks.  That
means that even ten years later, you can go back to that row and
look at the brick.  The only things you can't check are things on the
opposite side of the brick or an angle off the side of the wall.

GERHARDT von BONIN:  How many things can the normal person
remember?

McCULLOCH:  The estimate is that the maximum is 10 frames per
second.

von BONIN:  No, no, I don't mean that;  the normal person.

McCULLOCH:  On the normal person, the best man known on
receiving communication in the United States Navy could give you
a hundred letters in sequence at the end of having received a
hundred in 10 seconds.  He had to wait until he had passed through
a period during which he could not recall, and then he would give
you all the hundred letters.

JULIAN BIGELOW:  What kind of communication was it?

McCULLOCH:  Semaphore.

PITTS:  If you were to hypnotize him--

McCULLOCH:  That is not more than five bits per letter.

BIGELOW:  But it is essentially controlled by the semaphore process.

McCULLOCH:  Well, this can be sent as fast as you will.  It is sent
by machine.

DONALD McKAY:  What was the redundancy in the information about
the bricks?  Or, to put it otherwise, how frequently did the features
recalled crop up normally in bricks?

McCULLOCH:  In bricks, it is rather rare.  The kind of things men
remember are that in the lower lefthand corner, about an inch up
and two inches over, is a purple stone, which doesn't occur in any
other brick that they laid in that whole wall, or things of that sort.
The pebble may be about a millimeter in diameter.

BIGELOW:  How could he possibly remember thirty of those features
on this one brick?

McCULLOCH:  Oh, they do.  It is amazing, when you get a man to
recall a given brick, the amount of detail he can remember about
that one brick.  That is the thing that is amazing.  I note, as a result,
that there is an enormous amount that goes into us that never comes
through.

LAWRENCE KUBIE:  This is comparable to the experiments under
hypnosis, in which the subject is induced to return to earlier age
periods in his life.

McCULLOCH:  I have never done any of those.

KUBIE:  In these you say to the subject, "Tell me about your seventh
birthday,"  or his ninth birthday, or something of that kind;  and he
gives you verifiable data about the party, who was there, and so on.

McCULLOCH:  I have seen my mother's uncle, who was an incredible
person in the way he could recall things--he was law librarian in
Washington for a few years--testify that he had looked at such and
such a document some twenty years before, and shut his eyes and
read the document and the signatures under it.

SAVAGE:  I don't have absolute recall, but I can remember hearing
that story more than once in sessions of this group.

McCULLOCH:  They said, "But the document does not read that way;
it read so and so from there on," and he replied, "Then the document
has been altered and you had better check the ink."  And when they
checked it, it was a forgery.

SAVAGE:  It's the same story, all right.

-----------------------------------------------------------------------

I think McCulloch has described two phenomena here.  In the first
place, we have the bricklayers who remember bricks because they
study them thoroughly in order to make decisions about which side
faces front, and which bricks go where.  For example, if the bricks
you're using to build a wall come from two different batches -- one
with purple stones and one without -- it would be your responsi-
bility as a master bricklayer to distribute them in an aesthetic
pattern.  You would not simply place them in the order they came,
with part of the wall having purple stones and part not.

So it is no surprise that this information -- having filtered through
the highly refined judgement of the master bricklayer -- would have
a persistence in an old layer of memory.  I would not be surprised
if a master programmer could be made to recall each line of code
he has written, under hypnosis.

This is not at all the same thing as having a photographic memory
of the face of each brick.  The bricklayer is remembering details
he used to make his decisions, so he is remembering the bricks at the
resolution at which for example a master chessplayer would
remember a chess position, rather than the detail of an actual mental
photograph (if such a thing could exist).

The other phenomenon described by McCulloch is the more
well-known example of a person who claims to have photographic
memory.  I believe this is nothing more than a person who has
developed a mental exercise which happens to be useful for
providing memory cues.  He might even believe that he is viewing
the actual image of the document in his mind, but it is really nothing
more than an internally-generated synthetic image of what he
believes the document looks like--in the same manner that people
who develop an ability to manipulate their brain's sense of location
often claim to "astral travel" (i.e. the "soul" leaves the body and goes
floating around the room, house, outdoors, etc.).

Yonderboy@cup.portal.com (Christopher Lee Russell) (12/22/90)

The original message that started this thread seemed to be making a somewhat
incorrect comparison..  It seems to me that the original post was measuring
the bandwidth of the eye/brain interface, not the brain itself..  For
example one of the messages stated that he thought since the human brain can
only take in foo-bits of information per second that it must be a slow 
computer and therefore could be reproduced on a modern computer.  But I think
this falls apart because it is like saying that a vic-20 with a 19.2K baud
modem is a faster computer than a Cray with a 300 baud modem... Sure the 
vic gets the info faster, but the Cray has the storage/memory and the CPU to
really crunch it...   .IMHO.... ....Yonderboy@cup.portal.com

P.S.  This is not even considering the fact that the brain is receiving and 
processing foo-billion inputs from foo-billion nerves and five (or more) 
senses (obviously much of it subconsciously)....

sena@infinet.UUCP (Fred Sena) (12/22/90)

In article <37034@cup.portal.com> mmm@cup.portal.com (Mark Robert Thorson) writes:
>There is a common myth that the brain is capable of enormous computational
>bandwidth -- for example that the retina sends gigabauds worth of data to
>the brain.  I believe the computational bandwidth of the brain is quite low,
>low enough that we could simulate a brain on today's computers if only we knew
>how to do it.
>

(If anyone knows of books that relate to my response below, I would really
appreciate hearing about them.	thanks.)


I agree that transmission between "creatures" (humans, animals) occurs at a
very low rate on the physical layer, but I think that there is a lot more
that occurs on the higher level or more abstract functions which are related
to the "context" of information transmission.  The more abstract functions,
or "architecture" of thought would be the hard (or impossible) part to
reproduce on an existing computer.

I think that measuring the "bandwidth" of transmission can be a misleading
indicator of the amount of complexity of the transmission sequence, or the
complexity of the entities performing the communication.  For instance, I can
say one word to you, but the amount of information could be great,
depending on the word's secondary meanings, the context in which it is
used, and its broader implications.

Like if I say "nuclear", a whole array of related images are evoked in the
person that I say it to, such as:
	bomb
	war
	reactor
	atom
	family (one of these is not like the others...)
	Enola Gay
	Hiroshima
	Department of Defense
	Department of Energy
	radiation...

Using that kind of thinking, I'm trying to imagine that there must be a way
to transfer information at a speed greater than the bandwidth of the physical
layer.

There is some kind of "pre-understanding" that goes on in a conversation
between two creatures.  I guess you could compare it to an extremely
efficient compression algorithm that is implemented in both of the creatures.
Both are aware of the structure and assemble and disassemble it in the
communication process.  The difference between the way that computers and
humans perform compression is that computers do it sequentially, whereas
people do it "instantaneously".  Anyhow, the bandwidth can be low, but the
amount of "information transmission" or "understanding", I think, can be much
higher.
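As a toy illustration of that "pre-understanding": if both sides share a large
codebook, one short word can evoke far more than the bits actually sent.  The
associations below are just the ones listed above; this is not a real model:

# ~5 bits per letter is the figure quoted in the McCulloch excerpt
# earlier in the thread.
shared_codebook = {
    "nuclear": ["bomb", "war", "reactor", "atom", "family",
                "Enola Gay", "Hiroshima", "radiation"],
    # ...many more entries known to both speaker and listener...
}

def transmit(word):
    return 5 * len(word)                    # rough bits on the physical layer

def receive(word):
    return shared_codebook.get(word, [])    # evoked at no transmission cost

word = "nuclear"
print("bits sent:", transmit(word))
print("evoked associations:", receive(word))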

The interaction between two creatures might be impossible to measure in terms
of complexity because the interaction is a synchronous one as well.  Both
machines are "doing their own thing" relative to the context of the situation
and occasionally sending sync pulses to coordinate the behavior.

I think that the biggest mistake that behaviorists make is that they think
that a single creature can be isolated, examined, and then "understood"
independent of its environment.  By understood, I mean knowing enough about
it to develop some kind of model (actual or mathematical) which behaves
exactly as the original creature would.  The best example I have of an
analogous problem is the Heisenberg uncertainty principle, which says (I
think) that you cannot know both the position and momentum of an electron at
the same time.  The reason is that it is impossible to "measure" either of
these properties without affecting the other, or that any measurement made
will adversely affect the results.  Even if the effects are tiny, I think that
small errors can accumulate very quickly.  Also, "measurement" is a very
subjective process.  You only see what you are looking for, in terms of your
previous experience, and an accumulation of the experience of others who
preceded you.  How much do we *really* know?

Now I'm going to go way out with some theories I have.

I think that what has happened in the field of science is that all of the
models we are using to understand the world around us have only told us about
how "we" operate and think.  Each scientific discovery about the universe is
expressed in terms of our own view.

I'm reading a book called "Other Worlds (space, superspace, and the quantum
universe)" by Paul Davies which talks a lot about the Heisenburg uncertaincy
principle and how quantum mechanics works.  I'm going one step further in
suggesting that this advanced information about quantum mechanics (and other
scientific theories) is really modelling how our minds operate, and it may
coincidentally tell us about how the universe operates.  There are amazing (I
think) parallels between particle physics and the way that "selves" operate.

In quantum mechanics, transmissions are done with photons.  And the photons
operate as both a particle and a wave.  Well, the transmission of words seems
to operate in a similar fashion.  Each word has properties of particles in
that they appear to be a contained unit of information (energy?)("Knowlege is
power" :-).  The words are also part of a "wave" motion in that the words
interfere with one another via the context of the presentation of words.

I don't think that we have a chance of modelling our behavior until we can
fully understand the nature of the interference patterns that occur in our
thoughts.  Also, I think that the patterns occur in 3D on the simplest level,
so we will probably need some kind of 3D (at least 3) memory storage/
processing device to even begin approximation.  Maybe like the crystal optical
storage devices in Arthur Clarke's "Rendezvous With Rama".  Or perhaps a
wetware bio-memory just like our own.

	--fred
-- 
--------------------------------------------------
Frederick J. Sena                sena@infinet.UUCP
Memotec Datacom, Inc.  N. Andover, MA

zane@ddsw1.MCS.COM (Sameer Parekh) (12/22/90)

In article <37034@cup.portal.com> mmm@cup.portal.com (Mark Robert Thorson) writes:
>There is a common myth that the brain is capable of enormous computational
>bandwidth -- for example that the retina sends gigabauds worth of data to
>the brain.  I believe the computational bandwidth of the brain is quite low,
>low enough that we could simulate a brain on today's computers if only we knew
>how to do it.

	I don't think we will be able to learn how to do it.
It is a theory of mine (maybe someone I unknowingly plagiarized from?
If so, sorry) that something can only understand something that is less
complex than itself.  Therefore, we cannot fully understand how we
work.

-- 
zane@ddsw1.MCS.COM

zarnuk@caen.engin.umich.edu (Paul Steven Mccarthy) (12/22/90)

mmm@cup.portal.com (Mark Robert Thorson) writes:

[ Discussion of master bricklayers' stunning ability to recall minute
 details about bricks laid 10 years earlier...  Assertion that a master 
 bricklayer uses extensive judgement and analysis when laying bricks ... ]

>This is not at all the same thing as having a photographic memory
>of the face of each brick.  The bricklayer is remembering details
>he used to make his decisions, so he is remembering the bricks at the
>resolution at which for example a master chessplayer would
>remember a chess position, rather than the detail of an actual mental
>photograph (if such a thing could exist).

Interesting to note here that master chess players "instantly"
memorize _valid_ chess positions, but have trouble memorizing 
_invalid_ chess positions, implying that they have a better 
internal representation for valid chess positions.  


>The other phenomenon described by McCulloch is the more
>well-known example of a person who claims to have photographic
>memory.  I believe this is nothing more than a person who has
>developed a mental exercise which happens to be useful for
>providing memory cues.  He might even believe that he is viewing
>the actual image of the document in his mind, but it is really nothing
>more than an internally-generated synthetic image of what he
>believes the document looks like

Weeellll....

I am a very visual person.  I often think by manipulating mental images.
I was never partial to algebra -- but geometry has always held a special
place in my heart.  When they started teaching me how to interpret algebra
geometrically, I did much better at it.  I enjoy drawing and sketching, 
and do them well.  I have at least partial photographic recall.  


Example 1: Highschool Civics (U.S. government) class

My teacher gave us tests which consisted of topic sentences from the
chapter with a key word (subject-noun, important adjective/adverb) 
missing.  We were supposed to fill in the missing word.  Each test
would have about forty of these questions.  I would look at the partial
sentence and recall the page on which it appeared.  I could tell you
which column (the book was printed in double-columns), relative position
vertically on the page, and whether it appeared on a left-hand
(even-numbered) or right-hand (odd-numbered) page.  The recalled image was not 
like a rectangular photograph of the entire page set on a black background.
It was more like a 4-inch circle centered on the sentence, with the
peripheral image all there, but not accessible in detail.  I could not
recall the page number that the sentence was on unless it appeared near
the corner where the page-numbers were printed.  Needless to say, I got
an A+ in that class.  (Now if we could just convince _all_ educators 
to use this kind of test ... :)

Now you may say that I "reconstructed" these images somehow, but from
my vantage point, I was recalling a mental photograph.  Even if he had
actually composed original sentences, I would have done well, because
I learned the material, but that would have been accessed from an internal 
representation of the knowledge -- not from a photograph.  


Example 2: Psychology Experiment in College

No, I was not singled out for a special experiment.  Various researchers
offered nominal fees for participation as subjects in their experiments.
I participated in half a dozen.  Most were really sociological experiments,
testing the subjects' preconceived notions about things.  One was a memory
experiment.  

    We were given some whale-tale about "recognizing artists' styles"; 
    told that we would be paid according to how well we scored; put into
    soundproof booths with a television-screen and two buttons (yes/no).

    It came in three sessions.  Each session consisted of two parts: 
    examples, then selections.  First, 10 images were displayed on the
    screen for about 10 second each.  These were "examples" of the "artist's"
    works.  Then 50 images would be flashed on the screen, and we had
    about 10 seconds to decide whether this image was done by the same
    "artist".  There was an audible beep indicating right/wrong decisions.

    The first session consisted of red, yellow and blue rectangles of
    different sizes stacked on top of each other.  (Modern art?)  I tried
    to sense a "style" from the 10 examples.  There wasn't much to go on.
    They varied in color order (red,blue,yellow; yellow,red,blue; etc.)
    They varied in sizes (tall yellow, squat red, square blue; square 
    yellow, tiny blue, large red; etc.)  None were aesthetically pleasing.

    Then came the images.  It took me a little while to get the pace.
    (Ten seconds is not long to make a "style" judgement.)  I lost a
    few early ones by default.  Some of the images displayed were from
    the examples -- those were easy.  The others were just more of these
    random collections of red, yellow and blue rectangles of different
    sizes and orders -- some _very_ close to the examples.  I tried
    answering "yes" to the duplicates and the close ones, and "no" to
    the ones that were not close.  I got the duplicates right.  I got
    the "not close" ones right.  But all the "close" ones were wrong.
    -- I know, I'm slow. -- Ah Hah!  Style, schmyle!  They just want to
    know if I can remember a set of randomly colored/shaped blocks!  
    I got nearly all of the rest right.  (Under pressure, with 10 
    seconds to decide, sometimes you tell your left index finger to 
    press the "yes" button, but your right index finger presses the 
    "no" button instead.)

    Now I knew the game -- but they changed pitchers.  Whereas the
    first session consisted of three large rectangles generally using
    the entire screen, the second session consisted of about 50 small
    squares (two ibm-pc chr(219) full character-cell blocks) of red,
    blue and yellow scattered randomly on the screen (except no two
    squares touched sides, only corners if at all).  Ten seconds is
    not long to "photograph" that much detail.  Nevertheless, I had
    about a 90% hit-rate.  In the third session, they threw in their
    fastball pitcher: the images were 100 single character-cell blocks
    randomly distributed on the screen.  With the increased level of
    detail, my hit-rate fell to about 80%.  For these high-detail 
    sessions, I found that I scored better if I didn't try to think
    about them.  I put myself in a semi-trance state and hit "yes"
    or "no" without really thinking about it.  

    Afterwards, (getting my money), the researcher was stunned by my
    79% overall score (I lost most of the first session before I 
    learned the game.)  

Needless to say, I was never invited to participate in any more 
experiments.


You may say that I developed a special skill to remember these random
collections, but if so, I certainly did it quickly.  I don't recall 
the researcher's name, but if you want verification, I'm willing to
cooperate.

I don't claim to have perfect photographic recall.  But visual images
are certainly an important element of whatever my brain uses for recall.
I am also sensitive to recall through aromas and even more deeply by
tastes.  

The sense of taste changes over time.  If you enjoy a particular food
or cigarette or anything, and you consume it constantly for a long
period of time, it will not taste exactly the same to you now as it
did when you first tasted it.  There is a taste in your mouth at all 
times, but most of the time you ignore it.  In fact, most of the time
you cannot even perceive it, because it becomes part of the "background
noise" that gets filtered out before it reaches the cognitive centers 
of the brain.  This taste-in-the-mouth is primarily the result of your
dietary habits for the period.  If you're like me, then you go through
phases when you're fond of Mexican food, then Italian, then Chinese,
etc..  At any one time, there is a relatively small number of dishes
that you eat very regularly.  The net result of all this is that there
are "archetype" background tastes that are associated with different
periods of your life.

There are times when I have exceedingly intense instances of recall
which actually carry this background taste along with them.  There
are times when a particular taste will initiate such an intense recall.
These instances are quite rare for me, and they always bring along
visual images, but they are generally not rooted to a specific instance,
but rather to an entire period of my life.  I will recall "global"
information from the period.  My feelings about myself and others
at the time.  I will _be_ the person that I was for a few moments.  


The moral of the story?  Physical sensations play an integral role
in recall.  This includes visual images.  Some people may be better
attuned to their visual circuitry for recall than others.  Personally,
I recall most things by recalling actual visual "pictures".  Sometimes
the pictures contain enough detail to extract the necessary information.
More often they just help me access the information that I need.  Is
this photographic recall?  I don't measure up to the anecdote that you
alluded to, but I do use pictures.  

---Paul...

foetus@ucscb.UCSC.EDU (71030000) (12/22/90)

The brain is not a computer.  It is, however, the most complex and
little understood thing that so many (but not all :-)) people have in 
common.  In this respect, it is like a computer.
But:
Back in the days when a telephone switchboard (or even Thaddeus Cahill's
fabled Telharmonium) was the most complicated, technologically
advanced machine around, people compared that to a brain.
Fact is, we've all just been fooling ourselves.
Anybody get what I'm saying?

============================------------------------->FOETUS!!

foetus @ucscb.UCSC.EDU                        * If you save everything till
Long Live the Slug!                           * the last minute, it only
Drop acid, not bombs!                         * takes a minute!

ddebry@toddler.es.com (David DeBry) (12/23/90)

In article <37034@cup.portal.com> mmm@cup.portal.com (Mark Robert Thorson) writes:
>There is a common myth that the brain is capable of enormous computational
>bandwidth -- for example that the retina sends gigabauds worth of data to
>the brain.  I believe the computational bandwidth of the brain is quite low,
>low enough that we could simulate a brain on today's computers if only we knew
>how to do it.

	Can we really figure it out?  Let's draw a small parallel here
for a minute...

	Given enough time, could a pocket calculator (any model)
figure out ON ITS OWN how it works enough to replicate itself?  No,
not by itself.

	However, we wouldn't work this way if we were doing it.  We'd
have a lot of people on the project.

	So then, given enough time, could a finitely large number of
pocket calculators (any models) all networked together somehow ever
figure out how each of them work to the point that they could
fabricate one of themselves?  I think not.

	And the important thing to remember here is that you can
replace the words 'pocket calculator' with anything from 'abacus' to
'Cray II' and the answer is always the same.

	I'd like to suggest the answer even holds when you put 'human'
into the question.  We're too involved in the problem to figure it
out.  Just by asking the question, we've changed the situation and the
question is invalid.  Ask a new question, and the same thing happens.

-- 
"Food to rent, food to borrow. Deposit required."     - The Bobs
"Bear left."  "Right, Frog!"                          - The Muppet Movie
"Maybe it's a bit confusing for a game \
 But Rubik's Cubes were much the same."		      - Chess
// David DeBry - ddebry@dsd.es.com - (Multiple genres for a twisted person) //

jwtlai@watcgl.waterloo.edu (Jim W Lai) (12/23/90)

In article <1990Dec22.213121.12226@dsd.es.com> ddebry%bambam@es.com writes:
>	So then, given enough time, could finitely large number of
>pocket calculators (any models) all networked together somehow ever
>figure out how each of them work to the point that they could
>fabricate one of themselves?  I think not.
>
>	And the important thing to remember here is that you can
>replace the words 'pocket calculator' with anything from 'abacus' to
>'Cray II' and the answer is always the same.

Try neuron and neural nets.

>	I'd like to suggest the answer even holds when you put 'human'
>into the question.  We're too involved in the problem to figure it
>out.  Just by asking the question, we've changed the situation and the
>question is invalid.  Ask a new question, and the same thing happens.

Actually, this hypothesis makes an assumption about the level of abstraction
required to make a simulation of oneself.  What constitutes sufficient
understanding in this regard is debatable.  The requirements for
simulation are less stringent than fabrication.

zane@ddsw1.MCS.COM (Sameer Parekh) (12/25/90)

In article <1990Dec22.213121.12226@dsd.es.com> ddebry%bambam@es.com writes:
>In article <37034@cup.portal.com> mmm@cup.portal.com (Mark Robert Thorson) writes:
>>There is a common myth that the brain is capable of enormous computational
>>bandwidth -- for example that the retina sends gigabauds worth of data to
>>the brain.  I believe the computational bandwidth of the brain is quite low,
>>low enough that we could simulate a brain on today's computers if only we knew
>>how to do it.
[Good Explanation deleted]

	I posted the same thing but you were MUCH more effective.
(I said an entity can only understand something that is lesser than itself.)



-- 
zane@ddsw1.MCS.COM

mmm@cup.portal.com (Mark Robert Thorson) (12/25/90)

> The brain is not a computer.  It is, however, the most complex and
> little understood thing that so many (but not all :-)) people have in 
> common.  In this respect, it is like a computer.
> But:
> Back in the days when a telephone switchboard (or even Thaddius Cahills 
> fabled Tellharmonium) was the most complicated, technologically
> advanced task machine, people compared that to a brain.
> Fact is, we've all just been fooling ourselves.
> Anyboby get what I'm saying?

I think the brain contains structures which resemble the
artifacts Man makes.  These include photo albums, calendars, address
books, dictionaries, etc.  The reason we make these objects is that
there are organs in our brains which require off-line mass storage,
so these organs somehow convince the brain to make external objects
which hold data in a compatible form.

Note that I'm not referring to objects such as telephone exchanges
whose structure is largely dictated by the needs of the application.

I'm mostly talking about objects whose structure is adapted to meet
the needs of the way the brain works.  There is one object which fits
this description most of all:  computers.  If you look at the architecture
of the early vacuum-tube computers, you see how closely the structure
is modeled on the way a child does math.  In ENIAC, numbers were handled
as individual decimal digits, and multiplication was handled one digit
at a time by looking it up in a multiplication table.
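A sketch of digit-at-a-time multiplication with a times table, the
"schoolbook" style alluded to above (an illustration of the idea only, not a
model of ENIAC's actual circuitry):

TABLE = {(a, b): a * b for a in range(10) for b in range(10)}   # times table

def multiply(x, y):
    # multiply one decimal digit at a time, looking each product up
    result = 0
    for i, xd in enumerate(reversed([int(d) for d in str(x)])):
        for j, yd in enumerate(reversed([int(d) for d in str(y)])):
            result += TABLE[(xd, yd)] * 10 ** (i + j)   # shifted partial product
    return result

print(multiply(1234, 56))   # 69104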

Our later computers have been somewhat optimized, but their design is still
to a large extent driven by the likes and dislikes of the people who
program them.  It is only with the advent of advanced compiler technology
that it has become possible to introduce human-unfriendly computer
architectures like RISC.

I'm not saying the whole brain is a computer, but parts of it do function
much like a computer in some ways.  Among these ways are processes which
are very central to consciousness, such as making plans (programming)
and following instructions (program execution).

magi@polaris.utu.fi (Marko Gronroos) (12/26/90)

In several articles many people write about whether we can ever understand
and simulate our thinking with a computer. Here I will send another
BORING and STUPID article about the subject.

I don't think that we can simulate OUR thinking with digital,
synchronized computers. It doesn't matter if they are parallel; we can
simulate that completely with sequential computers, it's just slower.
The iterative processing just brings up too many problems. One
problem, for instance, is the signal feedback between layers; it
creates a 'resonance' that causes both layers to be completely unaware
of each other's activation level (I won't confirm this problem, and
ANNs discrete in time may solve this, but anyway..). This is just one
example; there are dozens of them.
 It might be possible to simulate *some* kind of thinking with normal
computers, though. But since even one million 'neurons' with
one billion 'synapses' would be quite a lot to simulate, the 'artificial
mind' wouldn't be too intelligent..
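For what it's worth, the "parallel can be simulated sequentially" point is
easy to make concrete: compute every new activation from the *old* state
before committing any of them.  A minimal sketch with an invented three-unit
network and made-up weights:

import math

def step(state, weights):
    new_state = []
    for i in range(len(state)):                       # sequential loop...
        net = sum(w * s for w, s in zip(weights[i], state))
        new_state.append(1.0 / (1.0 + math.exp(-net)))
    return new_state         # ...but identical to a synchronous parallel update

weights = [[ 0.0,  1.2, -0.7],
           [ 0.5,  0.0,  0.9],
           [-1.1,  0.3,  0.0]]
state = [0.1, 0.9, 0.5]
for _ in range(3):
    state = step(state, weights)
print(state)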

  Why do you say that we can't understand our thinking?
  It's quite true that a pocket calculator (pc) can't understand its
"thinking", but then, a pocket calculator doesn't THINK, it doesn't
LEARN, it doesn't draw INTELLIGENT CONCLUSIONS. I don't think that we
can make a "law of not understanding oneself" if we have only one
example of beings who really can't understand ANY of their functional
principles (computers).
  We understand some of our functional principles, so is there a law
that at some point stops our advance in studying ourselves? Where is
the limit? Is it high enough to allow us to create other beings that
think (with other, simpler principles)?
  The thinking computer doesn't have to simulate our brains. Recently
I've been studying crystalline light computers (sounds like
science fiction, doesn't it? :-) ) that deal with 'holographic'
thoughts generated by thought pattern signal interference. The
synaptic weights would be chemical changes in crystal structure that
extinguish the light (like in automatically adjusting sunglasses).
Although I'm not sure about the correctness of this interference
theory (aka. rubber duck theory), what I'm trying to say is that the
principle of a thinking in our future neurocomputers may be totally
different from ours.

  The last word: Is it useful for us to say that we can't create
thinking machines? That kind of law is an ANSWER, and only religions
give ANSWERS. If we believe that we can't have progress in our
research, then there really can't be any progress. That's the main
reason why I don't like most religions.. :-/

raf@mango.cs.su.OZ.AU (A Stainless Steel Rat) (12/26/90)

In article <37111@cup.portal.com> mmm@cup.portal.com (Mark Robert Thorson) writes:

[Excerpt from _Cybernetics_, Transactions of the Eighth Conference,
March 15-16, 1951, New York deleted]

-I think McCulloch has described two phenomena here.  In the first
-place, we have the bricklayers who remember bricks because they
-study them thoroughly in order to make decisions about which side
-faces front, and which bricks go where.  For example, if the bricks
-you're using to build a wall come from two different batches -- one
-with purple stones and one without -- it would be your responsi-
-bility as a master bricklayer to distribute them in an aesthetic
-pattern.  You would not simply place them in the order they came,
-with part of the wall having purple stones and part not.

But this is not the case. Each brick is not "studied thoroughly." At best,
there may be a cursory glance (due, admittedly, to the master bricklayer's
skill at such things) for this purpose. In general the bricks *are* used in
whatever order they are taken from the pile because both sides are virtually
identical (over the large samples used, what's the difference?). This doesn't
detract from your argument in the case of a programmer and I agree that
concentration on something makes it easier to remember (I'll discuss this
later).

I also remind you that remembering the purple stone had absolutely nothing to
do with aesthetic distribution of the bricks. It only occurred in the one
brick randomly(?) chosen by the hypnotist. The brick layers that I know
aren't at all interested in the aesthetic placement of each and every brick
(unless they're building their own house, of course :-); they are more
interested in just getting the job done (so they can go off and have a beer
:-).

-So it is no surprise that this information -- having filtered through
-the highly refined judgment of the master bricklayer -- would have
-a persistence in an old layer of memory.  I would not be surprised
-if a master programmer could be made to recall each line of code
-he has written, under hypnosis.

You've got it the wrong way around. The information entered the brain. In this
case, I don't think it filtered through his judgment, but for the sake of
argument, we'll say that it did. If so, the chances of it remaining in
conscious memory [conscious memory is the memories that you can remember
consciously, rather than the memories that are only accessible via
hypnosis/meditation - assuming that the two aren't the same set] would be
greater because *attention had been paid to it*. If not, it is very difficult
to remember, but this doesn't alter the fact that it had entered the brain
initially, where it was obviously remembered anyway, but in such a way as to
not be rememberable (that can't be a word, can it :-?), which is why the
hypnosis was required to elicit the information.

IMPORTANT BIT:

Think of it this way. The brain/subconscious takes in all of the sensory input
that is available to it. This is recorded (or at least a great deal of it
according to McCulloch's opinion). The mind/conscious processes whatever part
of that information that it deems to be important at a given time. The greater
the attention placed on some sensory input/internalised thought, the easier it
is to remember. This is why you may remember the cake you had for your 5th
birthday (maybe) but would have a lesser chance of remembering the background
smell of the flowers that were behind you at the time [you wouldn't have been
concentrating on it at the time]. You might, however, remember even this under
hypnosis.

END OF IMPORTANT BIT

-This is not at all the same thing as having a photographic memory
-of the face of each brick.  The bricklayer is remembering details
-he used to make his decisions

Assuming that the bricklayer did use such details in some decision making
process (which I strongly doubt), my point above still applies: the detail
was recorded whether or not any judgment was ever applied to it.

-The other phenomenon described by McCulloch is the more
-well-known example of a person who claims to have photographic
-memory.  I believe this is nothing more than a person who has
-developed a mental exercise which happens to be useful for
-providing memory cues.  He might even believe that he is viewing
-the actual image of the document in his mind, but it is really nothing
-more than an internally-generated synthetic image of what he
-believes the document looks like--in the same manner that people
-who develop an ability to manipulate their brain's sense of location
-often claim to "astral travel" (i.e. the "soul" leaves the body and goes
-floating around the room, house, outdoors, etc.).

What is the difference between a photographic memory that is a natural ability
of someone and photographic memory that someone has developed personally?
Unless you wish to define the mechanism by which natural photographic memories
operates, the definition of the term is a functional one. He may or may not
claim to have a photographic memory (it isn't stated either way). Whatever the
case is, he *does have* a photographic memory. Whether it's a natural talent
or he learnt how to do it (which is by no means impossible to do) is
irrelevant.

I won't go on about astral travelling except to say that you don't seem to know
much about it :-(

raf

--
Robert A Fabian                       | "Sex is not a crime."
raf@basser.cs.su.oz.au                | "In my view, pre-marital intercourse
Basser Department of Computer Science | comes into the category of breaking
University of Sydney                  | and entering."     - Tom Sharpe

mmm@cup.portal.com (Mark Robert Thorson) (12/27/90)

Jim Lai says:

> Not so.  If it is a pipeline of "agents", you only know the maximum speed of
> any individual filter along the pipeline.  (A chain is only as strong as its
> weakest link.)  Each agent can encode data minimally for compactness in
> theory, but that doesn't mean that the brain is therefore optimal and actually
> only uses 50 bits per second bandwidth.  There's nothing wrong with a model
> that assumes the sending gigabits per second down the pipeline.  In the agent
> paradigm, each agent can feed output to many agents.  A tree ten levels deep
> can still be quite wide, and be particularly nasty if when cycles are added.
>  
> My gripe with experiments that claim to measure the bandwidth of the brain
> is the validity of their measurement criteria, which may have already
> assumed a fair amount of preprocessing to have taken place.

This posting and a later one from Fred Sena hit on the weakest point of
the argument I presented in my original posting.  If we are to talk
intelligently about the bandwidth of the brain, we need to recognize the
distinction between bits and bauds.  Bits are information, bauds are some
sort of lower-level phenomenon which may (but does not necessarily) carry bits.

For example, let's say I developed a program to enhance the reliability of
serial communications by using triple redundancy.  This program takes an
input message such as "hi, mark" and converts it to "hhhiii,,,   mmmaaarrrkkk"
for sending over the communication channel.  On the receive end, another program
performs the reverse process, using the redundancy to correct for any errors
that occurred during the transmission of a single character.

Now what would you say the "bandwidth" of this transmission mechanism is?
It certainly requires triple the bandwidth over the communication channel,
but is triple the amount of information being sent?  I think not; the bauds
have been tripled but the bits have stayed the same.
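A minimal sketch of the scheme just described, with the majority vote spelled
out (the corrupted example is invented):

from collections import Counter

def encode(msg):
    return "".join(ch * 3 for ch in msg)          # triple every character

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(Counter(triple).most_common(1)[0][0])   # majority vote
    return "".join(out)

sent = encode("hi, mark")                 # "hhhiii,,,   mmmaaarrrkkk"
garbled = "hhXiii,,,   mmmaaarrrkkk"      # one corrupted character
print(len(sent) // len("hi, mark"))       # 3 -- three times the bauds
print(decode(garbled))                    # "hi, mark" -- the same bits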

Likewise in the brain we see enormous neural structures used to perform
low bandwidth functions like reading and listening.  Do these structures
have some incredibly high internal bandwidth not evidenced in either the
input or the output?  Again -- for the same reason -- I say no.  The amount
of _information_ has stayed constant, even if it temporarily fanned out
into some highly decoded (i.e. redundant) representation.

I will admit there is bandwidth which is not visible at either the
input or output ends.  For example, a single move by a player playing chess
is an input which results in a single responding move by the opponent.
Each move is an event with very low information content, but a great deal
of internal processing takes place during the ten minutes the master
chessplayer spends deciding on his next move.  But this bandwidth is not
so exceptionally high when compared to other intense human activities like
reading, writing, or speaking.  The chessplayer examines each possibility,
one at a time, at a very human rate.  It would not slow down the chessplayer
very much to tell you what he is thinking as he thinks it.  There is no
mega-bandwidth simultaneous perception of the entire chess position resulting
in a responding move in a single clock cycle.  Instead, he thinks, "I can
move here.  No.  Here.  No.  Here.  Hmm, that's interesting -- then he moves
there and I go here and he goes either there or there."

It's also true that there is some unconscious pre-processing which takes
place without thinking.  For example, the chessplayer excludes moves involving
a trapped piece (such as a rook boxed into a corner) from his consideration
unless a reasonable scenario involves removing the obstacles to moving that
piece.  Likewise, while reading you take in the words but ignore the specks
of wood pulp embedded in the paper, unless those specks make a letter 
difficult to recognize.

Should this pre-processing be counted in the bandwidth?  Once again, for the
same reason as before, I say no.  To say otherwise is like saying that when I
read text at 1200 baud I'm actually reading faster than 1200 baud because I'm
managing to ignore the dust on the face of my CRT.  It is like saying my 9600
baud modem is faster than 9600 baud because it manages to ignore random clicks
and pops on the phone line.

fantom@wam.umd.edu (Thomas Mark Swiss) (12/27/90)

In article <1990Dec23.023456.21126@watcgl.waterloo.edu> jwtlai@watcgl.waterloo.edu (Jim W Lai) writes:
>In article <1990Dec22.213121.12226@dsd.es.com> ddebry%bambam@es.com writes:
>>	So then, given enough time, could finitely large number of
>>pocket calculators (any models) all networked together somehow ever
>>figure out how each of them work to the point that they could
>>fabricate one of themselves?  I think not.
>>
>>	I'd like to suggest the answer even holds when you put 'human'
>>into the question.  We're too involved in the problem to figure it
>
>Actually, this hypothesis makes the assumption on the level of abstraction
>required to make a simulation of oneself.  What constitutes sufficient
>understanding in this regard is debatable.  The requirements for
>simulation are less stringent than fabrication.
     
     It occurs to me that one needs very little knowledge of how the brain
works to even do fabrication. I can make a copy of a circuit by examining the
components and the connections between them, without ANY understanding of the
circuit or its parts. "Let's see, I connect this thing with the red, black,
green and gold stripes to the middle lead from this black box..."

      A lot of what the brain does is made of simple steps connected in ways 
more complex than necessary...no one grades evolution on neatness.

================================================================================
Tom Swiss                  | "You put a baby in a crib with an apple and a  
fantom@wam.umd.edu         |rabbit. If it eats the rabbit and plays with the
"If at first you don't     |apple, I'll buy you a new car."-Harvey Diamond
succeed, change the rules."|       Keep your laws off of my brain!

jwtlai@watcgl.waterloo.edu (Jim W Lai) (12/28/90)

In article <37273@cup.portal.com> mmm@cup.portal.com (Mark Robert Thorson)
writes:
>For example, let's say I developed a program to enhance the reliability of
>serial communications by using triple redundancy.  [...]
>
>Now what would you say the "bandwidth" of this transmission mechanism is?
>It certainly requires triple the bandwidth over the communication channel,
>but is triple the amount of information being sent?  I think not: the bauds
>have been tripled, but the bits have stayed the same.

ASCII text is redundant.  What if we used LZW compression?  Just because
compression is possible does not mean that it necessarily takes place.
Only experimental evidence can suggest that.  The bandwidth measured is
also dependent on the encoding scheme used.
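
[Editor's sketch: to make the redundancy point concrete, ordinary English text
shrinks to a fraction of its raw size under a general-purpose compressor.  The
snippet uses zlib's DEFLATE as a stand-in for LZW (Python has no LZW in its
standard library); the sample text and the resulting ratio are illustrative
only.]

  # Compress a block of plain ASCII text to show how much of it is redundancy
  # that the channel carries anyway unless someone bothers to squeeze it out.
  import zlib

  text = ("the quick brown fox jumps over the lazy dog. " * 50).encode("ascii")
  packed = zlib.compress(text, 9)
  print(len(text), len(packed), round(len(packed) / len(text), 3))
  # Far fewer bytes after compression: many bauds on the wire, fewer bits of
  # information -- but only if the compression actually takes place.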

>Likewise in the brain we see enormous neural structures used to perform
>low bandwidth functions like reading and listening.  Do these structures
>have some incredibly high internal bandwidth not evidenced in either the
>input or the output?  Again -- for the same reason -- I say no.  The amount
>of _information_ has stayed constant, even if it temporarily fanned out
>into some highly decoded (i.e. redundant) representation.
>
>I will admit there is bandwidth which is not visible at either the
>input or output ends.  [...]
>
>It's also true that there is some unconscious pre-processing which takes
>place without thinking.  [...]
>
>Should this pre-processing be counted in the bandwidth?  Once again, for the
>same reason as before, I say no.  [...]

I don't think we have a real argument.  My objection was to someone who
claimed that because humans have a low external bandwidth, they could be
simulated effectively with current technology.

peter@taronga.hackercorp.com (Peter da Silva) (12/28/90)

I don't know about this "humans have low external bandwidth" business.
After all, real-time image processing and analysis is pretty damn hard.
Look at the behaviour of your typical autonomous robot...

In fact the real-time control problem for a human body requires lots
of simultaneous inputs, each of which comes in at a decent (by computer
standards) baud rate.
-- 
Peter da Silva (peter@taronga.hackercorp.com)
               (peter@taronga.uucp.ferranti.com)
   `-_-'
    'U`

jwtlai@watcgl.waterloo.edu (Jim W Lai) (12/29/90)

In article <1990Dec28.142958.17394@taronga.hackercorp.com> peter@taronga.hackercorp.com (Peter da Silva) writes:
>I don't know about this "humans have low external bandwidth" business.
>After all, real-time image processing and analysis is pretty damn hard.
>Look at the behaviour of your typical autonomous robot...
>
>In fact the real-time control problem for a human body requires lots
>of simultaneous inputs, each of which comes in at a decent (by computer
>standards) baud rate.

The answer is simple.  Most of these studies conceive of the human being as
a disembodied brain with a single consciousness.  Too much influence
from the information processing paradigm.  Take a recorded speech...a
transcription will give all the text but lose all the data that was in the
form of nuances and inflection.  You only get what you measure for.

cho@fladrif.entmoot.cs.psu.edu (Sehyeong Cho) (12/29/90)

In article <MAGI.90Dec25185546@polaris.utu.fi> magi@polaris.utu.fi (Marko Gronroos) writes:
>
>  Why do you say that we can't understand our thinking?
>  It's quite true that a pocket calculator (pc) can't understand its
>"thinking", but then, a pocket calculator doesn't THINK, it doesn't
>LEARN, It doesn't make INTELLIGENT CONCLUSIONS. I don't think that we
>can make a "law of not understanding oneself", if we have only one
>example of beings who really can't understand ANY of their functional
>principles (computers).

Hmmm. Dogs are quite intelligent. They think. They learn.
Now, will it be possible for dogs to understand their thinking mechanism?
Who knows? But my belief in this has a certainty of 0.1E-100000000. :-)
If you agree with me (that dogs probably will not) you must think that
human intelligence is something which is ULTIMATE.
I.e., you must believe that if there are any other beings more intelligent than
humans, the difference should be only marginal. Right?

I don't think so. There are things which humans will probably never
understand.
>
>  The last word: Is it useful for us to say that we can't create
>thinking machines? That kind of law is an ANSWER, and only religions

Yes, I think it is very useful. It turns a day-dreamer into a researcher. :-)
We don't have to create a thinking machine. We just want our machines to
boldly do what no other machines have done before.

>give ANSWERS. If we believe that we can't have progress in our
>research, then there really can't be any progress. That's the main
>reason why I don't like most religions.. :-/

Believing humans can do anything also makes a religion, only worse.
Please don't talk about religion unless you know what it is.

Happy New Year!
    Sehyeong Cho
    cho@cs.psu.edu

jwtlai@watcgl.waterloo.edu (Jim W Lai) (12/29/90)

In article <Fqe?t9*3@cs.psu.edu> cho@fladrif.entmoot.cs.psu.edu (Sehyeong Cho)
writes:
>In article <MAGI.90Dec25185546@polaris.utu.fi> magi@polaris.utu.fi
>(Marko Gronroos) writes:
>>  Why do you say that we can't understand our thinking?
>>  It's quite true that a pocket calculator (pc) can't understand its
>>"thinking", but then, a pocket calculator doesn't THINK, it doesn't
>>LEARN, It doesn't make INTELLIGENT CONCLUSIONS. I don't think that we
>>can make a "law of not understanding oneself", if we have only one
>>example of beings who really can't understand ANY of their functional
>>principles (computers).
>
>Hmmm. Dogs are quite intelligent. They think. They learn.
>Now, will it be possible for dogs to understand their thinking mechanism?
>Who knows? But my belief in this has a certainty of 0.1E-100000000. :-)
>If you agree with me (that dogs probably will not) you must think that
>human intelligence is something which is ULTIMATE.
>I.e., you must believe that if there are any other beings more intelligent than
>humans, the difference should be only marginal. Right?
>
>I don't think so. There are things which humans will probably never
>understand.

The question that remains unresolved is whether or not the human level of
intelligence is sufficient to allow self-understanding (self-modelling?).  For
this, we must turn to experimental evidence, of which there is none.  It is
not required that human intelligence be the ultimate achievable, nor that
humans be able to understand all forms of intelligence.

So, can anyone give a reason why they think this is impossible, other than
personal philosophical grounds?  Strictly scientifically, one should defer
judgement.

mmm@cup.portal.com (Mark Robert Thorson) (12/29/90)

Mark Hopkins says:

> In the early stages of development, this neural net can be trained
> (say by backpropagation) so as to minimize the difference between
> first stage input and second stage output, thus training the identity
> function on it.  If backpropagation is used in modelling this
> phenomenon, it will be one of the few instances where
> backpropagation can be used for unsupervised learning.
> 
> The significance of being able to emulate the identity function this
> way is the bottleneck that has to be passed through between the
> first and second stages.  Spontaneous feature discovery/extraction
> is forced on the neural net.
>  
> What you will find in the brain, that corresponds to this, is
> RECURRENCE along the visual pathway.  That is, connections that go
> toward the visual cortex from the eye, AND connections that go
> from the visual cortex in the direction of the eye.
>  
> I bet you that the recurrence is there to perform the function
> described above.

I agree, except I doubt that it's restricted to the early stages of development.
I think it happens continuously.  For example, our
eyes may respond differently after a minute of reading compared
to a minute of driving.  You can see almost the same idea in the
following excerpt from THE THINKING MACHINE by C. Judson Herrick
(Univ. of Chicago Press, 1929).  His "organic tension" may be the force
that demands spontaneous feature discovery/extraction from the
network.

----------------------------------------------------------

It is not generally recognized that the stream of nervous impulses
between sense-organs and brain is not one-way traffic.  Most of
these nervous impulses are directed inward toward the brain, but
there is a respectable amount of transmission in the reverse
direction.

This is most evident in the eye, for the optic nerve contains a very
large number of nerve fibers which conduct from brain to retina.
Just what the effect of these outgoing nervous currents upon the
retina may be is not very clear.  In some fishes they have been
shown to cause changes in the length of rods and cones and in the
retinal pigment, and probably in our own eyes they have some effect
to alter the sensitivity of the retina to light.  Besides these outgoing
fibers in the optic nerve there are fibers in other nerves that activate
the accessory organs of vision, those, for instance, that change the
size of the pupil and the focus of the eye.

Most sense organs are under some sort of central control of this sort,
and inside the brain there are similar nervous connections which
may activate or sensitize the lower sensory centers.

Conscious attention to anything going on outside implies an organic
tension within the nervous system and this tension may extend
outward even to the sense-organ itself so that the sense of sight, for
instance, becomes more acute when we are straining the eyes to
read a distant street-sign.  The eye serves the mind and the mind,
in turn, serves the eye.

The muscles have a sensory nerve supply which is as important as
their motor nerve supply.  The "muscular branch" of a nerve, say to
the biceps, has about one-third as many sensory fibers as motor
fibers, so that every change in the contraction of the muscle is
directly reported back to the brain.  These muscular and similar
"intimate" sensations rarely are clearly recognized, yet they play
a tremendously important part as the organic background of our
conscious attitudes and reactions.  They are essential parts of our
thinking machinery.

mmm@cup.portal.com (Mark Robert Thorson) (12/29/90)

Fred Sena says:

> I think that measuring the "bandwidth" of transmission can be a
> misleading indicator of the amount of complexity of the
> transmission sequence, or the complexity of the entities performing
> the communication.  For instance, I can say one word to you, but
> the amount of information could be great, depending on the secondary
> meanings of the word, the context in which the word is used, and the
> broader implications of the word.
>  
> Like if I say "nuclear", a whole array of related images are evoked
> in the person that I say it to, such as:
>         bomb
>         war
>         reactor
>         atom
>         family (one of these is not like the others...)
>         Enola Gay
>         Hiroshima
>         Department of Defense
>         Department of Energy
>         radiation...
>  
> Using that kind of thinking, I'm trying to imagine that there must
> be a way to transfer information at a speed greater than the
> bandwidth of the physical layer.
>  
> There is some kind of "pre-understanding" that goes on in a
> conversation between two creatures.  I guess you could compare it
> to an extremely efficient compression algorithm that is
> implemented in both of the creatures. Both are aware of the structure
> and assemble and disassemble it in the communication process.
> The difference between the way that computers and humans
> perform compression is that computers do it sequentially, whereas
> people do it "instantaneously".  Anyhow, the bandwidth can be low,
> but the amount of "information transmission" or "understanding", I
> think, can be much higher.

Oh yeah?  Are you telling me you thought of all those associations
instantaneously?  When I hear the word "nuclear", the association
"bomb" pops up near-instantaneously, with "reactor" following a
fraction of a second later.  I'd have to rack my brains for several
seconds before "family" or "Enola Gay" pops up.  Except for the
first one or two associations, everything else on the list occurs to
me at a rate slow enough I could tell you what I'm thinking as I
think it.

[BTW, I got your letter.  I put it in my pile of bills and forgot about
it for a while, but will send it out soon when the bills go out.]

person@plains.NoDak.edu (Brett G. Person) (12/29/90)

In article <37034@cup.portal.com> mmm@cup.portal.com (Mark Robert Thorson) writes:
>There is a common myth that the brain is capable of enormous computational
>bandwidth -- for example that the retina sends gigabauds worth of data to
>the brain.  I believe the computational bandwidth of the brain is quite low,
>low enough that we could simulate a brain on today's computers if only we knew
>how to do it.
[lots of stuff deleted] >
>
>I think it is obvious that the brain consists of many agencies which are
>"on call", but very few agencies which are simultaneously active.  Our
>remarkable ability to remember millions of minor facts, or recall some
>minor event which occurred many years ago (and which one hasn't thought
>about for many years) is no more remarkable than the ability of a phone book
>to record the names and addresses of millions of people or the ability of
>the disk drives at TRW to store millions of credit histories.
But... this isn't thought or consciousness, which is what you would have
to simulate to do a good job of making a "computer brain".
You wrongly assume that what makes a brain a brain is merely its ability to
transmit data across neurons. This is completely wrong. The brain functions
on a level where the whole is greater than the sum of the parts.  Math,
reading, fact-retaining skills, and motor skills work together to produce a
functioning brain.
>
>The evidence suggests that the parts of the brain that are active
>during any short span of time provide very low computational bandwidth;  their
>power comes from the fact that many such parts are available to be used.
>I don't use my math parts while making dinner, I don't use my cooking parts
>while writing this Usenet posting.  And I haven't used my Morse code parts or
>my German-language parts much at all in the last 20 years.

See above.  I think that the brain is a lot more complex than you are
stating.  It's the most complex organ in the human body. One might say that
the brain defines what we are.

As a quick example, I'm posting to usenet right now. I'm hungry, so I'm
thinking about what I can cook.  I'm using my math skills to figure out how
much more I need to type so that news won't bitch at me about including more
text than I type in. I don't speak German, but I am using my language skills
to enter this text.

--
Brett G. Person
North Dakota State University
uunet!plains!person | person@plains.bitnet | person@plains.nodak.edu

moritz@well.sf.ca.us (Elan Moritz) (01/03/91)

                        TRANS_SAPIENS and TRANS_CULTURE
                                      ***
                              REQUEST FOR COMMENTS
                     -------------------------------------


          In an earlier paper [Memetic Science: I - General
          Introduction; Journal of Ideas, Vol. 1, #1, 3-22, 1990] I
          postulated the emergence of a descendent of homo sapiens.
          This descendent will be primarily differentiated from h.
          sapiens by having * substantially  greater cognitive
          abilities *.

           [the relevant section of the paper is included below].


          >>>>> I plan to write a more substantive paper on the topic
          and would appreciate comments, speculation, arguments for &
          against this hypothesis. Relevant comments / arguments will
          be addressed in the paper and be properly
          acknowledged/referenced <<<<<.

          Elan Moritz <<<<


          -- text of h. trans sapiens section follows --

               We also introduce here the concepts of trans-culture
          and Homo trans-sapiens (or simply trans-sapiens). While
          being topics of a future paper, trans-culture can be
          described as the next step of culture dominated by deep
          connections, interactions, and relationships between
          objects created by large human/machine teams. A manifest
          property of trans-culture is the extreme and transcendent
          complexity of interactions and relations between humans and
          the cultural objects involved, with the additional property
          of being non-accessible to Homo sapiens. Examples of
          trans-cultural objects already exist; for example, there is
          no individual who (at any given temporal instance) is an
          expert in all aspects of medicine, or who is familiar with
          all biological species and their relationships, or is an
          expert in all aspects of physics, or who is totally
          familiar with all aspects of even a single cultural
          artifact (e.g. Hubble space telescope, Space Shuttle
          design, or the total design of a  nuclear power plant). In
          fact, we are approaching the point that certain proofs of
          mathematical theorems are becoming too long and difficult
          for any one individual to  keep in conscious awareness. In
          a way, these transcendent and extended complexity
          relationships are examples of more complicated
          'meta-memes', which is one of the reasons it is interesting
          to study the evolution of ideas.



               Homo trans-sapiens is the [postulated] next step in
          evolution of homo sapiens.  There is no reason to expect or
          require that Homo sapiens will not undergo further
          evolution. The bio-historical trend indicates that the
          major evolutionary development in Homo is in the
          cortico-neural arena (i.e. increasingly more complex
          organization of the nervous system and the brain).
          Specifically it is the higher level cognitive - Knowledge
          Information Processing functions that set H. Sapiens apart.
          It is asserted here (and to be discussed in a future paper)
          that H. trans-sapiens is a logical consequence of
          evolution, and that the milieu and adaptive epigenetic
          landscape for H. trans-sapiens is already present in the
          form of trans-culture. It is indeed possible that the basic
          mutations  are in place and trans-sapiens already exists or
          will appear in the biologically-near time frame.



          [ Please pass to other relevant news groups/ e-lists]


          Elan Moritz,

          snail mail:
          Elan Moritz
          The Institute for Memetic Research
          PO Box 16327, Panama City, Florida 32406

          e mail:
          moritz@well.sf.ca.us  [internet]



>>>>>>>>>>> this is a follow-up to discussions on Bandwidth of the Brain.
I am curious to know folks' reaction to the channel capacity as a function
of evolutionary time.

[By the way, to save space, folks already commented on physiological
difficulties of "larger heads". I am not talking about larger heads, but
alternative wiring/circuitry, components, density, etc., and possibly but
not necessarily larger brain volumes.]

turner@lance.tis.llnl.gov (Michael Turner) (01/04/91)

In article <22398@well.sf.ca.us> moritz@well.sf.ca.us (Elan Moritz) writes:
>
>                        TRANS_SAPIENS and TRANS_CULTURE
>                                      ***
>                              REQUEST FOR COMMENTS
>                     -------------------------------------
>
>
>          In an earlier paper [Memetic Science: I - General
>          Introduction; Journal of Ideas, Vol. 1, #1, 3-22, 1990] I
>          postulated the emergence of a descendent of homo sapiens.
>          This descendent will be primarily differentiated from h.
>          sapiens by having * substantially  greater cognitive
>          abilities *.
>
>           [the relevant section of the paper is included below].
>
 [[...much deleted...]]
>               Homo trans-sapiens is the [postulated] next step in
>          evolution of homo sapiens.  There is no reason to expect or
>          require that Homo sapiens will not undergo further
>          evolution. The bio-historical trend indicates that the
>          major evolutionary development in Homo is in the
>          cortico-neural arena (i.e. increasingly more complex
>          organization of the nervous system and the brain).
>          Specifically it is the higher level cognitive - Knowledge
>          Information Processing functions that set H. Sapiens apart.
>          It is asserted here (and to be discussed in a future paper)
>          that H. trans-sapiens is a logical consequence of
>          evolution, and that the milieu and adaptive epigenetic
>          landscape for H. trans-sapiens is already present in the
>          form of trans-culture. It is indeed possible that the basic
>          mutations  are in place and trans-sapiens already exists or
>          will appear in the biologically-near time frame.

A problem with this is that, while the fossil record is pretty unambiguous
on the question of increasing mental capacity over the last few million
years, it is rather ambiguous about any increases over the last 50,000 or
so.  In fact, there's some evidence to the contrary.

The advance of civilization doesn't necessarily depend on increases in
average intelligence.  In fact, some civilizations might have thrived
by reducing it.  Much of the Inca empire consisted of populations whose
diets were iodine-deficient, hence mildly retarded--and much more pliable.
(This retardation didn't necessarily extend to the elite surrounding the
emperor, who could afford to have seafood run by courier from the coast up
to the mountain peaks, and who might have been a gene pool unto themselves.
Perhaps *they* were the forerunners of "trans-sapiens"?  If so, it's not a
pleasant thought.)

Civilization might in fact be one big trade-off of loss of average individual
capability for increased average physical security.  The percentage of smarter
people can decline without loss to the civilization at large if the fruits of
their efforts are increasingly guaranteed wide distribution by the
infrastructure made possible by civilization.

"Trends" without recognition of driving mechanisms are useless predictors.
You have to have a model for why people are getting smarter, and test to
see if people, in fact, ARE getting smarter.  (Innately, not just because
of universal education.)

If the paradigmatic "trans-cultural" artifacts are the Hubble, the Shuttle,
and Chernobyl [mentioned in deleted portions], I'm really at a loss to see
how the people responsible for either causing or avoiding such catastrophes
are going to get laid significantly more often.  Even if they did breed more
prolifically, what kind of childhoods would their children have if their
parents are off fighting technological firestorms all the time?  They'd be
soured on having a technical career in no time.

The 80's wrote the epitaph on superbrains as a human eschatology: I read it
as "Keep It Simple, Stupid."  Large hierarchies, and the products of large
hierarchies, are failing all around us as we speak, some of them taking
down whole races and ecosystems in the process.  The smarter monkeys
might climb the mast of the sinking ship, but there's no place to breed
at the top.
---
Michael Turner
turner@tis.llnl.gov

eliot@phoenix.Princeton.EDU (Eliot Handelman) (01/04/91)

In article <22398@well.sf.ca.us> moritz@well.sf.ca.us (Elan Moritz) writes:


;               We also introduce here the concepts of trans-culture
;          and Homo trans-sapiens (or simply trans-sapiens). While
;          being topics of a future paper, trans-culture can be
;          described as the next step of culture dominated by deep
;          connections, interactions, and relationships between
;          objects created by large human/machine teams. 

This is already a given. Introducing the superman as a solution to
the problem of the legitimation of knowledge in postmodern society
merely inscribes the task in just that domain you need to escape
from. Read Lyotard's "The Postmodern Condition." Your jargon,
incidentally, is deplorable. Are you a native speaker?

peter@taronga.hackercorp.com (Peter da Silva) (01/04/91)

In article <22398@well.sf.ca.us>, moritz@well.sf.ca.us (Elan Moritz) writes:
>                Homo trans-sapiens is the [postulated] next step in
>           evolution of homo sapiens.

This is the same school of pseudo-scientific gibberish that gives us
creationism.  There is no "next step" in evolution. It's not a directed
process, but simply a byproduct of selective pressure. And right now the
selective pressure on the human species is not in the direction of
greater intelligence. Quite the opposite, if anything...

>           There is no reason to expect or
>           require that Homo sapiens will not undergo further
>           evolution.

And there is no reason to believe that such evolution will be anything
desirable from the point of view of neophilic C20 computer nerds like us.
-- 
               (peter@taronga.uucp.ferranti.com)
   `-_-'
    'U`

toms@fcs260c2.ncifcrf.gov (Tom Schneider) (01/04/91)

In article <22398@well.sf.ca.us> moritz@well.sf.ca.us (Elan Moritz) writes:
>
>                        TRANS_SAPIENS and TRANS_CULTURE
>                                      ***
>                              REQUEST FOR COMMENTS
>                     -------------------------------------
>
>
>          In an earlier paper [Memetic Science: I - General
>          Introduction; Journal of Ideas, Vol. 1, #1, 3-22, 1990] I
>          postulated the emergence of a descendent of homo sapiens.
>          This descendent will be primarily differentiated from h.
>          sapiens by having * substantially  greater cognitive
>          abilities *.

I suggest you read Kurt Vonnegut's wonderful book "Galapagos".  His story of
the foundation of a degenerate race is quite reasonable as far as I could see.

For the other direction, follow the threads in sci.nanotech on superhumans.  It
seems more likely that we will be able to create super-intelligent humans
within the next few centuries than that they will appear by selective
evolution.

  Tom Schneider
  National Cancer Institute
  Laboratory of Mathematical Biology
  Frederick, Maryland  21702-1201
  toms@ncifcrf.gov

sena@infinet.UUCP (Fred Sena) (01/05/91)

In article <37353@cup.portal.com> mmm@cup.portal.com (Mark Robert Thorson) writes:
>Fred Sena says:
>> I think that measuring the "bandwidth" of transmission can be a
>> misleading indicator of the amount of complexity of the
>> transmission sequence, or the complexity of the entities performing

>Oh yeah?  Are you telling me you thought of all those associations
>instantaneously?  When I hear the word "nuclear", the association
>"bomb" pops up near-instantaneously, with "reactor" following a
>fraction of a second later.  I'd have to rack my brains for several
>seconds before "family" or "Enola Gay" pops up.

Yes, you have a point.  I think that most people would have also thought of
"bomb" immediately as well (I did).

  However, I was just trying to emphasize that there are a variety of
pre-understandings between people that are immediately recognized *before* the
next word follows.  In other words, there are common understandings that are
transmitted along with a word, which would imply much more information
transmission than is obvious.  I'm trying to show the limitations of our
ability to evaluate information content per unit time, or bandwidth.

	--fred
-- 
--------------------------------------------------
Frederick J. Sena                sena@infinet.UUCP
Memotec Datacom, Inc.  N. Andover, MA

mcovingt@athena.cs.uga.edu (Michael A. Covington) (01/05/91)

In article <AT5I02@taronga.hackercorp.com> peter@taronga.hackercorp.com (Peter da Silva) writes:
>In article <22398@well.sf.ca.us>, moritz@well.sf.ca.us (Elan Moritz) writes:
>>                Homo trans-sapiens is the [postulated] next step in
>>           evolution of homo sapiens.
>
>This is the same school of pseudo-scientific gibberish that gives us
>creationism.  There is no "next step" in evolution. It's not a directed
>process, but simply a byproduct of selective pressure. And right now the
>selective pressure on the human species is not in the direction of
>greater intelligence. Quite the opposite, if anything...
>
>>           There is no reason to expect or
>>           require that Homo sapiens will not undergo further
>>           evolution.
>
>And there is no reason to believe that such evolution will be anything
>desirable from the point of view of neophilic C20 computer nerds like us.
>-- 
>               (peter@taronga.uucp.ferranti.com)
>   `-_-'
>    'U`

The satirical "Evolutionists' Hymn" by C. S. Lewis is appropriate here.
"Lead us, Evolution, lead us
  Up the future's endless stair,
 Push us, shape us, mold us, weed us,
  Leading on nobody knows where..."

  It ends with something like...
    "Far from pleasant, by our present,
      Standards thoug they well may be!"

Evolution is a fine theory of biology but makes a rotten religion.
It just doesn't have enough grounds for optimism!

jwtlai@watcgl.waterloo.edu (Jim W Lai) (01/05/91)

In article <1991Jan4.190256.2741@athena.cs.uga.edu>
mcovingt@athena.cs.uga.edu (Michael A. Covington) writes:
>Evolution is a fine theory of biology but makes a rotten religion.
>It just doesn't have enough grounds for optimism!

Great.  It fits right in with existential angst in cyberpunk.
Science makes no such promises, despite the claims of certain
demagogues who have to defend their irrational ideologies.

mmm@cup.portal.com (Mark Robert Thorson) (01/06/91)

Fred Sena says:

>   However, I was just trying to emphasize that there are a variety of
> pre-understandings between people that are immediately recognized *before* the
> next word follows.  In other words, there are common understandings that are
> transmitted along with a word, which would imply much more information
> transmission than is obvious.  I'm trying to show the limitations of our
> ability to evaluate information content per unit time, or bandwidth.

But should any amount of pre-understanding between transmitter and receiver
be counted when calculating bandwidth?  I say no, because these pre-
understandings are bandwidth which should have been counted at some earlier
date when they entered the receiver's head.  To do otherwise is like Western
Union charging for a 40-word telegram when you send a 10-word telegram
which tells the recipient to read an earlier 30-word telegram already paid for.

I wouldn't say these pre-understandings are communication, because they
are not necessarily the same between the transmitter and the receiver.
You could say "horse meat" and I might think "hmm... Is that steak really
horsemeat?  I've never tried horsemeat, but I've heard it's good." while you
might be thinking "this steak tastes so bad I bet it's not even real beef,
I bet it's horsemeat".  I.e. unless you specifically describe referenced
objects (and allow that description to be counted in the bandwidth)
I wouldn't say you've communicated anything about those objects.

toms@fcs260c2.ncifcrf.gov (Tom Schneider) (01/06/91)

In article <2753@infinet.UUCP> sena@infinet.UUCP (Fred Sena) writes:

>In article <37353@cup.portal.com> mmm@cup.portal.com (Mark Robert Thorson)
writes:
>>Fred Sena says:
>>> I think that measuring the "bandwidth" of transmission can be a
>>> misleading indicator of the amount of complexity of the
>>> transmission sequence, or the complexity of the entities performing
>
>>Oh yeah?  Are you telling me you thought of all those associations
>>instantaneously?  When I hear the word "nuclear", the association
>>"bomb" pops up near-instantaneously, with "reactor" following a
>>fraction of a second later.  I'd have to rack my brains for several
>>seconds before "family" or "Enola Gay" pops up.
>
>Yes, you have a point.  I think that most people would have also thought of
>"bomb" immediately as well (I did).

>  However, I was just trying to emphasize that there are a variety of
>pre-understandings between people that are immediately recognized *before* the
>next word follows.  In other words, there are common understandings that are
>transmitted along with a word, which would imply much more information
>transmission than is obvious.  I'm trying to show the limitations of our
>ability to evaluate information content per unit time, or bandwidth.

Now hold on a second.  You had better get your technical terms straight.
Bandwidth is a reference to the range in the frequency domain of the signal.
For example, each radio station is given a particular bandwidth within which
to work.  The interesting thing people do is to take speech, and sounds in the
range that people can hear, 0 to 20,000 cycles per second (ie "Hertz") and they
multiply that signal by a high frequency sine wave.  This effectively shifts
the signal to high frequency, but the required bandwidth stays the same at
20 kHz for each station.

Summary:  bandwidth has units of cycles per second (Hz).

Information content per unit time is the rate of information transmission
(R) or the channel capacity.  These have units of bits per second.
A bit is the choice between two equally likely possibilities.  Clearly this
has different units from the bandwidth.  Indeed Shannon's famous formula:

  C = W log_2 ((P + N) / N)

has units of bits per second, where W is the bandwidth in Hz.
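
[Editor's sketch: plugging illustrative numbers into the capacity formula
above; the 20 kHz bandwidth and 30 dB signal-to-noise ratio are assumptions
chosen only to show the units coming out in bits per second.]

  # Shannon capacity C = W * log2((P + N) / N) = W * log2(1 + P/N), in bits/s.
  import math

  def channel_capacity(bandwidth_hz, snr):
      # snr is the plain power ratio P/N (1000 corresponds to 30 dB)
      return bandwidth_hz * math.log2(1.0 + snr)

  print(channel_capacity(20000, 1000.0))   # about 199,000 bits per second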

If you are going to stick to the strict definitions that have been successfully
used by communications engineers for 40 years, then the associations that pop
up when one mentions a word are not an appropriate measure of the information
content of the word.  Shannon's measure is based entirely on the actual symbols
or messages sent over the channel and received at the receiver, NOT on how the
symbols are USED by the receiver.  If you want to reject the original
definitions, you had better have excellent reasons for doing so.

An excellent introduction to this topic can be found in:

@book{Pierce1980,
author = "J. R. Pierce",
title = "An Introduction to Information Theory:
Symbols, Signals and Noise",
edition = "second",
year = "1980",
publisher = "Dover Publications, Inc.",
address = "New York"}

>Frederick J. Sena                sena@infinet.UUCP
>Memotec Datacom, Inc.  N. Andover, MA

  Tom Schneider
  National Cancer Institute
  Laboratory of Mathematical Biology
  Frederick, Maryland  21702-1201
  toms@ncifcrf.gov

sena@infinet.UUCP (Fred Sena) (01/08/91)

>
>But should any amount of pre-understanding between transmitter and receiver
>be counted when calculating bandwidth?  I say no, because these pre-
>understandings are bandwidth which should have been counted at some earlier
>date when they entered the receiver's head.

It's probably not worth going on with this, but I'm having fun so here goes.
I am not using the traditional model for communication.  I'm wearing my
philosopher hat, not my engineer hat.  I'm probably way over my head (or just
crazy) as far as trying to work on these concepts and explain them.


There are two concepts that I am working on:

  1. I'm trying to look at communication from another "angle".

  2. Transmitted "data" depends on what you are looking for.

I think there is another layer of complexity in human communication beyond
the transmission of raw data.  I'm looking not at the "data" transmitted, but
at the processes on both ends.  I think that transmission of data which
stimulates old memory stuff is very important in part of this higher level,
because it has the effect of creating (or just *is*) an ever evolving system
of protocols.

Computer symbols are nice and neat and work fine for measuring computer data.
The problem with measuring human communication is that the symbols are
somewhat (if not extremely) arbitrary.  If you don't believe me, try counting
the number of words in the dictionary that have more than one definition, not
to mention connotations, slang, context, etc.  Not to mention the fact that
we generate new words all of the time.

I don't think that you can just assume that you know the difference between
"noise" and "data".  It depends on what you are looking for.  It would be
like assuming that the visible spectrum is "all light".  Except well, oops,
there's these radio effects, or well look at that, you can get an image of
bones using, well, we'll call them x-rays.  Information that was not there,
all of a sudden is there, because we now care.  I guess information is in the
eye of the beholder.  I'm not sure if I even made a point here, but at least
I'm trying...

>I wouldn't say these pre-understandings are communication, because they
>are not necessarily the same between the transmitter and the receiver.

No, I wouldn't say that either.  What I am saying is they are *involved* in
communication because the effect of stirring up the pre-understanding is the
smoking gun that suggests something was indeed transmitted, even though it
was not "data" in the traditional sense of something which you didn't know
and now you know.  It's more like a trigger and I suspect that it might be
"invisible" if you don't know how to look for it.

I was waiting to get a response about my sloppy definition of bandwidth which
Tom Schneider pointed out.  Definitely an "oops" on my part.

I think that we've split enough hairs on this topic, any other suggestions...

	--fred
-- 
--------------------------------------------------
Frederick J. Sena                sena@infinet.UUCP
Memotec Datacom, Inc.  N. Andover, MA

fnwlr1@acad3.alaska.edu (RUTHERFORD WALTER L) (01/09/91)

In article <2755@infinet.UUCP>, sena@infinet.UUCP (Fred Sena) writes...
> 
>  2. Transmitted "data" depends on what you are looking for.
> 
>>I wouldn't say these pre-understandings are communication, because they
>>are not necessarily the same between the transmitter and the receiver.
> 
>No, I wouldn't say that either.  What I am saying is they are *involved* in
>communication because the effect of stirring up the pre-understanding is the
>smoking gun that suggests something was indeed transmitted, even though it
>was not "data" in the traditional sense of something which you didn't know
>and now you know.  It's more like a trigger and I suspect that it might be
>"invisible" if you don't know how to look for it.


Pre-understanding (or pre-conception or pre-judgement) isn't data, but only
a means (tool) for the receiver to interpret the data - a shortcut sometimes.
Yesterday I was watching the news - on one station was a report about an oil
spill near Tacoma. I switched stations and saw another report in which some-
body said something similar to this; "... we have had larger outflows before
but this time the controller decided to close the banks..."  It took me a
while to realize that this story was about a  bank failure in New England
and not the oil spill story because I was expecting the other story as soon
as I heard the serious tone and the word "outflow".
In this case my "pre-understanding" was incorrect and wasn't a timesaver.


---------------------------------------------------------------------
      Walter Rutherford
       P.O. Box 83273          \ /    Computers are NOT intelligent;
   Fairbanks, Alaska 99708    - X -
                               / \      they just think they are!
 fnwlr1@acad3.fai.alaska.edu
---------------------------------------------------------------------

thornley@cs.umn.edu (David H. Thornley) (01/09/91)

In article <2755@infinet.UUCP> sena@infinet.UUCP (Fred Sena) writes:
>>[Claim that references to pre-transmitted info should not be counted.]
>
>It's probably not worth going on with this, but I'm having fun so here goes.
>I am not using the traditional model for communication.  I'm wearing my
>philosopher hat, not my engineer hat.  I'm probably way over my head (or just
>crazy) as far as trying to work on these concepts and explain them.
>
I'm not an engineer, but I sometimes claim to be a philosopher, so here
goes....

>[Looking at processes involved in communication, and wondering whether
> defining "symbols" and "data" in human communication is nearly as easy
> as doing it with computers.]
>
>>I wouldn't say these pre-understandings are communication, because they
>>are not necessarily the same between the transmitter and the receiver.
>
>No, I wouldn't say that either.  What I am saying is they are *involved* in
>communication because the effect of stirring up the pre-understanding is the
>smoking gun that suggests something was indeed transmitted, even though it
>was not "data" in the traditional sense of something which you didn't know
>and now you know.  It's more like a trigger and I suspect that it might be
>"invisible" if you don't know how to look for it.
>
1.  They are involved in communication.

2.  They are not themselves bandwidth.

The difficulty with treating multiple meanings as communication is that
what is happening is not exactly communication.  The example used earlier
in this thread is the word "nuclear", which calls up meanings including
bombs, reactors, families, and cellular reactions.  Unfortunately, since
it calls up *all* of these meanings, it cannot be credited with communicating
any of them, since the actual meaning is unclear without other cues.  If
it were possible to send multiple meanings selectively, this would indeed
be an effective bandwidth increase.  An analogy in computer communications
would be a garbled message that could be interpreted as "compile nuclear.c"
or "archive nuclear.c" or "mail this to the nuclear group".

Therefore, human communication only exceeds nominal bandwidth where the
ambiguities can be resolved somehow, such as context.  "What sort of power
plant are they building there?"  "Nuclear."  Computers have done much of
this sort of thing, though.  In an Infocom game, if you order "Kill gerbil",
it will respond "[with the umbrella]" if that is your only weapon; if
you have more than one weapon, it will say "With what", and will then
accept the response "umbrella".
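
[Editor's sketch: a toy version of the context-driven disambiguation described
above.  The word senses, the cue words, and the lookup scheme are all invented
for illustration; nothing here is claimed about how people or Infocom parsers
actually do it.]

  # Resolve an ambiguous word by scanning its context for a cue word.
  SENSES = {"nuclear": [("reactor", "power plant"),
                        ("bomb", "weapon"),
                        ("family", "household")]}

  def disambiguate(word, context):
      for cue, sense in SENSES.get(word, []):
          if cue in context:
              return sense
      return "ambiguous"   # no cue: every meaning stays live, none is conveyed

  print(disambiguate("nuclear", "what sort of power plant? a reactor"))  # power plant
  print(disambiguate("nuclear", "nuclear"))                              # ambiguous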

To me, the more interesting difference is the nonverbal communication
that goes on, which is not possible to illustrate here, where even a
smiley counts as three characters transmitted.  In face-to-face conversation,
we use facial expressions and voice tones a lot.  Recorded conversations
can sound real strange without them.  How much of this you care to call
bandwidth is up to you; the actual written-down informational transfer
is rather low, but sight and sound images can be fairly faithfully
maintained for a long time, and that's a lot of bits.


>I was waiting to get a response about my sloppy definition of bandwidth which
>Tom Schneider pointed out.  Definitely an "oops" on my part.
>
That's not the real problem here, we can afford some sloppiness.

>I think that we've split enough hairs on this topic, any other suggestions...
>
I thought I'd split a hair nobody had seemed to split before, excuse me....

DHT

young@helix.nih.gov (Jeff Young) (01/10/91)

On the one hand, you might use a textual analogy for comparing the human
brain to the power of a computer, on the other hand:

think of the bandwidth that is required to transmit animation over a 
medium.  1028x1028 resolution by 24 planes of color by 30 frames per
second.  Humans have little trouble understanding such animation but
if you try to send this on a wire you need to transmit at speeds 
greater than that of FDDI.  
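
[Editor's note: the arithmetic behind that claim, taking FDDI's nominal rate
to be 100 Mbit/s.]

  # Raw bit rate of the animation described above versus FDDI.
  bits_per_second = 1028 * 1028 * 24 * 30   # pixels x color planes x frames/s
  fddi = 100 * 1000 * 1000                  # nominal FDDI rate, bits per second
  print(bits_per_second)                    # 760,884,480 -- about 760 Mbit/s
  print(bits_per_second / fddi)             # roughly 7.6 times FDDI's capacity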

My only point is that computers are so blazingly fast and the brain
is so massively parallel - can we really do a fair comparison yet?
Will there ever be a fair comparison of such dissimilar systems?

jy
young@alw.nih.gov

jsl@barn.COM (John Labovitz) (01/10/91)

In article <2755@infinet.UUCP> sena@infinet.UUCP (Fred Sena) writes:
>I don't think that you can just assume that you know the difference between
>"noise" and "data".  It depends on what you are looking for.  [...]
>Information that was not there,
>all of a sudden is there, because we now care.  I guess information is in the
>eye of the beholder.

Exactly.  A blob of data means nothing except when put in context.  If I
give you a book, but it's written in a language you do not understand,
that book is not information to you.  If I teach you the language in
the book, it will then become information.

I think the ultimate computer system would let you take any data and
organize it any number of different ways.  You could build systematic
structures of the data, and link those structures with other
structures.  It's difficult to do this with today's databases; it's
easier with hypertext, but still not quite right.
-- 
John Labovitz		Domain: jsl@barn.com		Phone: 707/823-2919
Barn Communications	UUCP:   ..!pacbell!barn!jsl

toms@fcs260c2.ncifcrf.gov (Tom Schneider) (01/11/91)

In article <1991Jan9.150033.14718@cs.umn.edu> thornley@cs.umn.edu (David H. Thornley) writes:

>The difficulty with treating multiple meanings as communication is that
>what is happening is not exactly communication.

I disagree with this.  See below.

> The example used earlier
>in this thread is the word "nuclear", which calls up meanings including
>bombs, reactors, families, and cellular reactions.  Unfortunately, since
>it calls up *all* of these meanings, it cannot be credited with communicating
>any of them, since the actual meaning is unclear without other cues.  If
>it were possible to send multiple meanings selectively, this would indeed
>be an effective bandwidth increase.  An analogy in computer communications
>would be a garbled message that could be interpreted as "compile nuclear.c"
>or "archive nuclear.c" or "mail this to the nuclear group".

Aha!  You have made an interesting statement!  The problem is the confusion
in the literature about how to measure information.  I follow the early
workers, and take the measure to be the decrease in uncertainty of the
receiver as the measure of the information gained by the receiver.  It's
a state function.  The uncertainty is Shannon's measure:

H = - SUM _{all symbols, i} P_i Log_2 P_i

where P_i is the probability of the ith symbol.  This is NOT the information!
If the communication channel is noisy, the receiver still has some uncertainty
left after receiving a symbol or message.  This is often forgotten, since we
assume that our communications are clear when they are not.

For example, if I'm thinking of one of the 4 bases of DNA, your uncertainty is
1 in 4 or log_2 4 = 2 bits.  If I say "G" then your uncertainty is zero, and
the difference is 2 bits.  But suppose I said "G" and you were on a terminal
where G and C could not be distinguished.  Then your uncertainty would be 1 bit
after, NOT ZERO because you would still be uncertain about which base (G or C)
it was.  So the uncertainty decrease is 2 - 1 = 1, and you have learned only
one bit of information (ie, it could not be A or T, has to be G or C).
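
[Editor's sketch: the bookkeeping in the DNA example above, written out as
code.  The probabilities are the ones given in the posting.]

  # Information gained = uncertainty before - uncertainty after (Shannon).
  import math

  def uncertainty(probs):
      return -sum(p * math.log2(p) for p in probs if p > 0)

  before        = uncertainty([0.25, 0.25, 0.25, 0.25])  # any of A, C, G, T: 2 bits
  after_clear   = uncertainty([1.0])        # "G" received cleanly: 0 bits left
  after_smudged = uncertainty([0.5, 0.5])   # G and C indistinguishable: 1 bit left

  print(before - after_clear)     # 2.0 bits learned
  print(before - after_smudged)   # 1.0 bit learned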

The interesting connection is that if I say the word "nuclear" then, although
your uncertainty has decreased, it does not go to zero.  There remains an
uncertainty as to which meaning should apply.  It's interesting that one can
usually, but not always, resolve that uncertainty from the context!

By the way, if you think we are no longer talking biology, check out our
recent paper on Sequence Logos (NAR 18: 6097-6100 (1990)), where the same
ideas are used to study binding sites.

  Tom Schneider
  National Cancer Institute
  Laboratory of Mathematical Biology
  Frederick, Maryland  21702-1201
  toms@ncifcrf.gov

mmm@cup.portal.com (Mark Robert Thorson) (01/14/91)

[I was asked to post this for uunet.uu.net!crossck!dougm (Doug Merritt)
who cannot post from his site.]

It is a well known result of Information Theory that such "pre-understanding"
is *not* part of the bandwidth of transmitted information, although it
is completely essential for practical purposes. With no such context,
information is transmitted at a certain bandwidth, but cannot be interpreted.
Interpretation is not part of Information Theory, but bandwidth is, and is
defined quite precisely.  Imprecisely, it amounts to the log of the number of
different messages that *could* have been transmitted.
 
Of course, it is simple to point out that the number of interpretations
that can be consistently (repeatably) and deterministically derived from
any given message is exactly the same as the number of possible messages.
 
So this "pre-understanding" business has nothing to do with bandwidth.
 
It may, on the other hand, reflect a lot about the number of states
in the black box's internal automaton, which could potentially be
far more complex than the bandwidth of information transmitted
to and from the black box.
 
This additional complexity will show up in the distribution
of different messages (i.e. complexity of prediction over time).
If no such complexity of prediction occurs, then, and only then,
can you say that the internal automaton is either internally non-complex,
or redundantly complex. Kolmogorov complexity is the appropriate
measure.
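
[Editor's sketch: Kolmogorov complexity itself is uncomputable, but compressed
length is a common practical stand-in for it.  The strings and the use of zlib
here are my own illustration, not part of the relayed argument.]

  # Compressed length as a rough, computable upper bound on Kolmogorov complexity.
  import zlib

  def complexity_proxy(s):
      return len(zlib.compress(s.encode("ascii"), 9))

  print(complexity_proxy("ABABABABABABABABABABABABABABAB"))  # small: very regular
  print(complexity_proxy("qZ3k9fT1xR7mW2cV8bN4hJ6sL0pD5gY"))  # larger: looks random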
 
Most of the discussion I've seen has concentrated on bandwidth arguments,
but neglected prediction complexity; *both* are essential for
speculations about the brain.
 
I think that is pretty much the last word on the subject from an
abstract point of view.

[Well, I really blew it big time by not distinguishing between communication
bandwidth and internal thought bandwidth.  Obviously the preunderstandings
are not communication, but these preunderstandings get activated while
interpreting communication, hence they should be counted under mental
bandwidth.  What units should be used?  I think it can be thoughts
per second.  I don't think anyone ever really has more than one thought
at a time, though it might seem like that when two thoughts are separated
by only a fraction of a second.  Bomb, reactor, Enola Gay ... these are
all one thought each.       --mmm]

Chris.Holt@newcastle.ac.uk (Chris Holt) (01/15/91)

(Doug Merritt):
>It is a well known result of Information Theory that such "pre-understanding"
>is *not* part of the bandwidth of transmitted information, although it
>is completely essential for practical purposes. With no such context,
>information is transmitted at a certain bandwidth, but cannot be interpreted.

So what's the word for information received in the light of a given
context?  Knowledge?

And how would one view Borges' notion of someone who rewrote all of
Don Quixote, word for word, but because it was in a different context
the result was an entirely different book? :-)
-----------------------------------------------------------------------------
 Chris.Holt@newcastle.ac.uk      Computing Lab, U of Newcastle upon Tyne, UK
-----------------------------------------------------------------------------
 "[War's] glory is all moonshine... War is hell." - General Sherman, 1879