[sci.nanotech] Brain as Database

hcobb@walt.cc.utexas.edu (Henry J. Cobb) (03/16/90)

	One way to look at the problem of "What does it mean to have a good
copy of my brain?" is to consider the brain as a database machine.

	Under this analogy experience is the data, and personality is the
'method' used to index into the database.  But experience is not stored as
'flat text', but rather as part of the program.

	The method we use to 'store' ourselves is very efficient, but is slow
to add new data and subject to losing references (i.e., all the things you
'know' but cannot recall).
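
	The database analogy above can be made concrete with a toy sketch
(all the names here are invented for illustration, nothing from the
post itself): experience is stored as procedures rather than flat
text, and the "personality" is an index that can lose references.

```python
class AssociativeBrain:
    def __init__(self):
        self._procedures = {}   # experience, compiled into code
        self._index = {}        # personality: cue -> procedure key

    def learn(self, cue, key, procedure):
        """Slow to add new data: store the procedure, then index it."""
        self._procedures[key] = procedure
        self._index[cue] = key

    def forget_cue(self, cue):
        """Lose the reference, not the data: the procedure remains."""
        self._index.pop(cue, None)

    def recall(self, cue):
        """Things you 'know' but cannot recall fail at the index."""
        key = self._index.get(cue)
        if key is None:
            return None          # stored, but no longer recallable
        return self._procedures[key]()

brain = AssociativeBrain()
brain.learn("bicycle", "ride", lambda: "balance, pedal, steer")
print(brain.recall("bicycle"))   # -> balance, pedal, steer
brain.forget_cue("bicycle")
print(brain.recall("bicycle"))   # -> None: the data is still there
```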

	A literal copy of the brain would run, but with no real improvement.
The 'clock' could be upped, but this would be just like having more time to
think.  The system could also suffer from headaches, bad moods, etc... just
like us.

	The obvious first thing to do would be to add more data directly to
the database.  This would imply that we could do the same thing in the organic
system, and tends to imply that we are therefore 'smarter than ourselves',
which leads to paradox.

	So you could give your pet brain an extended storage of "flat data"
to read, but unless it is clocked much higher than we are, it is difficult
to see how such a system would be faster than you with a terminal.

	Lastly, we might work on "tightening the code".  But if you force a
change of thinking on the pet, how is it still you?  Also, this implies
that we can improve the brains we already have, by being smarter than
ourselves.  (See above.)

	I conclude that no real advantage is to be gained from a "thinking
machine", and that it is best to just provide the ~5 billion people
we already have with the tools and training to undertake whatever "thinking"
needs to be done.

	Henry J. Cobb	hcobb@ccwf.cc.utexas.edu
	"I was not here; I did not say this."

[Actually, a simple direct simulation could well avoid many problems
 in the headache-bad mood category (though not all).  Migraines are
 thought to be caused by certain chemical imbalances, sinus headaches
 are due to direct physical causes, etc.

 Then there are optimizations to be done in the I/O processes.  If,
 for example, we could provide a "flat database" which could be read
 without going through the eye:retina:edge-finding:character-shape
 recognition:visual-cortex:words:parsing:language-understanding
 process, but could be "poured" into memory directly, it would be
 like having memorized the Library of Congress.

 I can think of hundreds of ways to be "smarter than I am"--some of them
 are embarrassing since they show just how limited the human mind 
 really is:

 Throw three or four pennies on a desk.  Look at them: you don't
 need to count, you recognize their number directly.  With 5 or more
 you can't (actually, you can practice and do much better on 5
 and 6, but much over 7 and you're done for no matter how hard
 you work at it).  I'd like to be able to look at thousands of
 objects (this isn't much, there are almost 2k chars on a 24x80
 crt screen) and know their number directly.

 How many chars was that?  Multiply 24 and 80... Isn't it instantly
 obvious the answer is 1920?  No?  You can do it for 2x3!
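
 The arithmetic here is trivially instant for a machine (a throwaway
 sketch of my own, not anything from the original digest):

```python
# A 24x80 CRT screen holds 1920 characters; a program "knows" this
# as directly as we know there are three pennies on the desk.
rows, cols = 24, 80
chars_on_screen = rows * cols
print(chars_on_screen)  # 1920

# Counting thousands of scattered objects is just as immediate:
screen = ["#" * cols for _ in range(rows)]
print(sum(len(line) for line in screen))  # 1920
```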

 I can visualize simple machines and do a fair simulation: e.g.,
 three gears, each meshed with the other two, are locked and can't
 move.  Some people can't even see that if it's drawn on paper--
 but that's just a small difference of degree.
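
 The gear example can even be checked mechanically (my own sketch,
 assuming only the usual rule that meshed gears turn in opposite
 directions): an arrangement can move exactly when the "meshes with"
 graph is 2-colorable, and three mutually meshed gears form an odd
 cycle, so they lock.

```python
from collections import deque

def gears_can_turn(n, meshes):
    """n gears, meshes = list of (i, j) pairs; True iff not locked."""
    spin = [None] * n            # 0 = clockwise, 1 = counterclockwise
    for start in range(n):
        if spin[start] is not None:
            continue
        spin[start] = 0
        queue = deque([start])
        while queue:             # breadth-first 2-coloring
            g = queue.popleft()
            for a, b in meshes:
                if g not in (a, b):
                    continue
                other = b if g == a else a
                if spin[other] is None:
                    spin[other] = 1 - spin[g]   # forced opposite spin
                    queue.append(other)
                elif spin[other] == spin[g]:
                    return False                # contradiction: locked
    return True

print(gears_can_turn(3, [(0, 1), (1, 2), (2, 0)]))  # False: locked
print(gears_can_turn(3, [(0, 1), (1, 2)]))          # True: a chain turns
```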

 What I'm saying is that you could hook someone into an electronic
 library, CAD system, physics simulator, symbolic and numeric math
 programs, and on and on, without changing the kinds of things 
 their minds do directly in the tiny, simple cases, but simply
 take the limits off.

 Look at this word: "prestidigitation".  If you focus on the "dig"
 in the middle you can hardly see the beginning and end of the 
 word.  You don't notice how limited the actual functional part
 of your field of view is normally, but it is.

 If you stop to think just how much the power of our minds can
 be extended by the utterly primitive agency of pencil and paper,
 you'll realize just how wretched our native symbolic-level 
 processing really is.

 --JoSH]

jhallen@wpi.wpi.edu (Joseph H Allen) (03/21/90)

In article <Mar.15.16.39.37.1990.14552@athos.rutgers.edu> hcobb@walt.cc.utexas.edu (Henry J. Cobb) writes:
>	One way to look at the problem of "What does it mean to have a good
>copy of my brain?" is to consider the brain as a database machine.

>	Under this analogy experience is the data, and personality is the
>'method' used to index into the database.  But experience is not stored as
>'flat text', but rather as part of the program.

>	A literal copy of the brain would run, but with no real improvement.
>The 'clock' could be upped, but this would be just like having more time to
>think.

I'd call that an "improvement".

>	The obvious first thing to do would be to add more data directly to
>the database.  This would imply that we could do the same thing in the organic
>system, and tends to imply that we are therefore 'smarter than ourselves'
>and this leads to paradox.

'Tends' is indeed the proper word here.  Paradoxes of this type only occur
when a system is out of memory and the data stored in the system is
perfectly compressed.  The other paradox which people bring up from time to
time is "Can we understand ourselves?"  Both of these arise from two types of
confusion:  (1) People find it difficult to understand that something can be
"tagged", i.e., they assume that to "perfectly" understand something implies
knowing where each atom is at every instant of time.  If it were that type of
"perfect" understanding, then the argument would hold.  However, we use tags
(names) for complex things and gloss over unnecessary details (platonic).
(2) Even when "perfect" understanding is specified, people have trouble with
data compression.  I.e., can we have a complete copy of our brain stored in
our brain?  Indeed we cannot (except for trivial cases).  But we can have an
_old_ copy of our brain stored in our brain (_old_ meaning from before we
include the copy as being part of the brain).  This can be done by compressing
the data within the brain and storing the compressed form in some unused area
(and compressing unused areas is easy:  "this area of 500MB is unused" = only
28 bytes).
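
The "28 bytes" trick is just run-length encoding; a minimal sketch
(my own illustration, not code from the post):

```python
def rle_encode(data):
    """Collapse data into (value, run_length) pairs."""
    runs = []
    for byte in data:
        if runs and runs[-1][0] == byte:
            runs[-1][1] += 1
        else:
            runs.append([byte, 1])
    return [(value, count) for value, count in runs]

def rle_decode(runs):
    return bytes(b for value, count in runs for b in [value] * count)

unused = bytes(500_000)      # stand-in for the "500MB of unused area"
runs = rle_encode(unused)
print(runs)                  # [(0, 500000)]: one tiny record
assert rle_decode(runs) == unused

# And the quoted sentence really is 28 characters long:
print(len("this area of 500MB is unused"))  # 28
```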

>	So you could give your pet brain an extended storage of "flat data"
>to read, but unless it is clocked much higher than we are it is difficult
>to see how such a system would be faster than you with a terminal.

Of course there's the minor problem of our not knowing exactly (well, not
quite that strongly) how we work.

>	Lastly we might work on "tightening the code".  But if you force a
>change of thinking on the pet, how is it still you?  Also this implies
>that we can improve the brains we already have, by being smarter than 
>ourselves. (See above)

Defining boundaries is an interesting problem.  What exactly do you mean by
"you"?  The data in your brain at some particular instant of time?  All the
data from your entire lifetime?  Only the part that makes up the
"personality"? 

What I'm getting at is that we are constantly changing, and what we call
ourselves today does not refer to the exact thing we called ourselves
yesterday.  So what if I change myself at some fundamental level?  Who
are you to say that I'm not the "same" person?

>	I conclude that no real advantage is to be gained from a "thinking
>machine",

Why so derogatory?  Or do you think that we _aren't_ thinking machines?

> and that it is best to just provide the ~5 billion people
>we already have with the tools and training to undertake whatever "thinking"
>needs to be done.

And I conclude that there are quite a few advantages that can be gained, as
the moderator lists.
-- 
            "Come on Duke, lets do those crimes" - Debbie
"Yeah... Yeah, lets go get sushi... and not pay" - Duke

bfu@ifi.uio.no (Thomas Gramstad) (03/21/90)

>From: hcobb@walt.cc.utexas.edu (Henry J. Cobb)
>Date: 15 Mar 90 21:39:39 GMT

> I can think of hundreds of ways to be "smarter than I am"--some of
> them are embarrassing since they show just how limited the human mind
> really is:

...

> If you stop to think just how much the power of our minds can
> be extended by the utterly primitive agency of pencil and paper,
> you'll realize just how wretched our native symbolic-level 
> processing really is.

> --JoSH]

Off the top of my head I can imagine at least three ways of brain
enhancement, and they don't even require nanotechnology (though
of course they would benefit from it):

1. "Reversible synergism" with a computer (see e.g. Poul Anderson's
_The Avatar_).

2. Physical connection of the brain with a "crystal memory" inserted
into it by microsurgery.

3. Organic computers.



-------------------------------------------------------------------
Thomas Gramstad                                      bfu@ifi.uio.no
-------------------------------------------------------------------

[Quite frankly, I believe that we will have nanotechnology *before*
 we understand the brain well enough to do many of the interesting
 sorts of enhancements we've been talking about recently.
 --JoSH]