[sci.nanotech] Drexler on immortality, source of nano books.

loki@relay.eu.net (Never Kid A Kidder) (03/16/90)

In article <Mar.12.21.30.02.1990.6427@athos.rutgers.edu> SXJ101@psuvm.psu.edu writes:

	 Also, I was reading Drexler's "Engines of Creation".  In it, he
   states that immortality is impossible because of the nature of the universe
   and decay.  But I came across another book which advanced the idea that in
   the future we would be able to store our brains onto computers (tapes).
   By doing so, we would be able to make an exact duplicate of our brains and
   we could make several copies of the tapes so as never to lose them.  Because
   the information in our brains makes us what we are, it didn't matter if our
   bodies decayed and died.  We could have artificial bodies (limbs, e.g.) and
   "download" our brain.  This way we would never die, excluding natural
   disasters, etc.

I'm a wee bit shaky on thermodynamics, but I think that because
entropy always increases, you would eventually find it impossible
to `store' the brain in any physical medium, simply because at some
stage there would only be a uniform photon buzz (or something) left.
That's if we go on expanding.  If we get a Big Crunch, then all
information will be lost when the universe collapses into a single
singularity.  But then I suppose either way could be considered a
natural disaster...  Also, eternity is a long time (longer than the
longest thing ever, and then some).

[I opine that worrying about the heat death, or cosmological collapse,
 of the universe, now, is like a bunch of cavemen sitting around 
 worrying what will happen when erosion has washed all the land into
 the sea and there's no land left.  Eons before it happens, their
 knowledge and technological powers relative to the problem will
 have changed enough to make their speculations look silly.
 --JoSH]

mike@maths.tcd.ie (Mike Rogers) (03/16/90)

In article <Mar.14.13.39.10.1990.12561@athos.rutgers.edu> alan@oz.nm.paradyne.com (Alan Lovejoy) writes:
 >although that takes trillions of years.  But SEVERE energy problems would
 >confront you in only a few hundred billion...  Perhaps there is some extremely

 >A more mundane problem (by comparison) is the fact that the local star
 >is going to eventually burn out.  Of course, it will fry us first before
 >it sputters out.  We have maybe 50 million years before the temperature
 >starts to get uncomfortable (according to the most recent research of which
 >I am aware).  However, even we primitives can imagine possible ways to

	But the latest evolutionary models of the solar system seem to indicate
that the Solar Constant has in fact been increasing for the last billion years
or so, drastically altering the ecosphere.  The strange thing is that the
biosphere seems to have some kind of feedback mechanism to combat this.  Gaia?
-- 
Mike Rogers, 6.3.3 TCD, D2, Eire.     | Greater love has no more than this;
mike@maths.tcd.ie  (UNIX => preferred)| Than to be laying down one's life 
mike@tcdmath.uucp (UUCP=>oldie/goodie)|	For friends.
msrogers@vax1.tcd.ie(VMS => blergh)   |                      Yeshua ben Josef

[Hardly Gaia.  Assuming that some long-term change is actually happening,
 simple evolution is plenty to explain the adaptation of the ecosystem.
 After all, that's what evolution is about in the first place.
 --JoSH]

leech@homer.cs.unc.edu (Jonathan Leech) (03/16/90)

In article <Mar.14.13.39.10.1990.12561@athos.rutgers.edu> alan@oz.nm.paradyne.com (Alan Lovejoy) writes:
>There are many horrendous obstacles to truly living FOREVER...
>But SEVERE energy problems would
>confront you in only a few hundred billion...	Perhaps there is some extremely
>elegant way to escape this fate, but no one has any realistic notion of how.

    Actually, noted physicist Freeman Dyson did a nice paper on this
(whose title and place of publication I of course forget :-); he
concluded we can last *much* longer than a few measly trillion years
in an open universe.  I vaguely recall the figure 10^70 being
mentioned.  Part of the process involved dismantling large stars and
storing their hydrogen for gradual use in efficient, cool red dwarfs,
but that was in the very early stages.	(Help!	Can anyone come up
with the reference/more details?)
--
    Jon Leech (leech@cs.unc.edu)    __@/
    ``You're everything I ever wanted in a human AND an extraterrestrial.''
	- Dr. Steve Mills in _My Stepmother is an Alien_
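
To see why hoarding hydrogen and burning it slowly stretches the fuel so
far, here is a rough fuel-budget sketch (illustrative numbers of my own,
not the figures from Dyson's paper):

    # How long could one solar mass of hydrogen last at different
    # luminosities?  All numbers are rough and for illustration only.
    M_sun = 2.0e30        # kg, solar mass
    L_sun = 3.8e26        # W, solar luminosity
    h_fraction = 0.7      # approximate hydrogen mass fraction
    burn_fraction = 0.1   # fraction of hydrogen a star typically fuses
    efficiency = 0.007    # mass-to-energy efficiency of H -> He fusion
    c = 3.0e8             # m/s
    year = 3.15e7         # seconds

    fuel_energy = M_sun * h_fraction * burn_fraction * efficiency * c**2

    for lum_fraction in (1.0, 1e-3, 1e-6):   # Sun-like, red dwarf, dimmer still
        lifetime_yr = fuel_energy / (L_sun * lum_fraction) / year
        print(f"at {lum_fraction:g} of solar luminosity: ~{lifetime_yr:.1e} years")

The same tank of fuel that lasts a Sun-like star several billion years
lasts nearly 10^16 years at a millionth of solar luminosity, which is the
intuition behind dismantling bright stars and burning their hydrogen in
cool, efficient dwarfs.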

peb@tma1.eng.sun.com (Paul Baclaski) (03/21/90)

In article <Mar.14.13.03.16.1990.10047@athos.rutgers.edu>, hcobb@walt.cc.utexas.edu (Henry J. Cobb) writes:
> 
> 	Immortality implies infinite experience.  Restoring yourself from old
> tapes wipes out your life along with your senility.

JoSH says:

>  As for infinite experience, that becomes an interesting question.
>  If your mind "fills up", then you either get old fashioned or 
>  forget things, so you only gain limited benefit from living forever.
>  It may prove necessary to keep changing into new technology, continuously,
>  to get bigger and bigger memories.

I figure that an intricate "rejuvenation" (Fountain of Youth (TM))
will be a big market.  The idea is that the newness, naivete, and
energy of Youth are instilled into your jaded, filled memory/brain.
The best approaches will cause selective memory loss--keeping the 
good memories, erasing the bad ones.  The tricky part is making
local changes to many parts of the brain to eliminate the jadedness
or mental momentum of perhaps a century of experience.

This would be an expensive process and would require significant 
knowledge of the organization of the human brain and mind.  I 
predict this would be a big market about 50 years after downloading
and backup of the human mind is possible.


Paul E. Baclaski
Sun Microsystems
peb@sun.com

[I'll bet that just plain physical rejuvenation will be a pretty hot 
 seller well before then-- but I agree, some way of sweeping out that
 dusty attic would be nice.
 --JoSH]

AMSA@cucisa.bitnet (03/21/90)

I haven't read Drexler's books, but there's a more immediate
issue to consider when discussing how to make humans "immortal":
NO living system thus far has the blueprints for "immortality."
Replacing parts with nanotech using the original "blueprints" of
the living system is not the real obstacle, although the engineering
issues may be a challenge.  The real obstacle is that all living
systems have a blueprint that eventually programs for DEATH.  Living
human cells can reliably replicate for fewer than 100 generations.  It
would seem that if one were to provide all the growth media necessary
for the cells, they could live forever -- but this has been shown not
to be the case.  Therefore, the real obstacle is to change one innate
feature of every single living system's blueprint WITHOUT radically
changing the living system (many proteins are probably responsible
for aging, which is not just a matter of "wearing out", but they all
function to bring the organism to death).  It would not do much good
to download the software if the hardware has to be corrected before
it can accept the code...

Edison Wong
amsa@cucisa.bitnet

[This is true of humans (and other higher vertebrates) but not of lower
 forms, at least not necessarily.  It certainly isn't true of "all
 living systems".  There are theories to the effect that human cells
 have a replication limit as a cancer defense, etc.
 If E. coli had a replication limit, the whole species wouldn't last
 more than a few days...
 --JoSH]
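
As a quick back-of-envelope check on that last point (a minimal sketch,
assuming a roughly 20-minute doubling time and a hypothetical
Hayflick-style limit of 50 divisions):

    # Rough sketch: how fast would E. coli exhaust a Hayflick-style limit?
    # Assumed numbers, for illustration only.
    doubling_time_min = 20      # assumed E. coli doubling time
    division_limit = 50         # hypothetical replication limit

    divisions_per_day = 24 * 60 / doubling_time_min   # about 72 per day
    days_to_hit_limit = division_limit / divisions_per_day

    print(f"divisions per day: {divisions_per_day:.0f}")
    print(f"days until a lineage hits the limit: {days_to_hit_limit:.2f}")

On those assumptions a lineage runs out of divisions in well under a day,
which is the point of the moderator's remark.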

bfu@ifi.uio.no (Thomas Gramstad) (03/21/90)

I'm just in the process of reading Eric Drexler's _Engines of Creation_ 
and then I remembered that I had seen the word "nanotechnology" 
somewhere on USENET....  Yet another group to be followed...
That sure is a book with powerful visions.  I find it
difficult to assess how much of it is feasible (I'm not an engineer or
technologist, but a biologist (genetics)).

(Sorry, I lost the attribution:)

>The real problem is the fact that we don't know that a backup of your
>mind is still you (even if you are nothing but information and a
>system of state transition functions).  For instance, if we create
>multiple instances of you from a backup, which one is you?  Are they
>all you?  What does the concept of identity mean in this case?  Is
>identity unique, or not?  And if not, then why would each one of the
>instances of you all object to being killed?

>The confusions and contradictions that seem to sprout like weeds when
>one considers this subject suggest to me that at least one of the
>fundamental concepts we use to define/express this problem is flawed,
>inconsistent, meaningless and/or otherwise ill-defined.  We don't know
>what we're talking about, at least not fully.

I think an understanding of our method(s) for concept-formation is
crucial if one is to assess what is and is not possible.  This is
both an epistemological and a scientific issue.  For example,
even with an accurate understanding of how the mind works, a
simulation of it may still have restrictions or limitations that
the real mind doesn't have (i.e., don't equate a model with reality).
Is it possible to incorporate volition/free will (internal causation)
into an Artificial Intelligence?  If yes, there is the contradiction
that this programming was externally generated.  If the AI is
supposed to be immortal, other problems arise:  Fundamentally,
being alive means that self-generating and self-maintaining
action is necessary; otherwise the organism will die.  This is
the basis for goal-setting.  Without the alternative of death,
what does being alive mean?  How can goals be set, and value
prioritizations be made, by an entity which exists automatically?
Well, enough rambling for now...

There is a book by Gary McGath, _Model and reality_, about
epistemological issues with respect to the nature of consciousness
and concept-formation in relation to AI research -- another of the
books I'm in the process of reading...  I don't have the time
to review it (I hardly have the time to read these books!),
but you may contact McGath at 72145.1014@compuserve.com for
further information about it.


-------------------------------------------------------------------
Thomas Gramstad                                      bfu@ifi.uio.no
-------------------------------------------------------------------

KPURCELL@liverpool.ac.uk (Kevin 'fractal' Purcell) (03/23/90)

On Tue, 20 Mar 90 22:45:00 EST Paul Baclaski (peb@com.sun.eng.tma1) said:

>I figure that an intricate "rejuvenation" (Fountain of Youth (TM))
>will be a big market.  The idea is that the newness, naivete, and
>energy of Youth is instilled into your jaded, filled memory/brain.
>The best approaches will cause selective memory loss--keeping the
>good memories, erasing the bad ones.

There would, of course, be a problem with this: since most of the wisdom
we obtain comes from our bad experiences (remember the first time you
fell in love -- you wouldn't want to do that again, would you?  Or rather
you probably would, but avoiding all the bad bits), we stand to lose a
lot of our wisdom.

Perhaps the best way would be to separate the wisdom from the bad
experiences, but I'm not sure this is possible -- a little like the
difference between knowing you shouldn't do something because you've been
told not to, and knowing not to do it because you've done it before and
the outcome wasn't much fun.

He who never makes a mistake never learns anything.

>
>Paul E. Baclaski
>Sun Microsystems
>peb@sun.com
_________________________________________________________________________

Kevin 'fractal' Purcell ...................... kpurcell @ liverpool.ac.uk
     Surface Science Centre, Liverpool University, Liverpool L69 3BX

               "My karma just reversed over your dogma"

alan@oz.nm.paradyne.com (Alan Lovejoy) (03/23/90)

In article <Mar.20.23.29.43.1990.13217@athos.rutgers.edu> bfu@ifi.uio.no (Thomas Gramstad) writes:
>
>
>I'm just in the process of reading Eric Drexler's _Engines of Creation_ 
>and then I remembered that I had seen the word "nanotechnology" 
>somewhere on USENET....  Yet another group to be followed...

Welcome aboard!


>That sure is a book with powerful visions.  I find it
>difficult to assess how much of it is feasible (I'm not an engineer or
>technologist, but a biologist (genetics)).

Don't miss the forest for the trees.   The more specific the technospeculation,
the more likely it is to be wrong in some way, and the harder it is to assess
(unless it's already known to be (im)possible).  But the other side of that
coin is that it can be relatively easy to forecast the broad scope of
future technology in general terms--with the obvious exception of those things 
that will use or rely on as-yet undiscovered physical principles.

We will acquire ever greater skill at molecular engineering.  We will be able
to do things analogous to what existing molecular machines already do.
There will be both unforeseen limitations and unforeseen
novel capabilities exhibited by future technologies--these are the things
that will make us look silly and/or naive to our future selves and to our 
children.

This is the most important theme of "Engines of Creation."  The rest is just
window dressing by comparison.

>(Sorry, I lost the attribution:)

That's ok--now you've found it again:  you are quoting me!

>>The real problem is the fact that we don't know that a backup of your
>>mind is still you (even if you are nothing but information and a
>>system of state transition functions).  For instance, if we create
>>multiple instances of you from a backup, which one is you?  Are they
>>all you?  What does the concept of identity mean in this case?  Is
>>identity unique, or not?  And if not, then why would each one of the
>>instances of you all object to being killed?
>
>>The confusions and contradictions that seem to sprout like weeds when
>>one considers this subject suggest to me that at least one of the
>>fundamental concepts we use to define/express this problem is flawed,
>>inconsistent, meaningless and/or otherwise ill-defined.  We don't know
>>what we're talking about, at least not fully.
>
>I think an understanding of our method(s) for concept-formation is
>crucial if one is to assess what is possible and not.  This is
>both an epistemological and a scientific issue.  

Absolutely.  The universe doesn't care what set of conceptual boxes we use to
categorize our subjective experience into our internal model of objective
reality.  A population doesn't change its opinions based on how the
statistician/pollster phrases his questions, or on how he decides what group  
each individual is a member of, or on what groups he decides exist, or on
how each group is defined.  All those things may certainly affect the results
he gets and the conclusions he draws.  But they change the reality not at all.

We need polling and statistical methods which can measure the underlying
reality of a population's opinions.  And we need analogous techniques
for scientific research which do not depend on how scientific questions are
asked, on what conceptual boxes are used to classify the answers, or on 
what language and semantic system the concepts are defined in.

Perhaps all self-consistent conceptual systems are equally valid, but reality
can only be approximated to the extent that the number of conceptual systems 
which are used to think about a problem approaches infinity as a limit.

>For example,
>even with an accurate understanding of how the mind works, a
>simulation of it may still have restrictions or limitations that
>the real mind doesn't have (i e don't equate a model with reality).

The point has been made elsewhere that a computer simulation of a hydrogen
atom, no matter how detailed and/or accurate, can always be easily  
distinguished from the real thing:  just try replacing all the hydrogen
atoms in your body with computer simulations to see why this is so.
We might call this an imperfect simulation.

Imperfectly simulated objects exist in a simulated environment, react to 
simulated events and interact with other simulated objects.  Reality and 
imperfect simulation do not mix--the real object and its imperfectly simulated
twin are not freely interchangeable.

However, not ALL simulations have this problem.  For instance, computer
simulations of other computers can work so well that the only way to tell
the difference is to cheat by opening up the box and checking the internal
circuits.  We might call this a perfect simulation. 

After some thought about the differences between perfect and imperfect
simulations, I have reached the following conclusion:  only information
and symbolic functions/processes can be perfectly simulated.  Perfect
simulation is symbolic simulation.  Imperfect simulation is non-symbolic
simulation.
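
To make the distinction concrete, here is a minimal illustrative sketch
using a made-up toy instruction set: the same trivial stack machine
executed "natively" and simulated step by step produces identical answers,
so nothing downstream of the outputs can tell the two apart.

    # Two implementations of the same tiny stack machine.  Judged only by
    # their outputs, the "real" machine and its simulation are identical.
    def run_native(program):
        """The 'real' machine: executes the program directly."""
        stack = []
        for op, *args in program:
            if op == "PUSH":
                stack.append(args[0])
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
        return stack

    def run_simulated(program):
        """A simulation of the machine: explicit state, one step at a time."""
        state = {"pc": 0, "stack": []}
        while state["pc"] < len(program):
            op, *args = program[state["pc"]]
            if op == "PUSH":
                state["stack"].append(args[0])
            elif op == "ADD":
                b, a = state["stack"].pop(), state["stack"].pop()
                state["stack"].append(a + b)
            elif op == "MUL":
                b, a = state["stack"].pop(), state["stack"].pop()
                state["stack"].append(a * b)
            state["pc"] += 1
        return state["stack"]

    prog = [("PUSH", 6), ("PUSH", 7), ("MUL",), ("PUSH", 1), ("ADD",)]
    assert run_native(prog) == run_simulated(prog) == [43]

The only way to tell which implementation produced the answer is to open
the box and look at the code, which is the sense in which symbolic
processes, unlike hydrogen atoms, can be simulated perfectly.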

The implications with regard to immortality, identity, artificial intelligence
and nanotechnology are obvious.





____"Congress shall have the power to prohibit speech offensive to Congress"____
Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne.  They do not speak for me. 
Mottos:  << Many are cold, but few are frozen. >>     << Frigido, ergo sum. >>

peb@tma1.eng.sun.com (Paul Baclaski) (03/23/90)

In article <Mar.20.23.11.25.1990.12533@athos.rutgers.edu>, AMSA@cucisa.bitnet writes:
> ...Therefore, the real obstacle is to change one innate
> feature of every single living system's blueprint WITHOUT radically
> changing the living system 
> [This is true of human (and other higher vertebrates) but not of lower
>  forms, at least not necessarily.  It certainly isn't true of "all
>  living systems".  There are theories to the effect that human cells
>  have a replication limit as a cancer defense, etc.
>  If E. Coli had a replication limit, the whole species wouldn't last
>  more than a few days...
>  --JoSH]

Over-population would be a serious problem in a world where it was
possible to repair cells and avoid the built-in lifetime.  Since we are
intelligent organisms, a cultural mechanism could be used to prevent
overpopulation.  However, this is a political and economic problem,
not a technical one.

Paul E. Baclaski
Sun Microsystems
peb@sun.com

[I beg to differ.  Overpopulation is a function of the exponential
 nature of reproduction, and the mortality or immortality of the 
 ancestors makes little difference.  In a binary tree, for example,
 all the nodes above a given level together number fewer than the
 difference between that level and the next.  Thus immortality would
 merely correspond to pushing the calendar forward by some fixed,
 constant period (given exponential population growth).

 Overpopulation WILL be a serious problem if the technology does
 not keep pulling rabbits out of a hat for us.  But practical 
 immortality will make little impact: it's the exponential nature
 of reproduction that does the damage.

 --JoSH]
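
A quick numerical check of that argument (a minimal sketch, assuming for
illustration that each generation is twice the size of the last):

    # Total living population with and without mortality under doubling growth.
    def living_population(generations, immortal):
        newborns = [2 ** g for g in range(generations + 1)]
        if immortal:
            return sum(newborns)    # everyone ever born is still alive
        return newborns[-1]         # only the newest generation survives

    for g in (10, 20, 30):
        mortal_total = living_population(g, immortal=False)
        immortal_total = living_population(g, immortal=True)
        print(f"gen {g}: mortal={mortal_total:,}  immortal={immortal_total:,}"
              f"  ratio={immortal_total / mortal_total:.2f}")

The immortal population never even reaches twice the mortal one, so
abolishing death only moves any given population level earlier by about
one doubling period; the exponential reproduction is what does the damage.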

 

tcourtoi@jarthur.claremont.edu (Todd Courtois) (03/23/90)

SXJ101@psuvm.psu.edu writes:

>and decay.  But I came across another book which advanced the idea that in
>the future we would be able to store our brains onto computers (tapes).
>By doing so, we would be able to make an exact duplicate of our brains and
>we could make several copies of the tapes so as never to lose them.  Because
>the information in our brains makes us what we are, it didn't matter if our
>bodies decayed and died.  We could have artificial bodies (limbs, e.g.) and
>"download" our brain.  This way we would never die

>     What do you think about this idea?  The information downloaded onto the
>tapes from our brains would only be the data.  How would we actually run the
>data and simulate our brains artificially?  Currently, program and data are
>separate parts of the software.  The data would be the information in the
>brain, but what about the program?  Unless we write a program that simulates
>the brain's processor (i.e. mind), there don't appear to be other ways of
>simulating the brain...

I think the idea is naive.  I have no doubt that, given lots of time and
money, scientists would (and probably will) discover exactly how the brain
functions, and how to create a computer which does basically the same thing.
However, aren't you missing a basic point here?  SO WHAT if you create 
a computer that thinks just like you-- that doesn't make *you* immortal.  It
makes the goddamn computer immortal, but you'll die like any disillusioned
genius.  Do you believe that our "soul," which I suppose is deposited 
in our brains,  would be transferred to the computer?

I guess the point I'm trying to make is that the brain-like machine you
create doesn't make your soul immortal.  Sure, hundreds of years from now
people might be able to chat with your computer embodiment, but will
*you* as a person, a soul, experience it?  No, I don't think so...that's
a lot like saying that your photograph makes you immortal.

Then again, some Indian cultures DO believe that a photograph captures your
soul, and they refuse to be photographed.  In a sense, since your brain
is captured on computer, your spirit and knowledge and whatever would 
"live on" into the future; but alas, you wouldn't be involved.

Think of it this way: if you cloned yourself, or if you have an identical
twin, then that is a whole other person whose life is completely separate
from yours.  You don't experience everything your twin experiences, and
even if your twin were immortal, what good does that do you?

No, if we could figure out a way to slowly integrate our brains into
circuitry until eventually our entire brain was constructed of 
indestructible parts, then that might work.  But again, do you think
your *soul* would transfer?

What do other netters think?


--Todd Courtois

.sig ':^]

[In fact, Hans Moravec describes exactly such a procedure (to move
 your consciousness over into a robot without breaking the stream
 of consciousness, thus making sure there is a single, unbroken
 identity through the transformation).
 The larger answer to the question is that with our current technology,
 we are each the "keeper of the flame" of our own identity, i.e.,
 we're like people before they knew how to start a fire: if
 they let it go out, they were sunk.  Nowadays nobody cares if you
 let a fire go out; it's easy to start a new one.  I imagine
 that we'll be a lot more blasé about dying when we realize that
 our "selves" in some very substantial sense will keep going.
 --JoSH]

tcourtoi@jarthur.claremont.edu (Todd Courtois) (03/23/90)

hmmmmmmmmmm

Perhaps we should do a bit of cross-posting on this topic with
sci.virtual.worlds..... It seems to me that what a few people have
suggested is that we *become part of* a virtual reality.  That is,
once you have made the real world representable inside a computer,
why not stick the person inside the computer to experience the world?

I think the thread about "tapes" and "experiences" having to do with
the limitations of memory is pretty much irrelevant.  In making the leap
from a biological mind and soul to an electronic/nanotech brain, I 
assume that we will develop whole new concepts for memory; just look at
stuff like multitasking and windowing for examples.  It isn't so much what
you remember, but what you want to access *right now*.  

It seems obvious that physical representations for ourselves using 
robotics, etc. will be feasible.  However, I am still wondering about the
fundamental concept of *moving* your mind into a computer, and by this I
mean to exclude the idea of *copying* your mind into a machine.  Even if you
could simulate the brain on a neuron-by-neuron basis, can you move your
consciousness and your soul also?  I'd like some feedback on this.......

This is a very intriguing thread.  Thanks for introducing it!!!!


--Todd Courtois

.sig not included  

[Remember the Utility Fog?  In a Utility Fog world, there would be
 a fairly seamless spectrum between existing in the real world,
 and being a simulation.  This makes for an interesting environment...
 --JoSH]