[sci.nanotech] First upload

ems%nanotech@princeton.edu (01/16/91)

At some point it actually will become (almost?) possible to do an upload.
What then?  There will surely be some sticky problems.  A rational person
wouldn't take the risk of transferring to a supposedly equivalent brain
structure without a very good reason to believe it would be identical,
not unless the original person were in danger of immediate biological
death.  A person in that "do or die" situation might very well take the
plunge, and some more-or-less equivalent person would result, but this
is unlikely to convince other healthy persons to risk their identities.
Something more is needed, or else uploading will always remain an option
of last resort.

One possibility is to produce a nanotech-style clone, and observe its
every reaction, comparing it to your original reactions.  Massive numbers 
of sensors and difference measurement computations will be needed, but with
nanotechnology you'll have them.  This of course raises other questions.
What do you do with your clone when the experiment is done?  Killing it
seems immoral, even if you're convinced that your experiment failed
(i.e., the clone is not you, but a new, slightly different person) and
especially if you think the experiment succeeded. 

The clone experiment is not recommended for another reason.  You just 
don't have sufficient control over the real world to produce identical
experiences for yourself and your clone in order to compare differences.

This suggests that the proper course is to build your clone in simulation,
and measure its reactions in a simulated environment that "exactly"
matches your real world environment.  An even more massive amount of
computation is necessary, compared to the real world clone experiment,
but once again nanotechnology should prove capable of providing it.
Most experimenters would have fewer qualms about killing their simulated
clones when the experiment was ended, but the more scrupulous few could
simply leave the experiment running as long as the resources were available. 

Of course even this simulated clone experiment has its own set of knotty
problems.  If no differences are observed, is it because the simulation is
effectively identical, or simply that your sensor network or difference
measures or the simulation itself are not fine-grained enough?  If differences 
are observed, will you be able to prove that the differences arose from 
quantum fluctuations? (To be thorough, you'll have to make the experiment
detailed enough that quantum differences are measurable, and then prove
that all the differences result from quantum effects. Not too easy.)

This leaves aside the question of whether the brain is dependent on
quantum fluctuations in its thinking process.  Mercifully, that question
will have been settled by earlier neural research.  If the answer is yes,
then there is no point in attempting uploading until you've achieved the
ability to control events on a quantum level as well.

There's also the privacy issue to consider.  While you are collecting
data for your simulation, you are measuring your every internal response,
which is your business, but you will also be recording external events
in exacting detail, including all the people you deal with daily.  This
sort of eavesdropping is likely to cause some problems, and it's not
likely to go unnoticed.  (Bug detectors will come into regular widespread
use not too long after the age of micromachines begins, for privacy
reasons.)  And there is the data collected on your own internal responses
to consider.  You would not want your data to fall into the hands of a
competitor.  (Much more personal than having a thesis stolen :-)

Ed Strong
ems@princeton.edu

dmocsny@minerva.che.uc.edu (Daniel Mocsny) (01/22/91)

In article <Jan.15.17.24.01.1991.24415@athos.rutgers.edu> ems%nanotech@princeton.edu writes:
>At some point it actually will become (almost?) possible to do an upload.
>What then?  There will surely be some sticky problems.  A rational person
>wouldn't take the risk of transferring to a supposedly equivalent brain
>structure without a very good reason to believe it would be identical,
>not unless the original person were in danger of immediate biological
>death.  A person in that "do or die" situation might very well take the
>plunge, and some more-or-less equivalent person would result, but this
>is unlikely to convince other healthy persons to risk their identities.
>Something more is needed, or else uploading will always remain an option
>of last resort.

How does anything dangerous get invented? You start off with conceptual
models, advance to computer models, bend some metal to get mock-ups and
prototypes, then test with robots, animals, and/or courageous volunteers.
Every time your prototype breaks, burns, or explodes, you figure out
what went wrong, and you try to design that failure mode out of the next 
one.

In the case of uploading, we can just let its ontogeny recapitulate our
phylogeny (well, sort of). I would imagine that long before we let a
robot probe/surgeon get anywhere near a healthy human brain, the technology
will have been verified an enormous number of times on animal brains of
successively greater complexities.

Start off by uploading a slug's neural net. Once you've got that licked
(or squished), then you can work your way up through flatworms, 
sessile molluscs, insects, mobile molluscs, fish, amphibians, reptiles,
birds, and mammals. By the time you can upload an ailing chimpanzee's
brain with virtually 100% reliability, and the chimp's keeper of 
however many years can't distinguish them, then you'll be ready to
start with terminally ill humans. 

This isn't anything new, of course. Every invasive medical procedure 
leaves a trail of dead animals before it can heal people. 

Uploading animals might even be significantly useful in its own right,
even if the "ultimate" goal of uploading humans hits some unforeseen
show-stopper. Animal "models" are very popular for behavioral studies.
I'm not sure how well an animal model would live in a computer, but
it would sure stink up the lab a lot less in software.

>One possibility is to produce a nanotech-style clone, and observe its
>every reaction, comparing it to your original reactions.

If we still have private industry and its associated marketing by the 
time uploading becomes possible, I'm sure that at least a few of the
first uploads will make a living advertising their move to others.



--
Dan Mocsny				Snail:
Internet: dmocsny@minerva.che.uc.edu	Dept. of Chemical Engng. M.L. 171
	  dmocsny@uceng.uc.edu		University of Cincinnati
513/751-6824 (home) 513/556-2007 (lab)	Cincinnati, Ohio 45221-0171

cphoenix@csli.stanford.edu (Chris Phoenix) (01/22/91)

In article <Jan.15.17.24.01.1991.24415@athos.rutgers.edu> ems%nanotech@princeton.edu writes:
>This suggests that the proper course is to build your clone in simulation,
>and measure its reactions in a simulated environment that "exactly"
>matches your real world environment.  An even more massive amount of
>computation is necessary, compared to the real world clone experiment,
>but once again nanotechnology should prove capable of providing it.
>Most experimenters would have fewer qualms about killing their simulated
>clones when the experiment was ended, but the more scrupulous few could
>simply leave the experiment running as long as the resources were available. 

I don't get it.  The only difference I see between a "simulated" clone and
a "real" clone is that it's possible to control the inputs to the simulated
clone more exactly.  For both clones, you're simulating neurons in silicon, 
right?  (Hmmm... what about "nanon" to mean a simulated neuron?)  
So are you saying that the simulated clone would be *you*, not a clone of
you, and so it would be OK to kill it since you wouldn't be losing information?
This may be fine if the experiment succeeded.  (Though if someone told me I
was the clone, and about to be terminated, I would sure *feel* like I was being
murdered!)  But what if the experiment fails, if the clone doesn't act exactly
like you?  Seems like you have to let the unsuccessful clones live, but you can
kill the successful ones!  Maybe I'm missing something, maybe I'm picking nits,
but this seems pretty paradoxical...

markb@agora.rain.com (Mark Biggar) (01/22/91)

In article <Jan.15.17.24.01.1991.24415@athos.rutgers.edu> ems%nanotech@princeton.edu writes:
>There's also the privacy issue to consider.  While you are collecting
>data for your simulation, you are measuring your every internal response,
>which is your business, but you will also be recording external events
>in exacting detail, including all the people you deal with daily.  
...
>  And there is the data collected on your own internal responses
>to consider.  You would not want your data to fall into the hands of a
>competitor.  (Much more personal than having a thesis stolen :-)

This is no problem.  Note that the simulation needs to experience no more
than a recording of the environment you yourself experience.  So,
nano-bug yourself and play the recording to the simulation after some
delay.  As long as the simulation stays in synch with the recording there
is no difference; when and if the simulation gets out of synch with
the recording, you have detected a difference.  There is no privacy
problem: just limit the set of people you interact with to those
you told about the experiment ahead of time.
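
In code, the scheme is just record, replay, and compare; a minimal
sketch (Python, with every name hypothetical, and "compare" standing in
for whatever difference measure the experimenter trusts):

def run_delayed_replay(recording, simulation, compare):
    # recording: sequence of (time, stimulus, recorded_response) tuples
    # captured by the nano-bug; simulation: the simulated clone.
    for t, stimulus, recorded_response in recording:
        simulated_response = simulation.react(stimulus)
        if not compare(recorded_response, simulated_response):
            return t       # out of synch: difference detected at time t
    return None            # stayed in synch for the whole recording
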
--
Mark Biggar
markb@agora.rain.com

ems%nanotech@princeton.edu (02/03/91)

>In article <Jan.15.17.24.01.1991.24415@athos.rutgers.edu> ems%nanotech@princeton.edu writes:
>>There's also the privacy issue to consider.  While you are collecting
>>data for your simulation, you are measuring your every internal response,
>>which is your business, but you will also be recording external events
>>in exacting detail, including all the people you deal with daily.  
>...
[Some text elided...]
>
>This is no problem.  Note that the simulation needs to experience no more
>than a recording of the environment you yourself experience.  So,
>nano-bug yourself and play the recording to the simulation after some
>delay.  As long as the simulation stays in synch with the recording there
>is no difference; when and if the simulation gets out of synch with
>the recording, you have detected a difference.  There is no privacy
>problem: just limit the set of people you interact with to those
>you told about the experiment ahead of time.
>--
>Mark Biggar
>markb@agora.rain.com
>
I disagree. There is a problem, because restricting your simulation input
to a set of "primed" people is at odds with testing your simulated clone
with the widest possible set of situations. People who know beforehand
that a camera is on will behave considerably differently from those who
are unaware they are being observed. Unless they are all skilled actors,
the people you've told about the bugging cannot possibly all put on
convincing, true-to-life performances. Let's face it, some things humans
do are downright embarrassing.  Imagine that you finally upload into your 
new brain and then find that it "freezes", or worse, when dealing with
strangers, or accidents, or in the bedroom, simply because you limited
your input data too much.

There may be a way out of this privacy dilemma, though. Perhaps virtual
realities can come to our rescue. Using VR we could role-play our way
through those experiences that might otherwise be too ticklish or touchy
to capture in the real world. In order for this to work, the VR would need
enough fidelity, a large enough scope, and enough role players
(thousands?) to give you true-to-life experiences for your
simulation. And you'd have to record your behavior and the behavior of
all the characters you interacted with, in excruciating detail.

Another small step on the long rocky road to uploading....

Ed Strong	ems@princeton.edu

merkle@parc.xerox.com (Ralph Merkle) (02/10/91)

The assumption that the first person to be uploaded will do so
because he wants to save his own life does not appear to be the most
plausible scenario.

A better motive is money.

As an example, consider that Steven Spielberg is the center of a
major multi-billion dollar business.  Let us suppose that he became
terminally ill, with no prospect for recovery.  Uploading, regardless of
whether it preserved Spielberg's "consciousness" and regardless of the
various philosophical debates, would preserve his skills and abilities.
His business partners, faced with the prospect of a major financial
disaster, would have clear and obvious motives.  With an appropriate
PR campaign and a battery of lawyers, it seems likely that the various
obstacles that might otherwise hinder such an undertaking could be
dealt with.  There would also be substantial sums of money available
to pay for the various expensive procedures that might be needed.

There are a number of other reasons for uploading which have little
if anything to do with personal survival.  There are also a number of
motives for recovering partial information from human brains which
have nothing at all to do with uploading, and in which the consent
of the individual being analyzed would not be a significant
consideration....

dmocsny@minerva.che.uc.edu (Daniel Mocsny) (02/10/91)

In article <Feb.3.00.08.36.1991.28839@athos.rutgers.edu> ems%nanotech@princeton.edu writes:
>The crux of the problem lies in verifying that
>your upload technique has achieved true transference, and not just
>produced an almost perfect copy. If you're wrong then you've just killed
>the original person, no matter how traditional your line of research.

What if the "almost perfect copy" passes a complete battery of 
psychological tests verifying its/his/her competence to testify, and
then swears that you have not just killed the original person?

>Another problem with this line of development is that, by the time
>nanotechnology is on the verge of achieving upload, there will be few
>if any terminal patients around to do the experiment. I can't think
>of any disease or accident where it wouldn't be simpler to repair the
>body using assemblers. Events violent enough to damage a human beyond
>the reach of assembler repair would also be apt to leave nothing
>to be repaired. (Like crashing into the sun, for instance. Pfft! :-)

This is a very good point. The only good reason to consider uploading
would be to reduce one's vulnerability to events violent enough to damage
a human beyond the reach of assembler repair. I'd breathe a bit easier,
so to speak, if I were hardened against ionizing radiation, vacuum,
temperature excursions, 2000 psi overpressure, and 1000 g impacts.

>Imagine a situation where some research group *thinks* they have 
>achieved uploading, but really have copying. The copies themselves
>swear by the technique and advertise it to all their friends and the
>original "templates" aren't around afterwards to point out the mistake. 
>Later, a second research group shows by refined measurement techniques 
>that the first research group was in error. Now aren't you glad you 
>waited?

Perhaps I am glad, but is my nonexistent copy glad? Virtually all of us
are born with at least a few copying errors, yet we may still be glad.

Actually, I don't quite understand how uploading must require destroying
the original. Why can't we use highly accurate non-destructive 3-D
imaging to map the structure and function of the brain? I thought the
destructive upload model was part of a thought experiment to show
that consciousness could be unbroken during upload. By the time we are
able to upload, we should have much greater insight into the bases
of consciousness and "self", permitting the technology to proceed on
firmer theoretical footing.

>Even worse, imagine a situation where the uploading technique works
>sometimes, but is not 100% reliable. (Has there ever been any medical
>process that was 100% reliable?) At 99% reliability, you don't have to 
>upload too many times before the long odds catch up to you.

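The compounding itself is simple enough; a throwaway sketch, taking the
quoted 99% figure at face value (it is not an estimate of anything) and
treating each upload as an independent trial:

for n in (1, 10, 50, 100, 500):
    print(n, 0.99 ** n)    # odds that n independent uploads all succeed
# 1 0.99   10 ~0.90   50 ~0.61   100 ~0.37   500 ~0.007
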
No biological process is 100% reliable either, as many grieving parents
can attest. 

But I don't understand what you mean about "upload[ing] too many times"?
Do you mean that one individual will upload more than once? Or that
when many individuals upload, perhaps a few will die?

I assume you mean the latter. Well, you answer your own question: no
medical process is 100% reliable. Neither is any other technology.
That doesn't stop people from getting out of bed in the morning and
venturing back onto dangerous highways, getting into airplanes, etc.
Indeed, whenever we have a major transportation accident, the rescue
crews arrive at the scene by equally dangerous means.

Doesn't it seem rather insane to drive an ambulance to go pick
up victims of a crash involving the same transportation technology?
What further proof could any driver need of their obvious folly?
Yet this obviously doesn't matter to most people. People will gladly
assume any risk, provided that (1) it is familiar to them, and (2) they
believe the risk is worthwhile.

Unfamiliar risks, like nuclear power today, and uploading tomorrow
(in its earlier phases of development), will be perceived in a very
exaggerated light. Truly lethal but familiar risks, like 
automobiles, don't keep many people excited. This seems like a strange 
example of human irrationality, but it is actually an 
evolutionarily selected survival heuristic: "Assume anything 
unfamiliar is dangerous."

>False uploading, to coin a phrase, will not be the disaster that
>death is today, of course, since a near-perfect copy of the original
>results, with differences undetectable by normal human senses. The
>only real problems would be psychological, for the copy and his/her
>close acquaintances, and legal, since now inheritance laws would come
>into play.

What is anyone going to inherit when nanotechnology obsoletes our
poverty-influenced concepts of property? Also, legal difficulties
are largely the product of the legal profession, which will no
longer exist by the time uploading becomes feasible. (At least not
with the relative power over other segments of society which it
has today. The legal profession is 100% information-based, which means
we will have relegated it almost entirely to software by then.
Our computers will not have agendas inimical to our interests, unless
we make some big design mistakes.)

>Unless some unforeseen difficulty develops, nanotechnology should be
>able to keep the original you in good shape indefinitely, with
>some gradual enhancements with "improved components" as they become
>practical. This is why I think there will be a long hiatus between the
>time we think we have uploading and the time it actually comes into
>safe, reliable, regular use. Impatience is for short-timers.

By "long", do you mean a long wall clock time, or do you mean a time
during which "many" technological developments happen?
The dynamics of exponential progress compress technological 
developments together in the future. E.g., your notion of "gradual
enhancements" could segue quite naturally into "effectively uploading
by stages" very shortly after "catastrophic" uploading becomes
possible. I.e., a person might upload, over a period of time,
without really planning to, merely by upgrading pieces here and there.
At some point, the "upgrade" will no longer consist of meat.
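
A cartoon of that trajectory (nothing here is a real interface; it only
makes the point that there is no single moment of transfer):

def upload_by_stages(person, upgrades):
    # Swap one component at a time. No single step is "the upload",
    # yet after the last swap no original meat remains.
    for old_part, new_part in upgrades:
        person.replace(old_part, new_part)
    return person   # the same continuous person, by construction?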



--
Dan Mocsny				Snail:
Internet: dmocsny@minerva.che.uc.edu	Dept. of Chemical Engng. M.L. 171
	  dmocsny@uceng.uc.edu		University of Cincinnati
513/751-6824 (home) 513/556-2007 (lab)	Cincinnati, Ohio 45221-0171

Jim_Day.XSIS@xerox.com (02/10/91)

According to currently accepted estimates, virtually all of the atoms in the
human body are replaced over a period of seven years by ordinary metabolic
processes.  If personal identity is equated with a given collection of atoms,
then I've been "uploaded" eight times already.  I can recall things that
happened when I was five years old, but are those my memories or someone else's?

Jim Day

szabo@sequent.uucp (Nick Szabo) (02/10/91)

In article <Feb.3.00.08.36.1991.28839@athos.rutgers.edu> ems%nanotech@princeton.edu writes:

>I grant you that the mechanics of uploading will be worked out in 
>exhaustive detail and tested on lower animals, long before any human
>trials are attempted. The crux of the problem lies in verifying that
>your upload technique has achieved true transference, and not just
>produced an almost perfect copy. If you're wrong then you've just killed
>the original person, no matter how traditional your line of research.

How can an upload be verified?  There may be no philosophically
satisfactory or even scientific way to do it.  We can measure
statistical differences in various behavior patterns, but these may
be significantly different simply due to differences in the physical
media.
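
As a toy illustration (made-up numbers throughout): suppose we compare
reaction-time samples from the original and the upload. A two-sample
test can flag a "significant" difference, but a large statistic here
may say more about the new medium's timing than about whether the same
mind came through.

from statistics import mean, stdev
import math

def welch_t(a, b):
    # Welch's t statistic for two independent samples.
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

original = [412, 398, 405, 430, 419, 401]  # reaction times, ms (made up)
upload   = [388, 379, 391, 402, 385, 380]  # faster substrate, same mind?
print(welch_t(original, upload))           # large |t| != different person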


>Another problem with this line of development is that, by the time
>nanotechnology is on the verge of achieving upload, there will be few
>if any terminal patients around to do the experiment. I can't think
>of any disease or accident where it wouldn't be simpler to repair the
>body using assemblers. Events violent enough to damage a human beyond
>the reach of assembler repair would also be apt to leave nothing
>to be repaired. (Like crashing into the sun, for instance. Pfft! :-)

An upload could be done as a copy, without risking damage to the source 
media (in this case the physical body and mind).  It could be
tried thousands of times, and if it fails, so what?  Erase the RAM
and try again.


>Imagine a situation where some research group *thinks* they have 
>achieved uploading, but really have copying. The copies themselves
>swear by the technique and advertise it to all their friends and the
>original "templates" aren't around afterwards to point out the mistake. 
>Later, a second research group shows by refined measurement techniques 
>that the first research group was in error. Now aren't you glad you 
>waited?

On what basis can it be stated that there has been a "copy"
but no "upload"?  What is the difference?


>...
>False uploading, to coin a phrase, will not be the disaster that
>death is today, of course, since a near-perfect copy of the original
>results, with differences undetectable by normal human senses. The
>only real problems would be psychological, for the copy and his/her
>close acquaintances, and legal, since now inheritance laws would come
>into play.

* Insert a few new beliefs here and personality traits there while 
  uploading.  Gives a whole new meaning to "born again".

* Upload one person with the ability to mimic another that was
  also supposedly uploaded.

Yesterday it was misquoting, today it is forged photographs, tomorrow
forged personalities... 



-- 
Nick Szabo			szabo@sequent.com
Embrace Change...  Keep the Values...  Hold Dear the Laughter...