[sci.nanotech] Is uploading suicide?

franz@cs.washington.edu (Franz G. Amador) (01/10/91)

It seems to me that the concept of uploading has a fundamental and
unavoidable flaw.  Namely, there is no way to tell if an uploading has
been successful.  By "successful" I mean that one's consciousness has
been copied into the computer, not merely one's behavior patterns.

Suppose, for example, that you are the last remaining un-uploaded
person on Earth.  All your friends have made the transfer, and their
bodies have been done away with.  "It's great," they say, "come join
us!  You can think ever so much faster, and you can live in an
infinite and wondrous variety of synthetic worlds."  You like the
idea of this new sort of existence, but how do you know that your
friends still really exist?  There's no way for you to tell if what
seems to be their voices coming from the depths of the system
represent real, thinking beings with that vital spark of
consciousness, or are simply perfect computer simulations that act
exactly right, but are just mechanical marionettes.  Are your friends
in the computer, waiting for you, or did they die with their bodies?
From the outside, you cannot tell the difference.

Now suppose that one of them says, "Okay, I'll prove to you that I'm
real," and uses the handy-dandy nanotech assembler system attached to
the computer to generate a new body with the proper brain state to
correspond to his current mental state in the computer.  Newly
corporeal, he walks over and shakes your hand and says, "See?  No
problem.  I'm currently made of just the same stuff as you, and I am
definitely conscious and remember perfectly my life in the computer.
It was great, and I'm looking forward to going back.  We're just
starting a great simulated multi-dimensional party, and everybody's
hoping I'll bring you along."  Unfortunately, even if he's telling the
truth, it doesn't solve your problem.  A perfect simulation would, of
course, create the proper memories, and once made physical again, even
your friend cannot know whether he was really conscious within the
machine or not.

Whether these points bother the reader depends, I suppose, upon
whether he or she believes that a "perfect" simulation will
necessarily create consciousness.  I do not see why it should, but the
answer is immaterial to my argument.  If you decide to risk your
consciousness by uploading, it will be an act of faith without a
guarantee, because there is no empirical means of determining if any
prior uploading has been successful.  Even if you decide to return to
material reality after a fixed time limit, and your uploaded self
doesn't change its mind, you still won't be able to know if you were
conscious while in the computer.

So if you really can't tell the difference while made of flesh and
blood (not meat - meat is dead muscle tissue), and you can return to
physical form whenever you or your simulation wants to, what
difference does it make whether you are really conscious while
uploaded?  It is the difference between life and death.  If your
consciousness is not there while uploaded, then so long as you stay in
the computer, you are dead.  If existence in there really is
preferable to that in the material world, then your simulated self
might choose never to return, and by uploading you are committing
suicide.

Franz Amador
franz@cs.washington.edu
{rutgers,cornell,ucsd,ubc-cs,tektronix}!uw-beaver!june!franz

[I am sure that this argument is a foretaste of the sort of reaction
 that some people are going to have to a number of the fundamentally
 new and different things in the world that nanotechnology is going
 to make possible.  It's incumbent on us, here and now, before that
 happens, to get a good handle on these arguments while this is
 still speculation, and all sides can be thoroughly explored without
 arousing counterproductive passions.
 The basis of this argument (and plenty of other ones we've seen here)
 is a strict dichotomy between life and death.  In the natural world
 this makes lots of sense, because the two states are quite distinct.
 However, even current medical technology blurs the distinction, and
 in the future there will be a complete continuum.
 Mr. Amador's rhetorical device is to continue to identify the term
 "life" with its current strict meaning and call everything else 
 "death", no matter how lifelike it may seem.  Now one assumes that 
 he believes there is some undefined essence associated with a direct
 biological implementation of a person's thought pattern, and he is 
 making the distinction on that basis.
 However, even if that were true, it is argumentative rather than
 enlightening to associate the term "dead" with states other than
 the one it currently denotes.  In particular, the disturbing 
 connotations of the word are associated with the powerlessness,
 uselessness, and total loss and irretrievability which death 
 currently entails.  None of these is true of uploading, and 
 only some of them are true of other possible states (such as
 being copied into the archives).  
 Thus, rather than using an existing value-laden word such as
 "dead" it will be much more useful to use (currently) value-free
 words such as "uploaded", "simulated", "archived", "inactivated",
 and the like, and allow connotations to be attached to them as
 the states they describe become better understood.
 
 --JoSH]

Jim_Day.XSIS@xerox.com (01/12/91)

Just what constitutes an individual has been debated throughout the course of
human history by scientists, philosophers, and theologians.  So far, there
seems to be very little agreement on this issue.  But if uploading can be
accomplished nondestructively, then I would imagine that the original version
will insist that he or she is the one and only real person, no matter how
eloquently the silicon copy may argue the point.

[The silicon copy might argue instead that the concept of "one and only
 real person" was semantically null, and offer him/herself as proof.
 I can't see any logical reason there has to be a "one and only".
 --JoSH
 ps: just proving your point that there's little agreement on the issue!]

morimoto@orion.oac.uci.edu (Michael Morimoto) (01/12/91)

Question to keep you up at nights: By extension of this argument, how
can you know that your friends are really conscious and 'alive' now,
and not just constructs, either by yourself or some _other_?  They may
act "real" but they could be a false 'reality' -- whatever reality
means...

Just a thought,

-- Michael.

[The answer to the question is that there is no prooflike, airtight 
 assurance, but that most people have a predisposition to grant "selves"
 or consciousness, thoughts, feelings, etc., to other humans, and indeed
 to higher animal forms as well.  I would claim that this predisposition
 in human psychology essentially defines what we mean by consciousness,
 in a practical sense, and makes it very likely that any information
 processing system exhibiting sufficient verisimilitude to human
 capabilities and reactions will be accorded consciousness by humans.
 --JoSH]

peb@uunet.uu.net (Paul Baclaski) (01/12/91)

This raises difficult and ancient questions like:

    1.  What is Life?
    2.  What is Consciousness?

For the first question, I have not encountered a single compelling
argument that Life must be DNA based--definitions of life that 
include DNA do it because that is the only example we have at hand.
Because of this, we have Artificial Life for life forms that exhibit
all aspects of the general term Life, but are simulated in a computer.
(At the Artificial Life Conference, a new term popped up:  Real 
Artificial Life, i.e., Artificial Life simulated using Real proteins.)

The second question impinges upon uploading--can an artificial life
form be conscious?  If one accepts that artificial life is just as
alive as real life, then this is easy to accept.  On the other hand,
the DNA bias is strong, so we can introduce the term Artificial 
Consciousness to describe all simulated consciousnesses.

This argument is simply that we have a DNA bias for the general 
terms Life and Consciousness due to their usage for the past 
hundred or thousand years (my OED is at home...).  Because of
this bias, it is a good idea to use new terms like Artificial Life
and Artificial Consciousness.

In the case of converting an Artificial Consciousness to a 
[Real] Consciousness, the Turing Test can be invoked.  The 
upshot of this would be the conclusion that "a difference that
makes no difference is no difference."  I think die-hard dualists
who believe something is lost in up/downloading will end up
like Dr. McCoy on Star Trek, muttering about the transporter (better
not do it *too* often! ;^)

Anyroad, explorations in AI, AL and AC are going to give us much
better ways of thinking about these things, and these ancient 
questions may disappear and be replaced by more specific questions.


Paul E. Baclaski
peb@autodesk.com

jeff@logicon.com (01/12/91)

In article <Jan.9.16.06.37.1991.12305@athos.rutgers.edu> franz@cs.washington.edu (Franz G. Amador) writes:
>It seems to me that the concept of uploading has a fundamental and
>unavoidable flaw.  Namely, there is no way to tell [...] that one's
>consciousness has been copied into the computer, not merely one's
>behavior patterns.

This would be a serious problem, indeed, if there were some objective
way to determine the presence or absence of consciousness other than
by analyzing behavior patterns.

>There's no way for you to tell if what
>seems to be their voices coming from the depths of the system
>represent real, thinking beings with that vital spark of
>consciousness, or are simply perfect computer simulations that act
>exactly right, but are just mechanical marionettes.

If they act "exactly right" it simply doesn't matter (this is the
fundamental premise of the Turing test).  In fact, there is no way to
really tell that you, yourself, are not just some mechanical
contrivance that is programmed to believe that it is alive, conscious,
and sentient.

Any further discussion along this line probably belongs in the
comp.ai.philosophy newsgroup rather than in sci.nanotech.

                           :: Jeff Makey

Department of Tautological Pleonasms and Superfluous Redundancies Department
    Disclaimer: I am just a guest of Logicon.
    Domain: Makey@Logicon.COM    UUCP: ucsd!snoopy!Makey

jgsmith@bcm.tmc.edu (James G. Smith) (01/13/91)

Disclaimer: I'm an immunologist, not a neurologist, so there's probably just
enough fact in the following to confuse everyone.

Creating a perfect neuronal map of a person's brain on a cellular scale
(this neuron connects to these neurons, etc.) will not be sufficient to
duplicate the personality represented by that brain.  Different neurons use
different molecules to transmit signals.  As far as I know, one neuron may
be connected to 10 others, but only 3 of those others may be listening.
Also, the degree of 'listening' depends on the number of receptors for
the neurotransmitter (i.e. some may be listening more closely than others).
Also, personality may be affected by hormones. (Remember PMS?)

In order to duplicate a brain, you will have to know the composition down
to the molecular level.  I find it unlikely we will be able to do that
until the very final stages of nanotechnology.


bfu@ifi.uio.no (Thomas Gramstad) (01/15/91)

This discussion seems to presuppose some commonly held, yet
questionable philosophical assumptions about the mind-body issue:

(1) Epiphenomenalism; the idea that the mind is only an effect of
the brain -- thus, copy the brain (cause) and you will get the mind
(effect).  While the existence of a mind presupposes a brain (or some
other physical/structural equivalent), it is a non-sequitur to conclude
that everything in the mind is caused by the brain -- the opposite
relation -- the mind as causal agent and some neural event as effect
-- is possible also.  It's a two-way street.

(2) Symbolic representation of knowledge; it is assumed that any
knowledge is or can be represented symbolically.  While I'd be hard
put to suggest any other fruitful approach, it should be kept in
mind that this assumption is debatable.  If a representation or
simulation were made, it might be very different from the original.

If humans are born tabula rasa and acquired knowledge is not stored
symbolically, then an artificial replica of the adult brain may be
psychologically empty -- the mind mechanisms would be there, but not
the adult personality, the replica would in effect be a new baby (at
best), not a copy of the original person.  What reasons are there to
believe that structurally replicating the brain would give another
result (psychologically) than cloning?

---------------------------------------------------------------------
Thomas Gramstad                                        bfu@ifi.uio.no
---------------------------------------------------------------------

[I disagree with both points.  Epiphenomenalism is a "solution" to the
 mind-body problem, which arises when you assume that there are two
 different orders of reality, "mind" and "matter", and worry about 
 how the two affect each other.  The philosophical view represented
 here is closer to "naive realism", which assumes that there is only
 one order of reality, the physical world.  Replacing "mind" in this
 view is information, patterns, and information processing systems 
 which can use different representations for the same data.

 "Symbolic repesentation of knowledge" is a phrase which is usually 
 used to talk about AI, and it generally is taken to imply information
 structures that map onto predicate calculus and/or mathematics.
 At the very least, the information in the brain is represented in 
 more distributed, intermixed patterns evocative of "neural nets",
 but one can imagine more subtle encodings yet.  However, none of this 
 means that the information wouldn't be reproduced correctly in a
 sufficiently fine simulation or reconstruction of the brain.

 For those who missed the discussion when it came out, let me 
 recommend Calvin's "The Cerebral Symphony" for a better overview
 than I can give of the state of knowledge about how the brain 
 works.
 
 --JoSH]
 

RDEES@umiami.ir.miami.edu (Matthion) (01/15/91)

In article <Jan.12.12.00.26.1991.24223@athos.rutgers.edu>,
jgsmith@bcm.tmc.edu (James G. Smith) writes:

> Creating a perfect neuronal map of a person's brain on a cellular scale
> (this neuron connects to these neurons, etc.) will not be sufficient to
> duplicate the personality represented by that brain.  Different neurons use
> different molecules to transmit signals.  As far as I know, one neuron may
> be connected to 10 others, but only 3 of those others may be listening.

Just a note: a single neuron could be connected to as many as 60,000 others.

> Also, the degree of 'listening' depends on the number of receptors for
> the neurotransmitter (i.e. some may be listening more closely than others).
> Also, personality may be affected by hormones. (Remember PMS?)
> 
> In order to duplicate a brain, you will have to know the composition down
> to the molecular level.  I find it unlikely we will be able to do that
> until the very final stages of nanotechnology.

The effects of hormones and "listening" can all be seen as the temporal 
adjustments to whatever model you are using for the brain.  There is no reason
that the model would have to simulate or copy the _actual_ structure.  I think
it would be sufficient to simply copy the temporal and "signal strength"
characteristics of the brain's communications, and this could be accomplished
through many mechanisms that are not directly related to neuroanatomical
structure or hormonal process.  It is only necessary to capture the essential
characteristics of these "nerve signal modulators."
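
To make the point concrete, here is a toy sketch in Python (all
constants are arbitrary placeholders, not measured neural values) of a
unit that reproduces only threshold-and-fire timing behavior as a
function of weighted input strength, with no anatomical structure
represented at all:

    # Toy leaky integrate-and-fire unit: models firing *timing* as a
    # function of weighted input strength; no anatomical detail at all.
    # All constants (dt, tau, v_thresh, v_reset) are arbitrary placeholders.
    def step(v, inputs, weights, dt=0.001, tau=0.02,
             v_thresh=1.0, v_reset=0.0):
        """Advance membrane potential v one time step; return (v, fired)."""
        drive = sum(w * x for w, x in zip(weights, inputs))
        v += dt * (-v / tau + drive)   # decay toward rest, plus weighted drive
        if v >= v_thresh:              # threshold crossing counts as a spike
            return v_reset, True
        return v, False

    # Example: a constant drive of 60 units eventually produces a spike.
    v, fired, t = 0.0, False, 0
    while not fired:
        v, fired = step(v, inputs=[1.0], weights=[60.0])
        t += 1
    print("fired after", t, "ms")

Whether an abstraction at this level captures enough of the "nerve
signal modulators" is, of course, exactly the open question.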

As for the whole issue of uploading as suicide, it all seems to come down to
whether you believe that the human soul (ruach, pneuma, spiritus, electro-
magnetic resonance, acquired personality of a human machine, whatever...) can 
exist if implemented on a different substrate.  A hard-core materialist who
can believe in AI should have no problems.  One who doesn't believe in AI will
probably not be convinced, and with good reason (from their perspective).
Questions:
		(1) can a machine support consciousness?
		(2) if so, _must_ this consciousness arise spontaneously or
			could it be placed into the machine?

Undoubtedly much more debate will go before the technology hits us, and gives
a reason for the mental exercise.
-- 
===========================================================================
                        (__)        |    Matthew Augustus Douglas Turner
                ^^      (oo)        |
            ^^^^ /-------\/         |    Department of Something or Other
         ^^^^^  / |     ||          |     College of Arcane Arts (A.A.)
       ^^^^^   *  ||----||          |        The University of Miami
    ^^^^^^^^  ====^^====^^====      |
^^^^^^^^^^^^^/ ^^^^                 |       rdees@umiami.ir.miami.edu
^^^^^^^^^^^^^^^^^^^^^^^^^^          |           ...and elsewhere
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
     Cow Hanging Ten at Malibu      |    Analytical Engines Incorporated
============================================================================

[Brain structure has a host of subtleties, and I for one wouldn't
 upload onto a machine built with some of the simplifying assumptions
 you make.  An example:  Until recently it was assumed that "signal
 strength" at a given synapse was more or less a one-dimensional
 function of firing rates, but within the last year or so, there have
 been discovered some specifically coded messages with more-or-less
 symbolic meanings.  It's as if you discovered radio-frequency
 FM on a channel you thought was only carrying unencoded audio.
 So don't be too sure of those assumptions.
 --JoSH]

lovejoy@alc.com (Alan Lovejoy) (01/16/91)

In article <Jan.9.16.06.37.1991.12305@athos.rutgers.edu> franz@cs.washington.edu (Franz G. Amador) writes:
>It seems to me that the concept of uploading has a fundamental and
>unavoidable flaw.  Namely, there is no way to tell if an uploading has
>been successful.  By "successful" I mean that one's consciousness has
>been copied into the computer, not merely one's behavior patterns.

How do I know that YOU are conscious?  What objective measurement can I 
undertake to reveal that ANY individual is conscious?  Is it not possible
that what I experience as consciousness is unique to me--and different in
kind from what any other "intelligence" in the whole universe experiences?
What proof can you give me that you are not merely a machine that appears
to mimic my internal mechanisms and external behavior, but in which no
"consciousness"--as I experience it--actually exists?  

And why do you believe that other human beings are conscious?  Just because
we all say we are?  Doesn't the fact that we all have so much trouble defining
what consciousness *IS* in a formally rigorous way cause you *ANY* suspicion
whatsoever that all or most of us are lying--or mistaken--about being
conscious?  If we all experience pretty much the same thing, shouldn't we be
able to agree on its description/definition?  

You and I don't share the same molecules, the same body, the same memories
or the same identity.  Yet we accept--mostly--the idea that both of us are
"conscious"--whatever that is.  Why?  Essentially, because of the similarities
between us.  We consist of atoms, organized into the same types of molecules.
Our structure, in fact, is VERY highly co-analogous/similar from atoms to 
molecules to cells to tissues to organs to our bodies as a whole.  Not only 
that, our behavior is highly similar--again from atoms to bodies.  And we
observe that, as a general rule, functional/behavioral similarity correlates 
very highly with structural similarity. I accept that others are conscious
because I know that they are very, very much like me, and my experience 
demonstrates to me that such a high degree of similarity of observable
properties correlates highly with a high degree of similarity of ALL properties,
whether they are observable or not. 

Lacking a "consciousness detector"--a la Star Trek--this "proof by analogy"
is the only way available for deciding who--or what--is conscious.

>Whether these points bother the reader depends, I suppose, upon
>whether he or she believes that a "perfect" simulation will
>necessarily create consciousness.  I do not see why it should, but the
>answer is immaterial to my argument.  If you decide to risk your
>consciousness by uploading, it will be an act of faith without a
>guarantee, because there is no empirical means of determining if any
>prior uploading has been successful.  Even if you decide to return to
>material reality after a fixed time limit, and your uploaded self
>doesn't change its mind, you still won't be able to know if you were
>conscious while in the computer.
>
>So if you really can't tell the difference while made of flesh and
>blood (not meat - meat is dead muscle tissue), and you can return to
>physical form whenever you or your simulation wants to, what
>difference does it make whether you are really conscious while
>uploaded?  It is the difference between life and death.  If your
>consciousness is not there while uploaded, then so long as you stay in
>the computer, you are dead.  If existence in there really is
>preferable to that in the material world, then your simulated self
>might choose never to return, and by uploading you are committing
>suicide.
>
>Franz Amador
>franz@cs.washington.edu
>{rutgers,cornell,ucsd,ubc-cs,tektronix}!uw-beaver!june!franz

The key point at issue is this: is consciousness a function of the physical 
composition of an intelligent machine, or is it a function of the logical
operation of an intelligent machine (that is, the semantic content or meaning 
of what the machine does)?  Specifically, this is what we want to know: is 
consciousness a unique effect of neural networks implemented in a protein 
molecule medium?  Or could it be a unique effect of neural networks?  Or, 
might it simply be a general effect of intelligent machines whose degree 
depends upon the level of intelligence?

What we are really seeking to discover is the level of abstraction at which
the consciousness effect operates. That is, what transformations in the
properties/structures/functions/components of a conscious machine preserve
consciousness, and which do not.  What are the minimal necessary and sufficient
conditions for creating the consciousness effect? 

We know, for instance, that introducing certain molecules into the brain
suspends the consciousness effect produced by that machine--as do other even
more dramatic changes such as halting the blood flow.  We also know that
consciousness is largely independent of the physical and/or temporal  
locations at which a brain machine operates--among other things.  By inductive
analogy, we accept that it is also independent of the identity of the atoms
used as the composition medium of a brain machine (one hydrogen atom appears
to work just as well as another; in fact, there appears to be complete
interchangeability of parts at the molecular level: any molecule of the same
substance is as good as any other).  However, the structures in which those
molecules occur--where each molecule is in relation to the others--are
observably rather important to the consciousness effect.  And yet there is
more than one
molecular structure that a brain machine can have and still produce 
consciousness.  There is, apparently, an infinite set of molecular structures
of brain machines that produce consciousness--and an infinite set that do not.

We also know that there is a very high correlation between changes in the
molecular structure of a brain machine and changes in the precise nature of the
consciousness ("mental state") produced by that brain machine.  It is an
experimentally reproducible fact that changes in conscious state correlate
strongly with changes in the molecular structure of the brain, and that 
induced changes in the molecular structure of a brain modify the consciousness
of that brain.  It is even predictable to various degrees--depending on the 
nature of the change--how a change in the molecular structure of a brain will 
affect the consciousness, or how a change in consciousness will be reflected
by a change in brain molecular structure. 

So why is brain molecular structure so special? For that matter, why is the
molecular structure of anything significant?  Or even better, why is the
structure of anything significant? Now THAT's a fundamental question!

Structure is simply the relations between things.  A relationship is determined
by the similarities and differences between two things.  The full relationship
between two things is a function of the differences in the values of all the
attributes of those things.  The distance relation between two objects is a 
function of the difference in the values of their space-time coordinates 
(location attributes).  To ask how X relates to Y is to ask what are all the
differences and similarities between X and Y.  

A system is a structured collection of interacting components.  The definition
or description of each component includes the factors that distinguish each
component from every other component, as well as the factors that determine
how each component interacts with--if at all--every other component.  In other
words, each component of a system is defined by its relation(ship)s to all the
other components in the system.  So asking why the structure of something is
important is equivalent to asking about the importance of that thing's 
components and their interrelationships--which is another way of talking about
the decomposability/decomposition of a system, and the differences/similarities
between the system's components.

The point I am trying to make is that structure, (de)composition, relationship,
similarity and distinction must be understood as different "aspects" (ways of
looking at) the same thing.  And what is that thing?  Now that's ANOTHER
fundamental question! 

The answer, obviously enough, is reality itself.  Ah, but what is reality?  
That, of course, is the ultimate question.  All that we really do when we 
attempt to define reality is decompose it into components based on our ability 
to detect similarities and/or differences.  We even do this merely by 
decomposing the state of the Universe into "separate" attribute values!  To 
say that the universe has a temperature and a size is to assume that size and
temperature are distinct attributes.

It is just as valid to turn things upside down and claim that Nature is 
actually a set of inherently distinct things, which only appears to form an 
integrated whole--a "universe"--as a result of our mental abstraction processes.
It is neither possible--nor necessary--to determine which of these completely 
inverse views is correct.  The distinction is probably just as meaningless as 
the one between the waves and particles of atomic physics.  One may either
take unity to be the true state of things--in which case decomposition (division
into components based on similarities and/or differences) is merely a semantic 
device, or one may take individuality as the true state of things--in which 
case abstraction (factoring out invariants by finding similarities and 
parameterizing based on differences) is merely a semantic device.  Are you able
to see these two inverse scenarios as the same thing viewed from a different
perspective?  I hope so.

Did you notice that we are getting rather close to a recursive,
self-referential argument?  That tends to happen with really fundamental
questions.
Perhaps it indicates that the question is wrong, meaningless or stupid. Perhaps
it is enough to say that Reality is What Is. Period.

Our investigation of What Is is necessarily conducted by observing it.  To
observe, we use our senses to detect--and our minds to interpret--signals
from our environment.  These signals convey information about the state of the 
world.  Physical law determines a relationship between the state of the world
and the signals generated by the world when in that state.  Actually, physical
law determines the future state(s) of the world as a function of the present
state.  It is our own arbitrary semantic system that differentiates certain
aspects of the state of the world as "signals" that indicate the value(s) of 
other aspects of previous states of the world.  This needs to be emphasized:
the physical mechanism by which information is conveyed to us from our 
environment is actually a process of the universe transitioning to new states
as determined by the previous state and natural law. We extract information
from "signals" by deducing the previous state(s) of the universe that are
implied by the present state of the universe--as represented by those objects
we label "signals."  The "information" carried by the "signal" is whatever
can be "deduced" about previous universal states given the state of the  
"object(s)" or phenomen(on)(a) we use as the basis of our deductive reasoning.
Let us call this the transmission of information by the interpretation of the
states of objects/phenomena as tokens of the previous states of (perhaps other)
objects/phenomena.

The important characteristic of information conveyed in this way--that is,
by "interpretation of tokens"--is that it must be true.  It cannot be 
mendacious.  It can certainly be misinterpreted, but that is an entirely 
different thing.  Why?  Because natural law guarantees what the predecessor 
states can be for any state of the universe--perhaps not always in an 
unambiguous way, but in a deterministic way.  (Yes, I know about chaos--that's 
why I mentioned that things can be ambiguous.  Chaotic behavior is still 
deterministic.  The past and the future are still constrained by the present 
deterministically--just not uniquely).  

It would seem that the distinction we make between such "information" and the
"tokens" that convey that information is rather artificial.  The distinction
serves no purpose.  One can think of these "different" things as actually
being the "same thing" without suffering any negative consequences. To prove
this, one need only observe that *INFORMATION CANNOT EXIST SEPARATELY FROM
THE TOKENS--OR SYMBOLS--THAT "ENCODE" IT*.  ("Aha!", the reader says.  "Now
I begin to see what the last 15 paragraphs might have to do with the supposed 
subject of this article!!!".  The reader might want to ponder the fact that
Knowledge Representation is a very important subcategory of Artificial 
Intelligence research.   Perhaps there is some deep reason for that...).  

Fortunately, the same information can be redundantly encoded by "different" 
tokens and/or symbols.  But to say that the information is no longer encoded 
by any tokens anywhere is equivalent to saying that some prior state of the 
universe is no longer deducible from the present state.

However, it IS possible to lie, to convey false information.  To prove this,
it is merely necessary to compare the public statements of any government with
the facts :-).  Humor aside, it is a well known fact that human beings, and
perhaps some apes/chimpanzees, deliberately send false messages.  Distinguishing
between the "necessarily true" information conveyed by physical tokens and
the "NOT necessarily true" information conveyed by intelligent/conscious
machines is rather important.  At least, I think so :-).

What is it that physically distinguishes tokens--which convey necessarily true 
information about the state of the world--from symbols--which convey 
not-necessarily true messages from one intellect to another?  We have already
determined that they are different logically (functionally) in terms of their
dependability.  What is the mechanistic, physical difference that motivates
the difference in behavior?

The difference does NOT lie in the physical medium in which the tokens or
symbols are manifested.  The words on this page--which are symbols--are composed
of matter/energy.  You read the words by using your eyes to detect the photons
that are reflected off the page--or perhaps emitted by excited phosphors from
a CRT screen. Your eyes/nervous system/brain deduce what letters are on the
page by interpreting the information conveyed as tokens by these photons.

But the information encoded by those letters and words is NOT deduced by
reliance on natural law to compute all possible previous states of the universe
that are implied by the present state.  The meaning of these symbols does not
depend on natural law, but rather on mutual agreement between the sender and
the receiver of the message.  In effect, they create their own "law" which
defines the relationship between the symbols and their referents.  Such 
"invented" laws only hold to the extent that parties to the agreement that
creates the laws abide by the agreement.  

The universe enforces its own laws, because its laws are really defined by 
whatever it does.  Fortunately, those laws allow both for predictability and 
unpredectability.  Laws invented by agreement always involve adding more 
"predictability" or "symmetry" than nature does to the dynamics of some system.
This can be done by taking local action, which is possible because some
symmetries of nature can be broken locally.  Local symmetry breaking means
that "invented" symmetries can be locally enforced.

So symbols can refer to that which is not, unlike tokens, because the 
"symmetries" (agreements) on which they depend are only valid locally.  There
exist regions of space-time where those symmetries do not hold.  The map is
not the territory.  A computer simulation of a hydrogen molecule is NOT a
hydrogen molecule, but merely a manipulation of SYMBOLS that represent the
mathematically significant components of a hydrogen molecule.

Consider, however, this question: when a video camera records a picture on
video tape, is the information on the video tape recorded symbolically 
(possibly false) or as tokens (necessarily true)?  Now consider the same 
question with respect to memories recorded in a brain machine.  The relevance
of all this to the original subject should now be starkly clear, I think.

Although the fact that brain machines communicate--even internally--using
symbols which can signal falsehoods proves that brain machines do store
at least some information symbolically, that fact is not sufficient to prove
that the mechanism of consciousness is a function of symbolic information
processing.  However, if it can be shown that all information stored and
processed by brain machines is symbolic, it irrefutably follows that 
consciousness is a function of symbolic information processing--just like
the processing that digital computers do!

So then, are memories symbols or tokens?  If memories can be false, they
are symbolic--by definition.  Need I say more?



-- 
 %%%% Alan Lovejoy %%%% | "Do not go gentle into that good night,
 % Ascent Logic Corp. % | Old age should burn and rave at the close of the day;
 UUCP:  lovejoy@alc.com | Rage, rage at the dying of the light!" -- Dylan Thomas
__Disclaimer: I do not speak for Ascent Logic Corp.; they do not speak for me!

[It should be pointed out that lying (and cheating and stealing) are not
 necessarily functions of consciousness, as witness the viceroy butterfly,
 the cuckoo, etc.  More specifically, sending false signals is likely to
 evolve in any situation where there is an agent which (a) does significant 
 cognitive reduction and (b) has interests opposed to those of the agent
 evolving the deception.  (a) simply means that you can't fool, e.g.,
 plants, but only things which make what we might call decisions.  
 You could arguably call that symbolic, but I don't.  A housefly has a 
 fairly well-understood neural circuit that connects its visual inputs
 directly to a stimulus to flight.  This circuit can be "fooled"--that
 doesn't mean the fly is doing anything symbolic.
 --JoSH]

cage@sharkey.cc.umich.edu (01/16/91)

In <Jan.12.12.00.26.1991.24223@athos.rutgers.edu>,
 jgsmith@bcm.tmc.edu (James G. Smith) claims:
>Creating a perfect neuronal map of a person's brain on a cellular scale
>(this neuron connects to these neurons, etc.) will not be sufficient to
>duplicate the personality represented by that brain....

Backing it up with this assertion:

>In order to duplicate a brain, you will have to know the composition down
>to the molecular level.  I find it unlikely we will be able to do that
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>until the very final stages of nanotechnology.
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you are concerned with atomic abundances, you can do it NOW.
I have a friend working with ion-probe (microscopy?), in which
a beam of ions is aimed at a sample and the atoms which come out
of the sample are measured for composition by mass spectrometric
techniques.  By varying the aim, energy and angle of impact of
the ion beam, the sample's composition can be measured over areas
small enough to look at the details of IC chip construction and
at different depths.  This is already being used to probe the
insides of cells.

If the brain in question is already being taken apart for
transmission electron microscopy, using fluorescent antibodies
or ion probes or whatever to look at the chemical composition
is no problem.

We are closer than you think.
--
Russ Cage	Ford Powertrain Engineering Development Department
Home:  russ%rsi@sharkey.cc.umich.edu
Member:  HASA, "S" division.

schumach@convex.com (Richard A. Schumacher) (01/16/91)

For that matter, how do we know that it's us that wakes up
each morning? All we have for evidence is the memory of
having gone to sleep the previous night. But this mere
memory, identical to what perfect copies of ourselves would
have after being uploaded (or stepping out of the transporter),
is obviously not sufficient for many readers of this thread...

[Now we're getting somewhere.  Just to show yourself what an illusion
 consciousness really is, think back over the previous 20 minutes
 (whenever you read this).  You'll be amazed how little of the time
 you're actually self-aware.
 --JoSH]

RDEES@umiami.ir.miami.edu (Matthion) (01/16/91)

JoSH (and everybody else)--

	This is basically a reply to your comment on my last post on
uploading.  I just want to clear up what I said.  I do not suggest that we
build a simple model of the human neural system (HNS) and try to upload into
it, but instead suggest that there may be a way to model the details of the 
HNS in a system that has a completely different substrate.  That is, the 
underlying mechanism could be very different, but could still produce the 
needed complexity and order to "host" human consciousness.  The comment was 
on the ability to use models of a different essential character, not on 
simplifying the system.  Sorry for any confusion.
	Another comment about uploading as copying.  The statement was made 
earlier in this thread (sorry...I don't remember who) that true uploading
would require some shift in the person's consciousness during the process,
or else we may simply be making a backup copy.  The idea came up a little
later (via JoSH) that if there are copies of a book still around, we
do not consider the book to be lost.  This implies certain assumptions about
self-awareness.  If a person does not believe in self-awareness as a 
phenomenon _sui_generis_ but instead as some product of the system as a 
whole, then I suppose that each copy is valid as equivalent to the "master."
BUT, that particular "stream" will have ended, and that individual "you" will
have ceased to exist.  That "you" is dead.  Any knowledge that it acquired, and
did not leave a spare copy of, is gone.
	In my opinion, the interesting question is whether you can keep a
running copy of experience as it happens, and then reactivate this after
a body has been destroyed.  If you can, did the conscious stream end and a
new one start (the _sui_generis_ position)?  Or is there only a small gap in
an otherwise continuous experience?


===========================================================================
Matthew Augustus Turner          |          Department of Something or Other
RDEES@Umiami.ir.miami.edu        |          Analytical Engines Incorporated
============================================================================

mvp@hsv3.uucp (Mike Van Pelt) (01/22/91)

In response to article <Jan.14.20.46.38.1991.7349@athos.rutgers.edu> the moderator writes:
>[This assumes you don't believe that the essence of a mind is the
> information.  We don't think of a book as "lost" if there are
> plenty of copies left.  I wish someone could specify just what 
> is about a person that is lost if the information exists to
> create a copy that is as close to the original as the same
> physical body would have been the next day.   --JoSH]

OK, a gedanken experiment.  Dr. Zarkov makes a copy of you.  Now,
he points a gun at your [original] head.  What is your reaction?

1) No problem.  Let him shoot, there's a backup.

2) Both of you charge him at once; one of you can get him
   before he can shoot you both.

   2b) Are you *sure* you and your copy will make this
       decision simultaneously?

3) Does it make a difference if the gun is pointed at you [copy]?

4) Why?
-- 
Have you ever noticed that the New Agers             Mike Van Pelt
were never peasants in their "past lives"?           Headland Technology
Always dashing princes and daring knights --         (was: Video Seven)
What balderdash! -- Eric Green quoting Doonesbury    ...ames!vsi1!v7fs1!mvp

[(1) Obviously not.  I wouldn't want someone to kill a copy of me,
 or my brother, or my friend.  
 (2) probably.  One would expect a certain esprit de corps among copies
 of a single individual.  Indeed, I'd expect whichever instantiation
 the gun was not pointed at to rush him.
 As long as the copying process is of a high-enough fidelity, I can't
 see distinguishing between the original and the copy--both have equal
 legitimacy as continuations of the original identity.
 --JoSH]

hibbert@xanadu.com (Chris Hibbert) (01/22/91)

John Papiewski asked why he would consider an uploaded version of
himself or his friends to be the same person.


You should really read Hans Moravec's book: "Mind Children."  The
particular scenario he postulates for uploading is that your brain is
analyzed and simulated a layer at a time, and you get to try out each
layer and decide if it's good enough before it gets destroyed in order
to provide access to the next layer.  The only technical ability you
need to believe in is the ability to insert signals non-destructively
at a depth greater than or equal to the depth at which you can
non-destructively analyze the functionality and state.  Given how small
our nano-agents will be compared to neurons, that
doesn't seem too hard.
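
In outline, the procedure has the shape of the following toy sketch in
Python.  Everything in it is a stand-in--"layers" are just lists of
numbers, "analysis" is copying with noise, and "approval" is a numeric
tolerance check--so it illustrates only the verify-before-destroy loop,
not any real instrument:

    # Toy sketch of Moravec's layer-at-a-time protocol: simulate each
    # layer, verify it against the original, and only then destroy it.
    import random

    def analyze_layer(brain, depth, noise=0.05):
        """Non-destructive scan: returns a noisy model of one layer."""
        return [x + random.uniform(-noise, noise) for x in brain[depth]]

    def approves(original, model, tol=0.01):
        """Stand-in for the subject judging the side-by-side trial."""
        return all(abs(a - b) <= tol for a, b in zip(original, model))

    def upload(brain):
        simulator = []
        for depth in range(len(brain)):
            model = analyze_layer(brain, depth)
            # Re-scan (refine) until the subject approves the match,
            # and only then perform the irreversible step.
            while not approves(brain[depth], model):
                model = analyze_layer(brain, depth, noise=0.001)
            simulator.append(model)
            brain[depth] = None        # destroy the layer just replaced
        return simulator

    print(upload([[0.1, 0.2], [0.3, 0.4]]))

The essential design point is that the destructive step never precedes
the subject's own acceptance of the substitute.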

Now what you need to do is figure out why you won't trust a simulation
of your brain when you can check each piece of it against the real
thing as it's being put together.  Remember the ship that was rebuilt
one plank at a time?  If all the pieces work the same, why won't it
still be you?

I'm not going to be first, but if my friends who upload first have the
same personalities and behavior as they had before (to the best of my
ability to tell) and report no loss of thinking skill, then I'll be
willing as soon as I get tired of climbing rocks and playing
volleyball in the unenhanced-human division.

Chris

landman@eng.sun.com (Howard A. Landman) (01/22/91)

In article <Jan.9.16.06.37.1991.12305@athos.rutgers.edu> franz@cs.washington.edu (Franz G. Amador) writes:
>It seems to me that the concept of uploading has a fundamental and
>unavoidable flaw.  Namely, there is no way to tell if an uploading has
>been successful.  By "successful" I mean that one's consciousness has
>been copied into the computer, not merely one's behavior patterns.

Raymond Smullyan has written a number of entertaining and enlightening
essays on and around this subject.  Here's a thought experiment: suppose
someone invents a potion that destroys "consciousness" while leaving
"behavior patterns" intact.  (Mr. Amador insists that it must be
possible to have one without the other.)  A friend of yours takes the
potion.  You ask her how she feels.  She answers "Nothing's changed!
This potion doesn't work!" as indeed she must (since that's what her
behavior pattern would have been had she been conscious).  You try to
convince her that she is mistaken, the potion is scientifically proven
to eliminate consciousness, but she stubbornly insists that she is
still conscious, arguing that she is clearly self-aware, able to respond
to both you and her environment, remember everything she knew before,
feel the same feelings, etc.  Patiently, you explain why she is wrong.
(What *exactly* do you say?)  Finally, she gets disgusted and storms
off in anger.

What's wrong with this scenario?  Is such a potion inherently just not
possible?  Might it be because the distinction between consciousness
and behavior is a false one?

Now replace taking the potion with being "uploaded".

Now suppose that after your friend uploaded, the company making
the machines issues a statement, saying that their first-generation
machine sadly only uploaded behavior patterns, but their new improved
machine can upload consciousness as well.  People uploaded into either
version behave exactly the same, however.  Can you describe a way to
tell one machine from the other?  (How should the government, or even
Consumer Reports, evaluate their claim?)

And, suppose both potion and uploading exist: in what ways would
a person who took the potion and then uploaded differ from a person
who just uploaded without taking the potion?

In one form of Buddhist logic, any distinction in a syllogism must
be qualified by two examples showing that the distinction is a
meaningful one.  This prevents you from wasting time proving useless
things (such as "every even prime greater than 2 is an elephant
with less mass than a proton"!).  For example:

	Where there's smoke, there's fire.
	Here there is smoke.
		(Like in a kitchen)
		(Unlike in a lake)
	Therefore, here there is fire.

I challenge Mr. Amador to give two examples, one of behavior
patterns with consciousness and the other of the same behavior
patterns without consciousness.

--
	Howard A. Landman
	landman@eng.sun.com -or- sun!landman

lovejoy@alc.com (Alan Lovejoy) (01/22/91)

In article <Jan.15.17.16.30.1991.24295@athos.rutgers.edu> lovejoy@alc.com (Alan Lovejoy) writes:
>Consider, however, this question: when a video camera records a picture on
>video tape, is the information on the video tape recorded symbolically 
>(possibly false) or as tokens (necessarily true)?  Now consider the same 
>question with respect to memories recorded in a brain machine.  The relevance
>of all this to the original subject should now be starkly clear, I think.
>
>Although the fact that brain machines communicate--even internally--using
>symbols which can signal falsehoods proves that brain machines do store
>at least some information symbolically, that fact is not sufficient to prove
>that the mechanism of consciousness is a function of symbolic information
>processing.  However, if it can be shown that all information stored and
>processed by brain machines is symbolic, it irrefutably follows that 
>consciousness is a function of symbolic information processing--just like
>the processing that digital computers do!
>
>So then, are memories symbols or tokens?  If memories can be false, they
>are symbolic--by definition.  Need I say more?

To which JoSH comments:

>[It should be pointed out that lying (and cheating and stealing) are not
> necessarily functions of consciousness, as witness the viceroy butterfly,
> the cuckoo, etc.  More specifically, sending false signals is likely to
> evolve in any situation where there is an agent which (a) does significant 
> cognitive reduction and (b) has interests opposed to those of the agent
> evolving the deception.  (a) simply means that you can't fool, e.g.,
> plants, but only things which make what we might call decisions.  
> You could arguably call that symbolic, but I don't.  A housefly has a 
> fairly well-understood neural circuit that connects its visual inputs
> directly to a stimulus to flight.  This circuit can be "fooled"--that
> doesn't mean the fly is doing anything symbolic.
> --JoSH]

I am not sure I understand what motivated this comment.  It seems something
of a non sequitur, and I am concerned that JoSH may not have completely
understood the original posting (which was from me, in case you did not
notice).  Of course, it is my responsibility as the writer/sender of the
message to formulate it in such a way that it is understood.  

So let me amplify some things that I feel--after considering JoSH's comment,
and rereading what I wrote several times--might be more clearly stated.

To *formally* define what a symbol is guarantees that everyone is talking about 
the same subject.  It provides a "discovery procedure" for finding symbols
that can be applied by anyone anywhere and produce the same results.  The 
justification for the definition I use is an important subject, but perhaps
that should be addressed elsewhere.

It would be possible to infer from my posting that there is some absolute
dichotomy between tokens (from which information can be deduced based on
natural law) and symbols (from which information can be deduced by assuming 
that certain "laws," "invariants," or "symmetries" were operating even though
that assumption is not guaranteed to be true by natural law).  I wish to
explicitly disabuse the reader of any such notion.  Instead, conceptualize
that there is a continuum stretching between pure symbolism and stark reality.

Information implies choice.  To send me a signal, you must be able to choose 
between alternate actions.  That which you must do willy-nilly is not 
informative (that is, nothing new is learned by the observer).  In order to 
encode a signal, you must perform optional actions.  In other words, when the 
sole content of the message is the tokens themselves--the measurable, physical 
embodiment of the message--then the message is not symbolically encoded.  
Symbolic encoding exists when tokens mean more than just themselves.  A thing 
is only a token for itself--not a symbol.  It is only a token for those things
NECESSARILY TRUE given the fact of the existence of the thing-token.  But it 
*MAY BE* a symbol for any and all *other* things that are not guaranteed-by-
natural-law antecedents, consequences or co-causalities of the thing-token.    

Let us return to the question of the video camera and the video tape.  Is the
information on the video tape recorded as symbols or as tokens?  You may think
that just about everyone would answer this question more or less identically.
You would be wrong.  It can be fun to start an argument by asking this question
of a group, and watching people take sides and thrash back and forth.

Obviously, EVERYTHING happens because of the operation of natural law.  So why 
is not all information conveyed by tokens--based on the definition of token?
The answer is because 1) natural law does not permit one to *uniquely* compute 
the actual previous state(s) of the universe, and 2) the previous (or next)
state of the universe depends upon the state of the universe as a whole.  
Therefore, deducing a unique previous state of the universe from a token is not
actually practical.  One must make certain *ASSUMPTIONS* about the past history
of a token in order to "deduce" what object or phenomenon produced that token. 
What happens to a photon depends on its environment, which varies.  

Consider how some extraterrestrial technical culture might try to figure out
what information--if any--was on a video tape produced by our culture.  It is
certainly true that the video camera/recorder/tape system operates by natural 
law, and that therefore what gets recorded on the video tape by the camera can 
be predicted by applying natural law--and more importantly, it is also true that
the inverse calculation can be performed: one can "deduce" what the video 
camera "saw" by examining the "tokens" recorded on the tape.

However, what gets recorded on the tape is NOT just a function of what the
camera was pointed at and the physics of the tape.  It is also a function of
the mechanisms in the camera and the recorder.  There is more than one way
of implementing a camera/recorder system, even ones utilizing significantly 
different physical effects/phenomena.  Video tape players are able to recover
the information on the tape by *ASSUMING* that the tape was recorded--and the
image was encoded--by one of the many possible techniques for doing so.

There is a wide variability in the quality, magnitude and quantity of the
assumptions that must be made in order to interpret the significance of a token
correctly.  As the quality, magnitude and quantity of such assumptions 
increases, so does the degree to which the information conveyed by the token 
is symbolic rather than "inherent" or "necessarily true."  Conversely,
the fewer assumptions that must be made in order to interpret the significance
of a token, the less risk one takes that the information deduced from that
token will be false.
 
It's a question of degrees of freedom.  The more arbitrary (unconstrained)
the relationship between information content and its carrier, the more the 
carrier of the information acts as a symbol. 

One might imagine that the extraterrestrials attempting to decipher our
videotape might feel somewhat inclined to posit that the information on the
tape was stored symbolically.  There are just so many ways it could have been
encoded!!!  

And so it is with the brain.  There is more than one way to symbolically
encode the same information.  The reason we have not yet deciphered the code
used by the brain for storing memories is precisely because it is symbolic.
The relation between a symbol and its referent is arbitrary--that is, the symbol
could just as easily refer to something else, or the referent could just
as easily be referenced by some other symbol.  
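
A trivial demonstration of that arbitrariness, sketched in Python: the
same message survives re-encoding under any invertible codebook, so
nothing about the tokens themselves fixes the referent.

    # The same information under an arbitrary, invertible codebook:
    # the symbol-referent mapping is pure convention.
    import random
    import string

    message = "memories are symbols"
    alphabet = list(string.ascii_lowercase + " ")
    shuffled = alphabet[:]
    random.shuffle(shuffled)                  # an arbitrary "invented law"
    encode = dict(zip(alphabet, shuffled))
    decode = dict(zip(shuffled, alphabet))

    ciphertext = "".join(encode[c] for c in message)
    assert "".join(decode[c] for c in ciphertext) == message
    # Either alphabet carries the message equally well; only the agreed
    # codebook relates tokens to referents.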

The actions of the neurons in JoSH's housefly only have meaning in the context
of the housefly as a whole.  The meaning of the positions of the molecules 
in the synapse of some housefly neuron depend very much on the fact that they 
are where they are.  In some other, arbitrary context, those same molecules 
either mean something else or nothing at all.  To "decipher" the meaning of
those molecules, one must *ASSUME* what their context is--or was.  So the
information in the housefly neural network is symbolically encoded.

The bad news about symbolically encoded information is that it can be false.

The good news about symbolically encoded information is that it can exist 
as multiple copies encoded in an infinite number of ways.                                               

-- 
 %%%% Alan Lovejoy %%%% | "Do not go gentle into that good night,
 % Ascent Logic Corp. % | Old age should burn and rave at the close of the day;
 UUCP:  lovejoy@alc.com | Rage, rage at the dying of the light!" -- Dylan Thomas
__Disclaimer: I do not speak for Ascent Logic Corp.; they do not speak for me!

kathy@uunet.uu.net (Kathy Vincent) (02/10/91)

hibbert@xanadu.com (Chris Hibbert) writes:

>John Papiewski asked why he would consider an uploaded version of
>himself or his friends to be the same person.

>Now what you need to do is figure out why you won't trust a simulation
>of your brain when you can check each piece of it against the real
>thing as it's being put together.  Remember the ship that was rebuilt
>one plank at a time?  If all the pieces work the same, why won't it
>still be you?

Reminds me of the Mathematical Bridge at Cambridge University in
England.  The story I heard was that it was originally built (a couple
hundred years or so ago) without nails, held together and supported
only by the pure principles of physics, meticulously applied.

Then not all that long ago, some person(s) got the bright idea of
taking it apart to see how it was assembled and then reconstructing it.
They were unable to return the bridge to its original state successfully,
even though they, presumably, carefully observed the bridge's construction
as they took it apart.  The bridge is still there, but the principles
of physics have some assistance from a few nails.

	I can think of several possible, relevant morals to the story.