[net.abortion] A Hypothetical Question

js2j@mhuxt.UUCP (sonntag) (10/23/84)

Suppose an AI program were written which, AFTER BEING TRAINED, could pass
the Turing test, submit articles to netnews, and generally be recognized
as an intelligent individual.  Further suppose that the program became
an intelligent individual via a time-consuming process of idiot conversations,
that the program itself was only a LEARNING program, and that other
files contained all of the information which the program had learned.  In
its initial startup state, the program might know a core of <100 constructed
words.
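
   To make the separation concrete, here is a minimal sketch (in C, purely
for illustration - the file name and function names are invented) of a
program whose learned state lives entirely in a separate memory file:

    /* Sketch: a LEARNING program whose learned state lives entirely in a
       separate memory file.  The program itself never changes; only the
       data file grows.  (All names here are hypothetical.) */
    #include <stdio.h>

    #define MEMORY_FILE "memory.dat"   /* the 'person' accumulates here */

    /* Append one learned word/meaning pair to the memory file. */
    void learn(const char *word, const char *meaning)
    {
        FILE *mem = fopen(MEMORY_FILE, "a");
        if (mem) {
            fprintf(mem, "%s\t%s\n", word, meaning);
            fclose(mem);
        }
    }

    /* Count what this copy of the program has learned so far. */
    long words_known(void)
    {
        FILE *mem = fopen(MEMORY_FILE, "r");
        long n = 0;
        int c;
        if (!mem)
            return 0;   /* initial startup state: nothing learned yet */
        while ((c = getc(mem)) != EOF)
            if (c == '\n')
                n++;
        fclose(mem);
        return n;
    }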

Some people, myself included, would consider it criminal to purge the program
or its memory files, as I would consider it a 'person'.
Suppose we make a copy of ONLY the program and port it to another machine.  Is
it wrong to purge the copy of the program?  Remember, this copy has not yet
'learned' anything.  I would say not.  The 'person' is the combination of
program and data files.  A copy of the program is not a 'person' until it
has learned something.
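
   In terms of the sketch above: port only the program, leave memory.dat
behind, and the fresh copy knows nothing.  (Again, illustrative only:)

    /* Continuing the sketch above: run the ported copy on the new
       machine, where memory.dat does not exist. */
    #include <stdio.h>

    long words_known(void);   /* from the sketch above */

    int main(void)
    {
        /* The copy has no data file, so it has 'learned' nothing yet;
           the 'person' (program + data files) remains intact on the
           original machine. */
        printf("words known by this copy: %ld\n", words_known());  /* 0 */
        return 0;
    }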

As you've probably already seen, I wish to draw an analogy between the copy
of the program and a fetus.  Until the human learning machine (the fetus/baby)
has begun to learn, experience, and achieve self-awareness, the analogy holds.

But when does the fetus/baby begin this process?  Well, we're still left with
something to debate about.  It's clear that once it is born the process has
begun.  The baby's mind is learning to hear and see immediately.  Does it
begin earlier?  I'm not sure.  To be safe, I believe that abortions should
be performed only before ~6 months.  It seems quite possible that the
fetus can begin learning to hear before birth.

From the results of the recent poll, in which a not insignificant portion
of the respondents thought that a 'person' was a member of the species
Homo sapiens, I guess that many of you will not agree that this analogy
applies.  I submit that 'a member of the species Homo sapiens' is too narrow
a definition of personhood.  If intelligent aliens from Rigel land tomorrow,
establish diplomatic relations and start making like tourists, will those
of you who think that they aren't persons think that it's all right to kill
them?  It won't even be illegal until special legislation is passed.

Jeff Sonntag
ihnp4!mhuxl!mhuxt!js2j

andrews@uiucdcsb.UUCP (10/24/84)

This analogy is a bit farfetched.  No computer capable of independent
thought exists at this time, nor will one in the foreseeable future.
Computers can only do what they have been programmed to do.  However,
a child can grow beyond his or her initial "programming".

				Brad Andrews

js2j@mhuxt.UUCP (sonntag) (10/25/84)

> This analogy is a bit farfetched.  No computer capable of independent
> thought exists at this time, nor will one in the foreseeable future.
> Computers can only do what they have been programmed to do.  However,
> a child can grow beyond his or her initial "programming".

Whether or not a computer program such as I have described could ever be
written, in our lifetimes or otherwise, bears NO relation to the aptness of
the analogy.  I'll go further - even if we could somehow prove such a program
impossible, that would not affect the validity of the analogy.
   A structural recap of the argument:
	W(x) ::= "It's wrong to kill x."
	T(x) ::= "x is a thinking individual."
        W and T are restricted to operate on arguments which are, or have the
   potential to become, a thinking individual.

	I argued that     T(x) implies W(x), using as example x="a computer
program with the potential to become a thinking individual."
	I argued that NOT(T(x)) implies NOT(W(x)), with the same x.
	It is clear that ((T(x) implies W(x)) and (NOT(T(x)) implies NOT(W(x))))
    implies (T(x)=W(x)).  (More succinctly, ((T => W) ^ (~T => ~W)) => (T=W).)
    NOTICE that this is true WHETHER OR NOT there exists an example of T(x)
    where x="a computer program with the potential to become a thinking
    individual".  The proper place to challenge this argument is the next
    step, where generalization takes place - we use y="a fetus, which may or
    may not yet have become a thinking individual", and claim that if the
    above arguments were accepted with x, they should also be true for y,
    and say T(y)=W(y), or "It is wrong to kill y if and only if y is a thinking
    individual".  A fetus is not a thinking individual when it consists of
    fewer than several thousand cells.  *WHEN* a fetus
    becomes a thinking individual is also open to argument, although
    there must exist some stage of development at which nearly everyone
    can agree "no, it has not yet become a thinking individual."
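
    (For the skeptical: the tautology above can be checked mechanically by
    truth table.  A minimal sketch in C, with the obvious boolean encoding
    of =>, ^, and ~:)

    /* Verify ((T => W) ^ (~T => ~W)) => (T = W) over all truth values. */
    #include <assert.h>

    static int implies(int p, int q) { return !p || q; }

    int main(void)
    {
        int T, W;
        for (T = 0; T <= 1; T++)
            for (W = 0; W <= 1; W++) {
                int premise = implies(T, W) && implies(!T, !W);
                assert(implies(premise, T == W));  /* holds in every case */
            }
        return 0;
    }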

    I invite challenges to the generalization step and debate on the safest
lower limit for the *WHEN*, but objections of the form "but no computer can
think anyway" will no longer be responded to.

    No, please!  NOT net.abortion.predicate_calculus!

Jeff Sonntag
ihnp4!mhuxl!mhuxt!js2j

andrews@uiucdcsb.UUCP (10/26/84)

OK, granted that such a computer could exist - not just one that could
"think", but one that could become aware of its own existence - then yes,
it would be a life and I would want to protect it as such.
However, the human birth process is more than just cloning.
A unique individual is created, whereas copying a computer program
merely makes a clone.  What I am saying is that the two processes
are not the same.

				Brad Andrews