[soc.religion.christian] Who will teach morals to computers?

mclarke@ac.dal.ca (10/18/90)

Time and progress march on... Technology advances, and society is adjusting
to new technologies and incorporating them.

There are two evolving technologies which will gradually change the
way society makes day-to-day choices: artificial intelligence and
parallel processing.  Essentially, this means that within about 20 years
computer technology should be able to emulate the human reasoning process.

When these technologies mature, corporate and government choices can be made by
computers.  They will be more efficient, faster, more accurate, and much
cheaper.  Professional jobs will be done by computer, including
medical, legal, and similar positions.

The question is " WHO is going to teach the computers human values, morals,
etc."  Christian bodies had better start investing $$$ in research now to
avoid future crisis.

Michael Clarke
Halifax, N.S.

De Colores

djohnson@ucsd.edu (Darin Johnson) (10/22/90)

In article <Oct.18.03.18.07.1990.1192@athos.rutgers.edu>, mclarke@ac.dal.ca writes:
> There are two evolving technologies which will gradually change the
> way society makes day-to-day choices: artificial intelligence and
> parallel processing.  Essentially, this means that within about 20 years
> computer technology should be able to emulate the human reasoning process.

20 years ago many people predicted this would be the current state of
affairs now...  Most researchers now would rather pick a name different
from 'artificial intelligence', since the term leads many people to
assume that the goal of this research is to produce the sort of self-
aware computer common in science fiction.  Instead the major emphasis
is on producing computers with more intelligent behavior than they have
now.  For example, a program that plays chess would have more intelligent
behavior by exploring a few good moves than by exhaustively searching
all possible moves.  Of course, the word "intelligent" itself has a very
vague meaning.  In the past, being able to rapidly solve certain math
problems was considered 'intelligent'; however, after computers were able
to solve these problems, the definition changed.  One common definition
of intelligence in the field is "anything a human does better than
a computer".

> When these technologies mature, corporate and government choices can be made by
> computers.  They will be more efficient, faster, more accurate, and much
> cheaper.  Professional jobs will be done by computer, including
> medical, legal, and similar positions.
> 
> The question is: WHO is going to teach the computers human values, morals,
> etc.?  Christian bodies had better start investing $$$ in research now to
> avoid a future crisis.

A computer that needs morals is so very distant and improbable that
investment now is useless (if it ever happens, who's going to remember
who gave whom money 200 years ago?).  If man-made machines ever get this
advanced, they will still be the tools of whoever uses them.  Ethical
training should start with the users first.

(personally, I don't think a machine generally accepted as 'thinking'
will ever exist)
-- 
Darin Johnson
djohnson@ucsd.edu

[Certainly we are some time away from a system that is as autonomous
in as large a range of activity as a human being.  I refuse to
speculate whether such a thing will ever exist.  However we already
have systems that are goal-directed, and some work is going into a bit
more autonomy in choosing goals.  We already have to start thinking
about ethical implications of how computers are used.  (Those
interested in such issues should be reading comp.risks.)  I think
rather than having one day when we suddenly realize that computers are
intelligent and we have to start teaching them ethics, the nature of
the ethical issues involved with their use will slowly change as they
become more autonomous and use higher-level goals.  If we're lucky,
the changes will happen slowly enough that we don't get taken by
surprise.  --clh]

mgobbi@cs.ubc.ca (Mike Gobbi) (10/22/90)

No matter how sophisticated programs and computers get, they will never be
conscious as we understand the term (I am a computer science student and have
studied this question in one of my courses, so I am pretty confident in my
statement).  The programs will no more have "morality" than does an animal
trap.

  Some PEOPLE will write programs that come to "immoral" conclusions, and others
will write software that comes to "moral" conclusions.  Nobody teaches the
computer what is right or wrong -- that is built in by the designer and the
users.

  I suspect that the decisions these computers make will be EXACTLY the same
decisions that humans in the same situation would make (only faster).  Thus,
if you want to enforce morality, you have to ensure that the laws are moral.

  Coincidentally enough, this is already being done.  The abortion issue and
euthanasia issue are two medical problems that spring to mind.  How can we
complain that a computer has issued an immoral judgement when we allow a
person to issue the same judgement now?  On the legal side there are many
questions relating to homelessness, joblessness, and oppression.  Just as in
the medical profession, there is no consensus here on what is correct.

  I think worrying about what computers MIGHT do is far less important than
worrying about what society IS doing right now.
--
     __
    /..\      In quest of knowledge....
 --mm--mm--         Mike Gobbi

arm@Neon.Stanford.EDU (Alexander d Macalalad) (10/23/90)

In article <Oct.22.02.29.36.1990.20966@athos.rutgers.edu> mgobbi@cs.ubc.ca (Mike Gobbi) writes:
>No matter how sophisticated programs and computers get, they will never be
>conscious as we understand the term (I am a computer science student and have
>studied this question in one of my courses, so I am pretty confident in my
>statement).  The programs will no more have "morality" than does an animal
>trap.

Hm.  I'm not sure how confident I am in your statement.  First, I'm not
convinced that computer science is the most appropriate area to study
consciousness.  Second, I'm not sure if anyone understands the term, let
alone that there is some general consensus about it.  Third, I'm not
quite sure if consciousness or the lack thereof has anything to do with
the morality of computer programs.

>  I suspect that the decisions these computers make will be EXACTLY the same
>decisions that humans in the same situation would make (only faster).  Thus,
>if you want to enforce morality, you have to ensure that the laws are moral.
>
>  Coincidentally enough, this is already being done.  The abortion issue and
>euthanasia issue are two medical problems that spring to mind.  How can we
>complain that a computer has issued an immoral judgement when we allow a
>person to issue the same judgement now?  On the legal side there are many
>questions relating to homelessness, joblessness, and oppression.  Just as in
>the medical profession, there is no consensus here on what is correct.

I think here we are closer to teasing out some of the issues associated with
decision making systems (which may or may not be conscious).  (I shift from
intelligent systems to decision making systems because ethics comes into
play in decision making, and not intelligence per se.)  What is a moral
decision and an immoral one?  Is morality simple enough that we can encode
it, legislate it, prescribe it?

More and more decision making systems are being introduced, usually in the
medical domain, although these systems are generally careful to leave the
final decision in the hands of the physician.  It is instructive to look at
the ethics and value systems encoded into these systems.  At the heart of
these systems is an essentially utilitarian ethics, where the system strives
to maximize X, where X could be "quality of life", "happiness", etc.  The
values of the decision maker are measured in terms of X to get a utility
function, which can then be plugged in to arrive at the appropriate
decision.
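
To make that scheme concrete, here is a small sketch in Python, with
made-up options, outcomes, and numbers; it is not drawn from any actual
medical system.  The idea is simply to elicit a utility for each
outcome, then recommend the option with the highest expected utility:

# Hypothetical utilities elicited from the decision maker, where X is,
# say, "quality of life" on a 0-to-1 scale.  This translation from
# values to numbers is exactly the step where something may be lost.
utility = {"full recovery": 1.0, "partial recovery": 0.6, "no change": 0.2}

# Each option maps outcomes to estimated probabilities.
options = {
    "surgery":    {"full recovery": 0.7, "partial recovery": 0.1,
                   "no change": 0.2},
    "medication": {"full recovery": 0.4, "partial recovery": 0.5,
                   "no change": 0.1},
}

def expected_utility(outcome_probs):
    # Weight each outcome's utility by its probability and sum.
    return sum(p * utility[o] for o, p in outcome_probs.items())

# The system recommends the option that maximizes expected utility,
# leaving the final decision in the hands of the physician.
for name, probs in options.items():
    print(name, expected_utility(probs))
print("recommended:", max(options, key=lambda n: expected_utility(options[n])))

Note that both of my reservations below show up directly in this code:
the utility dictionary is the lossy translation from values to numbers,
and choosing "quality of life" as X was itself a value judgment.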

Although better than a purely algorithmic approach, because it tries to take
into account the values of the decision maker, this utilitarian approach
leaves much to be desired, at least from my point of view.  For one thing, 
I can't help but feel that something is lost in the translation from
values to utility function.  Plus, the choice of X is itself something of
a value judgment.

>  I think worrying about what computers MIGHT do is far less important than
>worrying about what society IS doing right now.

Hopefully, though, we can learn something about morality through decision
making systems, just as we are learning something about intelligence from
intelligent systems.

Alex Macalalad

johnb@gatech.edu (John Baldwin) (10/25/90)

In article <Oct.23.04.12.02.1990.11464@athos.rutgers.edu> arm@Neon.Stanford.EDU
 (Alexander d Macalalad) writes:

>Is morality simple enough that we can encode
>it, legislate it, prescribe it?

The obvious and simple answer to this is NO.  This was the faulty thesis
used by the Pharisees when they began developing the verbal tradition...
they sought to remain holy by simply encoding moral decisions in a
sufficiently-complex set of verbal laws, which, when followed, would
"guarrantee" the nonviolation of the written Law (i.e. God's law, which
is the one that *really* mattered).

In doing this, they lost sight of the whole reason Jehovah gave us the
Law to begin with:  if there were some way to encode "holiness" in a
set of "programmed" commands, wouldn't God be able to do that?

No, the purpose of the Law (as has been oft-stated before, ad infinitum)
is to SHOW mankind, by vivid illustration, how we fall short of the Glory
of the Almighty God.  As one poster put it, it was "a ministry of death."

Back to the original discussion thread [re: AI and Christianity],
I, too, find it more important to be concerned with people, today, instead
of technology tomorrow.  By the way, my work heavily involves AI, and I
think one of the reasons God has me here is to ensure that an element
of properly-focussed concern is present.

Incidentally, go back to the last chapter of the book of Matthew;
you'll notice that Jesus didn't leave us with the command to "make sure
everybody is moral."  He commanded us to "make disciples of all the nations,
baptising them in the name of the Father, and of the Son, and of the Holy
Spirit, teaching them to observe all things that I have commanded you..."

While observing Jesus' commands certainly implies morality, it was the
state of each person's RELATIONSHIP with Him that was the crucial idea.
Thus, we "make disciples."   Not (just) believers, or "moral people,"
but *disciples*... DAILY FOLLOWERS in a personal relationship with the
Son of God.


        I realise this is somewhat of a circumlocutious (!) way to
        make the point.  I apologize for that;  much of the above
        posting is to make the whole thing a little more understandable
        for the new Christians (and perhaps the non-Christians) who
        may be reading the newsgroup.

        Readers' Digest Version:  worry about making disciples and
        telling people about your Boss and Best Friend (my boss is
        a Jewish Carpenter!), worry less about what will happen with
        the future of technology.  Satan will seek to twist this
        (as well as everything else) to his own warped purposes, anyway,
        and to get us focused on *anything* but what's really important.




-- 
John T. Baldwin                     | "Pereant qui ante nos nostra dixerunt!"
Search Technology, Inc.             | (A plague on those who said our good
johnb%srchtec.uucp@mathcs.emory.edu |  things before we did!)

[I fear that your characterization of the Pharisees fails to do
justice to what they were really trying to do.  I realize that making
careful historical judgements of a 1st Cent. group is not the primary
purpose of your message, but I hate to see them get the sort of
continual bad press that the Christian tradition has tended to give
them.  I follow Paul in rejecting the Law for myself as a primary way
of responding to God.  But it should be possible for Christians to
adopt our approach without misrepresenting the alternatives.  There
were no doubt people for whom the Law had become primarily a burden
and a "ministry of death."  Christianity itself has become that in
some times for some people.  But for many Jews, following the Law was
a way of expressing their dedication to God, a way of seeing to it
that this dedication is shown in everything they do.  It was not seen
as an imposition, but as a gift.  --clh]

sc1u+@andrew.cmu.edu (Stephen Chan) (10/29/90)

>Excerpts from netnews.soc.religion.christian: 22-Oct-90 Re: Who will
>teach morals t.. Mike Gobbi@cs.ubc.ca (1436)
>
> >No matter how sophisticated programs and computers get, they will never be
> >conscious as we understand the term (I am a computer science student and have
> >studied this question in one of my courses, so I am pretty confident in my
> >statement).  The programs will no more have "morality" than does an animal
> >trap.

	Having studied Epistemology, Computer Science, and Cognitive Psychology
(not that any of that means anything), I would have to say that the 
general question of consciousness or intentionality in machines is still
very much up in the air.  If your professor led you to believe that the
question is settled, then he has provided a biased assessment of the
problem.
	
	The original question is a little far-fetched.  Unless AI paradigms take a
quantum leap, AI applications will only be clever, single-purpose
machines.  None of the current AI paradigms is even close to creating an
autonomous intelligence capable of free will and moral choice.  If you
do not have free will, then you cannot be a moral agent.  In which case,
the developers of the system will be culpable for any activity of the
system.
	Who will teach morals to laser satellites?

		Stephen Chan