[soc.religion.christian] AI

starpath@athena.mit.edu (11/13/90)

>     __
>    /..\      In quest of knowledge....
> --mm--mm--         Mike Gobbi

Well, good.  Here is a bit for you:

>No matter how sophisticated programs and computers get, they will never be
>conscious as we understand the term (I am a computer science student and have
>studied this question in one of my courses, so I am pretty confident in my
>statement).  The programs will no more have "morality" than does an animal
>trap.

Now that is a pretty strong statement.  In fact, it is pretty mistaken.
I really am not familiar with Rutgers' CS program, but I am somewhat familiar
with MIT's, which includes/has included Marvin Minsky and Seymour
Papert, two rather big names in AI.  (No, I'm not CS, I'm EE, so I'll
have to paraphrase.)

Minsky believes that while the AI approaches so far (i.e., expert systems)
do fairly well at answering specific problems, there is one major
problem: such systems cannot 'learn' in the sense of creating new abstractions
based on old information.  Of course, current systems can create new things,
but only by following some pre-programmed set of algorithms.  His
hope is to eventually build a system complicated enough to be able to
learn much like a child can; that is, give the system only one axiom:
the ability to learn "things," instead of some specific list of things.
Only such a system would have the ability to truly learn, and therefore think.

Now Minsky thinks this is possible.  Papert, who is also a great man in
the field of early childhood development, thinks that this is possible.
No one has been able to refute them.  Lots of philosophers have come up
with thought-experiments to show that the intellect cannot be contained
only within our skulls.  Unfortunately, none of these have held up.  While
I must admit I do not know these arguments by name, if you send me your
reasons for thinking such a machine will never exist, I can probably put
together enough to show you what errors there are in the argument.

>Some PEOPLE will write programs that come to "immoral" conclusions, and others
>will write software that comes to "moral" conclusions.  Nobody teaches the
>computer what is right or wrong -- that is built in by the designer and the
>users.

The problem is that systems that are complicated enough to have AI cannot
be 'programmed' in our normal understanding of the word.  It would be
roughly equivalent to the human inability to control heartbeat rate,
digestion, and so on.  An AI computer cannot be "written" to do
something 'immoral' by quickly changing the code.  Intelligent systems
are only partially genetic; the environment plays a very large role
in development.

>  I suspect that the decisions these computers make will be EXACTLY the same
>decisions that humans in the same situation would make (only faster).  Thus,
>if you want to enforce morality, you have to ensure that the laws are moral.

Cute, but not true for real AI.  This would be true for an expert system,
one which takes data only from its creator.  An AI computer is not so
intrinsically linked to its creator.

>  Coincidentally enough, this is already being done.  The abortion issue and
>euthanasia issue are two medical problems that spring to mind.  How can we
>complain that a computer has issued an immoral judgement when we allow a
>person to issue the same judgement now?  On the legal side there are many
>questions relating to homeless, jobless, and oppressed situations.  Just as in
>the medical profession, there is no consensus here on what is correct.

But the computers you mention here are not making 'decisions.'  The answers
they spew out are not powerful enough to be called that; they are merely the
end result of an algorithm.  A 'judgement' is weighing the issues and
making a decision based on them.  The judgement is made when the
code is written, not when the answer is processed.  An AI machine would
really make a judgement.

>  I think worrying about what computers MIGHT do is far less important than
>worrying about what society IS doing right now.

That's for sure!

Well, thank you for listening to me.  I'm sorry I have to go now...my
human user wants to log onto me.

	From the account of: David Hollingsworth
	starpath@athena.mit.edu


[Just for reference, Mike Gobbi is at UBC, which I take it is
University of British Columbia, not Rutgers.  --clh]

jhpb@granjon.garage.att.com (11/14/90)

Mike Gobbi wrote:

    >No matter how sophisticated programs and computers get, they will
    >never be conscious as we understand the term (I am a computer
    >science student and have studied this question in one of my courses,
    >so I am pretty confident in my statement).  The programs will no
    >more have "morality" than does an animal trap.
    
David Hollingsworth responded:

    Now that is a pretty strong statement.  In fact, it is pretty
    mistaken.  I really am not familiar with Rutgers' CS program, but I
    am somewhat familiar with MIT's, which includes/has included
    Marvin Minsky and Seymour Papert, two rather big names in AI.  (No,
    I'm not CS, I'm EE, so I'll have to paraphrase.)

I agree with Mike Gobbi, as would anyone who knew just a little about
Catholic theology -- and believed it -- but that's not what I want to
get into.

Great as some of the drivers behind AI might be, I think the ones that
believe they can (in concept) duplicate a human being are completely
mistaken.  (Note that not all of them believe this.  There was an
interesting issue of Scientific American just recently that had articles
addressing this subject.)

Fundamentally, the mistake is in adopting materialism as a philosophical
system: in seeing a human being as nothing more than a highly complex
state machine (to use EE terminology).

Materialism is but one philosophy.  Unfortunately, great computer
scientists are not necessarily great philosophers.  If such men would
stick to computer science they'd be great.  But some of them wish to
draw conclusions that are really outside their area of competence.

Joe Buehler

sc1u+@andrew.cmu.edu (Stephen Chan) (11/14/90)

> >Excerpts from netnews.soc.religion.christian: 13-Nov-90 AI starpath@athena.mit.edu (4352)
>
> >Now that is a pretty strong statement.  In fact, it is pretty mistaken.
> >I really am not familiar with Rutgers' CS program, but I am somewhat
> >familiar with MIT's, which includes/has included Marvin Minsky and
> >Seymour Papert, two rather big names in AI.  (No, I'm not CS, I'm EE,
> >so I'll have to paraphrase.)

	Actually, from what I've heard, the Rutgers Undergrad CS program is
pretty darn good: there are lots of computer-oriented companies in the
N. Jersey area which rate the Rutgers people highly.
	Don't be fooled by the big names "Minsky" and "Papert". They are
definitely partisans in the AI field - they are brilliant men, but they
don't have a monopoly on truth. My understanding is that the Minsky
folks believe human intelligence is fundamentally a symbol processing
system (I may be wrong). There are lots of folks who disagree.
	Besides, Minsky and (I think) Papert put out a paper in the late 60's
or early 70's which "proved" that a whole class of connectionist networks
(single-layer perceptrons) was incapable of modelling human intelligence,
because such networks couldn't even handle an XOR operation.  As a result
of that "proof", funding got diverted to the symbol processing field of AI.
	Well, about 5 years ago, derivatives of the early connectionist
networks started tearing up the landscape in fields related to pattern
recognition, fields where symbol processing AI was just not cutting it.
In fact, the connectionist networks were capable of fairly accurate
"generalization"; they created their own internal representations for
input and could identify other members of a category without having seen
them before.
	Minsky and Papert called it wrong, and being a Big Name doesn't mean
you are always right.
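
	Just to make the XOR point concrete, here is a tiny sketch of my own
(not anything from Minsky and Papert's paper, and the weights are just
one hand-picked solution).  A single-layer perceptron computes
step(w1*x1 + w2*x2 + b), and no choice of weights reproduces XOR, but
adding one hidden layer fixes it:

    # Hand-picked weights showing that XOR needs a hidden layer.
    def step(x):
        return 1 if x > 0 else 0

    def xor_two_layer(x1, x2):
        a = step(x1 + x2 - 0.5)   # hidden unit A: fires on "x1 OR x2"
        b = step(x1 + x2 - 1.5)   # hidden unit B: fires on "x1 AND x2"
        return step(a - b - 0.5)  # output: "OR and not AND", i.e. XOR

    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, "->", xor_two_layer(x1, x2))
    # prints 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0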

> >Excerpts from netnews.soc.religion.christian: 13-Nov-90 AI starpath@athena.mit.edu (4352)
>
> >>  I suspect that the decisions these computers make will be EXACTLY
> >>the same decisions that humans in the same situation would make (only
> >>faster).  Thus, if you want to enforce morality, you have to ensure
> >>that the laws are moral.
>
> >Cute, but not true for real AI.  This would be true for an expert
> >system, one which takes data only from its creator.  An AI computer
> >is not so intrinsically linked to its creator.

	Well, you are assuming that *real* AI is possible. The other fellow is
assuming that real AI is impossible. No use arguing about conclusions
when you start out with different givens.

> >Excerpts from netnews.soc.religion.christian: 13-Nov-90 AI starpath@athena.mit.edu (4352)
>
> >But the computers you mention here are not making 'decisions.'  The
> >answers they spew out are not powerful enough to be called that; they
> >are merely the end result of an algorithm.  A 'judgement' is weighing
> >the issues and making a decision based on them.  The judgement is made
> >when the code is written, not when the answer is processed.  An AI
> >machine would really make a judgement.

	I was taught decision making under the Rational Actor model.  Basically,
I make a list of different alternatives, then the possible outcomes
associated with each alternative.  I then assign a probability and a
payoff to each outcome, multiply each payoff by its probability, and
sum; this is the Expected Utility of the alternative.  I then choose
the alternative with the highest Expected Utility.
	That is an algorithm, and I learned it from someone else. Does this
mean that any decision made by that system is non-intelligent?
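
	For what it's worth, here is a rough sketch of that calculation; the
alternatives, probabilities, and payoffs are invented for illustration,
not taken from anything above:

    # Minimal Expected Utility chooser (Rational Actor model sketch).
    # Each alternative maps to a list of (probability, payoff) outcomes.
    alternatives = {
        "take the job":   [(0.7, 100), (0.3, -20)],  # invented numbers
        "stay in school": [(0.9,  40), (0.1,  10)],
    }

    def expected_utility(outcomes):
        # Sum of probability * payoff over the outcomes of one alternative.
        return sum(p * payoff for p, payoff in outcomes)

    best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
    for name, outcomes in alternatives.items():
        print(name, "->", expected_utility(outcomes))
    print("choose:", best)

	It prints 64 for the first alternative and 37 for the second, so it
"chooses" the job; whether that counts as a real decision is exactly the
question at issue.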

	You're getting a top-rate education at MIT; you go there to have people
teach you how to be an EE.  If you use what they teach, it does not
detract from your intelligence.

	Sure, computers are programmed to do things a certain way, but weren't
we all "programmed" in how to solve algebra problems, how to treat other
people, etc... ?

	There's an aspect of free will and intentionality which needs to be
considered. If an AI doesn't have these, then you can't talk about
making the AI moral, you can only talk about making the AI _safe_.

	By the way, folks have been trying to use reason and problem solving
skills to develop morality for a long time. In the end, it seems they
merely provide systems to justify our own moral beliefs, but pure reason
cannot come up with morality by itself.
	- Stephen Chan

User Consultant
Distributed Workstation Services
Carnegie Mellon University

EMAIL: sc1u@andrew.cmu.edu		PHONE: (412)268-6267

[Again, let me say that no participant in this discussion is from
Rutgers except me, and I'm just the moderator.  (I happen to have a
Ph.D. in Artificial Intelligence from CMU, but I haven't been actively
involved in AI for a number of years.)  --clh]