[comp.society.futures] does AI kill?

bzs@BU-CS.BU.EDU (Barry Shein) (07/14/88)

Ah, finally it comes full circle...

For several years now people have been openly advertising and selling
deductive software as "AI" (in many cases, as *definitively* AI).

Now that AEGIS has (possibly) made a big mistake, we'll just tell them
it was never AI (as we all *knew* all along).

Nyah Nyah...fooled ya!

Thanks for all the money tho, sorry about the embarrassment.

	-Barry Shein, Boston University

bzs@BU-CS.BU.EDU (Barry Shein) (07/15/88)

>The question is not whether AI should be used for life-and-death applications,
>but whether it should be switched on in a situation like that.

Maybe I'm slow today, but could someone please distinguish those
two options for me?

	-Barry Shein, Boston University

breen@SILVER.BACS.INDIANA.EDU (elise breen) (07/19/88)

>In article <1376@daisy.UUCP> klee@daisy.UUCP (Ken Lee) calls for
>comments about a Washington Post story blaming the Aegis system
>for faulty reasoning in the misidentification of a commercial
>airliner as a hostile aircraft.

>The trouble with AI is that it is not yet AW.

Nor is it AC!

But seriously, why is it so difficult to find researchers in our 
field who are willing to discuss the ethics of what we are doing?
Why are courses in the ethics of AI not offered to graduate students
in Cognitive Science, Computer Science, and related branches of Psych
and Linguistics?  The best our University offers is two grad-level
philosophy of science courses relating to AI, and these are not
mandatory---indeed, many professors actively DISCOURAGE their students
from taking even one of them.  Excuse me for being a naive first-year
grad student if the answer to this question is something obvious like
"we have to teach you to look the other way because all the grant
money comes from sources with unethical strings attached."

---Elise Breen

breen@SILVER.BACS.INDIANA.EDU (elise breen) (07/19/88)

Perhaps we all ought to watch a re-run of the Tracy & Hepburn picture
DESK SET.  Why should we be trying to eliminate the human element anyway?

---E. Breen

nat%drao.nrc.CDN%ean.ubc.ca@BU-IT.BU.EDU (Natalie Prowse) (07/20/88)

It seems to me that all this discussion about whether or not humans are
kept 'in the loop' misses an important point: humans CREATED the 'loop'
in the first place.  In the AEGIS disaster, the computer system didn't
make the mistake; the human designers did.  But the more important
question is: without the AEGIS system, would a human have made the
SAME mistake??  We can give AI systems more powerful senses than we
could ever have (i.e., radar), and we can program in deductive reasoning,
but nothing is going to be perfect, because WE aren't perfect.  No human
has the capability for FLAWLESS decision making. (At least, none I'VE seen!)

All this discussion reminds me of a little ditty I read somewhere:

I really hate this damn machine,
I wish that they would sell it
It never does quite what I want,
but only what I tell it.

How can a person who is not capable of flawless decision making design
a system that IS??  I have to admit, in some cases I'd almost take
my chances with the computer system.  Granted, it can't feel compassion,
but by the same token, it can't feel malice or greed either.

Look at the situation in the courts.  In a recent program I watched on
our local 'KNOWLEDGE NETWORK' (sort of public TV), a discussion centered
on the fairness of the courts.  The EXACT SAME case (a burglary or
something) was presented to a variety of judges, and the sentences
ranged from 3 months' probation to 5 years in jail.  I became very
concerned after listening to the panel's discussion of the problems in
the justice system (and I'm sure it's as bad in the U.S.).  Not that
I'm into any criminal activities, but if I had to come up against a
judge, I think I'd rather take my chances with a good AI system that
metes out justice.  At least there is no chance that the computer might
have just had a fight with its spouse that morning and be in a terrible
mood when I come before it!!
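
To make the consistency point concrete, here is a minimal sketch of why
a rule-based system cannot produce that kind of spread.  The offence
categories and tariffs below are invented purely for illustration, not
taken from any real sentencing scheme; the point is only that fixed,
explicit rules give the same answer to the same case, every time.

# Hypothetical illustration only: offences and tariffs are invented.
def sentence_months(offence, prior_convictions, weapon_used):
    """Deterministic rule-based 'sentencing': a fixed tariff plus
    fixed uplifts, applied identically on every run."""
    months = {"burglary": 12, "theft": 6}[offence]  # fixed tariff
    months += 6 * prior_convictions                 # same uplift every time
    if weapon_used:
        months *= 2                                 # same multiplier every time
    return months

# The EXACT SAME case always gets the EXACT SAME sentence -- no bad
# moods, no morning fights with a spouse, no 3-months-to-5-years spread.
case = ("burglary", 1, False)
assert sentence_months(*case) == sentence_months(*case)  # always 18 months

Of course, consistency is not the same as fairness; whether the fixed
rules themselves are just is exactly where the ethics come back in.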

We have to accept that anywhere we choose to let computers make
the decisions for us, they are in fact making decisions in the
same way (hopefully) as an EXPERT human would, and that is
by no means PERFECT.  At this point in time, computers are good
at straight DECISION MAKING.  We can't, as yet, program in emotions,
but then what is the point of that?  We have humans with emotions.
I think we have to decide where 'emotionless' decision making is best
applied, and leave computers out of the other areas.  This is
perhaps where the ethics come into play.  Granted, a starwars-type
system might accidentally shoot down an unidentified airbus, but what
are the chances that, in a totally human-controlled system, you get
some fanatic with an apocalyptic view who would push that button
to see some prophecy fulfilled?


			-Natalie Prowse, D.R.A.O., Penticton, B.C.


P.S. - Can someone tell me who wrote that little poem?  I can't remember
       where I read it.

bwk%mitre-bedford.ARPA@BU-IT.BU.EDU (Kort) (07/20/88)

AC?  [I am drawing a blank on that one.]

You are quite right, Elise.  Perhaps the most important thing we
personally construct is our value system, yet there is scant guidance
for such a critical activity.  I did sign up for a course in Ethics
in Grad School.  It was an obscure course in the Philosophy Department,
and it mainly centered on then-current research in Social Choice Theory.

I am of the opinion that it is possible to construct a value system
associated with one's knowledge and skills.  If knowledge tells us
how to do things, values tell us whether it's a good idea.  But to
construct a value system, one has to be able to foresee the consequences
of one's actions.  Alas we are myopic.

I define a Value System as a collection of preferences which transforms
Knowledge into Wisdom.
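
Rendered as a small sketch (the action names and consequence scores
below are invented purely for illustration): knowledge enumerates the
actions we know how to perform, and the value system is a preference
over their foreseen consequences that selects which of them are a
good idea.

# Purely illustrative: actions and scores are invented, and the scores
# stand in for the foreseen consequences we are so myopic about.
knowledge = {                        # what we know HOW to do ...
    "deploy untested system":  -10,  # ... scored by foreseen consequences
    "test before deploying":     5,
    "keep a human in the loop":  8,
}

def wisdom(knowledge, acceptable=0):
    """Apply the value system: keep only the actions whose foreseen
    consequences we judge to be a good idea."""
    return [action for action, value in knowledge.items()
            if value >= acceptable]

print(wisdom(knowledge))  # ['test before deploying', 'keep a human in the loop']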

As to your immediate question, I enjoy discussions of ethics, and I am
always on the lookout for others of similar interest.  Welcome to the
discussion, Elise.

Regards,

--Barry Kort
