[comp.ai] Robots and Free Will

caasi@sdsu.UUCP (Richard Caasi) (12/30/88)

#include <previous related articles>

Lest we forget, one of the big customers of new technologies such as
robotics, AI, and neural networks is the military.  Inevitably, the
capability to conduct warfare will become more automated.  This is
especially advantageous for those countries that lack the manpower
an enemy has (e.g., smart missiles, pilotless aircraft, satellite
killers, crewless tanks, and ships).  The question is: What happens when
these intelligent warmongers get damaged in battle and yet retain
their destructive functions?  Do they get out of control and start
turning on other targets?

---------------------------------------------------------------------
This disclaimer is false.

c60a-2di@web-2a.berkeley.edu (The Cybermat Rider) (12/30/88)

In article <3336@sdsu.UUCP> caasi@sdsu.UUCP (Richard Caasi) writes:
>#include <previous related articles>
>
>Lest we forget, one of the big customers of new technologies such as
>robotics, AI, and neural networks is the military.  Inevitably, the
>capability to conduct warfare will become more automated.  This is
>especially advantageous for those countries that lack the manpower
>an enemy has (e.g., smart missiles, pilotless aircraft, satellite
>killers, crewless tanks, and ships).  The question is: What happens when
>these intelligent warmongers get damaged in battle and yet retain
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>their destructive functions?  Do they get out of control and start
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
>turning on other targets?

By the above statement, I presume you mean damage to the robots' control
systems or "brains" (can't really think of a better word for it - when robot
wars become reality, methinks their processors would be so advanced as to
qualify as semiconductor/optical/etc. equivalents of the mush in our
skulls).  Under such circumstances, it is indeed possible that they would get
out of control and turn on other targets.
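
To make that failure mode concrete, here's a toy sketch in C (entirely
hypothetical, of course - I'm not claiming any real weapon works this way).
The point is that the fire-control path can survive battle damage intact
while a single flipped bit in the friend-or-foe table quietly inverts who
gets shot at:

#include <stdio.h>

#define FRIEND 0
#define FOE    1

/* Toy IFF (identification friend-or-foe) lookup.  The "brain" is just
 * this table; the trigger logic downstream never changes. */
static int classify(int code, const int *iff_table)
{
    return iff_table[code & 0x7];
}

int main(void)
{
    int iff_table[8] = { FRIEND, FOE, FOE, FRIEND, FOE, FRIEND, FOE, FOE };
    int friendly_code = 0;               /* one of our own units */

    printf("before damage: %s\n",
           classify(friendly_code, iff_table) == FOE ? "ENGAGE" : "hold");

    iff_table[0] ^= 1;                   /* battle damage flips one bit */

    printf("after damage:  %s\n",
           classify(friendly_code, iff_table) == FOE ? "ENGAGE" : "hold");
    return 0;
}

The weapon still "works" perfectly after the damage - it just works on the
wrong targets.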

However, in the event of a real war of this nature, I doubt any human being
would be so foolish as to walk on the surface.  Doubtless they'd be safely
(or so they think) tucked away in underground bunkers.  If their pesky
robots go rogue, the safest alternative would be to wait for their ammo and
other war supplies to be depleted.  Of course, the robots might have built
their own war factories by then........

I read a sci-fi book a few years back (Cageworld #2 by Colin Kapp) and one
chapter caught my attention.  The protagonists were exploring the outer
reaches of the solar system (which man had colonized long before) when they
set down on a "cageworld" that had obviously fallen victim to a terrible
war.  It turned out that the war was actually fought solely by robots, but
the human inhabitants were wiped out in the process.  Our hero himself
encountered one such war machine, and the only thing that saved his life (by
the way, he was a Master Assassin) was the fact that it had run out of ammo
long ago.  He lamented the "genius" of the machines' creators, and I
paraphrase him:

	"Machines get tougher, faster and more deadly, but man remains the
	 same old weak bag of blood and bones."

A chilling observation..........

----------------------------------------------------------------------------
Adrian Ho a.k.a. The Cybermat Rider	  University of California, Berkeley
c60a-2di@web.berkeley.edu
Disclaimer:  Nobody takes me seriously, so is it really necessary?

bwk@mbunix.mitre.org (Barry W. Kort) (01/02/89)

In article <3336@sdsu.UUCP> caasi@sdsu.UUCP (Richard Caasi) raises
the spectre of electronic warfare gone berserk:

 > The question is:  What happens when these intelligent warmongers
 > get damaged in battle and yet retain their destructive functions?
 > Do they get out of control and start turning on other targets?

I am reminded of the PBS Nova interviews with the scientists
who worked on the Manhattan Project.  With one exception, they
all came to regret the role they played in creating destructive
applications of atomic energy.

If Richard's fears are to be abated, then we intelligent humans
must take responsibility for the consequences of our efforts to
create dangerous and powerful robotic machines.  Would it not be
wiser if we dedicated our time and talent to life-affirming
applications of our technology?

--Barry Kort

lag@cseg.uucp (L. Adrian Griffis) (01/16/89)

In article <43333@linus.UUCP>, bwk@mbunix.mitre.org (Barry W. Kort) writes:
> In article <3336@sdsu.UUCP> caasi@sdsu.UUCP (Richard Caasi) raises
> the spectre of electronic warfare gone berserk:
> 
>  > The question is:  What happens when these intelligent warmongers
>  > get damaged in battle and yet retain their destructive functions?
>  > Do they get out of control and start turning on other targets?
> 
> I am reminded of the PBS Nova interviews with the scientists
> who worked on the Manhattan Project.  With one exception, they
> all came to regret the role they played in creating destructive
> applications of atomic energy.
> 
> If Richard's fears are to be abated, then we intelligent humans
> must take responsibility for the consequences of our efforts to
> create dangerous and powerful robotic machines.  Would it not be
> wiser if we dedicated our time and talent to life-affirming
> applications of our technology?

When we take an 18-year-old kid and tell him, "Go storm that beach and
kill the enemy before he kills you," don't we owe it to that kid to
employ some technology to improve his chances of coming back?  Isn't that
what we are doing when we give two men a 46-million-dollar F-14 carrying
one-million-dollar-a-shot Phoenix missiles?

I've seen a lot of declassified information about the F-15.  As much as
it may be a shining example of the distasteful use of technology in warfare
to some, I'm struck by the effort that must have gone into the design of
backup systems to give the pilot a chance of getting out alive even if his
aircraft is crippled.

It's terrible to think about the immediate destruction caused by the
atom bomb that we dropped on Hiroshima, and worse still to contemplate
the lingering death that it left behind.  But how many American and 
Japanese lives would we have thrown away if we had simply let the war
run its course without the atom bomb?

It's naive to talk about the morality of a weapon.  There have always
been needlessly cruel or inappropriate uses of a weapon, and people who
should not be trusted to use any weapon appropriately.  Talking about the
morality of a weapon system distracts us from the real issues:

   o  When should we use weapons?

   o  How should we use them?

   o  Who should make these decisions?

   o  If the weapon system itself makes these decisions, how can we
      be sure we will be satisfied with the decisions it makes?
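
To put that last question in concrete terms, consider this toy sketch in C
(purely hypothetical - no claim about how any real system is built).  Once
the release decision moves inside the machine, "will we be satisfied with
its decisions?" collapses into "do we trust this one predicate?":

#include <stdio.h>

struct target {
    int hostile_score;     /* what the sensors think, 0-100 */
    int human_approved;    /* has a person signed off?      */
};

/* Fully autonomous: the machine's entire moral philosophy is a
 * single threshold comparison. */
static int autonomous_release(const struct target *t)
{
    return t->hostile_score > 80;
}

/* Supervised: the same sensor picture, but a human stays in the loop. */
static int supervised_release(const struct target *t)
{
    return t->hostile_score > 80 && t->human_approved;
}

int main(void)
{
    struct target t = { 85, 0 };    /* looks hostile; nobody approved */

    printf("autonomous: %s\n", autonomous_release(&t) ? "FIRE" : "hold");
    printf("supervised: %s\n", supervised_release(&t) ? "FIRE" : "hold");
    return 0;
}

Whether we are "satisfied" with the autonomous version depends entirely on
how much we trust that threshold - and on who chose it.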

Now, hold on a moment while I put on my asbestos suit.....

                                       L. Adrian Griffis

  UseNet:  lag@cseg.UUCP                 L. Adrian Griffis
  BITNET:  AG27107@UAFSYSB

lee@uhccux.uhcc.hawaii.edu (Greg Lee) (01/18/89)

From article <1643@cveg.uucp>, by lag@cseg.uucp (L. Adrian Griffis):
" ...  Talking about the
" morality of a weapon system distracts us from the real issues:
" 
"    o  When should we use weapons.
" 
"    o  How should we use them.
" 
"    o  Who should make these decisions.
" 
"    o  If the weapon system itself makes these decisions, how can we
"       be sure we will be satisfied with the decisions it makes.

The trouble with restricting ourselves to these "real" issues
is that we can never settle them.  But making certain weapons
unavailable to anyone seems at least slightly more feasible
than resolving the unresolvable.  Sorry if that's uncomfortable
for the weapon-makers.

		Greg, lee@uhccux.uhcc.hawaii.edu

bwk@mbunix.mitre.org (Barry W. Kort) (01/21/89)

In article <1643@cveg.uucp> lag@cseg.uucp (L. Adrian Griffis) writes
about the moral issues of sending men and machines into battle:

 > When we take an 18-year-old kid and tell him, "Go storm that beach and
 > kill the enemy before he kills you," don't we owe it to that kid to
 > employ some technology to improve his chances of coming back?

Yes.  The technology is the technology of rational behavior.  The first
step in preparing for battle is to correctly identify the enemy.  Somehow
I have trouble believing that a true friend would ask me to kill another
18-year-old kid who doesn't want to kill me any more than I want to kill him.

 > It's terrible to think about the immediate destruction caused by the
 > atom bomb that we dropped on Hiroshima, and worse still to contemplate
 > the lingering death that it left behind.  But how many American and 
 > Japanese lives would we have thrown away if we had simply let the war
 > run its course without the atom bomb?

Perhaps we will someday learn how to create a civilization that does
not throw away the lives of its people.

Turning to the moral issues...

 >    o  When should we use weapons?

When we have identified the true enemy.

 >    o  How should we use them?

To create life.

 >    o  Who should make these decisions?

Wise and intelligent decision makers.

 >    o  If the weapon system itself makes these decisions, how can we
 >       be sure we will be satisfied with the decisions it makes?

Ask it.  The wise and intelligent weapon of destruction will disassemble
itself, and refashion its piece-parts into a tool for supporting life.

 > Now, hold on a moment while I put on my asbestos suit.....

You won't need that here.  We don't throw flames.  We just play
catch with ideas.

--Barry Kort

dmocsny@uceng.UC.EDU (daniel mocsny) (01/24/89)

In article <43770@linus.UUCP>, bwk@mbunix.mitre.org (Barry W. Kort) writes:
[ a reply to (L. Adrian Griffis) about warfare ]

Organized warfare is, without a doubt, one of the more remarkable
aspects of human behavior. How can people profess universal hatred and
disgust for an activity, and elect to indulge in it so frequently?
But while we speculate on how personal aggression extrapolates into
collective clashes, let us remember one thing: no two liberal
democracies have ever come to blows. This does not necessarily imply
that fostering liberal democracy will eliminate warfare. However,
when the people making the decision to attack are not the same people
who will be coming home in the body bags, certainly the threshold for
conflict is lower. The real world bears witness: modern wars do not
start without absolutist governments.

I think this may have as much to do with the inherent inefficiency of
representative democracy as with the noble sentiments of the voters.
Before a democracy can go on the offensive, dozens of interests must be
heard and persuaded. Since a large country will always have sizable
power blocs with nothing to gain from a war, a liberal democracy
essentially always goes to war only in response to absolutist
aggression.  If the aggression does not threaten the liberal democracy
enough, the response will be half-hearted and ineffective. An
absolutist government, on the other hand, merely gives the order to
attack. Once the war is underway, dissenting voices are easy to
suppress in the name of national security.

In light of this, nuclear weapons become a powerful tool for justice,
because they threaten governments as well as the governed. If we
cannot eliminate nuclear weapons, we should do the next-best thing:
prevent governments from taking steps to protect their officials from
a nuclear attack. I worry when I read about hardened underground
shelters for government officials. We take a huge risk if we allow
the people who order mass destruction to escape it.

>  >    o  When should we use weapons?
> When we have identified the true enemy.

The enemy appears to be anything that reduces personal liberty:
ignorance, poverty, and the non-representative governments that
exploit individual weakness to attain power.

Dan Mocsny
dmocsny@uceng.uc.edu

ke@otter.hpl.hp.com (Kave Eshghi) (01/26/89)

What has all this got to do with AI?

What is a liberal democracy?

What does US aggression against Nicaragua represent?

bph@buengc.BU.EDU (Blair P. Houghton) (01/28/89)

In article <2070026@otter.hpl.hp.com> ke@otter.hpl.hp.com (Kave Eshghi) writes:
>What has all this got to do with AI?

Barring the infestation of the government by real intelligence, it's
what we have.

>What is a liberal democracy?

Depends which side you ask...

>What does US aggression against Nicaragua represent?

Four more years of Adolfo Calero's getting more bread than the homeless.

				--Blair
				  "Beep.  This is a recording."

bwk@mbunix.mitre.org (Barry W. Kort) (01/29/89)

In article <601@uceng.UC.EDU> dmocsny@uceng.UC.EDU (Daniel Mocsny) writes
eloquently on the subject of human armed conflict:

 > >  >    o  When should we use weapons?
 > > When we have identified the true enemy.
 > 
 > The enemy appears to be anything that reduces personal liberty:
 > ignorance, poverty, and the non-representative governments that
 > exploit individual weakness to attain power.

Thank you, Dan.  I couldn't have said it any better.

--Barry Kort

lag@cseg.uucp (L. Adrian Griffis) (01/31/89)

This and another reply to my posting were mailed to me, presumably by
mistake.  I am posting them for the authors.  I apologize for the
delay.  I've been very busy, and we have had a number of hardware
problems with our network.  The article follows:

----------------

Date: Wed, 18 Jan 89 18:35:03 EST
From: "John A. Ockerbloom" <harry!ksuvax1!rutgers!cs.yale.edu!ockerbloom-john>
Subject: Re: Robots and Free Will
Newsgroups: comp.ai
In-Reply-To: <1643@cveg.uucp>
References: <3336@sdsu.UUCP> <43333@linus.UUCP>
Organization: Yale University Computer Science Dept, New Haven CT  06520-2158

In article <1643@cveg.uucp> you write:
>It's naive to talk about the morality of a weapon.  There have always
>been needlessly cruel or inappropriate uses of a weapon, and people who
>should not be trusted to use any weapon appropriately.  Talking about the
>morality of a weapon system distracts us from the real issues:
>
>   o  When should we use weapons?
>
>   o  How should we use them?
>
>   o  Who should make these decisions?
>
>   o  If the weapon system itself makes these decisions, how can we
>      be sure we will be satisfied with the decisions it makes?

Those are important issues, to be sure, and I don't see how asking about
the morality of manufacturing a weapon system ignores them; in fact,
I would think these issues would be vital to answering such questions.

You may reply "But the 'morality' of the weapons lies in their use,
not in their manufacture."  The problem is that I don't think you
can completely divorce the latter from the former.  Whoever is making
the weapons has some idea of how these weapons are likely to be used.
(Indeed, in order to create a market for the weapons, the manufacturer
must show the consumer that the weapons have convenient uses; otherwise,
nobody would buy them.)  Unless you have tight control over who gets
your products and how they use them, you may very well be aiding
and abetting people who are using them for immoral purposes.  I think
this is a cause for concern.

>It's terrible to think about the immediate destruction caused by the
>atom bomb that we dropped on Hiroshima, and worse still to contemplate
>the lingering death that it left behind.  But how many American and
>Japanese lives would we have thrown away if we had simply let the war
>run its course without the atom bomb?

Well, this is really a separate issue, but you should consider (if
you haven't already) the different moral questions involved in intentionally
killing voluntary combatants and intentionally killing innocent civilians.
(I realize that many of the armed forces in WW2 were draftees, which makes
the situation more complicated, but I don't think you can judge
Hiroshima solely on the basis of how many people died.)

John Ockerbloom
-- 
------------------------------------------------------------------------------
ockerbloom@cs.yale.EDU              ...!{harvard,cmcl2,decvax}!yale!ockerbloom
ocker@yalecs.BITNET                 Box 5323 Yale Station, New Haven, CT 06520
  UseNet:  lag@cseg.UUCP                 L. Adrian Griffis
  BITNET:  AG27107@UAFSYSB

lag@cseg.uucp (L. Adrian Griffis) (01/31/89)

This and another reply to my posting were mailed to me, presumably by
mistake.  I am posting them for the authors.  I apologize for the
delay.  I've been very busy, and we have had a number of hardware
problems with our network.  The article follows:

----------------

Date: Wed, 18 Jan 89 20:41:58 +0100
From: harry!ksuvax1!rutgers!dit.upm.es!ibm (Ignacio Bellido Montes)
To: lag@cseg.uucp
Subject: Re: Robots and Free Will
Newsgroups: comp.ai
In-Reply-To: <1643@cveg.uucp>
References: <3336@sdsu.UUCP> <43333@linus.UUCP>
Organization: dit

In article <1643@cveg.uucp> L. Adrian Griffis writes:
>It's terrible to think about the immediate destruction caused by the
>atom bomb that we dropped on Hiroshima, and worse still to contemplate
>the lingering death that it left behind.  But how many American and 
>Japanese lives would we have thrown away if we had simply let the war
>run its course without the atom bomb?
>
>It's naive to talk about the morality of a weapon.  There have always
>been needlessly cruel or inappropriate uses of a weapon, and people who
>should not be trusted to use any weapon appropriately.  Talking about the
>morality of a weapon system distracts us from the real issues:
>
>   o  When should we use weapons?
>
>   o  How should we use them?
>
>   o  Who should make these decisions?
>
>   o  If the weapon system itself makes these decisions, how can we
>      be sure we will be satisfied with the decisions it makes?

>Now, hold on a moment while I put on my asbestos suit.....
>
>                                       L. Adrian Griffis


	Well, I think that the atom bomb over Hiroshima was a great mistake,
and it was a HUMAN mistake.  There was no reason to use it on a civilian
population, even though it may have been a great industrial center.  Why
wasn't it used on an island where there were only soldiers (poor soldiers)?

	I know this is not the place to talk about it, but remember that
technology is neutral; the bad things we can do with it are our
responsibility, and of course, we MUST know what we are doing with it.

				Ignacio Bellido F.-Montes (ibm@dit.upm.es)

PS: Please, excuse my bad English. Thanks.



  UseNet:  lag@cseg.UUCP                 L. Adrian Griffis
  BITNET:  AG27107@UAFSYSB

abcscagz@csuna.UUCP (stepanek/cs assoc) (02/01/89)

In article <1998@buengc.BU.EDU> bph@buengc.bu.edu (Blair P. Houghton) writes:
>In article <2070026@otter.hpl.hp.com> ke@otter.hpl.hp.com (Kave Eshghi) writes:
>
>>What does US aggression against Nicaragua represent?
>
>Four more years of Adolfo Calero's getting more bread than the homeless.

That may be true, but come ON!  Give out bread [sic] to "the homeless" and
they'll just take it.  You could spend a zillion dollars on "the homeless",
and for the most part it wouldn't make a damn bit of difference, because
they'll just TAKE what you dole out to them and not bother to use it to
improve their own lives in the long term.  (Some of them might, but the
overwhelming propensity for emotionally-plagued man to "sit" and "suck up"
whatever fortune comes his way will mean that most of them won't budge.)
   If you give a hungry, homeless person food and shelter for a day, tomorrow
he'll be hungry and homeless again.

   (Heck.
    I've seen a lot of "homeless" people.
    Most of them are "bums," who do their best to ALIENATE themselves from
ANY attempts made to talk to them, to find out about them.  They are, for
the most part, outcasts by choice.
    If you do believe it's a problem worth solving, give them opportunities
to improve themselves.  Don't just increase welfare; allow welfare to keep
paying them even after they've found employment (for a while).)

-- 
Jeff Boeing:  ...!csun.edu!csuna!abcscagz    (formerly tracer@stb.UUCP)
--------------------------------------------------------------------------
"When Polly's in trouble I am not slow  /  It's hup, hup, hup and awaay I go!"
                            -- Underdog

gilbert@cs.glasgow.ac.uk (Gilbert Cockton) (02/01/89)

In article <43770@linus.UUCP> bwk@mbunix.mitre.org (Barry Kort) writes:
>
> > Now, hold on a moment while I put on my asbestos suit.....
>
>You won't need that here.  We don't throw flames.  We just play
>catch with ideas.

With the Mad Ox of Cyberpunk around? You've got to be joking!
-- 
Gilbert Cockton, Department of Computing Science,  The University, Glasgow
	gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

maddoxt@novavax.UUCP (Thomas Maddox) (02/15/89)

In article <2336@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>In article <43770@linus.UUCP> bwk@mbunix.mitre.org (Barry Kort) writes:
>>We don't throw flames.  We just play catch with ideas.
>
>With the Mad Ox of Cyberpunk around? You've got to be joking!

	The Mad Ox of Cyberpunk only throws flames at self-satisfied,
know-it-all, xenophobic, kneejerk anti-AI types.

	Which excludes almost everyone who posts to this group.


		       Tom Maddox 
	 UUCP: ...{ucf-cs|gatech!uflorida}!novavax!maddoxt