[can.ai] Star Wars

havens@ubc-vision.CDN (Bill Havens) (04/01/85)

We as Computer Scientists must accept a new responsibility for our
technology.  We are apparently at a critical decision point.  Having
created weapons that can literally destroy life on the planet, we are now
considering entrusting the use of these terrible weapons to computers and
computer software.
 
To the lay public (and unfortunately to our leaders as well) the allure of
computers makes them seem an appropriate technology to apply to
our strategic defense against a very real Russian threat.
Reagan has painted a picture of a "defensive umbrella" that would forever
shield us from Russian hegemony.  He has even offered to share the
technology with the "enemy". The dream is wonderful but the
reality is quite the opposite.
 
Unfortunately, the "arms race" greatly increases the likelihood of nuclear
annihilation.  Every weapon improvement which makes our forces more accurate,
less visible, more numerous, and now more automatic increases the vulnerability
of our enemy. Their forces become less effective unless they increase their 
number, power, accuracy, etc.  
More ominously, when faced with inherently overwhelming
"first strike" weapons (such as the MX missile, the Pershing missile, and now
laser battle stations), their only effective strategic response is "launch
on warning" (sometimes referred to as "use 'em or lose 'em").
The Russians have been put in exactly this situation by America's superior
technology.  And they have stated that, given the 5- to 6-minute flight time
between West Germany and Moscow, they must adopt this strategy.
 
How does this argument relate to Computer Science and, in particular, current
events here in peaceful Canada?  We are the EXPERTS that our government
is asking (or will soon ask) to develop automatic systems to protect us from
incoming Russian missiles.  The decision times are too short for our leaders
to make strategic decisions.  Laser battle stations will have at MOST
60 seconds of "boost phase" in which to shoot down the Russian missiles.
Who will make these decisions?  Suppose our leaders are indisposed (or can't
be found, as happened in the US recently)?  Even if available and awake,
what kind of decision about the fate of the Earth can anyone make in 
60 seconds?  The US Congress asked the same questions in hearings and was told
by the "Strategic Defense Initiative" representatives from the Pentagon that
maybe the President won't be in the decision loop.  In other words,
the decisions about our planet will be made by automatic systems, that is,
by complex AI programs communicating over a vast satellite computer network.

The implicit assumption in this technology is that computer systems can be
devised to be perfectly safe and absolutely reliable, employing
correct, completely debugged algorithms.  Alan Borning has recently
written an article called "Computer Reliability and Nuclear War" in which
he dispels this myth.  No computer system ever will have these properties.
At best we can expect machinery and algorithms which are constructed to 
exacting standards and tested extensively.  Unfortunately, automatic strategic
defense can never be tested under real conditions (except once!) and subtle
bugs will always remain.  As Borning points out, it is exactly in those subtle,
unexpected situations that software fails most spectacularly.   Yet the
Pentagon wants to use our technology for exactly these complex, confusing,
split-second battlefield decisions.  The idea is insane! 
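Borning's point about subtle bugs can be made concrete with a toy sketch
(mine, purely illustrative, not from his article): a countdown routine that
passes every test its authors thought to write, yet fails at an edge case
nobody exercised.

```python
# Toy illustration of a "subtle bug" that survives testing (hypothetical
# example).  A countdown routine computes the seconds remaining before a
# 60-second boost-phase window closes, using a 24-hour clock.

DAY = 86400  # seconds in a 24-hour clock cycle

def seconds_remaining(now, window_close):
    """Buggy version: correct for every same-day test case,
    wrong when the window crosses midnight."""
    return window_close - now

def seconds_remaining_fixed(now, window_close):
    """Wrap the difference onto the clock so midnight is handled."""
    return (window_close - now) % DAY

# Ordinary test cases pass for both versions...
assert seconds_remaining(100, 160) == 60
assert seconds_remaining_fixed(100, 160) == 60

# ...but near midnight (now = 23:59:50, close = 00:00:50) the buggy
# version returns a large negative number instead of 60.
assert seconds_remaining(86390, 50) == -86340
assert seconds_remaining_fixed(86390, 50) == 60
```

The untested path is exactly the one that matters, and in a system that can
never be exercised under real conditions, such paths are the ones that fire.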

Every system will eventually fail.  The domestic Nuclear Energy industry
has relied on ever increasing safety precautions, redundant systems, and
reliability testing to develop a "safe" energy source.  But the Three Mile
Island (TMI) reactor was within 30 minutes of a real meltdown.  
All the automatic systems had failed or been disabled.  
The technology was out of control and no HUMAN understood what was going on.
Indeed, President Carter visited the reactor at the time, unaware of the
real threat of the accident.  In short, technology is only "safe" when
you are willing to accept the consequences of infrequent but inevitable
failures. If TMI had "gone critical", a large part
of Pennsylvania would have been made uninhabitable for years, and thousands
of people would have been killed or would have contracted cancer.  BUT life on the Earth
would have continued.  We would have survived (and perhaps have been made
wiser).  This is not the case with "Star Wars".  An eventual catastrophic
failure will mean a holocaust that no one can even imagine. To relinquish
our destiny to our own imperfect technology is not a sane decision.

Either our leaders are afflicted by this insanity or they are unaware of 
the real dangers involved.  They do not realize that we as a nation are much 
less safe with this technology than without it.  I choose to believe the 
latter and have faith that we can use our expertise and reputations
to modify our national direction.  But common sense will not necessarily
prevail.  We must make a public stand now if it is to have any real effect.
I urge you to voice your concern loudly to colleagues, your MP, anyone
in the Press who will listen, and to sign the declaration circulated by 
Ray Reiter over this network.


Bill Havens...
havens@ubc.CSNET
..!ubc-vision!havens

henry@utzoo.UUCP (Henry Spencer) (04/02/85)

> ...  The decision times are too short for our leaders
> to make strategic decisions.  Laser battle stations will have at MOST
> 60 seconds of "boost phase" in which to shoot down the Russian missiles.
> Who will make these decisions?  ...
> ...maybe the President won't be in the decision loop.  In other words,
> the decisions about our planet will be made by automatic systems, that is,
> by complex AI programs communicating over a vast satellite computer network.

"Decisions about our planet"?  Are you not confusing offensive weapons
with defensive systems?  Surely deciding to shoot down rising missiles
is not going to endanger our planet any more than the missiles would.
The worst side effect of an incorrect automatic "shoot" decision would
be to kill several cosmonauts, a tragedy but hardly a planetary disaster.
And this should be simple to guard against, given even a few minutes
advance notice of normal space launches.

I have heard no suggestion that the offensive weapons should be changed
from their current "launch on Presidential command only" status.

> Every system will eventually fail.  The domestic Nuclear Energy industry
> has relied on ever increasing safety precautions, redundant systems, and
> reliability testing to develop a "safe" energy source.

There are no safe energy sources.  Not coal, not solar, not nuclear.
They all kill people.  The objective of the nuclear-power industry was
to build a system that killed fewer people than any other energy source,
per megawatt-hour.  They have succeeded.  Look at the numbers, not the
rhetoric, please.

> But the Three Mile
> Island (TMI) reactor was within 30 minutes of a real meltdown.  

30 minutes is a long time, even for human reactions.  If you read a
detailed and unbiased account of the events, such as the special issue
of IEEE Spectrum on the TMI disaster, you will discover that there was
never any serious danger of widespread disaster.  There were fears
aplenty at the time, but in hindsight (although ONLY in hindsight) they
were quite unjustified.


Please, if we are going to debate SDI, let us debate on the basis of
facts, not uninformed hysteria.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

banner@ubc-vision.CDN (Allen Banner) (04/02/85)

Got me...I'll work harder at the facts.

I still stand by the point that weapons research can be (and in my opinion, 
at least, IS as far as SDI is concerned) destabilizing.

Lots of people are poorly informed, blinded by misconceptions and
prejudices, or simply have radically different perspectives which may not
appear logical to us.  Some of those people may be in decision-making
positions.  And some of those people are undoubtedly Soviets (not just the
Americans we pick on all the time).  It seems to me that we should be
careful not to provide fuel for the belief that there is a threat of a
first-strike by "our side"...however misinformed that belief may be.  SDI
seems to be open to that interpretation.

Regarding sums of money on the order of trillions of dollars...

Here in B.C., foresters are grubbing around for a paltry sum of 5 or 6
hundred million for reforestation...did they get it?...nope...and that's
peanuts compared to a trillion!  (Of course, they DID get part of it)  A
trillion dollars is a lot of money and would go a long way toward addressing some
of the world's other problems.  Unless it gets freed up from "defense"
spending we won't be able to get on with the next battle of having it spent
on other appropriate causes and not wasted...one "battle" at a time.  As
long as we "trust" the Soviets enough to conduct business with them and meet
across bargaining tables, then there must be a less expensive alternative
than building this massive defensive system.  How about something such as a
"defense-protected build-down" (October, 1984; Bulletin of the Atomic
Scientists for those who may not be familiar with it).  It may not be a
perfect alternative but neither is SDI.  (I would be very interested to hear
what other people feel about that approach)

Regarding "peace through strength" versus "peace through goodwill":

There is another alternative; peace through fear.  Fear of self
annihilation.  The use of "superordinate goals" for conflict resolution has
been well demonstrated (see "Reducing Intergroup Conflict", pages 454 to 474
in B.H. Raven and J.Z. Rubin (1976), "Social Psychology: People in Groups"
published by Wiley & Sons, New York...don't mistake me...I am NOT a
psychologist, but what they had to say made sense to me).  Fear of
extinction could be used as a superordinate goal...it is a threat to all and
it is in the mutual interest to cooperate to do something about it.  The
point of superordinate goals is that "peace through fear" would encourage
the *gradual* development of "peace through goodwill".  I think that SDI is
evidence of the fear.  We need to convince people that "peace through
strength" will not work indefinitely ON A GLOBAL SCALE.

The bottom line is what should the scientific community do?...

Prior to the last American election, Congressman Les AuCoin (Bulletin, 
November'84) said "The views of the scientific community carry considerable 
weight with the American people in these matters.  These views need to 
be expressed clearly, firmly and promptly."  He was referring to support for
Walter Mondale and not for Reagan...let's all be optimistic and hope that we
can be more effective in convincing our government than they were in
convincing the American people.


					Al Banner

fred@mnetor.UUCP (04/02/85)

> 
> 30 minutes is a long time, even for human reactions.  If you read a
> detailed and unbiased account of the events, such as the special issue
> of IEEE Spectrum on the TMI disaster, you will discover that there was
> never any serious danger of widespread disaster.  There were fears
> aplenty at the time, but in hindsight (although ONLY in hindsight) they
> were quite unjustified.
> 
> 
> Please, if we are going to debate SDI, let us debate on the basis of
> facts, not uninformed hysteria.
> -- 

	It is a fact that in the 24 hours after the TMI incident
there were no fewer than seven official explanations... all different!
Now that all the administrators involved have had time to get
together and decide on a good story it is next to impossible
for anyone to ever find out what really happened. It is not
surprising that the story now given shows no cause for fear.
	At the same time I would like to say that life has never been
safe on this planet. We only have different things to watch out
for. Radioactive materials are a relatively new danger, and
the real problem is that most people don't know how to handle
them. This unfortunately includes some of the people that are
supposed to handle them. 
	Like any other group of people, the military has good and 
not so good people involved with it. What bothers me is their
tendency to follow set rules rather than think. True, it is
often more "safe" to stay within a framework of regulations, but
somehow I would feel better if decisions about nuclear weapons
and even peaceful nuclear enterprises were made by poets. I know
a few, and these people really think about their decisions, and
about life.

	I could go on for pages, but in the interests of you, who 
have to wade through this, I'll stop here.

Cheers,		Fred Williams

mmt@dciem.UUCP (Martin Taylor) (04/03/85)

>Regarding "peace through strength" versus "peace through goodwill":
>
>There is another alternative; peace through fear.  Fear of self
>annihilation.  The use of "superordinate goals" for conflict resolution has
>been well demonstrated (see "Reducing Intergroup Conflict", pages 454 to 474
>in B.H. Raven and J.Z. Rubin (1976), "Social Psychology: People in Groups"
>published by Wiley & Sons, New York...don't mistake me...I am NOT a
>psychologist, but what they had to say made sense to me).  Fear of
>extinction could be used as a superordinate goal...it is a threat to all and
>it is in the mutual interest to cooperate to do something about it.  The
>point of superordinate goals is that "peace through fear" would encourage
>the *gradual* development of "peace through goodwill".  I think that SDI is
>evidence of the fear.  We need to convince people that "peace through
>strength" will not work indefinitely ON A GLOBAL SCALE.

Let us imagine that we knew that there were unfriendly aliens in the
neighbourhood, who had the power to erase life from this planet UNLESS
we found a way to avoid, quasi-permanently, the threat of internal war.
Do you not think that we all (US, USSR, China, Chad etc.) would be
working strenuously together to thwart this *external* threat?  The
US, UK and USSR put aside strong animosities in 1939-45 to deal with
an external threat, and I see no reason to believe that they would
not do it again in the face of a threat external to the planet.

If this assumption is so, then what prevents us from working strenuously
together to avoid self-annihilation?  Is it that we don't all agree
that the threat exists?  I find that hard to believe, but I cannot
come up with another explanation.
-- 

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsri!dciem!mmt

mmt@dciem.UUCP (Martin Taylor) (04/03/85)

>        Like any other group of people, the military has good and 
>not so good people involved with it. What bothers me is their
>tendency to follow set rules rather than think. True, it is
>often more "safe" to stay within a framework of regulations, but
>somehow I would feel better if decisions about nuclear weapons
>and even peaceful nuclear enterprises were made by poets. I know
>a few, and these people really think about their decisions, and
>about life.

In this case, I *think* I would trust set rules rather than poetic
thoughts.  Some poets have pretty apocalyptic visions, after all,
and it might be just the greatest "happening" on earth to annihilate
all life!  No-one should be able to make decisions about nuclear
weapons, because there shouldn't BE any. But there they are, and
someone has to ensure that they aren't used.  The rules may be
wrong, and may fail, but remember Hitler was an artist, and I seem
to remember that Nero was a musician, Idi Amin and Qaddafi poets
(vague memory here).  Poetic (artistic) insight doesn't make people
caring and gentle, and thus unable to contemplate launching Armageddon.

All the same, I think that the core of what we are trying to achieve
is to make the world a place safe for poets, who represent our highest
values.
-- 

Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsri!dciem!mmt

clarke@utcs.UUCP (04/03/85)

This whole thing scares me to death (not literally, I hope).  It almost
certainly won't work, and even if it does, it will be only partially ready for
a good long time.  In the meantime the Russians will be (a) laughing
themselves silly at the amount of money the Americans -- and perhaps we -- are
wasting, and (b) more worried than usual, just in case it does work.

I don't want Russian missiles being controlled by worried people.  That's me
they're pointed at.

jim@hcr.UUCP (Jim Peters) (04/04/85)

> to make strategic decisions.  Laser battle stations will have at MOST
> 60 seconds of "boost phase" in which to shoot down the Russian missiles.
> Who will make these decisions?  Suppose our leaders are indisposed (or can't

Surely you're not suggesting that we might decide *not* to shoot down
antagonistic Russian missiles? The things might cause a lot of damage
when they impact upon your (or my) city.