[mod.politics.arms-d] Arms-Discussion Digest V7 #61

ARMS-D-Request@XX.LCS.MIT.EDU (Moderator) (11/19/86)

Arms-Discussion Digest               Tuesday, November 18, 1986 8:56PM
Volume 7, Issue 61

Today's Topics:

                   SDI dilemma kills the whole idea
to research AI or not to ..., that is the question; military or open?
                      why not nuclear airplane?
      whether programming can somewhat replace a human employee?
    bomber crew determining nucwar status for themselves firsthand
                           anti-tank ideas
                             Selling SDI
     telepresence for SDI, massively parallel human intelligence?

----------------------------------------------------------------------

Date: 1986 November 16 04:51:41 PST (=GMT-8hr)
From: Robert Elton Maas <REM%IMSSS@SU-AI.ARPA>
Subject: SDI dilemma kills the whole idea

<POM>    From: Peter O. Mikes <pom at s1-c.arpa>
<POM>    At this point in time (just based on common sense) it seems to me
<POM>    that the boost phase would be the least suitable one..

<LIN> Date: Fri, 31 Oct 1986  16:03 EST
<LIN> From: LIN@XX.LCS.MIT.EDU
<LIN> Subject: SDI::  boost phase or bust 

<LIN> The general argument is that if you don't get them in boost, you will
<LIN> have too big a load to handle when the boosters deploy all their decoys.

I agree with both of you: you can't use boost-phase interception
because it's immoral (forward basing) and threatening (the capability
can be used offensively), and because it requires too fast a decision
(about 2 minutes), which would preclude human intervention and make
accidental war likely; and you can't make SDI work without boost-phase
interception. Thus SDI is a total losing idea.

------------------------------

Date: 1986 November 16 05:16:16 PST (=GMT-8hr)
From: Robert Elton Maas <REM%IMSSS@SU-AI.ARPA>
Subject: to research AI or not to ..., that is the question; military or open?

<DB> Date: Saturday, 1 November 1986  23:26-EST
<DB> From: "dave brewer..." <brewster%watdcsu.waterloo.edu at CSNET-RELAY.ARPA>
<DB> To:   ARMS-D
<DB> Re:   Professionals and Social Responsibility for the Arms Race

<DB>   5) the ability of expert systems to continuously monitor
<DB>      stock values and react has led to increased volatility
<DB>      and crisis situations in the stock markets of the world
<DB>      recently.  What happens if machine induced technical trading
<DB>      drops the stock market by 20 % in one day , 50 % in one day ?

I don't think this is because of the use of computers. I think it is
because of the tendency of people to use stupid "pyramid/speculation"
algorithms for investing. You all know what a pyramid scheme is? You
invest in something just because of a promise that later investors
will bail you out? I think speculation in gold and just about anything
else that is overpriced is just a pyramid scheme in disguise. The
price is rising, so you invest more money, in the belief that later on
somebody else will buy you out at a higher price. But at some point this
speculation breaks down because the price is so much higher than the
value that nobody in their right mind is willing to buy you out, so
whoever is the last person to buy before the price collapses gets
stuck with a big loss. In my opinion, almost all "investment" in
"collectibles" is really just pyramid-style speculation in disguise.
Anyway, if you do this speculation manually, you suffer a big personal
loss but it doesn't affect the market much. If lots of large investors
have computer programs that do this instantly, we DO get
measurably increased volatility.

I'd rather see "contrarian" computer programs for stock trading. Look
only at the long-term value of the stock, not the day-to-day recomputed
price. When the stock is cheaper than its value, buy; when it is more
expensive than its value, sell; if it is VERY cheap, buy more; if it is
VERY expensive, sell everything you have in that stock (liquidate it).
If those computer programs were running the automated trades, they
would decrease rather than increase volatility. <<Opinion of REM>>
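
As a rough sketch (in Python, with thresholds invented purely for
illustration; the paragraph above gives none), the core rule might be:

def contrarian_action(price, value, cheap=0.9, very_cheap=0.7,
                      dear=1.1, very_dear=1.3):
    """Compare market price to an estimate of long-term value and
    return a trading action (ratio below 1 means underpriced)."""
    ratio = price / value
    if ratio <= very_cheap:
        return "buy more"        # VERY cheap: load up
    if ratio <= cheap:
        return "buy"             # cheaper than value
    if ratio >= very_dear:
        return "liquidate"       # VERY expensive: sell everything
    if ratio >= dear:
        return "sell"            # more expensive than value
    return "hold"                # near fair value: do nothing

print(contrarian_action(30.0, 50.0))  # stock worth $50 at $30 -> "buy more"

Note the rule only ever trades against the current price movement, which
is why it would damp swings instead of amplifying them.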

<DB>    1) not all problems can be reduced to computation, for
<DB>       example how could you conceive of coding the human
<DB>       emotion loneliness.

I think this cliche that you can't precisely define loneliness (hence
you can't code it) is based on some mystique about emotions, similar
to our inability to "create life" because "God created life" or
somesuch nonsense. Perhaps we can't code all the details of loneliness
or life because in practice they are too complicated (with too many
associated factors), but I think the basic idea can be defined and
someday soon programmed. I won't define life here because others have
already done it in recent years. Loneliness is the absence of needed
or desired connection with peers, usually for sex or communication.
Desire means a primitive goal, which usually means genetic survival
for a living entity, although the word is misused to mean just about
anything the entity says it wants. Need means a non-primitive goal
which is a means to an end, the end being a primitive goal or some
non-primitive goal that is one step closer to the primitive goal on
which it is ultimately based.

Survival requires reproduction (which involves sex in humans) and
information to build a world model in one's mind, to better know how
actions relate to accomplishments and side-effects and thus better
compute a strategy for survival. If a computer is programmed to survive
by natural means, that is, observing the world and figuring out how to
manipulate the world to better survive, it may compute that it needs
communication to build a world model, and if cut off from communication
it may be "lonely". If a computer is programmed to replicate itself
asexually (by cloning), there is no need for "sex", but if a computer
exchanges software with other computers to get more facilities to aid
survival, this exchanging may be regarded as "sexual" activity, and if
the computer is unable to find compatible other computers to exchange
software with, it may be "sexually lonely".
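
To make those definitions concrete, here is a toy rendering in Python
(the class, the goal names, and the loneliness test are all invented
illustrations, not a claim about how a real mind or program works):

class Goal:
    """A desire is a primitive goal; a need is a non-primitive goal
    that serves another goal, chaining back to a primitive one."""
    def __init__(self, name, supports=None):
        self.name = name
        self.supports = supports       # the goal this one serves, or None

    def is_primitive(self):
        return self.supports is None   # primitive goal = "desire"

survival      = Goal("survive")                       # primitive: a desire
world_model   = Goal("build world model", survival)   # need, one step away
communication = Goal("communicate", world_model)      # need, two steps away

def lonely(goals, connected):
    """Loneliness = holding a connection goal that is going unmet."""
    return any(g.name == "communicate" for g in goals) and not connected

print(lonely([survival, world_model, communication], connected=False))  # True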

<DB>    2) AI will never duplicate or replace human intelligence
<DB>       since every organism is a function of its history.

If we had a way to exactly copy a human brain, maybe we could
duplicate human intelligence. But this isn't what AI is about. AI is
meant to approximately replace humans with equivalent function, not to
duplicate a human mind or replace it with an exactly like-functioning
mind. When a worker quits a job and a replacement is found, the
replacement is never exactly like the former worker, merely
sufficiently similar that the job can be adapted slightly to fit the
new worker. (The claim usually is that the new worker is trained to do
exactly what the former worker did, but in fact this is never achieved;
the work performance may be in the same defined set, within the same
error tolerance, but the output is never EXACTLY the same. For example,
the new pizza maker might put 2% more sausage on the pizza than the
former pizza maker.) To achieve approximate equivalence of function, as
humans do, AI doesn't need to duplicate the exact history of the human
being replaced, merely set up a body of history or knowledge that is
sufficient to do the desired task within error tolerances. Therefore I
think the argument against AI is an attack on a straw man.

<DB>    6) courage is infectious, and while it may not seem to be
<DB>       a possibility to some, the arms race could be stopped cold
<DB>       if an entire group of professions, (ie computer scientists),
<DB>       refused to participate. 

Curious sidelight: have you read the book "The Selfish Gene",
especially the last chapter on memes (the mental or software
equivalent of genes)? Your terminology "infectious" may be literally
true in that model. Courage itself is like "bird" or "cancer", a single
name for many, many different organisms, but the particular kind of
courage evidenced by computer scientists may be a specific meme that
is "going around" (in the influenza sense).

<DB>    8) every researcher should assess the possible end use of
<DB>       their own research, and if they are not morally comfortable
<DB>       with this end use, they should stop their research.

I agree: look at all the various uses, and if they are exclusively or
predominantly destructive to what we want in the future, then stop; but
if they are a mixture of good and bad, then go ahead and do the research
while advocating good applications and refusing to work on specific bad
applications.

<DB>       He specifically referred to research in machine vision, which he
<DB>       felt would be used directly and immediately by the military for 
<DB>       improving their killing machines.  While not saying so, he implied
<DB>       that this line of AI should be stopped dead in its tracks. 

I say go ahead and work on machine vision, provided the results aren't
classified secrets, but don't work on any specific military
installations or applications of the research.

<DB>   2) His background, technical and otherwise, seems to predispose
<DB>      him to dismissing some technical issues a priori. i.e. a machine
<DB>      can never duplicate a human, why ?, because !.  

Yup.

<DB> The main question that I see arising from the talks is : is it time
<DB> to consider banning, halting, slowing, or otherwise rethinking 
<DB> certain AI or technical adventures, such as machine vision, as was
<DB> done in the area of recombinant DNA.

No, only in cases where the only reasonable application would be
military, or the work is classified (secret) from the start, or the
military have the option of retroactively classifying it, or you are
in fact working on a military application.

------------------------------

Date: 1986 November 17 00:44:44 PST (=GMT-8hr)
From: Robert Elton Maas <REM%IMSSS@SU-AI.ARPA>
Subject: why not nuclear airplane?

<PFD> Date:     Fri, 7 Nov 86 21:11 EDT
<PFD> From:     "Paul F. Dietz" <DIETZ%slb-test.csnet@RELAY.CS.NET>
<PFD> Subject:  24 hour waiting period?

<PFD> There *is* one strategic weapons system that could still be flying
<PFD> three days after an attack: the nuclear airplane!  Too bad we
<PFD> didn't build it... (:-)).

Hmmm, that would be cheaper than SDI. Anybody know why it wasn't built?
Would it have cost more than a billion dollars per plane? Nowadays that
may be "cheap" compared to the alternatives. Debate on this topic, please?

------------------------------

Date: 1986 November 17 00:51:13 PST (=GMT-8hr)
From: Robert Elton Maas <REM%IMSSS@SU-AI.ARPA>
Subject: whether programming can somewhat replace a human employee?

<H> Date: Sat, 8 Nov 86 09:55:53 PST
<H> From: ihnp4!utzoo!henry@ucbvax.Berkeley.EDU
<H> Subject:   Professionals and Social Responsibility for the Arms Race

>   2) AI will never duplicate or replace human intelligence
>      since every organism is a function of its history.

<H> This just says that we can't exactly duplicate (say) human intelligence
<H> without duplicating the history as well.  The impossibility of exact
<H> duplication has nothing to do with inability to duplicate the important
<H> characteristics.  It's impossible to duplicate Dr. Weizenbaum too, but
<H> if he were to die, I presume MIT *would* replace him.  I think Dr. W. is
<H> on very thin ice here.

Yup, well said, Henry.

>    5) technical education that neglects language, culture,
>       and history, may need to be rethought.

<H> Just to play devil's advocate, it would also be worthwhile to rethink
<H> non-technical education that covers language, culture, and history while
<H> completely neglecting the technological basis of our civilization.

The James Burke series "Connections" and "The Day the Universe Changed"
do a nice job of showing the relationships between history/politics/religion
and science/invention.

------------------------------

Date: 1986 November 17 00:54:11 PST (=GMT-8hr)
From: Robert Elton Maas <REM%IMSSS@SU-AI.ARPA>
Subject: bomber crew determining nucwar status for themselves firsthand

<H> From: hplabs!pyramid!utzoo!henry@ucbvax.Berkeley.EDU
<H> Date: Sun, 9 Nov 86 08:40:11 pst
<H> Subject: Unequivocal Confirmation of Detonation

<H> Related thought:  if the B-52s and B-1s get airborne under attack, a
<H> large percentage of the surviving bomber crews will be able to personally
<H> verify nuclear explosions on US soil.  They don't scramble that fast; they
<H> will know about it when the base behind them gets blasted.  How good are
<H> the bomber -> command communications?  (Communications systems intended
<H> for "go" orders aren't necessarily two-way.)

I would think it reasonable to give bomber crews standing orders to this
effect: "If you take off (scramble) in response to an alert of Soviet
attack (as yet unconfirmed or only partially confirmed), and if after you
are airborne you or your crew personally observe your base behind you
being destroyed in a thermonuclear fireball, you should immediately begin
flying toward the USSR. During your flight you should make all reasonable
attempts to intercept military or commercial broadcasts to determine
whether any command/control/communications (CCC) posts or cities still
exist. If during the next 12 hours you do not obtain any evidence that
the USA still exists, you are hereby ordered to go ahead and destroy your
targets without any further authorization. If however you do continue to
receive radio signals that indicate the USA still exists, but do not
receive any explicit orders, you should use your own judgment whether to
attack your target or not."
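
The standing order is, in effect, a decision procedure. A schematic
rendering in Python (every name is invented for illustration; nothing
here is claimed to be actual doctrine):

def bomber_decision(base_destroyed, hours_since_scramble,
                    evidence_usa_exists, explicit_orders=None):
    """Return the crew's action under the standing order sketched above."""
    if explicit_orders is not None:
        return explicit_orders                    # real orders always win
    if not base_destroyed:
        return "hold: attack unconfirmed, await orders"
    if hours_since_scramble < 12:
        return "fly toward USSR, keep monitoring broadcasts"
    if not evidence_usa_exists:
        return "destroy targets under standing authorization"
    return "crew judgment: USA still exists but no orders received"

print(bomber_decision(True, 13, False))  # -> destroy targets under ...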

------------------------------

Date: 1986 November 17 01:29:06 PST (=GMT-8hr)
From: Robert Elton Maas <REM%IMSSS@SU-AI.ARPA>
Subject: anti-tank ideas

<LIN> Date: Mon, 10 Nov 1986  09:07 EST
<LIN> From: LIN@XX.LCS.MIT.EDU
<LIN> Subject: defenses, first strike

<LIN> A real defensive problem is building a man-portable anti-tank weapon
<LIN> that will work against modern battle tanks.  There is no such beast
<LIN> now.  Given a billion dollars in R&D money, I am quite certain that we
<LIN> could build such a thing.

The fiber-optic communication device is a start (my naive opinion).
Suppose we had two fibers, one from the human to the launch station, and
one from the launch station via the reel to the in-flight anti-tank
missile. Then the human could be a few hundred yards away from the launch
station, maybe even moving around a little (a few yards) after the launch
station has been set up earlier. The human could watch for incoming tanks
from behind a tree or bush, launch on sight of a tank, and the blast from
the launch would give the tank commander no precise idea where the human
is physically located, thus no easy target other than the spooling
communication fiber itself. Thus the human can both launch and survive,
which is a great moral advantage over the original design.

------------------------------

Date: Thursday, 13 November 1986  00:26-EST
From: cfccs at HAWAII-EMH
To:   ARMS-D
Re:  Selling SDI

 o SDI is sold to the masses and Congress via spectacular exaggeration
   about what it will accomplish.

I haven't seen any spectacular exaggerations.  Any examples?
Can you prove that the proposed accomplishments will never be possible?  I
know of one instance where they never will be: if we don't try!

 o SDI is encroaching on our Universities and academic freedom.  Just as
   bad, in my eyes, is that it is making all University/DoD connections
   look untenable.

I disagree.  DoD has given money for specific research projects from its
very inception.  Why is it now considered encroachment and a loss of
freedom?  Why do you feel all University/DoD connections are being looked
upon as untenable (unreasonable, indefensible, not capable of being
maintained)?  Your argument sounds too emotional.

 o SDI will be permanently untestable.

By untestable I assume you mean we cannot be sure it can be proven effective
to the Nth degree.  That may be true, but what can be?  We can model its
effectiveness as we do with most new technology, and we can benchmark it on
a small scale and calculate the full potential using the benchmark as a base
measure.  No defense can ever be fully tested except under actual battle
conditions, so why single out SDI with that argument?

 o SDI will, and here I agree with Harold Brown, former physicist and sec-
   retary of Defense, have offensive uses long before--read this moneywise
   as well as timewise--any partially practical defensive ones are ready
   for (non)testing.  Off the top of my head, the rail gun comes to mind, as
   does light speed ASAT.

I agree that this may be possible, but it is not necessarily true.  The fact
remains that the same is true for most types of research; almost 100%, if
you only count DoD money.  It is the business of DoD to figure out potential
uses before the other side does.  This may be paranoid, but it is their job.
Again, why use this argument here?  It is not unique to SDI.

   (There is this awful double standard invoked by SDI proponents that I
   just can't stand: technology & engineering & lots of late night program-
   ming sessions will overcome all the difficulties, even though we can't
   ever figure out all possible countermeasures, but it's somehow obvious
   that offensive uses are theoretically impossible, now and forever, no
   matter how ingenious our lab boys (and girls) are.  Sorry, I just don't
   buy that sort of non-reasoning, and I don't think the Soviets do either.)

They say the atmosphere will prevent SDI from ever being able to hit an Earth
target.  They do not discount offensive uses in space itself.  I believe that
if the effort were put in, any limits now 'obvious' would be overcome.
However, this is not the point.

 o SDI, while it may not do much for stopping arms, does wonders for stop-
   ping arms discussions.  Considering Reagan's stated antipathy towards
   arms agreements in the past, and the fact that SDI is his own brainchild,
   I seriously question his motives.

There are only two ways to end the nuclear threat: make nuclear weapons
impotent, or do away with them altogether.  The President has decided the
latter is not possible, considering all the distrust that has built up
between the US and SU, and that ultimately they are not the only two
players.  The whole world is involved, and many nations would do much to
gain the power the SU and US now have.

   So let me turn your R&D question around:

   If R&D are so wonderful, why doesn't Reagan throw some gigabucks at the
   Universities, no strings attached?  After all, Reagan has stated that SDI
   research is to be shared with the Soviets, so why not bring it out in the
   open to begin with?

   Or how about throw some of these gigabucks at NASA?  (Directly, not just
   SDI trickledowns)  And not just money to replace the Challenger, either,
   but a full vigorous space program already.

   In other words, my final point of contention with SDI is:

 o It costs a hell of a lot.

Why should money be thrown at the universities without a specific goal in
mind?  Why fund a more vigorous space program when the technology required
to protect any advances in space is not being developed?  That sounds like
putting your money into a bank that doesn't have a vault!  If it can be
proven that no advances will be made by pursuing SDI technology, then let's
see the proof.  All we have is different people trying to see into the
future without even testing the water.  An analogy for both sides of the
issue is trying to walk on water.  One side declares it cannot be done
because it is impossible.  The other declares it can be done, we just
haven't figured out how yet!  Is that a good reason to quit trying?

CFCCS @ HAWAII-EMH

------------------------------

Date: 1986 November 17 01:46:46 PST (=GMT-8hr)
From: Robert Elton Maas <REM%IMSSS@SU-AI.ARPA>
Subject: telepresence for SDI, massively parallel human intelligence?

One possible SDI design I haven't seen discussed is massively parallel
human natural intelligence linked by a giant computer net, with very
little actual artificial intelligence. Is this because nobody thought
of the idea before I did, or because there's something fundamentally
wrong with it, or because it's politically unsupportable even though a
good idea? During a national emergency, instead of everyone rushing
for the hills or crying and praying etc., we have everybody at their TV
sets watching the incoming attack from the viewpoint of one of the
observation-and-attack stations (a different station for each viewer,
except for multiple backups to allow majority logic or failsoft
operation if somebody faints). When somebody sees a missile, that
viewer uses the joystick to aim the cursor at the missile, at which
point the attack station latches onto the image and follows through
with a laser or particle beam or whatever the method is. The viewer has
the task of visually discriminating warheads from decoys. If the viewer
sees just a point of light, the cursor is aimed, the observation
station first locks onto it and magnifies the image until it is large
enough to fill most of the screen, and then the human presses either
the FIRE button or the DECOY button.
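
The engagement sequence is simple enough to sketch in Python.
Everything below (the station and viewer objects, the method names, the
majority rule among backup viewers) is an invented stand-in for
whatever the real hardware and net would provide:

def engage(station, viewer):
    """One pass of the human-in-the-loop sequence described above."""
    blip = viewer.aim_cursor()            # joystick: point at the light
    station.lock_on(blip)                 # station latches onto the image
    while not station.image_fills_screen():
        station.magnify()                 # zoom until the viewer can judge
    if viewer.press_button() == "FIRE":   # human verdict: FIRE or DECOY
        station.fire_beam(blip)           # laser/particle-beam follow-through
    else:
        station.mark_decoy(blip)          # discard, go back to scanning

def majority(verdicts):
    """Backup viewers watch the same station; majority logic decides
    if one of them faints or errs (the failsoft mentioned above)."""
    return "FIRE" if verdicts.count("FIRE") > len(verdicts) // 2 else "DECOY"

print(majority(["FIRE", "DECOY", "FIRE"]))   # -> FIRE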

During times of peace the same TV/joystick hooked through the same
communication net can provide training exercises so viewers can learn
how to distinguish various kinds of decoys from various kinds of
Soviet warheads. Training exercises can use computer simulations of
visual images as well as actual videotape of Soviet tests.

Anybody interested in developing or criticizing this idea?

------------------------------

End of Arms-Discussion Digest
*****************************
