hogg@utcsri.UUCP (John Hogg) (06/19/85)
This is paper #3 in the CSRI/CPSR presentation that I said I'd post. Ric
Hehner spoke (in a non-technical manner) about his own area of expertise:
program correctness.
I should mention that both the authors whose papers I've posted have
indicated that they do NOT wish to be contacted about the contents; address
any replies to me. Ric and Andrew feel (correctly, in my opinion) that
debating on Usenet accomplishes little. However, in the hopes that I
can get Henry to concede a number of points, if not the entire issue,
here is the
Submission to the Liberal Task Force on
Peace, Security and World Disarmament
by
Eric C.R. Hehner
1985 May 30
One of the main components of the Strategic Defence
Initiative (SDI, or "Star Wars") is the development of com-
puter systems capable of recognizing and responding to a
nuclear attack. The response may be a laser or particle
beam from a satellite, or even the launching of retaliatory
nuclear weapons; these responses would be automated. I
quote from the U.S. Department of Defence Report "Eliminat-
ing the Threat Posed by Nuclear Ballistic Missiles, Vol. 5,
Battle Management, Communications and Data Processing", Feb.
1984:
In all scenarios for a BMD [Ballistic Missile Defence]
battle, timely action is essential, and it is acutely
so if battle is joined in the boost phase [by satellite
lasers]. Time scales are too short [300 seconds of
boost, minutes to impact] for human decision makers to
select among a complex array of alternatives. It seems
clear to the DTST [Defensive Technologies Study
Team], therefore, that _some degree of automation in
the decision to commit weapons is inevitable if a
ballistic missile defence system is to be at all credible_.
So it is absolutely necessary to get the software
right. Unfortunately, it is not possible.
I realize that any statement of impossibility sounds
like an invitation to be proven wrong. But not in this
case. Let me illustrate with the old example of pennies on
a checkerboard. If you place one penny on the first square,
two on the next, four on the next, and so on, doubling at
each square, the number of pennies on the final square is
greater than the age of the universe measured in seconds.
Computer programs to solve certain problems resemble the
checkerboard in that two alternative cases must be con-
sidered, and for each alternative, there are two subcases to
be considered, and so on. The number of cases is the same
as the number of pennies. Regardless of any advances in
technology that may come, such programs cannot be executed.
This is a well-known, standard result in computer science.
One such problem is chess; it is easy to write a program to
play a perfect game of chess, but that program cannot be
executed. Another such problem is the interpretation of
sensory data, which is required to recognize an attack.
Another is the strategy to respond to an attack.
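A small sketch in Python makes the checkerboard arithmetic concrete
(the fifteen-billion-year figure for the age of the universe is a
rough assumption, not taken from the paper):

    # Pennies on the last of 64 squares, doubling from one penny.
    pennies_on_last_square = 2 ** 63               # about 9.2e18

    # Age of the universe in seconds, assuming ~15 billion years.
    seconds_per_year = 365.25 * 24 * 3600          # about 3.2e7
    age_of_universe_seconds = 15e9 * seconds_per_year  # about 4.7e17

    print(pennies_on_last_square > age_of_universe_seconds)  # True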
The SDI calls for new developments in artificial intel-
ligence to solve these problems. Artificial intelligence is
the branch of computer science that begins with the full
knowledge that the correct solutions to its problems are not
feasible, and it seeks solutions that work pretty well most
of the time, and fail only occasionally. And that is very
valuable whenever the benefits of success outweigh the
consequences of failure. With further research, computers
may be able to play chess so well that they almost always
make good moves, and rarely make dumb ones. But for SDI,
the consequences of misinterpreting the data, or making a
dumb strategic move, could very well be the start of a
nuclear holocaust.
There is another reason that the SDI goals are not
achievable. Even if perfect pattern recognition and perfect
strategy were feasible, programmers make mistakes. Every
large program has bugs. Once again, this is acceptable in
applications where the consequences of failure can be
tolerated. When banking software fails, customers are
inconvenienced. When airplane navigation software fails, a
planeload of people may die. This has happened more than
once; apparently the airlines consider it tolerable. When
IBM delivers an operating system, it may have as many as
1,000 bugs. These are not discovered by IBM, but by its
customers after delivery. When hardware fails, if the
failure is detected, a backup system can be substituted.
But when software fails, if the failure is detected, any
backup copy has the same errors in it.
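A tiny sketch makes the point; the routine and its bug here are
invented purely for illustration:

    # A hypothetical routine with a latent bug: it divides by zero
    # whenever closing_rate is zero.
    def time_to_impact(distance, closing_rate):
        return distance / closing_rate    # bug: no check for zero

    primary = time_to_impact
    backup = time_to_impact               # "backup" is the same code

    for system in (primary, backup):
        try:
            system(1000.0, 0.0)
        except ZeroDivisionError:
            print("failed")    # both copies fail on the same input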
The U.S. Department of Defence is aware of the problem
of human error in programming, and it proposes a solution.
Quoting again from the Fletcher report:
The development of a BMD [Ballistic Missile Defence]
battle management system will extend over several years
and will be so complex that no one individual will have
a complete understanding of the system. First, a com-
plete set of system requirements is needed. Automated
tools must be developed that will permit a formal set
of requirements to be specified and verified for com-
pleteness and consistency... Experience with large sys-
tems has shown that, when errors are corrected or a
system is upgraded with new capabilities, the probabil-
ity of introducing new errors is very high. This is
because it is impossible to understand all the possible
implications that a change in a large system may have.
Here again, automated tools are needed...
The SDI proposal is to write another program, called a
verifier, that can check the correctness of programs. But a
verifier, being a large program itself, may well contain
bugs. And verification is another of those problems that
is not feasible at the scale of the systems being proposed.
Verifiers exist now; they have been used to verify programs
consisting of several hundred lines of code. The proposed
SDI software to recognize, evaluate and respond to nuclear
attack, according to Defence Department estimates, will be
about _8 million_ lines of code. Without doubt, it will con-
tain a lot of bugs.
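To make the scale concrete, here is a hypothetical sketch in Python;
the density of one two-way branch per thousand lines is an assumption
chosen for illustration, not a Defence Department figure:

    # Each independent if/else doubles the number of execution
    # paths a verifier would have to consider.
    def paths(branches):
        return 2 ** branches

    # Assume (for illustration only) one branch per thousand lines
    # of an 8-million-line system: 8,000 branches in all.
    print(len(str(paths(8000))))   # 2409 -- the path count has
                                   # over 2,400 decimal digits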
I have been speaking as a scientist, but I would like
to speak for a few minutes as a human being. It seems to me
that scientific debate about the feasibility of Star Wars
misses the main point. The threat posed by nuclear weapons
is a political problem, with an obvious, if not easy, polit-
ical solution. When politicians propose a scientific solu-
tion, they are raising a distraction from their own
failures. The effect of SDI will be to provide an excuse
to continue manufacturing nuclear weapons; the idea is that
we can have our nuclear weapons and be safe from them too.
Ronald Reagan will feel even less obliged to negotiate arms
reductions. Even if SDI were completely successful in its
aims, countermeasures would soon follow: perhaps missiles
with short boost phases, or missiles that do not leave the
atmosphere and so remain out of range of X-ray lasers, or
anti-satellite weapons that destroy the lasers before the
missiles are launched, or... And lasers that can destroy
missiles undoubtedly have their offensive uses. SDI is no
solution to the arms race, but a further escalation.
I am not against defence; in fact, I have worked for
the Defence Research Establishment Pacific. Furthermore, my
institute, the Computer Systems Research Institute, stands
to gain from SDI research contracts. I am sure that you
will be hearing from self-interested groups arguing for SDI,
without concern for the national or international good. And
there are scientists who think only of their science, and
not of the human consequences. But some scientific research
should not be done. I wish some scientists had said that a
long time ago.
--
John Hogg
Computer Systems Research Institute, UofT
{allegra,cornell,decvax,ihnp4,linus,utzoo}!utcsri!hogg

henry@utzoo.UUCP (Henry Spencer) (06/25/85)
I'm going to make a longer response to the main content of Ric
Hehner's paper, but I thought I'd get a couple of peripheral things
out of the way first, separately.

> ...Ric and Andrew feel ... that debating on Usenet accomplishes
> little.

I disagree, but not for the obvious reasons. It is quite possible
that there is little chance of the debaters changing their views as
a result of debate. What such discussion/debate can, and sometimes
does, accomplish is to inform and educate the spectators. This is
not trivial.

> ... When airplane navigation software fails, a
> planeload of people may die. This has happened more than
> once; apparently the airlines consider it tolerable. ...

I would like to see some references on this. If Ric is thinking of
the Mount Erebus and KAL007 crashes, then he has weakened his
argument with a serious error of fact. Neither of these crashes was
a software failure (modulo some small uncertainty about KAL007).

The Mount Erebus crash is definitely known to have been human error.
Some navigation reference points were changed, and the change was
not propagated to all the people who should have been told. As a
result, correctly-working navigation software flew the plane into a
mountain that the software thought was well off to one side. Bad
weather made it impossible for the pilots to detect this in time.
There is no suspicion of hardware or software failure.

KAL007 is a bit less certain, because of lack of evidence, but the
most plausible theory is a subtly-incorrect order given by the crew
to the navigation software. (Ironically, slightly "smarter" software
might have saved KAL007. The navigation software had to be given
explicit instructions for each leg of a complex course, and had no
overall knowledge of the course or the constraints on it. Hence the
software was unable to apply any real consistency checks to the
orders given to it.) As far as I know there is no suspicion of
hardware or software failure.

Finally I doubt that the airlines -- or the passengers, or the
crews, or the government regulatory agencies -- consider fatal
crashes "tolerable" regardless of their cause!
--
Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,linus,decvax}!utzoo!henry
henry@utzoo.UUCP (Henry Spencer) (06/26/85)
One other air-travel incident that I forgot, although this one wasn't
actually a crash: the CAAC 747 near-crash over the Pacific. Alas, this
doesn't support Ric's comment either, since this was the result of
correctly-functioning software attempting to carry out impossible orders.
Once again, putting more judgement in the software might have led to the
discovery of the problem before it became critical.
--
Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,linus,decvax}!utzoo!henry