henry@utzoo.UUCP (Henry Spencer) (07/05/85)
[Thought I'd never get around to my promised followup, didn't you?
No such luck.]

First, I want to get a side issue out of the way: linkage of SDI to
offensive strategic weapons. Then (in the next message) I will discuss
the central issue of software problems of SDI.

> The response [to a detected attack] may be [activation of defences]
> ... or even the launching of retaliatory
> nuclear weapons; these responses would be automated.

This is a theme that SDI opponents assert repeatedly: that implementation
of SDI may/will involve automation of the offensive nuclear systems as
well. There is no logical necessity for this whatsoever. The *only*
system that needs split-second response is boost-phase interception;
nothing else requires action within seconds. In particular, the launch
of retaliatory offensive weapons does not require such lightning
response. So there is no requirement that it be coupled to SDI
activation.

Also, such a coupling would fly in the face of forty years of practice
and policy on the activation of nuclear weapons. One hears scare stories
about accidents causing nuclear alerts, with the implication that the
world was on the hair-trigger edge of war. Nonsense. False alarms in
warning systems are much more common than people realize, and even in
the well-publicized cases, there was never any serious chance of war.
This is precisely *because* nobody takes the hardware's word for it.
Nor, for that matter, any single human being's word for it. This policy
is not an accident. There are elaborate safeguards around strategic
nuclear weapons, aimed at making *certain* that no irrevocable action
occurs without positive confirmation that an attack is in progress.
(Something that bothers me is the "peace movement"'s serious ignorance
of the nature of the systems they criticize.) Much of the recent uproar
about "launch on warning" is because a launch-on-warning policy would
require seriously weakening the "positive confirmation" criteria.
(Note that "launch on warning" does *not* inherently imply automatic
launch, despite some of the more hysterical reports.) I emphasize that
the positive-confirmation rule and the multiple precautions are not an
accident, but the direct result of major policy decisions which will
not be lightly overturned. So automatic initiation of offensive weapons,
definitely a scary thought, is not only unnecessary but would be a total
about-face from long-entrenched fundamental policy.

And in any case, the issue of automatic initiation of offensive weapons
has little or nothing to do with SDI deployment. The two policy issues
are quite independent, although if both were adopted their
implementations might (repeat, *might*) share some hardware. Let us not
confuse the two.
--
Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,linus,decvax}!utzoo!henry
jchapman@watcgl.UUCP (john chapman) (07/08/85)
> [Thought I'd never get around to my promised followup, didn't you?
> No such luck.]
. . .
> nuclear weapons, aimed at making *certain* that no irrevocable action
> occurs without positive confirmation that an attack is in progress.
> (Something that bothers me is the "peace movement"'s serious ignorance of
> the nature of the systems they criticize.) Much of the recent uproar

Sigh... somehow it's always the peace movement that's portrayed as
ignorant; doesn't the serious ignorance among those who promote a
nuclear "defense" bother you?

> about "launch on warning" is because a launch-on-warning policy would
> require seriously weakening the "positive confirmation" criteria. (Note
> that "launch on warning" does *not* inherently imply automatic launch,
> despite some of the more hysterical reports.) I emphasize that the

It is my impression that the principal feature of launch on warning is
that the side being attacked does not wait for actual detonation or
impact of incoming missiles before ordering retaliation. This does
reduce the amount of time available for a decision, and if weapons
delivery systems continue to decrease delivery time (or even appear
to effectively do so by various forms of stealth) it will necessitate
either 1. having an impregnable retaliatory system, so it is not
necessary to launch before impact, or 2. employing automatic launch
systems, since there will not be time for human decision making.

> positive-confirmation rule and the multiple precautions are not an
> accident, but the direct result of major policy decisions which will
> not be lightly overturned. So automatic initiation of offensive weapons,
> definitely a scary thought, is not only unnecessary but would be a total
> about-face from long-entrenched fundamental policy.
>
> And in any case, the issue of automatic initiation of offensive weapons
> has little or nothing to do with SDI deployment.
> The two policy issues
> are quite independent, although if both were adopted their implementations
> might (repeat, *might*) share some hardware. Let us not confuse the two.
> --
> Henry Spencer @ U of Toronto Zoology
> {allegra,ihnp4,linus,decvax}!utzoo!henry

John Chapman
henry@utzoo.UUCP (Henry Spencer) (07/10/85)
[I had better get this done, or else everyone will have forgotten what
the original article was about! I find myself with less and less time
to read and post on can.politics, and may drop off the list.]

Having dealt with some side issues, I now [do I hear a chorus of "at
last"? :-)] come to the heart of the matter: the software situation as
it affects an SDI system. I agree with many of Ric's comments, but not
with some of his conclusions.

The prospects for verifiably correct programs of the size involved are
dim, going on zero. DoD's notion of automating the problem away is a
fantasy at this time; the technology is not up to it. I find it amusing,
in a black sort of way, that DoD has been sold on the wonders of program
verification and AI program generators by the same community that is now
(in part) frantically trying to retract those claims. [I should note
that I'm not thinking of Ric here.]

However, this is not necessarily a crippling problem, for several
reasons. The first is that absolute correctness is not required, if what
we are worried about is accidental initiation of war. The "no accidental
war" requirement does not demand, for example, perfect discrimination of
decoys from real warheads. It is sufficient that the system reliably
discriminate between "major attack" and "no major attack". This is a
much looser criterion than complete correctness.

Furthermore, why should the decision to activate an SDI system against
a major attack have to be automated at all? Yes, the decision times are
short. So are the decision times involved when flying a plane or driving
a car! There is a difference between the complexity of pointing hundreds
of defensive weapons at the right targets, and the complexity of
deciding that it is appropriate to do so. The former may well need to be
mostly or totally automated; the latter does not.
It is important to distinguish between DoD's well-known mania for
automating everything, and the degree of automation that is actually
*needed*. Looking at a set of sensor displays and deciding, promptly,
whether a major attack is in progress or not does not sound as if it is
beyond human capabilities. I see no reason why the decision to hold fire
or open fire needs to be automated at all. The inevitable possibility of
human mistake or error can be dealt with by the method already used for
such decisions as ICBM launches: simultaneous approval by multiple
observers is required.

[begin brief digression]
(I would further speculate -- note that this is now speculation, which
I am not prepared to defend at length -- that if one were willing to
accept the costs of having large numbers of well-trained people on duty
in several shifts, it would not be impossible to use a largely manual
control system for a missile-defence system. Terminal homing of
interceptors would probably have to be automated, as would some initial
prefiltering of data, but I speculate that the "battle management"
functions could be done by humans with adequate speed. Complex, yes;
impossible, maybe not.)
[end brief digression]

> ... Artificial intelligence is
> the branch of computer science that begins with the full
> knowledge that the correct solutions to its problems are not
> feasible, and it seeks solutions that work pretty well most
> of the time, and fail only occasionally.

This definition seems to me to be politically slanted, although it is
definitely based on fact. Heuristic methods necessarily do not give
*optimal* solutions, by definition, but that is a far cry from implying
that they sometimes fail to give *correct* solutions. The inability to
quickly compute an optimal solution for, say, the "travelling salesman"
problem does not imply an inability to quickly compute a valid (although
perhaps far from optimal) solution for it.
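[An editorial aside: the valid-versus-optimal distinction drawn above is easy to make concrete. The classic illustration is the nearest-neighbour heuristic for the travelling salesman problem: it always produces a *valid* tour in a fraction of the time an exhaustive search would take, even though the tour is generally not *optimal*. The city coordinates below are invented for illustration.]

```python
import math

# Hypothetical city coordinates; any list of (x, y) points works.
CITIES = [(0, 0), (0, 2), (3, 1), (1, 5), (4, 4)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbour_tour(cities):
    """Greedy heuristic: repeatedly visit the closest unvisited city.

    The result is always a *valid* tour (every city exactly once),
    but in general it is not the *optimal* tour -- exactly the
    distinction between correct and optimal solutions.
    """
    unvisited = list(cities[1:])
    tour = [cities[0]]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

tour = nearest_neighbour_tour(CITIES)
# The heuristic never fails to produce a correct (valid) answer.
assert sorted(tour) == sorted(CITIES)
```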
Furthermore, it is an observed fact in a number of applications --
notably linear programming -- that the (relatively bad) worst-case
behavior of the algorithm never happens unless the input data is
carefully concocted with that in mind.

Note that I am not saying that the computing problems of SDI are easy
to solve. What I am saying is that a claim of impossibility which does
not examine the details of the problem is invalid. Many existing systems
rely on suboptimal heuristics; a number of them involve risk to human
life if the heuristics fail badly. It is often possible to construct
heuristics that quite reliably produce viable -- not optimal -- answers
for realistic inputs, regardless of how bad their theoretical worst-case
behavior is.

> And that [heuristic approach] is very
> valuable whenever the benefits of success outweigh the
> consequences of failure...

If one assumes an attack in progress, the benefits of success clearly
outweigh the possibility of failure. I have dealt above with the matter
of uncertainty as to whether an attack is in progress or not.

> With further research, computers
> may be able to play chess so well that they almost always
> make good moves, and rarely make dumb ones.

This is perhaps an ill-chosen example, in that the best current chess
programs can fairly consistently slaughter the majority of human chess
players. Several of the major programs have official (or semi-official;
I'm not sure of the politics involved) chess skill ratings that put them
well above the vast majority of the rated human players. A player who
is good enough for major international competition will demolish any
current program, but few human players ever reach that level.

> But for SDI,
> the consequences of misinterpreting the data, or making a
> dumb strategic move, could very well be the start of a
> nuclear holocaust.

This contention simply does not seem to be justified by the situation.
As mentioned above, there is no real need for an automatic "open fire"
decision; as I have discussed elsewhere, there is neither a requirement
nor much likelihood of SDI being coupled to offensive systems to the
extent of automatically activating them.

> ... When hardware fails, if the
> failure is detected, a backup system can be substituted.
> But when software fails, if the failure is detected, any
> backup copy has the same errors in it.

A backup copy of the same software, yes. But the whole reason for backup
hardware systems is to get *different* hardware into action if the first
set fails. This is why one of the Shuttle's computers runs totally
different software from the other four, duplicating their functions but
in a different way, written by different authors. It is *different*
software, a software backup that does *not* share the flaws of the
primary system.

> ... scientific debate about the feasibility of Star Wars
> misses the main point. The threat posed by nuclear weapons
> is a political problem, with an obvious, if not easy, polit-
> ical solution. When politicians propose a scientific solu-
> tion, they are raising a distraction from their own
> failures...

So the alternative to SDI is to magically turn those failures into
successes. The contention that this is possible, where SDI is not,
seems to me to be unproven. Not necessarily false, but unproven.
Certainly not obvious.

> ... Even if SDI were completely successful in its
> aims, countermeasures would soon follow...

Examination of the history of disarmament efforts gives little cause to
think that this will not be true of them as well.

> ... And lasers that can destroy
> missiles undoubtedly have their offensive uses. SDI is no
> solution to the arms race, but a further escalation.

I have discussed elsewhere the interesting reasoning: "because some
types of SDI systems would be very dangerous, any SDI system would be".
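[An editorial aside: the dissimilar-redundancy idea invoked above -- a backup written independently to the same specification, so it need not share the primary's flaws -- can be sketched in a few lines. The toy specification and function names are invented for illustration.]

```python
# Two independently written implementations of the same toy
# specification ("return the midpoint of lo and hi"), cross-checked
# at run time. Diverse implementations make a shared software flaw
# less likely, which is the point of the Shuttle's fifth computer.

def midpoint_primary(lo, hi):
    return (lo + hi) // 2           # one author's implementation

def midpoint_backup(lo, hi):
    return lo + (hi - lo) // 2      # independently written; this form
                                    # also avoids the overflow pitfall
                                    # of (lo + hi) in fixed-width
                                    # languages

def midpoint_checked(lo, hi):
    """Run both versions; disagreement signals a fault in one of them."""
    a = midpoint_primary(lo, hi)
    b = midpoint_backup(lo, hi)
    if a != b:
        raise RuntimeError("versions disagree: fail over to backup path")
    return a

assert midpoint_checked(2, 10) == 6
```

The same structure scales to N versions with majority voting instead of a two-way comparison; the cost is writing the software N times.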
--
Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,linus,decvax}!utzoo!henry
clarke@utcsri.UUCP (Jim Clarke) (07/10/85)
In article <5772@utzoo.UUCP> henry@utzoo.UUCP (Henry Spencer) writes:
>....
>I have discussed elsewhere the interesting reasoning: "because some
>types of SDI systems would be very dangerous, any SDI system would be".

But your article, from which this last comment is extracted, is using
the mirror-image reasoning: "Because we do not have (or may not have)
grounds to fear Star Wars in general, we should therefore not fear the
current actual Star Wars proposal."

Just think of it as a giant Berklix in the sky, Henry. The first time
you hit funny code, bam! we're all dead. Or maybe the second time. Or
the third. Plus, while you're installing it, the people from Xenix
(can't hardly equate the Roossians to Bell Labs, can we?) are getting
nervous, and if you can't get BSD up inside 30 minutes, they're going
to replace your PDP-11 with a PC XT.
--
Jim

(How's that for adaptive analogizing? "Survival of the fittest analogy,"
I always say.)
henry@utzoo.UUCP (Henry Spencer) (07/10/85)
> >I have discussed elsewhere the interesting reasoning: "because some
> >types of SDI systems would be very dangerous, any SDI system would be".
>
> But your article, from which this last comment is extracted, is using the
> mirror image reasoning: "Because we do not have (or may not have) grounds
> to fear Star Wars in general, we should therefore not fear the current
> actual Star Wars proposal."

I wasn't aware that there was a "current actual Star Wars proposal", in
the sense of specific ill-advised hardware systems having been chosen
for active use. And I think you are reading a bit much into the space
between the lines; I was claiming that the critics are being silly in
certain ways, not that the system they are attacking is beyond
criticism. Yes, I do have reservations, some of them quite serious,
about the current proposal. But it is vitally important that the baby
not be thrown out with the bathwater; it is high time, long past time,
that our "defence departments" got back into the defence (as opposed to
deterrence) business.

> Just think of it as a giant Berklix in the sky, Henry. The first time you
> hit funny code, bam! we're all dead. Or maybe the second time. Or the third.

Just think of our current setup as a giant OS/360 in the sky. You know
your programs won't run forever, especially since IBM keeps changing the
control blocks with each new release. It's a question of whether you can
get everything important off onto another system -- yes, even a Berklix
-- before the crash comes. Because there really isn't much hope of
convincing the Computer Center to stop buying from IBM.

[Oh boy, a rousing round of analogical one-upmanship!]
--
Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,linus,decvax}!utzoo!henry
clarke@utcsri.UUCP (Jim Clarke) (07/10/85)
In article <5775@utzoo.UUCP> henry@utzoo.UUCP writes:
>I wasn't aware that there was a "current actual Star Wars proposal", in
>the sense of specific ill-advised hardware systems having been chosen for
>active use. And I think you are reading a bit much into the space between
>the lines; I was claiming that the critics are being silly in certain ways,
>not that the system they are attacking is beyond criticism. Yes, I do have
>reservations, some of them quite serious, about the current proposal. But
>it is vitally important that the baby not be thrown out with the bathwater;
>it is high time, long past time, that our "defence departments" got back
>into the defence (as opposed to deterrence) business.
>

Sorry to misread you (though, as you can guess, I still disagree with
that last sentence).

>> Just think of it as a giant Berklix in the sky, Henry. The first time you
>> hit funny code, bam! we're all dead. Or maybe the second time. Or the third.
>
>Just think of our current setup as a giant OS/360 in the sky. You know your
>programs won't run forever, especially since IBM keeps changing the control
>blocks with each new release. It's a question of whether you can get
>everything important off onto another system -- yes, even a Berklix -- before
>the crash comes. Because there really isn't much hope of convincing the
>Computer Center to stop buying from IBM.
>
>[Oh boy, a rousing round of analogical one-upmanship!]
>--
> Henry Spencer @ U of Toronto Zoology

I surrender! (... though I wonder if instead of "a giant OS/360 in the
sky", what we really have mightn't be a whole lot of 360/30's on the
ground?)
hogg@utcsri.UUCP (John Hogg) (07/10/85)
First, a retraction: in my posting of Ric's paper that started all this,
there was a reference to an airplane crash caused by faulty software.
Upon digging further, I can't find the incident. The example is
retracted; the point about the inconceivable cost of BMD failure
remains.

"Having dealt with some side issues", I will rebut Henry's rebuttal.

First, the matter of the need for absolute correctness, given that
verifiability is virtually impossible. If we are to protect our cities
from attack, a "small" leakage is absolutely unacceptable. Thus, the
battle management software must be perfect.

If we use such follies as pop-up missiles, matters are much worse. A
missile rising out of a submarine is going to look hostile, no matter
what it does. In fact, the mere starting up of ANY sort of BMD system
is going to look hostile; remember, current bets are that a BMD system
adequate to cope with a retaliatory strike is probably feasible, so
firing it up may be the first step of a first strike. In which case,
the only way for the Soviets to preserve their deterrent is to go to
launch-on-warning so that something will get through. They may consider
some stage of BMD startup to be "warning". And then they may not... did
you play "chicken" in your younger days? Did you play it with the
population of the Northern Hemisphere in your car?

Note, by the way, that no straw-man tying of BMD to retaliatory ICBMs
is needed to cause an "accidental" war.

Now for the point that, given the source, shocks me. Henry asks

>Furthermore, why should the decision to activate an SDI system against
>a major attack have to be automated at all?

ALL proponents of SDI (with the possible exception of ol' Sixgun),
including fanatics such as Daniel Graham, consider a boost-phase defence
to be the only feasible approach. Boost phase can most comfortably be
measured in seconds. By the end of that time, the rising missiles must
be destroyed - not identified as being worthy of destruction.
If the Soviets go to simultaneous launch (difficult, but much simpler
than SDI!) the decision must be made within the first few seconds of
boost. What kind of meaningful information can a human absorb, let
alone digest, in this time? And multiple decisions by multiple observers
is even more ridiculous. Henry's speculation that humans could even
handle the battle management functions is surely the product of
hallucinogens in the Ramsey Wright water supply. Read the Fletcher
Report on Battle Management to get an idea of the magnitude of the
problem; ask me to expand on this if you wish.

The REAL "crux of the matter" is perhaps that

>If one assumes an attack in progress, the benefits of success clearly
>outweigh the possibility of failure.

On the other hand, if an attack isn't yet in progress, but is made far
more likely by a system that won't work perfectly for ICBMs and not at
all for bombers, cruise missiles and SLBMs, then the benefits of this
incredibly expensive system become very hard to see. (Of course, I
don't work for Lockheed.) And the phrase "won't work perfectly" is much
weaker than it needs to be. Numerous counters to SDI as currently
envisaged have already been proposed, many of them cheap and simple.
"Won't work at all" might well be less in error.

>I have discussed elsewhere the interesting reasoning: "because some
>types of SDI systems would be very dangerous, any SDI system would be".

Henry, in light of what I've said here, please propose a non-dangerous
SDI system!
--
John Hogg
Computer Systems Research Institute, UofT
{allegra,cornell,decvax,ihnp4,linus,utzoo}!utcsri!hogg
henry@utzoo.UUCP (Henry Spencer) (07/11/85)
>> (Something that bothers me is the "peace movement"'s serious ignorance of
>> the nature of the systems they criticize.) ...
>
> Sigh...., somehow it's always the peace movement thats portrayed as
> ignorant; don't the seriously ignorant among those who promote a nuclear
> "defense" bother you.

Ignorance anywhere bothers me. But some of the "peace movement" people
really do not appear to have the faintest idea how these systems work;
their opposition seems to arise from either ideological considerations
or herd instinct, rather than the issues themselves. I emphasize (as I
should have before) that not all "peace movement" people are like this.
The percentage is high enough to be troubling, though.

> It is my impression that the principal feature of launch on warning is
> that the side being attacked does not wait for actual detonation or
> impact of incoming missiles before ordering retaliation. This does
> reduce the amount of time available for a decision and if weapons
> delivery systems continue to decrease delivery time (or even appear
> to effectively do so by various forms of stealth) it will necessitate
> either 1. having an impregnable retaliatory system so it is not
> necessary to launch before impact, or 2. employ automatic launch
> systems since there will not be time for human decision making.

You are correct about the principal feature of launch on warning, and
about its effect on decision times. But a human being can make a
go/no-go decision in seconds. Launch-on-warning *would* mean serious
changes in policy, since current policy is "presidential order only",
and there is a strong possibility that time would be too short for
this. But people who talk about having to automate the decision are
jumping to conclusions, or have been misled by DoD's manic enthusiasm
for automating everything in sight. There does not seem to be any
logical necessity for it.
--
Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,linus,decvax}!utzoo!henry
mmt@dciem.UUCP (Martin Taylor) (07/13/85)
>Ignorance anywhere bothers me. But some of the "peace movement" people
>really do not appear to have the faintest idea how these systems work;
>their opposition seems to arise from either ideological considerations
>or herd instinct, rather than the issues themselves. I emphasize (as I
>should have before) that not all "peace movement" people are like this.
>The percentage is high enough to be troubling, though.

As I just posted in a different context (lotteries), most PEOPLE are
like this. Ask, though, whether a person is more likely to be
intelligent given that s/he belongs to the "peace movement" or to the
populace in general (or to the populace matched for socioeconomic or
educational background). The fact that a lot of people cotton on to
something fashionable doesn't make it wrong (or right).

As for whether people support some course of action for ideological
reasons or because of the issues themselves, well, that seems to be a
non-question. One's ideological background is a strong determiner of
how one will react to a controversial issue. Neither can be in play
without the other.
--
Martin Taylor
{allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
{uw-beaver,qucis,watmath}!utcsri!dciem!mmt
jchapman@watcgl.UUCP (john chapman) (07/15/85)
>
> >Ignorance anywhere bothers me. But some of the "peace movement" people
> >really do not appear to have the faintest idea how these systems work;
> >their opposition seems to arise from either ideological considerations
> >or herd instinct, rather than the issues themselves. I emphasize (as I
> >should have before) that not all "peace movement" people are like this.
> >The percentage is high enough to be troubling, though.
>
> As I just posted in a different context (lotteries), most PEOPLE are like
> this. Ask, though, whether a person is more likely to be intelligent
> given that s/he belongs to the "peace movement" or to the populace in
> general (or to the populace matched for socioeconomic or educational
> background). The fact that a lot of people cotton on to something
> fashionable doesn't make it wrong (or right).
>
> As for whether people support some course of action for ideological
> reasons or because of the issues themselves, well, that seems to be a
> non-question. One's ideological background is a strong determiner of
> how one will react to a controversial issue. Neither can be in play
> without the other.
> --
> Martin Taylor
> {allegra,linus,ihnp4,floyd,ubc-vision}!utzoo!dciem!mmt
> {uw-beaver,qucis,watmath}!utcsri!dciem!mmt

Well said.

Henry, I'd like to know why you are bothered by ignorance among some
members of the peace movement and apparently not bothered (at least not
enough to mention it) by the ignorance exhibited by some "hawks" (for
lack of a better term). Do you have some *evidence* for greater
ignorance on the part of the peace movement? Does accidental peace
bother you more than accidental war? :-)

John Chapman
...!watmath!watcgl!jchapman
fred@mnetor.UUCP (Fred Williams) (07/16/85)
>>Ignorance anywhere bothers me. But some of the "peace movement" people
>>really do not appear to have the faintest idea how these systems work;
>>their opposition seems to arise from either ideological considerations
>>or herd instinct, rather than the issues themselves. I emphasize (as I
>>should have before) that not all "peace movement" people are like this.
>>The percentage is high enough to be troubling, though.

Yes, it is troubling, but there are those of us opposing SDI involvement
who do have some idea of what is involved. I was a weapons analyst for
several years, on some projects I'd rather not talk about, with private
arms manufacturers as well as the Dept. of National Defence. I have
also worked for many years with state-of-the-art computer systems. If I
do say so myself, I have a very good idea of the type of system SDI
needs, and the reliability we could expect in the type of environment
foreseeable.

I have two objections to Star Wars:

1. These are not necessarily defensive-only weapons. A microwave laser
could be aimed at many targets, even on the ground. The warning time
for such an attack would be zero.

2. It simply won't work, even in a defensive role. The proponents of
SDI give it a 95% success rate in their best projections. The remaining
5% of Soviet missiles that do get through are still enough to wipe out
North America, and cause a nuclear winter.

Believe me, if somebody came up with a scheme that would really protect
us from nuclear attack, I'd reconsider. I would probably recommend that
it be distributed universally. However, this is not likely to happen.
Reality dictates that the only solution will be a social or cultural
one. We have to put nationalist goals aside and simply learn to live in
harmony with our neighbours, and our environment. I know this sounds
idealistic, and a bit corny, but I believe it will happen to some extent
because if it doesn't there will be nobody around to tell me I was
wrong.

Cheers,
Fred Williams
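[An editorial aside: the leakage arithmetic being argued over in this thread is a one-liner, and laying it out makes the stakes of the effectiveness figures concrete. The arsenal size is an assumed round number for illustration, not a sourced figure.]

```python
def leaked_warheads(arsenal, effectiveness):
    """Expected number of attacking warheads not intercepted."""
    return arsenal * (1.0 - effectiveness)

ARSENAL = 10_000  # assumed size of the attacking force

# 85% effective -> roughly 1500 leakers; 95% -> roughly 500;
# 99.999% -> well under one expected leaker.
for eff in (0.85, 0.95, 0.99999):
    print(f"{eff:.3%} effective -> "
          f"about {leaked_warheads(ARSENAL, eff):,.1f} warheads leak through")
```

This is why the thread's disagreement turns on whether "95% in the best projections" or "99.999%" is the achievable figure: at the assumed arsenal size, the first leaves hundreds of warheads through and the second a fraction of one.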
henry@utzoo.UUCP (Henry Spencer) (07/16/85)
> First, the matter of the need for absolute correctness, given that > verifiability is virtually impossible. If we are to protect our cities > from attack, a "small" leakage is absolutely unacceptable. Thus, the > battle management software must be perfect. John, that is propaganda, not rational reasoning. More specifically: 1. Zero leakage is unnecessary. Successful defence of populations, and in particular cities, demands that leakage be small. But a handful of warheads getting through, at semi-random places, is still far better than ten thousand warheads obliterating everything. Civilization will survive the former, but probably not the latter. The former is unlikely to cause anything more than a chill in the air; the latter almost certainly will cause Nuclear Winter. If you are going to claim that anything less than 100.000000% effectiveness is worthless, please justify this astounding statement. I agree that 85% effectiveness is pretty useless, but 99.999% would be a different story. (What level of effectiveness can be achieved in reality is a different issue; please don't confuse the two. *If* we can achieve 99.999% effectiveness, is it useful? YES!) 2. Even if complete effectiveness is required, this does not require that the software be perfect. It only requires that it be good enough to stop all the warheads. It really doesn't matter whether the software crashes six times in the first five minutes, *if* it has enough margin of capability that it nevertheless shoots down all the warheads. Whether such margin is possible is a different issue; if it is, then perfect effectiveness does *not* demand perfect software. Anything less than 100% correctness in the handling of my bank account is unacceptable to me, but I don't hide my money in a mattress just because I know the bank systems crash occasionally. They recover, and finish the job correctly. > If we use such follies as pop-up missiles, matters are much worse. 
A > missile rising out of a submarine is going to look hostile, no matter what > it does. Agreed. It is absolutely necessary that such a dangerous action not be initiated unless an attack is definitely already in progress (not merely feared to be imminent). > In fact, the mere starting up of ANY sort of BMD system is going > to look hostile; remember, current bets are that a BMD system adequate to > cope with a retaliatory strike is probably feasible, so firing it up may be > the first step of a first strike. In which case, the only way for the > Soviets to preserve their deterrent is to go to launch-on-warning so that > something will get through. They may consider some stage of BMD startup to > be "warning". And then they may not... They may consider some stage of "nuclear alert" to be warning. And then again they may not. Such alerts exist today. As warning times fall, the situation will get worse. It is not obvious that the problem can be avoided even in the absence of BMD. The Soviets may -- repeat, may -- have such a policy today; they are not nearly as well set up to ride out an attack. The obvious answer to this one is an idea I wholeheartedly support: major cuts in offensive weaponry coinciding with BMD deployment. > ... did you play "chicken" in your younger days? No, I tended to regard it as a bad idea. Much the same way I regard the offensive nuclear standoff today. > Note, by the way, that no straw-man tying of BMD to retaliatory ICBMs is > needed to cause an "accidental" war. Glad to hear you admit that it's a straw man; I hope it won't reappear yet again as an "important argument" against BMD. > >Furthermore, why should the decision to activate an SDI system against > >a major attack have to be automated at all? > > ...decision must be made within the first few seconds of boost. What kind > of meaningful information can a human absorb, let alone digest, in this time? "Oh no, look at that. Missile launches all over the place. OPEN FIRE!" 
I'm not talking about reading printouts, I'm talking about the same sort
of real-time interaction that takes place when driving a car or flying an
aircraft.  Similarly, I am not talking about digging the President out of
bed; I'm talking about trained observers standing regular watches.

> And multiple decisions by multiple observers is even more ridiculous.

How so?  We're talking parallel simultaneous decisions, not one observer
consulting another.  If one observer would work, so would multiple
observers.

> Henry's speculation that humans could even handle the battle management
> functions is surely the product of hallucinogens in the Ramsey Wright
> water supply.  Read the Fletcher Report on Battle Management...

Our water supply is pretty cruddy, but I don't drink from it...

Remember, I didn't say I was sure it could be done; I just said that I
suspected that human capabilities were being underestimated.  I agree that
it would take many highly-trained people, that breakdowns in coordination
would have to be averted by extensive practice beforehand, and that some
of the more speed-critical functions would need to be automated.  And that
DoD probably isn't capable of doing this right, especially since they have
a fixation on computers that blinds them to the often-superior capabilities
of human beings.  But I am not 100% convinced that it is entirely
impossible, given a lot of hard work and considerable reliance on those
sadly-imperfect computers in our heads.  It wouldn't surprise me if it was
impossible.  It wouldn't shock me if it was hard but possible.

> >If one assumes an attack in progress, the benefits of success clearly
> >outweigh the possibility of failure.
>
> On the other hand, if an attack isn't yet in progress, but is made far more
> likely by a system that won't work perfectly for ICBMs...

As before, the question is whether it's good enough, not whether it's
perfect.
I agree that the answer to this is not obvious, although I think that
theoretical pontification is never going to answer this adequately; tests
of real hardware are needed.

> and not at all for bombers, cruise missiles...

Bombers and cruise missiles we know how to defend against, although the
overwhelming threat of ICBMs has resulted in near-complete disregard for
air defences.  (I don't claim it's easy, by the way.)

> ...and SLBMs...

How so?  Ballistic missiles are ballistic missiles; the warning time may
be shorter, but that is not a disqualification, just a problem.  Note that
submarines cannot fire their missiles simultaneously, so the warning time
for the second, third, etc. missiles from the same submarine is quite
substantial.  Note also that there is an increasing tendency for missile
subs (especially Soviet ones) to operate farther and farther away from
their targets and closer and closer to "home base", for safety against
antisubmarine efforts; this means that SLBM defence becomes increasingly
similar to ICBM defence.

> ...then the benefits of this incredibly
> expensive system become very hard to see.

I agree that if it's not pretty effective, it's worthless.  The lack of
effectiveness is not conclusively established.  The degree of increased
danger is very sensitive to details.

> Henry, in light of what I've said here, please propose a non-dangerous SDI
> system!

John, please propose a non-dangerous alternative!  The current situation is
very dangerous, and getting steadily worse.  Disarmament would be nice, if
only there were some cause for confidence that it would succeed.  I don't
support BMD because I'm infatuated with the technology, or because I stand
to benefit from it financially; I support BMD because I'm scared, and it
looks like BMD might, repeat might, be our best/only chance of survival.
--
Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,linus,decvax}!utzoo!henry
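[Editorial aside: the 85%-versus-99.999% leakage arithmetic in point 1 of
the message above can be made concrete.  A minimal sketch, using the
10,000-warhead attack size from the discussion; all figures are the ones
under debate, not claims about any real system.]

```python
# Expected leakage at the effectiveness levels argued over above.
# The 10,000-warhead attack is the figure used in the thread.
attack_size = 10_000  # warheads launched (hypothetical attack)

for effectiveness in (0.85, 0.99999):
    leaked = attack_size * (1 - effectiveness)
    print(f"{effectiveness:.3%} effective -> {leaked:g} warheads leak through")

# At 85%, roughly 1500 warheads get through -- "pretty useless".
# At 99.999%, the expected leakage is about 0.1 warheads -- i.e.
# most likely none at all, which is the point being made.
```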
ken@alberta.UUCP (Ken Hruday) (07/17/85)
In article <5797@utzoo.UUCP> henry@utzoo.UUCP (Henry Spencer) writes:
>2. Even if complete effectiveness is required, this does not require that
>the software be perfect.  It only requires that it be good enough to stop
>all the warheads.  It really doesn't matter whether the software crashes
>six times in the first five minutes, *if* it has enough margin of capability
>that it nevertheless shoots down all the warheads.  Whether such margin is
>possible is a different issue; if it is, then perfect effectiveness does
>*not* demand perfect software.  Anything less than 100% correctness in the
>handling of my bank account is unacceptable to me, but I don't hide my
>money in a mattress just because I know the bank systems crash occasionally.
>They recover, and finish the job correctly.

I think we can all agree that the "benign" errors that you describe above
can be tolerated - but I think you've missed part of John's point.  We
have no assurance that these are the type of bugs left in the system.  It
is therefore necessary to ensure that at least part of the system is
verifiably or provably correct.  Without this assurance the system will
have a finite (and possibly measurable) probability of not working at all.

Additionally, even if the system is verified correct, this is no guarantee
that 100% of the warheads can be brought down.  Effectiveness of the
system can only be estimated statistically, since we don't know where all
the warheads are to be launched from, the cloud cover, etc.  Once more, if
we accept your admission that a "pop-up" sort of system is unacceptable,
then we must take the worst-case scenario, since (in an orbiting defence
system) the Russians will know the positions of the satellites and launch
to their maximum "advantage" so as to strain the defence system to its
limits.

Ken Hruday
University of Alberta
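[Editorial aside: Ken's point -- that a small probability of the system
"not working at all" dominates any statistical effectiveness estimate --
can be sketched with a toy Monte Carlo.  Every number here is invented
for illustration; nothing is a claim about real failure rates.]

```python
import random

# Toy model: with probability P_TOTAL_FAILURE an unverified bug disables
# the whole system; otherwise each warhead is intercepted independently
# with probability P_INTERCEPT.  All parameters are hypothetical.
random.seed(1)

ATTACK = 10_000          # warheads per trial (figure from the thread)
P_TOTAL_FAILURE = 0.01   # hypothetical chance the system fails outright
P_INTERCEPT = 0.99999    # per-warhead kill probability when it works

def leaked_warheads():
    if random.random() < P_TOTAL_FAILURE:
        return ATTACK    # system down: everything leaks through
    return sum(random.random() > P_INTERCEPT for _ in range(ATTACK))

trials = 500
mean_leak = sum(leaked_warheads() for _ in range(trials)) / trials
print(f"mean leakage over {trials} trials: {mean_leak:.1f} warheads")
# The expectation is roughly 0.01 * 10000 + 0.1 = ~100 warheads --
# despite the 99.999% intercept rate, the total-failure mode dominates.
```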
mvclase@watmum.UUCP (Michael Clase) (07/18/85)
In article <5797@utzoo.UUCP> henry@utzoo.UUCP (Henry Spencer) writes:
>> ...decision must be made within the first few seconds of boost.  What kind
>> of meaningful information can a human absorb, let alone digest, in this time?
>
>"Oh no, look at that.  Missile launches all over the place.  OPEN FIRE!"
>I'm not talking about reading printouts, I'm talking about the same sort
>of real-time interaction that takes place when driving a car or flying an
>aircraft.  Similarly, I am not talking about digging the President out of
>bed; I'm talking about trained observers standing regular watches.

You're talking about the same sort of real-time interaction that causes
thousands of fatal car accidents every year.  Sure, most of the time
drivers can make split-second decisions correctly, but when they don't,
the consequences do not include the deaths of millions of people.

>--
> Henry Spencer @ U of Toronto Zoology
> {allegra,ihnp4,linus,decvax}!utzoo!henry

Michael Clase  (mvclase@watmum.UUCP)
hogg@utcsri.UUCP (John Hogg) (07/18/85)
And here is the latest rebuttal of Henry's rebuttal of... ...of problems
associated with SDI.  If I may gently correct my esteemed colleague on a
point or two...

First, on the question of "acceptable" leakage of warheads.  What exactly
is "acceptable"?  Everybody would love zero; I'd be willing to accept one
or maybe even two; generals who know that there's a war to be won will be
quite happy with forty.  This figure (random targets, so a small level of
redundancy will result in slightly less destruction) is what SDI
proponents now claim is reasonable, although they base this on current
understanding of the technology we'd require, and no simple and obvious
countermeasures being used by the Soviet Union.  This must be balanced off
against the great increase in the probability of nuclear war caused by the
system.  Some reasons for this have been gone over before and will appear
again in this message; this point has been sidestepped by Henry, but not
answered.

Henry brings in the threat of nuclear winter as a reason to build SDI,
because the number of exploding warheads would be greatly decreased, even
by a leaky system.  At the risk of appearing a technophobe, could I point
out that a much simpler way of accomplishing the same task would be a
massive decrease in (not total destruction of) nuclear stockpiles?  Yes,
this is a political problem, but even though it involves the cooperation
of the Soviets, it is politically no more difficult than ramming through
SDI, given enough courage in high places.  It could be much easier if an
offer to the USSR were made which tied build-down to the scrapping of SDI.

>2. Even if complete effectiveness is required, this does not require that
>the software be perfect...  Anything less than 100% correctness in the
>handling of my bank account is unacceptable to me, but I don't hide my
>money in a mattress just because I know the bank systems crash occasionally.
>They recover, and finish the job correctly.
Do you have faith that even this level of "correctness" can be achieved in
an untested system?

(Profuse apologies for the following DOUBLE excerpt - it seemed simplest.)

>> In fact, the mere starting up of ANY sort of BMD system is going
>> to look hostile; remember, current bets are that a BMD system adequate to
>> cope with a retaliatory strike is probably feasible, so firing it up may be
>> the first step of a first strike.  In which case, the only way for the
>> Soviets to preserve their deterrent is to go to launch-on-warning so that
>> something will get through.  They may consider some stage of BMD startup to
>> be "warning".  And then they may not...
>
>They may consider some stage of "nuclear alert" to be warning.  And then
>again they may not.  Such alerts exist today.  As warning times fall, the
>situation will get worse.  It is not obvious that the problem can be avoided
>even in the absence of BMD.  The Soviets may -- repeat, may -- have such
>a policy today; they are not nearly as well set up to ride out an attack.
>
>The obvious answer to this one is an idea I wholeheartedly support: major
>cuts in offensive weaponry coinciding with BMD deployment.

The difference here is not quantitative, it is qualitative!  A reasonable
assumption is that the Soviets do NOT currently have a launch-on-warning
policy, because their technology is (to put it mildly) no better than
ours, and they haven't yet blown us up accidentally.  If we force them to
LOW by making that the only way for them to ensure that their missiles are
not destroyed in a first strike, then ANY act which appears hostile, from
a BMD startup to a migration of geese, will be VERY dangerous.  The issue
isn't how many seconds THEY have to react - it's whether or not they have
to react before they actually hear the bangs.  How about major cuts
(again, not total disarmament - I don't trust the Soviets) WITHOUT BMD
deployment instead?
The whole issue of humans versus computers running the system I will not
answer (unless urged to) for now.  I do not concede that they could do so,
but in any case, It Won't Work either way - although it might be more
comforting to know that the last mistake was human, not inanimate.

Henry says that "tests of real hardware" are required in order to say
whether the system will or won't work; theoretical pontification isn't
enough.  But, to the best of my knowledge (corrections invited), nobody
has yet proposed a possible design for a system which could overcome the
simplest countermeasures, even given components that perform up to their
theoretical potentials.  Before tests of real hardware can be done, design
of real hardware must take place.  And this will take (by the estimation
of the Fletcher commission) about nine breakthroughs of the order of
magnitude of the Manhattan project.  These breakthroughs won't occur; the
"real hardware" that is tested will thus be very leaky.

If at that point the SDI pushers agreed to call it a day and give up, I
wouldn't worry so much; in fact, I MIGHT even be in favour.  But based on
my biased opinion of Pentagon thinking, the single-buttocked system that
will result from n*$26,000,000,000 of research will be built, because it
will be "vitally necessary to national security".  And besides, Lockheed
will be lobbying with all their might.

Henry claims that we know how to defend against bombers, missiles and
SLBMs, although he concedes that the first two aren't easy to handle.
Again, it's a matter of the leakage you're willing to put up with.  How
precisely DO we shoot them ALL down, or near as dammit?  (Again, even
given the greatly increased risk, I might accept one or two warheads.)
Or, as Henry put it,

>I agree that if it's not pretty effective, it's worthless.

As far as SLBMs are concerned, I should explain why I considered them to
be more difficult to intercept than ICBMs.
Apart from detection, they can be protected by the atmosphere IF they are
launched on a very low trajectory from a spot near their target.  Assuming
that the Soviets make no attempt to change their current basing strategy,
they are indeed no harder to handle than ICBMs.

>> Henry, in light of what I've said here, please propose a non-dangerous SDI
>> system!
>
>John, please propose a non-dangerous alternative!  The current situation is
>very dangerous, and getting steadily worse.  Disarmament would be nice, if
>only there were some cause for confidence that it would succeed.  I don't
>support BMD because I'm infatuated with the technology, or because I stand
>to benefit from it financially; I support BMD because I'm scared, and it
>looks like BMD might, repeat might, be our best/only chance of survival.

Sigh... back to square one.  Our current situation is not only less than
ideal, it's horrible.  Trying to technofix our way out through SDI,
however, will make it far worse.  The only truly viable answer is
political negotiation of a reduction in arms, which is unlikely to occur
while a peabrain with a badge and six-gun inhabits the White House.  Oh,
for the days of Nixon.  A crook, yes, but a sufficiently INTELLIGENT
crook.
--
John Hogg
Computer Systems Research Institute, UofT
{allegra,cornell,decvax,ihnp4,linus,utzoo}!utcsri!hogg
brad@looking.UUCP (Brad Templeton) (07/19/85)
I haven't made up my mind on SDI yet, but all this percentage stuff
doesn't make sense.  It's my impression that most of the many nukes we
have (if not all, officially) are targeted at military targets, not at
cities.  The ones targeted at cities are aimed at military targets located
within them.  Nobody seriously considers a nuclear exchange involving
everybody killing off all the cities on the other side.  It's pointless,
insane, and would result in bombs falling on your own cities.

As I understand it, the reason for our current massive nuclear buildup is
fear of a first strike.  Even a 50% effective Star Wars system is enough
for this.  It says, "don't try it, because no matter what you do, a fair
number of our silos will survive, and then it's bye-bye to you."

Nuking cities is something that was done once, strictly for dramatic
effect, to end the second world war.  I don't think it's on people's minds
today as a direct end.

I know this will be an unpopular statement, but I trust the USA not to
engage in a first strike.  They used the bomb twice, but to end a massive
global conflict.  For ten years after that, they never used it again, in
spite of the fact that there was much call for it and they were involved
in a major conflict.  Watch the film "The Atomic Cafe" to see what the
American attitude was then.  They all thought they were "the supreme power
for goodness" on the earth.  But they didn't use it, even with a general
for a President.
--
Brad Templeton, Looking Glass Software Ltd. - Waterloo, Ontario 519/884-7473
lionel@garfield.UUCP (Lionel H. Moser) (07/20/85)
> Nuking cities is something that was done once, strictly for dramatic effect,
> to end the second world war.  I don't think it's on people's minds today
> as a direct end.
>
> I know this will be an unpopular statement, but I trust the USA not to
> engage in a first strike.  They used the bomb twice, but to end a massive
> global conflict.
>
> Brad Templeton, Looking Glass Software Ltd. - Waterloo, Ontario

Was the nuking of Hiroshima and Nagasaki required to win WWII?  Hadn't it
become just a mopping-up operation when the bombs were dropped?

> ... strictly for dramatic effect ...

Probably so.

Lionel H. Moser
Memorial University of Newfoundland
St. John's, Newfoundland
Canada  A1C 5S7
UUCP: {ihnp4, utcsri, allegra} !garfield!lionel
idallen@watmath.UUCP (08/05/85)
>> ...decision must be made within the first few seconds of boost.  What kind
>> of meaningful information can a human absorb, let alone digest, in this time?
>
> "Oh no, look at that.  Missile launches all over the place.  OPEN FIRE!"
> I'm not talking about reading printouts, I'm talking about the same sort
> of real-time interaction that takes place when driving a car or flying an
> aircraft.  Similarly, I am not talking about digging the President out of
> bed; I'm talking about trained observers standing regular watches.
> -- Henry Spencer @ U of Toronto Zoology

One can only hope that the computer that decided those were missile
launches wasn't just dropping bits or seeing radar hash from the Moon.
Putting the computer as the last link in the decision chain is risky; but
so is putting it everywhere else.  Human beings deal better with
unexpected situations than computer programs.
--
-IAN!  (Ian! D. Allen)  University of Waterloo
henry@utzoo.UUCP (Henry Spencer) (08/11/85)
> > "Oh no, look at that. Missile launches all over the place. OPEN FIRE!" > > I'm not talking about reading printouts, I'm talking about the same sort > > of real-time interaction that takes place when driving a car or flying an > > aircraft. Similarly, I am not talking about digging the President out of > > bed; I'm talking about trained observers standing regular watches. > > One can only hope that the computer that decided those were missle > launches wasn't just dropping bits or seeing radar hash from the Moon. > Putting the computer as the last link in the decision chain is risky; > but so is putting it everywhere else. Human beings deal better with > unexpected situations than computer programs. Sigh, you have misunderstood completely. The whole idea is to place the go/no-go decision, *and* the signal processing that leads up to it, with the human observers. The aiming system obviously needs heavy signal processing to get precise tracks of missiles, but *the human observers don't*. Missile launches are not inconspicuous events! Especially when hundreds or thousands are occurring simultaneously, which is the only case that stresses a BMD system to the limit and hence really calls for a fast decision. If we could put human observers permanently in low orbit, binoculars would suffice. That's inconvenient, because we'd need a lot of observation posts to make sure we had one or two in the right place. Telescopic video cameras (visible and IR) aboard Clarke-orbit satellites would probably suffice; if not, it shouldn't need much more. Note that attempts at jamming such cameras, or their data links, are in themselves hostile acts. Both cameras and data links should be multiply redundant [different hardware rather than just replication of the same], to avoid any single hardware malfunction being mistaken for jamming. [From another author] > You're talking about the same sort of real-time interaction that causes > thousands of fatal car accidents every year. 
> Sure, most of the time
> drivers can make split-second decisions correctly, but when they don't the
> consequences do not result in the deaths of millions of people.

Please read what I said: I am explicitly suggesting multiple observers,
with near-simultaneous agreement required for a "fire" decision.  This is
the sort of procedure that is used already for things like ICBM launch
decisions, which are much more likely to cause the deaths of millions than
activation of a defence system is.
--
Henry Spencer @ U of Toronto Zoology
{allegra,ihnp4,linus,decvax}!utzoo!henry
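[Editorial aside: the procedure Henry describes -- parallel independent
observers, with near-simultaneous agreement required before a "fire"
decision -- amounts to a k-of-n vote inside a short time window.  A
minimal sketch; the observer count, vote threshold, and window length are
all invented for illustration.]

```python
# Hypothetical k-of-n agreement rule: fire only if at least
# REQUIRED_VOTES distinct observers report an attack within
# WINDOW_SECONDS of one another.  Parameters are illustrative only.
REQUIRED_VOTES = 3      # independent confirmations needed
WINDOW_SECONDS = 5.0    # "near-simultaneous" agreement window

def fire_decision(votes):
    """votes: list of (observer_id, timestamp) 'attack!' reports.
    Returns True only when enough distinct observers agree in time."""
    times = sorted(t for _, t in dict(votes).items())  # one vote per observer
    # slide a window over the sorted report times
    for i in range(len(times) - REQUIRED_VOTES + 1):
        if times[i + REQUIRED_VOTES - 1] - times[i] <= WINDOW_SECONDS:
            return True
    return False

# Three observers confirm within two seconds of each other: fire.
print(fire_decision([("A", 100.0), ("B", 101.5), ("C", 102.0)]))  # True
# A lone report -- a hardware glitch, say -- triggers nothing.
print(fire_decision([("A", 100.0)]))                              # False
```

Note the design point this captures: no single observer (and no single
malfunctioning sensor feeding one observer) can trigger the system alone.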
ludemann@ubc-cs.UUCP (Peter Ludemann) (08/14/85)
> ....  Human beings deal better with
> unexpected situations than computer programs.

Agreed.  I worked on real-time systems at BNR (telephone switching, which
is *much* better defined than missile detection) and observed in my code
and others' code that one of the most common causes of bugs (besides not
understanding the problem) was the extra code put in to catch exceptional
conditions.  This code was needed because the system had very high
reliability requirements - it just wasn't supposed to crash (mustn't stop
that phone call to your granny in Moose Jaw).  But very often this code
would not only not catch the exceptional conditions, it would cause
crashes under normal conditions.

Rigorous testing, type-checking compilers, etc. helped produce a very
reliable system, but I'm extremely sceptical about the reliability of a
system many times bigger than a telephone switch, which can't be tested to
nearly the same extent and whose problem domain is much less well defined.
--
ludemann%ubc-vision@ubc-cs.uucp  (ubc-cs!ludemann@ubc-vision.uucp)
ludemann@cs.ubc.cdn
ludemann@ubc-cs.csnet
Peter_Ludemann@UBC.mailnet
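[Editorial aside: the failure mode Ludemann describes -- defensive code
for an exceptional condition that itself crashes the system under normal
load -- can be shown in miniature.  This is a hypothetical toy, not BNR
code; the routing function and digit limit are invented.]

```python
# Toy switch: a defensive check for "impossible" oversized dialed
# numbers, where an off-by-one ('>=' instead of '>') makes the
# never-tested recovery path fire on perfectly legal 10-digit calls.
MAX_DIGITS = 10  # longest legal dialed number in this toy switch

def route_call_buggy(dialed: str) -> str:
    if len(dialed) >= MAX_DIGITS:   # bug: rejects legal boundary case
        raise RuntimeError("dialed number too long")
    return "trunk-" + dialed[:3]

def route_call_fixed(dialed: str) -> str:
    if len(dialed) > MAX_DIGITS:    # correct guard
        raise RuntimeError("dialed number too long")
    return "trunk-" + dialed[:3]

print(route_call_fixed("4165551234"))   # routes normally: trunk-416
try:
    route_call_buggy("4165551234")      # a perfectly normal call...
except RuntimeError:
    print("buggy guard dropped a normal call")
```

The "protective" branch is exactly the code least likely to be exercised
in testing, which is Ludemann's point about untestable exception paths.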