[mod.risks] RISKS-3.41

RISKS@CSL.SRI.COM (RISKS FORUM, Peter G. Neumann -- Coordinator) (08/23/86)

RISKS-LIST: RISKS-FORUM Digest,  Saturday, 23 August 1986  Volume 3 : Issue 41

           FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS 
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  $1 million bogus bank deposit (Hal Perkins)
  Cheating of automatic teller machines (Jacob Palme)
  Simulation, Armored Combat Earthmover, and Stinger (Herb Lin)
  Report from AAAI-86 (Alan Wexelblat)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, nonrepetitious.  Diversity is welcome. 
(Contributions to RISKS@CSL.SRI.COM, Requests to RISKS-Request@CSL.SRI.COM)
  (Back issues Vol i Issue j available in CSL.SRI.COM:<RISKS>RISKS-i.j.
  Summary Contents in MAXj for each i; Vol 1: RISKS-1.46; Vol 2: RISKS-2.57.)

----------------------------------------------------------------------

Date: Fri, 22 Aug 86 21:47:58 EDT
From: hal@gvax.cs.cornell.edu (Hal Perkins)
To: risks@csl.sri.com
Subject: $1 million bogus bank deposit

From the Chicago Tribune, Friday, Aug. 15, 1986.  sec. 3, p. 3:

Bank machine is no match for schoolboy with a lollipop

  AUCKLAND, New Zealand [UPI] -- A schoolboy outsmarted an automatic
bank machine by using the cardboard from a lollipop packet to
transfer 1 million New Zealand dollars into his account, bank
spokesmen said Thursday.

  Tony Kunowski, corporate affairs manager of the United Building
Society savings and loans institution, said the 14-year-old student
slipped the cardboard into an envelope and inserted it into the machine
while punching in a deposit of $1 million, the U.S. equivalent of
$650,000.

  "We are not amused, but we don't think this is the tip of an
iceberg," he said of the incident of three weeks ago.

  Kunowski said that when the boy, identified only as Simon, checked
his account a few days later, he was amazed to discover the money had
been credited.  He withdrew $10.

  When no alarm bells rang and no police appeared, he withdrew another
$500.  But his nerve failed and he redeposited the money.

  On Tuesday, Simon withdrew $1,500, Kunowski said.

  But his nerve failed again Wednesday, and he told one of his teachers
at Selwyn College, Kunowski said.  The school's headmaster, Bob Ford,
took Simon to talk with United Building Society executives.

  Ford said Simon had not been considered one of his brightest pupils,
"at least until now."

  It was unknown if Simon would be disciplined.

  Kunowski told reporters that Simon succeeded because of delays in
reconciling transactions in automatic tellers around the country with
United's central computer system.

  "The delay in toting up the figures would normally be four weeks and
that was how a schoolboy could keep a fake million dollars in his
account without anyone batting an eyelid," he said.

  "We are now looking very closely at our internal systems.  Human
error may also be involved," Kunowski said.

------------------------------

Date:        21 Aug 86 02:45 +0200
From:        Jacob_Palme_QZ%QZCOM.MAILNET@MIT-MULTICS.ARPA
To:          "RISKS FORUM" <RISKS@CSL.SRI.COM>
Subject:     Cheating of automatic teller machines

Several young people have cheated automatic teller machines belonging to
one of the largest Swedish bank chains in a rather funny way.

You use the machines by inserting your plastic card in a slot and punching
in the amount you want and your password; the card then comes out of one
slot, and the money out of another.

The cheaters took a badge belonging to a large guard company, which looked
very reassuring, and fastened it with double-sided tape over the slot
through which the money comes out. They then faded into the background and
waited until someone came to get money from the machine. The person who
wanted to use the machine put in his card and punched in his code and
amount, and the machine started to push the money out through the slot.
When the money could not get out because of the obstruction, the machine
noted this and gave a "technical error" message to the customer, who went
away. Up came the youngsters, who took away the badge, collected the money
trapped behind it, and put the badge back up for the next customer.

The cheating described above had been going on for several months, but the
bank tried to keep it secret, claiming that if more people knew about it,
more would try to cheat them. Since the money is debited to the customer's
account, customers who did not complain lost their money. The bank has now
been criticised for keeping this secret, and has been forced to promise
that it will find all the cheated customers (this is possible because the
temporary failure to push the money out of the slot was logged
automatically by the machine) and refund the money they lost.

The bank chain will now have to rebuild 700 automatic dispensing machines.
Most other banks in Sweden have a joint company operating another kind of
dispensing machine, from which you can withdraw money from your account at
any of these banks. Those machines cannot be cheated in this way, because
a steel door in front of the machine does not open until a valid plastic
card is inserted.

------------------------------

Date: Fri, 22 Aug 1986  08:53 EDT
From: LIN@XX.LCS.MIT.EDU
To:   "Mary C. Akers" <makers@CCT.BBN.COM>
Cc:   arms-d@XX.LCS.MIT.EDU, risks@CSL.SRI.COM
Subject: Simulation, Armored Combat Earthmover, and Stinger

    From: Mary C. Akers <makers at cct.bbn.com>

    ... the main bone of contention among those concerned with weapons
    design and testing [is] whether computer simulation and modeling can
    take the place of live trials with real equipment.

The reason people want to do simulation testing is that they then don't
have to do real testing. Real testing is expensive and time-consuming, and
the very people who say they want real testing are often the same people
who say the weapons development process is too slow.

No one would argue that simulation testing is a bad thing in and of itself.
It is when you REPLACE real testing with simulation, rather than SUPPLEMENT
real testing, that you run into problems with validity.

     The Army's Armored Combat Earthmover (ACE) - "...which underwent
     18,000 hours of testing without ever being operated under field
     conditions.  [When it finally underwent live trials at Fort Hood]
     ...the tests revealed that the ACE's transmission cracked, that its
     muffler caught fire, that the driver's hatch lid was too heavy to lift, 
     and that doing certain maintenance work "could endanger the operator's
     life."

It strengthens the point of the article to note that the 18,000 hours of
testing described was probably not simulation testing but developmental
testing.  Finding such problems is precisely the point of operational
testing (OT) -- placing the system in a real-life operational environment
to see what goes wrong.  You EXPECT things to go wrong in OT.

------------------------------

Date: Fri, 22 Aug 86 13:05:57 CDT
Received: by banzai-inst (1.1/STP) id AA03138; Fri, 22 Aug 86 13:05:57 CDT
To: risks@csl.sri.com
Subject: Report from AAAI-86    [Really from Alan Wexelblat]

I just got back from a week at AAAI-86.  One thing that might interest
RISKS readers was the booth run by Computer Professionals for Social
Responsibility (CPSR).  They were engaged in a valiant (but ineffectual)
effort to get the AI mad-scientist types to realize what some of their
systems are going to be doing (guiding tanks, cruise missiles, etc.).

They were handing out some interesting stuff, including stickers that said
(superimposed over a mushroom cloud):  "It's 11 p.m.  Do you know what your
expert system just inferred?"

They also had a series of question-answer cards titled "It's Not Trivial."
Some of them deal with things that have come up in RISKS before.  [I left
them in for the sake of our newer readers.  PGN]    They are:

Q1:  How often do attempts to remove program errors in fact introduce one
	or more additional errors?

A1:  The probability of such an occurrence varies, but estimates range from
	15 to 50 percent (E.N. Adams, "Optimizing Preventive Service of
	Software Products," _IBM Journal of Research and Development_,
	Volume 28(1), January 1984, page 8)

Q2:  True or False:  Experience with large control programs (100,000 < x <
	2,000,000 lines) suggests that the chance of introducing a severe
	error during the correction of original errors is large enough that
	only a small fraction of the original errors should be corrected.

A2:  True. (Adams, page 12)

Q3:  What percentage of federal support for academic Computer Science
	research is funded through the Department of Defense?

A3:  About 60% in 1984.  (Clark Thompson, "Federal Support of Academic
	Research in Computer Science," Computer Science Division, University
	of California, Berkeley, 1984)

Q4:  What fraction of the U.S. science budget is devoted to defense-related
	R&D in the Reagan 1985/86 budget?

A4:  72%  ("Science and the Citizen,"  _Scientific American_ 252:6 (June
	1985), page 64)

Q5:  The Space Shuttle Ground Processing System, with over 1/2 million lines
	of code, is one of the largest real-time systems ever developed.
	The stable release version underwent 2177 hours of simulation
	testing and then 280 hours of actual use during the third shuttle
	mission.  How many critical, major, and minor errors were found
	during testing?  During the mission?

A5:  		Critical	Major	Minor
     Testing	   3		  76	 128
     Mission	   1		   3	  20
	(Misra, "Software Reliability Analysis," _IBM Systems Journal_,
	Volume 22(3), 1983)

Q6:  How large would "Star Wars" software be?

A6:  6 to 10 million lines of code, or 12 to 20 times the size of the Space
	Shuttle Ground Processing System.  (Fletcher Report, Part 5, page 45)

The World Wide Military Command and Control System (WWMCCS) is used by
civilian and military authorities to communicate with U.S. military forces
in the field.

Q7:  In November 1978, a power failure interrupted communications between
	WWMCCS computers in Washington, D.C. and Florida.  When power was
	restored, the Washington computer was unable to reconnect to the
	Florida computer.  Why?

A7:  No one had anticipated a need for the same computer (i.e., the one in
	Washington) to sign on twice.  Human operators had to find a way to
	bypass normal operating procedures before being able to restore
	communications.  (William Broad, "Computers and the U.S. Military
	Don't Mix," _Science_ Volume 207, 14 March 1980, page 1183)
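
   [A minimal Python sketch of this failure mode, with invented names and
   logic -- not the actual WWMCCS code.  A session table keyed by site
   refuses a second sign-on from the same site, so after a crash the stale
   entry blocks reconnection until operators bypass it:

       # Hypothetical illustration: session state never cleared on failure.
       sessions = {}

       def sign_on(site):
           if site in sessions:
               # A stale entry from before the power failure is
               # indistinguishable from a live session.
               raise RuntimeError(site + " is already signed on")
           sessions[site] = "connected"

       sign_on("washington")
       # ... power failure: the link drops, but the entry remains ...
       try:
           sign_on("washington")        # the 'signing on twice' case
       except RuntimeError as e:
           print("reconnection refused:", e)
   ]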

Q8:  During a 1977 exercise in which WWMCCS was connected to the command and
	control systems of several regional American commands, what was the
	average success rate in message transmission?

A8:  38%  (Broad, page 1184)

Q9:  How much will the average American household spend in taxes on the
	military alone in the coming year?

A9:  $3,400 (Guide to the Military Budget, SANE)

[question 10 is unrelated to RISKS]

Q11: True or False?  Computer programs prepared independently from the same
	specification will fail independently.

A11: False.  In one experiment, 27 independently-prepared versions, each
	with reliability of more than 99%, were subjected to one million
	test cases.  There were over 500 instances of two versions failing
	on the same test case.  There were two test cases in which 8 of the
	27 versions failed.  (Knight, Leveson, and St. Jean, "A Large-Scale
	Experiment in N-Version Programming,"  Fifteenth International
	Symposium on Fault-Tolerant Computing (FTCS-15))
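
   [The independence assumption is easy to probe in simulation.  A minimal
   Python sketch with made-up failure rates -- not the Knight/Leveson
   data -- comparing observed coincident failures against the binomial
   prediction for truly independent versions:

       import random
       random.seed(0)

       N, M, p = 27, 100_000, 0.001   # versions, test cases, failure rate

       # Assumption: a small fraction of inputs is 'hard' for every
       # version -- the common-fault effect the experiment observed.
       coincident = 0
       for _ in range(M):
           hard = random.random() < 0.0005
           failures = sum(random.random() < (0.5 if hard else p)
                          for _ in range(N))
           if failures >= 2:
               coincident += 1

       # Expected count if the versions really failed independently.
       expected = M * (1 - (1 - p)**N - N * p * (1 - p)**(N - 1))
       print("observed cases with >= 2 versions failing:", coincident)
       print("expected under independence: %.0f" % expected)
   ]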

Q12: How, in a quintuply-redundant computer system, did a software error
	cause the first Space Shuttle mission to be delayed 24 hours only
	minutes before launch?

A12: The error affected the synchronization initialization among the 5
	computers.  It was a 1-in-67 probability involving a queue that
	wasn't empty when it should have been and the modeling of past
	and future time.  (J.R. Garman, "The Bug Heard 'Round the World,"
	_Software Engineering Notes_ Volume 6 #5, October 1981, pages 3-10)

Q13: How did a programming punctuation error lead to the loss of a Mariner
	probe to Venus?

A13: In a FORTRAN program, DO 3 I = 1,3 was mistyped as DO 3 I = 1.3.
	Because FORTRAN ignores blanks, the compiler accepted the mistyped
	statement as an assignment of 1.3 to a variable named DO3I instead
	of the intended loop.  (_Annals of the History of Computing_, 1984,
	6(1), page 6)

Q14: Why did the splashdown of the Gemini V orbiter miss its landing point
	by 100 miles?

A14: Because its guidance program ignored the motion of the earth around
	the sun. (Joseph Fox, _Software and its Development_, Prentice Hall,
	1982, pages 187-188)

[Questions 15-17 are not RISKS related]

Q18: True or False?  The rising of the moon was once interpreted by the
	Ballistic Missile Early Warning System as a missile attack on the US.

A18: True, in 1960.  (J.C. Licklider, "Underestimates and Overexpectations,"
	in _ABM: An Evaluation of the Decision to Deploy an Anti-Ballistic
	Missile_, Abram Chayes and Jerome Wiesner (eds), Harper and Row,
	1969, pages 122-123)

[question 19 is about the 1980 Arpanet collapse, which RISKS has discussed]

Q20: How did the Vancouver Stock Exchange index gain 574.081 points while
	the stock prices were unchanged?

A20: The stock index was calculated to four decimal places, but truncated
	(not rounded) to three.  It was recomputed with each trade, some
	3000 each day.  The result was a loss of an index point a day, or
	20 points a month.  On Friday, November 25, 1983, the index stood
	at 524.811.  After three weeks of work by consultants from Toronto
	and California to compute the proper corrections for 22 months of
	compounded error, the index began Monday morning at
	1098.892, up 574.081.  (Toronto Star, 29 November 1983)
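
   [The arithmetic is easy to reproduce.  A minimal Python sketch with
   hypothetical trades: truncating the recomputed index to three decimals
   throws away up to a thousandth of a point on every trade, and the loss
   compounds:

       import math, random
       random.seed(1)

       def truncate3(x):
           return math.floor(x * 1000) / 1000   # truncate, don't round

       exact = truncated = 500.0
       for _ in range(3000):                    # roughly one day's trades
           move = random.uniform(-0.002, 0.002) # tiny zero-mean price moves
           exact += move
           truncated = truncate3(truncated + move)

       print("full precision: %.3f" % exact)
       print("truncated:      %.3f" % truncated)
       # On average half a thousandth is lost per trade, so the truncated
       # index drifts roughly a point or more below the exact one each day.
   ]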

Q21: How did a programming error cause the calculated ability of five
	nuclear reactors to withstand earthquakes to be overestimated, and
	the plants to be shut down temporarily?

A21: A program used in their design used an arithmetic sum of variables when
	it should have used the sum of their absolute values.  (Evars Witt,
	"The Little Computer and the Big Problem,"  AP Newswire, 16 March
	1979.  See also Peter Neumann, "An Editorial on Software Correctness
	and the Social Process,"  _Software Engineering Notes_, Volume 4(2),
	April 1979, page 3)
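
   [The bug fits in two lines of Python, with made-up stress numbers: a
   signed sum lets contributions of opposite sign cancel, understating the
   worst case that the absolute-value sum is meant to bound:

       # Hypothetical stress contributions from different vibration modes.
       stresses = [12.0, -9.5, 7.2, -11.0, 4.3]

       signed_sum = sum(stresses)                  # the buggy combination
       worst_case = sum(abs(s) for s in stresses)  # the intended one

       print("signed sum:      %5.1f" % signed_sum)  #   3.0 -- looks safe
       print("sum of |stress|: %5.1f" % worst_case)  #  44.0 -- it is not
   ]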

Q22: The U.S. spy ship Liberty was attacked in Israeli waters on June 8,
	1967.  Why was it there in spite of repeated orders from the U.S.
	Navy to withdraw?

A22: In what a Congressional committee later called "one of the most
	incredible failures of communications in the history of the
	Department of Defense," none of the three warnings sent by three
	different communications media ever reached the Liberty.  (James
	Bamford, _The Puzzle Palace_, Penguin Books, 1983, page 283)

Q23: AEGIS is a battle management system designed to track hundreds of
	airborne objects in a 300 km radius and allocate weapons sufficient
	to destroy about 20 targets within the range of its defensive
	missiles.  In its first operational test in April 1983, it was
	presented with a threat much smaller than its design limit:  there
	were never more than three targets presented simultaneously.  What
	were the results?

A23: AEGIS failed to shoot down six out of seventeen targets due to system
	failures later associated with faulty software.  (Admiral James
	Watkins, Chief of Naval Operations and Vice Admiral Robert Walters,
	Deputy Chief of Naval Operations.  Department of Defense
	Authorization for Appropriations for FY 1985.  Hearings before the
	Senate Committee on Armed Services, pages 4337 and 4379.)

Well, this message is long enough; I'll hold off on my personal commentaries.
People wanting more information can either check the sources given or
contact CPSR at P.O. Box 717, Palo Alto, CA  94301.

--Alan Wexelblat
ARPA: WEX@MCC.ARPA or WEX@MCC.COM
UUCP: {ihnp4, seismo, harvard, gatech, pyramid}!ut-sally!im4u!milano!wex

------------------------------

End of RISKS-FORUM Digest
************************