MDAY@XX.LCS.MIT.EDU ("Mark S. Day") (03/10/88)
Soft-Eng Digest             Wed, 9 Mar 88       Volume 4 : Issue 14

Today's Topics:
    Science Citation Index: Correction
    Addresses Needed for Companies in the UK
    Art or Engineering
    A Cynic's Guide, part 1 (3 msgs)
    Software Engineering Handbooks (2 msgs)
    Review - Software Specification Techniques
    Software QA : What Should it Do ?
----------------------------------------------------------------------

Date: Sat, 27 Feb 88 16:24:48 PST
From: "Douglas M. Pase" <pase@ogcvax.ogc.edu>
Subject: Science Citation Index: Correction

> [The Science Citation Index is an index published monthly (?)
> by the Institute for Scientific Information (??) -- if either
> of these facts is incorrect, please let me know.

It's published quarterly.

Doug Pase  --  ...ucbvax!tektronix!ogcvax!pase  or  pase@cse.ogc.edu (CSNet)

------------------------------

Date: 18 Feb 88 19:08:34 GMT
From: mnetor!utzoo!dciem!array!len@uunet.uu.net (Leonard Vanek)
Subject: Addresses Needed for Companies in the UK

I would like to get information about any of the software tools mentioned
above. I know that Malpas is the product of Rex Thompson and Partners in
Farnham, UK, and that SPADE is the product of Program Validation Ltd. in
Southampton, but I have no further address for these companies. I would
appreciate receiving an address (either electronic or postal) for either of
these firms, as well as the names and addresses of the suppliers of
Pathfinder and Genesis. In addition, I would welcome any description of or
testimonial on any of these products. Thank you.
------------------------------------------------------------------
Leonard Vanek                    UUCP: ... utzoo!dciem!array!len
Array Systems Computing Inc.       or  ... {utzoo,watmath}!lsuc!array!len
5000 Dufferin St. Suite 200        or  len@array.UUCP
Downsview, Ont.
M3H 5T5                          Phone: (416) 736-0900
Canada                           FAX:   (416) 736-4715

------------------------------

Date: Mon, 22 Feb 88 12:51:33 PST
From: Eugene Miya <eugene@ames-nas.arpa>
Subject: Art or Engineering

I think Fred Brooks's recent "No Silver Bullet" article in Computer gave a
lot of the reasons why we don't have engineering yet. I don't think
conventional ideas of engineering would stand the flexibility that computer
systems have. Tables and models do not make engineering; they are its
artifacts. A good example is Doug Pase's call for a better notation
(programming language). I've been having an argument with a friend (who gave
me Whitehead's chapter on variables). I don't think a single language
(silver bullet of any kind) is going to solve our problems. We are trying to
develop the software `baby' in one month with nine women. I also think,
because of the new emerging parallel systems, it will get harder before it
gets easier.

--eugene

------------------------------

Date: 3 Mar 88 23:44:31 GMT
From: neff@shasta.stanford.edu (Randy Neff)
Subject: A Cynic's Guide, part 1

------   The Cynic's Guide to Software Engineering                  ------
------   an invitation to dialog, starting with the personal view of ------
------   Randall Neff @ sierra.stanford.edu                          ------
------   March 3, 1988    part 1                                     ------

                         Hardware vs Software

State-of-the-practice in Hardware: At companies that are serious about
producing hardware, either pc boards or integrated circuits, the first parts
produced USUALLY work correctly. At Application Specific Integrated Circuit
(ASIC) companies, it is an embarrassment if the first parts don't work
correctly.

State-of-the-practice in Software:

"THE PROGRAM IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.
SHOULD THE PROGRAM PROVE DEFECTIVE, YOU (AND NOT IBM OR AN AUTHORIZED
PERSONAL COMPUTER DEALER) ASSUME THE ENTIRE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION"
        -- one paragraph from an IBM Program License Agreement (shrink wrap)

All software is known by version number, both for bug fixes and
enhancements: Turbo C 1.0, 1.1, 1.2, 1.5, etc. An Ada compiler 5.0, 5.1,
5.41, 5.5, etc. X windows 10.4, 11.1 (about one hundred bugs reported),
11.2, etc.

A new version of an operating system takes about two weeks to install
correctly, and now cannot network to some other computers. A new version of
a compiler compiles, but generates bad code (illegal instructions,
segmentation faults) for previously working programs; man-months are wasted
trying to find workarounds. A portable language won't port between different
brands of compilers, or even different versions. No brand names here;
similar problems occur with almost any software. In order to survive, a
programming group MUST have full source code for all of the software it
uses.

Why the big difference? Both hardware and software are working
instantiations of behavior as described by requirements and specifications.
There is the obvious difference in the scale of the projects: designing and
implementing a RISC chip is a lot simpler than designing and implementing a
compiler for it, or (gasp) a new operating system for it. Why can hardware
engineers do their job so well, while software engineers talk about the
(huge) percentage of lifecycle cost in the maintenance phase?

Looking at the last ten years in hardware design tools, the following trends
can be observed:

-- Willingness to learn new paradigms of design, new methods, new tools.
   Hierarchical design in functional languages, logic, switches, transistors.
   Willingness to change methods and design in order to use tools.

-- Capital investment in both hardware (originally $75,000 to $100,000 per
   station) and in software (like $100,000 programs). Buy or lease Cray time.
   Use days of mini and super-mini computer time to check design.

-- Venture Capital/Entrepreneur boom in new companies and products (then
   bust or buyout for most; now mostly stagnant). This was started/fed by
   university research into hardware CAD tools.

-- Return on Investment (ROI) and productivity gain was obvious (sometimes
   an order of magnitude) over previous methods.

-- Manufacturing cost was increased to drastically shorten design time with
   standard cells, gate arrays, and silicon compilers.

-- Enormous amounts of reuse of specification, design, and testing through
   standardized part libraries: off-the-shelf parts, standard cells, gate
   array cells, etc.

-- Constructing the test procedures along with the design, and designing for
   easier testing.

Why haven't software engineers followed similar trends?

------------------------------

Date: 4 Mar 88 15:36:35 GMT
From: defun.utah.edu!shebs@cs.utah.edu (Stanley T. Shebs)
Subject: A Cynic's Guide, part 1

[...]

Software complexity is perennially underestimated. A large piece of software
is closer to a space shuttle or a nuclear power plant in the complexity of
its behavior. To some extent, this is due to the "nonlinear" nature of
software behavior that someone has mentioned. Unlike circuits, software
doesn't degrade at a known rate - failure will be unexpected and
catastrophic.

>Looking at the last ten years in hardware design tools, the following trends
>can be observed:
>
>-- Willingness to learn new paradigms of design, new methods, new tools.
>   Hierarchical design in functional languages, logic, switches, transistors.
>   Willingness to change methods and design in order to use tools.

Amen. Try arguing with a rabid assembly-language or C programmer to see just
how bad it is in software. Higher-level languages are met with large amounts
of skepticism and indifference. To be fair, both programmers and managers
are guilty here.
>-- Capital investment in both hardware (originally $75,000 to $100,000 per
>   station) and in software (like $100,000 programs). Buy or lease Cray time.
>   Use days of mini and super-mini computer time to check design.

I would *love* to have the luxury of extensive testing in controlled
environments. The word "luxury" describes the current situation. Guilty
parties are everybody, including the customer who buys software known to be
incompletely tested, because "I've just got to have it now!".

>-- Return on Investment (ROI) and productivity gain was obvious (sometimes
>   an order of magnitude) over previous methods.

This is more of a problem. Claims of gains in software productivity have
been regarded with considerable suspicion, perhaps because of the lack of
repeatability?

>-- Manufacturing cost was increased to drastically shorten design time with
>   standard cells, gate arrays, and silicon compilers.
>-- Enormous amounts of reuse of specification, design, and testing through
>   standardized part libraries: off-the-shelf parts, standard cells, gate
>   array cells, etc.
>-- Constructing the test procedures along with the design, and designing for
>   easier testing.

All three of these approaches in the software realm fall victim to the same
concern: efficiency. This is partly a legacy of the 50s, when the
self-taught programmers of the time were willing to do *anything* to squeeze
out a few more cycles or a few words of memory. It was OK to violate any
abstraction or to use any quirk of the system. Even secret changes to the
specifications were acceptable (assuming that specs were used at all). That
has changed somewhat, but programmers are still caught in the tug-of-war
between reliability and performance. When was the last time you heard a
customer say of a piece of code, "yeah, this is fast enough"?

>Why haven't software engineers followed similar trends?

Any half-trained software engineer knows how things *should* be done.
Formal specifications, reuse/standardization of components, extensive
testing and test generation, and so forth. The time necessary to do all this
is much larger than the average time allotted to software efforts, and
between management and customer, the time gets shrunk to something that fits
other schedules. In the case of software publishing, there is the incentive
of competition to shrink the schedule. Then the objecting software
engineer's competence is impugned, and he/she imprudently claims "I can do
all that in three days" (images of the legendary "real programmer" in the
back of the head). All downhill from there...

The solution? Software engineers have to stand up for what they know is
right, undisciplined hackers have to be retrained or fired, managers have to
be knowledgeable about what is and isn't possible, and customers have to be
both more patient and completely unforgiving of mistakes in delivered
software. Nothing technical here; as Fred Brooks said, "no silver bullet".

                                                stan shebs
                                                shebs@cs.utah.edu

------------------------------

Date: 6 Mar 88 14:36:22 GMT
From: milano!buckaroo!marks@im4u.utexas.edu (Peter)
Subject: A Cynic's Guide, part 1

In article <5313@utah-cs.UUCP>, shebs%defun.utah.edu.uucp@utah-cs.UUCP
(Stanley T. Shebs) writes:
>
> The solution? Software engineers have to stand up for what they know is
> right, undisciplined hackers have to be retrained or fired, managers have
> to be knowledgeable about what is and isn't possible, and customers have
> to be both more patient and completely unforgiving of mistakes in delivered
> software. Nothing technical here; as Fred Brooks said, "no silver bullet".

It seems to me that the *solution* proposed here can be paraphrased as
"people have to will themselves to change." If hoping for a silver bullet is
futile, how well does wishing for the magic potion called "will power" stand
up to scrutiny? Or perhaps this is a call to the Lone Ranger himself to rid
the world of evil-doers?
Or is it merely a suggestion to go to Oz and get some courage?   P-)

[ An interesting aspect of this hardware/software contrast that was
discussed previously is that hardware improvements benefit *everything* that
runs on the hardware. Similarly, operating system improvements benefit every
application that uses that OS. However, improvements to an application
benefit "only" the users of that application. We might reasonably expect to
find that (in general) hardware is much better-engineered than operating
systems, which in turn are much better-engineered than applications. Is
there any good (non-anecdotal) evidence for or against this conjecture?

Commonly, contributors to this digest note the degree to which "software
engineering techniques" are unknown or ignored, and attribute this to malice
or stupidity on the part of others. However, doing things "the right way"
does cost more, and it may be that for a large number of programming tasks
it's cheaper to do it "the wrong way." Again, I'd be interested in evidence
for or against this.  -- MSD ]

------------------------------

Date: 26 Feb 88 0732
From: GOODLOE <E38006@d2.dartmouth.edu>
Subject: Software Engineering Handbooks

I have found How to Solve It by Computer by R. G. Dromey (Prentice-Hall
International, 1982) to be a good book to steal algorithms from. I like the
fact that the author gives the loop invariants and assertions. Another place
to find excellent algorithms is in the column Small Programming Exercises
that Martin Rem writes in Science of Computer Programming.

David Gries is currently working on a handbook of algorithms. He is
designing a fully polymorphic programming language, Polya, in which the
algorithms will be written. See the following two papers:
    A New Notation for Encapsulation, SIGPLAN Notices, Vol 20, No 7, 1985
    Programming Pearls, CACM, April 1987.
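[ For readers unfamiliar with the style the message above praises, here is a
small sketch of what "giving the loop invariants and assertions" alongside
the code looks like. It is not taken from Dromey's book; binary search is
chosen only as a representative example, and the assertions are written in
modern Python for concreteness. ]

```python
def binary_search(a, x):
    """Return an index i with a[i] == x, or -1 if x is not in sorted list a."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        # Loop invariant: if x occurs in a at all, it occurs at some index
        # in [lo, hi]; every index outside that range has been ruled out.
        assert all(a[i] != x for i in range(lo))
        assert all(a[i] != x for i in range(hi + 1, len(a)))
        mid = (lo + hi) // 2
        if a[mid] == x:
            return mid
        elif a[mid] < x:
            lo = mid + 1      # x, if present, lies to the right of mid
        else:
            hi = mid - 1      # x, if present, lies to the left of mid
    # The invariant plus the now-empty range [lo, hi] implies x is absent.
    return -1
```

The point of the style is that the invariant documents *why* the loop is
correct, and the assertions make that argument checkable at run time during
testing.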
------------------------------

Date: Thu, 25 Feb 88 12:02:13 PST
From: PAAAAAR%CALSTATE.BITNET@MITVMA.MIT.EDU
Subject: Software Engineering Handbooks

stuart@cs.rochester.edu writes
>Such things exist, for example:
>    Gaston H. Gonnet [...]
>This book covers sequential search, sorted array (binary) search,
>hashing,[...]

Yep - it's got hashing - and I had to debug it, extend it, and add a few
bits of Knuth to make a usable package...

uh2@psuvm.bitnet (Lee Sailer) writes:
>So--what books do you keep next to your terminal for ready reference
>when you need to write a code fragment that you know has been done
>a thousand times before?

Unix :-)

Seriously - an important database of designs should be online! Another place
to look: the NAG algorithms library on the networks (see Comm ACM recently).
And don't forget the 300 algorithms and certifications published by the ACM.

I keep a book by my terminal. Let me quote the preface:

    Every successful engineer is a born inventor; indeed the daily work of
    an engineer in practice largely consists in scheming and drawing from
    previous experience new and improved processes, methods, and details
    for accomplishing them, and for satisfying, or cheapening old forms of
    [...] In the work of designing [...] the draughtsman has to rely mainly
    on his memory for inspiration; and, for lack of an idea, has frequently
    to wade through numerous volumes to find a detail [...] In the course
    of 25 years of such experience, I have found the want of such a volume
    as the present, and endeavoured to supply the deficiency in my own
    practice by private notes and sketches, [...] A sketch, properly
    executed, is - to a practical man -- worth a folio of description; and
    it is to such that these pages are addressed [...]
----end quote----

"The Engineer's Sketch Book of Mechanical Movements" (2nd edn) by T W Barber
Pub by E & F Spon, London and New York, 1890
                                        ^^^^
(243 pages, plus contents, preface and adverts)

It has sketches that succeed in communicating clearly how to make 1937
different mechanical gadgets.

***** We don't have a way to diagram an algorithm clearly in a 1-inch
square!

Another thing - does anyone know a book that has a set of formulae for
predicting (within 10%-20%) the speeds of programs that use disk storage?
The theory was published two or three years ago, and I've written my own
"crib sheet/cheat sheet" of formulae and want to publish them in a book...

Dick Botting
PAAAAAR@CCS.CSUSCC.CALSTATE (doc-dick)
paaaaar@calstate.bitnet
PAAAAAR%CALSTATE.BITNET@{depends on the phase of the moon}
Dept Comp Sci., CSUSB, 5500 State Univ Pkway, San Bernardino CA 92407
voice: 714-887-7368    modem: 714-887-7365    -- Silicon Mountain

------------------------------

Date: Sun, 28 Feb 88 14:57:02 PST
From: PAAAAAR%CALSTATE.BITNET@MITVMA.MIT.EDU
Subject: Review - Software Specification Techniques

Software Specification Techniques, by Gehani and McGettrick

This book is three years old, but I've only just got a copy to review.
Since there has been some discussion of specifications on this distribution
list, a late review might be timely.

Title:       Software Specification Techniques
Editors:     Narain Gehani (AT&T) and Andrew McGettrick (Univ Strathclyde)
Published:   by Addison-Wesley as part of the International Comp Sci Series
Copyright:   1986 by AT&T Bell Telephone Labs
Identifiers: ISBN 0-201-14230-9
             UK BNB 85-1437 (Dewey classification 001.64'25)
             US Lib of Congress Card QA76.6.S6437 (1985)

Objective information: It has 461 pages (including vii pages of preface,
contents and such) plus a 22-page bibliography. It republishes 21 papers and
technical reports plus other material.
It is divided into 4 parts:
    REQUIREMENTS AND TECHNIQUES (4 general intro papers)
    PARTICULAR APPROACHES (5 papers)
    CASE STUDIES (8 papers)
    SPECIFICATION SYSTEMS (4 papers)

It covers/touches on: assertions, abstraction, Parnas, VDM (Jones),
operational approaches (Zave), AFFIRM, CLEAR, GYPSY, PDS, OBJ (algebraic),
executable specifications (PAISley, Zave), state transition diagrams, ...
Reading most of the papers requires a familiarity with the symbols of formal
logic - such as lambdas, upside-down A's, etc.

Subjective reaction -- disappointment. It is easy to get a quick feeling for
what a specification might look like using one of the discussed techniques.
However, all the material you need to (1) verify the authors' assertions and
(2) test the ideas has been removed: the syntax and semantics of the systems
have been cut from the papers.

Conclusion: Borrow it - but you won't need to keep it.

Dick Botting
PAAAAAR@CCS.CSUSCC.CALSTATE (doc-dick)
paaaaar@calstate.bitnet
PAAAAAR%CALSTATE.BITNET@{depends on the phase of the moon}
Dept Comp Sci., CSUSB, 5500 State Univ Pkway, San Bernardino CA 92407
voice: 714-887-7368    modem: 714-887-7365    -- Silicon Mountain

------------------------------

Date: Wed, 02 Mar 88 15:40:20 SET
From: Nigel Head <ESC1111%ESOC.BITNET@MITVMA.MIT.EDU>
Subject: Software QA : What Should it Do ?

(Long, I'm afraid, so skip now if you're short of time!)

A rather broad question to you professional software engineers that might
get your imaginations going a little: what would you like a useful software
Quality Assurance service to do?
Context:
    Real-time process control system development
    Highly interactive user interface on purpose-built workstations
    20 - 40 man-years in 2 - 5 years elapsed time
    3 - 10 year operational lifetime
    FORTRAN (some PASCAL, maybe one day ADA) on VAX/VMS
    3 or 4 such developments on the go simultaneously

Our existing standards for software lifecycle and reviews appear to work
reasonably well (project phases: User Reqs definition, Software Reqs,
Architectural design, Detailed design, Code+test, Operations+maintenance;
each phase ends with a careful review before proceeding). Due to rapid
expansion and the onset of manned spaceflight (at least in Europe), we want
to formalise our QA to try to avoid potential future disasters, so I have
the job of creating, from scratch, a QA service to help in this.

I have a number of areas in which I would appreciate your opinions (or
pointers to publications of relevance). Of particular interest would be
(war) stories from people who have actually tried to apply a particular
tool/technique on a real project:

- formal requirements specification : what does it add to cost? does it
  work? how do users react to the increased complexity of a specification
  defined in this manner? how good a basis for defining acceptance criteria
  does it provide?

- manpower planning and estimation : any clues as to how to do this
  systematically? project control tools or methods?

- hardware capacity planning and estimation : has anyone come across
  *usable* methodologies or tools for helping here? any experience with
  (simple) simulations of systems to help predict loading and/or
  bottlenecks?

- reliability modelling : what's the state of this? any agreed models yet?
  how about initial parameter estimation to get the models started? has
  anyone actually dared use it to predict a
  cost-to-achieve-required-reliability for a project?

- reliability measurement : classification of faults? elapsed time vs run
  time? do the answers match reality anyway?
- software architecture verification : simulations of the overall
  architecture? design tools/methodologies?

- code evaluation : metrics of one sort or another - are they a useful guide
  to anything? which ones? other tools (dataflow analysis, test coverage
  analysis)?

Any other comments or suggestions are welcome. As you can see, my
interpretation of QA covers a very wide range (I've a feeling this may be a
case of a little knowledge being dangerous, however), so we're on the
lookout for anything that will help us do it better or cheaper and that is
also (sort of) cost effective. There is also a little room for some
speculative studies if a plausible idea turns up.

I will try to summarise whatever responses I get and will also, if anyone is
interested, let you know what I eventually decide to do and, in due course,
how it works out.

>>>>>>>>>>>>>>>>>>>>>>
Nigel Head                            Voice : 06151 - 886264
Data Processing Division              BITNET: ESC1111 at ESOC
European Space Operations Centre      SPAN  : ECD1::323NIGEL
Darmstadt                             BIX   : nhead
W. Germany
>>>>>>>>>>>>>>>>>>>>>>

------------------------------

End of Soft-Eng Digest
******************************
-------