neff@Shasta.STANFORD.EDU (Randy Neff) (03/04/88)
------         The Cynic's Guide to Software Engineering          ------
------  an invitation to dialog, starting with the personal view of  ------
------         Randall Neff @ sierra.stanford.edu                 ------
------         March 3, 1988    part 1                            ------
------------------------------------------------------------------------------
                        Hardware vs Software

State-of-the-practice in Hardware:
At companies that are serious about producing hardware, either pc boards or
integrated circuits, the first parts produced USUALLY work correctly.  At
Application Specific Integrated Circuit (ASIC) companies, it is an
embarrassment if the first parts don't work correctly.

State-of-the-practice in Software:
"THE PROGRAM IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO
THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM
PROVE DEFECTIVE, YOU (AND NOT IBM OR AN AUTHORIZED PERSONAL COMPUTER DEALER)
ASSUME THE ENTIRE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION"
     -- one paragraph from the IBM program License Agreement (shrink wrap).

All software is known by version number, both for bug fixes and enhancements:
Turbo C 1.0, 1.1, 1.2, 1.5, etc.  An Ada compiler 5.0, 5.1, 5.41, 5.5, etc.
X windows 10.4, 11.1 (about one hundred bugs reported), 11.2, etc.

A new version of an operating system takes about two weeks to install
correctly and now cannot network to some other computers.  A new version of
a compiler compiles, but generates bad code (illegal instructions,
segmentation faults) for previously working programs; man-months are wasted
trying to find workarounds.  A portable language won't port between
different brands of compilers or even different versions.  No brand names
here; similar problems occur with almost any software.  In order to survive,
a programming group MUST have full source code for all of the software it
uses.

Why the big difference?
Both hardware and software are working instantiations of behavior as
described by requirements and specifications.  There is the obvious
difference in the scale of the projects: designing and implementing a RISC
chip is a lot simpler than designing and implementing a compiler for it or
(gasp) a new operating system for it.  Why can hardware engineers do their
job so well while software engineers talk about the (huge) percentage of
lifecycle cost spent in the maintenance phase?

Looking at the last ten years in hardware design tools, the following trends
can be observed:

-- Willingness to learn new paradigms of design, new methods, new tools.
   Hierarchical design in functional languages, logic, switches, transistors.
   Willingness to change methods and design in order to use tools.

-- Capital investment in both hardware (originally $75,000 to $100,000 per
   station) and in software (like $100,000 programs).  Buy or lease Cray
   time.  Use days of mini and super-mini computer time to check a design.

-- Venture Capital/Entrepreneur boom in new companies and products (then
   bust or buyout for most, now mostly stagnant).  This was started and fed
   by university research into hardware CAD tools.

-- Return on Investment (ROI) and productivity gain was obvious (sometimes
   an order of magnitude) over previous methods.

-- Manufacturing cost was increased to drastically shorten design time with
   standard cells, gate arrays, and silicon compilers.

-- Enormous amounts of reuse of specification, design, and testing through
   standardized part libraries: off-the-shelf parts, standard cells, gate
   array cells, etc.

-- Constructing the test procedures along with the design, and designing
   for easier testing.

Why haven't software engineers followed similar trends?
shebs%defun.utah.edu.uucp@utah-cs.UUCP (Stanley T. Shebs) (03/04/88)
In article <2541@Shasta.STANFORD.EDU> neff@Shasta.UUCP (Randall Neff) writes:

>Why the big difference? Both hardware and software are working instantiations
>of behavior as described by requirements and specifications. There is the
>obvious difference in the scale of the projects: designing and implementing
>a RISC chip is a lot simpler than designing and implementing a compiler for
>it or (gasp) a new operating system for it. Why can hardware engineers do
>their job so well and software engineers talk about the (huge) percentage
>lifecycle cost in the maintenance phase?

You've answered your own question, at least partly.  Software complexity is
perennially underestimated.  A large piece of software is closer to a space
shuttle or nuclear power plant in the complexity of its behavior.  To some
extent, this is due to the "nonlinear" nature of software behavior that
someone has mentioned.  Unlike circuits, software doesn't degrade at a known
rate - failure will be unexpected and catastrophic.

>Looking at the last ten years in hardware design tools, the following trends
>can be observed:
>
>-- Willingness to learn new paradigms of design, new methods, new tools.
>   Hierarchical design in functional languages, logic, switches, transistors.
>   Willingness to change methods and design in order to use tools.

Amen.  Try arguing with a rabid assembly-language or C programmer to see
just how bad it is in software.  Higher-level languages are met with large
amounts of skepticism and indifference.  To be fair, both programmers and
managers are guilty here.

>-- Capital investment in both hardware (originally $75,000 to $100,000 per
>   station) and in software (like $100,000 programs). Buy or lease Cray time.
>   Use days of mini and super-mini computer time to check design.

I would *love* to have the luxury of extensive testing in controlled
environments.  The word "luxury" describes the current situation.  Guilty
parties are everybody, including the customer who buys software known to be
incompletely tested, because "I've just got to have it now!".

>-- Return on Investment (ROI) and productivity gain was obvious (sometimes
>   order of magnitude) over previous methods.

This is more of a problem.  Claims of gain in software productivity have
been regarded with considerable suspicion, perhaps because of the lack of
repeatability?

>-- Manufacturing cost was increased to drastically shorten design time with
>   standard cells, gate arrays, and silicon compilers.
>-- Enormous amounts of reuse of specification, design, and testing through
>   standardized part libraries: off shelf parts, standard cells, gate array
>   cells, etc.
>-- Constructing the test procedures along with the design; and designing for
>   easier testing.

All three of these approaches in the software realm fall victim to the same
concern: efficiency.  This is partly a legacy of the 50s, when the
self-taught programmers of the time were willing to do *anything* to squeeze
out a few more cycles or a few words of memory.  It was OK to violate any
abstraction or to use any quirk of the system.  Even secret changes to the
specifications were acceptable (assuming that specs were used at all).  That
has changed somewhat, but programmers are still caught in the tug-of-war
between reliability and performance.  When was the last time you heard a
customer say of a piece of code, "yeah, this is fast enough"?

>Why haven't software engineers followed similar trends?

Any half-trained software engineer knows how things *should* be done: formal
specifications, reuse/standardization of components, extensive testing and
test generation, and so forth.  The time necessary to do all this is much
larger than the average time allotted to software efforts, and between
management and customer, the time gets shrunk to something that fits other
schedules.

In the case of software publishing, there is the incentive of competition to
shrink the schedule.  Then the objecting software engineer's competence is
impugned, and he/she imprudently claims "I can do all that in three days"
(images of the legendary "real programmer" in the back of the head).  All
downhill from there...

The solution?  Software engineers have to stand up for what they know is
right, undisciplined hackers have to be retrained or fired, managers have to
be knowledgeable about what is and isn't possible, and customers have to be
both more patient and completely unforgiving of mistakes in delivered
software.  Nothing technical here; as Fred Brooks said, "no silver bullet".

                                                stan shebs
                                                shebs@cs.utah.edu
UH2@PSUVM.BITNET (Lee Sailer) (03/05/88)
In article <2541@Shasta.STANFORD.EDU>, neff@Shasta.STANFORD.EDU (Randy Neff)
says:
>
>Why the big difference? Both hardware and software are working instantiations
>of behavior as described by requirements and specifications. There is the
>obvious difference in the scale of the projects: designing and implementing
>a RISC chip is a lot simpler than designing and implementing a compiler for
>it or (gasp) a new operating system for it. Why can hardware engineers do
>their job so well and software engineers talk about the (huge) percentage
>lifecycle cost in the maintenance phase?
>

A cynical answer: Since hardware engineers usually get to do their work
first, they do it in ways that make it easier for them and harder for the
software engineers who follow.
marks@buckaroo.SW.MCC.COM (Peter) (03/06/88)
In article <5313@utah-cs.UUCP>, shebs%defun.utah.edu.uucp@utah-cs.UUCP
(Stanley T. Shebs) writes:
>
> The solution? Software engineers have to stand up for what they know is
> right, undisciplined hackers have to be retrained or fired, managers have
> to be knowledgeable about what is and isn't possible, and customers have
> to be both more patient, and completely unforgiving of mistakes in delivered
> software. Nothing technical here; as Fred Brooks said, "no silver bullet".

It seems to me that the *solution* proposed here can be paraphrased as
"people have to will themselves to change."  If hoping for a silver bullet
is futile, how well does wishing for the magic potion called "will power"
stand up to scrutiny?  Or perhaps this is a call to the Lone Ranger himself
to rid the world of evil-doers?  Or is it merely a suggestion to go to Oz
and get some courage?

P-)
shebs%defun.utah.edu.uucp@utah-cs.UUCP (Stanley T. Shebs) (03/07/88)
In article <302@buckaroo.SW.MCC.COM> marks@buckaroo.SW.MCC.COM (Peter) writes:

>> The solution? Software engineers have to stand up for what they know is
>> right, [...]
>
>It seems to me that the *solution* proposed here can be paraphrased as
>"people have to will themselves to change." If hoping for a silver bullet
>is futile, how well does wishing for the magic potion called "will power"
>stand up to scrutiny?

Willpower is greatly aided by the changing of old institutions and the
establishment of new ones (people love to be ordered about by "the system",
whatever they might say).  An equivalent of Underwriter's Labs for legal
software has been mentioned recently, and it's already gained considerable
power.  A Ralph Nader type could do lots to get consumer protection outfits
set up.  I'm not real keen on forming another government agency to monitor
computing, but that may be what it takes.  Unfortunately, it will probably
happen *after* the first software-caused disaster killing >100 people...

>Or perhaps this is a call to the Lone Ranger himself
>to rid the world of evil-doers?

Heh-heh, there are a few computer companies where I'd like to don my Rambo
outfit and put a few HE rounds through certain people (they know who they
are).  Seriously, I do support licensing of software engineers, along with
civil and maybe even criminal penalties for misdeeds.  Nothing like the law
to instill a little willpower when you're trying to decide whether or not
to add that bit of type-checking code!

                                                stan shebs
                                                shebs@cs.utah.edu
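The "bit of type-checking code" under discussion could be sketched in a few
lines of modern Python (a hypothetical illustration, not from the original
thread; the function name and bounds are invented): validate inputs at a
module boundary instead of trusting the caller.

```python
def set_throttle(percent):
    """Accept a throttle setting only if it is a real number in [0, 100].

    A minimal defensive-programming sketch: reject wrong types and
    out-of-range values instead of silently propagating bad data.
    """
    # bool is a subclass of int in Python, so exclude it explicitly.
    if not isinstance(percent, (int, float)) or isinstance(percent, bool):
        raise TypeError("throttle must be a number, got %r" % (percent,))
    if not 0 <= percent <= 100:
        raise ValueError("throttle out of range: %r" % (percent,))
    return float(percent)

assert set_throttle(42) == 42.0
```

The point of the surrounding debate is exactly that such checks cost a
little schedule time and a few cycles, which is why they get omitted.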
tada@athena.mit.edu (Michael Zehr) (03/09/88)
In article <2541@Shasta.STANFORD.EDU> neff@Shasta.UUCP (Randall Neff) writes:
>                       Hardware vs Software
>Why can hardware engineers do
>their job so well and software engineers talk about the (huge) percentage
>lifecycle cost in the maintenance phase?
>[ lots of other stuff... ]

Well, I read an article recently that gave some facts that could explain
some of the discrepancy.  I don't remember the exact numbers, so I'll
paraphrase: over the last blah years, the demand for hardware has grown at a
[blah] annual rate.  Over the same period of time, the demand for software
has grown at a [5 times as much] annual rate.

In other words, the software people are under more pressure to get their
product out fast.  [yeah, i know -- all the hardware people who've ever had
a deadline are going to flame me...]

Another possible reason is that a lot more custom software is produced than
custom hardware.

just a few random thoughts ...

-------
michael j zehr
"My opinions are my own ... as is my spelling."
karl@tut.cis.ohio-state.edu (Karl Kleinpaste) (03/09/88)
tada@athena.mit.edu writes:
> In other words, the software people are under more pressure to get
> their product out fast. [yeah, i know -- all the hardware people
> who've ever had a deadline are going to flame me...]
This is related to the abysmal management styles that seem to go with
software management. "If it's not perfect, that's OK, we'll fix it in
release 2. Just get us on the market *first*." The investment to
repair faulty hardware already in the field is positively monstrous;
the investment to repair faulty software in the field is (perceived to
be) small.
Example: At a department where I used to work (I won't even tell you
which job, so those of you who might know where I've been the last 10
years can't even guess well), a certain large product was arranged to
be delivered by a certain date. This offense was committed by a group
responsible for setting up contracts but who knew absolutely nothing
of software engineering; this was evidenced by the ridiculous time
schedule they imposed on the developer-grunt staff.
First result, the original schedule was simply impossible to meet. It
was hopeless from the outset. There was no way to succeed. Talk
about lowered morale. Common bitch session over lunch: "How are we
going to deliver by MM/DD/YY?" "We aren't, of course." Be still, my
retching stomach.
Second, there was no way to expand the schedule. The contract which
both corporate parties had signed required the supplied date, period.
Delay was dealt with in the form of charges against my company as
$N/day where N is very, very large.
Third, management within the department was (ahem) suboptimal, in that
the already-excessively-tight schedule was arbitrarily tightened even
more by the department manager for reasons I don't even care to guess.
The initial beta-test delivery date was backed up 2 months. Schedule
compression was required, which not only caused the loss of 2 months
of work before that delivery, but also 2 more days of work during
which the schedule tightening was performed by the entire development
staff. You know the scenario: OK, everybody, pull out those schedules
that show your work done by MM/DD/YY, and find a way to compress it to
MM-2/DD/YY.
Ick.
Would this be tolerated in a hardware development environment? Not in
any with which I have been associated. As an example, if my group is
working on the next generation of 680x0, and we screw up the design
such that race conditions in the CPU are created, and we don't catch
the flaw until after we've shipped a couple hundred thousand of them,
our collective butt would be in a sling for sure. The cost to the
company to replace the defective units in the field would be
absolutely horrendous. Jobs would be lost for certain.
In software, though, the view is that it can be fixed for the next
release. It's "just software," just park a new tape on the drive and
load in new binaries. See, your bug is gone now. Oh, you found
another bug? Gee, we're sorry, that'll be fixed in release 3...
Bad management is the curse of software development. Think about the
rather neat, high-quality software that is produced by people who are
not concerned with schedules much; my experience has been that such
software is far more bug-free than anything written under pressure.
This is especially true of real software professionals who are working
on something for themselves in their spare time that happens to be
useful to a larger group, so they start distributing it.
You know, come to think of it, my story about compressed schedules
fits *2* of my previous jobs altogether too well...
Karl
geoff@desint.UUCP (Geoff Kuenning) (03/09/88)
In article <5321@utah-cs.UUCP> shebs@cs.utah.edu (Stanley T. Shebs) writes:

> Heh-heh, there are a few computer companies where I'd like to don my Rambo
> outfit and put a few HE rounds through certain people (they know who they
> are).

Unfortunately, that's the problem: they DON'T know who they are.  How many
incompetent people do you know who will admit to incompetence?  As an
example, I could (but won't) cite a certain person who, having driven away
one company's customers, later went on to a more responsible position with
a much larger firm, where he today is happily convincing many people that
his employer is completely incompetent, at least with the software he is
involved in.
-- 
    Geoff Kuenning   geoff@ITcorp.com   {uunet,trwrb}!desint!geoff
beyer@houxs.UUCP (J.BEYER) (03/09/88)
When I used to be in hardware, the term 'maintenance' was used exclusively
for repairing something that broke.  'Enhancements' were not charged to a
maintenance budget, but to a future engineering budget.  In software, it
seems that enhancements and continuing engineering are [I believe]
mistakenly charged to maintenance.  This seems unfair.
shebs%defun.utah.edu.uucp@utah-cs.UUCP (Stanley T. Shebs) (03/10/88)
In article <1692@desint.UUCP> geoff@desint.UUCP (Geoff Kuenning) writes:

>> Heh-heh, there are a few computer companies where I'd like to don my Rambo
>> outfit and put a few HE rounds through certain people (they know who they
>> are).
>
>Unfortunately, that's the problem: they DON'T know who they are. How
>many incompetent people do you know who will admit to incompetence?

"Incompetence" has several possible meanings.  It might, for instance, mean
that the individual is simply ignorant, perhaps even too ignorant to realize
it.  Education, not punishment, is appropriate; if any punishment is to be
handed out, it should be to the managers that give ignorant persons too much
responsibility.  Another sort of incompetent is one who knows what should be
done, but chooses to take shortcuts.  Lots of punishment, the more the
better; there's ample legal precedent for this sort of thing.  Then there's
the person with good intentions but botched results.  Handslaps and
transfers to less responsible positions are the right answer.  (Note that my
initials are "SS" - it's a good thing I'm not a manager! :-) )

>As an
>example, I could (but won't) cite a certain person who, having driven away
>one company's customers, later went on to a more responsible position with
>a much larger firm, where he today is happily convincing many people that
>his employer is completely incompetent, at least with the software he is
>involved in.

Why not name names?  If you have facts to relate, then your moral duty is
to publicize them.  Of course, this is in the same category as whistle
blowing, so I understand if you maybe want to line up another job first!
Perhaps ACM or IEEE could make themselves useful for a change and set up
something to support software whistleblowers...

                                                stan shebs
                                                shebs@cs.utah.edu
rmac@well.UUCP (Robert J. McIlree) (03/11/88)
In article <5335@utah-cs.UUCP> shebs%defun.utah.edu.UUCP@utah-cs.UUCP
(Stanley T. Shebs) writes:
>In article <1692@desint.UUCP> geoff@desint.UUCP (Geoff Kuenning) writes:
>
>>As an
>>example, I could (but won't) cite a certain person who, having driven away
>>one company's customers, later went on to a more responsible position with
>>a much larger firm, where he today is happily convincing many people that
>>his employer is completely incompetent, at least with the software he is
>>involved in.
>
>Why not name names? If you have facts to relate, then your moral duty is
>to publicize them. Of course, this is in the same category as whistle
>blowing, so I understand if you maybe want to line up another job first!
>Perhaps ACM or IEEE could make themselves useful for a change and set up
>something to support software whistleblowers...

Uh, hold the phone here, Stan.  "Whistleblowing" pertains to those who are
committing crimes, like felonies, against their employers, the government,
etc.  "Publicizing the facts", as you describe, would probably result in one
or more of the following (even if the whistleblower had something else lined
up):

    1) To trumpet so-and-so as an incompetent software engineer or
       manager would probably entail the release of the company's
       proprietary information (i.e. the "facts", as you put it).  This
       leads to a lawsuit by the company involved against the
       whistleblower, for revealing trade secrets and violating the
       standard employment agreement most of us sign when coming aboard.

    2) In trumpeting so-and-so as incompetent, so-and-so probably has
       some very nice legal avenues to pursue against the
       "whistleblower" (I'd use a better term here; how about "fink" or
       "stool pigeon"?).  Libel and slander come to mind right away.
       This scenario can also be closely coupled with scenario #1.

    3) Finally, the "whistleblower/fink/stoolie" must become public
       along with the target, and would probably get publicized in the
       trades (after all, we are trying to root out "problem" people,
       aren't we?) and prof. mags.

I, for one, would never hire the fink because, in the absence of criminal
activity as defined by law, he could "blow his whistle" on me for as small
a reason as disagreeing with my management or technical styles.  As you may
suspect, the witch-hunts that start from that type of approach would
conceivably leave us all unemployed.  Finks cannot be trusted.

As to your reference for ACM or IEEE to become policemen over our
profession, I for one do not pay dues to these organizations so that the
profession may be purged of your view of incompetent individuals.
Actually, our profession weeds out people who don't belong in it pretty
effectively anyway.  People leave, get fired, get promoted (as in the
example), or start new careers.  So an equilibrium of "competent" folks is
usually maintained across the board.

Finally, individuals compete with each other, and companies do the same.
So if you were a competitor against the guy who screwed up, you'd gloat,
because you'd get your stuff to market faster with higher quality while he
spins his (and the company's) wheels.  See what I mean?

                                                Bob McIlree
                                                {lll-crg,ihnp4,}!well!rmac
noise@eneevax.UUCP (Johnson Noise) (03/11/88)
In article <3584@bloom-beacon.MIT.EDU> tada@athena.mit.edu (Michael Zehr)
writes:
>
>In other words, the software people are under more pressure to get
>their product out fast. [yeah, i know -- all the hardware people
>who've ever had a deadline are going to flame me...]
>

flame(on);  No, really, I agree with you, but I think there is a larger
amount of incompetence in the software industry (too many hackers, not
enough _real programmers_).

>Another possible reason is that a lot more custom software is produced
>than custom hardware.
>

That may be true in pure numbers, but I think it scales.  I don't know
anyone who designs TVs and radios.  I do know people who design low light
level imaging systems, 500 MW pulsed power modulators, laser-initiated
semiconductor switching systems, etc.
shebs%defun.utah.edu.uucp@utah-cs.UUCP (Stanley T. Shebs) (03/11/88)
In article <5411@well.UUCP> rmac@well.UUCP (Robert J. McIlree) writes:

>[...] "Whistleblowing" pertains to those who
>are committing crimes, like felonies, against their employers, the
>government, etc.

I didn't realize that "whistleblowing" was so narrowly construed.  What do
you call it if only civil statutes apply?  Is a civil engineer who knowingly
specifies cheaper but unsafe materials in a building (which then collapses
as a result) only touchable via lawsuits?  Is someone who publicizes the
engineer's misdeeds a nasty fink or a public hero?  Would Roger Boisjoly
(the Challenger almost-whistleblower) have been a fink?

>    2) In trumpeting so-and-so as incompetent, so-and-so probably has
>       some very nice legal avenues to pursue against the
>       "whistleblower" (I'd use a better term here; how about "fink" or
>       "stool pigeon"?).  [...]  Libel and slander come to mind right
>       away.

If I recall my vague acquaintance with the law correctly, the truth is never
considered libelous.  "X is incompetent" is not a provable statement, but
"X omitted range checks in critical code" or "Y released product Z 4.0 with
54 known bugs, one of which corrupts a PC's hard disk" are statements whose
truth can be tested objectively.

>    I, for one, would never hire the fink because, in the absence of
>    criminal activity as defined by law, he could "blow his whistle" on
>    me, for as small a reason as disagreeing with my management or
>    technical styles.

Even as a mere student in the ivory towers of academe, I hear all kinds of
stuff about who screwed up and why.  Some of it is silly stuff about
management style, and some of it is more serious.  I would probably prefer
to hire the "fink", at least if the "finking" was all factual, because I
would know that this person has enough of a sense of responsibility to stand
up to me, if I were to be tempted to do something wrong.  The factualness is
important; witch-hunts come about when rumors are given more weight than
provable allegations.

>As to your reference for ACM or IEEE to become policemen over our
>profession, I for one do not pay dues to these organizations so
>that the profession may be purged of your view of incompetent
>individuals.

Doesn't have to be "my" views - there is plenty of "accepted wisdom" in the
field.  ACM is actually not an appropriate organization anyhow, since it is
"scientific" rather than "professional" like IEEE.  IEEE could help
implement licensing - I think some people have been working on it (anybody
know for sure?).  Is there any reason why software engineers should be
exempt from the sort of licensing requirements found in other fields?

>Actually, our profession weeds out people who don't
>belong in it pretty effectively anyway. People leave,
>get fired, get promoted (as in the example), or start new careers.

"Getting promoted" is not my idea of effective weeding!  And why would
programmers who do wrong things tend to start new careers more than those
who do right things?  Come to think of it, why would they even get fired?
Based on my limited "real-world" experience (defense contractor, mostly),
there wasn't even any attempt to determine who (if anybody) was responsible
for mistakes!  Given that, how could there have been any incentive to write
reliable software?

(Incidentally, my DoD work was for the air-launched cruise missile, and
although the standards for most of the software were pretty lax, the
software relating to nuclear safety was a different matter - there were
elaborate calculations on the probability of accidental detonation, although
I don't recall any sort of formal verification being done on the assembly
(!) code involved.  Now you folks living near SAC bases can sleep easier
tonight. :-) )

                                                stan shebs
                                                shebs@cs.utah.edu
dhesi@bsu-cs.UUCP (Rahul Dhesi) (03/11/88)
In article <5340@utah-cs.UUCP> shebs%defun.utah.edu.UUCP@utah-cs.UUCP
(Stanley T. Shebs) writes:

>Is there any reason why software engineers
>should be exempt from the sort of licensing requirements found in other
>fields?

Yes indeed.  Licensing prevents employers from hiring the people *they*
believe are the best, instead of being forced to hire people that *somebody
else* considers competent.  If an employer would really rather let somebody
else make his hiring decisions, he can always specify that job applicants
be certified (e.g. CDP, CCP).  Clearly most employers don't feel this way,
because most don't require certification.

I see no reason to believe that a certificate or license is a more reliable
indicator of competence than evidence of education or achievement at work.
I would be happy to see a licensing agency put its money where its mouth is,
by accepting all liability for the failures of those that it licenses as
competent.  Until that happens, and the ABA accepts liability for all legal
malpractice by its licensees, and the AMA accepts liability for all medical
malpractice, and so on, I will refuse to believe that licensing is anything
other than an attempt to simply keep wages artificially high.

Political follow-ups should probably be emailed or sent to
talk.politics.misc (where I won't see them).
-- 
Rahul Dhesi     UUCP:  <backbones>!{iuvax,pur-ee,uunet}!bsu-cs!dhesi
jwg1@bunny.UUCP (James W. Gish) (03/12/88)
In article <7982@tut.cis.ohio-state.edu> karl@tut.cis.ohio-state.edu (Karl
Kleinpaste) writes:
>This is related to the abysmal management styles that seem to go with
>software management. "If it's not perfect, that's OK, we'll fix it in
>release 2. Just get us on the market *first*." The investment to
>repair faulty hardware already in the field is positively monstrous;
>the investment to repair faulty software in the field is (perceived to
>be) small.
>...

I agree; this is one of the biggest parts of the problem.  Part of the
abysmal management that seems to go along with software development is a
real reluctance on the part of many managers to buy appropriate tools to
help do the job.  I attribute this to the perception that software tools
are not "tangible" in the eyes of many beancounter-managers, who thus find
their acquisition difficult to justify - unlike the purchase of hardware
tools that they can touch and feel and stub their toes on.

Another important difference between quality/productivity in hardware vs.
software development is the maturity of the fields.  Hardware has had a lot
more time to develop disciplined approaches than the software field has.
Also consider that the notion of "pluggable modules" and well-defined
interfaces is taken for granted when building hardware (although I've seen
some pretty messy boards with lots of patches).  Standard software
interfaces/components are still a long way off.  This is due to many factors
that I'm sure most of us can relate to (inertia of obsolete languages,
programmer eccentricity, programmer fear of <fill in your favorite fear>).
Many programmers do not trust existing code, often for good reason, so there
is usually a direct impact of the "not invented here" syndrome on
productivity and quality.  A good deal of the problems with software reuse
are cultural/psychological/organizational, but we are also far from solving
the technical problems of software reuse.
-- 
Jim Gish
GTE Laboratories, Inc., Waltham, MA
CSNET: jwg1@gte-labs
UUCP: ..!harvard!bunny!jwg1
raveling@vaxa.isi.edu (Paul Raveling) (03/12/88)
In article <2541@Shasta.STANFORD.EDU> neff@Shasta.UUCP (Randall Neff) writes:
>
>State-of-the-practice in Software:
>"THE PROGRAM IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
>OR IMPLIED, INCLUDING, BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF
>MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO
>THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM
>PROVE DEFECTIVE, YOU (AND NOT IBM OR AN AUTHORIZED PERSONAL COMPUTER DEALER)
>ASSUME THE ENTIRE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION"
>one paragraph from IBM program License Agreement (shrink wrap).

This is actually state of the art in litigation.  You'll find the same
wording in limited-warranty disclaimers for lots of non-computing products
because it's clearly defined in law.  You're right, though, that software
goes MUCH farther than anything else I can think of in invoking the law's
CYA shelters.  I personally dislike these disclaimers intensely.

>All software is known by version number, both for bug fixes and enhancements:
>Turbo C 1.0, 1.1, 1.2, 1.5, etc. An Ada compiler 5.0, 5.1, 5.41, 5.5, etc.

... as is hardware.  I've tailored software to suit boards up to Revision M.
If hardware makes it anywhere near that level of revision, it's almost
certainly obsolete and will be replaced soon by a new board at Rev A.  I'd
LOVE to be able to use the same approach with lots of software.

Economics favors upgrading hardware as technology enables producing a new
product which can be manufactured for less cost than an old product.  The
conventional "wisdom" is that software should be reused to the hilt to hold
down costs.  This works for some software, but causes BIG trouble when
reused components are a poor fit for new requirements.  Software management
seems to be much more aware of the NIH (Not Invented Here) syndrome than of
the risk of the UWE (Use Whatever Exists) syndrome.

---------------------
Paul Raveling
Raveling@vaxa.isi.edu
raveling@vaxa.isi.edu (Paul Raveling) (03/12/88)
In article <5313@utah-cs.UUCP> shebs%defun.utah.edu.UUCP@utah-cs.UUCP
(Stanley T. Shebs) writes:
>
>All three of these approaches in the software realm fall victim to the
>same concern: efficiency.  This is partly a legacy of the 50s, when the
>self-taught programmers of the time were willing to do *anything* to
>squeeze out a few more cycles or a few words of memory.  It was OK to
>violate any abstraction or to use any quirk of the system.  ...

There's another side to this coin.  What I see now is lots of software
being written without regard to performance.

Squeezing every ounce of speed and size out of the code made sense when
(for example) my time cost a princely $3.08/hr and the machine cost
$125/hr.  But those old machines taught us that efficient algorithms were
a bigger win than efficient code.  They forced us to learn good habits in
both algorithm design and optimal coding.  We had to do so much "optimal
coding" that it was quite painless to abandon it when machines grew enough
to eliminate the need.

Now I see lots of software being put together with minimum implementation
time as its sole goal.  Yes, we have lots of uncommented Lisp.  The
benchmarks show it runs about 3 times slower than C, which is about 3
times slower than machine language.  All that runs over an operating
system, Unix, which switches contexts an order of magnitude slower than
some other systems.

... and one of us (not me) wonders why his "high-tech" workstation has
worse response time than his PC at home.

---------------------
Paul Raveling
Raveling@vaxa.isi.edu
geoff@desint.UUCP (Geoff Kuenning) (03/12/88)
In article <5335@utah-cs.UUCP> shebs%defun.utah.edu.UUCP@utah-cs.UUCP
(Stanley T. Shebs) writes:

> Why not name names? If you have facts to relate, then your moral duty is
> to publicize them. Of course, this is in the same category as whistle
> blowing, so I understand if you maybe want to line up another job first!
> Perhaps ACM or IEEE could make themselves useful for a change and set up
> something to support software whistleblowers...

It's got nothing to do with job protection; in fact, one could argue that
I have already cost the person in question one job.  It's just that I
don't want to make enemies, and I know that he reads the net, though
perhaps not this group.  It is quite possible to argue that my position is
based on opinions, not facts, software engineering being a somewhat
ill-defined discipline.

I don't really see how writing that "Joe Blow of Fast Computers, Inc. is
an idiot who is damaging his company" is going to help anyone.  Certainly
Fast Computers would qualify as an idiot company if they fired Joe based
on the word of an essentially anonymous net poster.  And I'd hate to get
sued by Joe because he couldn't get a new job based on my maligning him.

In any case, any competent interviewer would never hire this person for
the job he has.  So why not just let Fast Computers dig their own grave?
-- 
Geoff Kuenning   geoff@ITcorp.com   {uunet,trwrb}!desint!geoff
shebs%defun.utah.edu.uucp@utah-cs.UUCP (Stanley T. Shebs) (03/14/88)
In article <5019@venera.isi.edu> raveling@vaxa.isi.edu (Paul Raveling)
writes:

> Now I see lots of software being put together with minimum
> implementation time as its sole goal.  Yes, we have lots
> of uncommented Lisp.  The benchmarks show it runs about
> 3 times slower than C, which is about 3 times slower than
> machine language.

I've said it before in other newsgroups - there's no technical reason for
Lisp programs to be any slower than machine language.  Near-optimal
compiler output has been demonstrated many times, but only on small
benchmarks, and the results on larger programs can be as bad as you say.
Nevertheless, efficient compilation of realistic programs seems to have
been decreed to be "non-research" and therefore unfundable, and Lisp
companies have been trying to get better compilers while losing sales
because their stopgap releases have shabby performance.  It's pretty
frustrating to see higher-level languages losing out to C++, not because
of inherent defects, but because available systems fail to deliver
adequate speed and space performance.

> All that runs over an operating system,
> Unix, which switches contexts an order of magnitude slower
> than some other systems.
>
> ... and one of us (not me) wonders why his "high-tech"
> workstation has worse response time than his PC at home.

This is the first I've heard of Unix being accused of slowing workstations
to sub-PC levels!  It certainly doesn't correspond to my experience, where
I've given up on PCs completely because they're just too excruciatingly
slow to use, compared to the HP 350 on my desk...

							stan shebs
							shebs@cs.utah.edu
jim@ektools.UUCP (James Hugh Moore) (03/14/88)
My own thought on what accounts for a significant part of the difference
between the state of the art in hardware and software engineering is that
hardware engineers design for a much more intelligent group of users :-) .

My point is that when hardware engineers survey their user community, they
are dealing with people who are used to logically setting things down in
order, and who usually have a significant amount of experience to bring to
bear on specifying what they need.  The software engineer, on the other
hand, has either marketing people or the users themselves to deal with.
Users of computer-based products range from those who are very intelligent
and learn from the software you have written (requesting more
functionality almost as soon as it is released) to those who know what
they want to do, but do not want to bother reading the manual or taking
the training course.  In either case, building good, useful software in a
timely manner is difficult.

I do not apologize for software managers who are not willing to say no to
marketing people, nor for those not willing to invest in designing for
reusability, etc.  But for hardware engineers who think that software is
easy, I leave you with designing a microprocessor (based on any
architecture) which has one additional instruction:

    Mnemonic    Op Code         Description
    DWIT        (your choice)   Do what I think.

(This is not original, as most programmers will recognize from the various
joke instruction sets which have circulated over the years.)

Another way of putting it is that users tend to pump water back to the top
of the "software development waterfall".

===============================================================================
James H. Moore                  "Jesus is Lord"
Eastman Kodak Co.
Rochester, NY 14650
...rutgers!rochester!kodak!ektools!jim
* All opinions expressed are my own, and do not represent an authorized
  statement by Eastman Kodak Co.
-------------------------------------------------------------------------------
t-peterl@microsoft.UUCP (Peter Labon) (03/16/88)
Having worked with both hardware and software, I must concur with the
notion that software tools (as opposed to hardware tools) have a long way
to go.  I often find this is spurred on by the people in marketing
(sales?) departments, whose goals don't always coincide with those of the
developers.

I have worked for a developer of tools (nth-generation language concept),
and in the Tools Group of a different developer.  Both employers made a
large investment in tools and as a consequence are reaping large profits.
Anyone who doesn't do this is shooting himself in the foot and then
entering a marathon.

TOOLS: the choice of a new generation

Simon

Disclaimer: the views expressed in this article are exclusively a figment
of my imagination.
raveling@vaxa.isi.edu (Paul Raveling) (03/18/88)
In article <5346@utah-cs.UUCP> shebs%defun.utah.edu.UUCP@utah-cs.UUCP
(Stanley T. Shebs) writes:
>
>This is the first I've heard of Unix being accused of slowing
>workstations to sub-PC levels!  Certainly doesn't correspond to my
>experience, where I've given up on PCs completely because they're just
>too excruciatingly slow to use, compared to the HP 350 on my desk...

You're right that the 350 is a good, fast machine.  That's what I'm using
now, with 24 MB of RAM and almost 400 MB of disk.  However, before that I
used a 320.  The fellow who complained that his workstation was slower
than his PC is also using a 320.  Both of us own AT clones, and we can
both attest to the obvious speed difference.

The context switch timing (Unix slower than at least one other system by
an order of magnitude) is from benchmarks between EPOS and Unix V6 on the
same PDP-11/45.  Some limited benchmarks on 350's, 320's, and VAXes using
HP-UX and BSD 4.2 suggested this ratio would hold true with the newer
Unix systems.  I'd love to get a charter to write enough of an EPOS-like
kernel to prove this on our workstations.

Another benchmark of note was a FutureNet DASH-2/DASH-3 port.  DASH-1
through DASH-4 were schematic designers carefully written in assembly
language for PC's, including XT's.  Before the original Bobcats (using
68010's) hit the streets, a group ported DASH-2 to one, using C and
Starbase to drive the graphics.  The Bobcat hardware was about 3 times
faster than a PC/XT by most benchmarks, yet the time required to display a
particular drawing was 5 times longer on the Bobcat than on the XT,
suggesting the software as a whole was 15 times slower.

Another case in point is our Geographic Display Agent on the HP
workstations.  Its performance improved by an order of magnitude when I
moved its X interface out of a separate graphics-subsystem process and
into the GDA process.

Finally, there are old benchmarks on VAX-11/780's comparing BSD 4.1 to
VMS Version 2.  I don't recall the exact numbers now, but I believe they
showed a context switch time of about 175 microseconds on VMS and about 1
millisecond on BSD.

---------------------
Paul Raveling
Raveling@vaxa.isi.edu