MDAY@xx.lcs.mit.EDU ("Mark S. Day") (03/14/88)
Soft-Eng Digest Sun, 13 Mar 88 Volume 4 : Issue 19
Today's Topics:
Linkers
Programmer Productivity
Response to "A Cynic's Guide, part 1" (4 msgs)
Shared Libraries (2 msgs)
Rochester Connectionist Simulator update
Call for Papers: Neural Info Processing Systems
HICSS-22 Rapid Prototyping Session
----------------------------------------------------------------------
Date: 5 Mar 88 01:41:22 GMT
From: quintus!ok@unix.sri.com (Richard A. O'Keefe)
Subject: Linkers
>> We could profitably turn this into a survey of what the linker
>> requirements of various languages are: could an ADA compiler easily
>> use the UNIX / VMS / MVS / PR1MOS linker?
> In a word, no. A linker for Ada must check for matching of user-defined
> types in different modules.
In two words, not quite. There are two things: planning a link (which
involves working out where to find things and checking that the operation
requested makes sense) and *doing* the link. Is there any reason why an
ADA compiler could not generate two files: a description file and an
object file, and an ADA linker could not use the information in the
description files to validate and plan the link, and then use the
UNIX / VMS / MVS / PR1MOS linker to *do* the link? If an ADA program is
to include foreign code (ADA-FORTRAN is a particularly interesting case)
mustn't something like this be done? Simula has similar link validation
requirements, but Simula compilers normally use the system-supplied linker.
I'm not claiming that this is the best way to do it.
I'm just trying to draw a distinction between finishing off a compilation
and linking. Maybe there isn't any such distinction if you look hard
enough. The Burroughs BINDER certainly managed to check interfaces as it
bound together COBOL, ALGOL, and FORTRAN. Is the fact that the UNIX
linker doesn't check interfaces its greatest weakness?
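To make the planning/doing split concrete, here is a minimal sketch of
a pre-link validator, assuming each compiler emits one description file
per module whose lines read "export NAME SIGNATURE" or
"import NAME SIGNATURE". The format, the file names, and the tool are
all invented for illustration; no real Ada toolchain works this way as
far as I know. The point is only that interface checking can live in a
small front end, with the system linker left to *do* the link.

/* prelink.c -- sketch of a two-phase link: plan (validate interfaces
 * from per-module description files), then invoke the system linker.
 * Usage: prelink mod1.desc mod2.desc ...
 * Bounds checking and error handling are trimmed for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAXSYMS 1024

static char names[MAXSYMS][64], sigs[MAXSYMS][64];
static int nsyms;

static const char *exported_sig(const char *name)
{
    int i;
    for (i = 0; i < nsyms; i++)
        if (strcmp(names[i], name) == 0)
            return sigs[i];
    return NULL;
}

int main(int argc, char **argv)
{
    char kind[16], name[64], sig[64], cmd[2048], obj[256], *dot;
    int i, errors = 0;
    FILE *fp;

    /* Pass 1: collect every exported symbol and its type signature. */
    for (i = 1; i < argc; i++) {
        if ((fp = fopen(argv[i], "r")) == NULL) {
            perror(argv[i]);
            return 1;
        }
        while (fscanf(fp, "%15s %63s %63s", kind, name, sig) == 3)
            if (strcmp(kind, "export") == 0 && nsyms < MAXSYMS) {
                strcpy(names[nsyms], name);
                strcpy(sigs[nsyms], sig);
                nsyms++;
            }
        fclose(fp);
    }

    /* Pass 2: plan the link -- check each import against its export. */
    for (i = 1; i < argc; i++) {
        if ((fp = fopen(argv[i], "r")) == NULL)
            continue;
        while (fscanf(fp, "%15s %63s %63s", kind, name, sig) == 3)
            if (strcmp(kind, "import") == 0) {
                const char *exp = exported_sig(name);
                if (exp == NULL || strcmp(exp, sig) != 0) {
                    fprintf(stderr, "%s: %s: interface mismatch\n",
                            argv[i], name);
                    errors++;
                }
            }
        fclose(fp);
    }
    if (errors > 0)
        return 1;

    /* The plan is valid: let the ordinary system linker do the link.
     * Object files are assumed to sit next to the description files,
     * with .desc replaced by .o. */
    strcpy(cmd, "ld -o a.out");
    for (i = 1; i < argc; i++) {
        strcpy(obj, argv[i]);
        if ((dot = strrchr(obj, '.')) != NULL)
            *dot = '\0';
        strcat(cmd, " ");
        strcat(cmd, obj);
        strcat(cmd, ".o");
    }
    return system(cmd);
}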
------------------------------
Date: 8 Mar 88 16:51:20 GMT
From: rfs@cs.duke.edu (Robert F. Smith)
Subject: Programmer Productivity
I am interested in learning about research or tests of programmers'
productivity. My search at the library was fruitless. If you have
pointers/bibliographies/ideas on this, please e-mail to me.
Thanks,
Robert F. Smith              CSNET: rfs@duke
Dept. of Computer Science    ARPA:  rfs@cs.duke.edu
Duke University              UUCP:  {ihnp4!decvax}!duke!rfs
Durham, NC 27706             Phone: (919) 684-5110 Ext. 254
------------------------------
Date: 7 Mar 88 00:04:44 GMT
From: defun.utah.edu!shebs@cs.utah.edu (Stanley T. Shebs)
Subject: Response to "A Cynic's Guide, part 1"
In article <302@buckaroo.SW.MCC.COM> marks@buckaroo.SW.MCC.COM (Peter) writes:
>> The solution? Software engineers have to stand up for what they know is
>> right, [...]
>
>It seems to me that the *solution* proposed here can be paraphrased as
>"people have to will themselves to change." If hoping for a silver bullet
>is futile, how well does wishing for the magic potion called "will power"
>stand up to scrutiny?
Willpower is greatly aided by the changing of old institutions and the
establishment of new ones (people love to be ordered about by "the system",
whatever they might say). An equivalent of Underwriter's Labs for legal
software has been mentioned recently, and it's already gained considerable
power. A Ralph Nader type could do lots to get consumer protection outfits
set up. I'm not real keen on forming another government agency to monitor
computing, but that may be what it takes. Unfortunately, it will probably
happen *after* the first software-caused disaster killing >100 people...
>Or perhaps this is a call to the Lone Ranger himself
>to rid the world of evil-doers?
Heh-heh, there are a few computer companies where I'd like to don my Rambo
outfit and put a few HE rounds through certain people (they know who they
are). Seriously, I do support licensing of software engineers along with
civil and maybe even criminal penalties for misdeeds. Nothing like the
law to instill a little willpower, when you're trying to decide whether or
not to add that bit of type-checking code!
stan shebs
shebs@cs.utah.edu
------------------------------
Date: 9 Mar 88 03:23:44 GMT
From: tada@athena.mit.edu (Michael Zehr)
Subject: Response to "A Cynic's Guide, part 1"
> Hardware vs Software
> Why can hardware engineers do
> their job so well and software engineers talk about the (huge) percentage
> lifecycle cost in the maintenance phase?
> [ lots of other stuff... ]
Well, I read an article recently that gave some facts that could
explain some of the discrepancy. I don't remember the exact numbers,
so I'll paraphrase:
Over the last blah years, the demand for hardware has grown at a
[blah] annual rate. Over the same period of time, the demand for
software has grown at a [5 times as much] annual rate.
In other words, the software people are under more pressure to get
their product out fast. [yeah, i know -- all the hardware people
who've ever had a deadline are going to flame me...]
Another possible reason is that a lot more custom software is produced
than custom hardware.
just a few random thoughts ...
michael j zehr
"My opinions are my own ... as is my spelling."
------------------------------
Date: 9 Mar 88 04:05:36 GMT
From: karl@TUT.CIS.OHIO-STATE.EDU (Karl Kleinpaste)
Subject: Response to "A Cynic's Guide, part 1"
tada@athena.mit.edu writes:
In other words, the software people are under more pressure to get
their product out fast. [yeah, i know -- all the hardware people
who've ever had a deadline are going to flame me...]
This is related to the abysmal management styles that seem to go with
software management. "If it's not perfect, that's OK, we'll fix it in
release 2. Just get us on the market *first*." The investment to
repair faulty hardware already in the field is positively monstrous;
the investment to repair faulty software in the field is (perceived to
be) small.
Example: At a department where I used to work (I won't even tell you
which job, so those of you who might know where I've been the last 10
years can't even guess well), a certain large product was arranged to
be delivered by a certain date. This offense was committed by a group
responsible for setting up contracts but who knew absolutely nothing
of software engineering; this was evidenced by the ridiculous time
schedule they imposed on the developer-grunt staff.
First result: the original schedule was simply impossible to meet. It
was hopeless from the outset. There was no way to succeed. Talk
about lowered morale. Common bitch session over lunch: "How are we
going to deliver by MM/DD/YY?" "We aren't, of course." Be still, my
retching stomach.
Second, there was no way to expand the schedule. The contract which
both corporate parties had signed required the supplied date, period.
Delay was dealt with in the form of charges against my company as
$N/day where N is very, very large.
Third, management within the department was (ahem) suboptimal, in that
the already-excessively-tight schedule was arbitrarily tightened even
more by the department manager for reasons I don't even care to guess.
The initial beta-test delivery date was moved up 2 months. Schedule
compression was required, which cost not only the 2 months of work
before that delivery, but also 2 more days during which the entire
development staff did nothing but tighten schedules. You know the
scenario: OK, everybody, pull out those schedules
that show your work done by MM/DD/YY, and find a way to compress it to
MM-2/DD/YY.
Ick.
Would this be tolerated in a hardware development environment? Not in
any with which I have been associated. As an example, if my group is
working on the next generation of 680x0, and we screw up the design
such that race conditions in the CPU are created, and we don't catch
the flaw until after we've shipped a couple hundred thousand of them,
our collective butt would be in a sling for sure. The cost to the
company to replace the defective units in the field would be
absolutely horrendous. Jobs would be lost for certain.
In software, though, the view is that it can be fixed for the next
release. It's "just software," just park a new tape on the drive and
load in new binaries. See, your bug is gone now. Oh, you found
another bug? Gee, we're sorry, that'll be fixed in release 3...
Bad management is the curse of software development. Think about the
rather neat, high-quality software that is produced by people who are
not concerned with schedules much; my experience has been that such
software is far more bug-free than anything written under pressure.
This is especially true of software written by real professionals
working on something for themselves in their spare time that happens
to be useful to a larger group, so they start distributing it.
You know, come to think of it, my story about compressed schedules
fits *2* of my previous jobs altogether too well...
Karl
------------------------------
Date: 4 Mar 88 21:03:36 GMT
From: uh2@psuvm.bitnet (Lee Sailer)
Subject: Response to "A Cynic's Guide, part 1"
In article <2541@Shasta.STANFORD.EDU>, neff@Shasta.STANFORD.EDU (Randy Neff) says:
>
>Why the big difference? Both hardware and software are working instantiations
>of behavior as described by requirements and specifications. There is the
>obvious difference in the scale of the projects: designing and implementing
>a RISC chip is a lot simpler than designing and implementing a compiler for
>it or (gasp) a new operating system for it. Why can hardware engineers do
>their job so well and software engineers talk about the (huge) percentage
>lifecycle cost in the maintenance phase?
>
A cynical answer: Since hardware engineers usually get to do their
work first, they do it in ways that make it easier for them and harder
for the software engineers who follow.
------------------------------
Date: 8 Mar 88 11:51:16 GMT
From: mcvax!botter!star!jos@uunet.uu.net (Jos Warmer)
Subject: Shared Libraries
>How does this "smart linker" business tie into the "shared libraries"
>in Unix V.3. As I understand it, (1) when I need a module, the whole
>library is loaded, but (2) when another program needs a module from
>the library, it shares the core image that is already in memory.
>
>So, for example, at any moment, there is only one copy of all the stdio
>(that's standard input-output in Unix-speak) stuff in memory at any given
>moment, and all programs that need it share. (This also makes the
>executables smaller and saves disk space and load time.)
>
> Just asking, Lee
Shared libraries do have the advantage of making the
executables smaller. There is, however, also a disadvantage,
which we stumbled across last week.
We have a program installed under VMS at a client site.
The program was tested and worked perfectly well for several months.
Then all of a sudden it started to produce erroneous output,
and we got an alarming telephone call.
What happened?
The day things went wrong, the client had a new version of the VMS
C compiler installed, including of course the (shared) libraries.
The errors in the unchanged, previously working program were caused
by an error in the new libraries.
Moral:
Replacing shared libraries with new versions can make
lots of working programs crash. Because those libraries are
immediately used by almost ALL running programs, one
error in such a library can be fatal for the entire system.
Jos Warmer,
VU Informatica Amsterdam
jos@cs.vu.nl
------------------------------
Date: 9 Mar 88 01:11:57 GMT
From: wesommer@ATHENA.MIT.EDU (William Sommerfeld)
Subject: Shared Libraries
A properly designed shared library system should deal gracefully with
version skew.
Ideally, the installation of a new library should be done such that
existing processes can continue to use the old version of the library,
while new processes which start up see the new version.
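One well-known UNIX idiom achieves exactly this (a sketch only; no
particular vendor's shared-library mechanism is implied, and the paths
are made up for illustration): write the new image under a temporary
name, then rename() it over the old one. The rename atomically swaps
the directory entry, and processes that already have the old file open
or mapped keep the old inode alive; only processes started afterwards
see the new file.

/* install_lib.c -- minimal sketch of atomic library replacement.
 * Assumes the new image has already been written out in full as
 * /lib/libfoo.so.new (a hypothetical path). */
#include <stdio.h>

int main(void)
{
    /* Atomically replace the directory entry.  Running programs keep
     * the old (now unlinked) inode; new programs open the new one. */
    if (rename("/lib/libfoo.so.new", "/lib/libfoo.so") != 0) {
        perror("rename");
        return 1;
    }
    return 0;
}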
Also, there should be a facility to allow just one process to run with
a new version of a shared library, to test compatibility with the old
version, although this is less important in a single-user workstation
environment than in a timesharing environment.
All this implies that the interface to the routines is unchanged, and
that only the _implementation_ of the library is changed in the new
version; changing the interface of a library routine will be painful
unless you've had the foresight to include some sort of version
numbering system.
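A minimal sketch of such a version-numbering scheme, with every name
in it (foo_iface_version, FOO_IFACE_COMPILED) invented for
illustration: the library exports its interface version as data, the
header records the version the client was compiled against, and the
client refuses to run on a mismatch. (The pieces are shown in one
file so the sketch compiles as given.)

#include <stdio.h>
#include <stdlib.h>

/* From the library's header (foo.h): the interface version this
 * header describes, plus the version variable the library exports. */
#define FOO_IFACE_COMPILED 2
extern int foo_iface_version;

/* From the shared library itself: bumped whenever any routine's
 * interface changes. */
int foo_iface_version = 2;

int main(void)
{
    /* Client-side check, before any other use of the library. */
    if (foo_iface_version != FOO_IFACE_COMPILED) {
        fprintf(stderr, "libfoo is interface version %d; built for %d\n",
                foo_iface_version, FOO_IFACE_COMPILED);
        exit(1);
    }
    /* ... now safe to call the library's routines ... */
    return 0;
}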
- Bill
------------------------------
Date: Tue, 08 Mar 88 17:37:37 -0500
From: goddard@cs.rochester.edu
Subject: Rochester Connectionist Simulator update
The Rochester Connectionist Simulator is available from:
Rose Peet
Department of Computer Science
University of Rochester
Rochester, NY 14627.
rose@cs.rochester.edu
...!seismo!rochester!rose
There is a licence to sign, and a distribution fee. Currently
distribution is via tape only; anonymous ftp may become available at
some indeterminate point in the future. The package is written in C,
runs under UNIX, and has a graphics package which runs under Suntools.
It is currently in use at several dozen sites and is described in the
February issue of the CACM. The simulation system is highly general
and flexible, placing no restrictions on network architecture, unit
activation functions and data, or learning algorithms.
A new version, 4.1, will be released shortly. Version 4.1 includes
facilities to selectively delete links and sites, with garbage
collection; capability for integration with Kyoto Common Lisp and
Scheme, allowing the simulator to be controlled from those packages;
dynamic reloading of activation and other functions into a running
simulator, with access to global variables from the interface; and the
ability to associate a delay with each link.
An X-windows graphics package is under development.
A mailing list for simulator users will be started shortly.
For more information, licence, distribution details, contact Rose Peet
at the address above.
Nigel Goddard
------------------------------
Date: Wed, 9 Mar 88 10:26:14 EST
From: 21423js@faline.bellcore.com (Jawad Salehi)
Subject: Call for Papers: Neural Info Processing Systems
CALL FOR PAPERS
I E E E Conference on
Neural Information Processing Systems
-- Natural and Synthetic --
Monday, November 28 -- Thursday, December 1, 1988
Denver, Colorado
This will be the second meeting in a series. The November
1987 meeting, at the same location, brought together neurobiologists,
cognitive psychologists, computer scientists, engineers and physicists.
Several days of focussed workshops at a nearby ski resort followed.
A similar mountain retreat is planned for this year. At the NIPS
meeting, the topics we expect to cover include the following:
Neurobiological models of development, cellular information processing,
synaptic function, learning and memory.
Connectionist models of learning and cognitive processing; training
paradigms; analysis of applicability, generalization, and complexity.
Applications to signal processing, vision, speech, motor control,
knowledge engineering and adaptive systems.
Practical issues in the simulation of neural networks.
Advances in hardware technologies -- neurophysiological recording tools,
VLSI or optical implementations of neural networks.
Technical program: Plenary, contributed, and poster sessions will be
held. There will be no parallel sessions. The full text of presented
papers will be published.
Contributed papers:
Original research contributions are solicited, and will be rigorously
refereed. Authors should submit six copies of a 500-word (or less)
summary and one copy of a 50-100 word abstract clearly stating their
results to the program committee chairman, Scott Kirkpatrick,
IBM T. J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY
10598.
Deadline for papers is May 14, 1988.
---------------------------------------------------------------------------
I E E E Conference on
Neural Information Processing Systems
-- Natural and Synthetic --
Conference: November 28 - December 1, 1988 (Mon-Thurs)
Sheraton Denver Tech Center, Denver, Colorado
Workshops: December 1 - December 4, 1988 (Thurs-Sun)
Summit County, Colorado
Organizing committee:
Terrence Sejnowski General Chairman
Scott Kirkpatrick Program Chairman
Clifford Lau Treasurer
Jawad Salehi Publicity Chairman
Kristina Johnson Local Arrangements
Howard Wachtel Workshop Coordinator
David Touretzky Publications Chairman
Edward C. Posner IEEE Liaison
Larry Jackel Physics Liaison
James Bower Neurobiology Liaison
( ) Please send me registration material.
( ) I intend to submit an abstract.
( ) Please send me information about the after-conference mountain retreat.
I suggest the following workshop topics: _________________________
Name __________________________________
Institution ___________________________
Address ____________________________
____________________________
____________________________
------------------------------
Date: Sun, 6 Mar 88 20:09:36 CST
From: smu!leff@uunet.UU.NET (Laurence Leff)
Subject: HICSS-22 Rapid Prototyping Session
CALL FOR PAPERS AND REFEREES
HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 22
Rapid Prototyping Session
KAILUA-KONA, HAWAII - JANUARY 3-6, 1989
The Software Track of HICSS-22 will contain a special set of
papers focusing on a broad selection of topics in the area
of Rapid Prototyping. The presentations will provide a forum
to discuss new advances in theory and applications in the use
of Rapid Prototyping as a means for early execution of
software capabilities. Prototyping can be used in different
phases of software development for different purposes: to
refine requirements, for functional and performance
evaluation, and for testing. Although there is continuing
research in developing Very High Level Language based
software design environments, prototyping can be performed
with varying degrees of efficiency and effectiveness in any
development environment.
Papers are invited that may be theoretical, conceptual,
tutorial, or descriptive in nature. Those papers selected
for presentation will appear in the Conference Proceedings,
which are published by the Computer Society of the IEEE.
HICSS-22 is sponsored by the University of Hawaii in
cooperation with the ACM, the Computer Society, and the
Pacific Research Institute for Information Systems and
Management (PRIISM). Submissions are solicited in:
Prototyping experience with ART, KEE, PC+, etc.
Prototyping experience with C++, Smalltalk, Flavors,
Objective-C, Ada, etc.
Prototyping experience with CASE tools
The role of Knowledge based systems in prototyping
The role of simulation in Prototyping
Prototyping experience in other environments/languages
INSTRUCTIONS FOR SUBMITTING PAPERS
Manuscripts should be 22-26 typewritten, double-spaced pages
in length. Do not send submissions that are significantly
shorter or longer than this. Papers must not have been
previously presented or published, nor be currently submitted
for journal publication. Each manuscript will be put
through a rigorous refereeing process. Each manuscript
should have a title page that includes the title of the
paper, the full name of its author(s), affiliation(s), complete
physical and electronic address(es), telephone number(s),
and a 300-word abstract of the paper.
[ Note that the following are the correct deadlines. Incorrect
deadlines have been posted in some places. -- MSD ]
DEADLINES
A 300-word abstract is due by March 30, 1988
Feedback to author concerning abstract by April 30, 1988
Six copies of the manuscript are due by June 6, 1988
Notification of accepted papers by September 1, 1988
Accepted manuscripts, camera-ready, are due by October 3, 1988
SEND SUBMISSIONS AND QUESTIONS TO EITHER
Murat M. Tanik                            Raymond T. Yeh
Southern Methodist University             ISSI International
Computer Science and Engineering Dept.    9420 Research Blvd., Suite 2000
Dallas, Texas 75275                       Austin, TX 78759
(214) 692-2854                            (512) 338-1895
E-Mail UUNET: Tanik@SMU
------------------------------
End of Soft-Eng Digest
******************************
-------