[comp.software-eng] Soft-Eng Digest V4 #12

MDAY@XX.LCS.MIT.EDU (Moderator, Mark S. Day) (02/25/88)

Soft-Eng Digest             Wed, 24 Feb 88       Volume 4 : Issue  12

Today's Topics:
             Functional Approaches to Concurrent Systems
               Software Reuse - Design Support Systems
                     Software Handbooks (3 msgs)
            Question on Configuration Management (7 msgs)

----------------------------------------------------------------------

Date: 17 Feb 88 02:58:11 GMT
From: kddlab!icot32!nttlab!gama!kato@uunet.uu.net  (Kazuhiko Kato)
Subject: Functional Approaches to Concurrent Systems

I'm very interested in building concurrent/parallel systems, such as
operating systems, in the framework of functional programming.
Fortunately I found an article related to this subject.

In article <11071@shemp.UCLA.EDU> dgreen@CS.UCLA.EDU (Dan R. Greening) writes:
>Data flow and functional approaches to parallelism produce programs
>that are much easier to discuss in a theoretic framework.  I suggest
>you look at
>
>  J. Backus, Can Programming be Liberated from the von Neumann Style?
>  A Functional Style and Its Algebra of Programs, Communications of the
>  ACM, 21(8):613-641 (August 1978).

As there may be other people interested in this field, I decided to
post this rather than reply by mail.

Recent nonprocedural approaches to concurrent systems are based on
mathematically flavoured concepts such as logic, functional, or
message-based (e.g., Actor) programming. Of the three, it seems that
the logic and message-based approaches have been studied more than the
functional one in the field of concurrent systems.

Why have functional approaches been studied less?

I know of some work that has already been done:

(1) Peter Henderson, "Purely Functional Operating Systems," Functional
Programming and its applications edited by J. Darlington, P. Henderson
and D. A. Turner, Cambridge University Press, 1982.

(2) W. Stoye, "A New Scheme for Writing Functional Operating Systems,"
Univ. Cambridge Computer Laboratory Technical Report No. 56, 1984.

(3) D. Turner, "Functional Programming and Communicating Processes,"
in Proc. PARLE Conf., Jun. 1987.

If you know of other research or papers related to functional
approaches, please let me know.

>The Scientific Citation Index will provide a list of successor
>articles.

What is the Scientific Citation Index, and how can I get access to it?

				Kazuhiko KATO
				University of Tsukuba

				E-mail Address:
				    kato%is.tsukuba.junet@relay.cs.net
				Postal Address:
  				    Masuda Laboratory
				    Institute of Information Sciences
				    and Electronics, University of Tsukuba
				    Tsukuba, Ibaraki 305
				    JAPAN

   [The Science Citation Index is an index published monthly (?)
    by the Institute for Scientific Information (??) -- if either
    of these facts is incorrect, please let me know.  It allows you
    to look up, for example, "Backus, J.", then look under his name
    for "Can programming be liberated...", and find pointers to
    recent publications that have cited that particular paper.  At
    least that's what I remember from the last time I used it.  It 
    doesn't distinguish between people except by name, so if there
    happens to be a J. Backus who's (say) a medical researcher, then
    right under those citations you might find the citations of
    "Disorders of the spleen reconsidered" or similar.  Since
    the SCI is both large and expensive, not every library is likely
    to have a copy.  -- MSD ]

------------------------------

Date: 10 Feb 88 12:31:17 GMT
From: mcvax!ukc!dcl-cs!neil@uunet.uu.net  (Neil Haddley)
Subject: Software Reuse - Design Support Systems

        One of the aims of the research project we are currently
working on is to create a software development environment with
better support for software reuse.  Current thinking suggests
that software reuse is most effective if it takes place as part
of the design process, so we are currently working on developing
prototype design support systems.  For this we are using a
Smalltalk-80 system (ParcPlace VI2.2 VM1.1 under Unix on the Sun
3/50).  [See comp.lang.smalltalk]

        We have, however, not been able to find many papers which
discuss supporting the design process, rather than supporting
design document production.

        At present we are attempting to contact the authors of
these papers, and would be grateful to hear from anyone else
working in this research area.

        Thanks,

Neil


EMAIL:	neil@comp.lancs.ac.uk		| Post: University of Lancaster,
UUCP:	...!mcvax!ukc!dcl-cs!neil	|	Department of Computing,
					|	Bailrigg, Lancaster, UK.

------------------------------

Date: 13 Feb 88 20:42:16 GMT
From: clyde!watmath!utgpu!utzoo!mnetor!spectrix!yunexus!geac!daveb@rutgers.edu  (David Collier-Brown)
Subject: Software Handbooks

In article <32637UH2@PSUVM> UH2@PSUVM.BITNET (Lee Sailer) writes:
>I have always wondered why there are so few software "handbooks" to go
>along with all the engineering handbooks.  There are lots and lots
>of little "standard" components in any big program, for example:
>     
>o sort list in memory with good average time performance
>     
  Well, I've seen about two:
Rogers, David F., "Procedural Elements for Computer Graphics", New
York (McGraw-Hill) 1985.

... and I lent my Hopity-and-somebody data structures book out, so I
can't give a citation for it.

  Rare, but not unknown.

-- 
 David Collier-Brown.                 {mnetor yunexus utgpu}!geac!daveb
 Geac Computers International Inc.,   
 350 Steelcase Road,Markham, Ontario, 
 CANADA, L3R 1B3 (416) 475-0525 x3279 

------------------------------

Date: 16 Feb 88 19:57:27 GMT
From: uh2@psuvm.bitnet  (Lee Sailer)
Subject: Software Handbooks

I said a while ago that programmers do not seem to use ``handbooks'' as
much as hardware engineers, and this distinguished programming from
engineering.  I received the interesting post attached below, [next
message] suggesting that Gonnet's Handbook of Algorithms was such a book.

It seems to me that Knuth's books could qualify, except that they are
tough going for the average programmer (I certainly find them difficult).
I sometimes find undergrad texts like Sedgewick's Algorithms useful,
and years ago I used to steal a lot from Newman and Sproull's
graphics book.

So--what books do you keep next to your terminal for ready reference
when you need to write a code fragment that you know has been done
a thousand times before?


Lee

------------------------------

Date: Fri, 12 Feb 88 03:59:35 EST
From: stuart@cs.rochester.edu
Subject: Software Handbooks

Such things exist, for example:

  Gaston H. Gonnet
  Handbook of Algorithms and Data Structures
  Addison-Wesley -- International Computer Science Series
  Reading, Massachusetts, 1984
  ISBN 0-201-14218-X

This book covers sequential search, sorted array (binary) search,
hashing, recursive structure (tree) search, multidimensional search,
sorting arrays, sorting other data structures, merging, external
sorting, priority queues, k-th element selection, basic arithmetic,
special arithmetic, matrix multiplication, and polynomial evaluation.
Many of these topics have lots of alternative approaches, with
different advantages and disadvantages.  For example, Gonnet gives
seven different methods for maintaining priority queues.
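For a flavour of what a handbook entry provides, here is a minimal
sketch of one of the classic priority queue methods, an array-based
binary max-heap.  This is my own illustration, not Gonnet's code; the
names and the fixed capacity are invented for brevity.

```c
#include <assert.h>
#include <stddef.h>

#define PQ_MAX 64  /* fixed capacity, purely to keep the sketch short */

typedef struct {
    int item[PQ_MAX];
    size_t n;
} PQueue;

static void pq_init(PQueue *q) { q->n = 0; }

/* Insert: place the item at the end, then sift it up toward the
   root while it is larger than its parent.  O(log n). */
static void pq_insert(PQueue *q, int x)
{
    assert(q->n < PQ_MAX);
    size_t i = q->n++;
    q->item[i] = x;
    while (i > 0 && q->item[(i - 1) / 2] < q->item[i]) {
        int t = q->item[i];
        q->item[i] = q->item[(i - 1) / 2];
        q->item[(i - 1) / 2] = t;
        i = (i - 1) / 2;
    }
}

/* Extract: take the root, move the last item to the root, then
   sift it down toward the larger child.  O(log n). */
static int pq_extract_max(PQueue *q)
{
    assert(q->n > 0);
    int top = q->item[0];
    q->item[0] = q->item[--q->n];
    size_t i = 0;
    for (;;) {
        size_t l = 2 * i + 1, r = l + 1, m = i;
        if (l < q->n && q->item[l] > q->item[m]) m = l;
        if (r < q->n && q->item[r] > q->item[m]) m = r;
        if (m == i) break;
        int t = q->item[i]; q->item[i] = q->item[m]; q->item[m] = t;
        i = m;
    }
    return top;
}
```

Repeated extraction yields the items in decreasing order, which is
also the core of heapsort; a handbook entry would accompany this with
the average and worst-case analyses and the alternatives.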

In addition to formal analyses and empirical studies to satisfy the
real engineer as to the appropriateness of a given solution to a given
problem, the various algorithms are given in Pascal and/or C for the
programmer who just wants to hack up a solution and doesn't care.

So, you just have to look around.  Of course, you have to understand
that an engineer (software or otherwise) is expected to know when to
apply various tools and that may involve some substantial analysis of
the problem under attack.  That skill comes from training;  it's not
something you can just pull out of a handbook.

Stu Friedberg  {ames,cmcl2,rutgers}!rochester!stuart  stuart@cs.rochester.edu

------------------------------

Date: 14 Feb 88 04:48:18 GMT
From: cca!g-rh@husc6.harvard.edu  (Richard Harter)
Subject: Question on Configuration Management

In article <497@aimt.UUCP> breck@aimt.UUCP (Robert Breckinridge Beatie) writes:
>
>Basically the argument is over one question: "Is it acceptable to have
>more than one 'call interface' per source file or not?"  For our purposes
>a 'call interface' is defined as a non-static function in a C source file.
>My boss' position is that having more than one 'call interface' per source
>file causes so many "configuration management" problems that it outweighs any
>benefits.
>
>So how about it?  What horrible CM problems does having more than one
>'call interface' per source file cause?  Has anyone had any such problems?
>I'd appreciate any horror stories, anecdotes, or opinion.

	There are a number of issues involved here.  The following may be
what your boss has in mind.  If every file has exactly one entry point
and that entry point has the same name as the file (with the extension
stripped off) then life becomes simpler in a number of respects.  When you
see function foo in some source code you know that it can be found in file
foo.c [more or less -- it may be a macro or in a system library or ...].
Similarly, if the linker comes up with "foo not found", you know that
you need to include foo.o to the list of files loaded.  [Again, foo
may be a data global -- most linkers are reticent about such details.]

	The classical way to describe a hardware configuration is to use
an indented tree listing, e.g.

	System
	  Sub system 1.
	    Component 1.1
	    Component 1.2
	  Sub system 2.
	    Component 2.1
	    ...

When we talk about software, the question is: are the components
procedures or files?  If we are talking about program structure, the
answer is "procedures".  However, the components managed are usually
files.  If we follow the one procedure, one file rule, there is no
problem.  Suppose, however, that we have packaged several procedures
into one file, e.g. components 1.1 and 2.1 are contained in the same
file.  Then the single file is shared by several subsystems.  This
makes for complications.

	Actually, of course, the problem is there even if we follow the
one file, one procedure rule.  To see why, let me take an example from
hardware.  The B-999 bomber has a hydraulic subsystem and a wing attitude
control subsystem.  The wing attitude control subsystem uses a hydraulic
lifter.  This component is functionally part of two different subsystems.
A good CM system (hardware or software) must be able to deal with 
multiple interlocking subsystems.

	Version control is an important part of CM.  Since terminology
is often confused in this area, let me distinguish between file version
control, and configuration version control.  Tools such as SCCS and RCS
are file version control tools.  If you build a CM version control system
on top of file version control tools you run into problems if the elements
of your configuration are procedures.  These problems can be addressed,
but things can get sticky if they aren't.

	As to best practice, my rule is that the one file/one procedure
rule should be followed _except_ when you have a group of procedures which
forms an atomic package.  In that case, the file should bear the name of
one of the procedures in the package.  There are a number of principles
that one should follow in setting up such packages, but that is another
issue.

	I hope this is some help.

	Richard Harter, SMDS  Inc.

------------------------------

Date: 14 Feb 88 02:58:06 GMT
From: ihnp4!ihlpe!daryl@ucbvax.Berkeley.EDU  (Daryl Monge)
Subject: Question on Configuration Management

In article <497@aimt.UUCP>, breck@aimt.UUCP (Robert Breckinridge Beatie) writes:
> Now I've never heard this recommendation before.  Nor have I ever seen
> (non-trivial) software that only had one 'call interface' per source file.

I haven't either.

> So how about it?  What horrible CM problems does having more than one
> 'call interface' per source file cause?

None, unless of course the modules are unrelated.  For example, wouldn't
you want the insert, delete, and search functions for a hash table
implementation in a single file "hash.c" for readability and maintenance?
Did you ask what those problems were, or would that be dangerous to your
career?
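To make the hash table example concrete, here is a minimal sketch of
such a "hash.c": insert, search, and delete live in one file and share
one file-private table.  All names are hypothetical, and error
handling is pared down to keep the sketch short.

```c
/* hash.c -- illustrative only; insert, search, and delete share
   one file so they can share the static (file-private) table. */
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 101

struct entry { char *key; int val; struct entry *next; };
static struct entry *bucket[NBUCKETS];   /* invisible outside hash.c */

static unsigned hash(const char *k)      /* internal support routine */
{
    unsigned h = 0;
    while (*k) h = h * 31 + (unsigned char)*k++;
    return h % NBUCKETS;
}

void hash_insert(const char *k, int v)   /* add or overwrite */
{
    unsigned h = hash(k);
    struct entry *e;
    for (e = bucket[h]; e; e = e->next)
        if (strcmp(e->key, k) == 0) { e->val = v; return; }
    e = malloc(sizeof *e);               /* allocation checks omitted */
    e->key = malloc(strlen(k) + 1);
    strcpy(e->key, k);
    e->val = v;
    e->next = bucket[h];
    bucket[h] = e;
}

int hash_search(const char *k, int *v)   /* 1 if found, 0 if not */
{
    struct entry *e;
    for (e = bucket[hash(k)]; e; e = e->next)
        if (strcmp(e->key, k) == 0) { *v = e->val; return 1; }
    return 0;
}

void hash_delete(const char *k)          /* unlink and free */
{
    struct entry **p = &bucket[hash(k)];
    while (*p) {
        if (strcmp((*p)->key, k) == 0) {
            struct entry *dead = *p;
            *p = dead->next;
            free(dead->key);
            free(dead);
            return;
        }
        p = &(*p)->next;
    }
}
```

Splitting these three functions into three files would force the
bucket array out of static scope, which is exactly the cohesion
argument for grouping them.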

We strive for a set of product goals that consist of a set of required
standards and recommended guidelines.  Each person or project should
establish these to their own satisfaction.  But avoid nitpicking.

In this particular case, we have a requirement on maximum lines per
module, but only a guideline on functions per file.  The above
manager's requirement does seem restrictive to me.

Daryl Monge				UUCP:	...!ihnp4!ihcae!daryl
AT&T					CIS:	72717,65
Bell Labs, Naperville, Ill		AT&T	312-979-3603

------------------------------

Date: 18 Feb 88 00:15:31 GMT
From: hao!dinl!hull@AMES.ARC.NASA.GOV  (Jeff Hull)
Subject: Question on Configuration Management

You do run into some problems when you need to update one procedure
in a multi-procedure file.  Tracking exactly what changed and why
is one such problem.  (This one is typically handled via comments
embedded in the source code; not an elegant solution, in my opinion.)

When you are controlling documentation, the multiple "procedures" in
one file approach drives you to send out updates that include many
more change pages than would otherwise be necessary, for example.

There are other problems that require mucho context definition to 
explain.

Why would you (ever) want to put more than one "procedure" in a file
anyway?  Now that we have <make> and similar utilities, it is very
simple to (re-)compile programs contained in many files, so why not
put one procedure in one file?


Jeff Hull		...!hao!dinl!hull
1544 S. Vaughn Circle	303-750-3538	
Aurora, CO 80012			

------------------------------

Date: 17 Feb 88 00:15:36 GMT
From: ubc-vision!alberta!calgary!jameson@beaver.cs.washington.edu  (Kevin Jameson)
Subject: Question on Configuration Management

The basic problem with keeping several procedures in one physical file
is that it becomes more difficult (both conceptually and physically) to
manipulate individual procedures.  If you know that you never have
to treat a particular procedure as an individual unit, then placing
a group in one file makes more sense (e.g., as with Hash_get, Hash_put,
etc.).  Sometimes the grouping will be forced if all related procedures
require statically allocated (i.e., private) declarations.

For example, if you have one procedure per file, you can replace individual
procedures easily if need be, and are not generally penalized when you
must manipulate related groups.  The functions in the calling tree are
also more accessible (at the operating system file level vs. having to
manually search each physical file).  Translation to other languages
is made easier when you don't have to deal with many functions per 
file.  Software tools are much easier to construct for the one-proc-per-file
model too.  File I/O cost in the editor is considerably lower because
you only have to deal with small files.

We liked the one-proc/file method so much that one person in our group
wrote a tool to combine many individual procedures into one larger
file for compilation and linking purposes (the linker could only handle
150 physical object files).  Each procedure is thus maintained in its
own file, and then it is combined with related procedures to get by
the linker restriction.
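A combining tool of that kind might be sketched as below; this is
entirely hypothetical, since the group's actual tool is not shown.  It
simply concatenates the one-procedure source files into a single
translation unit, with a banner comment marking where each fragment
came from.

```c
/* combine.c -- hypothetical sketch of a one-proc-per-file combiner:
   concatenate many small source files into one for the linker's sake. */
#include <stdio.h>

/* Copy each named source file to 'out', preceded by a banner comment
   so a reader can map regions of the combined file back to their
   original one-procedure files.  Returns 0 on success, -1 on error. */
int combine(FILE *out, int nfiles, char **paths)
{
    for (int i = 0; i < nfiles; i++) {
        FILE *f = fopen(paths[i], "r");
        if (!f)
            return -1;
        fprintf(out, "/* ---- %s ---- */\n", paths[i]);
        int c;
        while ((c = getc(f)) != EOF)
            putc(c, out);
        fclose(f);
    }
    return 0;
}
```

A driver would invoke this per subsystem, so each procedure stays
maintained in its own file while the linker sees far fewer objects.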

On the other hand, the one-proc/file method has some disadvantages.  The
physical file namespace can get crowded when several hundred procedures
are in the program.  The combination step takes a bit more time.  Global
name changes to related procedures take more effort.  

In our view (and experience) the benefits of the one-proc/file method
far outweigh any extra hassle, because you can manipulate procedures
at the individual-procedure level.

Kevin {ihnp4,ubc-vision}!alberta!calgary!vaxb!jameson

------------------------------

Date: 18 Feb 88 07:36:41 GMT
From: ptsfa!well!pokey@AMES.ARC.NASA.GOV  (Jef Poskanzer)
Subject: Question on Configuration Management

One problem peculiar to Unix is caused by the brain-damaged library
system.  Let's say you have a .c file with 20 marginally related
routines in it.  You make it into a library, then write a program
that uses only one of those routines.  When you link it, guess what
happens?  All 20 routines get pulled into your executable.

Until someone fixes this, perhaps with a utility to dissect an object
file into N separate object modules, it makes sense to put as few
routines as is reasonable into each source file.

Jef

              Jef Poskanzer   jef@lbl-rtsg.arpa   ...well!pokey

------------------------------

Date: 19 Feb 88 00:16:25 GMT
From: rochester!ritcv!mjl@bbn.com  (Mike Lutz)
Subject: Question on Configuration Management

In article <188@dinl.mmc.UUCP> hull@dinl.UUCP (Jeff Hull) writes:
>Why would you (ever) want to put more than one "procedure" in a file
>anyway?  Now that we have <make> and similar utilities, it is very
>simple to (re-)compile programs contained in many files, so why not
>put one procedure in one file?

The issue is one of packaging based on abstract interfaces.  If I write
a symbol table handler, with Initialize, Lookup, Enter, Update, and
Delete operations, I want the implementation details to be hidden, especially
the details of the data organization I choose.  The natural package in C
is one where the implementation "secrets" are kept as static global
data structures (and internal support routines are static as well, to
avoid name clashes with the client).  To do this, of course, the visible
operations must be in the same file at compile time -- and I argue
that they form a unified abstraction that *should* be in one file.  If
you change the internal data structure details (or create alternate
versions), you'll find your configuration management problems
*decrease* with such packaging.
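That packaging can be sketched as follows.  The data organization and
the internal support routine are static (file-private); only the
operations of the abstract interface are visible.  The names and the
linear-table organization are illustrative only, standing in for
whatever the implementation "secrets" actually are.

```c
/* symtab.c -- illustrative sketch of an abstract-interface package. */
#include <string.h>

#define MAXSYMS 128

/* implementation "secrets": invisible outside this file */
static char names[MAXSYMS][32];
static int  values[MAXSYMS];
static int  nsyms;

static int find(const char *name)   /* internal support routine */
{
    for (int i = 0; i < nsyms; i++)
        if (strcmp(names[i], name) == 0)
            return i;
    return -1;
}

/* the visible operations: the only external entry points */
void st_initialize(void) { nsyms = 0; }

int st_enter(const char *name, int value)
{
    if (find(name) >= 0 || nsyms >= MAXSYMS)
        return 0;
    strncpy(names[nsyms], name, sizeof names[nsyms] - 1);
    values[nsyms++] = value;
    return 1;
}

int st_lookup(const char *name, int *value)
{
    int i = find(name);
    if (i < 0)
        return 0;
    *value = values[i];
    return 1;
}

int st_update(const char *name, int value)
{
    int i = find(name);
    if (i < 0)
        return 0;
    values[i] = value;
    return 1;
}
```

A client sees only the visible operations; replacing the linear table
with hashing would change this one file and no client code, which is
the configuration benefit claimed above.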

And, for the person who complained about the brain damaged Unix linker and
librarian: most linkers I've encountered have the same restriction.
The library contains object modules, possibly with multiple entry points,
and if you reference one you get them all.  It's damn difficult to pull
apart an object module and decide which bytes you need and which ones
are extraneous.  The Unix linker/librarian may not set the world on fire
with its snazzy features, but it most certainly is state of the practice.

Mike Lutz	Rochester Institute of Technology, Rochester NY
UUCP:		{allegra,seismo}!rochester!ritcv!mjl
CSNET:		mjl%rit@csnet-relay.ARPA

------------------------------

Date: 21 Feb 88 01:27:29 GMT
From: well!pokey@hplabs.hp.com  (System Operator)
Subject: Question on Configuration Management

}And, for the person who complained about the brain damaged Unix linker and
}librarian: most linkers I've encountered have the same restriction.

Then I guess you haven't encountered that many linkers.  The VAX/VMS linker
does this right.  The RSX-11 linker does this right.  The TOPS-20 linker
does this right.  Even the god damned PDP-8 linker running FORTRAN-II did
this right.

}                 The Unix linker/librarian may not set the world on fire
}with its snazzy features, but it most certainly is state of the practice.

It most certainly is not.  It was already obsolete when it was written,
and it hasn't gotten any better in the intervening decade.

Jef

              Jef Poskanzer   jef@lbl-rtsg.arpa   ...well!pokey

------------------------------

End of Soft-Eng Digest
******************************