[comp.sw.components] Linear Scalable S/W Components

stein@oscsuna.osc.edu (Rick 'Transputer' Stein) (05/10/89)

In article <698@oliver.analogy.UUCP> cmr@oliver.UUCP (Chesley Reyburn) writes:
>In article <TED.89May8134732@kythera.nmsu.edu> ted@nmsu.edu (Ted Dunning) writes:
>>In article <5421@hubcap.clemson.edu> billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) writes:
>
>Can we describe
>a methodology that does not begin with a particular language?
>
Well, I don't know the answer to this question, but I have a few
comments on language-independent software engineering methodologies.

1) In a multicomputer environment, one needs s/w components which
are scalable.  Scalable in the sense that a particular component
can be replicated (linearly, concomitant with the # of processors
in the system) and executed in a massively parallel
environment.  In the case of numerical computations, there are
plenty of implementations for all sorts of algorithms.  I don't know
about you, but I don't particularly like to code, so I'd just use
a pre-existing package (like IMSL) to solve my problem.  Now,
there is a library for the iPSC-II Hypercube which solves your
standard numerical problems.  Is it architecture independent,
meaning that I can take the source code, replace the Intel OS
calls with calls from another OS for message passing, etc., and
run the software straight away?
Is it linearly scalable?  Do I get n times single-CPU performance
when I run it on n processors (save for communication-bound problems)?
I don't know.

2) The trick is not only to wrap the domain expertise into the component,
but to make that component scalable.  Clearly, without any knowledge
of linearly scalable s/w components, the entire multicomputer technology
might just as well not exist.  This is a problem.  One cannot continue
to create toxic-waste-dump software programs and expect the
engineering effort to be reusable without applying knowledge of the target
architecture.

3) All I can say about methodologies for multicomputer systems is that
one must have s/w engineering discipline, believe in structured design
principles, and attempt to organize scalable software by assessing
requirements.  Whether or not this process can be built into a domain
engineering system is another matter.  Manually, at least, it works,
because this is exactly what I do, in both C and Fortran, and
eventually in C++.
--rick
-- 
Richard M. Stein (aka Rick 'Transputer' Stein)
Office of Research Computing @ The Ohio Supercomputer Center
Ghettoblaster Vacuum Cleaner Architect
Internet: stein@pixelpump.osc.edu

billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe,2847,) (05/11/89)

From article <166@oscsuna.osc.edu>, by stein@oscsuna.osc.edu (Rick 'Transputer' Stein):
> 1) In a multicomputer environment, one needs s/w components which
> are scalable.  Scalable in the sense that a particular component
> can be replicated (in a linear fashion concommitant with the #
> of processors in the system), and executed in a massively parallel
> environment.  [...] Do I get n-times single cpu performance
> when I run it on n processors (save for communication bound problems)?
> [...] One can not continue to create toxic waste dump software programs 
> and expect to have the engineering effort be reuseable without applying 
> knowledge of the target architecture.

    Absolutely.  The way I accomplish this is to write my components
    using Ada multitasking; the components can then be compiled on
    any desired system.  The Ada compiler for that system has knowledge
    of the target architecture, and generates code which maps the abstract
    model (independent processes communicating via the rendezvous mechanism)
    onto the number of physical processors actually available.

    Another approach to this problem which will be used in the future is
    the connection of large numbers of computer systems into a network
    such as the NECTAR system being implemented by Dr. H.T. Kung at
    Carnegie-Mellon; on such a system, processes can move to the system
    most appropriate for their execution.


    Bill Wolfe, wtwolfe@hubcap.clemson.edu