[comp.sys.transputer] BIRA Conference: Software for Vector and Parallel Computers, 8 & 9 May

PVR%autoctrl.rug.ac.be@NSFNET-RELAY.AC.UK ("Patrick Van Renterghem / Belg. Reg. Trans. Supp. Center") (04/11/90)

Here is the full announcement of the BIRA Conference on
 
               Software for Vector and Parallel Computers,
             May 8 & 9, 1990, Switel Hotel, Antwerp, Belgium
 
Parallel processing is an efficient form of information processing which
emphasizes the exploitation of concurrent or repetitive events in a computing
process, in contrast to methods in which they are handled sequentially. The
promise of parallel computers is to deliver high performance at an affordable
cost.
 
Early multiprocessor work started in the 1970's, but was not very successful.
Multiprocessor systems fell out of favor, but have staged a very strong comeback
during the last 5 years. The advance in technology has made it possible to
build relatively inexpensive systems with thousands and tens of thousands
of processors (e.g. the Connection Machine or the AMT DAP). These systems
are used commercially and offer cost-effective alternatives to vector computers.
 
 
Historically, the first supercomputers were vector computers, which
supported fast execution of vector instructions operating on all components
of the vector operands simultaneously. Vector processing may therefore be
considered a special, fine-grained form of parallel processing.
 
MIMD computers such as the Encore Multimax, Sequent Balance, DEC 8800, BBN
Butterfly, IBM RP3 and the NCUBE and Intel hypercubes are more coarse-grained.
Shared memory systems have a smaller programming overhead, but distributed
memory machines offer the greater potential performance. Parallelism does not come for
free with these systems and the recent market introduction of a large number
of different multiprocessor architectures has brought about an equally large
proliferation of software tools, operating systems and parallel programming
environments.
 
This conference also looks at new parallel programming languages (Linda and
Strand) and at new parallel programming environments for MIMD systems (CSTools
and Express).
 
The bottom line is that parallel architectures will be more widespread in
the near future and parallel processing will become a mainstream activity.
Therefore, it is extremely important for project coordinators, software
managers, information processing analysts and programmers to know about this
evolution.
 
Who should attend this conference and its exhibition?
 
This conference is aimed at engineering managers, engineers involved in
research, design, development and decision making for computer systems,
software programmers and developers, and anyone who wants to keep
abreast of current technology.
 
Why should you attend this conference?
 
We have gathered a number of well-known specialists in the field of
parallel processing software at this two-day international conference. The
tutorial is given by Prof. Mike Delves of the University of Liverpool (UK),
who will give an overview and define the concepts of parallel processing,
which will be discussed in greater detail in the remainder of the conference.
He will address the main issues of this conference: who needs parallelism,
what kinds of parallel architecture exist, what the physical limitations are,
what productivity can be expected, and which software is needed for SIMD and
MIMD machines.
 
Another well-known expert in this field is dr. Jack Dongarra of the University
of Tennessee and Oak Ridge National Laboratory (USA), who was involved in the
design and implementation of the EISPACK and LINPACK packages and is currently
working on the design of algorithms and techniques for high-performance
computer systems.
 
This conference is an excellent opportunity to explore the rapidly evolving
world of supercomputing. The combination of both conference and exhibition
is a chance to hear experts talk about certain new and future systems and to
get detailed information about parallel processing systems in the exhibition
room during extended coffee breaks. This combination saves you time and money.
 
You can also visit the exhibition separately on Tuesday 8 May, 1990 from
16.00 h to 19.00 h at a minimal "exhibition-only" fee. This gives you the
chance to compare and evaluate a number of parallel and vector computers, even
if you are unable to attend the conference lectures.
 
A number of exhibitors will also show how to add vector and parallel
capabilities to your PC, Macintosh or workstation.
 
Prof. dr. ir. Luc Boullart                       ir. Patrick Van Renterghem
Chairman BIRA-Working Party DTCS                 Scientific Coordinator
Vice-President BIRA
 
Automatic Control Laboratory,                    Automatic Control Laboratory,
State University of Ghent                        State University of Ghent
 
The Conference Programme:
 
The conference is subdivided into a number of blocks to make it easier for
the participants to distinguish the wide range of vector and parallel
computers described. These blocks are:
 
- A general Tutorial and Introduction to the Topics discussed at the Conference
- The SIMD Approach to Parallel Programming
- Vector Computers
- New Parallel Programming Languages and Methodologies
- Shared Memory versus Distributed Memory Systems
- Parallel Programming Environments for MIMD Systems
- Libraries for Vector and Parallel Computers
- Round-up and Conclusion of the Conference
 
Tuesday 8 May, 1990:
 
Introduction to the Conference:
-------------------------------
 
* Software for Vector and Parallel Computers
  Prof. Mike Delves, Liverpool University, U.K.
 
This talk will cover the availability of software and software tools for
vector, shared memory and distributed memory machines, and discuss the
differences between and possible future convergence of programming styles
for these machines.
 
The SIMD Approach:
------------------
 
* Super Parallel Algorithms
  Prof. Dennis Parkinson, Active Memory Technology Ltd. and Queen Mary
  College, London, U.K.
 
Serial algorithmic design concentrates on how to solve a single instance
of a given task on a single processor. The natural extension for parallel
processing usually concentrates on how to solve that same single problem
on a set of processors. When algorithms for massively parallel systems are
designed, it often turns out that the algorithm for the parallel solution of a
given problem is more general and automatically solves multiple instances of
the task simultaneously. We call these algorithms Super Parallel
Algorithms. This presentation discusses some examples of Super Parallel
Algorithms.
 
* Massively Parallel Computation in Science and Engineering
  dr. E. Denning Dahl, Thinking Machines Corp., Cambridge, MA, U.S.A.
 
Massively parallel computers are the fastest computers available today.
More importantly, they represent the only known path to significantly
greater performance in the future. It is very likely, therefore, that such
machines will become, to an ever increasing extent, the numerical
laboratories of science and engineering.
 
The massively parallel Connection Machine built by Thinking Machines achieves
its multiple gigaflop performance using thousands of individual processors
performing the same operation on different data elements simultaneously. This
so-called SIMD (Single Instruction, Multiple Data) or data-parallel type of
computation is very appropriate to most of what is generally called
supercomputing.
Partial Differential Equations (PDEs), for example, represent a large fraction
of scientific and engineering supercomputing. PDEs are fundamentally amenable
to SIMD computation, because the differential equation represents a single set
of instructions that must be executed for the many thousands of data elements
associated with the time and space mesh values. Furthermore, differential
operators require minimal interprocessor communication; this is a bonus.
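
As a purely illustrative aside (not taken from the talk), the data-parallel
pattern described above can be sketched in plain serial C: a Jacobi relaxation
sweep in which every interior mesh point executes the same update rule and
needs only its two neighbours, exactly the single-instruction, many-data
situation a SIMD machine exploits. The mesh size and sweep count below are
arbitrary assumptions:

    #include <stdio.h>

    #define N 16          /* number of mesh points       */
    #define STEPS 100     /* number of relaxation sweeps */

    int main(void)
    {
        double u[N], unew[N];
        int i, step;

        /* Fixed boundary values, zero initial interior. */
        for (i = 0; i < N; i++) u[i] = 0.0;
        u[0] = 1.0;
        u[N-1] = 0.0;

        for (step = 0; step < STEPS; step++) {
            /* The same instruction is applied to every interior point;
               on a SIMD machine each mesh point would be updated by its
               own processor using only neighbouring values.            */
            for (i = 1; i < N-1; i++)
                unew[i] = 0.5 * (u[i-1] + u[i+1]);
            for (i = 1; i < N-1; i++)
                u[i] = unew[i];
        }

        for (i = 0; i < N; i++)
            printf("u[%2d] = %f\n", i, u[i]);
        return 0;
    }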
 
The talk will include illustrative examples of data-parallel computation.
Massively parallel computers and high-level languages for them, such as
Fortran, have been available only recently. The early applications offer
lessons for this rapidly emerging field.
 
Vector Computers:
-----------------
 
* Automatic Vectorization and Parallelization for Supercomputers
  dr. Hans Zima, University of Vienna, Austria
 
This presentation discusses the current state of the art and recent research
directions for the automatic vectorization and parallelization of numerical
Fortran 77 programs. Particular emphasis is placed on the parallelization for
distributed memory multiprocessing systems.
 
* LAPACK: A Linear Algebra Library for High-Performance Computers
  dr. Jack Dongarra, University of Tennessee and Oak Ridge National Laboratory,
  U.S.A.
 
This talk outlines the proposed computational package called LAPACK. LAPACK
is planned to be a collection of Fortran 77 subroutines for the analysis and
solution of various systems of simultaneous linear algebraic equations,
linear least-squares problems, and matrix eigenvalue problems.
 
The library is intended to provide a uniform set of subroutines to solve the
most common linear algebra problems and to run efficiently on a wide range of
architectures. This library, which will be freely accessible via computer
network, not only will ease code development, make codes more portable among
machines of different architectures, and increase efficiency, but also will
provide tools for evaluating computer performance. The library will be based
on the well-known and widely used LINPACK and EISPACK packages for linear
equation solving, eigenvalue problems, and linear least squares. LINPACK and
EISPACK have provided an important infrastructure for scientific computing on
serial machines, but they were not designed to exploit the profusion of
parallel and vector architectures now becoming available.
 
This talk will describe the naming scheme for the routines, give listings of
a few proposed routines, and include notes on the structure of the routines
and the choice of algorithms. In addition, aspects of software design will
also be discussed.
 
* Tools for Developing and Analyzing Parallel Fortran Programs
  dr. Jack Dongarra, University of Tennessee and Oak Ridge National Laboratory,
   U.S.A.
 
The emergence of commercially produced parallel computers has greatly
increased the problem of producing transportable mathematical software.
Exploiting these new parallel capabilities has led to extensions of
existing languages such as Fortran and to proposals for the development of
entirely new parallel languages.  We present an attempt at a short term
solution to the transportability problem. The motivation for developing the
package has been to extend capabilities beyond loop based parallelism and
to provide a convenient machine independent user interface.
 
A package called SCHEDULE is described which provides a standard user interface
to several shared memory parallel machines.  A user writes standard Fortran code
and calls SCHEDULE routines which express and enforce the large-grain data
dependencies of his parallel algorithm.  Machine dependencies are internal to
SCHEDULE and change from one machine to another, but the user's code remains
essentially the same across all such machines.  The semantics and usage of
SCHEDULE are described and several examples of parallel algorithms which have
been implemented using SCHEDULE are presented.
 
New Parallel Programming Languages and Methodologies:
-----------------------------------------------------
 
* Linda Meets UNIX
  Dipl. Math. Martin Graeff, Scientific Computers GmbH, Aachen, West-Germany and
  Technical University, Vienna, Austria
 
The appearance of parallel machines in general computing seems to be
unavoidable. Informatics has a big job to do if this evolution is not to prove
a step backwards to the days when programs were coded for dedicated machines.
 
Tools for parallel programming can be classified as follows: auto-parallelizing
compilers, intended chiefly to port existing Fortran programs (dusty decks) to
new machines; interactive tools for the analysis and formulation of parallel
programs; compilers for high-level, application-oriented problem
specifications; and parallel languages.
 
Functional parallelization and data partitioning are the main principles for
parallelization. Programs that are functionally parallelized consist of
inhomogeneous parallel tasks. Functional parallelization can hardly be
expressed in terms of algorithms, and the development of tools for it does not
yet look promising.
 
Data partitioning breaks a program's work into subtasks operating on subsets
of the original program's data set, provided that the interaction between
subtasks is small. Data partitioning opens broad room for the development of
automatic and semi-automatic tools. Auto-parallelizing/vectorizing compilers
that operate on nested DO-loops are now commercial products for numerical
applications. Interactive tools are used with great success at research sites
and have recently become products.
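
As a purely illustrative aside (not part of the abstract), the block
partitioning typically used in data-partitioned codes can be sketched in a few
lines of plain C; the worker count below is an arbitrary assumption:

    #include <stdio.h>

    #define N 1000        /* total number of data elements       */
    #define WORKERS 4     /* assumed number of parallel subtasks */

    int main(void)
    {
        int w;

        /* Block partitioning: each worker is given a contiguous chunk
           of the data set of nearly equal size.                      */
        for (w = 0; w < WORKERS; w++) {
            int lo = w * N / WORKERS;          /* first element     */
            int hi = (w + 1) * N / WORKERS;    /* one past the last */
            printf("worker %d handles elements %d .. %d\n", w, lo, hi - 1);
        }
        return 0;
    }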
 
Linda (TM), a new parallel communication paradigm developed by researchers at
Yale University, is one of the most promising efforts on parallel languages.
Linda's programming model provides an easy and convenient entry to the parallel
world. Not only does Linda make writing parallel programs easier, but these
programs are portable between different parallel computers. At first glance,
however, Linda looks as though it would be difficult to implement efficiently,
so that programs written using Linda might be too slow to be effective.
 
Cogent Research has developed a version of Linda that supports multiple tuple
spaces, system-level programming, and communication between programs written in
different languages, while being efficient enough to be used as the
communication mechanism for a parallel operating system - complete with a
graphical user interface. Through compatibility with UNIX and NeWS, this new
operating system brings a familiar and powerful environment to parallel
computing.
 
* Software Portability for Parallel Computers using Strand
  Prof. John J. Florentin, University of London and Artificial Intelligence
  Ltd., U.K.
 
This presentation describes how the use of Strand, an implicitly parallel
language, allows applications to be developed that are portable across a
range of parallel machines. Strand has a parallel computation model that
allows parallel algorithms to be expressed directly. In conjunction with
a virtual topology mechanism, the programmer is freed from direct concern
with the physical topology of the multiprocessor being used.
 
Strand is currently implemented on a number of distributed memory (e.g.
Meiko Computing Surface, NCUBE and Intel iPSC/2) and shared memory
machines (Sequent Symmetry), as well as for single-processor systems (e.g.
the NeXT computer). Once written, a Strand program can be run on
these hardware platforms, usually without modification. The Strand language
was selected for the prestigious Genome project at Argonne National Labs.
 
Wednesday 9 May, 1990:
 
Shared Memory versus Distributed Memory Systems:
------------------------------------------------
 
* Shared Memory Multiprocessors - A Cost-Effective Architecture for Parallel
  Programming
  Prof. Pete Lee, Center for Multiprocessors, Newcastle-upon-Tyne, U.K.
 
Shared memory systems permit a relatively straightforward programming model
to be used and they are easier to program than distributed memory machines.
Work is underway to standardize the parallel constructs that need to be added
to the Fortran language to enable parallel code to execute fast and efficiently
on a variety of shared memory parallel machines. The Parallel Computing
Forum (PCF) carrying out this work brings together most of the leading
computer manufacturers and U.S. academics.
 
This presentation discusses the programming environments of some shared memory
machines (e.g. Encore Multimax), presents some applications of these machines
and describes their performance and speed-up for a number of these applications.
 
Although distributed memory machines are more fashionable for the moment,
shared memory machines still provide a cost-effective architecture for parallel
programming.
 
* Design Objectives for Massively Parallel Systems
  dr. Francis Wray, Paracom/Parsytec GmbH, Aachen, West-Germany
 
Scientific Computing has seen many advances over the last five decades,
stimulated by the introduction of ever more powerful computers. Until recently,
most of these were serial in nature; that is, they had a single processor and
memory. Many numerical methods for scientific computation have been developed
for this type of computer, as have the now familiar operating systems and
programming languages which support the implementation of those methods.
 
It now seems likely that the most powerful computers of the next decade will
become increasingly, if not massively, parallel. That is to say, they will
contain many processors, probably each with its own local memory. Some
numerical methods devised for serial machines will be suitable for
implementation on parallel machines. Others may need moderate modification and
some may need complete redesign. The same is true of operating systems and
programming languages which, on the whole, have been designed for serial
machines.
 
The purpose of this talk is to give an overview of work already underway, both
at Parsytec/Paracom and at other collaborating organizations, to develop a
support environment for parallel programming and to produce parallel
implementations of existing and new applications.
 
* What Tools are required for Vector and Parallel Processing?
  dr. Aad van der Steen, Academic Computer Centre, Utrecht, the Netherlands
 
Software tools for vector and parallel processing are reviewed and desirable
properties for such tools are formulated. Although the variety of vector
and parallel systems is very large, it should be possible to form a common
body of standardized tools that would be available for all machines. Minimal
requirements for these tools are given as well as examples of existing tools.
 
* Software Development and Applications on the iPSC/2 Parallel Computer
  dr. ir. Dirk Roose, Catholic University of Louvain, Belgium
 
In a distributed memory parallel computer, a number of processors, each with
a local memory, are connected in a network. Communication between processors
is achieved by message passing. The most successful networks are 2D-meshes
and "hypercubes". The latter network is used in the iPSC/2 and other parallel
computers.
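
For readers unfamiliar with the hypercube topology, a brief sketch (plain C,
purely illustrative and not part of the abstract): in a d-dimensional
hypercube the 2^d nodes are numbered in binary, and two nodes are directly
connected exactly when their numbers differ in one bit, so each node has d
neighbours:

    #include <stdio.h>

    #define DIM 3              /* 3-dimensional hypercube: 2^3 = 8 nodes */

    int main(void)
    {
        int node, d;

        /* A node's neighbours are found by flipping one bit of its number. */
        for (node = 0; node < (1 << DIM); node++) {
            printf("node %d neighbours:", node);
            for (d = 0; d < DIM; d++)
                printf(" %d", node ^ (1 << d));
            printf("\n");
        }
        return 0;
    }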
 
We first discuss some important characteristics of the available software
environments for these parallel computers. The Intel iPSC/2 system will be
described in detail.
 
Afterwards, the development of parallel algorithms and software will be
outlined by treating some examples: sorting, solving linear systems by iterative
methods, ray tracing and component labeling in image processing. Timings and
speed-ups obtained on the iPSC/2 will be presented.
 
Parallel Programming Environments for MIMD Systems:
---------------------------------------------------
 
* Supercomputing on Massively Parallel Systems
  ir. Ruben Van Schalkwijk, Meiko Benelux, Rotterdam, the Netherlands
 
Many technical and scientific, but also administrative, problems require
massive computing power. Almost unlimited performance, in terms of speed and
memory, can be achieved with a massively parallel machine such as the Meiko
Computing Surface.
 
The parallelism is often prescribed by the nature of the application itself. A
programmer must be able to write parallel programs at the abstraction level of
his application. He must be able to use standard languages (Fortran, C, ...)
and standard operating systems (Unix, VMS) without detailed knowledge of the
machine architecture.
 
The programming environment CSTools enables a programmer to write a parallel
program using communicating sequential processes. CSTools makes parallel
programming hardware independent and gives the benefit of a homogeneous
programming environment for a heterogeneous hardware architecture (e.g.
transputers + Intel i860 + SPARC). With such parallel programming tools, MIMD
machines become much easier to use.
 
* The Express Parallel Programming Environment
  ir. Patrick Van Renterghem, State University of Ghent, Belgium
 
Can your software cope with the challenge of massive parallelism? Will you
have to make massive investments in rewriting existing sequential code in a
strange parallel language to stay in business? We need new tools to solve the
problem: new programming languages such as Strand88, Linda or Par.C, new
operating systems such as Helios or Mach, or new parallel operating
environments such as CSTools or Express. We need a solution that works for
lots of programmers and lots of applications on lots of machines. The three
major requirements of these tools are believed to be ease of use (to minimize
the software investment and the time-to-market), portability (to maximize the
life-cycle of the resulting code) and performance/efficiency (that is what
parallel processing is all about). We opt for a parallel programming
environment which is machine and configuration independent, which makes it
easy to integrate existing software, and which is easy to learn and efficient.
We believe that Express is one of the best run-time environments for parallel
computers.
 
This presentation describes the Express parallel programming environment,
which is currently available for transputer systems, the NCUBE, iPSC/2 and
iPSC/860 hypercubes, and the Sequent and Encore shared memory systems.
 
The Express library can be used to implement new implicitly parallel languages
(e.g. Strand88), parallel operating systems or parallel libraries for these
machines in a machine-independent way.
 
Libraries:
----------
 
* Library Software for Transputer Arrays
  Prof. Mike Delves, University of Liverpool, U.K.
 
With the advent of the floating-point T800 transputer, transputer arrays have
proven to be cost-effective architectures for scientific and engineering
computing up to the 100 MFlops level and above. But their widespread use in
industry is still limited by the effort needed to port existing applications,
and to write new ones. Numerical libraries have become basic tools of the
engineering programmer in a serial environment; one simple way to port
existing code from a serial to a parallel environment is to replace calls
to a serial library by calls to a parallel library. Library software for
transputer arrays is now starting to appear, and we give here a progress
report on the production of such numerical libraries, both serial and
parallel.
 
* Libraries for Vector and Parallel Computers
  dr. Sven Hammarling, The Numerical Algorithms Group Ltd., Oxford, U.K.
 
The current version (Mark 14) of the NAG Fortran Library contains nearly
900 documented routines for numerical and statistical computation. NAG has
always aimed to make this Library available on any machine for which there is
reasonable demand for the Library, which in practice means any computer
in widespread use for general purpose scientific computing. Thus portability
of the Library has always been a prime consideration. The advent of vector
and parallel computers has required us to pay much more careful attention to
the performance of the Library and the challenge has been to satisfy the
sometimes conflicting aims of performance and portability.
 
In this talk we describe the work done and the work that is in progress to
meet the challenge on modern high-performance computers, and we describe a
collaborative project, LAPACK, aimed at developing a numerical linear
algebra library for such machines.
 
Round-Up/Conclusion:
--------------------
 
Prof. Mike Delves discusses the evolution of software for parallel computers
and rounds up and concludes the conference.
 
Time Schedule:
 
                      DAY ONE                                 DAY TWO
 
 8.30 h         Registration and Welcome
 9.20 h         Mike Delves                             Pete Lee
10.00 h         Tutorial Presentation (cont.)           Francis Wray
10.40 h         Coffee/Tea/Exhibition                   Coffee/Tea/Exhibition
11.20 h         Dennis Parkinson                        Aad van der Steen
12.00 h         Denning Dahl                            Dirk Roose
12.40 h         Lunch                                   Lunch
14.00 h         Hans Zima                               Ruben Van Schalkwijk
14.40 h         Jack Dongarra                           Patrick Van Renterghem
15.20 h         Coffee/Tea/Exhibition                   Coffee/Tea/Exhibition
16.00 h         Jack Dongarra                           Mike Delves
16.40 h         Martin Graeff                           Sven Hammarling
17.20 h         John Florentin                          Conclusion/Round-up
17.40 h                                                 Closing Session
18.00 h         Demo Session/Exhibition
19.30 h         Conference Dinner (optional)
 
 
The fees for the conference have been fixed at 18000 BEF(*) for BIRA members,
20000 BEF for non-BIRA members and 9000 BEF for teachers and assistants.
This includes coffee/tea, lunches, proceedings, admission to the conference
and the exhibition room, NOT the accommodation in the hotel.
 
The exhibition-only fee is fixed at 1000 BEF. The conference dinner price
is 1500 BEF.
 
(*) 55 BEF is approximately 1 pound at this moment.
 
For registration, contact:
 
Luk Pauwels,
BIRA Secretary,
Desguinlei 214,
2018 Antwerp, Belgium
Tel: +32 3 216 09 96
Fax: +32 3 216 06 89
 
Candidate companies for the exhibition can also contact the above address
for the reservation of exhibition space.