[ut.na] NA Digest Volume 88 : Issue 19

krj@csri.toronto.edu (Ken Jackson) (05/09/88)

NA Digest   Sunday, May 8, 1988   Volume 88 : Issue 19

Today's Editor: Cleve Moler

Today's Topics:
 
     Re: Fast Floating Point Software for Microprocessors
     IMACS World Congress 
     Signal Processing Software
     Special Issue on Parallel Optimization
     Index Issue of Linear Algebra and its Applications
     Graduate Assistantships at Utah State

-------------------------------------------------------

From: David Hough <dgh@Sun.COM>
Date: Sun, 1 May 88 13:05:29 PDT
Subject: Re: Fast Floating Point Software for Microprocessors

Summary of original posting:  Brad Templeton is trying to implement
the fastest possible software floating point for the 8086.  He doesn't
care about getting the rounding correct.

Summary of my response:  Don't waste your time.

The summary reflects my own experience improving the software floating-point
implementation on the Sun-2 and Sun-3 [Sun experts know it as -fsoft].  There
is almost no perceived value in either a fast correct implementation or a
faster incorrect implementation of floating-point arithmetic in software for
processors for which good hardware floating point is readily available.  As
far as I know, all IBM PCs and their clones have provision for an 8087, which
can be obtained around here for under $100 in retail quantity one.  While the
8087 is not free from fault, it is faster than any software implementation
and of higher quality than most.

By mistake, I decided to make -fsoft the default code generation option on
the Sun-3, because the emulator software available from Motorola was
extremely slow and the 68881 hardware initially available had some bugs.  So
I devoted some effort to improving the software floating point, within the
constraint of correct IEEE rounding.  However, everybody to whom either speed
or full IEEE correctness was important eventually bought the 68881 (which was
and is optional only on the cheapest Sun-3 anyway), and I would have to say
that the only thing people noticed about the software floating point was that
it didn't support IEEE modes or exception handling and didn't produce the
same answers as -f68881.

In retrospect, what I should have done was make -f68881 code generation
(fastest possible, assuming 68881 hardware is present) the default, and have
the kernel emulate the hardware if it was missing.  This is the approach
taken on the Sun-4, for instance.  Anybody who cares about floating-point
performance will get the hardware sooner or later anyway.

As for what's to be done on IBM PCs, I'd suggest generating code that assumes
the 8087 is present and exploits it optimally, with the emulator invoked
automatically if the hardware is missing.

As for what happens when you tolerate sloppy rounding to make it "faster",
simply recall what happened to someone else who tried this, as relayed
to me through an intermediary:

           The program that did this was Spice.  I had a bug in my sqrt that
        resulted in noise in the LSB of the double-precision result.  Rounding
        then occurred (correctly), but it was too late for the LSB.  Something
        about the convergence criteria of the Spice model made it run about
        three times longer due to the non-monotonic behavior.  (I'll try to
        find our local Spice expert for more details on that.)  This caused
        our benchmarking people to agonize for several days over why we were
        so much slower than expected, until I tried Kahan's Paranoia program,
        which quickly complained about the non-monotonicity.  Fixed the bug,
        and Spice sped up by a factor of 3 or so.

           Kahan thinks that monotonicity is extremely important for
        intrinsics, and after that I agreed with him.
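
To make the point concrete, here is a minimal C sketch of the kind of
monotonicity spot check Paranoia performs (this is not Paranoia's actual
code; the starting argument and the number of steps are arbitrary choices):

    /* Illustrative only: spot-check that sqrt() is monotone, i.e. that it
     * never decreases as its argument steps through consecutive
     * representable doubles.  Compile and link with the math library (-lm). */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 0.5;                /* arbitrary starting argument */
        double prev = sqrt(x);
        long i, failures = 0;

        for (i = 0; i < 1000000L; i++) {
            x = nextafter(x, 2.0);     /* next representable double above x */
            double y = sqrt(x);
            if (y < prev)              /* correctly rounded sqrt is monotone */
                failures++;
            prev = y;
        }
        printf("%ld monotonicity failures\n", failures);
        return 0;
    }

A sqrt whose last bit is noisy, as in the anecdote above, will trip a check
of this kind.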

As mentioned previously, Kahan's lecture series is starting this week.
Anybody interested in subscribing as an absentee participant should send
me e-mail for the details - in return you get the privilege of helping to
defray the fixed costs of the course.

  -- David Hough
     Sun Microsystems

------------------------------

From: Robert Vichnevetsky <vichneve@aramis.rutgers.edu>
Date: Mon, 2 May 88 13:57:30 EDT
Subject: IMACS World Congress 


                     =============================
                     * 12th IMACS WORLD CONGRESS *
                     * ON SCIENTIFIC COMPUTATION *
                     =============================
                     July 18-22, 1988 - Paris, France
                     ================================


The 12th IMACS World Congress will take place at the historic site of the
Sorbonne/Lycee Louis le Grand in the Quartier Latin, a central area of Paris
known since the Middle Ages for its prestigious schools and its university.
The program of the Congress features some 900 papers, to be presented by
authors from almost every country in the world.  The topics cover a wide
range of interests, including Computational Mathematics, Numerical Analysis,
Modelling of Systems, Computational Physics, Computational Acoustics,
Applications in Science and Engineering, and Hardware and Software for
Scientific Computation.

Registration forms and the preliminary program, which contains a listing of
all papers and social events, may be obtained by writing to:

			IMACS Secretariat
			Attn:  K. Hahn
			Rutgers University
			Dept. of Computer Science
			New Brunswick, NJ  08903  USA

			Tel:  201-932-3998
			ARPANET: khahn@aramis.rutgers.edu


------------------------------

From: Jeff Dunn <dunn%nrl.decnet@nrl.arpa>
Date: 3 May 88 06:57:00 EDT
Subject: Signal Processing Software

  One of the workers here needs a signal processing code that will run on an
IBM PC, that is, one with easily adjustable array sizes, written in standard
FORTRAN.  He wants a package that will calculate spectra and cross-spectral
density matrices for two-dimensional data.  He also wants to be able to do
filtering easily, i.e., he wants filters to be part of the code.  If anyone
has such a code and is willing to share it with us, we would be most
appreciative.
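
For readers unfamiliar with the terminology, here is a minimal, purely
illustrative sketch of a single-record cross-spectrum estimate via a direct
DFT (in C rather than the requested FORTRAN, and in no way the requested
package; the test signals and length are arbitrary):

    /* Illustrative only: cross-spectrum of two real sequences by direct DFT,
     * S_xy(k) = X(k) * conj(Y(k)).  Real packages would use FFTs, windowing,
     * and segment averaging; none of that is attempted here. */
    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define N 64

    int main(void)
    {
        double x[N], y[N];
        int k, n;

        /* synthetic test signals: a 5-cycle tone, y shifted by 45 degrees */
        for (n = 0; n < N; n++) {
            x[n] = cos(2.0 * M_PI * 5.0 * n / N);
            y[n] = cos(2.0 * M_PI * 5.0 * n / N - M_PI / 4.0);
        }

        for (k = 0; k <= N / 2; k++) {
            double Xr = 0, Xi = 0, Yr = 0, Yi = 0;
            for (n = 0; n < N; n++) {
                double w = 2.0 * M_PI * k * n / N;
                Xr += x[n] * cos(w);  Xi -= x[n] * sin(w);
                Yr += y[n] * cos(w);  Yi -= y[n] * sin(w);
            }
            double Sr = Xr * Yr + Xi * Yi;   /* Re of X(k) * conj(Y(k)) */
            double Si = Xi * Yr - Xr * Yi;   /* Im of X(k) * conj(Y(k)) */
            printf("k=%2d  |S_xy|=%10.4f  phase=%8.4f rad\n",
                   k, sqrt(Sr * Sr + Si * Si), atan2(Si, Sr));
        }
        return 0;
    }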


			Thanks,
			Jeff Dunn
		     <dunn@nrl.arpa>


------------------------------

From: Stavros Zenios <ZENIOS@wharton.upenn.edu>
Date: Tue, 3 May 88 13:08 EST
Subject: Special Issue on Parallel Optimization


PARALLEL OPTIMIZATION ON NOVEL COMPUTER ARCHITECTURES 


Editors:
Robert R. Meyer, University of Wisconsin
Stavros A. Zenios, University of Pennsylvania

Special volume of the "Annals of Operations Research".
Vol. 14, 1988, approx. 400 pages

This volume presents a collection of papers that describe the state of the
art in the rapidly evolving area of parallel optimization on novel computer
architectures.  They represent both theoretical contributions describing new
ways of decomposing large-scale problems and successful parallel
implementations of existing and new optimization algorithms.  Computational
studies are reported on a wide range of parallel systems, such as the Alliant
FX/8, Sequent Balance 21000, IBM 3090-600, CRAY X-MP, FPS T-20, and the
Connection Machine CM-1.  The volume also illustrates the use of parallel and
vector supercomputers for analyzing large-scale applications of optimization.

Contents:

Preface, by R.R. Meyer and S.A. Zenios

G.B. Dantzig
Planning Under Uncertainty Using Parallel Computing

R.V. Helgason, J.L. Kennington and H.A. Zaki
Parallelization of the Simplex Method

O.L. Mangasarian and R. De Leone
Parallel Gradient Projection Successive Overrelaxation for
Symmetric Linear Complementarity Problems and Linear Programs

J.-S. Pang and J.-M. Yang
Two-Stage Parallel Iterative Methods for the Symmetric Linear
Complementarity Problem

A.T. Phillips and J.B. Rosen
A Parallel Algorithm for Solving the Linear Complementarity 
Problem

D.P. Bertsekas
The Auction Algorithm: A Distributed Relaxation Method for
the Assignment Problem

M.D. Chang, M. Engquist, R. Finkel and R.R. Meyer
A Parallel Algorithm for Generalized Networks

S.A. Zenios and R. Lasken
Nonlinear Network Optimization on a Massively Parallel
Connection Machine

R.H. Byrd, R.B. Schnabel and G.A. Shultz
Using Parallel Function Evaluations to Improve Hessian
Approximations for Unconstrained Optimization

M.-Q. Chen and S.-P. Han
A Parallel Quasi-Newton Method for Partially Separable
Large Scale Minimization

M. Lescrenier
Partially Separable Optimization and Parallel Computing

S. Wright
A Fast Algorithm for Equality-Constrained Quadratic
Programming on the Alliant FX/8

G.A.P. Kindervater and J.K. Lenstra
Parallel Computing in Combinatorial Optimization

J. Plummer, L.S. Lasdon and M. Ahmed
Solving a Large Nonlinear Programming Problem on a 
Vector Processing Computer

R.E. Haymond, J.T. Thornton and D.D. Warner
A Shortest Path Algorithm in Robotics and its Implementation
on the FPS T-20 Hypercube


TO ORDER:

(U.S.) J.C. Baltzer AG, Scientific Publishing Co.,
       P.O. Box 8577, Red Bank, NJ 07701-8577

(International)                                   
       J.C. Baltzer AG, Scientific Publishing Co.,
       Wettsteinplatz 10, CH-4058 Basel, Switzerland


------------------------------

From: Hans Schneider <hs@vanvleck.math.wisc.edu>
Date: Tue, 3 May 88 21:18:23 cdt
Subject: Index Issue of Linear Algebra and its Applications
 
                     LAA NEWS BULLETIN
 
    100-volume index of LINEAR ALGEBRA AND ITS APPLICATIONS
 
  Volume 100 of LAA will be published during May. It contains the
  author index for papers published in the first 100 volumes of the
  journal. It also contains a list of all members of the editorial
  board since the inception of the journal and, to the extent
  possible, a list of all referees. A complete listing of special
  issues with their special editors and of conference reports,
  profiles (biographical articles), book reviews, and obituaries will
  also  be included.
 
  Volume 100 may be purchased at a price of $40 from the publisher at
  the address below:
 
                               Elsevier Science Publishing Co
                               52 Vanderbilt Ave
                               New York NY 10017
 
  Volumes 101, 102, 103, 104 and 105 will be published in rapid
  sequence and are expected to appear during May and June.
 

------------------------------

From: Homer Walker <UF7099%USU.BITNET@forsythe.stanford.edu>
Date: Thu, 5 May 88 14:49 MDT
Subject: Graduate Assistantships at Utah State

Dear colleagues:

The Mathematics and Statistics Department at Utah State University has
several graduate assistantships still open for next fall. More information
on these assistantships and how to apply for them is given in the flyer below.
We would appreciate your bringing this to the attention of any prospective
graduate students. Anyone wishing to communicate informally with me about this
is welcome to do so.

Homer Walker
uf7099@usu.bitnet or na.walker@na-net.stanford.edu



                   RESEARCH AND TEACHING ASSISTANTSHIPS

                   Mathematics and Statistics Department
                           Utah State University

The Mathematics and Statistics Department at Utah State University has
several research and teaching assistantships available for the 1988-89
academic year.  The areas of interest for the research assistantships include
numerical optimization, statistical computing, numerical solution of partial
differential equations, and computational fluid dynamics.  Qualified students
can expect a stipend of at least $8000 for the year and out-of-state tuition
waivers.
Inquiries should be made immediately and directed to

                           Graduate Chairman
                           Mathematics and Statistics Department
                           Utah State University
                           Logan, UT 84322-3900

                           Phone: (801) 750-2809

------------------------------

End of NA Digest
**************************

Reposted by

-- 
Kenneth R. Jackson,                   krj@csri.toronto.edu (csnet)
Department of Computer Science,       uunet!csri.toronto.edu!krj (uucp)
University of Toronto,                krj@csri.toronto.cdn (ean x.400)
Toronto, Canada  M5S 1A4              krj%csri.toronto.edu@relay.cs.net (arpa)
(416) 978-7075                        krj@csri.utoronto (bitnet)