leff@smu.CSNET.UUCP (02/25/87)
==================================================
NUMERICAL ANALYSIS TITLES -- Volume 4, Number 3
December 15 1986
==================================================
##### AACHEN #####
---------------
The following report(s) may be obtained from:
Frau Ch. Jarausch
Institute for Geometry and Practical
Mathematics
RWTH-Aachen
Templergraben 55
D-5100 Aachen, Fed. Rep. Germany
<IGPM at DACTH51.BITNET>
Submitted by: Rolf Jeltsch
---------------
%A Bernd Einfeldt
%T Ein schneller Algorithmus zur Loesung des
Riemann Problems [A fast algorithm for the solution of the Riemann problem]
%R Report 36
%I Institute for Geometry and Practical Mathematics
%C RWTH-Aachen
%D May, 1986
%A Wolfgang Mackens
%T Some Notes on Block Gauss-Seidel Newton Iterations
for the Solution of Sparse Nonlinear Problems
%R Report 37
%I Institute for Geometry and Practical Mathematics
%C RWTH-Aachen
%D July, 1986
%A Rolf Jeltsch
%A Klaus Raczek
%T Counterexamples to an Order Barrier for Stable
Multistep Discretizations of Linear Hyperbolic Equations
%R Report 39
%I Institute for Geometry and Practical Mathematics
%C RWTH-Aachen
%D October 1986
##### BELL LABORATORIES #####
---------------
The following report(s) may be obtained from:
Bill Coughran
AT&T Bell Labs
Room 2C-419
Murray Hill, NJ 07974
USA
<wmc.swift@btl.csnet>
Submitted by: Bill Coughran
---------------
%T Aspects of Computational Circuit Analysis
%A W. M. Coughran, Jr.
%A Eric Grosse
%A Donald J. Rose
%R AT&T Bell Labs Numerical Analysis Manuscript 86-4
%X A hierarchical formulation of the differential-algebraic systems
describing circuit behavior is presented. A number of algorithms
that have proven effective are reviewed. These include multidimensional
splines that preserve monotonicity, sparse direct and iterative methods
for the linear equations, damped-Newton and Newton-iterative techniques
for the nonlinear equations, continuation methods, and low-order time-
integration formulae. Some aspects of time macromodeling are described.
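
The damped-Newton technique mentioned above can be illustrated with a short
sketch; the version below is a textbook damped Newton iteration for a general
nonlinear system F(x) = 0 in Python, not the authors' circuit formulation,
and the 2x2 test system is invented for illustration only.

    import numpy as np

    def damped_newton(F, J, x0, tol=1e-10, max_iter=50):
        """Damped Newton iteration for F(x) = 0: compute the Newton step from
        the Jacobian J(x), then halve the step until the residual norm drops
        (a simple Armijo-style damping strategy)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            r = F(x)
            if np.linalg.norm(r) < tol:
                break
            dx = np.linalg.solve(J(x), -r)
            t = 1.0
            while np.linalg.norm(F(x + t * dx)) >= np.linalg.norm(r) and t > 1e-8:
                t *= 0.5              # damp the step until the residual decreases
            x = x + t * dx
        return x

    # Invented 2x2 test system (not from the report):
    F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]**3])
    J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -3*x[1]**2]])
    print(damped_newton(F, J, [2.0, 2.0]))
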
##### BOEING #####
---------------
The following report(s) may be obtained from:
Jan Peterson
Engineering Technology Applications Division
Boeing Computer Services Company
Mail Stop 9C-01
565 Andover Park West
Tukwila, WA 98188
Submitted by: John Lewis <lewis@anl-mcs.arpa>
---------------
%A R. Grimes
%A H. Krakauer
%A J. Lewis
%A H. Simon
%A S. Wei
%T The solution of large dense generalized eigenproblems on the CRAY X-MP/24
with SSD
%R ETA-TR-32
%D December 1985
%A J. Lewis
%A H. Simon
%T The impact of hardware gather/scatter on sparse Gaussian elimination
%R ETA-TR-33
%D June 1986
%A R. Grimes
%A J. Lewis
%A H. Simon
%T Eigenvalue problems and algorithms in structural engineering
%R ETA-TR-35
%D March 1986
%A A. Erisman
%A H. Simon
%T Education of engineers and scientists in supercomputing
%R ETA-TR-37
%D July 1986
##### BUFFALO #####
---------------
The following report(s) may be obtained from:
P.J.Eberlein
Computer Science Department
Bell Hall
SUNY/Buffalo
Buffalo, NY, 14260
<eberlein@buffalo.csnet>
Submitted by: Pat Eberlein
---------------
%A P. J. Eberlein
%T Comments on some parallel Jacobi Orderings
%R #86-16
%I SUNY/Buffalo CS Dept., Bell Hall
%C Buffalo, NY, 14260
%D August, 1986
%P 9
%X A one-sided Jacobi method for the hypercube is described, which
utilizes the inherent parallelism of the Jacobi algorithm. The
implementation of the algorithm on a hypercube leads naturally to
some new Jacobi rotation sets for loops. Loops of even length for the
hypercube architecture are described.
%A P. J. Eberlein
%T On One-Sided Jacobi Methods for Parallel Computation
%R 86-17
%I SUNY/Buffalo CS Dept., Bell Hall
%C Buffalo, NY, 14260
%D August, 1986
%P 11
%X Convergence proofs are given for one-sided Jacobi/Hestenes
methods for the singular value and eigenvalue problems. The limiting form
of the matrix for the Hestenes method with optimization when the original
matrix is normal is derived; this limiting matrix is block diagonal
where the blocks are multiples of unitary matrices. A variation in the
algorithm is shown to guarantee convergence to a diagonal matrix when
the original matrix is symmetric. Implementation techniques for the
hypercube are described.
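
A minimal serial sketch of the one-sided Jacobi (Hestenes) method treated in
these two reports is given below in Python: column pairs are rotated until
mutually orthogonal, and the column norms converge to the singular values.
The parallel orderings and the hypercube implementation that are the actual
subject of the reports are not reproduced; full column rank is assumed.

    import numpy as np

    def one_sided_jacobi_svd(A, sweeps=30, tol=1e-12):
        """One-sided Jacobi (Hestenes) sweeps: orthogonalize pairs of columns
        of U = A by plane rotations accumulated in V, so that A = U V^T with
        the columns of U orthogonal; their norms are the singular values."""
        U = np.array(A, dtype=float)
        n = U.shape[1]
        V = np.eye(n)
        for _ in range(sweeps):
            converged = True
            for p in range(n - 1):
                for q in range(p + 1, n):
                    alpha, beta = U[:, p] @ U[:, p], U[:, q] @ U[:, q]
                    gamma = U[:, p] @ U[:, q]
                    if abs(gamma) > tol * np.sqrt(alpha * beta):
                        converged = False
                        zeta = (beta - alpha) / (2.0 * gamma)
                        t = np.copysign(1.0, zeta) / (abs(zeta) + np.hypot(1.0, zeta))
                        c = 1.0 / np.hypot(1.0, t)
                        s = c * t
                        G = np.array([[c, s], [-s, c]])
                        U[:, [p, q]] = U[:, [p, q]] @ G
                        V[:, [p, q]] = V[:, [p, q]] @ G
            if converged:
                break
        sigma = np.linalg.norm(U, axis=0)
        return U / sigma, sigma, V

    A = np.random.rand(6, 4)
    Uj, s, Vj = one_sided_jacobi_svd(A)
    print(np.allclose((Uj * s) @ Vj.T, A))
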
##### CHALMERS #####
---------------
The following report(s) may be obtained from:
Axel Ruhe
Department of Computer Science
Chalmers University of Technology
S-41296 Goteborg
(telephone: int-46-31810100 ext. 1015)
<ruhe@chalmers.sunet>
Submitted by: Axel Ruhe
---------------
%A Owe Axelsson
%A Gunhild Lindskog
%T On the Eigenvalue Distribution of a Class of Preconditioning Methods
%R Report 3
%I Department of Computer Science, Chalmers University of Technology
%C S-41296 Goteborg, Sweden
%D December 12, 1985
%X A class of preconditioning methods depending on a relaxation parameter
is presented for the solution of large linear systems of equations Ax=b, where
A is a symmetric positive definite matrix. The methods are based on an
incomplete factorization of the matrix A and include both pointwise and
blockwise factorizations. We study the dependence of the rate of convergence
of the preconditioned conjugate gradient method on the distribution of
eigenvalues of C^-1 A, where C is the preconditioning matrix. We also
show graphic representations of the eigenvalues and present numerical tests
of the methods.
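
The role of the eigenvalue distribution of C^-1 A can be explored with the
standard preconditioned conjugate gradient iteration, sketched below in
Python; the diagonal (Jacobi) preconditioner in the example is only a
stand-in for the incomplete factorizations studied in the report.

    import numpy as np

    def pcg(A, b, apply_Cinv, tol=1e-10, max_iter=500):
        """Preconditioned conjugate gradients for SPD A; apply_Cinv(r) returns
        C^-1 r for an SPD preconditioning matrix C.  The convergence rate is
        governed by the eigenvalue distribution of C^-1 A."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = apply_Cinv(r)
        p = z.copy()
        rz = r @ z
        for k in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                return x, k + 1
            z = apply_Cinv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x, max_iter

    # Illustrative use: 1-D Laplacian with a Jacobi preconditioner.
    n = 200
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x, its = pcg(A, b, lambda r: r / np.diag(A))
    print(its, np.linalg.norm(A @ x - b))
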
%A Thomas Ericsson
%A Axel Ruhe
%T Lanczos Algorithms and Field of Value Rotations for Symmetric
Matrix Pencils
%R Report 4
%I Department of Computer Science, Chalmers University of Technology
%C S-41296 Goteborg, Sweden
%D April 4, 1986
%X Iterative algorithms for the eigensolution of symmetric
pencils of matrices are considered. It is shown that the symmetric
Lanczos algorithm, the nonsymmetric Lanczos algorithm and the Arnoldi
algorithm are closely related in this case. Applicability of this class
of algorithms to indefinite pencils is discussed. A new field of values
concept is used to describe the symmetric pencil problem. Then spectral
transformation corresponds to a rotation in the complex plane, and the
inertia count gives the number of eigenvalues corresponding to points in
one of two half planes.
%A Owe Axelsson
%A Gunhild Lindskog
%T On the Rate of Convergence of the Preconditioned Conjugate Gradient
Method.
%R Report 5
%I Department of Computer Science, Chalmers University of Technology
%C S-41296 Goteborg, Sweden
%D March 10, 1986
%X We derive new estimates for the rate of convergence of the
conjugate gradient method by utilizing isolated eigenvalues of parts of
the spectrum. We present a new generalized version of an incomplete
factorization method and compare the derived estimates of the number
of iterations with the number actually found for some elliptic difference
equations and for a similar problem with a model empirical distribution
function.
##### HARWELL #####
---------------
The following report(s) may be obtained from:
Iain Duff
CSS Div
Harwell Laboratory
Didcot, OXON OX11 0RA,
England
<na.duff@su-score.arpa>
Contributed by: Iain Duff
---------------
%A Jack Dongarra
%A Iain S. Duff
%T Advanced architecture computers
%R TM 57
%I Mathematics and Computer Science Division,
%C Argonne National Laboratory, Argonne, Illinois 60439
%X We describe the characteristics of several recent
computers that employ vectorization or parallelism to
achieve high performance in floating-point calculations. We
consider both top-of-the-range supercomputers and computers
based on readily available and inexpensive basic units. In
each case we discuss the architectural base, novel features,
performance, and cost. It is intended that this report will
be continually updated, and to this end the authors welcome
comment.
%A Iain S. Duff
%T The influence of vector and parallel processors on numerical analysis
%I CSS Div., Harwell Laboratory
%C Didcot, OXON OX11 0RA, England
%X It is now ten years since the first CRAY-1 was delivered
to Los Alamos National Laboratory. Since then,
supercomputers with vector processing capability have become
widespread and important in the solution of problems in many
areas of science and engineering involving large-scale
computing. Their influence on numerical analysis has been
less dramatic but we indicate the extent of that influence.
In the last year or so, advanced "superminis" that
exhibit various more general forms of parallelism have been
developed and marketed. We identify these and give some
general principles which algorithm designers are using to
take advantage of these parallel architectures. We argue
that parallel processors are having a much stronger
influence on numerical analysis than vector processors and
illustrate our claims with examples from several areas of
numerical analysis including linear algebra, optimization,
and the solution of partial differential equations.
%A Iain S. Duff
%T Comments on the Solution of Sparse Linear Equations
%R TM 58
%I Mathematics and Computer Science Division,
%C Argonne National Laboratory, Argonne, Illinois 60439
%X We consider primarily direct methods for the solution of
sparse linear equations. We first illustrate the wide range
of sparse matrix types that arise in different application
areas. It is clear that methods taking advantage of a
particular structure should do better than a general
technique designed for all classes of sparse matrices. We
comment on this both philosophically and empirically.
We discuss the techniques used in general solution
schemes and those used in particular applications, for
example, the solution of equations resulting from
discretizations of partial differential equations. We also
consider the use of direct methods as a preconditioning to
iterative methods for solution.
We look at work in combinatorial algorithms that relates
to the solution of sparse equations and comment on some of
the open problems in this and other areas.
Throughout we comment on available software and the
relative performance of the methods as implemented in those
software packages.
%A Ameet K. Dave
%A Iain S. Duff
%T Sparse matrix calculations on the CRAY-2
%I CSS Div., Harwell Laboratory
%C Didcot, OXON OX11 0RA, England
%X The CRAY-2 is sometimes viewed as a CRAY-1 with a faster
cycle time. We show that this viewpoint is not completely
appropriate by describing some of the important
architectural differences between the machines and
indicating how they can be used to facilitate numerical
calculations.
We have been testing kernels and codes on a CRAY-2
prior to the delivery of a machine to Harwell early next
year and report some results on the solution of sparse
equations which indicate that high efficiency can be
obtained. We give details of one of the techniques for
attaining this performance.
We also comment on the use of parallelism in the
solution of sparse linear equations on the CRAY-2.
%A I. S. Duff
%T The use of vector and parallel computers in the solution of large
sparse linear equations
%I CSS Div., Harwell Laboratory
%C Didcot, OXON OX11 0RA, England
%X We discuss three main approaches that are used in the
direct solution of sparse unsymmetric linear equations and
indicate how they perform on computers with vector or
parallel architecture. The principal methods which we
consider are general solution schemes, frontal methods, and
multifrontal techniques. In each case, we illustrate the
approach by reference to a package in the Harwell Subroutine
Library. We consider the implementation of the various
approaches on machines with vector architecture (like the
CRAY-1) and on parallel architectures, both with shared
memory and with local memory and message passing.
%A Iain S. Duff
%A Torbjorn Wiberg
%T Implementations of O(n^(1/2) tau) assignment algorithms
%I CSS Div., Harwell Laboratory
%C Didcot, OXON OX11 0RA, England
%X We examine a number of implementations and modifications of a 1973
algorithm of Hopcroft and Karp for permuting a sparse matrix so that there are
no zeros on the diagonal. We describe our implementation of the original
Hopcroft and Karp algorithm and compare this with modifications which we prove
to have the same O(n^(1/2) tau) behaviour, where the matrix is of order n
with tau entries. We compare the best of these with an efficient
implementation of an algorithm whose worst case behaviour is O(n tau).
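
A zero-free diagonal corresponds to a maximum matching in the bipartite graph
of rows and columns. The Python sketch below implements the simple
augmenting-path scheme whose worst case is O(n tau), the comparison algorithm
mentioned at the end of the abstract; the Hopcroft and Karp refinement, which
augments along many vertex-disjoint shortest paths per phase to reach
O(n^(1/2) tau), is not reproduced here.

    def zero_free_diagonal(rows_of_col, n):
        """Column-to-row assignment by bipartite matching with simple
        augmenting paths.  rows_of_col[j] lists the rows having a nonzero
        in column j; the result maps each row to its matched column."""
        match_row = [-1] * n              # match_row[i] = column matched to row i

        def augment(j, seen):
            for i in rows_of_col[j]:
                if not seen[i]:
                    seen[i] = True
                    if match_row[i] == -1 or augment(match_row[i], seen):
                        match_row[i] = j
                        return True
            return False

        for j in range(n):
            augment(j, [False] * n)
        return match_row

    # 3x3 example: the (0,0) entry is zero, but a full transversal exists.
    pattern = {0: [1, 2], 1: [0], 2: [0, 2]}
    print(zero_free_diagonal(pattern, 3))     # -> [1, 0, 2]
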
##### ILLINOIS #####
---------------
The following report(s) may be obtained from:
Rebecca J. Gering
CSRD
University of Illinois at Urbana-Champaign
Urbana, Illinois 61801
Submitted by: Rebecca Gering
---------------
%A Peiyi Tang
%A Pen-Chung Yew
%T Processor Self-Scheduling for Multiple-Nested Parallel Loops
%R L00548
%I CSRD, University of Illinois at Urbana-Champaign
%C Urbana, Illinois 61801
%D August 1986
%X Processor self-scheduling is a useful scheme in a multiprocessor system if
the execution time of each iteration in a parallel loop is not known in
advance and varies substantially, or if parallel loops are nested in
multiple levels, which makes static scheduling difficult and inefficient.
By using efficient synchronization primitives, the operating system is
not needed for loop scheduling, and the overhead of processor
self-scheduling is small.
We present a processor self-scheduling scheme for a single-nested
parallel loop and extend the scheme to multiply nested parallel loops.
Barrier synchronization mechanisms in the processor self-scheduling
schemes are also discussed.
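
The basic single-loop case of processor self-scheduling can be sketched as
follows in Python; threads and a lock stand in for the efficient hardware
synchronization primitives (fetch-and-add style) assumed in the report, and
the chunk size and loop body are invented for illustration.

    import threading, time

    def self_schedule(loop_body, n_iterations, n_workers=4, chunk=1):
        """Processor self-scheduling: each idle worker grabs the next chunk of
        loop iterations from a shared counter, so no central scheduler or
        operating-system involvement is needed."""
        counter = [0]
        lock = threading.Lock()

        def worker():
            while True:
                with lock:                    # plays the role of a fetch-and-add
                    start = counter[0]
                    counter[0] += chunk
                if start >= n_iterations:
                    return
                for i in range(start, min(start + chunk, n_iterations)):
                    loop_body(i)

        threads = [threading.Thread(target=worker) for _ in range(n_workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    # Illustrative loop whose iterations have uneven execution times.
    results = [0.0] * 20
    def body(i):
        time.sleep(0.001 * (i % 5))
        results[i] = i * i
    self_schedule(body, 20)
    print(results)
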
%A David Kuck
%A Edward Davidson
%A Duncan Lawrie
%A Ahmed Sameh
%T Parallel Supercomputing Today and the Cedar Approach
%R L00558
%I CSRD, University of Illinois at Urbana-Champaign
%C Urbana, Illinois 61801
%D February 1986
%X More and more scientists and engineers are becoming interested in using
supercomputers. Earlier barriers to using these machines are disappearing
as software for their use improves. Meanwhile, new parallel supercomputer
architectures are emerging that may provide rapid growth in performance.
These systems may use a large number of processors with an intricate memory
system that is both parallel and hierarchical; they will require even
more advanced software. Compilers that restructure user programs to
exploit the machine organization seem to be essential. A wide range of
algorithms and applications is being developed in an effort to provide
high parallel processing performance in many fields. The Cedar
supercomputer, presently operating with eight processors in parallel, uses
advanced system and applications software developed at the University
of Illinois during the past 12 years. This software should allow the
number of processors in Cedar to be doubled annually, providing rapid
performance advances in the next decade.
%A Sy-Shin Lo
%A Bernard Philippe
%T The Symmetric Eigenvalue Problem on a Multiprocessor
%R L00568
%I CSRD, University of Illinois at Urbana-Champaign
%C Urbana, Illinois 61801
%D April 1986
%X This paper is a survey of optimal methods for solving the symmetric
eigenvalue problem on a multiprocessor. Although the inherent
parallelism of the Jacobi method is well known, its convergence is
not guaranteed and the total number of operations is larger than that
of the methods which transform the full matrix into a tridiagonal matrix.
In this paper we are concerned with the methods which deal with the
tridiagonalized eigenvalue systems.
In Section 2 we review the sequential methods for solving symmetric
eigenvalue problems by tridiagonal reduction using the appropriate
routines from EISPACK. Section 3 discusses the different ways of
exploiting parallelism; examples of linear recurrence and orthogonalization
are presented. Sections 4 and 5 deal with the two
algorithms, SESUPD, a parallel version of TQL2 in EISPACK, and TREPS,
a parallel version of BISECT+TINVIT, respectively. In Section 6 we
compare these two approaches.
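
The bisection part of the tridiagonal approach (the BISECT component of
TREPS) rests on Sturm sequence counts, sketched below in Python; in a
parallel version each processor can refine a disjoint set of eigenvalues
independently.  The example matrix and the search interval are illustrative.

    import numpy as np

    def sturm_count(d, e, x):
        """Number of eigenvalues less than x for the symmetric tridiagonal
        matrix with diagonal d and off-diagonal e (Sturm sequence count)."""
        count, q = 0, 1.0
        for i in range(len(d)):
            q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
            if q == 0.0:
                q = 1e-300                    # avoid division by zero
            if q < 0.0:
                count += 1
        return count

    def bisect_eigenvalue(d, e, k, lo, hi, tol=1e-12):
        """k-th smallest eigenvalue (k = 1, 2, ...) by bisection on [lo, hi],
        an interval known to contain the whole spectrum."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if sturm_count(d, e, mid) >= k:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    # 1-D Laplacian: eigenvalues are 2 - 2*cos(j*pi/(n+1)), j = 1, ..., n.
    n = 50
    d, e = np.full(n, 2.0), np.full(n - 1, -1.0)
    print(bisect_eigenvalue(d, e, 1, 0.0, 4.0), 2 - 2 * np.cos(np.pi / (n + 1)))
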
%A Gene Golub
%A Robert Plemmons
%A Ahmed Sameh
%T Parallel Block Schemes for Large Scale Least Squares Computations
%R L00574
%I CSRD, University of Illinois at Urbana-Champaign
%C Urbana, Illinois 61801
%D April 1986
%X Large scale least squares computations arise in a variety of scientific
and engineering problems, including geodetic adjustments and surveys,
medical image analysis, molecular structures, partial differential
equations and substructuring methods in structural engineering. In each
of these problems, matrices often arise which possess a block structure
which reflects the local connection nature of the underlying physical
problem. For example, such super-large nonlinear least squares
computations arise in geodesy. Here the coordinates of positions are
calculated by iteratively solving overdetermined systems of nonlinear
equations by the Gauss-Newton method. The U.S. National Geodetic
Survey will complete this year (1986) the readjustment of the North
American Datum, a problem which involves over 540 thousand unknowns and
over 6.5 million observations (equations). The observation matrix for
these least squares computations has a block angular form with 161
diagonal blocks, each containing 3 to 4 thousand unknowns. In this
paper parallel schemes are suggested for the orthogonal factorization
of matrices in block angular form and for the associated backsubstitution
phase of the least squares computations. In addition, a parallel scheme
for the calculation of certain elements of the covariance matrix for
such problems is described. It is shown that these algorithms are ideally
suited for multiprocessors with three levels of parallelism such as the
Cedar system at the University of Illinois.
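
The block angular structure admits a simple serial, dense sketch of the
orthogonal factorization and back-substitution phases, given below in Python:
each diagonal block is QR-factored independently (the step the report
parallelizes), the leftover rows determine the coupling unknowns, and
back-substitution recovers the block unknowns.  The data are small random
blocks; sparsity, the three levels of parallelism, and the covariance
computation discussed in the report are not addressed.

    import numpy as np

    def block_angular_lstsq(blocks):
        """Least squares min sum_i || A_i x_i + B_i y - b_i ||^2 for a block
        angular matrix.  Each (A_i, B_i, b_i) is one block row; A_i must have
        full column rank and at least as many rows as columns."""
        reduced_T, reduced_d, factors = [], [], []
        for A_i, B_i, b_i in blocks:
            Q, R = np.linalg.qr(A_i, mode="complete")
            n_i = A_i.shape[1]
            QtB, Qtb = Q.T @ B_i, Q.T @ b_i
            factors.append((R[:n_i], QtB[:n_i], Qtb[:n_i]))
            reduced_T.append(QtB[n_i:])      # rows involving only y
            reduced_d.append(Qtb[n_i:])
        y = np.linalg.lstsq(np.vstack(reduced_T), np.concatenate(reduced_d),
                            rcond=None)[0]
        xs = [np.linalg.solve(R_i, c_i - S_i @ y) for R_i, S_i, c_i in factors]
        return xs, y

    # Three random blocks with 4 local unknowns each and 2 coupling unknowns.
    rng = np.random.default_rng(0)
    blocks = [(rng.standard_normal((8, 4)), rng.standard_normal((8, 2)),
               rng.standard_normal(8)) for _ in range(3)]
    xs, y = block_angular_lstsq(blocks)

    # Check against a dense least squares solve on the assembled matrix.
    A_full = np.block([[Ai if j == i else np.zeros_like(Aj)
                        for j, (Aj, _, _) in enumerate(blocks)] + [Bi]
                       for i, (Ai, Bi, _) in enumerate(blocks)])
    b_full = np.concatenate([b_i for _, _, b_i in blocks])
    dense = np.linalg.lstsq(A_full, b_full, rcond=None)[0]
    print(np.allclose(np.concatenate(xs + [y]), dense))
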
%A Edward Davidson
%A David Kuck
%A Duncan Lawrie
%A Ahmed Sameh
%T Supercomputing Tradeoffs and the Cedar System
%R L00577
%I CSRD, University of Illinois at Urbana-Champaign
%C Urbana, Illinois 61801
%D May 1986
%X A number of tradeoffs made in supercomputer and minisupercomputer designs are
discussed. The Cedar System is discussed in this context. Applications and
software work on the Cedar System are presented.
%A N. R. Nandakumar
%T Polynomial Preconditioning of Symmetric Indefinite Systems
%R L00580
%I CSRD, University of Illinois at Urbana-Champaign
%C Urbana, Illinois 61801
%D June 1986
%X In this paper a new algorithm is proposed to solve indefinite symmetric
sparse linear systems using polynomial preconditioning. The
preconditioning polynomials are derived using simple calculus techniques.
In practice this algorithm, unlike other algorithms proposed in the
literature, does not require complete knowledge of the intervals
containing the spectrum of the given matrix. This method is superior
to CG on the normal equations, SYMMLQ, and Saad's GCI algorithm. The
algorithm is also amenable to implementation on parallel machines.
The performance of the algorithm is compared with Saad's GCI algorithm
on the minisupercomputer Alliant FX/8.
##### MARYLAND #####
---------------
The following report(s) may be obtained from:
Howard C. Elman
University of Maryland
Computer Science Technical Report 1704
<elman@mimsy.umd.edu>
Submitted by: Howard Elman
---------------
%A Howard C. Elman
%T Approximate Schur Complement Preconditioners for
Serial and Parallel Computers
%R Technical Report 1704
%I University of Maryland, Computer Science
%X We consider a class of preconditioning techniques for sparse matrices based
on computing the Schur complement of a (suitably ordered) matrix.
The techniques generalize the reduced system methodology for 2-cyclic
matrices to non-2-cyclic matrices, and in addition, they are well-suited
to parallel architectures. We demonstrate their effectiveness with
numerical experiments on a nine-point finite difference operator,
and we present an analysis showing that they can be implemented efficiently
on multiprocessors.
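
For a 2-by-2 block partition the reduced system idea can be written down
directly, as in the Python sketch below: the first block of unknowns is
eliminated and the iteration (here just a direct solve) works with the Schur
complement.  The dense random matrix is for illustration only; the report is
concerned with approximate Schur complements of suitably ordered sparse
matrices used as preconditioners.

    import numpy as np

    def schur_solve(A11, A12, A21, A22, b1, b2):
        """Solve [[A11, A12], [A21, A22]] [x1; x2] = [b1; b2] by eliminating
        x1 and solving with the Schur complement S = A22 - A21 A11^-1 A12."""
        A11_inv_A12 = np.linalg.solve(A11, A12)
        A11_inv_b1 = np.linalg.solve(A11, b1)
        S = A22 - A21 @ A11_inv_A12
        x2 = np.linalg.solve(S, b2 - A21 @ A11_inv_b1)
        x1 = A11_inv_b1 - A11_inv_A12 @ x2
        return x1, x2

    rng = np.random.default_rng(1)
    M = rng.standard_normal((6, 6)); M = M @ M.T + 6 * np.eye(6)   # SPD test matrix
    b = rng.standard_normal(6)
    x1, x2 = schur_solve(M[:4, :4], M[:4, 4:], M[4:, :4], M[4:, 4:], b[:4], b[4:])
    print(np.allclose(M @ np.concatenate([x1, x2]), b))
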
##### NYU #####
---------------
The following report(s) may be obtained from:
James W. Demmel
Courant Institute
New York, NY 10012
<demmel@nyu-acf8.arpa>
Submitted by: James Demmel
---------------
%T Accurate Solutions of Ill-posed Problems in Control Theory
%A James W. Demmel
%A Bo Kagstrom
%I Courant Institute, New York University
%C New York, NY 10012
%X We present computable, guaranteed error bounds for controllable
subspaces and uncontrollable modes, unobservable subspaces and
unobservable modes, supremal (A,C) invariant subspaces in
ker D, supremal (A,C) controllability subspaces in ker D,
the uncontrollable modes within the supremal (A,C) invariant
subspace in ker D, and invariant zeroes. In particular our
bounds apply in the nongeneric case when the solutions are
ill-posed. We do this by showing that all these features are
eigenspaces and eigenvalues of certain singular matrix pencils,
which means they may all be computed by a single algorithm
to which we can apply a perturbation theory for general singular
matrix pencils. Numerical examples are included.
---------------
The following report(s) may be obtained from:
Dick Dee
Courant Institute
251 Mercer Street
New York, NY 10012
<dee%nyuacf.bitnet@wiscvm.wisc.edu> or <dee@nyu-acf7.arpa>
Submitted by: Dick Dee
---------------
%A Stephen E. Cohn
%A Dick P. Dee
%I Courant Institute, New York University
%C 251 Mercer Street, New York, NY 10012
%X The situation in which one would like to solve a numerical initial value
problem for a set of partial differential equations (PDE), but cannot for lack
of complete initial data, is arising ever more frequently in applied sciences
and engineering. The attempt to estimate the evolving state by inserting
incomplete data into a numerical model as they become available at different
instants of time is called "data assimilation" or "filtering."
We show that if data generated by linear PDE are inserted properly, then
complete observability of the discrete numerical model is necessary and
sufficient for asymptotic stability of the data assimilation process. This
complete observability means that, had the data been generated instead by
the discrete model, they would define the state of the model uniquely
after some finite time.
Simple observability criteria for discretizations of linear, constant-
coefficient PDE on periodic domains are formulated in terms of properties of
the symbol of the numerical scheme. It is shown that spurious numerical
dispersion can detract from observability by yielding a multi-valued
symbol. While observability for the advection equation demands the addition
of dissipation to the leapfrog and Crank-Nicolson schemes, dissipation is
not needed for second-order and sixth-order accurate box schemes.
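
For a finite-dimensional discrete model, complete observability reduces to
the standard Kalman rank condition, which the Python sketch below checks; the
toy system is invented and stands in for the PDE discretizations of the
report, whose criteria are expressed through the symbol of the scheme.

    import numpy as np

    def completely_observable(A, H):
        """Kalman rank test: the model x_{k+1} = A x_k with observations
        y_k = H x_k is completely observable iff the observability matrix
        [H; HA; ...; HA^(n-1)] has full column rank n."""
        n = A.shape[0]
        blocks, HAk = [], H.copy()
        for _ in range(n):
            blocks.append(HAk)
            HAk = HAk @ A
        return np.linalg.matrix_rank(np.vstack(blocks)) == n

    # Toy 3-state model observed through its first component only.
    A = np.array([[0.9, 0.1, 0.0], [0.0, 0.9, 0.1], [0.0, 0.0, 0.9]])
    H = np.array([[1.0, 0.0, 0.0]])
    print(completely_observable(A, H))        # True
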
##### OAK RIDGE #####
---------------
The following report(s) may be obtained from:
D. C. Human
Mathematical Sciences
P. O. Box Y, Bldg. 9207A
Oak Ridge National Laboratory
Oak Ridge, Tennessee 37831
<human@ornl-msr.arpa>
Submitted by: S. Thompson, A. D. Solomon
---------------
%A S. Thompson
%T Remarks on the Choice of a Stiff Ordinary Differential Equation Solver
%R ORNL-6189
%I Oak Ridge National Laboratory
%C Oak Ridge, Tennessee
%D March, 1986
%X This report discusses several aspects of the solution of fluid
flow problems. A model problem is described which captures
several of the salient characteristics of such problems. The
model problem is derived from the one-dimensional Euler
equations. It corresponds to the subcooled liquid region of a
three region reactor steam generator model. The defining partial
differential equations are discretized spatially using the method
of pseudo-characteristics. The resulting system of ordinary
differential equations is then solved by the method of lines
using each of several available ordinary differential equation
(ode) solvers. In addition to demonstrating that fluid flow
problems can be solved effectively using adaptive mathematical
software, the results also illustrate the typical performance of
each solver as well as the relative merits of special solver
features (e.g., automatic method switching and exploitation of
problem sparsity or bandedness). The model problem is also
modified and used to illustrate several commonly observed
characteristics of fluid flow models. The results illustrate
the potential effect on solver performance of characteristics such
as inaccurate water property calculations. Consequently, they
explain apparent anomalies often attributed incorrectly
to poor solver design and algorithmic implementation. The
results also support the choice of the sparse ordinary
differential equation solver LSODES for the solution of fluid flow
problems.
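
The method-of-lines strategy used in the report can be sketched in a few
lines of Python with a stiff solver from scipy; the heat equation below is
only a stand-in for the pseudo-characteristic discretization of the Euler
equations, and scipy's BDF option plays the role of solvers such as LSODES.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Semi-discretize u_t = u_xx on (0,1) with u(0) = u(1) = 0 by central
    # differences, then integrate the stiff ODE system with a BDF method.
    n = 100
    x = np.linspace(0.0, 1.0, n + 2)[1:-1]
    h = x[1] - x[0]

    def rhs(t, u):
        dudt = np.empty_like(u)
        dudt[0] = (u[1] - 2 * u[0]) / h**2            # zero left boundary value
        dudt[-1] = (u[-2] - 2 * u[-1]) / h**2         # zero right boundary value
        dudt[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
        return dudt

    u0 = np.sin(np.pi * x)
    sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF", rtol=1e-6, atol=1e-9)
    exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)
    print(np.max(np.abs(sol.y[:, -1] - exact)))       # spatial truncation error
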
%A A. D. Solomon
%A D. G. Wilson
%A V. Alexiades
%T An Approximate Analysis of the Formation of a Buoyant Solid Sphere in a Supercooled Melt
%R ORNL-6212
%I Oak Ridge National Laboratory
%C Oak Ridge, Tennessee
%D March 1986
%X A mathematical model is presented for the idealized formation and
development of a buoyant sphere solidifying in an infinite
pool of supercooled liquid. The solid and liquid are of the same
pure material and the solid is less dense than the liquid.
Initially the liquid is at a uniform temperature
that is below its equilibrium freezing temperature,
@T sub cr@, but above the so called hypercooled temperature,
@T sub cr@ - @H/c sub L ^@. Here @H@ and @c sub L@ are the latent
heat of solidification and the specific heat of the liquid respectively.
An approximate solution is derived based on the Megerlin approximation
method.
%A V. Alexiades
%A A. D. Solomon
%T Critical Radius for Nucleation
%R ORNL/TM-9931
%I Oak Ridge National Laboratory
%C Oak Ridge, Tennessee
%D April 1986
%X The free energy of formation and the critical radius for
homogeneous nucleation of a spherical nucleus in supercooled
liquid, at given temperature and ambient pressure, are
determined, taking fully into account surface tension, curvature,
and pressure effects. We allow the specific heats and densities
of the two phases to be different and all thermophysical
properties to be temperature dependent. In the simple case in
which classical nucleation theory is valid, our result predicts a
critical radius of about 80% of the classical value.
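
For orientation, the classical value to which the 80% figure refers comes
from balancing the surface and volume terms in the free energy of a spherical
nucleus. With sigma the surface tension, Delta T = T_cr - T the supercooling,
and H the latent heat per unit volume, the textbook (classical nucleation
theory) form is

    \Delta G(r) \;=\; 4\pi r^{2}\sigma \;-\; \tfrac{4}{3}\pi r^{3}\,\Delta G_{v},
    \qquad
    r_{c} \;=\; \frac{2\sigma}{\Delta G_{v}} \;\approx\; \frac{2\sigma\,T_{cr}}{H\,\Delta T},

where \Delta G_{v} \approx H\,\Delta T / T_{cr} is the bulk free-energy
difference per unit volume between supercooled liquid and solid (Turnbull's
approximation). The report keeps the surface tension, curvature, pressure,
and temperature-dependence effects that this simple form drops, and its
result is about 80% of r_c.
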
%A V. Alexiades
%A A. D. Solomon
%A D. G. Wilson
%T The Formation of a Solid Nucleus in Supercooled Liquid
%R ORNL-6280
%I Oak Ridge National Laboratory
%C Oak Ridge, Tennessee
%D June 1986
%X A careful analysis of the thermodynamics of a phase transition involving
supercooled liquid is presented. Relations are developed for equilibrium
temperature, pressures and volumes of solid and liquid from conservation
relations and mathematical statements of thermal, mechanical and chemical
equilibrium. Surface area, curvature and pressure effects are considered.
Initial data are the temperature, pressure and volume of a supercooled
liquid in which nucleation is assumed to have occurred. Generalized
Clapeyron type equations in the presence of curved interfaces and a
generalization of the Gibbs-Thomson relation between curvature of the phase
change interface and the transition temperature at the interface are derived
and discussed.
%A A. D. Solomon
%A D. G. Wilson
%T A Stefan-Type Problem With Void Formation and its Explicit Solution
%R ORNL-6277
%I Oak Ridge National Laboratory
%C Oak Ridge, Tennessee
%D June 1986
%X The density of most materials changes under melting and freezing.
For certain materials being considered as potential latent heat storage
materials for use in a space station, this density change may be large.
A density change induces a change of volume of the material occupying a
given container and this creates vapor "bubbles" or "voids."
We have formulated a problem that models a phase change process with
such a void and that has an explicit solution.
We examine the solution to gain a further understanding of the thermal
and phase change performance of a material in which such voids form.
We also look at the related problem of melting a material in which a void
is initially present.
Examples are given of the performance of a lithium fluoride phase
change material subject to melting and freezing.
%A A. D. Solomon
%A V. Alexiades
%A D. G. Wilson
%A G. A. Geist
%A J. Kerper
%T An Enthalpy Method for the Solidification of a Supercooled Liquid
%R ORNL-6217
%I Oak Ridge National Laboratory
%C Oak Ridge, Tennessee
%D March 1986
%X The classical enthalpy formulation of phase change processes does
not permit modeling solidification of supercooled liquids. This
report describes an enthalpy-based method for simulating a
freezing process involving a planar front moving into a
supercooled but not hypercooled liquid. The method is applicable
when the liquid is initially below its equilibrium freezing
temperature, T_cr, but above T_cr - H/c_L.
Here H and c_L are the latent heat of solidification and
the specific heat of the liquid, respectively. The report also
discusses a computer implementation of the method in one dimension
and results of its application to a particular problem
with an explicit solution.
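
For orientation, the classical enthalpy method that this report generalizes
can be sketched in Python as below for ordinary (non-supercooled) melting at
T = 0: the enthalpy e = c T + L f, with liquid fraction f, is updated
explicitly from the heat fluxes and the temperature is recovered from e by
the solid/mushy/liquid relation.  The material constants and boundary
conditions are invented for illustration.

    import numpy as np

    c, L, k = 1.0, 2.0, 1.0               # specific heat, latent heat, conductivity
    n, dx = 100, 1.0 / 100
    dt = 0.4 * c * dx**2 / k              # explicit stability limit

    def temperature(e):
        """Temperature from enthalpy: solid (e < 0), mushy (0 <= e <= L), liquid."""
        return np.where(e < 0, e / c, np.where(e > L, (e - L) / c, 0.0))

    e = np.full(n, -c * 1.0)              # solid, initially at T = -1
    for step in range(2000):
        T = temperature(e)
        Tl = np.concatenate(([1.0], T[:-1]))   # hot boundary T = 1 drives melting
        Tr = np.concatenate((T[1:], [T[-1]]))  # insulated right boundary
        e += dt * k * (Tl - 2 * T + Tr) / dx**2
    print(temperature(e)[:10])            # melted region near the left boundary
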
%A A. D. Solomon
%A V. Alexiades
%A D. G. Wilson
%A J. Greenberg
%T A Hyperbolic Stefan Problem with Discontinuous Temperature
%R ORNL-6216
%I Oak Ridge National Laboratory
%C Oak Ridge, Tennessee
%D March 1986
%X In an earlier paper a Stefan problem for hyperbolic heat transfer
was formulated under the condition of continuity of the
temperature at the moving front. In this paper we examine this
assumption and present an alternative model in which continuity
is not imposed. For this case, a problem with an explicit
solution is presented, as are some qualitative properties of
solutions to some similar problems. In addition, we derive
quasi-steady approximate solutions to certain other benchmark
problems.
##### PENN #####
---------------
The following report(s) may be obtained from:
Computer Science Department
Whitmore Lab
Pennsylvania State University
University Park, PA, 16802, U.S.A.
Submitted by: Alex Pothen <pothen@psuvax1.bitnet>
---------------
%A Alex Pothen
%T Equilibrium graphs in structural optimization
%R CS-86-22
%I Computer Science Department, Penn State Univ.
%C Whitmore Lab, Pennsylvania State University, University Park, PA, 16802
%D August, 1986
%X We introduce the equilibrium graph, the bipartite graph of
the equilibrium matrix, as a useful tool in the computation of
a sparse self-stress matrix.
This matrix is a basis for
the null space of the equilibrium matrix, and is needed in the
force method of structural optimization.
We argue that this graph conveys more information about the
physical structure than the equilibrium matrix.
The concept of this graph helps in
the design of an equilibrium graph algorithm to compute a
self-stress matrix.
The algorithm has two phases: in the first graph phase,
a sizable number of the columns of the self-stress matrix
are computed from the equilibrium graph.
In a second triangular phase, the triangular algorithm (Coleman and Pothen
(1986)) is used to compute the rest of the columns.
For two of the problems, this algorithm computes the sparsest self-stress
matrices; for the others, it speeds up the computations considerably.
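
The self-stress matrix is a basis for the null space of the equilibrium
matrix, and a generic dense way to compute such a basis is sketched below in
Python via the singular value decomposition.  The point of the report is
precisely that this dense basis is not sparse, which the equilibrium-graph
algorithm is designed to remedy; the small matrix is invented for
illustration.

    import numpy as np

    def self_stress_basis(E, tol=1e-10):
        """Columns spanning the null space of the equilibrium matrix E,
        computed densely from the SVD (the basis is in general not sparse)."""
        _, s, Vt = np.linalg.svd(E)
        rank = int(np.sum(s > tol * s[0]))
        return Vt[rank:].T

    # Tiny invented equilibrium matrix: 2 equations, 4 member forces.
    E = np.array([[1.0, -1.0, 0.0, 0.0],
                  [0.0,  1.0, -1.0, 1.0]])
    N = self_stress_basis(E)
    print(np.allclose(E @ N, 0.0), N.shape)   # 2-dimensional null space
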
##### PURDUE #####
---------------
The following report(s) may be obtained from:
Manolis Vavalis
Computer Science Department
Purdue University
<mav@purdue.arpa>
Submitted by: Manolis Vavalis
---------------
%A E. N. Houstis
%A E. A. Vavalis
%A J. R. Rice
%T Convergence of O(h^4) Cubic Spline Collocation Methods
for Elliptic Partial Differential Equations
%I Computer Science Dept., Purdue University.
%D May 15, 1986.
%P 33
%X This paper presents a new class of collocation methods using cubic
splines for solving elliptic partial differential equations (PDEs).
The error bounds obtained for these methods are optimal. The methods
are formulated and a convergence analysis is carried out for a broad
class of elliptic PDEs. Experimental results confirm the optimal
convergence and indicate that these methods are computationally more
efficient than methods based on either collocation with Hermite cubics
or on Galerkin with cubic splines. Various iterative methods have been
used to solve the cubic spline collocation equations.
##### WATERLOO #####
---------------
The following report(s) may be obtained from:
Andy Conn
Computer Science Department
University of Waterloo
Waterloo, Ontario N2L 3G1
<arconn@waterloo.csnet> or <na.conn@su-score.arpa>
Submitted by: Andy Conn.
---------------
%A A.R. Conn
%A N.I.M. Gould
%A Ph.L. Toint
%T Global convergence of a class of trust region algorithms for
optimization with simple bounds.
%R CS-86-31
%I Computer Science Department, University of Waterloo
%C Waterloo, Ontario, CANADA N2L 3G1
%X This paper extends the known excellent global convergence properties of trust
region algorithms for unconstrained optimization to the case where bounds on
the variables are present. Weak conditions on the size of the Hessian
approximations are considered. It is also shown that the proposed algorithms
reduce to an unconstrained calculation after finitely many iterations, allowing
a fast asymptotic rate of convergence.
%A A.R. Conn
%A N.I.M. Gould
%A Ph.L. Toint
%T Testing a class of methods for solving minimization problems
with simple bounds on the variables.
%R CS-86-45
%I Computer Science Department, University of Waterloo
%C Waterloo, Ontario, CANADA N2L 3G1
%X We describe the results of a series of tests upon a class of new
methods of trust region type for solving the simple bound constrained
minimization problem. The results are encouraging and lead us to believe
that the methods will prove useful in solving large problems.
---------------
The following report(s) may be obtained from:
Henry Wolkowicz
Department of Combinatorics and Optimization
University of Waterloo
Waterloo, Ontario N2L 3G1
<hwolkowicz@waterloo.csnet>
Submitted by: Henry Wolkowicz
---------------
%T A Volume and Constraint Reducing Algorithm for Linear Programming
%A Henry Wolkowicz
%A Adi Ben-Israel
%X An algorithm is presented for solving the linear
programming problem. At each iteration, the algorithm moves through the
middle of the feasible set to a nonadjacent vertex. It
simultaneously discards 'half' of the feasible set and at least
one constraint is made redundant. The algorithm is unaffected by
degeneracy.
##### WEIDLINGER #####
---------------
The following report(s) may be obtained from:
Victor Pereyra
Weidlinger Associates
620 Hansen Way
Palo Alto, CA 94304
<pereyra@su-score.arpa>
Submitted by: Victor Pereyra
---------------
%A V. Pereyra
%A G. Wojcik
%T Numerical Methods for Inverse Problems in Three-Dimensional Geophysical
Modeling
%I Weidlinger Associates
%C 620 Hansen Way, Palo Alto, CA 94304
%P 70
%D July 31, 1986