[comp.doc.techreports] tr-input/uklirb

leff@smu.CSNET (Laurence Leff) (12/07/87)

This is our latest list of abstracts of SEKI REPORTS and SEKI WORKING PAPERS
for both 1986 and 1987.

PhD theses can be ordered at a nominal fee of US$ 10, SEKI REPORTS will cost
US$ 2, and SEKI WORKING PAPERS (which are mostly written in German) will cost
US$ 1.50, unless there is an exchange agreement.

There is, however, another way to obtain SRs and SWPs. You can subscribe to all
papers published during a year at a price of US$ 60. You will get each SR and
SWP immediately after its publication. The fee covers both postage and packing.
If you want to subscribe, please write (don't e-mail) and enclose a cheque.
The subscription will end after one year unless you explicitly order and pay
for another year.

If you want to order any publications, please enclose a cheque payable to
the university with the appropriate amount either in US$ or its equivalent in
your own currency. Cash is welcome, too.

Please, send your orders to the following address:

Scarlet Noekel
Universitaet Kaiserslautern
Fachbereich Informatik
Postfach 3049
6750 Kaiserslautern
West Germany

Phone: +49-631-205-2802
UUCP: ...!mcvax!unido!uklirb!scarlet

-------------------------------------------------------------------------


SEKI REPORTS 1987





SEKI REPORT SR-87-01   (modified PhD Thesis)

Karl-Hans Blaesius

Equality Reasoning Based on Graphs

ABSTRACT: The theoretical and practical problems of equality rea-
soning  in  Automated  Deduction  are  notorious. A new method is
presented to cope with the enormous search  that  usually  arises
when  equational axioms are present. Starting from an empty graph
a production system constructs graphs which  represent  solutions
for simpler problems defined by abstraction. These graphs contain
global information and are plans for guiding  the  search  for  a
proof  of  the  original problem, represented in the final graph.
The construction of equality graphs is based on the idea of
reducing the differences between two terms by separating the
top-level symbol and the subterms of a functional term. The
impact of the explicit representation of information contained
in the inference
system on the control of inferences is discussed in this work and
the  method  is compared to other equality reasoning methods. The
basic principle of our method is  formalized  and  soundness  and
completeness results are given. An implementation of this method
confirmed its adequacy for equality reasoning. Earlier implemen-
tations led to the rejection of previous approaches but guided
the development of our final inference system.



SEKI REPORT SR-87-02   (PhD Thesis)

Walter Olthoff

The Connection between Applicative and Procedural Languages in an
Integrated Software Development and Verification System

ABSTRACT:  Professional production of verified software is incon-
ceivable  without  using  sophisticated  software engineering en-
vironments that either support or even enforce  the  verification
of each development step. In modern systems not only several but
also fundamentally different languages are offered to a user. In
order  to  solve the problem of verifying the transition from ap-
plicative to procedural levels, we present an algebraic  specifi-
cation  language  and  a  programming language in a common formal
framework. The denotational semantics of our languages are based on
concepts of abstract data type theory. The procedural programming
language ModPascal is introduced as an extension of standard Pas-
cal   by  constructs  and  concepts  of  algebraic  specification
languages. We develop an algebraic notion of correct  realization
that makes use of so-called representation objects which serve as
a connection between language levels. A semi-automatic method
for  the correctness proof is given; reference is made to the In-
tegrated Software Development and Verification  System  in  which
the practical feasibility of the approach has been demonstrated.



SEKI REPORT SR-87-03

Manfred Schmidt-Schauss

Unification in Permutative Equational Theories is Undecidable

ABSTRACT: An equational theory E is permutative if in every valid
equation s =E t the terms s and t have the same symbols with the
same number of occurrences. The class of permutative equational
theories  includes  associativity  and commutativity and hence is
important for unification  theory,  for  term  rewriting  systems
modulo  equational  theories  and  corresponding  completion pro-
cedures. It is shown in this research note that there is no algo-
rithm  that  decides  E-unifiability of terms for all permutative
theories.

        The proof  technique  is  to  provide  for  every  Turing
machine  M  a  permutative theory with a confluent term rewriting
system such that narrowing on certain terms simulates the  Turing
machine M.



SEKI REPORT SR-87-04

Rolf Socher

Boolean Simplification of First Order Formulae

ABSTRACT: For a deduction system it is paramount to recognize the
equivalence or subsumption of formulae already at an early stage
of the search for a proof. Since the equivalence of two formulae
is undecidable in general, we consider the syntactic criterion of
equality up to a renaming of the bound variables. We give an
algorithm that decides whether two closed formulae are equivalent
in this restricted sense. It will become apparent that this prob-
lem is closely related to the graph isomorphism problem.

Furthermore an algorithm is given that decides a special form  of
subsumption  for  literal  sets.  The  simplification of formulae
based on the recognition of equivalence and subsumption  is  par-
ticularly  important for formulae that arise in program verifica-
tion tasks.
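
The restricted equivalence test of the first paragraph can be illustrated for a toy formula representation (this is a sketch only, not Socher's algorithm: the representation, names, and the assumption that atoms contain only bound variables are all hypothetical). For such formulae, equality up to renaming of bound variables reduces to comparing De Bruijn-style nameless translations, in which each bound variable is replaced by its distance to the binder:

```python
# Sketch: decide equivalence up to bound-variable renaming by
# translating formulae into a nameless (De Bruijn) form.
# Formulae are tuples: ('forall', x, body), ('exists', x, body),
# connectives ('and'/'or'/'not', ...), or atoms ('P', x1, ..., xn)
# whose arguments are bound variables.  All illustrative.

def debruijn(formula, env=()):
    """Translate a formula into a nameless form."""
    op = formula[0]
    if op in ('forall', 'exists'):
        _, var, body = formula
        return (op, debruijn(body, (var,) + env))       # push binder
    if op in ('and', 'or', 'not'):
        return (op,) + tuple(debruijn(f, env) for f in formula[1:])
    # atom: replace each variable by its binder distance
    return (op,) + tuple(env.index(v) for v in formula[1:])

def alpha_equivalent(f, g):
    """True iff f and g differ only in the names of bound variables."""
    return debruijn(f) == debruijn(g)
```

The interesting cases, which connect the problem to graph isomorphism, arise from richer formula structure (e.g. commutative connectives) than this toy representation admits.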



SEKI REPORT SR-87-05 (PhD Thesis)

Alex Herold

Combination of Unification Algorithms in Equational Theories

ABSTRACT: Unification in equational theories, i.e., solving equa-
tions  in  varieties, is a basic operation in Artificial Intelli-
gence, in Computer Science and in particular in the field of  Au-
tomated  Deduction. A shortcoming of most known unification algo-
rithms is that they are usually designed for a restricted set  of
terms only.

A method is presented to combine unification algorithms for  spe-
cial equational theories (with disjoint sets of function symbols)
to overcome this drawback. This method  applies  to  regular  and
collapse-free equational theories (a theory is regular if all its
axioms have the same variables on each side and  it  is  collapse
free  if  it does not contain an axiom of the form x = t, where x
is a variable and t is a non-variable term). The  basic  idea  is
first  to replace certain subterms by constants and then to unify
these constant abstractions. In a  recursive  step  the  replaced
subterms   are  considered  in  turn.  Provided  the  constituent
theories are finitary (a finite minimal complete set of  unifiers
always  exists)  the combination algorithm is shown to be totally
correct. The termination proof is based on an idea of Fages' ter-
mination proof for Stickel's associative and commutative unifica-
tion algorithm.

The abstract algorithm is applied to the theory of  associativity
and  commutativity  thus  extending  the algorithm of Livesay and
Siekmann to the whole class of first order terms. As a second ex-
ample of application the theory of commutativity is discussed. In
order to investigate the limitations of the approach  above,  the
theory of idempotence, which is a collapse theory, is studied and
a new idempotent unification algorithm for  the  class  of  first
order terms is presented.
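
The basic constant-abstraction step described above can be sketched as follows. This is a hypothetical illustration, not Herold's algorithm: terms are tuples headed by a function symbol, variables are plain strings, and maximal "alien" subterms (those headed by a symbol outside the current theory) are replaced by fresh constants, so that each theory's unification algorithm only ever sees its own symbols:

```python
# Sketch of constant abstraction for combining unification
# algorithms over disjoint signatures.  Representation: a variable
# is a str, a term is a tuple (f, arg1, ..., argn); a constant is a
# 1-tuple.  Names and structure are illustrative only.

def abstract_aliens(term, own_symbols, fresh, mapping):
    """Replace maximal alien subterms of `term` by fresh constants.

    own_symbols -- function symbols belonging to the current theory
    fresh       -- iterator producing fresh constant names
    mapping     -- dict recording fresh constant -> replaced subterm
    """
    if isinstance(term, str):            # variable: keep as-is
        return term
    head = term[0]
    if head in own_symbols:              # own symbol: descend into args
        return (head,) + tuple(
            abstract_aliens(a, own_symbols, fresh, mapping)
            for a in term[1:])
    # alien subterm: abstract to a constant, reusing one if this
    # subterm was seen before (so equal aliens stay unifiable)
    for const, sub in mapping.items():
        if sub == term:
            return (const,)
    const = next(fresh)
    mapping[const] = term
    return (const,)
```

In the recursive step of the combination method, the recorded subterms in `mapping` would then be unified in their own theories in turn.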



SEKI REPORT SR-87-06 (Diploma Thesis)

Michael Beetz

Specifying Meta-Level Architectures for Rule-Based Systems

ABSTRACT: Explicit  and  declarative  representation  of  control
knowledge  and  well-structured  knowledge  bases are crucial re-
quirements for efficient development and maintenance of rule-
based systems. The CATWEAZLE rule interpreter allows knowledge
engineers to meet these requirements by partitioning  rule  bases
and specifying meta-level architectures for control.

        Among others, the following problems arise when providing
tools for specifying meta-level architectures for control:



        1. What is a suitable language to specify meta-level
	   architectures for control?

        2. How can a general and  declarative  language  for
	   meta-level  architectures  be  efficiently interpreted?



The thesis outlines solutions to both research questions provided
by the CATWEAZLE rule interpreter:



        1. CATWEAZLE provides a small set of concepts based
           on a separation of control knowledge into control
           strategies and control tactics and a further
           categorization of control strategies.

        2. For rule-based systems it is efficient to  extend
	   the RETE algorithm such that control knowledge can 
	   be processed, too.



SEKI REPORT SR-87-07

Werner Nutt, Pierre Rety and Gert Smolka

Basic Narrowing Revisited

ABSTRACT: In this paper we study basic narrowing as a method  for
solving  equations  in  the initial algebra specified by a ground
confluent and terminating term rewriting system. Since we are in-
terested in equation solving, we don't study basic narrowing as a
reduction relation on terms but consider immediately its reformu-
lation as an equation solving rule. This reformulation leads to a
technically simpler presentation and reveals that the essence  of
basic narrowing can be captured without recourse to term unifica-
tion.

        We present an equation  solving  calculus  that  features
three classes of rules. Resolution rules, whose application is
don't know nondeterministic, are the basic rules and suffice for
a complete solution procedure. Failure rules detect inconsistent
parts of the search space. Simplification rules,  whose  applica-
tion  is  don't  care  nondeterministic, enhance the power of the
failure rules and reduce  the  number  of  necessary  don't  know
steps.

        Three of the presented simplification rules are new.  The
rewriting  rule  allows for don't care nondeterministic rewriting
and thus yields a marriage of basic  and  normalizing  narrowing.
The safe blocking rule is specific to basic narrowing and is
particularly useful in conjunction with the rewriting rule. Finally,
the unfolding rule allows for a variety of search strategies that
reduce the number of don't know alternatives that need to be  ex-
plored.

        Keywords: Equation solving, Universal  Unification,  Nar-
rowing, Basic Narrowing, Normalizing Narrowing, Rewriting.

        Acknowledgements: Nutt and Smolka are funded by the  Bun-
desminister für Forschung und Technologie under grant ITR8501A.



SEKI REPORT SR-87-08

Hans-Juergen Buerckert

Matching -  A Special Case of Unification?

ABSTRACT: Usually matching is considered as a special form of un-
ification.  Hence  most  research in unification theory  does not
take care of the problems arising with matching. After a  discus-
sion  of the various definitions of matching in the literature we
compare matching and unification in the more general framework of
restricted unification. Restricted unification is unification of
terms in which not all variables may be substituted for; matching
and unification are, of course, special cases of restricted
unification.
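
For the syntactic (empty-theory) case, the relationship is easy to make concrete: matching is unification in which the variables of one side are "frozen", i.e. treated as constants. A minimal sketch, with an illustrative term representation (variables are strings, terms are tuples, and the right-hand side is assumed ground):

```python
# Sketch of syntactic matching: find a substitution s with
# s(pattern) == term, where the term side's symbols are frozen.
# Representation (illustrative): variable = str, term = tuple
# (f, arg1, ..., argn); a constant is a 1-tuple.

def match(pattern, term, subst=None):
    """Return a matching substitution as a dict, or None."""
    subst = dict(subst or {})
    if isinstance(pattern, str):             # variable in the pattern
        if pattern in subst:                 # already bound: must agree
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if isinstance(term, str) or pattern[0] != term[0] \
            or len(pattern) != len(term):
        return None                          # clash of frozen symbols
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst
```

The report's point is that in equational theories this "freezing" is not harmless: it amounts to extending the signature with free constants, and the counterexamples show the extension can destroy decidability.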

        We give results on the similar behaviour of matching and
unification with respect to the existence and the cardinalities
of minimal and complete solution sets (the unification and
matching hierarchy) in so-called collapse-free equational
theories, and we show under which conditions this can be
generalized to arbitrary theories. We also present counter-
examples where matching and unification behave differently; in
particular, we present an equational theory in which unification
is decidable but matching is undecidable.

        Matching and restricted unification as defined here are
equivalent to extending the given signature with free constants;
hence our results can also be interpreted as combination results
for combining unification in a given theory with unification in
the theory of free constants. Our counterexamples demonstrate
that this is not a trivial combination problem, since, for
example, unification might become undecidable under this
extension. The solution of this special combination problem is a
first step towards the more general case of combining unification
in a given collapse theory with unification of free functions.

        Key Words: equational theory, unification, matching, res-
tricted unification, collapse free theory, unification hierarchy,
combination of equational theories for unification.



SEKI REPORT SR-87-09

Michael Lessel, Harold Boley

5-UNIXPERT: Diagnosis of Printer Problems

ABSTRACT: The 5-UNIXPERT systems perform knowledge-based diag-
nosis in the domain of printers as part of the computer peri-
phery. They are implemented in LISPLOG, a functional/logical
language that provides the required programming spectrum, ranging
from operating-system calls to the explanation component.



SEKI REPORT SR-87-10

Rolf Socher

Graph Isomorphism: Some Special Cases

ABSTRACT: This paper investigates some special  cases  of  graphs
with  a  small number of vertices or edges where a characteristic
property of the vertices and edges already determines  the  graph
up  to isomorphism. We also present counterexamples that show the
limits of this approach to the  graph  isomorphism  problem.  The
main  interest  in graph isomorphism for automated deduction lies
in the fact that the problem of deciding whether a  clause  is  a
variant  of  another clause is a generalization of the graph iso-
morphism problem.

SEKI REPORT SR-87-11

Gert Smolka

TEL (Version 0.9) - Report and User Manual

ABSTRACT: TEL is a second generation logic programming language
integrating types and functions with relational programming à la
Prolog. Relations are defined as in Prolog and are executed by
typed  resolution  and  backtracking.  Functions are defined with
conditional equations and are executed by typed innermost rewrit-
ing.

        The most innovative aspect of TEL  is  its  type  system,
which accommodates parametric polymorphism as in ML and subtypes
as in OBJ2. Variables need  not  be  declared  since  TEL's  type
checker  infers their most general types automatically. Types are
present at runtime through typed matching and unification: values
are  tested  for  membership  in  subtypes and variables are con-
strained to subtypes.

        TEL is not a toy language. Almost the entire  TEL  system
has been written in TEL. TEL has a module facility supporting the
incremental construction of large programs. Furthermore, TEL sup-
ports  type-safe  file handling and other extra-logical manipula-
tions.

        Acknowledgements: This research was funded  by  the  Bun-
desministerium für Forschung und Technologie under grant ITR8501A.
Version 0.9 of TEL  is  being  implemented  by  Costa  Mossiadis,
Werner  Nutt, Reinhard Praeger, and Georg Seul. The first version
of TEL was implemented by Michael Schmitt.



SEKI REPORT SR-87-12

Michael A. McRobbie

Towards  Efficient Knowledge-Based Automated Theorem Proving  for
Non-Standard Logics

ABSTRACT: Researchers in  artificial  intelligence  and  computer
science have recently begun to take an interest in logics studied
by logicians and philosophers, in some cases for millennia. These
are  logics  such  as  deontic, epistemic, intuitionistic, modal,
paraconsistent, relevant and temporal logics, all of which are
usually collectively called non-standard logics. Given the nature
of much   of  this  interest,  many  of  these  researchers  have
correctly  pointed  to  the  importance  of  efficient  automated
theorem proving systems for these  logics.  However  very  little
work  has  actually been done on implementing such systems. Where
such systems have been implemented, they have  usually  not  been
able to prove anything other than elementary theorems of these
logics. In this paper we discuss a technique for greatly increas-
ing the efficiency of automated theorem provers for these logics.
In short, this technique takes advantage of the fact that while
most important non-standard logics do not have finite charac-
teristic models (in the sense that truth tables are a finite
characteristic model for classical propositional logic), they do
have finite models. These models validate all the theorems of a
given logic and some non-theorems; they invalidate the rest of
the non-theorems. Hence this technique involves using the
models  to  direct  a search by an automated theorem prover for a
proof by filtering out or pruning the items of the  search  space
which  the models invalidate. Consequently we call this technique
the model pruning technique. This  technique  does  rely  on  the
underlying  proof  method  of the automated theorem prover having
certain characteristics, but provided it does, this technique has
almost universal applicability.

        Key Phrases: Automated Theorem Proving, Non-Standard Log-
ics,  Non-Classical  Logics,  Many-Valued Logics, Knowledge-Based
Search Models.

        Computing Reviews Classification: F.4.1, I.2.3, I.2.8.



SEKI REPORT SR-87-13

Rolf Socher

Optimized Transformation Between Conjunctive and Disjunctive Nor-
mal Forms

ABSTRACT: Resolution based theorem proving  systems  require  the
conversion  of predicate logic formulae into clausal normal form.
One step of all procedures performing this transformation is  the
multiplication into conjunctive normal form. In general this is a
critical step, since it can cause an inflation  of  the  original
formula.  In  many cases the clausal normal form that is obtained
even contains a lot of redundant clauses. This paper  presents  a
multiplication  algorithm  that  avoids the generation of many of
these redundant clauses. It is shown that the set of clauses gen-
erated  by  this  algorithm is the set of prime implicants of the
original formula. In particular, in those cases where the usual
multiplication algorithm produces a contradictory set of ground
clauses, the improved algorithm generates the empty clause.
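
The critical multiplication step can be sketched as follows. The pruning shown here (dropping tautologies and subsumed clauses) is weaker than the prime-implicant construction the report describes, but it shows where the redundancy arises; the literal representation is illustrative:

```python
from itertools import product

# Sketch: distribute a disjunction of conjunctions into CNF,
# discarding tautological and subsumed clauses along the way.
# A literal is 'p' or its negation '-p' (illustrative encoding).

def complement(lit):
    return lit[1:] if lit.startswith('-') else '-' + lit

def multiply(dnf):
    """dnf: list of conjunctions, each an iterable of literals.
    Returns a CNF as a list of clauses (frozensets of literals)."""
    clauses = set()
    for combo in product(*dnf):          # one literal per conjunct
        clause = frozenset(combo)
        if any(complement(lit) in clause for lit in clause):
            continue                      # tautology: p and -p together
        clauses.add(clause)
    # a clause strictly containing another clause is subsumed
    return [c for c in clauses if not any(d < c for d in clauses)]
```

For a contradictory ground input such as p ∨ (-p ∧ q) with q also falsified elsewhere, this kind of pruning shrinks the output; the report's algorithm goes further and yields exactly the prime implicants.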



SEKI REPORT SR-87-14

Gert Smolka et al.

Order-Sorted Equational Computation

ABSTRACT: The expressive power of many-sorted equational logic
can be greatly enhanced by allowing for subsorts and multiple
function declarations. In this paper we study some computational
aspects of such a logic. We start with a self-contained intro-
duction to order-sorted equational logic, including initial alge-
bra semantics and deduction rules. We then present a theory of
order-sorted term rewriting and show that the key results for un-
sorted rewriting extend to sort decreasing rewriting. We continue
with a review of order-sorted unification  and  prove  the  basic
results.

        In the second part of the  paper  we  study  hierarchical
order sorted specifications with strict partial functions. We de-
fine the appropriate homomorphisms for strict algebras  and  show
that  every strict algebra is isomorphic to a strict algebra with
at most one error element. For  strict  specifications,  we  show
that their categories of strict algebras have initial objects. We
validate our approach to partial functions by proving  that  com-
pletely defined total functions can be defined as partial without
changing the initial algebra semantics. Finally, we  provide  de-
cidable sufficient criteria for the consistency and strictness of
ground confluent rewriting systems.

        Keywords:  Order-Sorted   Equational   Logic,   Algebraic
Specifications,  Initial  Algebra  Semantics,  Partial Functions,
Rewriting, Unification, Subsorts, Inheritance, Logic Programming.

        Acknowledgements: We are grateful to Hans-Juergen Buerck-
ert, Manfred Schmidt-Schauss and Joerg Siekmann for many discus-
sions. The research reported in this paper has been supported in
part by the Bundesminister fuer Forschung und Technologie, con-
tract ITR8501A, the Office of Naval Research, contracts N00014-
82-C-0333, N00014-85-C-0417 and N00014-86-C-0450, and a gift from
the System Development Foundation.





SEKI WORKING PAPERS 1987







SEKI WORKING PAPER SWP-87-01 (in German)

Ansgar Bernardi, Michael Dahmen, Manfred Meyer

LISPLOG Benutzerhandbuch

ABSTRACT: This paper is intended to serve as a user's manual  for
LISPLOG.

        It gives a short description of the syntax of LISPLOG to-
gether  with an explanation of all commands available on the top-
level of LISPLOG.

The concepts and the commands of the LISPLOG  tracer/breaker  are
also described.

        This manual should enable you to use the LISPLOG system.

It doesn't explain the concepts of the LISP/PROLOG integration or
its implementation details.



SEKI WORKING PAPER SWP-87-02 (in German)

Manfred A. Meyer

Entwurf und Implementierung einer Interaktionsumgebung für LISPLOG

ABSTRACT: This paper deals with the design and implementation of
an interactive programming environment for LISPLOG, an integra-
tion of LISP and PROLOG currently being developed at the
University of Kaiserslautern.

        This programming environment consists of a box-model
tracer with backtracing support and a convenient break package.

        Two innovative interactive  tools  for  non-deterministic
languages  that  have been explored in LISPLOG are a 'cut indica-
tor' and a 'manual cutter'.

        In addition, an extension of the box model for the  func-
tional part of LISPLOG is discussed and prototypically implement-
ed.

        Finally, several improvements of the user  interface  are
roughly  outlined,  and  the current implementation is critically
reviewed once again.



SEKI WORKING PAPER SWP-87-03

Harold Boley

Fone and Fall:  Forward-with-Backward Chaining  in LISPLOG

ABSTRACT:  A small extension for incorporating  forward  chaining
into  and  on  top  of  LISPLOG's  backward chaining framework is
presented.



SEKI WORKING PAPER SWP-87-04

Harold Boley

Goal: Backward-with-Forward Chaining in LISPLOG

ABSTRACT: A tiny extension for  performing  forward  chaining  to
prove  goals  set  up by LISPLOG's backward-chaining mechanism is
introduced.



SEKI WORKING PAPER SWP-87-05 (in German)

Volker Penner

Implementation  logischer Sprachen auf Multiprozessor-Systemen

ABSTRACT: The main contribution of this paper is a proposal for
implementing parallel logical languages on distributed systems
using

        - a parallel extension (PWAM) of Warren's abstract
          machine (WAM), in order to take advantage of proven
          techniques,

        - an approach for modularizing logical programs, in
          order to cope with restricted resources, and

        - an integration of committed-choice nondeterminism,
          in order to allow backtrack-free implementations.

The text is divided as follows: first we give an overview of the
main notions playing a dominant role in parallel languages, then
we introduce the basic process-oriented semantics and discuss the
way "or"-parallelism is handled. The starting point of the last
chapters is a brief description of the WAM, which is followed by
a discussion of the static structure and dynamic behavior of its
parallel counterpart (PWAM), stressing the basic ideas and
omitting technical details.





SEKI WORKING PAPER SWP-87-06 (in German)

Michael Dahmen

Modules and Streams in Lisplog

ABSTRACT: This paper discusses two extensions of the LISPLOG sys-
tem, called modules and streams.

The module system enables the LISPLOG user to build  his  program
from a number of separate databases (modules). The interaction
between the modules is defined by special interfaces. The  result
is a hierarchical structure of the LISPLOG database.

The second part describes an extension of the LISP-LISPLOG inter-
face for delayed computation of the solutions of a LISPLOG query.
This makes it possible to compute any number of solutions without
specifying that number before the computation starts. Even an
infinite number of solutions may be represented intensionally by
means of a stream.



SEKI WORKING PAPER SWP-87-08

Klaus Noekel

Conceptual Dependency Theory: ein Ueberblick   (in German)

ABSTRACT: The paper reflects the contents of a talk given at  the
2nd KL-AI-Workshop. Its main concern is the knowledge representa-
tion formalism of CD theory. We begin with an introduction to the
syntax  and  semantics  of CD structures. Next, we explain how CD
structures can be put to use in the representation of the meaning
contents of utterances. Primitive ACTs play an important role
in the transition from the input text to CD structures  and  lead
to  a general discussion of the principle of CD theory that mean-
ing should be represented using  a  small  (minimal?)  number  of
primitives. The exposition of the static aspects of CD structures
is complemented by a brief look at conceptual inference rules and
the way that CD theory was actually used in Schank's programs
MARGIE and SAM. We close with a summary  of  critical  statements
made  by both Schank and other researchers in the field. Pointers
to the literature are provided.

Key Words: natural language processing, knowledge representation,
semantic nets.

leff@smu.UUCP (Laurence Leff) (08/20/89)

  
============================================================================
Project 'High Performance Transaction Systems' 
Data Management Systems Institute (Prof. Haerder)
Dept. of Computer Science
Univ. of Kaiserslautern, West Germany
 
Technical Reports  (1985 - 1989)

If you wish to request a copy of one of the following reports,
please contact: 
                  Dr. Erhard Rahm
                  Dept. of Computer Science
                  Univ. Kaiserslautern
                  P.O. Box 3049
                D-6750 Kaiserslautern, West Germany
                 E-mail: rahm@uklirb.uucp
                   
============================================================================
A. Reuter: 
Load Control and Load Balancing in a Shared Database Management System,
TR 129/85, March 1985
 
In shared DBMS, there is a need for dynamically setting a large number
of operational parameters, like transaction routing tables, assignment
of central resources, multiprogramming level etc. It is shown that
static techniques as they are applied in current centralized DBMS are
inadequate for exploiting the performance potential of a shared DBMS.
We therefore plead for using dynamic load control methods for adaptive
modification of the DBMS's operational parameters. After a brief survey
of the state of the art in database load control, we outline an approach 
for load balancing in a shared system, which consists of both analytic
models and heuristics. The key concepts are illustrated by a simple
example, and a number of areas for future research are identified. 
 
This paper appeared in 2nd Int. Conf. on Data Engineering, pp. 188-197, 1986
============================================================================
E. Rahm:
Primary Copy Synchronization for DB-Sharing, TR 137/85, July 1985

In  a  database  sharing  (DB-Sharing) system, multiple loosely or
closely  coupled  processors  share  access  to  a  single  set of
databases.  Such  systems  promise  better availability and linear
growth  of  transaction  throughput  at  equivalent  response time
compared to single processor database systems. The efficiency of a
DB-Sharing system heavily depends on the synchronization technique
used  for  maintaining consistency of the shared data. A promising
algorithm  is the primary copy approach which will be presented in
this paper. We describe the actions of the lock manager in a basic
and  in  a more advanced version. Furthermore, it is shown how the
lock  managers  can  be  enabled to deal with the so-called buffer
invalidation problem that results from the existence of a database
buffer in each processor. 
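
A toy sketch of the primary copy idea, with hypothetical names and a deliberately static partitioning: the lock space is divided among the processors, each partition is administered by one lock manager (its primary copy authority), and only requests for remotely administered objects cost messages:

```python
# Sketch of primary copy locking for DB-Sharing.  All names, the
# hash-free partitioning rule, and the message counter are
# illustrative; real systems adapt the allocation dynamically.

class PrimaryCopyLocking:
    def __init__(self, processors):
        self.processors = processors
        self.locks = {}              # object -> current lock holder
        self.messages = 0            # messages for remote requests

    def authority(self, obj):
        # static partitioning by first character (sketch only)
        return self.processors[ord(obj[0]) % len(self.processors)]

    def lock(self, requester, obj):
        if self.authority(obj) != requester:
            self.messages += 2       # request + grant cross the net
        if obj in self.locks and self.locks[obj] != requester:
            return False             # conflict: requester must wait
        self.locks[obj] = requester
        return True

    def unlock(self, requester, obj):
        if self.locks.get(obj) == requester:
            del self.locks[obj]
```

The efficiency argument in the report rests on locality: requests for locally administered objects need no messages at all.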
 
An improved version of this report appeared in INFORMATION SYSTEMS,
Vol. 11, No. 4, pp. 275-286, 1986
=========================================================================
E. Rahm:
A Reliable and Efficient Synchronization Protocol for Database Sharing
Systems, TR 139/85, Aug. 1985

We describe how the primary copy protocol for DB-sharing can be extended
to continue concurrency control after processor crashes. 
In particular, mechanisms are proposed to reconstruct lost lock
tables and to adapt the PCA allocation. The redundancy required for
reconstruction of lost lock information can be provided with nearly
no extra costs during normal processing. We outline additional recovery
actions which can partly be processed in parallel to permit a fast
crash recovery. 

This paper appeared in Proc. 3rd Int. Conf. on Fault-Tolerant 
Computing Systems, pp. 336-347, 1987 (Springer-Verlag) 
============================================================================
E. Rahm:
Buffer Invalidation Problem in DB-Sharing System, TR 154/86, Feb. 1986

In  the  near future large transaction processing systems will have to meet 
high requirements concerning throughput, response times, availability and
modular growth. A possible architecture for such high performance systems is 
DB-Sharing  where multiple loosely or closely coupled processors share the
common database at the disk level. Unfortunately, DB-Sharing has a buffer 
invalidation problem since each processor maintains a database buffer of its 
own and, therefore, multiple copies of a database page may reside in multiple 
buffers. Modification of any copy will thus invalidate all other copies.
This  paper  provides  a spectrum of solutions to  this  buffer  invalidation 
problem.  It  turns  out  that the most effective solutions are  feasible  if 
synchronization  and buffer control can be combined.  Further  sections  deal 
with  the implications of a processor crash and the use of a global  database 
buffer in the context of buffer invalidation. 
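
One family of solutions, in which synchronization and buffer control are combined, can be sketched with per-page version counters piggybacked on lock grants; the class names and structure below are illustrative, not taken from the report:

```python
# Sketch: detect invalidated buffer pages by comparing a buffered
# copy's version with a global version counter maintained by the
# lock manager.  All names are illustrative.

class GlobalLockManager:
    def __init__(self):
        self.version = {}            # page -> latest committed version

    def acquire(self, page):
        # version piggybacked on the lock grant (no extra message)
        return self.version.get(page, 0)

    def release_after_update(self, page):
        self.version[page] = self.version.get(page, 0) + 1

class NodeBuffer:
    def __init__(self, glm):
        self.glm = glm
        self.pages = {}              # page -> (version, data)

    def read(self, page, fetch):
        """fetch: callable reading the page from shared disk."""
        current = self.glm.acquire(page)
        cached = self.pages.get(page)
        if cached is None or cached[0] < current:  # miss or stale copy
            self.pages[page] = (current, fetch())
        return self.pages[page][1]

    def write(self, page, data):
        self.glm.acquire(page)
        self.glm.release_after_update(page)
        self.pages[page] = (self.glm.version[page], data)
```

Because the version check rides on the lock request, stale copies are detected without any additional message traffic, which is the point of combining the two components.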

A subset of this report appeared as 'Integrated Solutions to Concurrency
Control and Buffer Invalidation' in Proc. 2nd Int. Conf. on Computers
and Applications, pp. 410-417, 1987
=========================================================================
E. Rahm:
Concurrency Control in DB-Sharing Systems, 1986
  
This paper gives an overview of concurrency control algorithms for
data sharing (DB-Sharing). We distinguish between locking and optimistic 
methods and between centralized and distributed solutions. 
Five synchronization protocols are described in some detail and 
compared with each other. 
 
This paper appeared in Proc. 16th Annual Conf. of the German Computer
Society GI, pp. 617-632, 1986 (Springer-Verlag)
=============================================================================
T. Haerder:
Handling Hot Spot Data in DB-Sharing Systems, IBM Research Report 
RJ 5523, 1987
 
We review various solutions for concurrency control on aggregate data
where the operations to be synchronized commute - at least for certain
value ranges. In particular, the escrow mechanism for centralized DBMS
is discussed and extended. Our investigations focus on the escrow
mechanism for a data sharing environment. We propose the use of global
escrow services which may be called asynchronously. Such an optimistic 
attitude seems to be appropriate, since rejections of escrow
operations typically are rare events. The performance of the proposed
scheme may be further improved by refining it to a hierarchical escrow
scheme with a global escrow and distributed local escrows. 
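
The escrow idea for commutative operations on aggregate data can be sketched as follows. This is a minimal single-site illustration, not the hierarchical global/local scheme proposed in the paper: an increment or decrement is granted only if even the worst-case outcome of all uncommitted operations keeps the value within its allowed range, so concurrent transactions on the same hot spot rarely block:

```python
# Sketch of an escrow counter: commutative deltas are granted
# against the worst-case committed value.  Names are illustrative.

class EscrowCounter:
    def __init__(self, value, low=0):
        self.value = value           # committed value
        self.low = low               # constraint: value must stay >= low
        self.pending = []            # uncommitted deltas

    def escrow(self, delta):
        # worst case: every pending decrement (and this one, if it
        # is a decrement) commits, while no pending increment does
        worst = (self.value
                 + sum(d for d in self.pending if d < 0)
                 + min(delta, 0))
        if worst < self.low:
            return False             # could violate the constraint
        self.pending.append(delta)
        return True

    def commit(self, delta):
        self.pending.remove(delta)
        self.value += delta
```

Rejections occur only when the reserve is genuinely exhausted, which matches the report's observation that they are rare events.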

This paper appeared in Information Systems, Vol. 13, No. 2, pp. 155-166, 1988
=============================================================================
E. Rahm:                                                    
Performance Evaluation of Primary Copy Synchronization in 
Database Sharing Systems, TR 165/87, Feb. 1987

In this paper we investigate the performance of the
primary copy protocol which has been proposed for concurrency
and coherency control in data sharing systems. A trace-driven
simulation system has been developed where the synchronization 
component together with buffer management and logging components 
have been completely implemented. Detailed simulation results 
are presented for two transaction loads and a varying number of 
processors.  In addition, the performance impact of different 
communication costs and load distribution strategies (affinity 
based vs. random routing) is investigated.
========================================================================
E. Rahm:
Design of Optimistic Methods for Concurrency Control in Database Sharing
Systems, 1987
  
Optimistic methods promise fewer synchronization messages per transaction
than locking algorithms for concurrency control in database sharing
(DB-sharing) systems. We describe a new distributed protocol called broadcast
validation where the validations for a transaction are simultaneously
started at multiple processors by a broadcast message. Such a parallel
validation scheme is a prerequisite for reaching short response times and
high transaction rates. The use of timestamps permits simple and fast
validations as well as an integrated solution to the so-called buffer
invalidation problem that is introduced by the DB-sharing architecture.
Further improvements of our algorithm are proposed in order to reduce the
synchronization overhead and to allow a combination with a distributed
locking protocol, which is advisable for applications with higher conflict
probability. The communication and validation overhead of our algorithms is
quantified by simple estimates.
 
This report appeared in Proc. 7th Int. Conf. on Distributed Computing
Systems, pp. 154-161, 1987
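
The timestamp-based validation the abstract mentions can be sketched for a
single node as follows (not from the report; all names are hypothetical):
a transaction records the write timestamp of every object it reads, and at
validation it succeeds only if none of those objects has been committed by
another transaction in the meantime. In the distributed protocol the same
check would be started at all relevant processors by one broadcast message;
that part is not modeled here.

```python
# Hypothetical single-node sketch of timestamp-based OCC validation.

class Database:
    def __init__(self, data):
        self.data = dict(data)
        self.wts = {k: 0 for k in data}   # write timestamp per object
        self.clock = 0

class Transaction:
    def __init__(self, db):
        self.db = db
        self.read_ts = {}    # object -> timestamp observed at read time
        self.writes = {}     # deferred writes, installed only on commit

    def read(self, key):
        if key in self.writes:
            return self.writes[key]
        self.read_ts[key] = self.db.wts[key]
        return self.db.data[key]

    def write(self, key, value):
        self.writes[key] = value

    def validate_and_commit(self):
        # Validation: every object read must still carry the timestamp
        # seen at read time; otherwise a conflicting commit intervened.
        for key, ts in self.read_ts.items():
            if self.db.wts[key] != ts:
                return False          # caller restarts the transaction
        self.db.clock += 1
        for key, value in self.writes.items():
            self.db.data[key] = value
            self.db.wts[key] = self.db.clock
        return True
```

Comparing timestamps makes validation a cheap per-object check, which is
what allows it to double as a buffer-invalidation test: a stale buffer page
carries a stale timestamp and fails the same comparison.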
=============================================================================
T. Haerder:
On Selected Performance Issues of Database Systems, 1987
 
Performance, data integrity, and user-friendly access to data are
considered the cardinal properties of DBMS. But performance will
steadily receive more attention as more interactive applications are
designed and implemented for almost all domains of our life.
We discuss specific performance problems of 'conventional' DBMS,
DB/DC systems, and DBMS for 'non-standard' applications (e.g.
engineering, office, etc.). Then, a short survey attempts to sketch
the solutions achieved and the problems remaining for the most
performance-critical DBMS functions/components. 

This paper appeared in Proc. 4th Conf. on Measurement, Modelling and
Evaluation of Computer Systems, pp. 294-312, 1987 (Springer-Verlag)
=============================================================================
E. Rahm: 
Design and Evaluation of Concurrency and Coherency Control Techniques 
for Database Sharing Systems, TR 182/88, May 1988
    
This paper provides a survey and an empirical performance evaluation of 
concurrency and coherency control protocols for DB-sharing. 
For synchronization a number of locking and optimistic protocols are discussed
as well as some design alternatives like multiversion concurrency control or
schemes for reduced consistency requirements. Strategies for coherency 
control are classified and their integration into the synchronization 
protocols is described. For a realistic performance comparison of the 
presented protocols a trace-driven simulation system for loosely coupled 
DB-Sharing systems has been developed.  Within this system six concurrency 
and coherency control protocols together with buffer management and 
logging components have been completely implemented; a table-driven 
approach is pursued for distributing the transaction load among the 
processors. In addition to the quantitative comparison of the protocols, 
the performance impact of different communication costs and load 
distribution strategies (affinity based vs. random routing) is investigated.
 
A discussion of the simulation results (for 4 protocols) can also be found in
'Empirical Performance Evaluation of Concurrency and Coherency Control
Protocols for Data Sharing', IBM Research Report RC 14325, 1988
===========================================================================
E. Rahm,  A. Thomasian:
Distributed Optimistic Concurrency Control for High Performance 
Transaction Processing, IBM Research Report, Yorktown Heights, 1989
 
The performance of high-volume transaction processing systems is 
determined by the degree of hardware and data contention.  This is 
especially a problem in the case of distributed systems with global 
transactions accessing and updating objects from multiple systems.
While the conventional two-phase locking method of centralized
systems can be adapted for concurrency control in distributed systems, 
it may restrict system throughput to very low levels.
This is due to a significant increase in lock holding times and
associated transaction waiting time for locks, as compared to 
centralized systems.  Optimistic concurrency control (OCC), which 
is similarly extensible to distributed systems, has the disadvantage of
repeated transaction restarts, which is a weak point of currently 
proposed methods.  We propose a new hybrid method based on OCC 
followed by locking, which is an integral part of distributed
validation and two-phase commit.  This advanced OCC method assures 
that a transaction failing its validation will not be re-executed 
more than once, in general. Furthermore, deadlocks, which are difficult 
to handle in a distributed environment, are avoided by serializing 
lock requests.  We outline implementation details and compare
the performance of the new scheme with distributed two-phase locking.
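
The two key ideas of the abstract, re-executing a failed transaction at
most once under locks and serializing lock requests to rule out deadlock,
can be sketched as follows (not the authors' implementation; names and the
single-node setting are assumptions):

```python
# Hypothetical sketch: optimistic first attempt, then one locked retry.
import threading

class LockTable:
    """Minimal lock table; acquiring locks in one global (sorted) order
    serializes lock requests, so no deadlock can arise."""
    def __init__(self):
        self.locks = {}

    def lock_all(self, keys):
        for key in sorted(keys):          # global order -> no deadlock
            self.locks.setdefault(key, threading.Lock()).acquire()

    def unlock_all(self, keys):
        for key in sorted(keys):
            self.locks[key].release()

def run_transaction(access_set, execute, validate, lock_table):
    # First attempt: optimistic, no locks held while executing.
    execute()
    if validate():
        return "committed optimistically"
    # Failed validation: re-execute once while holding locks on the
    # access set learned in the first run, so this attempt cannot be
    # invalidated again.
    lock_table.lock_all(access_set)
    try:
        execute()
        return "committed under locks"
    finally:
        lock_table.unlock_all(access_set)
```

In the paper's setting the locked retry is woven into distributed
validation and two-phase commit; this sketch only shows the control flow
on one node.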
==========================================================================
V. Bohn:
Performance Evaluation of a Distributed Database System by Means of
Discrete Event Simulation, 1989
 
We investigate the performance behavior of locally distributed database
systems which use front-end processors to allocate the workload to
transaction processing nodes. The performance analysis is based on
a complex simulation system which is driven by page reference strings
and employs discrete event handling for simulation control. 
Main parameters of the simulation system include the number of
nodes and the multiprogramming level. Response time and throughput
results are analysed as well as internal statistics (communication 
frequency, CPU utilization, etc.).

This paper appears in Proc. 3rd European Simulation Congress, Sep. 1989
==========================================================================
E. Rahm: 
A Framework for Workload Allocation in Distributed Transaction Systems,
Sept. 1989

Ever increasing demands for high transaction rates, limitations of
high-end processors, high availability and modular growth considerations
are all driving forces towards distributed architectures for transaction
processing.  A key prerequisite to take advantage of the capacity of a 
distributed transaction system, however, is an effective strategy for 
workload allocation. The distribution of the workload should not only 
achieve load balancing, but also support efficient transaction 
processing with a minimum of inter-system communication.  To this end, 
dynamic - yet efficient - schemes for transaction routing have to be 
employed which are highly responsive to configuration changes and 
workload fluctuations.
We develop a general framework for workload allocation in distributed 
transaction systems.  This framework is based on a classification of
distributed architectures and execution models for transaction
processing which determine the optimization potential for workload 
distribution.  The framework helps to identify key factors and 
alternatives in the design of appropriate allocation schemes. 
Furthermore, it facilitates a comparison of existing schemes and may 
guide the development of new, more effective protocols.
===========================================================================
===========================================================================
The following reports are available in GERMAN: 
 
T. Haerder, E. Rahm: 
Quantitative Analysis of a Synchronization Algorithm for Data Sharing, 1985
(conference paper, simulation study of extended pass-the-buck protocol)

E. Rahm: 
Closely Coupled Architectures for Data Sharing, 1986 (conference paper)
 
E. Rahm: 
Algorithms for Efficient Load Control in Multiprocessor Database Systems, 1986
(journal paper)
          
T. Haerder, E. Rahm: 
Multiprocessor Database Systems for High Performance Transaction Processing, 1986
(journal paper, classification of distributed architectures for database 
management)
         
T. Haerder, E. Rahm: 
High Performance Database Systems - Comparison and Evaluation of Current
Architectural Approaches and their Implementation, 1987
(journal paper, comparison of 'shared disk' and 'shared nothing')
          
K. Meyer-Wegener: 
Transaction Systems - Distributed Processing and Distributed Data Management,
1987 (journal paper)
 
T. Haerder:
Fault Tolerance in Transaction Processing Systems, 1987 (conference paper)

E. Rahm: 
Optimistic Concurrency Control in Centralized and Distributed Database Systems,
1988 (journal paper)

V. Bohn, T. Wagner: 
Cooperation between Load Control and Recovery in Data Sharing Systems, 1989

E. Rahm: 
The Data Sharing Approach to High Performance Transaction Processing, 1989
(journal paper, summarizes state of the art for 'shared disk' implementations)
    
V. Bohn:
Characteristics of Transaction Workloads in DB/DC Systems, 1989
(conference paper, workload characterization based on trace analysis)
 
V. Bohn, T. Wagner:
Transaction Chains - Concept and Implementation, 1989
(conference paper, implementation of 'sagas')
 
T. Haerder, K. Meyer-Wegener: 
Transaction Processing in Workstation/Server Environments, 1989

T. Haerder, E. Rahm:
Utilization of New Storage Architectures for High Performance Transaction
Processing, 1989
(use of shared semiconductor stores in data sharing systems)