[comp.doc.techreports] tr-input/ucsd.1

leff@smu.UUCP (Laurence Leff) (05/23/89)

The following technical reports on replicated data management are
available from UC Santa Cruz.

Please address correspondence to:

Technical Report Librarian
Baskin Center for Computer Engineering & Information Sciences
Applied Sciences Building
University of California
Santa Cruz, CA 95064


TR #88-35
Cost $4


The Effect of Failure and Repair Distributions on Consistency Protocols

                      John L. Carroll
                 San Diego State University

                     Darrell D. E. Long
            University of California, Santa Cruz

                          Abstract

The accessibility of vital information can be enhanced by
replicating the data on several sites, and employing a
consistency control protocol to manage the copies.

     Various protocols have been proposed to ensure that only
current copies of the data can be accessed.  The effect these
protocols have on the accessibility of the replicated data is
investigated by simulating the operation of the network and
measuring the performance.  Several strategies for replica
maintenance are considered, and the benefits of each are
analyzed.  The details of the simulations are discussed.
Measurements of the reliability and the availability of the
replicated data are compared and contrasted.

     The sensitivity of the Available Copy and Dynamic-linear
Voting protocols to common patterns of site failures and
repairs is studied in detail.  Exponential, Erlang, uniform,
and hyperexponential distributions are considered, and the
effect the second moments have on the results is analyzed.
The relative performance of competing protocols is shown to be
only marginally affected by non-exponential distributions,
validating the robustness of the exponential approximations.
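
     The comparison above turns on holding the mean of each
distribution fixed while varying its second moment.  As a rough
illustration of how such inputs might be generated for a
discrete-event simulation (a sketch of the general technique, not
code from the report, with illustrative parameter choices), the
following Python fragment samples failure or repair times from the
four families named, all with a common mean but different
coefficients of variation:

    import random

    MEAN = 1.0  # common mean for every distribution (arbitrary unit)

    def sample_exponential():
        # Coefficient of variation (CV) = 1: the memoryless baseline.
        return random.expovariate(1.0 / MEAN)

    def sample_erlang(k=4):
        # Sum of k exponential phases, each with mean MEAN/k, so the
        # total mean is MEAN and CV = 1/sqrt(k) < 1.
        return sum(random.expovariate(k / MEAN) for _ in range(k))

    def sample_uniform():
        # Uniform on [0, 2*MEAN]: mean MEAN, CV = 1/sqrt(3) < 1.
        return random.uniform(0.0, 2.0 * MEAN)

    def sample_hyperexponential(p=0.5, fast_mean=0.1):
        # Mixture of a fast and a slow exponential, balanced so the
        # overall mean stays MEAN; the mixing pushes CV above 1.
        slow_mean = (MEAN - p * fast_mean) / (1.0 - p)
        mean = fast_mean if random.random() < p else slow_mean
        return random.expovariate(1.0 / mean)

Feeding each sampler into the same simulation isolates the effect of
the second moment, which is the comparison the abstract describes.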

TR #88-34
Cost $4

           Reliability of Replicated Data Objects

                     Darrell D. E. Long
            University of California, Santa Cruz

                    Jehan-Francois Paris
                   University of Houston

                      John L. Carroll
                 San Diego State University

                          Abstract

Improved fault tolerance of many applications can be achieved
by replicating data at several sites.  This data redundancy
requires a protocol to maintain the consistency of the data
object in the presence of site failures.  The most commonly
used scheme is voting.  Voting and its variants are unaffected
by network partitions.  When network partitions cannot occur,
better performance can be achieved with available copy
protocols.

     Common measures of dependability include reliability,
which is the probability that a replicated object will remain
constantly available over a fixed time period.  We investigate
the reliability of replicated data objects managed by voting,
available copy, and their variants.  Where possible,
closed-form expressions for the reliability of the various
consistency protocols are derived using standard Markovian
assumptions.  In other cases, numerical solutions are found
and validated with simulation results.
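
     To make the voting scheme concrete: an operation may proceed
only when the reachable sites hold a strict majority of the votes,
so two partitioned groups can never both accept writes.  The Python
sketch below shows a read under simple majority voting; the Replica
record and its fields are hypothetical, and the write path and
vote-assignment machinery are omitted:

    from dataclasses import dataclass

    @dataclass
    class Replica:          # hypothetical stand-in for one site's copy
        up: bool            # is the site currently reachable?
        votes: int          # votes assigned to this replica
        version: int        # version number of the local copy
        value: object       # the data itself

    def read_quorum(replicas, total_votes):
        # The read succeeds only if the reachable replicas hold a
        # strict majority of all votes in the system.
        reachable = [r for r in replicas if r.up]
        if 2 * sum(r.votes for r in reachable) <= total_votes:
            raise RuntimeError("quorum not reached: object unavailable")
        # Within any majority quorum the highest version number is
        # current, since every earlier write also reached a majority.
        return max(reachable, key=lambda r: r.version).value

Available copy protocols drop the quorum test and read any live copy,
which is why they perform better when partitions cannot occur.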


TR #88-23
Cost $4


       Regeneration Protocols for Replicated Objects

                     Darrell D. E. Long
            University of California, Santa Cruz

                    Jehan-Francois Paris
                   University of Houston

                          Abstract

The reliability and availability of replicated data can often
be increased by generating new replicas when some become
inaccessible due to system malfunctions.  This technique has
been used in the Regeneration Algorithm, a replica control
protocol based on file regeneration.

     The read and write availabilities of replicated data
managed by the Regeneration Algorithm are evaluated, and two
new regeneration protocols are presented that overcome some of
its limitations.  The first protocol combines regeneration and
the Available Copy approach to improve availability of
replicated data.  The second combines regeneration and the
Dynamic Voting approach to guarantee data consistency in the
presence of network partitions while maintaining a high
availability.  Expressions for the availabilities of
replicated data managed by both protocols are derived and
found to improve significantly on the availability achieved
using extant consistency protocols.
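
     As a loose sketch of the regeneration idea (an illustration
under assumed interfaces, not the Regeneration Algorithm itself), a
write that finds the replication level degraded can create fresh
copies on spare sites before installing the new version:

    from dataclasses import dataclass

    @dataclass
    class Replica:                 # hypothetical replica record
        site: str
        up: bool = True
        version: int = 0
        value: object = None

    def write_with_regeneration(replicas, spare_sites, target, value):
        live = [r for r in replicas if r.up]
        # Regenerate: spawn new copies on spare sites until the desired
        # replication level is restored (or the spares run out).
        while len(live) < target and spare_sites:
            live.append(Replica(site=spare_sites.pop()))
        # Install the new value at the next version on every live copy.
        version = max((r.version for r in live), default=0) + 1
        for r in live:
            r.version, r.value = version, value
        return live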


TR #88-07
Cost $10


   The Management of Replication in a Distributed System

                     Darrell D. E. Long
                       (Ph.D. Thesis)
            University of California, San Diego

                          Abstract

The field of consistency control protocols for replicated data
objects has existed for about ten years.  Its birth coincides
with the advent of distributed data bases and the
communications technology required to support them.  When data
objects are replicated around a computer network, a protocol
must be chosen to ensure a consistent view to an accessing
process.  The replicas of the data object are then said to be
mutually consistent.  The protocols used to ensure mutual
consistency are known as replica control or consistency
control protocols.

     There are several advantages to a distributed system over
a single processor system.  Among these are increased
computing power and the ability to tolerate partial failures
due to the malfunction of individual components.  The
redundancy present in a distributed system has been the focus
of much research in the area of distributed data base systems.
Another benefit of this natural redundancy, along with the
relatively independent failure modes of the processors, is
that it allows the system to continue operation even after
some of the processors have failed.  This can be used to
construct data objects that are robust in the face of partial
system failures.

     The focus of this dissertation is the exploitation of the
redundancy present in distributed systems in order to attain
an increased level of fault tolerance for data objects.  The
use of replication as a method of increasing fault tolerance
is a well-known technique.  Replication introduces the
additional complexity of maintaining mutual consistency among
the replicas of the data object.  The protocols that manage
the replicated data and provide the user with a single
consistent view of that data are studied, and a comprehensive
analysis of the fault tolerance provided by several of the
most promising protocols is presented.  Several techniques are
employed, including Markov analysis and discrete event
simulation.  Simulation is used to confirm and extend the
results obtained using analytic techniques.
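
     To illustrate the Markov analysis in its simplest form: if each
of n sites fails at exponential rate lambda and is repaired at rate
mu, independently of the others, then the steady-state probability
that a given site is up is mu / (lambda + mu), and the availability
of an object managed by majority voting can be approximated by a
binomial sum.  The sketch below shows the technique; it ignores
version currency, so it is optimistic relative to the exact results
derived in the thesis:

    from math import comb

    def site_availability(fail_rate, repair_rate):
        # Steady state of the two-state Markov chain for one site:
        # up --fail_rate--> down --repair_rate--> up.
        return repair_rate / (fail_rate + repair_rate)

    def voting_availability(n, fail_rate, repair_rate):
        # Probability that a strict majority of n independent sites
        # is up at a random instant.
        a = site_availability(fail_rate, repair_rate)
        return sum(comb(n, k) * a ** k * (1 - a) ** (n - k)
                   for k in range(n // 2 + 1, n + 1))

    # Example: five sites, repairs a hundred times faster than failures.
    print(voting_availability(5, fail_rate=0.01, repair_rate=1.0))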