[comp.object] Looking for explanation of OODB problem

plogan@apd.mentorg.com (Patrick Logan) (06/07/91)

It is not entirely clear what is being described below. Rather than
propose my interpretation, I am looking for other opinions and
experiences.

From "UNIX Today!", May 13, 1991, pp. 58, 64...

    [Paraphrasing: Dan Gerson, Xerox PARC, is developing collaborative
    systems, in particular a document database that will allow
    multiple users and track versions. He's not sure OODBs are best
    for his work. He's using a Sybase DBMS and is investigating
    ObjectStore from Object Design.]

    [Quoting: typing errors are mine.]

    But OODBMSes have their drawbacks as well. "Currently, OODBMSes
    are not very well developed," he [Gerson] says.

    "The basic problem in an OODBMS is in the user I/O inside of a
    transaction," Gerson adds. "Programs in an RDBMS have a looser
    connection to the data. Users issue an SQL query, the data base
    gets a table, makes a copy of it, and the user looks at it on his
    screen."

    During a transaction in an OODBMS, objects are loaded into memory,
    either real or virtual. "As soon as you execute a transaction, you
    can't see the object anymore," he says. "In an RDBMS, the system
    is giving you some sort of a copy. In an OODBMS, the system is
    giving you the actual object, so they're only valid in a
    transaction."

    Gerson says he believes few people are aware of this fundamental
    flaw in OODBMS technology because so few systems are out there.
    Those that are function as single-user, workstation-based
    development systems, not multiuser systems where deadlocks can
    occur. Besides, he says, he thinks some OODBMS vendors are either
    unaware of the possible problem or deliberately ignoring it.

    [End of quote.]
-- 
Patrick Logan, Try: plogan@dad.mentor.com, plogan@ws.mentor.com,
plogan@mentorg.com or substitute patrick_logan for plogan and try
that. You can also try going through uunet!(mntgfx, mentorg, mentor.com)!plogan
[Can you tell things are changing around here?]

jeusfeld@forwiss.uni-passau.de (Manfred Jeusfeld) (06/08/91)

In article <1991Jun6.194440.2879@apd.mentorg.com> plogan@apd.mentorg.com (Patrick Logan) writes:
>    are not very well developed," he [Gerson] says.
>
>    "The basic problem in an OODBMS is in the user I/O inside of a
>    transaction," Gerson adds. "Programs in an RDBMS have a looser
>    connection to the data. Users issue an SQL query, the data base
>    gets a table, makes a copy of it, and the user looks at it on his
>    screen."
>    Gerson says he believes few people are aware of this fundamental
>    flaw in OODBMS technology because so few systems are out there.

Alright. I think the problem is that some implementations of
OODBMSs have appropriated the label "object-oriented database" for
what is really an object-oriented programming language with some
kind of persistence.

There is no doubt that OO abstraction principles can be incorporated
into a DBMS that still follows the old-fashioned traditions: a
declarative query language, view abstractions (aka deduction rules),
and global integrity constraints.

-- Manfred Jeusfeld, Universitaet Passau

hsrender@happy.colorado.edu (06/09/91)

In article <1991Jun6.194440.2879@apd.mentorg.com>, plogan@apd.mentorg.com (Patrick Logan) writes:
> It is not entirely clear what is being described below. Rather than
> propose my interpretation, I am looking for other opinions and
> experiences.
> [description of problem deleted]

It *sounds* like what he is describing is an update problem.  For example,
if I retrieve an object, display it on the screen, and then make a change
to the object, I have two concerns: 1) will my change be automatically 
propagated to the database or is there a "commit point" sometime down the
road; 2) will my change be reflected in the displayed image of the object
(presuming I didn't make the change directly through the display).  In
the first case, the problem is deciding when to recognize the end of a
transaction and commit the changed objects.  Since the notion of a
transaction is not commonly considered within OO programming systems, it will have to be addressed
as a new element of OODBMSs.  In the second case (view update), this is a
regular part of an OO system with displayed output, namely, how often do I
synchronize the displayed image of an object and the object itself.  I think
the two cases are analogous, at least when considered from the perspective
of the old Model-View-Controller framework.

One thing that the relational treatment of updates (i.e. copy the data
to be changed and don't write it back to the database until the transaction
is completed) gave you is a focus around which to base mechanisms for
handling concurrent access and undoing erroneous updates.  I don't
think that there is anything inherent to OODBMSs that prevents similar
facilities from being incorporated, but it is possible that current
implementations do not address the topic very well yet.  Not having the bucks
to spend on a non-research OODBMS, I haven't had the chance to get any
first-hand experience.

hal.

dhartung@chinet.chi.il.us (Dan Hartung) (06/10/91)

plogan@apd.mentorg.com (Patrick Logan) writes:
>It is not entirely clear what is being described below. Rather than
>propose my interpretation, I am looking for other opinions and
>experiences.

Well, I have Codd's Relational Model V2 at my desk right now, so I
might as well respond.

>From "UNIX Today!", May 13, 1991, pp. 58, 64...
>
>    [Paraphrasing: Dan Gerson, Xerox PARC, is developing collaborative
>    systems, in particular a document database that will allow
>    multiple users and track versions. He's not sure OODBs are best
>    for his work. He's using a Sybase DBMS and is investigating
>    ObjectStore from Object Design.]
>
>    [Quoting: typing errors are mine.]
[Deletions by dhartung for clarity.] 
>
>    During a transaction in an OODBMS, objects are loaded into memory,
>    either real or virtual. "As soon as you execute a transaction, you
>    can't see the object anymore," he says. "In an RDBMS, the system
>    is giving you some sort of a copy. In an OODBMS, the system is
>    giving you the actual object, so they're only valid in a
>    transaction."
>
>    [End of quote.]

In his new 1990 book on the relational model, Codd devotes a chapter to
rebuttal of "Claimed Alternatives to the Relational Model", including
Entity-Relationship, Binary and Universal Relational, and Object-Oriented
approaches.  He gives succinct reasons why he believes each falls short
of the Relational Model (<-- I capitalize because he is speaking of something
very specific).

There is approximately one page on the object-oriented approaches.  First
he raises questions about the "adaptability to change" of an object-
oriented database manager, and whether o-o is as "high-level" as any
relational language (thus limiting its optimizability).

The paragraph I think is relevant is this:

"Can the OO approach to database management support distribution independence?
In other words, can application programs remain unchanged and correct when
a database is converted from centralized to distributed and later when the
data must be re-distributed?  What support does the OO approach provide for
built-in and user-defined integrity constraints that are not embedded in
the application program?"

I believe his point, however obliquely made, is that OO does not, in his
view, have the flexibility to successfully operate under different
configurations without change, as a relational language would (must).
The paragraph cited above is merely a specific instance of this: by
having a grab-the-object-until-done approach, relational integrity
would be virtually unenforceable.  Ironically, this is very much akin
to the record-locking that is done in so many "relational" databases
(really semi-relational) -- e.g. xbase.  The "EDIT" command is
actually something like an EDIT object in this sense.

-- 
Daniel A. Hartung           |  "What's the difference anyway, between being
dhartung@chinet.chi.il.us   |  safe and being rad, the joke's on us, we've
Birch Grove Software        |  all been had."  -- John Wesley Harding
-----------FoxPro Programmer Looking For Work--------------

mark@motown.altair.fr (Mark James) (06/10/91)

In article <1991Jun6.194440.2879@apd.mentorg.com>
plogan@apd.mentorg.com (Patrick Logan) writes:

>It is not entirely clear what is being described below.

You're not the only one.

>From "UNIX Today!", May 13, 1991, pp. 58, 64...
>
>    [Paraphrasing: Dan Gerson, Xerox PARC, [...]
>    He's using a Sybase DBMS and is investigating
>    ObjectStore from Object Design.]
>
>[...]
>
>    But OODBMSes have their drawbacks as well. "Currently, OODBMSes
>    are not very well developed," he [Gerson] says.
>
>    "The basic problem in an OODBMS is in the user I/O inside of a
>    transaction," Gerson adds.
>[...]
>    During a transaction in an OODBMS, objects are loaded into memory,
>    either real or virtual. "As soon as you execute a transaction, you
>    can't see the object anymore," he says. "In an RDBMS, the system
>    is giving you some sort of a copy. In an OODBMS, the system is
>    giving you the actual object, so they're only valid in a
>    transaction."

In an OODBMS, the system gives you the *identity* of the object.
Whether you see the data value encapsulated in the object depends on
the level of encapsulation of the object.  The value (if you can see
it) may or may not be "some sort of copy"; this is entirely an
implementation decision.

I don't understand why Gerson generalizes about OODBMSes, since he has
only investigated one of them.

>    Gerson says he believes few people are aware of this fundamental
>    flaw in OODBMS technology because so few systems are out there.

Or else, if Unix Today correctly presents his level of understanding,
because there is no problem.

>    Those that are function as single-user, workstation-based
>    development systems, not multiuser systems where deadlocks can
>    occur. Besides, he says, he thinks some OODBMS vendors are either
>    unaware of the possible problem or deliberately ignoring it.

Again, Gerson should not generalize from a sample of one; here, he
sounds like a marketing guy trying to bad-mouth the competition.  I
can't speak for OODBMSes other than O2, but O2 resolves multiuser
concurrency control problems with a classic two-phase locking
mechanism.  There is *nothing* inherent in object-oriented databases
that inhibits this solution.
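For concreteness, here is a toy Python sketch of the exclusive-lock half of two-phase locking. The names are invented, and O2's actual mechanism is certainly more elaborate (shared locks, waiting, deadlock detection); this only shows the two phases themselves:

```python
class LockManager:
    # Exclusive locks only; real systems add shared locks, queuing,
    # and deadlock detection.
    def __init__(self):
        self.owners = {}               # item -> transaction holding the lock

    def acquire(self, txn, item):
        # Growing phase: a transaction may only acquire locks.
        holder = self.owners.get(item)
        if holder is not None and holder != txn:
            return False               # conflict: caller must wait or abort
        self.owners[item] = txn
        return True

    def release_all(self, txn):
        # Shrinking phase: all locks are dropped together at commit/abort,
        # which is what makes the protocol "two-phase".
        for item in [i for i, t in self.owners.items() if t == txn]:
            del self.owners[item]

lm = LockManager()
assert lm.acquire("A", "doc1")         # transaction A locks the object
assert not lm.acquire("B", "doc1")     # B is blocked until A finishes
lm.release_all("A")                    # A commits, releasing its locks
assert lm.acquire("B", "doc1")         # now B may proceed
```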

The implementation of two-phase locking for long transactions
involving complex and multimedia objects is not trivial, so perhaps
his statement about "some vendors" is correct; but to conclude from
that that transaction management is "the basic problem in an OODBMS"
is just talking out of the top of his head.  (Again, assuming that
Unix Today has quoted him accurately.)

Usual disclaimer:  I'm speaking for myself here, not for my company.

--
Mark James  <mark@bdblues.altair.fr> or <mark@nuri.inria.fr>
O2 Technology [formerly Altair]
Mail:  B P 105 -- 78153 Le Chesnay -- France
Telephone +33 (1) 39 63 53 93    Fax +33 (1) 39 63 58 90

dlw@odi.com (Dan Weinreb) (06/11/91)

I think I am in a good position to explain what's going on here.  I've
known Dan Gerson for about ten years.  Between 1985 and 1988, we
worked together at Symbolics on the design and development of an
object-oriented database system (called Statice), which is now a
product.

When the journalist from Unix Today talked to Dan, Dan brought up a
topic that he's been interested in for a long time: the interplay
between interactive I/O and concurrency control, specifically in the
face of deadlocks, aborts, and retries.  Unfortunately, this is a
rather technically abstruse topic, and it appears that the journalist
really didn't understand it at all.  I've exchanged mail with Dan
since the article came out, and he is pretty unsatisfied with the
level of understanding reflected in the article.

There are several related problems that he's interested in.  To
illustrate them, consider the following scenario.  There is a database
containing information about the canonical domain: employees and
departments.  There is a user interface for examining and updating
information in the database.  In a typical scenario of updating data,
the person says "show me John Smith"; a bunch of data about John Smith
appears on the screen; the person clicks on some fields and types in
some new values; the person says "OK, I am satisfied with these
changes, write it back"; and the database gets updated.

Now, in this scenario, there are two ways that you might divide it
into transactions.  In scheme 1, the whole thing is one transaction.
The transaction starts when we call up the information, and the
transaction ends when we say that we are satisfied.  In scheme 2,
there are two transactions instead of one.  The first transaction
reads the data from the database so that it can be displayed on the
screen, and then immediately commits (having made no modifications).
Then we interact with the data.  Finally, when we say we're satisfied,
a second transaction starts, the data items that were displayed on the
screen are written back to the database, and the second transaction
commits.
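In a rough Python sketch (`transaction`, `read`, and `write` here are a hypothetical API, not any particular product's), the two schemes look like this:

```python
from contextlib import contextmanager

class Db:
    # Toy in-memory store standing in for a real DBMS.
    def __init__(self):
        self.data = {}

    @contextmanager
    def transaction(self):
        # Placeholder: only the transaction *boundaries* matter here.
        yield

    def read(self, key):
        return self.data.get(key)

    def write(self, key, value):
        self.data[key] = value

def scheme_1(db, key, edit):
    # Scheme 1: one long transaction spanning the whole interaction.
    with db.transaction():             # locks would be held throughout
        record = db.read(key)
        db.write(key, edit(record))    # "edit" stands for the user's work

def scheme_2(db, key, edit):
    # Scheme 2: two short transactions, unprotected editing in between.
    with db.transaction():             # first transaction: read only
        record = db.read(key)
    new_value = edit(record)           # no locks held while the user edits
    with db.transaction():             # second transaction: write back
        db.write(key, new_value)

db = Db()
db.write("john_smith", {"salary": 100})
scheme_2(db, "john_smith", lambda rec: {**rec, "salary": 110})
assert db.read("john_smith") == {"salary": 110}
```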

Under simple circumstances, the two schemes will have the same effect.
However, they can behave differently if someone else is changing the
database while we are doing our interaction.  They can also behave
differently if certain clever concurrency control and caching schemes
are being used.  An example of the latter is optimistic concurrency
control combined with caching, which can cause transactions to abort
because it is discovered that obsolete cached data has been used.
This sort of thing is discussed in two papers in the recent SIGMOD
proceedings.  If someone else is changing the database, or if one of
these concurrency control or caching schemes is being used, any
transaction might deadlock, and so the user's transaction might be
forcibly aborted by the database system.

The problem of aborts is much more severe in scheme 1 than in scheme
2, for two reasons: an abort is more likely, and an abort is more
problematic.  An abort is more likely in scheme 1 because the
transaction is so much longer in duration.  The transaction in scheme
1 spans user interaction, which would take at least seconds and
possibly even hours (suppose we get up for a coffee break in the
middle of editing an entry).  An abort is more problematic because we
have to start all over again; all our work is lost, for apparently no
reason.  If your user interface did this to you, you'd certainly be
frustrated.

You might think that we don't really have to do our work over again;
the user interface system can simply retain the changes that we've
typed in, and restart the transaction invisibly.  That's true, but
there's a hazard: when we start the transaction again, the database
state might have changed, and the new values that we typed into the
user interface might have depended on the values that were displayed
when we started.  For example, suppose I am in the process of
approving a loan application for an employee.  I call up his record on
the screen, scrutinize various field values, and decide that he should
get the loan.  While I'm reading and updating fields, someone else
makes changes to the employee's records, noting that he has just been
convicted of embezzlement.  This causes a deadlock, and I start again,
and now I really had better redo all my consideration and
scrutinization, to see if anything has changed since I last made my
decision.  In other words, if I expect my own interaction with the
database to behave like a transaction, moving from one consistent
state to another consistent state, then I have to behave in an
appropriately transaction-like way, which means starting all over when
there's an abort.

In scheme 2, the transactions are very short and quick. And if they
abort, there's no problem with just automatically starting them again.
However, scheme 2 still suffers from the problem that some other user
might modify some of the data about the employee while we're in the
middle of doing our interactions.  In fact, in scheme 2, even if
ordinary locking is used for concurrency control, we are not holding
any locks, so nothing prevents the employee's data from being changed
"out from under" us.  When our second transaction goes back to store
the data, in fact, we might overwrite someone else's changes.
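One common way to detect this lost-update hazard in scheme 2 is to remember a version number from the first transaction and validate it when writing back in the second. A toy Python sketch of that technique (my own illustration, not something attributed to any vendor here):

```python
class VersionedStore:
    # Each item carries a version counter; the write-back transaction
    # succeeds only if the version is unchanged since it was read.
    def __init__(self):
        self.data = {}                 # key -> (value, version)

    def read(self, key):
        return self.data.get(key, (None, 0))

    def write_if_unchanged(self, key, new_value, expected_version):
        _, version = self.data.get(key, (None, 0))
        if version != expected_version:
            return False               # changed out from under us: redo the work
        self.data[key] = (new_value, version + 1)
        return True

store = VersionedStore()
store.data["smith"] = ("no convictions", 1)
_, seen = store.read("smith")                     # first short transaction
store.data["smith"] = ("convicted", 2)            # concurrent update by someone else
assert not store.write_if_unchanged("smith", "loan approved", seen)
```

A rejected write puts us back in the same position as an aborted scheme-1 transaction: we must reread and reconsider.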

To my mind, the most important question is: "What is the goal?"  That
is, if two people find themselves editing the same thing at the same
time, what ought to happen, anyway?  Several alternatives make some
sense, but I don't think it's immediately obvious which is best.

This read-modify-store scenario is very simple compared to what we see
in the real world.  There are many problems involving the way
concurrency control works in the presence of interactive user
interfaces.  This is the class of issue that Gerson was talking about.

None of this actually has much to do with "object-oriented databases",
though.  Anything that has a general concurrency control scheme (by
"general" I mean that deadlock avoidance can't be used, because the
total set of data items that will be referenced can't be determined
in advance) will exhibit the same problem.

My colleague Jack Orenstein started writing another reply to this posting,
and we decided to pool our replies together.  Jack says:

  I agree with Hal Render's analysis of the problem. Applications often
  have conflicting requirements on transaction boundaries. On one hand,
  transactions should be kept as short as possible, to maximize
  concurrency. On the other hand, it's awfully tempting to do everything
  inside a transaction because the "synchronization" problems then
  disappear.  (In MVC terms, you have to make sure that your own updates
  of the model are propagated to all views, but you don't have to worry
  about updates posted by someone else.) This is a problem no matter
  WHAT kind of database system is used, relational, object-oriented, or
  something else (assuming the database system supports transactions,
  of course).

  I'm with Object Design, and we have an OO DBMS product, ObjectStore.
  As Hal points out, OO DBMSs can have transactions too, and ObjectStore
  does.  We provide start-transaction and end-transaction function
  calls, and it is up to the user to select the proper transaction
  granularity. Our part of the deal says that any work done inside a
  transaction will be isolated from any concurrent transactions;
  execution is serializable. If your application is such that the whole thing
  can run in one transaction, that's great - you'll have a particularly
  simple programming job, since your program can manipulate the objects
  in the database directly; you won't have to deal with copies of
  objects, (and you will still get the usual guarantees of transactions,
  e.g. all-or-none behavior.)  If this won't get you enough concurrency,
  then you'll have to work on copies of objects or parts thereof, e.g.
  certain fields from an object in the model that are to be displayed in
  a view.

Of course if no transaction is in progress, you cannot directly
manipulate objects in the database, since direct manipulation of
database objects has to set database locks, and that can only happen
inside a transaction.  But if you split up your program into little
transactions and work on data in between transactions, you still have
to worry about what happens if the state of the database changes out
from under you.

  To bring this back to the original discussion: if you do an
  interactive application over a database system, and you want to keep
  transactions as short as possible, you will have the problem of
  keeping views consistent with models, and you'll have to take
  snapshots of the database, i.e. copies. This is in the nature of
  concurrency, and is not a problem of a particular data model
  (relational or OO).  But in situations where longer transactions are
  okay, relational DBMSs again force you to work on copies of objects
  while OO DBMSs allow you to manipulate objects directly.

A lot of this was lost in the Unix Today article.

-- Dan Weinreb (with Jack Orenstein)
Object Design, Inc.

marcs@slc.com (Marc San Soucie) (06/12/91)

Dan Hartung writes (and quotes Codd):

> In his new 1990 book on the relational model, Codd devotes a chapter to
> rebuttal of "Claimed Alternatives to the Relational Model", including
> Entity-Relationship, Binary and Universal Relational, and Object-Oriented
> approaches.
>
> "Can the OO approach to database management support distribution independence?
> In other words, can application programs remain unchanged and correct when
> a database is converted from centralized to distributed and later when the
> data must be re-distributed?  What support does the OO approach provide for
> built-in and user-defined integrity constraints that are not embedded in
> the application program?"
>
> I believe his point, however obliquely made, is that OO does not, in his
> view, have the flexibility to successfully operate under different
> configurations without change, as a relational language would (must).
> The paragraph cited above is merely a specific instance of this: by
> having a grab-the-object-until-done approach, relational integrity
> would be virtually unenforceable.  Ironically, this is very much akin
> to the record-locking that is done in so many "relational" databases
> (really semi-relational) -- e.g. xbase.  The "EDIT" command is
> actually something like an EDIT object in this sense.

How are the stated objections affected when the "relational" integrity of the
objects and their connections is maintained by code which executes within the
database server itself - local to their storage, not to the client? The
so-called "grab-the-object-until-done" approach is not the only approach
available in OODB's. Codd seems to have overlooked GemStone, for instance,
where built-in and user-defined integrity constraints are embedded in the
database, not in the application programs. Did he fail to analyze this case?

    Marc San Soucie
    Servio Corporation
    Beaverton, Oregon
    marcs@slc.com

dlw@odi.com (Dan Weinreb) (06/14/91)

In article <1991Jun10.070451.18516@chinet.chi.il.us> dhartung@chinet.chi.il.us (Dan Hartung) writes:

   The paragraph I think is relevant is this:

   "Can the OO approach to database management support distribution independence?
   In other words, can application programs remain unchanged and correct when
   a database is converted from centralized to distributed and later when the
   data must be re-distributed?  What support does the OO approach provide for
   built-in and user-defined integrity constraints that are not embedded in
   the application program?"

What a peculiar question.  I cannot imagine why he thinks that an OO
database should have any more trouble with distribution than a
relational database.  The best speculation I can come up with is that
he is under the impression that object identifiers as stored in
databases encode the physical location of the object, in a way that is
somehow visible to the program, or something like that.  I don't know
why he should make such an assumption.

As for integrity constraints, there's no reason that an OO database
can't have constraints just as any other kind of database can.  It's
hard to discuss this further without going on for pages, since there
are so many kinds of integrity constraint imaginable, and all
different kinds of implementation techniques that can be used to
implement them.

   The paragraph cited above is merely a specific instance of this: by
   having a grab-the-object-until-done approach, relational integrity
   would be virtually unenforceable.  

What do you mean by "relational integrity" in the context of an OO
database?  What, exactly, do you mean by "grab"?  What constraint on
an OO database would be desirable, but so very hard to enforce
compared to similar constraints on a relational database?

				      Ironically, this is very much akin
   to the record-locking that is done in so many "relational" databases
   (really semi-relational) -- e.g. xbase.  The "EDIT" command is
   actually something like an EDIT object in this sense.

Locking is part of concurrency control; I don't understand why it has
anything to do with data models and data integrity.  (I assume that
Codd uses "integrity" in the same sense that Date uses it, i.e.
protecting a database from errors due to concurrent access, or aborts,
or system crashes, or media failure, is not what is meant by
"integrity".  See Date's "Introduction to Database Systems".)
Concurrency control in OODBs can be done any number of ways, including
exactly the same way that it's done in commercial relational database
systems.

sakkinen@jyu.fi (Markku Sakkinen) (06/14/91)

[This thread discussed some article of E.F. Codd.]

In article <1991Jun13.231646.29226@odi.com> dlw@odi.com writes:
>In article <1991Jun10.070451.18516@chinet.chi.il.us> dhartung@chinet.chi.il.us (Dan Hartung) writes:
>
>   The paragraph I think is relevant is this:
>
>   "Can the OO approach to database management support distribution independence?
>   In other words, can application programs remain unchanged and correct when
>   a database is converted from centralized to distributed and later when the
>   data must be re-distributed?  What support does the OO approach provide for
>   built-in and user-defined integrity constraints that are not embedded in
>   the application program?"
>
>What a peculiar question.  I cannot imagine why he thinks that an OO
>database should have any more trouble with distribution than a
>relational database.  The best speculation I can come up with is that
>he is under the impression that object identifiers as stored in
>databases encode the physical location of the object, in a way that is
>somehow visible to the program, or something like that.  I don't know
>why he should make such an assumption.

I guess that Codd not only _is_ ignorant about OODB's but that he
_wants to remain_ ignorant, so he can bash them without restraint
when preaching the Only Holy Relational Religion - dogma subject
to alteration by Codd without prior notice.

Things like built-in and user-defined integrity constraints that are not
embedded in application software happen to be one of the strong points
of the OO approach!  What support does the _relational_ model provide
for them?

[rest of original article deleted]

----------------------------------------------------------------------
"All similarities with real persons and events are purely accidental."
      official disclaimer of news agency New China

Markku Sakkinen (sakkinen@jytko.jyu.fi)
       SAKKINEN@FINJYU.bitnet (alternative network address)
Department of Computer Science and Information Systems
University of Jyvaskyla (a's with umlauts)
PL 35
SF-40351 Jyvaskyla (umlauts again)
Finland
----------------------------------------------------------------------

marcs@slc.com (Marc San Soucie) (06/18/91)

> >From "UNIX Today!", May 13, 1991, pp. 58, 64...
>
>     [Paraphrasing: Dan Gerson, Xerox PARC, is developing collaborative
>     systems, in particular a document database that will allow
>     multiple users and track versions. He's not sure OODBs are best
>     for his work. He's using a Sybase DBMS and is investigating
>     ObjectStore from Object Design.]
>
>     During a transaction in an OODBMS, objects are loaded into memory,
>     either real or virtual. "As soon as you execute a transaction, you
>     can't see the object anymore," he says. "In an RDBMS, the system
>     is giving you some sort of a copy. In an OODBMS, the system is
>     giving you the actual object, so they're only valid in a
>     transaction."
>
>     Gerson says he believes few people are aware of this fundamental
>     flaw in OODBMS technology because so few systems are out there.
>     Those that are function as single-user, workstation-based
>     development systems, not multiuser systems where deadlocks can
>     occur. Besides, he says, he thinks some OODBMS vendors are either
>     unaware of the possible problem or deliberately ignoring it.

The above discussion describes only ONE possible implementation of semantics
for accessing shared objects in an OODBMS. If you research the available OODBMS
products, you will find some that were designed as multiuser systems. For
example, in GemStone the following semantics are provided.

Case 1) Let's consider an object X. The system gives you a stable view of X
during your transaction. The object is presented as it existed at the start of
your transaction. Any changes you make to object X during your transaction are
visibile to you but not to other sessions (i.e. not to other users running
other transactions). At the point you "commit" your transaction, your changes
to X become visible to other transactions which start after your "commit". The
"commit" does not destroy your view of object X; you still have visibility of
the state of the object as it was committed. This example assumes that there
were no conflicts on object X.

Case 2) Now let's consider the case where two transactions A and B are both
trying to change object X. Assume A and B begin at the same time, and thus see
the same state of X. If the application chooses to use pessimistic concurrency
control, both A and B will attempt to lock X for writing; one transaction will
be granted a lock, the other will be denied the lock. Assume A was granted the
lock; it will proceed as in Case 1 above. Transaction B still has a stable view
of X that is unaffected by transaction A. If B wants to see the new state
of X that was committed by A, B must abort its transaction.
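The two cases can be modeled with a toy snapshot-plus-write-lock class in Python. This illustrates the semantics described above; it is not GemStone's implementation, and all names are invented:

```python
class SnapshotTxn:
    # Each transaction sees the store as it was when the transaction
    # began, plus its own uncommitted writes; write locks are exclusive.
    def __init__(self, store, locks):
        self.snapshot = dict(store)    # stable view as of transaction start
        self.store = store
        self.locks = locks             # shared: key -> transaction holding lock
        self.writes = {}

    def read(self, key):
        return self.writes.get(key, self.snapshot.get(key))

    def write(self, key, value):
        holder = self.locks.get(key)
        if holder is not None and holder is not self:
            raise RuntimeError("write lock denied")
        self.locks[key] = self
        self.writes[key] = value

    def commit(self):
        self.store.update(self.writes)
        for key in [k for k, t in self.locks.items() if t is self]:
            del self.locks[key]

store, locks = {"X": "old"}, {}
a = SnapshotTxn(store, locks)
b = SnapshotTxn(store, locks)
a.write("X", "new")                # Case 2: A is granted the lock...
denied = False
try:
    b.write("X", "other")          # ...and B is denied it
except RuntimeError:
    denied = True
assert denied
assert b.read("X") == "old"        # B's stable view is unaffected
a.commit()
assert store["X"] == "new"         # visible to transactions starting now
assert b.read("X") == "old"        # B must abort to see the new state
```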

In cases 1 and 2, I have described the semantics of objects seen by methods
written in the OPAL persistent database language used by GemStone. GemStone
also provides application program interfaces for C, C++, and Smalltalk. With
these APIs it is possible to COPY the contents of an object into a
user-interface program, if it is desired to have a copy of an object displayed
that is independent of any transaction states.


From Hal Render:

> One thing that the relational treatment of updates (i.e. copy the data
> to be changed and don't write it back to the database until the transaction
> is completed) gave you is a focus around which to base mechanisms for
> handling concurrent access and undoing erroneous updates. I don't
> think that there is anything inherent to OODBMSs that prevent similar
> facilities from being incorporated, but it is possible that current
> implementations do not address the topic very well yet. Not having the bucks
> to spend on a non-research OODBMS, I haven't had the chance to get any first
> had experience.

Note that GemStone provides a virtual copy of any objects accessed by a
transaction. Changes made to those objects are not written to the permanent
database until the transaction commits. Commit is an atomic operation. Either
all of the modified objects become part of the permanent database, or none of
them do. Appropriate algorithms are used to ensure that commit is atomic in the
presence of possible power failure or system crash, etc.
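A toy model of the virtual-copy idea (my own illustration, not GemStone's algorithm): changes accumulate in a transaction-private buffer, and commit installs them all at once while abort discards them all.

```python
class VirtualCopyTxn:
    # Changes live in a transaction-private buffer (the "virtual
    # copies"); commit installs every modified object, abort none.
    def __init__(self, store):
        self.store = store
        self.copies = {}

    def write(self, key, value):
        self.copies[key] = value       # invisible to the permanent database

    def read(self, key):
        return self.copies.get(key, self.store.get(key))

    def commit(self):
        self.store.update(self.copies)     # all modified objects at once
        self.copies = {}

    def abort(self):
        self.copies = {}                   # none of them

store = {"a": 1, "b": 2}
t = VirtualCopyTxn(store)
t.write("a", 10)
t.write("b", 20)
t.abort()
assert store == {"a": 1, "b": 2}       # nothing leaked into the database
t = VirtualCopyTxn(store)
t.write("a", 10)
t.write("b", 20)
t.commit()
assert store == {"a": 10, "b": 20}     # both changes, or neither
```

In a real system the `update` step must itself survive crashes mid-way, which is where the "appropriate algorithms" Otis mentions come in.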

The GemStone APIs and OPAL language each have functions and methods that may
be executed to cause a commit. The application is in control of when a commit
happens.

With regard to changes to an object being visible in displayed output, that
problem is in the domain of the display system. Often the display system is a
form or graphics-based application where user actions (mouse, keyboard) change
the displayed information, and the database objects are then modified by the
application software to reflect the changes to the display.


    Allen Otis
    Servio Corporation
    Beaverton, Oregon
    otisa@slc.com