[comp.databases] Transaction management

gupta@mas1.UUCP (04/02/87)

In article <477@cognos.UUCP> garyp@cognos.UUCP (Gary Puckering) writes:
>  
> The scenario you describe points out that there are two types of 
> transactions:  read_only and read_write.  A transaction involving only
> queries (i.e. only SELECT's) can be read_only.  This enables the database
> manager to perform a Versioned Read on the tables involved...

The primary reason many systems do not provide this to the user is that the
user would then have to declare, at the start of the transaction, that it is
read-only.  I agree that this is not a very strong argument, and that
versioned reads should be provided.
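
To make the idea concrete, here is a toy sketch of a versioned read (in
Python, purely illustrative -- no particular DBMS works exactly this way):
every committed write appends a timestamped version, and a read-only
transaction simply reads the newest version committed at or before its own
start time, so writers never block it.

```python
# Toy versioned table: illustrative only, not any particular DBMS.
class VersionedTable:
    def __init__(self):
        self.versions = {}          # key -> list of (commit_ts, value)

    def write(self, key, value, commit_ts):
        # A committed read/write transaction appends a new version.
        self.versions.setdefault(key, []).append((commit_ts, value))

    def read(self, key, start_ts):
        # A read-only transaction that started at start_ts sees the
        # newest version committed at or before that time.
        older = [(ts, v) for ts, v in self.versions.get(key, [])
                 if ts <= start_ts]
        return max(older)[1] if older else None

t = VersionedTable()
t.write("acct", 100, commit_ts=1)
t.write("acct", 250, commit_ts=5)
print(t.read("acct", start_ts=3))   # a reader started at time 3 sees 100
print(t.read("acct", start_ts=9))   # a later reader sees 250
```

Note that the reader never takes a lock at all; the cost is keeping old
versions around until no running transaction can still see them.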

> ...
> An objection I have to many relational systems is the lack of choice
> they provide in transaction control.  I can understand why degree 3
> consistency is the default, but I can't understand why some systems
> don't allow you the option of degree 2 or even degree 1...

I agree that the system should provide the capability.  My guess is that
implementors withhold it for fear that users will apply the lower
consistency degrees incorrectly and then blame the DBMS for the results.

> ...
> While on the subject of transactions, has anyone encountered problems
> with the fact that most relational database managers cannot retain
> your cursor position once you've committed the transaction?  If so,
> in what circumstances have you encountered the problem and how did
> you get around it?

As a DBMS implementor, I would like to ask the question:

    What are the semantics of a cursor, once the transaction is committed?
	OR, put differently,
    What are the semantics of transactions if cursor stability is provided
    across transaction boundaries?

To explain, here is a scenario:

    User starts read-only transaction.
       Opens and positions the cursor.
    Commits the transaction.
	...
    Starts read/write transaction.
       Is the previous cursor still valid?

Here is another:

    User does read/write transaction.
       Opens and positions the cursor.
       ...
    Commits the transaction.
	...
    User starts another transaction.
       Is the previous cursor still valid?

The main problem is that the state of the database may change between
transactions, so the cursor may no longer be valid.
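
A toy illustration of the failure mode (Python; real cursors are of course
not bare indexes, but the problem is the same): a position saved before the
commit can name a different row afterwards, because a concurrent writer
slipped in between the two transactions.

```python
# Rows visible along the retrieval path during the first transaction.
rows = ["r1", "r2", "r3", "r4"]

saved_position = 2          # cursor positioned on "r3"; transaction commits

# Between the transactions, a concurrent writer deletes a row ahead of us.
rows.remove("r1")

# The next transaction resumes at the saved position...
print(rows[saved_position])  # ...but it now lands on "r4", not "r3"
```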

-- 
Yogesh Gupta				{ ihnp4!mas1!gupta }
Measurex Automation Systems		           ^
						This is a "one"

garyp@cognos.uucp (Gary Puckering) (04/09/87)

In article <144@mas1.UUCP> gupta@mas1.UUCP writes:
>In article <477@cognos.UUCP> garyp@cognos.UUCP (Gary Puckering) writes:
>> ...
>> While on the subject of transactions, has anyone encountered problems
>> with the fact that most relational database managers cannot retain
>> your cursor position once you've committed the transaction?  If so,
>> in what circumstances have you encountered the problem and how did
>> you get around it?
>
>As a DBMS implementor, I would like to ask the question:
>
>    What are the semantics of a cursor, once the transaction is committed?
>	OR, put differently,
>    What are the semantics of transactions if cursor stability is provided
>    across transaction boundaries?
>...
>The main problem is that between transactions the state of the database
>may change, thus the cursor may no longer be valid.

One approach I've seen to ensuring that the database state remains
consistent is to retain the locks across the commit boundary.  In other
words, you end the first transaction, then immediately start another
transaction just like the first one, inheriting all the locks from the
first transaction.  Concurrent read-only transactions that begin after
each commit point would see the committed changes.

Unfortunately, I don't think this addresses the type of problem we've
encountered.  In our QUICK product, which provides a screen-oriented
data entry, retrieval and update facility, we allow an operator to
change records as desired while browsing along a retrieval path.
However, we *never* allow locks to be held across a terminal read.
This is to prevent the problem of an operator walking away from the
terminal and leaving records locked for long periods of time.

For indexed file systems, we only lock records when the operator
requests an update.  Before actually modifying the records, we re-read
them and compare them to what was originally read.  If they haven't
changed, we proceed with the update.  Otherwise, we abandon it.  The
re-read is usually done on another stream or another open, so as not to
disturb the retrieval path.
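
The re-read-and-compare scheme can be sketched like this (Python; ToyDB and
its lock calls are hypothetical stand-ins of my own, not QUICK's actual
interface):

```python
import threading

class ToyDB:
    """Minimal in-memory record store, standing in for an indexed file."""
    def __init__(self, records):
        self.records = dict(records)
        self.locks = {}

    def read(self, key):
        return self.records[key]

    def write(self, key, value):
        self.records[key] = value

    def lock(self, key):
        self.locks.setdefault(key, threading.Lock()).acquire()

    def unlock(self, key):
        self.locks[key].release()

def optimistic_update(db, key, original, modified):
    """Lock only at update time; re-read and compare against what was
    originally read, abandoning the update if the record has changed."""
    db.lock(key)                     # no lock was held while browsing
    try:
        if db.read(key) != original: # re-read (conceptually on another open)
            return False             # someone changed it: abandon the update
        db.write(key, modified)
        return True
    finally:
        db.unlock(key)
```

An update built from a stale read returns False instead of silently
clobbering the concurrent change, which is exactly the "abandon it"
behaviour described above.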

For relational systems, we browse using a read-only transaction and
update on a read-write transaction, using the same re-read and
comparison approach that we use for indexed files.  This guarantees
a stable cursor on the read-only (browse) transaction, and ensures a
high degree of concurrency.  Unfortunately, with this approach, updates
can fail: a concurrent transaction may have grabbed the record and
changed it between the time your transaction read it and the time it is
re-read in preparation for the update.

We also provide a CONSISTENCY mode in which a single read-write
transaction is used.  This guarantees that updates will succeed and
eliminates the overhead of re-reading and comparing records.  But, it's
vulnerable to the problem of the operator going to lunch and leaving
large portions of the database locked up.  Also, once the operator
commits the updates, the cursor is lost and the retrieval path along
which he/she was browsing is stopped cold.

These two strategies seem to present an interesting dilemma.  You can
have high concurrency, or high consistency, but not both.

It would seem that the relational transaction model is appropriate for
batch-style queries and update requests.  But in an on-line transaction
environment, in which operator intervention and decision-making 
(based on retrieved data) is involved, the model has some serious
practical limitations.

I'd like to hear comments from the database implementors *and* the
theoreticians on this one!

(Nothing would please me more than to have someone out there point out
an "obvious" solution to this problem.)

--
Gary Puckering        3755 Riverside Dr.
Cognos Incorporated   Ottawa, Ontario       decvax!utzoo!dciem!
(613) 738-1440        CANADA  K1G 3N3       nrcaer!cognos!garyp
-- 
Gary Puckering        3755 Riverside Dr.
Cognos Incorporated   Ottawa, Ontario       decvax!utzoo!dciem!
(613) 738-1440        CANADA  K1G 3N3       nrcaer!cognos!garyp

mat@amdahl.UUCP (04/14/87)

In article <534@cognos.uucp>, garyp@cognos.uucp (Gary Puckering) writes:
> ...  In our QUICK product, which provides a screen-oriented
> data entry, retrieval and update facility, we allow an operator to
> change records as desired while browsing along a retrieval path.
> However, we *never* allow locks to be held across a terminal read.
> This is to prevent the problem of an operator walking away from the
> terminal and leaving records locked for long periods of time.
> 

The most interesting approach that I have seen is called "escrow locking."
The basic idea is that you "reserve" resources and hold them "in escrow."
So, if you have a transaction which is taking money out of an account,
you reserve that amount of money until you decide if the transaction
will commit or abort. Meanwhile other transactions can use the record.
If the transaction commits, then no action is required. If not, then the
money held in escrow is returned to the record.  The advantage is that
the transaction can take a long time without locking anything beyond
what is logically required - the amount of money in question.  This is
not an ultimate solution, but it is obviously helpful in inventory-type
applications.
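
As a concrete (and much simplified) sketch of the idea, in Python -- the
class and method names here are my own invention, not from any escrow
implementation:

```python
class EscrowAccount:
    """Toy escrow lock on a numeric balance (illustrative only)."""
    def __init__(self, balance):
        self.balance = balance      # committed balance
        self.escrow = {}            # transaction id -> amount reserved

    def available(self):
        # Other transactions may use whatever is not held in escrow.
        return self.balance - sum(self.escrow.values())

    def reserve(self, txn, amount):
        # Reserve funds for a long-running withdrawal without locking
        # the whole record.  Refuse if it could overdraw the account.
        if self.available() < amount:
            return False
        self.escrow[txn] = self.escrow.get(txn, 0) + amount
        return True

    def commit(self, txn):
        # The withdrawal becomes permanent; no further action needed.
        self.balance -= self.escrow.pop(txn, 0)

    def abort(self, txn):
        # The money held in escrow simply returns to the record.
        self.escrow.pop(txn, 0)
```

With a balance of 100, a slow transaction that reserves 60 still leaves 40
available to concurrent transactions; a second reservation of 60 is refused
outright rather than made to wait on a record lock.
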
-- 
Mike Taylor                        ...!{ihnp4,hplabs,amd,sun}!amdahl!mat

[ This may not reflect my opinion, let alone anyone else's.  ]