[comp.os.vms] Tricky RMS/QIO problem for gurus only

mick@auspyr.UUCP (Michael J. Andrew) (01/06/88)

We've been trying hard to figure out how to simulate, under RMS on VMS, a
feature found in other file systems.

Namely: given the key of an indexed record, return the record *prior* to
it in the collating sequence.  Note that the key is not necessarily the
primary key.  Note also that this is a multi-user environment, with
concurrency considerations.

This is a problem under RMS as the vanilla filesystem will only return the
*next* record.

We're talking real-world here, so only methods which are reasonably time
efficient (say, factor of 10 at worst) are worth considering.
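
For concreteness, here is roughly what the forward-only access looks like
from C.  This is a sketch only: the FAB/RAB field names and constants are
the usual <rms.h> ones, but the routine name, key of reference, and buffer
sizes are made up for illustration.

#include <rms.h>
#include <rmsdef.h>
#include <starlet.h>
#include <string.h>

/* Sketch: position to a key value on an alternate key, then read      */
/* forward.  Switching to RAB$C_SEQ after a keyed $GET only ever moves  */
/* toward higher key values; RMS offers no "get previous" operation.    */
int get_record_after(char *filename, char *key, unsigned char keylen,
                     char *buf, unsigned short bufsiz)
{
    struct FAB fab = cc$rms_fab;
    struct RAB rab = cc$rms_rab;
    int status;

    fab.fab$l_fna = filename;
    fab.fab$b_fns = (unsigned char) strlen(filename);
    fab.fab$b_fac = FAB$M_GET;
    fab.fab$b_shr = FAB$M_SHRGET | FAB$M_SHRPUT;   /* multi-user access */

    status = sys$open(&fab);
    if (!(status & 1)) return status;

    rab.rab$l_fab = &fab;
    status = sys$connect(&rab);
    if (!(status & 1)) { sys$close(&fab); return status; }

    rab.rab$b_rac = RAB$C_KEY;      /* random access by key             */
    rab.rab$b_krf = 1;              /* alternate key of reference       */
    rab.rab$l_kbf = key;
    rab.rab$b_ksz = keylen;
    rab.rab$l_ubf = buf;
    rab.rab$w_usz = bufsiz;
    status = sys$get(&rab);         /* the record with the given key    */

    if (status & 1) {
        rab.rab$b_rac = RAB$C_SEQ;  /* sequential from here = *next* in */
        status = sys$get(&rab);     /* the collating sequence           */
    }

    sys$disconnect(&rab);
    sys$close(&fab);
    return status;
}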


Most of the solutions we have come up with so far involve variations on
having two RMS indexes on the file for each index requested by the user:
one ordered in the requested collating order, and the other in the
opposite order.
The problem with this is that duplicate key values come back in the same
order (increasing insertion-time order) in both directions.
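
A sketch of what the two-index variation might look like at file-creation
time, using RMS's descending string key type for the second index.  The
key positions, sizes, record layout, and routine name below are invented;
the XABKEY field names and constants are the usual <rms.h> ones.

#include <rms.h>
#include <starlet.h>
#include <string.h>

/* Sketch: define the user's field twice -- once as an ascending string */
/* key, once as a descending one -- so the file can be read in either   */
/* collating order.  Note that RMS still returns duplicates of a key    */
/* value in insertion order on BOTH keys, which is the problem above.   */
int create_two_way_file(char *filename)
{
    struct FAB    fab  = cc$rms_fab;
    struct XABKEY key0 = cc$rms_xabkey;     /* primary key              */
    struct XABKEY key1 = cc$rms_xabkey;     /* forward alternate key    */
    struct XABKEY key2 = cc$rms_xabkey;     /* reverse alternate key    */

    fab.fab$l_fna = filename;
    fab.fab$b_fns = (unsigned char) strlen(filename);
    fab.fab$b_org = FAB$C_IDX;              /* indexed organization     */
    fab.fab$b_rfm = FAB$C_FIX;
    fab.fab$w_mrs = 124;                    /* invented record size     */
    fab.fab$l_xab = (char *) &key0;

    key0.xab$b_ref  = 0;                    /* invented 4-byte ID       */
    key0.xab$b_dtp  = XAB$C_STG;
    key0.xab$w_pos0 = 0;
    key0.xab$b_siz0 = 4;
    key0.xab$l_nxt  = (char *) &key1;

    key1.xab$b_ref  = 1;                    /* user field, ascending    */
    key1.xab$b_dtp  = XAB$C_STG;
    key1.xab$w_pos0 = 4;                    /* invented offset/length   */
    key1.xab$b_siz0 = 20;
    key1.xab$b_flg  = XAB$M_DUP;            /* duplicates allowed       */
    key1.xab$l_nxt  = (char *) &key2;

    key2.xab$b_ref  = 2;                    /* same field, descending   */
    key2.xab$b_dtp  = XAB$C_DSTG;
    key2.xab$w_pos0 = 4;
    key2.xab$b_siz0 = 20;
    key2.xab$b_flg  = XAB$M_DUP;

    return sys$create(&fab);
}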

To use such a scheme, it seems necessary to add extra data to distinguish
such duplicates.  The best choice for this seems to be a timestamp.  VMS
provides granularity to 10 milliseconds, which we believe is sufficient.
5 bytes seems to be the smallest usable size for such a timestamp.
However, this incurs a data overhead of 5 bytes per record, plus 5 + 5
bytes per key per record.
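
A sketch of building such a 5-byte suffix from the system time.  It uses a
64-bit integer type for brevity (a compiler without one would need
LIB$EDIV or two-longword arithmetic); the routine name is invented.

#include <starlet.h>

/* Sketch: build a 5-byte timestamp suffix for a key so that duplicate  */
/* key values collate (and can be read back) in insertion order.  The   */
/* VMS system time is a quadword count of 100-ns units since            */
/* 17-NOV-1858; dividing by 100000 gives 10-ms units, and the current   */
/* count fits comfortably in 40 bits, i.e. 5 bytes.                     */
void make_stamp(unsigned char stamp[5])
{
    unsigned long long now;              /* 64-bit type for brevity     */
    int i;

    sys$gettim((void *) &now);           /* quadword system time        */
    now /= 100000;                       /* 100-ns -> 10-ms units       */

    /* Store the most significant byte first so the suffix sorts        */
    /* correctly as part of an RMS string key.                          */
    for (i = 4; i >= 0; i--) {
        stamp[i] = (unsigned char)(now & 0xff);
        now >>= 8;
    }
}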

So, the above is our best shot.  

We rejected schemes involving searching: where does the search begin?
What if the "current" record gets deleted?

One intriguing possibility is to poke around in the RMS file structure and
find the previous record directly, with a walk through the buckets.
Doing this via RMS seems(?) to be impossible, as the file may only be
opened for block access or for record access, not both.
(Multi-user, remember!)
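
For what it's worth, the RMS side of block access looks roughly like the
sketch below.  The hard part -- decoding the prologue and bucket
structure, and staying consistent while other users update the file -- is
exactly what is *not* shown; the routine name and parameters are invented.

#include <rms.h>
#include <rmsdef.h>
#include <starlet.h>
#include <string.h>

/* Sketch of RMS block (BIO) access: read one virtual block of the      */
/* indexed file into a buffer.                                           */
int read_vbn(char *filename, unsigned vbn, char *buf, unsigned short bufsiz)
{
    struct FAB fab = cc$rms_fab;
    struct RAB rab = cc$rms_rab;
    int status;

    fab.fab$l_fna = filename;
    fab.fab$b_fns = (unsigned char) strlen(filename);
    fab.fab$b_fac = FAB$M_GET | FAB$M_BIO;     /* block I/O, read only   */
    fab.fab$b_shr = FAB$M_SHRGET | FAB$M_SHRPUT;

    status = sys$open(&fab);
    if (!(status & 1)) return status;

    rab.rab$l_fab = &fab;
    status = sys$connect(&rab);
    if (!(status & 1)) { sys$close(&fab); return status; }

    rab.rab$l_bkt = vbn;                       /* starting virtual block */
    rab.rab$l_ubf = buf;
    rab.rab$w_usz = bufsiz;
    status = sys$read(&rab);                   /* block-mode read        */

    sys$disconnect(&rab);
    sys$close(&fab);
    return status;
}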

This second option seems to have promise, but our QIO documentation is 
incomplete.  Is there a glimmer of hope for us?


-- 
-------------------------------------------------------------------------
Michael Andrew		Sr. Systems Engineer	Austec, Inc.  San Jose CA.
mick@aussjo.UUCP				(408) 279 5533

rrk@byuvax.bitnet (01/11/88)

You have identified a problem which keeps having to be solved, and a fix
has been repeatedly requested from DEC via SIRs and SPRs.  DEC made a very
feeble attempt to fix it by providing the reverse-order keys, but what
they should have done is provide another bit in the RAB for reverse-order
retrieval.

A small correction:  You can open a file and first do indexed I/O and then
reconnect (not reopen) for block I/O, but I would never recommend trying
to find your way around inside an indexed file.  The most workable solution
(assuming you don't have too many duplicate records) is to add a longword
to your key and define the key for no duplicates.  Then, every time you
store a record, if you get a duplicate-key error back, increment the
longword and retry until the store succeeds.  Your key sort order will
then be proper in both directions.  Or, as you suggested, a time value
might be better if you have lots of duplicates and want to avoid
collisions.  Before the reverse keys were introduced, it was necessary to
store each key twice--once in two's complement--to get a proper reverse
sort.  Now at least the size of the data buckets is not increased.
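
A sketch of that store-and-retry idea.  The record layout, key offsets,
and routine name are invented; it assumes the file is already open and the
RAB connected with PUT access, and that the key is a string key covering
the name field plus the sequence longword (stored most-significant byte
first so it collates in insertion order).

#include <rms.h>
#include <rmsdef.h>
#include <starlet.h>
#include <string.h>

/* Sketch of the store-with-retry idea: the last longword of the key    */
/* is a sequence number, the key is defined with no duplicates, and on  */
/* an RMS$_DUP status we bump the sequence number and try again.  The   */
/* key would cover bytes 0-23 (name plus sequence number).              */
struct myrec {
    char          name[20];       /* user-visible part of the key       */
    unsigned char seq[4];         /* disambiguating longword, stored    */
                                  /* MSB first so the string key        */
                                  /* collates in insertion order        */
    char          data[100];
};

static void bump_seq(unsigned char seq[4])
{
    int i;
    for (i = 3; i >= 0; i--)      /* increment the big-endian counter   */
        if (++seq[i] != 0)
            break;
}

int store_record(struct RAB *rab, struct myrec *rec)
{
    int status;

    memset(rec->seq, 0, sizeof rec->seq);
    rab->rab$l_rbf = (char *) rec;
    rab->rab$w_rsz = sizeof *rec;

    for (;;) {
        status = sys$put(rab);
        if (status != RMS$_DUP)   /* success, or some unrelated error   */
            return status;
        bump_seq(rec->seq);       /* key collision: bump and retry      */
    }
}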