[comp.os.vms] Datatrieve Efficiency

grant@NRL-CSS.ARPA (William (Liam) Grant) (08/12/87)

Hello all:

	I already know that my use of Datatrieve will be less than
optimally efficient because of the length of my records.  I am
looking for suggestions within the constraints set by my employers:
I must use Datatrieve, with indexed fields; I must create the fields
they have determined are necessary, regardless of my opinion; and in
most cases they have specified minimum field widths.  Lastly, these
structures must be accessible through canned Datatrieve procedures
(so that a clerk/typist doesn't have to learn Datatrieve itself).
	I've met most of the specs, but I have one last question
concerning efficiency: record size.  My records are already 125 bytes
each, spread over several smaller fields, and I must now add a new
field of at least 100 bytes.  From what I remember of a file
structures course, an indexed structure is more efficient when an
integer number of records fits exactly in a disk block.  With RA81's
(512-byte blocks), that would mean 128- or 256-byte records would be
more efficient than 225-byte records, which leave 62 bytes of every
block unused.  Is this true ?
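	Doing the arithmetic myself (a quick C sketch; it assumes
whole records pack into 512-byte blocks and ignores the bucket
headers and per-record overhead that a real RMS indexed file adds):

#include <stdio.h>

/* How much of each 512-byte disk block goes unused for a given
 * record size, assuming only whole records per block and no RMS
 * bucket or record overhead. */
#define BLOCK_SIZE 512

int main(void)
{
    int sizes[] = { 125, 128, 225, 256 };
    int i;

    for (i = 0; i < 4; i++) {
        int per_block = BLOCK_SIZE / sizes[i];
        int wasted    = BLOCK_SIZE - per_block * sizes[i];
        printf("%3d-byte records: %d per block, %3d bytes wasted\n",
               sizes[i], per_block, wasted);
    }
    return 0;
}

By that count, 128-byte records pack 4 per block with no waste and
256-byte records pack 2 per block with no waste, while 225-byte
records pack 2 per block and waste 62 bytes (about 12%) of each one.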
	Have I forgotten something ?  Have I missed something in the
manuals ?  (With so many manuals for ONE layered product, it's easy
to do.)
	Reply to me and I'll summarize.

					grant@nrl-css.arpa
					grant@nrl-com.arpa
					(202) 767-2392

					William (Liam) Grant
					Naval Research Laboratory
					Code 5522
					Washington, D.C. 20375-5000