[comp.databases] Archiving data

cjackso@uswnvg.UUCP (Clay Jackson) (01/23/91)

As our Oracle applications get larger and more complex, we're seeing a number
of cases where tables are getting too large to process rapidly in an OLTP
environment.  For example, a table that held 10K rows fit entirely in the
SGA (shared memory, for those of you not using Oracle), but at 100K rows it
no longer fits.  The result is that database operations (especially queries)
are taking longer and longer.

Our users tell us that it's "ok" to move "old" data to someplace else that
would be "less" accessible (i.e., they would accept slower response times,
and in some cases would be willing to put up with delays while off-line
media was retrieved).  There are several "issues" we're facing, and we're
wondering if anyone else has solved some of these problems:

1)  What types of reasonable (performance/cost) devices exist for large
Unix platforms (we're running a Pyramid/Sequent shop) that will enable us
to store and retrieve several GB of data?

2)  How do we "archive" the data in such a way that we retain all of the
foreign key relationships as they were at the time the data was last
updated (for example, we have several tables that each reference more than
3 other tables)?  A rough sketch of what we mean follows this list.

3)  What sort of mechanisms can we use to allow potential updates of this
data?
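
To make 2) concrete, the naive approach would be a plain copy-then-delete
into mirror "archive" tables, pulling along whatever parent rows the
archived children still reference, so the relationships are frozen as of
archive time.  The table and column names below are made up for
illustration:

    -- Hypothetical schema: ORDERS references CUSTOMERS (among others).
    -- 1. Copy the "old" child rows into a mirror table.
    INSERT INTO arch_orders
      SELECT *
        FROM orders
       WHERE last_updated < TO_DATE('01-JAN-1990', 'DD-MON-YYYY');

    -- 2. Copy any referenced parent rows not already archived,
    --    so the foreign keys still resolve inside the archive.
    INSERT INTO arch_customers
      SELECT *
        FROM customers c
       WHERE c.cust_id IN (SELECT cust_id FROM arch_orders)
         AND c.cust_id NOT IN (SELECT cust_id FROM arch_customers);

    -- 3. Only then remove the rows from the live table.
    DELETE FROM orders
     WHERE last_updated < TO_DATE('01-JAN-1990', 'DD-MON-YYYY');

For 3), one mechanism along these lines would be restore-on-demand: copy
the row back into the live table, let it be updated there, and re-archive
it later.  Whether that scales is part of what we're asking.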

Clay Jackson
US West NewVector