[comp.unix.xenix] Accell's Recovery after System Crash

GeeWhiz@cup.portal.com (Paul Terry Pryor) (03/01/89)

Recently, after a series of memory parity panics, the two
critical database files, files.db and unify.db, were both
flagged by fsck with a warning about inconsistent file sizes.
A listing of their sizes was too amazing to believe!
Files.db weighed in at a whopping 6 megs, and unify.db
grew to 500K. Since Accell's manual did not state any
specific recovery actions, I am appealing to anyone who
can lend me a hand recovering these two critical files.

The alternative is to lose hundreds of hours of work put into
Accell. Thankfully, we are evaluating several commercial
databases, not coding a specific database application.

Thanks !

Paul Pryor
Integrated Microcomputer Systems, INC
NASSIR ACIRS3+ Project Team
Bailey's Crossroads, VA

bill@bilver.UUCP (bill vermillion) (03/03/89)

In article <15221@cup.portal.com> GeeWhiz@cup.portal.com (Paul Terry Pryor) writes:
>Recently, after a series of memory parity panics, the two
>critical database files, files.db and unify.db, were both
>flagged by fsck with a warning about inconsistent file sizes.

That is typical of non-sequentially created files.  Most of the time you don't
have to worry about fsck reporting inconsistent file sizes, particularly
in database applications.
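The "inconsistent" size is usually just the gap between the file's recorded
length and the blocks actually allocated.  A minimal sketch of how a sparse
(non-sequentially created) file shows an inflated size -- the filename here
is an arbitrary example, nothing Unify-specific:

```shell
# Seek ~1 MB past the start and write a single 1K block.  The file's
# recorded length is about 1 MB, but only one block is allocated; the
# hole in between occupies no disk space.
dd if=/dev/zero of=sparse.demo bs=1024 seek=1024 count=1 2>/dev/null

ls -l sparse.demo   # reports the logical length (~1 MB)
du -k sparse.demo   # reports the blocks actually allocated (a few K)
```

The two numbers disagree by design; a checker that compares recorded
length against allocated blocks will flag exactly this situation.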

>A listing of their sizes was too amazing to believe!
>Files.db weighed in at a whopping 6 megs, and unify.db
>grew to 500K.

Those numbers are NOT uncommon.  I have a client site where the file.db and
.dbr are larger than the drive they are on.  Where you will have a problem is
if you try to copy that file out and then copy it back in.  The unused
blocks will be filled with zeros and included in the copy, and you won't be
able to put the file back on the disk.
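That hole-filling behavior is easy to demonstrate: a plain copy reads the
holes back as zeros and writes them out as real blocks.  A small sketch,
again with arbitrary filenames:

```shell
# Make a sparse file with a ~1 MB hole.
dd if=/dev/zero of=hole.orig bs=1024 seek=1024 count=1 2>/dev/null

# A naive copy reads the hole as zeros and writes every block out.
cat hole.orig > hole.copy

# The original occupies a few K; the copy occupies the full ~1 MB,
# even though the two files compare byte-for-byte identical.
du -k hole.orig hole.copy
```

So a file whose holes exceed the free space on the disk can be copied
off but never copied back on.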

>          Since Accell's manual did not state any
>specific recovery actions, I am appealing to anyone who
>can lend me a hand recovering these two critical files.
>

Your best bet, if you feel the files are bad, is to select all the data into an
ASCII file, recreate the database, and then reload it from the ASCII file.

>The alternative is to lose hundreds of hours of work put into
>Accell. Thankfully, we are evaluating several commercial
>databases, not coding a specific database application.

-- 
Bill Vermillion - UUCP: {uiucuxc,hoptoad,petsd}!peora!rtmvax!bilver!bill
                      : bill@bilver.UUCP

frankb@usource.UUCP (Frank Bicknell) (03/05/89)

In article <432@bilver.UUCP>, bill@bilver.UUCP (bill vermillion) writes:
> Those numbers are NOT uncommon.  I have a client site where
> the file.db and .dbr are larger than the drive they are on.
> Where you will have a problem is if you try to copy that
> file out and then copy it back in.  The unused blocks will
> be filled with zeros and included in the copy, and you won't
> be able to put the file back on the disk.

Which brings up a point: does backup/restore do this
zero-filling stuff?  Especially in the 'r' mode of restore I
suspect it might not, but I have no proof.  Does anyone know
for sure?
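One way to settle it empirically on your own system: round-trip a known
sparse file through whatever backup/restore pair you use and compare the
allocated blocks before and after.  The sketch below stands in with tar
purely for illustration -- substitute your actual backup and restore
commands, the procedure is the same:

```shell
# Create a sparse file with a ~1 MB hole as the test subject.
dd if=/dev/zero of=sparse.before bs=1024 seek=1024 count=1 2>/dev/null

# Round-trip it through the archiver (stand-in for backup/restore).
mkdir -p restored
tar cf archive.tar sparse.before
tar xf archive.tar -C restored

# Contents always survive; the question is the allocation.  If the
# second number is much larger than the first, the holes were
# zero-filled on restore.
du -k sparse.before restored/sparse.before
```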
-- 
Frank Bicknell; 1405 Main St, Ste 709; Sarasota, FL 34236-5701
killer!usource!frankb