lazear@MITRE@sri-unix (06/06/82)
Date: Fri, 21 May 1982 0830-EDT

The Air Force Data Services Center has done extensive work in improving
dump's capabilities:
	- it will dump at either 800 or 1600 bpi
	- it can block up to 8 Unix blocks per tape block
	- it can create identical backup tapes
	- the code is documented at least to the subroutine level
Similar work has been done with restor:
	- it will restore at 800/1600, with various blocking factors
	- it has commented code
	- it can recover from tape errors in several ways:
		- by requesting a backup tape and seeking to the same place
		  on that tape and continuing (this can go on forever,
		  switching tapes, much like avoiding potholes on a
		  two-lane road)
		- by "checkpointing" at each inode and losing only one inode
		  in the case of a tape error with no backup tape (we
		  recovered a 56000-block filsys with 10 tape errors (no
		  backup) and lost only 4 files!)

The source is available by sending a tape to:
	Charles Muir, AFDSC/CMT, Room 1D988, Pentagon, Wash, DC 20330

Walt Lazear (formerly AFDSC, now Mitre)
dmr (09/08/82)
Making the dump command restartable between tapes is a laudable
objective, but it is (if I remember how dump works) decidedly
non-trivial.  The problem is that dump does not decide in advance how
files are to be distributed onto the tapes.  It starts by making
several passes through the file system, deciding which files are to be
dumped.  Then it dumps the files themselves; however, the decision to
change tapes is made dynamically.

Probably the minimal change would involve creating a file after
writing each tape that tells the last inode that was completely
dumped, and adding a dump option to read and believe such a file.
(Or just have dump print out the last i-number, and enter it manually
for the second dump.)  You do have to worry about how to decide, the
second time around, which files to dump.  I suppose this record could
also be saved in a file.

One simple change will vastly increase the reliability of multi-tape
dumps: have the command flush type-ahead, and insist on a full "yes"
answer when it is ready to change tapes.  This may already be in BSD
distributions.

I have seen in unix-wizards reports that dump gets badly boggled by
some "phase errors" -- files that have changed since the start of the
dump or during the dump.  This really shouldn't happen, and should be
fixable.  Since I have not seen the problem occur, it may be a problem
only in some versions.

		Dennis Ritchie
wss@Lll-Unix@sri-unix (09/13/82)
From: wss at Lll-Unix (Walter Scott - Consultant/SC)
Date: 10 Sep 1982 at 0921 Pacific Daylight Time

I understand that a restartable multi-tape version of dump has been
written at UC Berkeley.  It works as follows: when dump reaches the
end of a tape, it forks.  The parent waits, and the child tries
writing the new tape.  If the child fails for some reason, the parent
just reforks and tries again.  If the child succeeds in writing the
tape, it kills the parent and becomes the new dump.  It seems as
though this should be fairly easy to implement.
nancy@resonex.UUCP (Nancy Blachman) (07/18/85)
The UNIX manual page for dump(8) suggests dumping a file system
according to a modified Tower of Hanoi algorithm.  If you know how the
suggested sequence, i.e.,

	0 3 2 5 4 7 6 9 8

relates to the Tower of Hanoi algorithm, would you please write to me
and tell me?  The Tower of Hanoi algorithm is the sequence of moves
required to transfer rings of different sizes from one of three pegs
to another, with the restriction that no ring may lie on top of a
smaller ring.  Do you know who invented the Tower of Hanoi?
/\//\//\//\//\//\//\//\//\//\//\//\//\//\//\//\//\//\//\//\//\//\//\/
Nancy Blachman
UUCP: {hplabs,ihnp4,ucbvax!sun}!resonex!nancy	(408) 720 8600 x37
ARPA: nancy@riacs.ARPA
rfb@cmu-cs-h.ARPA (Rick Busdiecker) (07/19/85)
As to how the sequence

	0 3 2 5 4 7 6 9 8

relates to the Tower of Hanoi algorithm, I'm not completely sure;
however, I can generate the sequence

	1 3 2 0 5 4 7 6 9 8 ...

and I believe dump(8) refers to a modified version of it.

Number the discs starting with 0 on top, and number the posts 0, 1, 2.
Consider the sequence of moves as ordered pairs (disc, post).  If you
add the two numbers in each pair, and take each sum the first time it
occurs (marked above its pair below), you get the sequence above:

  1     3     2           0                 5
(0,1) (1,2) (0,2) (2,1) (0,0) (1,1) (0,1) (3,2)
                    4
(0,2) (1,0) (0,0) (2,2) (0,1) (1,2) (0,2) (4,1)

(0,0) (1,1) (0,1) (2,0) (0,2) (1,0) (0,0) (3,1)
                                            7
(0,1) (1,2) (0,2) (2,1) (0,0) (1,1) (0,1) (5,2)

(0,2) (1,0) (0,0) (2,2) (0,1) (1,2) (0,2) (3,0)
                                            6
(0,0) (1,1) (0,1) (2,0) (0,2) (1,0) (0,0) (4,2)  ...

As for an inventor, the story I've always heard is that there is a
64-disc Tower that is being moved by Buddhist monks, and that when
they complete their task (they believe) the world will come to an end.
However, if they move a disc per second it will take 2^64
(~ 1.84 x 10^19) seconds to complete.  This is about 584 billion
years, so it shouldn't affect people reading the bboard very much!

Rick Busdiecker
rfb@cmu-cs-h.arpa
chris@umcp-cs.UUCP (Chris Torek) (08/22/85)
A while back I posted a mass (pardon the pun) of 4.2BSD kernel +
dump(8) hacks (something I call the "mass driver") for speeding up
dumps.  Well, when I got in yesterday, around 7PM, our big 785 had
just gone down for a level 0 dump.  I decided to time things, and send
out some "hard data" on the effectiveness of the mass driver.

Here is how the disks are set up, right now:

Filesystem    kbytes    used   avail capacity  Mounted on
/dev/hp0a       7421    6445     233    97%    /
/dev/hp1g     120791   80891   27820    74%    /foonman
/dev/hp2c     236031  194710   17717    92%    /ful
/dev/hp3a     179423  150545   10935    93%    /oldusr
/dev/hp3b     179423  148005   13475    92%    /g
/dev/hp4a     179423  157349    4131    97%    /usr
/dev/hp4d      15019    7469    6048    55%    /usr/spool
/dev/hp5c     389247  317269   33053    91%    /u

(I've deleted the entries for disk partitions that we don't dump,
e.g., /tmp.)  "bc" tells me that the "used" column sums to
1,062,683K---just a bit over one gigabyte.  We dumped all of that to
two tape drives in under two hours.

------[suspenseful pause; commercial break; etc.]------

First, I need to describe our configuration.  It looks approximately
like this:

	<===================SBI===================>
	   |||            |||            |||
	   RH0            RH1            RH2
	    |              |              |
	   |||             |
	  RM05             |             TU77
	             4 Fuji Eagles
	  TU78                          2 RP06s

(The '|||'s are intended to indicate higher bandwidth (-:.  "RH" is
DECish for a MASSBUS.  RH1 is not a real MASSBUS, but rather an Emulex
controller that emulates one.)

Two weeks ago I observed another level 0 dump in progress.  We had
been using a rather suboptimal approach; it turns out that dumping the
RM05 to the TU78 while dumping one of the Eagles to the TU77 uses up
too much of RH0's bandwidth, or something; in any case we have
reordered the dumps to run from the Eagles to the TU78 and from other
drives to the TU77.  It helps.

Anyway, onward! to more timing data.  I timed three tapes written on
the TU77, dumping from /usr (one of the Eagle partitions).
The total write time (not counting the 1 minute 20 seconds for rewind)
was in each case under 5 minutes (I recall 4:30, 4:40, and 4:45, I
think; but I didn't write these down).  A TU77 has a maximum forward
speed of 125 inches per second, which works out to 3 minutes 50.4
seconds to write an entire 2400 foot reel.  4:40 gives an average
forward write speed of 102.857 inches per second, which is not bad.

Unfortunately, by the time I thought of timing these we had already
started the last full reel for the TU78, so I don't have numbers for
it; however, it seemed to write a full reel in a little under twice as
long as the TU77, so I'd estimate 9 minutes per tape.  Since the TU78
was running at 6250 bpi, this is not bad; it works out to about twice
the data transfer rate of the TU77.

In any case, the total time for the dump, including loading tapes and
other small delays, was 1 hour 56 minutes from start to finish.  This
compares quite well to the 4.1BSD days of six hour dumps for two RP06s
(~250M dumped on just a TU77), or our pre-mass-driver 4.2BSD days of
four or five hour dumps for the configuration listed above (but
somewhat less full).  Further improvement is unlikely, short of
additional tape drives.

------[another break, of sorts]------

For those who have stuck with me to the end of this (admittedly long
and getting wordier by the moment :-) ) article, here's a small ugly
kernel hack that should be installed after the mass driver, to make
old executables with a bug in stdio work.
In sys/sys/sys_inode.c, find ino_stat() and change the two lines
reading

	else if ((ip->i_mode&IFMT) == IFCHR)
		sb->st_blksize = MAXBSIZE;

to

	else if ((ip->i_mode&IFMT) == IFCHR)
#if MAXBSIZE > 8192
		/* XXX required for old binaries */
		sb->st_blksize = 8192;
#else
		sb->st_blksize = MAXBSIZE;
#endif

This generally shows up when doing things like

	grep e /usr/dict/words >/dev/null

as random weird errors, core dumps, and the like (stdio is tromping on
whatever data occurs after _sobuf in the executable).
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci Dept (+1 301 454 4251)
UUCP:	seismo!umcp-cs!chris
CSNet:	chris@umcp-cs		ARPA:	chris@maryland
salmi@dicomed.UUCP (john s salmi) (10/03/85)
a point of interest has been generated here of late.  we are beginning
to use our 4.2 vax 750 as a serious development machine.  a lot of
proprietary software lives on it, and dumps are done daily.  my
question is this:

has anyone hacked dump to perform a verify pass on the data dumped?

a few of our other development systems (pdp-11's) running rsx use bru
as a backup facility.  one of the switches on the bru command line
tells bru to make sure that the tape is written as it is supposed to
be written.  this does, of course, cause a bit of hardship, as the
pdp's are backed up each morning around 8:00, and the machines need to
be in single user mode for the verifications to work.

anyway, does such a utility exist?  if so, can someone point me in the
direction of the source?

as always, thanks in advance...

-john
-- 
john salmi				{ihnp4,mgnetp}!dicomed!salmi
system administrator
dicomed corporation
minneapolis, mn				612/885-3092
eric@amc.UUCP (Eric McRae) (10/04/85)
> has anyone hacked dump to perform a verify pass on the data dumped?
If your dumps fit on one tape, you can at least do a dd of the tape
into /dev/null. That will check for tape errors. At some point, you
should verify that your dump/restore mechanism works. Understand what
you would have to do to partially or completely rebuild a filesystem.
I have had disk crashes three times in my career as a sysad. Each
one wiped out two filesystems. I could give you a lot of other hints
about this whole process but for now I'll assume that you don't need
them. If you do, send me a note.
Eric McRae Engineering Fellow, Applied Microsystems Corporation
Phys: 5020 148th Ave. N.E. USPS: PO Box C-1002
Redmond, Wa 98052 Redmond, Wa. 98073-1002
UUCP: eric@amc (..uw-beaver!tikal!amc!eric) ATT: (206) 882-2000
richl@lumiere.UUCP (Rick Lindsley) (10/07/85)
A relatively reliable way to check even a multi-volume tape is to do a
dumpdir (or restore -t, depending on your flavor) and, using awk or
sort or something, obtain the highest-numbered inode that was dumped.
Then restore that file, doing it the *dumb* way (starting with tape 1,
through tape n).  This will force restore to read each and every tape
and finally restore the last file on the last tape.

This doesn't verify the data in every file, but it does verify that
all the tapes are readable, that the labels you put on the tapes are
correct, and that the data on the tape is in a recognizable format.
This was the test we used to use at a leading university (Hi guys)
before we packed away the last full backups of a semester for three
years.

If all you are interested in is tape errors, however, then as someone
else mentioned, dd is a very fast way of checking for those.

Rick Lindsley
acheng@uiucdcs.CS.UIUC.EDU (10/09/85)
>/* Written 1:06 am Oct 7, 1985 by richl@lumiere.UUCP in uiucdcs:net.unix-wizar */
>A relatively reliable way to check even a multi-volume tape is to do a
>dumpdir (or restore -t, depending on your flavor) and, using awk or
>sort or something, obtain the highest number inode that was dumped.
>Then restore that file, doing it the *dumb* way (starting with tape 1,
>through tape n).  This will force restore to read each and every tape
>and finally restore the last file on the last tape.

If an empty directory happens to have the highest inode, you are out
of luck.  Directories are dumped at the very beginning, so your
"restore" will read only a bit of vol. 1 and finish.  You may want to
search backwards (from the highest inode down) for a file not ending
with a "/.".

We used the above approach before, but now use "dd" since it is much
faster and does not make much difference.
----------------------------------------------------------------------
Albert Cheng
acheng@UIUC.ARPA	acheng@UIUC.CSNET	{ihnp4,pur-ee}!uiucdcs!acheng
Dept. of Computer Science, Univ. of Illinois-Urbana,
Rm. 240, 1304 W. Springfield, Urbana, IL 61801
%%% The above is my own opinion                 %%%
%%% and not necessarily that of the management. %%%
danny@itm.UUCP (Danny) (10/09/85)
In article <631@dicomed.UUCP> salmi@dicomed.UUCP (John S Salmi) writes:
> ...
> has anyone hacked dump to perform a verify pass on the data dumped?
> ...

As a matter of fact, I have access to a Zilog Z8000 machine with a
cartridge-tape backup.  I don't have source, but during dump(8), in
level IV, I can hear the tape starting/stopping, then a rewind, then
some more starting/stopping.  What could this be but a verify?

Of course, one may well have to do some mighty heavy persuadin' to get
Zilog (or whoever owns source) to cough it up.
-- 
Daniel S. Cox
({siesmo!gatech|ihnp4!akgua}!itm!danny)
john@basser.oz (John Mackin) (10/11/85)
In article <101@itm.UUCP> danny@itm.UUCP (Daniel S. Cox) writes:
> As a matter of fact, I have access to a Zilog Z8000 machine with a
> cartridge-tape backup.  I don't have source, but during dump(8), in
> level IV, I can hear the tape starting/stopping, then a rewind, then
> some more starting/stopping.  What could this be but a verify?

Ah, the good old Zeus.  But this is no time for reminiscing; you asked
a question.  Here's the answer.

What else could it be?  A lot else.  In fact, what it is, is the tape
changing tracks.  Those cartridge tapes are implemented in the
following way: they have four (or in some cases nine, but the one in
the Zeus is four if I remember right) tracks, but unlike a real
magtape the data is recorded in a bit-serial fashion ON EACH TRACK,
not across the width of the tape (i.e., the read-write head has only
one gap, not four or nine).  The head actually moves up and down
relative to the tape in order to select a track.  Furthermore, the
tape can only read/write in the forward direction.  So, to read/write
a full tape takes 4 (or 9) (read/write)-a-track/rewind cycles.

John Mackin, Basser Department of Computer Science,
University of Sydney, Sydney, Australia

seismo!munnari!basser.oz!john	john%basser.oz@SEISMO.CSS.GOV

[By the way, let's have no misunderstandings: my association of the
words "good" and "Zeus" is intended as sarcasm.  Heavy sarcasm.]
speck%cit-vlsi@CIT-VAX.ARPA (Don Speck) (10/11/85)
> has anyone hacked dump to perform a verify pass on the data dumped?
There are several levels of "verify" that one might want:
[1] Read the tape end-to-end just to see if read() returns any errors.
[2] Record the checksum of each block written, read the tape, compute
the checksums of the blocks read, and compare.
[3] After each tape, repeat pass IV with the tape write() replaced
with a read(), and compare with the disk data.
I suspect that the last of these is what the original poster had in
mind. The trouble (aside from being slow) is that ANY filesystem
activity will cause a compare of "not identical". Let's say the last
modify time of /dev/rmt8 changed while I was dumping, and I get a
"not identical" compare. Does that mean that my dump of the root
filesystem was incorrect? (Think!) Some discrepancies are inevitable,
and cause no harm. Unless these inconsequential diagnostics are
suppressed, the output becomes mostly noise, hence worse than useless.
Some of us aren't allowed the luxury of coming down to single-user
mode for dumps, or even dismounting filesystems. For instance, an
availability of only 99% on our Sun diskservers would be considered
by faculty and staff as cause for serious emergency finger-pointing
meetings. Taking our whole Sun network down 3 hours per Eagle for
full rdumps... look, I dislike the sight of gore.
When dumping active filesystems, there can be files on the tape
that are no longer on the disk. It might be quite difficult to
resynchronize the comparison. Dump(8) would need to have most
of restore(8) built into it.
Even the seemingly paranoid level of checking in [3] is hardly a
guarantee that the tape can be restored.  I remember spending
a day re-doing dumps of a particular filesystem on our 780, NONE
of which would restore onto our 750, before discovering that the
filesystem being dumped wouldn't pass fsck. The only way to be
SURE that the dump can be restored, is to run restore!
The only verification I do (it's all I can *afford*) is to list
my dumps with "restore t".  This gives me at least a minimal
sanity check, and the listings come in handy when I must restore
accidentally deleted files. Our sole GCR drive spends 12 hours
per week doing dumps and rdumps, a figure growing despite my best
speedup efforts; I can ill afford to double that time for dubious
assurances. However, schemes [1] and [2] might be cheap enough
to use. Anyone care to convince me that one of these is worthwhile?
I certainly know how to implement them...
Don Speck speck@cit-vax.arpa
Yeah, I know, I should shell out $9K for another GCR drive.
root%bostonu.csnet@CSNET-RELAY.ARPA (BostonU SysMgr) (10/11/85)
Ok, a couple more thoughts on one of our favorite subjects (:-)

1. Assuming you are able to design your dump schedules to use a lot of
incremental dumps which are not huge (and have, of course, completely
thought out what it takes to restore such a sequence), why not dump to
a file, then put that to tape (tar would do fine, and would also allow
multiple savesets on a tape if you just give them different names).
Now the verify should be workable, albeit not speedy (verify against
the disk image; it won't get changed as the system runs).  One would
probably have to write a streaming form of cmp/tar so you could do the
moral equivalent of

	cmp `tar x mydump` mydump

or some such, or just make another disk copy if you can afford to
(cd ../scratch; tar x mydump; cmp mydump ../dump/mydump).  This of
course assumes sufficient disk space, and may require relying on more
frequent incrementals rather than only occasional fulls.  You may
still have to take your chances with fulls or use another scheme, but
this could help a lot.

2. Use two tape drives, or dump twice to two tapes, which takes about
the same amount of time as verifying a tape.  They may not be
identical, but they should, if done right, be equally useful if the
same level dump is used and the first does not update dumpdates.  It
doubles tape cost, but something has to give!

Note that a lot of this does not prevent what I have experienced as
the absolute worst disaster in the whole backup industry: a tape drive
goes out of synch; eventually someone notices something is wrong
(maybe it goes completely down); FS comes and fixes it.  It now
refuses to read many, many tapes written on itself before the fix.
This is not as uncommon as you think (have you ever checked after FS
left?), which means that to really do a verify, you ought to do it on
a different tape drive, or maybe N different tape drives...?  Trust
me, it happens.

SO: WHAT I AM REALLY CURIOUS ABOUT, BEING AS TAPE DRIVES STINK (I HATE EM!):
Anyone out there have Optical Disks *in production* on their Unix
systems?  Mail to me and I'll summarize.  This promises to solve some
set of problems, I believe, if it ever keeps its promises.

	-Barry Shein, Boston University
	 TAPES committee (Tapes Are a Poor Excuse for a Solution.)
phil@RICE.ARPA (William LeFebvre) (10/13/85)
> Note that a lot of this does not prevent what I have experienced as the
> absolute worst disaster in the whole backup industry: A tape drive is
> going out of synch, eventually someone notices something is wrong
> (maybe goes completely down), FS comes and fixes it. It now refuses to
> read many many tapes written on itself before the fix. This is not as
> uncommon as you think (have you ever checked after FS left?)

OH YES!!!  We got caught on this once.  And ever since then, whenever
FS does anything to the tape drive, I make sure BEFORE THEY LEAVE that
the drive can still read old tapes.  It's been our experience with
Digital and the TU-XX tape drives that it isn't that the heads were
out of alignment before, it's that the FS engineer doesn't align them
right!  And, take note, they test tape drives by running a diagnostic
that writes and then reads what it has written.  That is, it makes
sure the drive is consistent with itself, but not necessarily with the
rest of reality.

This also happened once with one of our RM-03 drives.  They replaced a
head that had crashed, aligned all the heads, declared it working, and
when I mounted a system pack, the machine couldn't boot off of it.  In
this case, tho, testing with an old pack was requested by the FS
engineer.

In general, the responsible system manager should double-check FS work
when necessary by trying old tapes and packs.  Granted, there are
times when it was wrong beforehand, and fixing it requires sacrificing
the ability to read old media.  But not usually.

			William LeFebvre
			Department of Computer Science
			Rice University
			<phil@Rice.arpa>
or, for the daring:	<phil@Rice.edu>
gww@aphasia.UUCP (George Williams) (10/14/85)
> > has anyone hacked dump to perform a verify pass on the data dumped?
>
> There are several levels of "verify" that one might want:
>
> [1] Read the tape end-to-end just to see if read() returns any errors.
>
> [2] Record the checksum of each block written, read the tape, compute
>     the checksums of the blocks read, and compare.
>
> [3] After each tape, repeat pass IV with the tape write() replaced
>     with a read(), and compare with the disk data.

Under VMS the magtape driver has a write-with-verify operation that
its backup utility uses.  This could easily be implemented in user
code by doing a write, backing up, doing a read, and then comparing
the written buffer with the read one.  Simple, and no need to worry
about changing filesystems.

Unfortunately it is slow too.  It will at least double the tape I/O
time (and make a streaming tape drive stop streaming).  I think this
might fit in as 2.5 above.
gwyn@brl-tgr.ARPA (Doug Gwyn <gwyn>) (10/14/85)
There exist master alignment disks and magtapes. Do not allow your serviceman to "align" your magnetic storage devices without using them.
eric@amc.UUCP (Eric McRae) (10/14/85)
The recent articles about restoring and verifying dumps have prompted
me to edit a handbook for posting.  If you have any hints or horror
stories about attempting to dump/restore filesystems, please send
them to me via email only.  I will put a "Filesystem Backup" handbook
together using the responses I get.  This handbook should be helpful
for new (and used) SAs anywhere.  Thanks in advance.

Eric McRae
Applied Microsystems Corporation
UUCP: amc!eric@tikal	..uw-beaver!tikal!amc!eric
jeq@laidbak.UUCP (Jonathan E. Quist) (10/16/85)
In article <324@aphasia.UUCP> gww@aphasia.UUCP (George Williams) writes:
> Under VMS the magtape driver has a write with verify operation that its
> backup uses, this could easily be implemented in user code by doing a
> write, backing up, doing a read, and then comparing the written buffer
> with the read one. Simple, no need to worry about changing filesystems.
>
> Unfortunately slow too. It will at least double the tape IO time (and
> make a streaming tape drive stop streaming).

I'm sure I've seen specs on at least one tape controller that did
automatic write/verify operations.  If the drive's read heads are
after the write heads in the tape path, this could conceivably be no
problem.  Even if the drive had to rewind to verify, the work would
still be done by the controller, not the cpu, so the time penalty
would not necessarily mean a great system performance penalty.
(Excepting the problem with rewinding streaming drives.)

Now, the $68000 question.  Does anyone else out there know of tape
controllers that do write/verify operations automatically, or is this
just the product of a warped memory (faulty media?)?

Jonathan E. Quist
ihnp4!laidbak!jeq
perl@rdin.UUCP (Robert Perlberg) (10/22/85)
Guy Harris's version of restor, as implemented on MASSCOMPs, has a 'c'
option which compares a dump with the filesystem and flags differences
in data content.  I believe it is in the public domain and has been
posted to the net.

Robert Perlberg
Resource Dynamics Inc.
New York
{philabs|delftcc}!rdin!perl
slezak@lll-crg.ARPA (Tom Slezak) (10/28/85)
In an earlier letter on this topic, Daniel Cox said that his Zilog
Z8000 machine did rewinds during dumps and then resumed forward tape
motion... with the hopeful assumption that this implied that
verification was being done.  Alas, I fear this is not the case.  My
Zilogs have the same cartridge drives, and they are multi-track
beasties.  The rewind is to get to the beginning of the next track...
so don't expect any verification code from Zilog.

tom slezak
LLNL	slezak@llnl
cnrdean@ucbtopaz.BERKELEY.EDU (10/30/85)
Tom Slezak: I think you're right.  As a matter of fact, I think the
Zilog tape dump programs are programs which were never completed.  If
I ask restor to recover a file that is on the 2nd track, restor first
determines what the inode is.  Then it starts down the first track to
see if that inode is on it.  If it is not, it starts down the second
track...  If the inode happens to be on the last track, it can take a
LONG time to recover a file.  I don't know why it doesn't check the
inode at the beginning of each track first.

If anybody from Zilog reads this, I would appreciate a comment.  I
have been too lazy to call you.

Sam Scalise
radzy@calma.UUCP (Tim Radzykewycz) (10/31/85)
In article <135@ucbjade.BERKELEY.EDU> cnrdean@ucbtopaz.UUCP () writes:
> Tom Slezak. I think you're right. As a matter of fact, I think the
> Zilog tape dump programs are programs which were never completed: If I
> ask restor to recover a file that is on the 2nd track, restor first
> determines what the inode is. Then it starts down the first track to
> see if that inode is on it. If it is not, it starts down the second
> track ... If the inode happens to be on the last track, it can take a
> LONG time to recover a file. I don't know why it doesn't check the
> inode at the beginning of each track first.

The cartridge tapes in question don't work this way.  The fact that
they seek down the first track, rewind, seek down the next track, ...
is not because the program (restor in this case) isn't finished or
doesn't know what's going on.  These tape drives look like normal
9-track tape drives to the applications.  I'm pretty sure that *even
the controller* doesn't know where the end of one track is and the
beginning of the next; that is known only by the tape drive itself.
In any case, the fact that a particular file is located on a
particular track of the tape is invisible to the applications.
-- 
Tim (radzy) Radzykewycz, The Incredible Radical Cabbage
	calma!radzy@ucbvax.ARPA
	{ucbvax,sun,csd-gould}!calma!radzy