michael@orcisi.UUCP (07/26/87)
The bottom line was that dump was not using an appropriate blocking factor. Thanks to the people at Sun and everyone else for your prompt responses. All of the responses follow.

Michael Herman
Optical Recording Corporation
141 John Street, Toronto, Ontario, Canada M5V 2E4
UUCP: { cbosgd!utcs ihnp4!utzoo seismo!mnetor }!syntron!orcisi!michael
ALSO: mwherman@watcgl.waterloo.edu

-------------------------------------------------------------------------------
From syntron!mnetor!seismo!sun!rtilson Fri Feb 20 15:40:08 1987
From: Rick Tilson -Western Support Center- x3605 <mnetor!seismo!sun!rtilson>
Subject: Re: Why do our Sun 3/160 backups take 3+ hours?

Try this. I use it on my 3/75 with a 71 MB Fujitsu and 1/4" tape:

	dump 0ufdsb /dev/rst8 1000 3825 1750 /dev/rsd0g

This takes 11-15 minutes. If you are using the c option to specify cartridge tape, you're using very minimal options for length, density, etc. That option was intended more for the old 4-track tape drives in the 100's and 120's.

-------------------------------------------------------------------------------
From syntron!mnetor!seismo!harvard!adelie!munsell!pac Fri Feb 20 15:40:13 1987
From: mnetor!seismo!harvard!munsell!pac (Paul Czarnecki)
Subject: Re: Why do our Sun 3/160 backups take 3+ hours?
Newsgroups: comp.unix.wizards
Organization: Eikonix Corp., Bedford, MA

In article <926@orcisi.UUCP> you write:
>We have a basic configuration consisting of single 71MB hard disk and 1/4"
>cartridge tape drive.
>
>Using dump, it currently takes 3+ hours to backup /usr. /usr is a 40MB
>file system that is about 95% full.

Try a blocksize of 126. I've seen tar become > 50% faster with this blocksize; maybe dump will also. (Where did I get this number? Somebody told me when I complained it was too slow.)

-------------------------------------------------------------------------------
From oscvax!utgpu!yetti!gen1!gen1.UUCP!tyler Fri Feb 20 15:56:53 1987
From: yetti!gen1!tyler (Tyler IVANCO)
Subject: block.c et al.
X-Mailer: msg [version 3.2]

Michael,

I believe your problem is common to all systems using streamer tape drives. The problem is basically that Unix cannot supply the streamer drive with data fast enough. I had the same problem and wrote a small block program that collects data from a pipe into a, say, 1 MB buffer, then uses a single write call to dump it to stdout, e.g.

	dump 0f - | block -b 1024000 > /dev/rmt4

On my ICM3216, the file dump took 2 hours using dump 0f /dev/rmt4 and less than 20 minutes using the dump/block combination. The program is trivial. Here is block.c. It uses getopt, which was distributed by the net a while back.

#include <stdio.h>

#define DBUFFERSIZE 10240

main(argc, argv)
int argc;
char **argv;
{
	extern int optind;
	extern int getopt();
	extern char *malloc();
	extern char *optarg;
	register char *buffer;
	register int i, bsize, total;
	int c, buffersize, quiet = 0;

	buffersize = DBUFFERSIZE;
	while ((c = getopt(argc, argv, "qb:")) != EOF) {
		switch (c) {
		case 'b':
			sscanf(optarg, "%d", &buffersize);
			break;
		case 'q':
			quiet++;
			break;
		default:
			exit(1);
		}
	}
	if (!quiet)
		fprintf(stderr, "Blocksize=%d\n", buffersize);
	if ((buffer = malloc(buffersize)) == 0) {
		fprintf(stderr, "Not enough memory available\n");
		exit(1);
	}
	/* touch each page so the buffer is resident before I/O starts */
	for (c = 0; c < buffersize; c += 256)
		*(buffer + c) = 0;
	total = 0;
	bsize = buffersize;
	while ((i = read(0, buffer + total, bsize - total)) > 0) {
		total += i;
		if (total >= buffersize) {
			/* buffer full: one large write keeps the drive streaming */
			write(1, buffer, total);
			total = 0;
			bsize = buffersize;
		}
	}
	if (total)
		write(1, buffer, total);	/* flush the final partial buffer */
	close(1);
	free(buffer);
	exit(0);
}

-------------------------------------------------------------------------------
From syntron!utzoo!mnetor!seismo!sun!shannon Sat Feb 21 04:57:43 1987
From: mnetor!seismo!sun!shannon (Bill Shannon)
Subject: Re: Why do our Sun 3/160 backups take 3+ hours?

Use a very large block size (at least 126) with dump.
-------------------------------------------------------------------------------
From UUCP Mon Feb 23 11:54:38 1987
From: Dave Martindale <omnitor!onfcanim!dave@watcgl>
Subject: Re: Why do our Sun 3/160 backups take 3+ hours?
Newsgroups: comp.unix.wizards
Organization: National Film Board / Office national du film, Montreal

In article <926@orcisi.UUCP> you write:
>We have a basic configuration consisting of single 71MB hard disk and 1/4"
>cartridge tape drive.
>
>Using dump, it currently takes 3+ hours to backup /usr. /usr is a 40MB
>file system that is about 95% full.

What block size are you using on the tape? We have IRISes with a 70 MB disk and 1/4" tape drive, and it takes about half an hour to back up a 40 MB /usr.

We normally do our backups with cpio, which has an option to specify that it write 250 KB blocks on tape. Tar writes 200 KB blocks on cartridge tape by default. This lets the drive write for a reasonable period after it has spent all the time required to reposition the tape and get it up to speed before the block, then stop afterwards. If you use the sort of blocksizes common on half-inch tape (10 KB), the tape drive spends all day starting and stopping without writing much data.

-------------------------------------------------------------------------------
From syntron!utzoo!mnetor!seismo!rochester!srs!matt Tue Feb 24 11:24:34 1987
From: mnetor!seismo!rochester!srs!matt (Matt Goheen)
Subject: Re: Why do our Sun 3/160 backups take 3+ hours?

What blocking factor do you use? If you don't specify one, it will default to 10, which is VERY slow for cartridge tape. The magic number seems to be 126 (somehow derived from internal buffer sizes). So:

	dump 0cfbu /dev/rst0 126

-------------------------------------------------------------------------------
From syntron!mnetor!seismo!sun!cramer Mon Feb 23 16:04:15 1987
From: mnetor!seismo!sun!cramer (Sam Cramer)
Subject: Re: Cartridge tape backups

OK, you have a Wangtek tape drive with an Emulex controller.
This means that you should be able to get the drive to stream. Yes, a large block size should help. The game here is to get enough data into the tape controller that it has a chance of streaming. You can do this either by repeatedly feeding it smaller blocks of data that arrive within the timing window necessary to keep the drive streaming, or by repeatedly feeding it large blocks of data, each of which will stream on its own. Try playing with a large blocksize in tar or dump.