m5@lynx.uucp (Mike McNally) (06/27/89)
Some cartridge tape drives support a wide variety of block sizes; some
don't. Many support only 512-byte blocks. Is it therefore common for
archives (like tar or cpio) that are intended to be read on several
different machines to be created with a 512-byte block size?

To ask the question a different way: we have to massage our SCSI driver
to work with some drives from Archive. The drives use the QIC-24
recording format and are thus compatible in that respect with Suns and
lots of other things. However, we are concerned (in our general
ignorance about these things) because the drives only know about
512-byte blocks. Is that a problem (note that Sun uses these drives in
some systems, a fact that only makes me more confused)? Does QIC-24
encompass block sizes, or is it just a recording format? Should I let
somebody else worry about this?
-- 
Mike McNally                                  Lynx Real-Time Systems
uucp: {voder,athsys}!lynx!m5                  phone: 408 370 2233
          Where equal mind and contest equal, go.
clewis@eci386.uucp (Chris Lewis) (06/29/89)
In article <5755@lynx.UUCP> m5@lynx.UUCP (Mike McNally) writes:
>Some cartridge tape drives support a wide variety of block sizes, some
>don't. Many only support 512-byte blocks. Is it thus common for
>archives (like tar or cpio) that are intended to be read on several
>different machines to be created with a 512-byte block size?

The confusion here is "logical" versus "physical" block size. Tar and
cpio both have block-size-setting options for use on variable-record-
length devices, for the simple reason that writing big *physical*
blocks is a win in both performance and tape capacity. Put another way:
the most you can read from a tape of this type in one request is the
contents of the next physical block, and physical blocks are separated
from each other by relatively large inter-block gaps.

As you note, most 1/4" streamers have *only* 512-byte physical blocks,
but the controllers are usually capable of handling a request for
multiple physical blocks as if they were one contiguous logical record.
The only 1/4" streamer devices I've seen that are actually capable of
writing variable-length records are ones that wouldn't be compatible
with an Archive or Wangtek QIC tape anyhow (e.g. CDC Sentinels).

Think of the streamers as 512-byte disk drives without (reliable)
seeking mechanisms and with *very* slow request-to-request latencies.
Thus: make your logical I/Os as big as you can (taking physical memory
into account) *and* make them integral multiples of 1/2K.

tar and cpio do not change their formats regardless of the buffer size
you give them; they simply use bigger I/O buffers. You can prove this
to yourself by tar'ing with differing buffer sizes to a file and
comparing the results. On a fixed-1/2K-style streamer you can "tar c"
with 1Mb buffers with confidence that almost anything can read it
(modulo machines with byteswap or other weirdnesses) using 512-byte
buffers if nothing else. However, there *are* times when buffer size
might matter.
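[The "prove it to yourself" experiment above can be sketched like this,
using made-up paths under /tmp and whatever tar a modern system ships
with. One caveat: tar pads the tail of the archive out to the record
size, so the raw archive files differ in trailing zeros at different
blocking factors -- compare the *extracted* files instead.]

```shell
# Hypothetical demo: write with big records, read with 512-byte ones.
mkdir -p /tmp/blkdemo/src
echo "hello, tape world" > /tmp/blkdemo/src/hello.txt
cd /tmp/blkdemo

# Write with 20 x 512 = 10K records (the classic "tar -b 20"):
tar -c -b 20 -f big.tar src

# Read it back with 512-byte records (-b 1) -- same format, just
# smaller I/O requests on the reading side:
mkdir -p out
(cd out && tar -x -b 1 -f ../big.tar)

# The extracted file is byte-for-byte identical to the original:
cmp src/hello.txt out/src/hello.txt && echo "contents identical"
```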
Generally speaking, a tape subsystem that supports variable-length
physical blocks must have sufficient buffer memory in the controller to
contain a whole block. Some controllers don't have enough even for
"tar -b 10"...
-- 
Chris Lewis, R.H. Lathwell & Associates: Elegant Communications Inc.
UUCP: {uunet!mnetor, utcsri!utzoo}!lsuc!eci386!clewis
Phone: (416)-595-5425
dold@mitisft.Convergent.COM (Clarence Dold) (06/29/89)
in article <1989Jun28.173209.1457@eci386.uucp>, clewis@eci386.uucp (Chris Lewis) says:
> tar and cpio do not change their formats regardless of the buffer size
> you give them, they simply use bigger I/O buffers. You can prove this
> to yourself by tar'ing with differing buffer sizes to a file and comparing
> the result. On a fixed 1/2K style streamer you can "tar c" with 1Mb buffers
> with confidence that almost anything can read it (modulo machines with
> byteswap or other wierdnesses) using 512 byte buffers if nothing else.

One exception I can think of is EOT on a multi-volume archive. If you
use

	cpio -ocvT512k > /dev/rmt0

and EOT is reached somewhere in the midst of writing a 512K block, the
next reel will start with a repeat of that 512K block. When the archive
is restored with 512K blocks, the same thing happens: the partial 512K
block is discarded, and the next reel is in sequence. But if an attempt
is made to restore it with 512-byte blocks, the partial 512K at EOT
will be successfully read as, say, 45 512-byte blocks, causing the
second reel to be out of sequence.

If a small buffer is used on the outbound side and a large buffer is
used to read it, the opposite will happen, even on single-reel
archives. An archive that is 33*512 bytes long comes out to an uneven
multiple of 512K, and the restore will fail, unable to read the last,
apparently partial, set of blocks.
-- 
---
Clarence A Dold - dold@tsmiti.Convergent.COM            (408) 434-5293
                  ...pyramid!ctnews!tsmiti!dold
                  P.O.Box 6685, San Jose, CA  95150-6685       MS#10-007
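[The single-reel arithmetic above can be sketched with dd standing in
for cpio, since it reports full and partial records separately. File
names are made up; a 33-block archive (33 x 512 = 16896 bytes) is not a
whole multiple of 512K, so a reader using 512K blocks gets one short
read at the end.]

```shell
# Make a 33-block "archive" of 512-byte records:
dd if=/dev/zero of=/tmp/small.ar bs=512 count=33 2>/dev/null

# Read it back with a 512K (524288-byte) block size.  dd reports
# "0+1 records in": zero full 512K records plus one partial one --
# the apparently-partial block a strict reader would choke on.
dd if=/tmp/small.ar ibs=524288 of=/dev/null 2>/tmp/dd.log
grep 'records in' /tmp/dd.log
```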
clewis@eci386.uucp (Chris Lewis) (07/07/89)
In article <757@mitisft.Convergent.COM> dold@mitisft.Convergent.COM (Clarence Dold) writes:
>in article <1989Jun28.173209.1457@eci386.uucp>, clewis@eci386.uucp (Chris Lewis) says:
>> tar and cpio do not change their formats regardless of the buffer size
>> you give them, they simply use bigger I/O buffers.
>One exception I can think of is EOT on a multi-volume archive.
>	cpio -ocvT512k > /dev/rmt0
>and EOT is reached somewhere in the midst of writing a 512K block, the next
>reel will have a repeat of that 512K block.
[And reads with small blocks would get out of sync]

True; I didn't think of that. Mind you, most tar's don't support
multi-volume (and frankly, I simply don't trust cpio multi-volume
except *maybe* on floppies), so the question is moot for tar.

>If a small buffer is used on the outbound side, and a large buffer is used
>to read it, the opposite will happen, even on single reel archives.
>An archive that is 33*512 byte, will come out to an uneven multiple of
>512K, and the restore will fail, unable to read the last, apparently partial
>set of blocks.

Hmm, I just tried this with cpio on ISC 1.0.6 and it worked just fine.
Try:

	cd /etc
	cpio -o > /tmp/foo
	passwd
	inittab
	group
	<ctrl-D>
	cd /tmp
	cpio -iC512000 < /tmp/foo

(It will say that 10000 blocks were read, but it creates the files just
perfectly.)

(-C is an undocumented cpio argument on ISC, and probably on AT&T,
Microport and Bell Tech as well. I believe they (whoever "they" were)
replaced cpio with something called "ncpio", which appears to have been
an internal enhanced version of cpio. This appears to be the only way
to get arbitrarily sized buffers specified to cpio.)

Even if true, on QIC devices you really do need big buffers to get any
sort of reasonable throughput, so you should be able to choose a
reasonable size. Any QIC driver that can't read/write more than 512
bytes at a time should be junked. Any 1/4" streamer that has
variable-length records wouldn't be able to read/write compatible QIC
tapes anyhow.
As a reasonable compromise, use 128K on QIC streamers: large enough not
to take too bad a start-stop hit, not so large that you could run into
severe problems on machines with small amounts of memory or lots of
other users who are trying to get things done ;-). On 9-track, 5K is
usually fine (the tar limit), though there are some machines that can
only handle 3K (some 3b's). Once you get above 5K blocks, all bets are
off as to whether the hardware can handle real physical blocks that
big.

There are a few machines that don't like > 32K or > 64K raw I/O because
of DMA boundaries. 386 UNIX and NCR Towers have it right even though
they have somewhat strange DMA structures; some older PC UNIXes don't.
For those machines, you might have to limit yourselves to 32K.

Even then, you might be able to fake it: if your tar doesn't support
buffers bigger than 5K, you can always pipe the output of tar through
dd:

	tar cvf - .... | dd bs=<whatever> > /dev/....

[I may be mistaken, but doesn't 386 2.3.1 Xenix dd not support
bs > 64K?]
-- 
Chris Lewis, R.H. Lathwell & Associates: Elegant Communications Inc.
UUCP: {uunet!mnetor, utcsri!utzoo}!lsuc!eci386!clewis
Phone: (416)-595-5425
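[The tar-through-dd trick can be sketched end to end with the tape
device replaced by a plain file; paths and names here are made up. dd
gathers tar's output into big records on the way out, and since dd
doesn't pad its final partial output block, reading the file back with
tiny 512-byte blocks recovers the identical byte stream.]

```shell
# Hypothetical round trip: big writes out, small reads back.
mkdir -p /tmp/ddpipe/data
echo "some archive data" > /tmp/ddpipe/data/file.txt
cd /tmp/ddpipe

# Write: tar to stdout, dd reblocks into 128K output records:
tar -cf - data | dd obs=131072 of=blocked.tar 2>/dev/null

# Read: 512-byte input blocks, piped back into tar for extraction:
mkdir -p out
dd if=blocked.tar ibs=512 2>/dev/null | (cd out && tar -xf -)

cmp data/file.txt out/data/file.txt && echo "round trip ok"
```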