saare@ibmpa.UUCP (John Saare) (01/24/89)
Mucho thanko to: David King, Gary Heffelfinger, Randell Jessup (Commodore), and Perry Kivolowitz (ASDG) for their responses to my questions regarding FFS behaviour with flaky media.

Since the previous posting, I've tried the following:

- Mea Culpa (sorta). The original reason I said I didn't have a defect list was because I thought I didn't have one with the appropriate info. Wrong. The Adaptec 4000a requires, in addition to Cyl/Head, a byte offset. I thought I only had the first two; in fact I had it all. I did a low-level format on the CDC 70Mb with a defect list and then proceeded with the usual installation. In previous experiments I've tried various interleave factors; this time around I'm using 3.

- Created a mountlist entry with Interleave = 0, Mask = 0x7ffff, and MaxTransfer of varying sizes (low = 17 (when I thought it meant blocks), high = 512). Seems to have very little effect. When I get home tonight, I'll try MaxTransfer = 8704 and Buffers = 17 (one track's worth...).

I no longer get "hard" errors from the drive :). Instead :~(, the problems still persist, BUT are MUCH easier to characterize. Say I have a 130+K file on a floppy (a backup, Randell ;) ) and I try copying directly from df0: to dh0:. I will get PRECISELY two r/w errors on dh0: during the xfer, no matter how many copies of the file I try to make, or where I try to write them. IF, however, I copy the file first to RAM: and then to dh0:, everything works fine..., repeatedly.

I'm trying to get the latest ROM level for my Adaptec just for the heck of it. I was about to write a program that wrote large blocks of data, but the problem seems to be more related to timing than anything else. Any eye-deers? I'm on the phone with Pac Periph on pretty much a daily basis; they are very sincere and try to be helpful. I thought I had the most recent level of code, but recent discussions with PP and recent events make me believe otherwise. I'll try and get over there later this week. It's no fun being a pain in the a__.
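For anyone else fiddling with the same knobs, here's roughly the shape of the MountList entry I'm describing. The geometry values (Device name, Surfaces, cylinder range, etc.) are placeholders for my setup, not gospel; the lines under test are Mask, MaxTransfer, and Buffers. Note that 8704 = 17 sectors * 512 bytes, i.e. one track per transfer:

    DH0:    Device = hddisk.device    /* placeholder driver name */
            Unit = 1
            Flags = 0
            Surfaces = 8              /* placeholder geometry */
            BlocksPerTrack = 17
            Reserved = 2
            Interleave = 0
            LowCyl = 2 ; HighCyl = 980
            Buffers = 17              /* one track's worth of cache */
            BufMemType = 0
            Mask = 0x7ffff            /* addresses the controller can DMA to */
            MaxTransfer = 8704        /* 17 * 512 = one track per request */
            GlobVec = -1
    #

Mount it with "Mount DH0:" after editing DEVS:MountList; shrinking MaxTransfer is the usual way to work around controllers that choke on large single transfers.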
RE: Randell's comments: I understand and agree that the file-system expects the appearance of perfect media. I understand and agree the device driver SHOULD provide some sort of bad-block forwarding mechanism. It's VERY regrettable that most don't. I agree in principle with the way FFS works and how disk-validator interacts with it.

I do not agree with the way DiskDoctor works, however. It is the LAST line of defense. It's where the FFS architecture compromises and comes to terms with the REAL world, the world of occasionally imperfect media. If at all possible, I'd like to suggest one more compromise: could DiskDoctor optionally prompt the user to simply "lose" bad blocks when they are found? Is this technically feasible? Partitioning/re-formatting/restore-from-backup/etc. seem to be unpleasant stop-gap measures for a file-system that is meant to manage data-sets sized in the trans-giga-byte range. Please, please, please develop a solution that does not require re-formatting. Tell ya what..., I'll go for partitioning if you'll add symbolic links in 1.4 :).

Again, thank you for all the responses to my postings...

YHOS -- John Saare (uunet!ibmsupt!saare).