[comp.sys.atari.8bit] 1050 disk format

Makey@LOGICON.ARPA (Jeff Makey) (11/03/87)

I am writing an exerciser for my (unmodified) Atari 1050 disk drive,
and I would like to know what the "worst case" pattern is that I should
write to the disk to test for bad media.

For the uninitiated, the worst case pattern is like alternating ones
and zeroes, except that the particular method of encoding the ones and
zeroes on the disk may mean that a different pattern is actually the
"worst" case.  Thus, even if you don't know which pattern is worst you
could help me if you could tell me how ones and zeroes are physically
stored on the disk.  Thanks in advance.
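
In outline, such an exerciser reduces to a fill/write/read/compare loop.
A minimal sketch in Python, for illustration only; write_sector and
read_sector are hypothetical placeholders for whatever routine actually
talks to the drive:

    SECTOR_SIZE = 128                       # single-density sector size

    def make_fill(pattern, size=SECTOR_SIZE):
        """Repeat a byte pattern until it fills one sector."""
        reps = size // len(pattern) + 1
        return bytes(pattern * reps)[:size]

    def exercise(write_sector, read_sector, n_sectors, pattern):
        """Fill every sector with the pattern, read back, report failures."""
        fill = make_fill(pattern)
        bad = []
        for sec in range(1, n_sectors + 1):   # Atari sectors number from 1
            write_sector(sec, fill)
            if read_sector(sec) != fill:
                bad.append(sec)
        return bad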

                       :: Jeff Makey
                          Makey@LOGICON.ARPA

conklin@eecae.UUCP (Terry Conklin) (11/04/87)

Since the format itself is untouchable on a stock 1050, your
only option would be to use worst case data.

The TRS-80 always used the hex byte E5 to fill all the unused
sectors on a track. This was the worst case data you could
get, and helped verify flaky disks. Since the TRS, Atari,
IBM and ST all use (cough) the same disk format (well, I use
the TRS to copy disks for all of the above!) it would seem
a safe assumption that this would prove close to the worst
case, if not exactly it.
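
For what it is worth, the bit-level view of that fill byte is easy to
check (Python, illustration only):

    print(format(0xE5, '08b'))     # -> 11100101
    # A run of ones, a run of zeros, and isolated bits, so a sector
    # full of E5 mixes the possible flux transition spacings.
    fill = bytes([0xE5]) * 128     # one single-density sector's worth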

Terry Conklin
ihnp4!msudoc!cs

saulnier@cg-atla.UUCP (Jim Saulnier X7097) (11/08/87)

	Here in diagnostic land, we have always used a worst case
data pattern of 6D, B6, DB when testing floppy subsystems.  I'm not 
sure why this pattern came into being, but I do know that we have been 
using it as a standard for many years, and this company is usually 
more conscious of its diagnostics than most other companies. 
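
One plausible reason the sequence looks arbitrary but is not: at the
bit level the three bytes are a single repeating three-bit group, as a
quick check shows (Python, illustration only):

    print(''.join(format(b, '08b') for b in (0x6D, 0xB6, 0xDB)))
    # -> 011011011011011011011011
    # i.e. the group 011 repeated; the three bytes are just the three
    # possible byte alignments of one continuous bit stream.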


--
Jim Saulnier

...!{decvax,ima,ism780c,ulowell,cgeuro,cg-f}!cg-atla!saulnier
"Wow, it never did THAT before."

hans@umd5.umd.edu (Hans Breitenlohner) (11/11/87)

The 1050 disk drive (like all Atari drives) uses the industry-standard (or is it
IBM-standard?) floppy disk format (except that the index hole is not used).
In single density, FM encoding is used; in double (or enhanced) density,
MFM encoding is used.

FM encoding records a clock transition at the boundary of every bit cell, and
an additional data transition in the middle of each bit cell containing a one.
MFM encoding records a transition in the middle of bit cells containing
a one bit, and additional transitions on the boundary between bit cells which
both contain zero bits.
Thus transitions are one-half or one bit cell apart in FM mode; and one,
one-and-a-half, or two bit cells apart in MFM mode.  Note: Since the bit 
density is twice as high in MFM mode, the actual minimum and maximum spacing
between transitions is the same in both modes.
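
Those rules are simple enough to model directly.  Here is a sketch
(Python, my own illustration, nothing Atari-specific), with positions
measured in half bit cells so everything stays an integer:

    def fm_transitions(bits):
        """FM: clock transition at every cell boundary, plus a data
        transition mid-cell for every one bit."""
        out = []
        for i, b in enumerate(bits):
            out.append(2 * i)              # clock
            if b:
                out.append(2 * i + 1)      # data
        return out

    def mfm_transitions(bits):
        """MFM: data transition mid-cell for a one bit; clock transition
        on the boundary between two adjacent zero cells."""
        out = []
        for i, b in enumerate(bits):
            if i > 0 and bits[i - 1] == 0 and b == 0:
                out.append(2 * i)          # clock between two zeros
            if b:
                out.append(2 * i + 1)      # data
        return out

    def spacings(positions):
        """Distances between successive transitions, in bit cells."""
        return [(q - p) / 2 for p, q in zip(positions, positions[1:])]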

In FM mode, the lowest density of recorded transitions occurs for sequences
of all zero bits (clock transitions only), and the highest density for
sequences of all one bits.
In MFM mode, the lowest density occurs for alternating 0 and 1 bits, and
the highest density for sequences of all zero bits or all one bits.
The intermediate transition density in MFM mode occurs for sequences of
the form 001001001 etc.
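
Running the sketch above over these cases bears this out for MFM:

    for name, bits in [('alternating', [0, 1] * 8),
                       ('all zeros',   [0] * 16),
                       ('all ones',    [1] * 16),
                       ('001 repeat',  [0, 0, 1] * 5)]:
        print(name, set(spacings(mfm_transitions(bits))))
    # alternating {2.0}   widest spacing, lowest density
    # all zeros   {1.0}   narrowest spacing, highest density
    # all ones    {1.0}
    # 001 repeat  {1.5}   the intermediate MFM spacing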

A good test pattern probably should contain some of each
recording density.  IBM uses data bytes of E5 to fill empty sectors
in FM mode, which qualifies on that count.  Similarly, the pattern 6D B6 DB
suggested above alternates between high- and low-density transitions in
MFM mode.  I would think, however, that a good MFM test pattern should also
contain a good supply of medium-density signals, to verify that the three
spacings can be distinguished reliably.
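
The same sketch makes the point concrete (byte_bits unpacks bytes MSB
first; the inversion described below is ignored here):

    def byte_bits(data):
        """Unpack bytes into a list of bits, most significant bit first."""
        return [(b >> (7 - k)) & 1 for b in data for k in range(8)]

    print(set(spacings(fm_transitions(byte_bits([0xE5] * 4)))))
    # -> {0.5, 1.0}: E5 exercises both FM spacings

    print(set(spacings(mfm_transitions(byte_bits([0x6D, 0xB6, 0xDB] * 4)))))
    # -> {1.0, 2.0}: only the two extremes; 1.5 cells never occurs

    print(set(spacings(mfm_transitions(byte_bits([0x24, 0x92, 0x49] * 4)))))
    # -> {1.5}: 24 92 49 is the bit stream 001001..., all medium density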

An additional complication you need to know about: all data bytes are
inverted before they are written to the disk, and inverted again after reading.
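
Taken at face value, that inversion matters for pattern choice: the flux
on the disk corresponds to the complemented bytes.  A final sketch
(Python, illustration only, building on the snippets above):

    on_disk = bytes(b ^ 0xFF for b in [0x6D, 0xB6, 0xDB])
    print(on_disk.hex())    # -> 924924
    print(set(spacings(mfm_transitions(byte_bits(on_disk * 4)))))
    # -> {1.5}: writing 6D B6 DB actually records the all-medium-density
    # 001001... stream; to put the high/low alternation on the disk
    # itself, you would hand the drive its complement, 92 49 24.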