[comp.sys.atari.st] SH204 hard disk

jrb@dadla.TEK.COM (Jim Binkley) (01/22/88)

Moshe,

would you please inform me as to what SUPEDIT is?  I was born under a
rock.

In reference to Atari hard disks in general, is there an equivalent of
"chkdsk", i.e. a file system consistency checker and fixer-upper,
available anywhere for the average atari owner for rent or purchase?  Mr.
Harris, it is unfathomable that no such utility is provided with your
system.

Also... I have an atari SH20? hard disk and am using the driver
supplied. Unfortunately for the sake of atari it sits next to a zenith
150-pc clone running good old msdos 2.2. Said ibm machine has a NEC
20meg drive and a 1010 winchester controller using the driver buried
away in the bios or somewhere. "To make a long story short, too late,
said the man in the audience...", My klunky old pc beats the atari's
performance on disk writes to pieces. This is curious.  The pc disk is
currently fragmented beyond belief and about 90% full; i.e., I won't
buy any "reformat and try again" explanations. Reading seems to be
comparable. I recently put together a little piece of code that was the
equivalent of the unix find utility and read all the directories on my
15 meg C: partition. That was fairly speedy. Writing is another matter.
Why?  Flicking the bit that turns off write-verify for "floppies"
doesn't seem to do any good on the hard disk. Did the supra driver that
Moshe used do any good on the atari disk?

One other observation:

As a rough benchmark: I have an interpreter that I run as a standard
time test on about an 8k "src" file. This program exists on both pc and
st. It should be compute bound, as it really doesn't spend too much time
reading and writing to disk, unlike, say, your average 4-pass C compiler:

on ibm-pc compiled with usoft v5.0 C

.30 seconds to compile (read compute write)

on pc with all file i/o via ram disk

.26 seconds to compile (read compute write)

on atari using SH20? drive and MWC v2.0 C compiler

.30 seconds to compile

on atari using ram disk

.15 seconds to compile

At least the 68000 computes faster...:->

			jim binkley jrb@amadeus.tek.com

dclemans@mntgfx.mentor.com (Dave Clemans) (01/23/88)

Supedit from Supra is a maintenance/diagnostic utility that lets you
examine/patch partition tables, boot sectors, etc. of ST hard disks.

In general, speed problems on ST disks (either floppies or hard disks)
are caused almost solely by the fact that the current GEMDOS uses an
INCREDIBLY! slow routine to search the file allocation table.  (In fact,
from the testing I've done, almost the only way Digital Research could
have written a slower search routine would have been to put in wait
loops.)  The FAT is hit more heavily on writes, which is why writes
are slower than reads.
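To make the cost concrete, here is a toy C model of the two free-cluster search strategies (this is a sketch under assumptions, not the actual GEMDOS source): a naive search that rescans the FAT from the first data cluster on every allocation, and a rotating search that resumes from a remembered position. On a nearly full disk the naive version rescans a long run of in-use clusters on every single write.

```c
#include <stdint.h>

#define FAT_FREE 0x0000

/* Toy model of a 16-bit FAT: fat[i] == FAT_FREE means cluster i is free.
   Not GEMDOS code -- just a sketch of the two search strategies. */

/* Naive search: rescan from the first data cluster on every allocation.
   Cost per allocation grows with how full the disk is. */
long find_free_naive(const uint16_t *fat, long nclusters)
{
    for (long i = 2; i < nclusters; i++)   /* clusters 0 and 1 are reserved */
        if (fat[i] == FAT_FREE)
            return i;
    return -1;                             /* disk full */
}

/* Rotating search: resume where the last allocation left off, so a long
   run of used clusters at the front is only scanned once. */
long find_free_rotating(const uint16_t *fat, long nclusters, long *hint)
{
    long start = (*hint >= 2 && *hint < nclusters) ? *hint : 2;
    long i = start;
    do {
        if (fat[i] == FAT_FREE) {
            *hint = i;                     /* remember for next time */
            return i;
        }
        if (++i >= nclusters)
            i = 2;                         /* wrap around to the start */
    } while (i != start);
    return -1;                             /* disk full */
}
```

The rotating variant is the standard fix for this class of allocator, and it needs only one extra word of state per drive.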

Hopefully the upcoming GEMDOS rewrite from Atari (when/if it hits
the streets) should address the disk speed problem.  (and it had better
also fix the GEMDOS memory management problems.  I'm really tired
of the "40 folder" and the "20 Malloc" bugs...)

As another example I have a directory cloner that sits on top of a
complete re-implementation of the GEMDOS filesystem (the only part
of the ST roms I use are the getbpb and rwabs calls).  While I don't
remember exact timings, it runs a number of times faster than any
copy routine built on top of GEMDOS and it doesn't slow down dramatically
when writing to very full large disk partitions.

As to "chkdsk" functionality; both disk reorganizer packages I know
about include a "chkdsk".

dgc

pete@gpu.utcs.toronto.edu (Peter Santangeli) (01/24/88)

In article <2697@dadla.TEK.COM> jrb@dadla.TEK.COM (Jim Binkley) writes:
>
>
>Also... I have an atari SH20? hard disk and am using the driver
>supplied. Unfortunately for the sake of atari it sits next to a zenith
>150-pc clone running good old msdos 2.2. Said ibm machine has a NEC
>20meg drive and a 1010 winchester controller using the driver buried
>away in the bios or somewhere. "To make a long story short, too late,
>said the man in the audience...", My klunky old pc beats the atari's
>performance on disk writes to pieces. This is curious.  The pc disk is
>currently fragmented beyond belief with about %90 full; i.e., I won't
>buy any reformat and try again explanations. Reading seems to be
>comparable. I recently put together a little piece of code that was the
>equivalent of the unix find utility and read all the directories on my
>15 meg C: partition. That was fairly speedy. Writing is another matter.
>Why?  Flicking the bit that turns off write-verify for "floppies"
>doesn't seem to do any good on the hard disk. Did the supra driver that
>Moshe used do any good on the atari disk?
>

	I believe what Jim has come up against here is the "soon to be as
famous as the 40 folder limit" bug in the file creation system.
	It seems that the programmer who wrote the GEMDOS routine to find
the first free allocation block on a device was as clueless as the guy who
designed the directory system.
	This brings up an interesting idea. Gemdos is certainly a usable
system, but...
	We have a limited number of folders (static).
	My hard drive creates new files slower than my floppy.
	The OS doesn't do any sector level buffering.
These features remind me painfully of my experiences with TRSDOS on a
1977 trs-80!
	Gemdos calls are based on traps. These are notable in that they are
VERY easy to intercept. How 'bout patching the OS to take care of a few of
these bugs? eh? Apple does it with the Macintosh, so there is no reason it
can't be done on the ST. I realise that the desktop, being merely an application,
is harder to patch, but GEMDOS is quite an easy patching target.
	So, how 'bout it Atari? How 'bout releasing patches as they are written
for insertion into our AUTO folders, instead of making us wait for new ROMs?
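The mechanics Pete describes can be modeled in portable C (the real thing saves and replaces the trap #1 exception vector in low memory; the function-pointer table below is a hypothetical stand-in for that vector, and `buggy_handler` stands in for a ROM routine):

```c
typedef long (*syscall_fn)(long arg);

#define N_CALLS 8
static syscall_fn dispatch[N_CALLS];    /* stand-in for the trap vector(s) */

static long buggy_handler(long arg)     /* stands in for a buggy ROM routine */
{
    return arg * 2 + 1;                 /* pretend "bug": result is off by one */
}

static syscall_fn old_handler;          /* saved so the patch can chain */

static long patched_handler(long arg)
{
    long r = old_handler(arg);          /* call through to the original code */
    return r - 1;                       /* correct the result on the way out */
}

/* What a resident AUTO-folder program would do at boot: save the old
   vector, then install its own handler in front of it. */
void install_patch(int callno)
{
    old_handler = dispatch[callno];
    dispatch[callno] = patched_handler;
}

long do_trap(int callno, long arg)      /* what the trap instruction does */
{
    return dispatch[callno](arg);
}
```

Because every GEMDOS call funnels through one vector, a single resident handler installed this way can screen for just the broken call numbers and pass everything else straight through.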

	Pete Santangeli

rwa@auvax.UUCP (Ross Alexander) (01/25/88)

Jim's article <2697@dadla.TEK.COM> is absolutely correct.  Why *is* there
no CHKDSK.TTP or whatever?  Lord knows there are enough known bugs in the
filesystem that a correction-and-defragmenting utility ought to be standard
issue with any SH product.  I have worked around this by doing entire dumps
(via TURTLE) of the SH204 to many, many floppies and then reformatting,
building new partitions, testing, and restoring.  This is tedious at best
and error-prone & dangerous at worst.  I do it Sunday afternoons (like
today).

Jim is also right about relative filesystem performance.  Any
ibm-clone XT-class machine in the building can beat my ST/SH in
general filecopy performance.  Good grief, 1024 Kbytes of RAM to play
with, and they don't even cache the FATs, much less the directories, or
attempt read-ahead or any of the other tricks that (even with the
brain-damaged toy MSDOS filesystem) could easily be done to improve
performance.  The hardware is certainly quick enough.  As an example,
when I run my machine under the Magic Sac mac-emulation system, I
notice a considerable improvement; the hard disk performance
approaches TOS ramdisk speeds.
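Even a tiny sector cache would help, since FAT and directory sectors get re-read constantly. Here is a minimal direct-mapped cache sketch in C, under assumptions: `read_physical()` is a hypothetical stand-in for an rwabs-style low-level read, and the in-memory `disk` array stands in for the device.

```c
#include <string.h>
#include <stdint.h>

#define SECTOR_SIZE 512
#define CACHE_SLOTS 32                  /* 16K of RAM buys a lot of hits */

struct slot {
    long sector;                        /* which sector is cached, -1 if none */
    uint8_t data[SECTOR_SIZE];
};

static struct slot cache[CACHE_SLOTS];
static long physical_reads;             /* counts actual device accesses */

static uint8_t disk[64][SECTOR_SIZE];   /* toy "device" for the sketch */

/* Hypothetical low-level read, standing in for an rwabs-style call. */
static void read_physical(long sector, uint8_t *buf)
{
    physical_reads++;
    memcpy(buf, disk[sector], SECTOR_SIZE);
}

void cache_init(void)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        cache[i].sector = -1;
}

void cached_read(long sector, uint8_t *buf)
{
    struct slot *s = &cache[sector % CACHE_SLOTS];
    if (s->sector != sector) {          /* miss: fetch from the device */
        read_physical(sector, s->data);
        s->sector = sector;
    }
    memcpy(buf, s->data, SECTOR_SIZE);  /* hit, or freshly filled slot */
}
```

A real implementation would also need write-through (or careful write-back) and invalidation on media change, but the point stands: repeated FAT lookups become RAM-speed after the first read.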

My interim solution has been to logically separate my files into two
groups: files which are essentially static and read-only (executables
and resource files, libraries, standard #include stuff, documentation
and the like) and files which are mutable and/or ephemeral (source,
mail, databases, pictures, any work in progress) and assign them to
different partitions.  

The mostly-read-only partition is loaded in the order of expected usage
frequency (i.e., \bin first, then \include and \lib, then \doc and so on)
and can be allowed to become moderately full (say 75%).  The read-write
partition holds \src and whatever else you have, and shouldn't get more
than 30% to 50% full.  In either case, small partitions are a speed win.
I always go with 4 x 5 megs for simplicity.

The final thing is to assign a RAM partition for really ephemeral stuff,
like compiler temporaries, .obj's, and suchlike objects.

Even with all this planning, the XT's still beat my ST.  Are you listening,
Neil?  Your people are crying out for help ;-).

BTW, here's my benchmark: compile microemacs 3.9e from scratch.  Times:

	17 mins 32 secs with c:\{bin,lib,include}, d:\src, m:\tmp
	13 mins 25 secs with everything on m:\ (ramdisk).

( MWC 2.1.7, stock 1040STf or ST-4, stock SH204, in mono mode,
no acc's; in both cases, the command was 'make clean; time make' )

Who's got PC/XT times for this?  It's a good test from my
perspective; lots of compute, lots of reading, a fair bit of writing.
And it's the kind of job load that I often offer this machine.

> At least the 68000 computes faster...:->

But this is getting to be cold comfort :-< !

--
Ross Alexander, Sr. Systems Programmer & Bottlewasher @ Athabasca University
alberta!auvax!rwa

sreeb@pnet01.cts.com (Ed Beers) (01/25/88)

The atari is not the only machine which inherits this disk scheme from the ibm
pc.  I recently read in InfoWorld that OS/2, in keeping with upward
compatibility with the pc, uses it too.  I think I recall the article said
that OS/2 is 8 times slower than Xenix for file-intensive applications because
of this.

UUCP: {cbosgd hplabs!hp-sdd sdcsvax nosc}!crash!pnet01!sreeb
ARPA: crash!pnet01!sreeb@nosc.mil
INET: sreeb@pnet01.cts.com

dag@chinet.UUCP (Daniel A. Glasser) (01/25/88)

In article <2697@dadla.TEK.COM> jrb@dadla.TEK.COM (Jim Binkley) writes:
[query about SUPEDIT removed]
>In reference to Atari hard disks in general, is there an equivalent of
>"chkdsk"; i.e. a file system consistency checker, fixer-upper anywhere,
>available for the average atari owner for rent or purchase?  Mr.
>Harris, not having such a utility provided with your system is
>unfathomable.
MichTron sells a program called "TuneUp!", which I have purchased, which
allows reordering and consistency checking of hard-disk structures.  It is
not perfect: it is not robust when the hard drive is almost full and you
have large files (so back up your files before doing a reorder on a nearly
full drive), and it does not supply a method for fixing the problems that
the checkdisk feature reports.  But it is the first program I've come
across that offers these features.  It costs < $50.
>
>Also... I have an atari SH20? hard disk and am using the driver
>supplied. Unfortunately for the sake of atari it sits next to a zenith
>150-pc clone running good old msdos 2.2. Said ibm machine has a NEC
>20meg drive and a 1010 winchester controller using the driver buried
>away in the bios or somewhere. "To make a long story short, too late,
>said the man in the audience...", My klunky old pc beats the atari's
>performance on disk writes to pieces. This is curious.  The pc disk is
>currently fragmented beyond belief with about %90 full; i.e., I won't
>buy any reformat and try again explanations. Reading seems to be
>comparable. I recently put together a little piece of code that was the
>equivalent of the unix find utility and read all the directories on my
>15 meg C: partition. That was fairly speedy. Writing is another matter.
>Why?  Flicking the bit that turns off write-verify for "floppies"
>doesn't seem to do any good on the hard disk. Did the supra driver that
>Moshe used do any good on the atari disk?
>
From conversations I've had with people "in the know" at Atari, the problem
here is with the TOS code for handling directory and FAT caching and lookups.
Though a rewrite has been underway in-house, the ROMs for the Blitter (the
ones in the current Megas) do not contain any of the re-written code because
it had not been tested in the timeframe required.  Look to Atari to eventually
release a version of TOS that will outperform the old versions by quite a bit.
The most expensive (time-wise) operation on the ST is opening a new file.  On
a hard disk with a lot of activity, it has sometimes taken over six seconds,
even with over 50% of the disk free.  This is the reason that Mark Williams
recommends using a RAMdisk
for the temporary files (TMPDIR) and supplies a RAMdisk with their package.

>One other observation:
[Observation about relative speeds of the PC vs. the Atari, each with and
without RAMdisk, deleted.]

						Daniel A. Glasser
-- 
					Daniel A. Glasser
					...!ihnp4!chinet!dag
					...!ihnp4!mwc!dag
					...!ihnp4!mwc!gorgon!dag
	One of those things that goes "BUMP!!! (ouch!)" in the night.

K538915@CZHRZU1A.BITNET (01/31/88)

rwa@auvax.UUCP (Ross Alexander) writes:
>Jim's article <2697@dadla.TEK.COM> is absolutely correct.  Why *is* there
>no CHKDSK.TTP or whatever?  Lord knows there are enough known bugs in the
.......
and a lot more things which we probably all agree are correct..
.......
>Even with all this planning, the XT's still beat my ST.  Are you listening,
>Neil?  Your people are crying out for help ;-).
As people who have been following the net for the last two years will have
noticed, the only positive action that Atari has ever taken about GEMDOS
bugs was the famous 'We're working on it!' from Neil Harris, concerning
the 40-folder problem, almost exactly two years ago. Since then Atari has
fixed problems in the BIOS and XBIOS, and even some bugs in GEM, but
none of the minor or major GEMDOS bugs have been fixed. Why?
Well, there were two possible ways Atari could have improved GEMDOS
since they started noticing the bugs (which, as far as I know, happened
when they started shipping hard disks to developers, in late 85):

      1) Fix the bugs in-house, which would have been pretty easy
         for most of them (in fact, for most problems they wouldn't have
         had to do it themselves: more than one person in Germany
         has actually rebuilt the C source of GEMDOS by disassembling
         the ROMs (one of them has even published a book with
         the listing) and has repeatedly pointed out to Atari how to fix
         the bugs (most seem to be the results of typos)). Only
         solving the '40-folder' problem would have needed some rewriting
         of the OS, since this is a design problem of the internal
         memory management of GEMDOS.

      2) Get the fixed GEMDOS from DRI. That a fixed version of GEMDOS
         exists inside DRI has been rumoured since the beginning of last
         year; one report even quoted somebody from DRI as saying that they
         had given it to Atari............

So what's Atari's motivation not to do anything about GEMDOS (except
spreading rumours that they are working on a completely new OS, and even
if they are, it must have very low priority, with just one person working on it)?
Well, it costs money, and if you have such a good scapegoat as DRI,
why bother? Plus, all the GEMDOS problems have not hurt sales in Europe
(especially in the German-speaking part). This is partially due to Atari
suppressing the information (the 40-folder bug was left out of the
German SH204 manual and only became widespread knowledge around Summer 87),
and partially due to the computer press not being critical enough about what
Atari Germany told them (remember the sentence 'The development of GEMDOS
is finished!' by Mr. Stumpf).

Now the only reason I bothered to write this is the astounding insolence
that the same company that doesn't even bother to support the OS on its
major line of computers is actually intending to sell 4 (FOUR!)
different lines of computers, plus 4 different operating systems, plus
4 different window/graphics packages, in 1988! Atari must really think
that we are stupid!


                               Simon Poole
                               Bitnet: K538915@CZHRZU1A
                               UUCP: ...mcvax!cernvax!forty2!poole