[comp.sys.amiga.tech] AmigaDos directory knowledge +

rsingh1@dahlia.waterloo.edu (11/30/89)

There's an interesting BENCHMARK at the end of this article that rates
the speed of the Amiga's hashing file system.  Probably not too accurate
for what it's measuring, but maybe it is.  Check it out.

Reply:


You said that MS-DOS beats the Amiga on its handling of
directories.
 
For SMALL directories, this is ABSOLUTELY true.

->   B | | T   <-
       |_|

There are LOTS of bad points!

For example:  If the part of the disk that holds the directory gets
wiped out, it's game over.  No way to recover.  Sure, there are UTILITIES
that create backups and things, but the problem still remains.
 
ALSO! ->  Have you ever seen MS-DOS in action with a LARGE directory
(a few hundred files)?  Slow ISN'T THE WORD.  The speed is remarkably
pathetic.  The 'simple approach' of searching sequentially is seriously
dumb, and outrageously slow.  On most large IBM BBSs that rely on
the files in the file section as data, things slow down to a crawl.
Look what happens when Opus gets more than 500 or so messages in an
area.  And look how long it takes QuickBBS to do a new-files scan with a
large file section.  The problem was so bad that one sysop split his GIF
section into ALPHABETIC sections.  Even though a new-files scan still
covers the same number of files, the speed has increased by a few orders
of magnitude.
 
Those are the main points.  I have worked on databases with thousands of
files, and it really is a pain to split databases, because the system's
file system becomes a bottleneck.
 
The hashing system is a good deal more complex, but boy is it fast.
Especially for pattern matches and such.  Here are some benchmarks
on a semi-full directory running off of floppy:

Files: 71 with real file names, and 100 with random file names.
The benchmark program was written in ARexx, and looked like this:

/* Timer.Rexx -- Times how long each of five DIR commands takes */
a=1                             /* test counter, bumped by routine a: */
address command                 /* send quoted strings to the AmigaDOS shell */
call time 'R'                   /* reset the elapsed-time clock */
'dir >nil: df1:shit'
call a
call time 'R'
'dir >nil: df1:shit/(S*|*s|*d|*me*|[a-z]??[n])'
call a
call time 'R'
'dir >nil: df1:shit/(S*|*s|*d|*me*|[a-z]??[n])'
call a
call time 'R'
'dir >nil: df1:shit/?????'
call a
call time 'R'
'dir >nil: df1:shit/~(s*)'
call a
exit
a:                              /* report elapsed time since the last reset */
        Say 'Test #'a 'with an elapsed:' time('E')
        a=a+1
return
 
The purpose of this benchmark isn't to show how slow the seek access
times on the floppy are, so I ran it with a cache (most IBM
systems seem to run one too).  The cache being used is called
FACC II, and looks to the computer about the same as the drive itself.

The line: 'dir >nil: df1:shit/(S*|*s|*d|*me*|[a-z]??[n])'
will list everything that begins or ends with s, everything that ends
with d, everything that contains the letters 'me', and finally every
4-letter file that begins with a letter and ends in n.

The line: 'dir >nil: df1:shit/?????'
simply lists every 5 letter file.
 
The line: 'dir >nil: df1:shit/~(s*)'
simply lists every file that doesn't start with s.

All of the output is discarded to the NIL: device.  Also, understand
that all of the output would be sorted.

Benchmark Begin:  (Direct snip from console window)

6> rx timer
Test #1 with an elapsed: 2.72
Test #2 with an elapsed: 2.26
Test #3 with an elapsed: 2.26
Test #4 with an elapsed: 2.02
Test #5 with an elapsed: 2.56

All results are in seconds.  With 171 files, and a terribly fragmented
directory (although that shouldn't matter in these benchmarks), the
speed isn't bad.  And the pattern matching is quite complex.

I hope I've shown that the Amiga's file system is a good deal better
than the MS-DOS file system.  The Amiga's file system excels at finding
a file with a particular name quickly, and it can perform quick pattern
matching.  Also note that this was with the old file system (the 'slow'
file system, as opposed to the newer Fast File System), and that about
0.1 of each of those times is spent just invoking the directory command.
 
-paul sop

kent@swrinde.nde.swri.edu (Kent D. Polk) (12/01/89)

In article <18812@watdragon.waterloo.edu> rsingh1@dahlia.waterloo.edu () writes:
[...]
>You said that MS-DOS beats the Amiga on its handling of
>directories.
> 
>For SMALL directories, this is ABSOLUTELY true.
>
>->   B | | T   <-
>       |_|
>
>There are LOTS of bad points!

What about the 512 filename limit on MSDOS directories? We have some PC
based scanners that take waveform data and stick it into separate files
- one per waveform (1k points). They have to automatically create
subdirectories which store up to 512 files.  As far as I know this limit
is still in effect on PCs.  (Don't ask why each waveform is in a
separate file - I didn't write the stuff.)

The Amiga slows down a bit (still pretty darn fast), but can definitely
handle more than 512 filenames in a directory.

Any limit to the no. of filenames in AmigaDos directories?

====================================================================
Kent Polk - Southwest Research Institute - kent@swrinde.nde.swri.edu
        Motto : "Anything worth doing is worth overdoing"
====================================================================

eric@interlan.UUCP (Eric Anderson) (12/01/89)

[]
	Don't look now - here comes HPFS!  Dare to compare??


	- Eric
	(no affiliation with, or particular love of, MS Corp.)
	(I want my A3000!)

daveh@cbmvax.UUCP (Dave Haynie) (12/02/89)

in article <24370@swrinde.nde.swri.edu>, kent@swrinde.nde.swri.edu (Kent D. Polk) says:

> Any limit to the no. of filenames in AmigaDos directories?

Like just about everything else in the Amiga OS, the number of filenames
in a directory is unlimited.  Or should I say, limited by the size of
your hard disk.  Files are stored as linked lists of entries hanging off
the directory's 72-entry hash table (72 is the magic number for 512-byte
block systems; I imagine if they start supporting 1K blocks that would
go up to 200 entries, etc.).

Static limits are a bad thing, period.  MS-DOS even makes you tell it how
many files you'll allow to be open at once.  Apparently OS/2 suffers from
the same faulty thinking...

> Kent Polk - Southwest Research Institute - kent@swrinde.nde.swri.edu
-- 
Dave Haynie Commodore-Amiga (Systems Engineering) "The Crew That Never Rests"
   {uunet|pyramid|rutgers}!cbmvax!daveh      PLINK: hazy     BIX: hazy
                    Too much of everything is just enough

doug@xdos.UUCP (Doug Merritt) (12/02/89)

In article <8787@cbmvax.UUCP> daveh@cbmvax.UUCP (Dave Haynie) writes:
>Static limits are a bad thing, period.  MS-DOS even makes you tell it how
>many files you'll allow to be open at once.  Apparently OS/2 suffers from
>the same faulty thinking...

And so does Unix, unfortunately.  It's a configurable limit, which means
that wizards can raise it, but normal Unix users can't do anything about
it.

There's a saying: "give people either zero, one, or infinity of any
resource...any other number will lead to frustration".
	Doug
-- 
Doug Merritt		{pyramid,apple}!xdos!doug
Member, Crusaders for a Better Tomorrow		Professional Wildeyed Visionary