[comp.periphs.scsi] Risc System/6000

emv@math.lsa.umich.edu (Edward Vielmetti) (02/21/90)

[ disk options on the RS/6000

aha, it's an esdi drive that's built in.  I wondered.

23ms, 1.3MB/sec transfer is wimpy for a fast machine.
In this configuration the machine is going to be seriously
i/o bound, without a doubt.

12.5ms, 2MB/sec transfer on the 320MB disk is better,
but that's not top of the line these days -- not for
SCSI (sync will go faster) and certainly not for disk
in general.  I don't see a real fast disk for these things.

Can we get a real word on the scsi adapter -- i.e.

- is it a part that's being sold now for the microchannel,
  or a new thing ?
- is it SCSI-1 or SCSI-2, does it support synchronous operation,
  etc.
- are there any problems with dropping in a microchannel SCSI
  adapter except perhaps that of getting device driver support?

The SCSI adapter I saw had an 80C186 and a big IBM chip (must
be some ASIC thing) on it, the copyright on the 80C186 was
1979 -- hardly state of the art in chips !

--Ed

pcg@aber-cs.UUCP (Piercarlo Grandi) (02/22/90)

In article <EMV.90Feb20220637@duby.math.lsa.umich.edu> emv@math.lsa.umich.edu (Edward Vielmetti) writes:
  
  [ disk options on the RS/6000
  
  aha, it's an esdi drive that's built in.  I wondered.
  
  23ms, 1.3MB/sec transfer is wimpy for a fast machine.
  In this configuration the machine is going to be seriously
  i/o bound, without a doubt.

Pah. The bottleneck is the filesystem, unless you do async I/O via a raw
device. You cannot get more than 600KB per second out of the filesystem in
the best of circumstances, and even that is only achieved, as far as I know,
by the MIPS UNIX. Others top out at around 300KB per second.
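Ceilings like these are easy enough to sanity-check yourself: read a big file
sequentially through the filesystem and count what comes back. A minimal
sketch (the 64 KB request size and the function name are my own choices, not
anything from the poster's measurements):

```c
/* Sketch: sequential read through the filesystem in 64 KB requests.
   Returns total bytes read, or -1 on error.  Time the call with
   gettimeofday() and divide for a KB/sec figure. */
#include <fcntl.h>
#include <unistd.h>

long seq_read_bytes(const char *path)
{
    static char buf[64 * 1024];   /* one large request at a time */
    long total = 0;
    ssize_t n;
    int fd = open(path, O_RDONLY);

    if (fd < 0)
        return -1;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        total += n;
    close(fd);
    return total;
}
```

Dividing the returned byte count by wall-clock seconds gives a number directly
comparable to the KB/sec figures quoted in this thread.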

Better seek times improve things a bit. Multiple drives, with overlapped
seek and transfer, improve things much more for a timeshared system.  It is
here, and not in higher transfer rates (or even seek times) that SCSI wins
over ESDI. But the advantage is nonexistent if you have only one drive.

If your only worry is single task fast transfer rate (signal/image
processing), be prepared to implement something like the Amoeba or Dartmouth
or Cray file systems.

The problem is software, not hardware.
-- 
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

markb@denali.sgi.com (Mark Bradley) (02/23/90)

In article <1660@aber-cs.UUCP>, pcg@aber-cs.UUCP (Piercarlo Grandi) writes:
> In article <EMV.90Feb20220637@duby.math.lsa.umich.edu> emv@math.lsa.umich.edu (Edward Vielmetti) writes:
>   
>   [ disk options on the RS/6000
>   
>   23ms, 1.3MB/sec transfer is wimpy for a fast machine.
>   In this configuration the machine is going to be seriously
>   i/o bound, without a doubt.
> 
> Pah. The bottleneck is the filesystem, unless you do async I/O via a raw
> device. You cannot get more than 600KB per second out of the filesystem in
> the best of circumstances, and even that is only achieved, as far as I know,
> by the MIPS UNIX. Others top out at around 300KB per second.
> 
> Better seek times improve things a bit. Multiple drives, with overlapped
> seek and transfer, improve things much more for a timeshared system.  It is
> here, and not in higher transfer rates (or even seek times) that SCSI wins
> over ESDI. But the advantage is nonexistent if you have only one drive.
> 
> If your only worry is single task fast transfer rate (signal/image
> processing), be prepared to implement something like the Amoeba or Dartmouth
> or Cray file systems.
> 
> The problem is software, not hardware.

Pah, indeed.  I am measuring >6 MB/sec. through our filesystem today, albeit
not with SCSI.  Our SCSI (synchronous) is only a bit over 2 MB/sec. on a
single drive.  Striping and other wonderful *software* things do much more.
---Through the filesystem, mind you.---

And ESDI is much, much faster in certain applications in that one can better
sort, queue and optimize the performance that is limited by the speed of the
drives' mechanisms.  It must be agreed, however, that if the software does
not permit full utilization of the raw speed of the hardware, then the speed
of that hardware does very little for one.

						markb



--
Mark Bradley				"Faster, faster, until the thrill of
I/O Subsystems				 speed overcomes the fear of death."
Silicon Graphics Computer Systems
Mountain View, CA 94039-7311		     ---Hunter S. Thompson

********************************************************************************
* Disclaimer:  Anything I say is my opinion.  If someone else wants to use it, *
*             it will cost...						       *
********************************************************************************

pcg@rupert.cs.aber.ac.uk (Piercarlo Grandi) (03/03/90)

In article <51507@sgi.sgi.com> markb@denali.sgi.com (Mark Bradley) writes:

   In article <1660@aber-cs.UUCP>, pcg@aber-cs.UUCP (Piercarlo Grandi) writes:
   > Pah. The bottleneck is the filesystem, unless you do async I/O via a raw
   > device. You cannot get more than 600KB per second out of the filesystem in
   > the best of circumstances, and even that is only achieved, as far as I
   > know, by the MIPS UNIX. Others top out at around 300KB per second.
	[ ... ]

Andrew Koenig in another message reports that some 88K Tek machine also
gets up to 600KB/sec (cheers!) and some other machines get to
450KB/sec. Actually, even some SUNs get to that mark. Your average
workstation will only do 150-200KB/sec (which is horrid, considering
that I get that much out of a System V filesystem structure, when clean,
on my home 386 with an RLL controller), and at most around 300KB/sec.

   > If your only worry is single task fast transfer rate (signal/image
   > processing), be prepared to implement something like the Amoeba or Dartmouth
   > or Cray file systems.
   > 
   > The problem is software, not hardware.

   Pah, indeed.  I am measuring >6 MB/sec. through our filesystem today, albeit
   not with SCSI.  Our SCSI (synchronous) is only a bit over 2 MB/sec. on a
   single drive.  Striping and other wonderful *software* things do much more.
   ---Through the filesystem, mind you.---

Oh yeah. Thanks for supporting my contention/complaint. Now, if only
other people took heed from the likes of you and reimplemented the
filesystem software.  The FFS paper is very clear about the limits of
the BSD design (but I think it has others, actually).

I would not go as far as the Amoeba filesystem (files as *contiguous*
lumps of disc space, transferred in one IO operation from disc to memory
or vice versa, damn external fragmentation, and this works because Unix
files are on average minuscule). The Dartmouth flexible extent-based
filesystem with daemonic compaction looks good enough to me.
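For concreteness, the extent idea reduces to first-fit allocation of
contiguous runs of blocks, with a compaction daemon (not shown) relocating
extents when external fragmentation defeats first-fit. A toy sketch; the
bitmap, names, and the 1024-block disk size are invented for illustration:

```c
/* Toy first-fit extent allocator over a bitmap of DISK_BLKS blocks.
   A daemonic compactor would relocate extents to merge free runs
   whenever first-fit starts failing. */
#define DISK_BLKS 1024

static unsigned char used[DISK_BLKS];   /* 1 = block allocated */

/* Find the first free run of len blocks; mark it used and return its
   start, or -1 if external fragmentation leaves no run long enough. */
int alloc_extent(int len)
{
    int run = 0;
    for (int b = 0; b < DISK_BLKS; b++) {
        run = used[b] ? 0 : run + 1;
        if (run == len) {
            int start = b - len + 1;
            for (int i = start; i <= b; i++)
                used[i] = 1;
            return start;
        }
    }
    return -1;
}

void free_extent(int start, int len)
{
    for (int i = start; i < start + len; i++)
        used[i] = 0;
}
```

Since each file is one contiguous run, a whole file moves between disc and
memory in a single large transfer, which is where the bandwidth win comes from.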

   And ESDI is much, much faster in certain applications in that one can better
   sort, queue and optimize the performance that is limited by the speed of the
   drives' mechnaisms.

This is the notorious problem that SCSI hides the drive geometry from
the OS (down to sector remapping, which can be really nasty), which of
course puts paid to many nice optimizations. On the other hand, ESDI (on
PCs, where it is most popular) has the non-trivial problem that
controllers are on average not multithreaded. Pah again.
--
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

madd@world.std.com (jim frost) (03/09/90)

pcg@rupert.cs.aber.ac.uk (Piercarlo Grandi) writes:
>This is the notorious problem that SCSI hides the drive geometry from
>the OS (down to sector remapping, which can be really nasty), which of
>course puts paid to many nice optimizations.

It's not all that difficult to determine the geometry of a SCSI drive.
During the last USENIX a BSD person who's been researching FS
optimizations which take into account rotational latency hinted that
the BSD people have done so.  What you do with SCSI, then, is run a
geometry analyzer which dumps out the configuration of the SCSI drive
so that the filesystem can do the appropriate optimizations.  No big
trick there.
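For what it's worth, the standard way such an analyzer asks a drive for its
geometry is a MODE SENSE for mode page 4 (rigid disk drive geometry), which
reports cylinder and head counts. A sketch of the easy part -- building the
six-byte CDB and decoding the page; actually issuing the command needs
whatever host-adapter pass-through the OS provides, which is omitted here:

```c
/* Sketch: MODE SENSE(6) CDB for mode page 4 (rigid disk drive
   geometry), plus decoding of the cylinder and head counts from the
   returned page.  Opcode and page layout are per the SCSI standard. */
#define MODE_SENSE_6   0x1a
#define PAGE_GEOMETRY  0x04

void build_mode_sense(unsigned char cdb[6], unsigned char alloc_len)
{
    cdb[0] = MODE_SENSE_6;
    cdb[1] = 0;                 /* LUN 0 */
    cdb[2] = PAGE_GEOMETRY;     /* current values, page 4 */
    cdb[3] = 0;                 /* reserved */
    cdb[4] = alloc_len;         /* room for header + page */
    cdb[5] = 0;                 /* control */
}

/* page points at the start of mode page 4 in the returned data:
   bytes 2-4 hold the cylinder count, byte 5 the head count. */
long geometry_cylinders(const unsigned char *page)
{
    return ((long)page[2] << 16) | ((long)page[3] << 8) | page[4];
}

int geometry_heads(const unsigned char *page)
{
    return page[5];
}
```

Note this only yields the drive's *notional* geometry; as the follow-ups
below point out, it says nothing about remapped sectors or zoned recording.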

jim frost
saber software
jimf@saber.com

mjacob@wonky.Sun.COM (Matt Jacob) (03/10/90)

In article <1990Mar9.022931.4674@world.std.com> madd@world.std.com (jim frost) writes:
>pcg@rupert.cs.aber.ac.uk (Piercarlo Grandi) writes:
>>This is the notorious problem that SCSI hides the drive geometry from
>>the OS (down to sector remapping, which can be really nasty), which of
>>course puts paid to many nice optimizations.
>
>It's not all that difficult to determine the geometry of a SCSI drive.
>During the last USENIX a BSD person who's been researching FS
>optimizations which take into account rotational latency hinted that
>the BSD people have done so.  What you do with SCSI, then, is run a
>geometry analyzer which dumps out the configuration of the SCSI drive
>so that the filesystem can do the appropriate optimizations.  No big
>trick there.
>
>jim frost
>saber software
>jimf@saber.com

Umm - it would be interesting to see whether they claim to be able
to make sense out of variable-geometry (zone-bit recording) drives.

My own personal opinion is that geometry based filesystems are
getting to be a bad microoptimization. With the coming of SCSI-2
multiple command targets, it seems to me that one should just
concentrate on getting requests out to the target as quickly
as possible and let the microprocessor on the drive figure out
the best order to do them in.
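The firmware's half of that bargain is essentially a scheduling loop: of the
commands queued at the target, service whichever one is nearest the current
head position. A sketch of that shortest-seek-first pick (block numbers stand
in for whatever internal layout the drive really uses; names are invented):

```c
/* Sketch of shortest-seek-first selection, as drive firmware might
   apply it to its internal command queue.  queue[] holds the target
   block of each pending command; returns the index of the command to
   service next given the current head position, or -1 if empty. */
#include <stdlib.h>   /* labs */

int pick_next(const long *queue, int nqueued, long head_pos)
{
    int best = -1;
    long best_dist = -1;

    for (int i = 0; i < nqueued; i++) {
        long dist = labs(queue[i] - head_pos);
        if (best < 0 || dist < best_dist) {
            best = i;
            best_dist = dist;
        }
    }
    return best;
}
```

Pure shortest-seek-first can starve requests at the far edge of the disk,
which is why real schedulers usually sweep elevator-fashion instead.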

-matt jacob

hedrick@athos.rutgers.edu (Charles Hedrick) (03/12/90)

>My own personal opinion is that geometry based filesystems are
>getting to be a bad microoptimization.

You might want to separate issues of placement and command sorting.
If the disk controller is prepared to reorder transactions, and does
so well, then I agree it's a mistake for the kernel to do so.  It
should just get transfer requests to the controller as quickly as
possible.  The controller is in a better position to know what the
heads are doing.  

However it probably still makes sense for the kernel to try to place
blocks of files in positions that require minimal effort to read.  I
don't know of any controllers that are prepared to take over
management of the file system.  (In fact even the capability to
reorder transactions doesn't seem to be present in most SCSI
controllers that are actually available.)  It's not clear how much
this requires the kernel to know about the disk geometry.  My
suspicion is that the standard BSD file placement code gains something
even if it doesn't know where the exact track boundaries are.  At
least it will tend to keep files reasonably compact.  This assumes
that SCSI controllers will map logical to physical addresses in a
monotonic fashion, even if it can't be exactly linear.  (Apparently
some do a better job of this than others.  I take this into account
when buying disk drives.)
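The placement gain described above needs nothing more than putting each new
block of a file as close as possible, in logical block numbers, to the file's
previous block; a monotonic logical-to-physical mapping then keeps the file
physically compact too, even without knowing track boundaries. A toy version
of that rule (the bitmap and sizes are invented for illustration):

```c
/* Toy placement rule: choose the free logical block nearest the
   file's previously allocated block.  With a monotonic
   logical-to-physical mapping, nearness in logical block numbers
   still means short seeks, exact geometry unknown or not. */
#define NBLOCKS 256

/* freemap[b] is 1 if logical block b is free.  Returns the nearest
   free block to prev (preferring the forward direction on ties),
   or -1 if the disk is full. */
int place_near(const unsigned char *freemap, int prev)
{
    for (int d = 0; d < NBLOCKS; d++) {
        if (prev + d < NBLOCKS && freemap[prev + d])
            return prev + d;
        if (prev - d >= 0 && freemap[prev - d])
            return prev - d;
    }
    return -1;
}
```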

madd@world.std.com (jim frost) (03/13/90)

mjacob@wonky.Sun.COM (Matt Jacob) writes:
>My own personal opinion is that geometry based filesystems are
>getting to be a bad microoptimization. With the coming of SCSI-2
>multiple command targets, it seems to me that one should just
>concentrate on getting requests out to the target as quickly
>as possible and let the microprocessor on the drive figure out
>the best order to do them in.

You'd be wrong no matter what you did.

Let's face it, there's no way you can know what someone wants to do
with your drive.  IBM has had hardware keying in its drives for years
and years, and I know people who absolutely swear it's faster than
software keying could EVER be.

What's the problem with this?

For every access technique I've ever seen, there's an optimal and a
suboptimal series of requests.  There are a number of techniques which
boast near-even access times all the time, and a number which produce
access times near optimal for the hardware on specific sequences.  None
that I've seen can offer near-optimal times ALL the time, for ALL given
sequences.

You'd have to do that to get hardware to perform as well as software
would when software can know ahead of time how the accesses are going
to be done.  The application writer has the ability to examine how
accesses are going to be done and optimize the data layout based on
that knowledge.  The drive manufacturer does not.  Therefore, if the
software just happens to use the worst-case access sequence, it gets
terrible performance.

There are a hell of a lot of algorithm books out there which bend over
backwards trying to prove that you can't fool all of the people all of
the time, which is what you're wishing for if you think the hardware
designer can predict what everyone is going to want to do with the
hardware.

Happy hacking,

jim frost
saber software
jimf@saber.com

henry@utzoo.uucp (Henry Spencer) (03/14/90)

In article <132788@sun.Eng.Sun.COM> mjacob@sun.UUCP (Matt Jacob) writes:
>... With the coming of SCSI-2
>multiple command targets, it seems to me that one should just
>concentrate on getting requests out to the target as quickly
>as possible and let the microprocessor on the drive figure out
>the best order to do them in.

This is reasonable, provided that (a) one can impose constraints on the
ordering to meet filesystem-integrity requirements, and (b) the micro
on the drive has enough queue space for (potentially) hundreds of
requests.  I'm not holding my breath.
-- 
MSDOS, abbrev:  Maybe SomeDay |     Henry Spencer at U of Toronto Zoology
an Operating System.          | uunet!attcan!utzoo!henry henry@zoo.toronto.edu

michael@xanadu.com (Michael McClary) (03/19/90)

In article <1990Mar13.190317.17846@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
>In article <132788@sun.Eng.Sun.COM> mjacob@sun.UUCP (Matt Jacob) writes:
>>... With the coming of SCSI-2
>>multiple command targets, it seems to me that one should just
>>concentrate on getting requests out to the target as quickly
>>as possible and let the microprocessor on the drive figure out
>>the best order to do them in.
>
>This is reasonable, provided that (a) one can impose constraints on the
>ordering to meet filesystem-integrity requirements, and (b) the micro
>on the drive has enough queue space for (potentially) hundreds of
>requests.  I'm not holding my breath.

I hereby spend a little of the net's bandwidth to point out, especially
to the authors of drive firmware, that (a) is VERY important.

VERY VERY VERY VERY VERY VERY VERY VERY VERY VERY VERY VERY important.

A drive that does no write-order optimization whatsoever is usable
for a high-reliability database.

A drive that buffers and re-orders writes is UNusable UNLESS its
write order can be constrained (or neither it nor the computer it
is connected to EVER suffer any failures or unexpected shutdowns).

The smallest simple-to-implement constraint I know is to be able to
tell the drive "Be sure everything you got before >NOW< is written
and power-fail safe before writing anything you get after >NOW<."
If you can't give me at least that, write the data in the order you
got it.

The constraint "Be sure everything you got before >NOW< is written
and power-fail safe before allowing another operation to start."
is sufficient, but causes an unnecessary performance hit for some
applications.
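The first constraint is essentially what SCSI-2's ordered queue tags express:
the target may reorder freely within a batch, but no write queued after the
barrier may start until everything queued before it is on the platter. A toy
model of that queue (all names invented), with the drive's freedom to reorder
shown as an ascending elevator-style drain of each batch:

```c
/* Toy model of a reorderable write queue with barriers.  The drive
   may service pending[] in any order it likes, but nothing crosses a
   barrier: each batch reaches the platter entirely before the next
   batch starts. */
#define QMAX 64

static long pending[QMAX];   /* writes the drive may still reorder */
static int npending;
static long done[QMAX];      /* order blocks actually hit the disk */
static int ndone;

void queue_write(long blk)
{
    pending[npending++] = blk;
}

/* Barrier: drain the current batch.  The drive's choice of order is
   modeled as ascending block number, as an elevator sweep might do. */
void barrier(void)
{
    for (int i = 0; i < npending; i++)
        for (int j = i + 1; j < npending; j++)
            if (pending[j] < pending[i]) {
                long t = pending[i];
                pending[i] = pending[j];
                pending[j] = t;
            }
    for (int i = 0; i < npending; i++)
        done[ndone++] = pending[i];
    npending = 0;
}
```

A database log write followed by barrier() followed by the data writes is
exactly the recovery guarantee the posting asks for.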

Thank you for your attention.

=========================================================================
I normally have the option of turning opinions expressed in my postings
into 1/5 of 1% of the opinion of Xanadu Operating Company.

On this issue, my opinion >IS< the opinion of Xanadu Operating Company.
=========================================================================