[comp.sys.apollo] VME SCSI controller in DN10000

dfazio@nachos.SSESCO.com (Dennis Fazio) (12/07/90)

We are trying to install a Ciprico Rimfire 3500 SCSI controller
in a DN10000 VMEbus to attach large quantities of SCSI disks.
Has anyone else done something similar? We have a driver
program written for Suns that works with the Ciprico board, and we
need any help or information anyone can give on either
converting this driver to Apollo Domain/OS 10.2, or on whether
this is even the right way to go about it.

Does Domain/OS even support Unix-style device drivers, or do
we have to do something completely different?

You may respond by posting or E-mail.
Thanks in advance.
-- 
Dennis Fazio                     | Internet: dfazio@ssesco.com
SSESCO, Inc.                     | Gabnet:   (612) 342-0003
511 11th Avenue South, Suite 268 | Faxnet:   (612) 344-1716
Minneapolis, Minnesota  55415    | 

krowitz@RICHTER.MIT.EDU (David Krowitz) (12/07/90)

Domain/OS does not support Unix-style device drivers. All of Apollo's drivers
are written with a package known as GPIO (General Purpose I/O, I think -- I
just use it, I don't pay attention to the names ;-) ).  Your biggest
problem is that Apollo's method of writing and loading device drivers
winds up loading them ON TOP OF the OS kernel and the native, memory-mapped
file system. You cannot page to, memory-map a file on (either via Unix
"mmap" calls or Aegis "MS_$" calls), "mount", or "unmount" a disk for which
you have written your own device driver. It *is* possible to write a device
driver for a disk device and then to create a stream I/O type-manager for
the device which will handle Unix-style I/O calls and Aegis IOS_$ and STREAM_$
calls (Fortran and C I/O all pass through these facilities). This is a *LOT*
of work! Entire companies (including the ever-present Workstation Solutions)
have their product lines built around this kind of effort.
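
To make the restriction concrete, here is a small sketch of the kind of
access that is lost. It is plain Unix C, nothing Apollo-specific, and the
path name is made up; the point is just that this style of access only
works on disks the OS itself manages, whether you ask via Unix mmap() or
one of the Aegis MS_$ mapping calls -- a disk behind your own GPIO driver
sits outside the native file system, so there is nothing to map.

/* Sketch only: the sort of access that works on an OS-managed disk
 * but not on one behind a home-grown GPIO driver.  Plain Unix C;
 * the path below is hypothetical.
 */
#include <sys/types.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int
main(void)
{
        int   fd;
        char *p;

        fd = open("/scsi_disk/big_data_file", O_RDONLY);  /* hypothetical path */
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* Map the first page of the file and touch it as ordinary memory.
         * The Aegis equivalent would be an MS_$ mapping call; either way
         * the file has to live in the native, memory-mapped file system.
         */
        p = (char *) mmap((caddr_t) 0, 4096, PROT_READ, MAP_SHARED, fd, (off_t) 0);
        if (p == (char *) -1) {
                perror("mmap");
                close(fd);
                return 1;
        }
        printf("first byte: %d\n", p[0]);
        munmap((caddr_t) p, 4096);
        close(fd);
        return 0;
}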

If you want to attach SCSI disks to your network, I can think of two possible
solutions:

1) Get either a DN2500 (20 MHz, ~3.5 MIPS, 68030), a 400t (50 MHz, ~12 MIPS, 68030),
   or a 425t (25 MHz, ~15-20 MIPS, 68040 when it becomes available). The native SCSI
   disk support in the OS kernels for these machines seems to be able to recognize most
   SCSI disk drives. The DN2500 costs ~$4000. It is slower than the other models, but
   it uses standard 1MBx9 or 4MBx9 100ns SIMM memories, which are plentiful and cheap.
   You can load up a 64 MB machine (16 4MBx9 SIMMs) for under $5000 for the extra
   memory. The 400t costs ~$7000 for an 8MB machine. It's quite a bit faster, which
   may be important for a disk server, but extra memory from HP is $4000 per 8MB.
   Third-party memory seems to be going for about $3000 per 8MB. When the 68040 chips
   make it out the door, the 425t will give you even more speed for about $9000. Again,
   it uses the more expensive memories. Where the trade-off point is between CPU
   speed and memory capacity for caching network disk I/O is unknown at this point.
   If anyone has done any testing, they haven't posted it to the net (hint, hint, guys
   and gals).

2) Apollo does list a system integrator's package for disk drives on their price
   list. It's *expensive*, and the price-list description isn't all that clear
   on whether it contains the source code and mechanisms for integrating your own
   disk into the OS, or whether it is for supporting existing drives and controllers
   which you are installing yourself in diskless machines bought under an OEM
   agreement.

If what you really want is more disk space, as opposed to SCSI disk space, on your
DN10000, then my advice is to look at the existing ESDI disks and controller in
your system, copy down the part numbers, and call your favorite electronics components
supply house. I looked into getting a 350MB drive for my DN4000 about 6 months ago,
and I got prices ranging from $1200 to $1500 for brand new Maxtor 4380E and/or
Micropolis 1558 drives -- which is not all that much more than what a SCSI drive
costs.


 -- David Krowitz

krowitz@richter.mit.edu   (18.83.0.109)
krowitz%richter.mit.edu@eddie.mit.edu
krowitz%richter.mit.edu@mitvma.bitnet
(in order of decreasing preference)

rees@pisa.ifs.umich.edu (Jim Rees) (12/07/90)

In article <9012071403.AA02580@richter.mit.edu>, krowitz@RICHTER.MIT.EDU (David Krowitz) writes:

     Where the trade off point is between CPU
     speed and memory capacity for caching network disk I/O is unknown at this point.
     If anyone has done any testing, they haven't posted it to the net (hint, hint, guys
     and gals).

I think that queueing theory predicts you're better off putting your memory
on the clients than on the servers, unless your network is very fast or your
server memory size can be made larger than your total (across all clients)
working set.  I think this has even been verified by experiment for fairly
wide ranges of disk and network speed.  A guy I used to work with at the
University of Washington (Ed Lazowska) did some work along these lines.

Then again, I haven't seen any work on this lately, and I don't have any
references right at hand.
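
Just to put rough numbers on that argument, here is a toy model (mine, not
from any of the papers -- the network, disk, and hit-ratio figures are
guesses, not measurements) comparing the same amount of cache memory placed
on the client versus on the disk server:

/* Toy model only -- all timing and hit-ratio numbers are assumptions. */
#include <stdio.h>

int
main(void)
{
        double net  = 5.0;      /* ms: network round trip + protocol overhead (assumed) */
        double disk = 25.0;     /* ms: average disk service time (assumed) */
        double h;               /* cache hit ratio -- same memory, client or server */

        printf("hit ratio   client cache   server cache   (mean ms per access)\n");
        for (h = 0.0; h < 0.95; h += 0.1) {
                double client = (1.0 - h) * (net + disk);  /* hit = local memory, ~0 ms */
                double server = net + (1.0 - h) * disk;    /* every access crosses the net */
                printf("   %.1f         %6.2f         %6.2f\n", h, client, server);
        }
        return 0;
}

With the same hit ratio on either side, the client cache wins by roughly the
hit ratio times one network round trip per access; the server-side cache only
catches up if the network is very fast, or if the server's memory is large
enough to get a much better hit ratio than any one client could -- which is
the working-set condition above.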