[comp.unix.xenix] Caching disk controllers and 386 multiprocessor

steveb@aostul.UUCP (Steve Bogner) (05/26/89)

I am trying to put together a very high performance SCO Xenix
system and would like to know if anyone out there has had
experience (good or bad) with the caching disk controllers made
by DPT (Distributed Processing Technology).  They say you can
have a hardware cache of up to 16 MB, reducing disk access to .6
ms.  Also, a company called Corollary makes a 386 multiprocessing
board for SCO Xenix called the 386/mp.  Up to four 386's can be
added to the server.  Has anyone actually done this?  I'm using
the ALR 33/386 server with 16 MB of 80 ns RAM and Maxtor 4380E
disks.  Please reply by e-mail.

		Steve Bogner

!uunet!romed!aostul!steveb

jeff@swusrgrp.UUCP (Jeff Tye sys adm) (06/05/89)

In article <10@aostul.UUCP>, steveb@aostul.UUCP (Steve Bogner) writes:
> 
> I am trying to put together a very high performance SCO Xenix
> system and would like to know if anyone out there has had
> experience (good or bad) with the caching disk controllers made
> by DPT (Distributed Processing Technology).  They say you can
> have a hardware cache of up to 16 MB, reducing disk access to .6
> ms. 

I've been using and selling the DPT caching disk controllers for over
a year and a half. I highly recommend the product and the company. The
performance benefits are outstanding. I like them so much that I propose
them with every system I sell. The difference is like night and day.

Some tips on using them:

1)	Don't use it unless you have the RAM card with it. 512K is not
	enough by itself. Trust me. I've tried it in a variety of
	configurations and 2.5MB cache RAM or more is needed.

2)	Lower your disk buffers in Xenix to 256K when using the DPT card.
	You get more RAM for XENIX that way and the DPT does the rest for
	you.  (A sample relink session follows this list.)

3)	Use the DPT format utility and *NOT* the Xenix install format. DPT
	uses a special formatting algorithm that skews the sectors for 
	optimum performance.
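
For tip 2, the relink on my systems goes roughly like the session below.
Treat it as a sketch rather than gospel - the configure menus and paths
vary between Xenix releases (NBUF counts 1K buffers, so 256 gives the
256K mentioned above):

	# cd /usr/sys/conf
	# ./configure			(set NBUF to 256)
	# ./link_xenix			(relink the kernel)
	# ./hdinstall			(install the new kernel)
	# reboot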

>    Also, a company called Corollary makes a 386 multiprocessing
> board for SCO Xenix called the 386/mp.  Up to four 386's can be
> added to the server.  Has anyone actually done this?  I'm using
> the ALR 33/386 server with 16 MB of 80 ns RAM and Maxtor 4380E
> disks.  Please reply by e-mail.

You must use Corollary's motherboard in order to use their multiprocessing
capabilities. The AT bus is too slow for multiprocessing, so Corollary
created a new AT-compatible motherboard with a dedicated 'C' bus for memory
and data transfers. It is a true multiprocessing bus with cache coherency
and all that stuff. Zenith is OEMing the product in their Z10000 (?) line.

-- 
Jeff Tye
southwest!/usr/group                  The Southwest U.S. chapter of /usr/group
c/o Copperstate Business Systems                          voice (602) 244-9391
ncar!noao!asuvax!hrc!swusrgrp!jeff                     swusrgrp (602) 275-2541

pim@ctisbv.UUCP (Pim Zandbergen) (06/06/89)

In article <10@aostul.UUCP> steveb@aostul.UUCP (Steve Bogner) writes:
>I would like to know if anyone out there has had
>experience (good or bad) with the caching disk controllers made
>by DPT (Distributed Processing Technology).

And how about a comparison between caching (ESDI) disk controllers
and the SCSI disk controller made by Adaptec, supported in
SCO Xenix 2.3GT (and supposedly in SCO Unix 3.2)?

Which has the best performance/price ratio?
-- 
--------------------+----------------------+-----------------------------------
Pim Zandbergen      | phone: +31 70 542302 | CTI Software BV
pim@ctisbv.UUCP     | fax  : +31 70 512837 | Laan Copes van Cattenburch 70
...!uunet!mcvax!hp4nl!ctisbv!pim           | 2585 GD The Hague, The Netherlands

sl@unifax.UUCP (Stuart Lynne) (06/07/89)

In article <755@ctisbv.UUCP> pim@ctisbv.UUCP (Pim Zandbergen) writes:
>In article <10@aostul.UUCP> steveb@aostul.UUCP (Steve Bogner) writes:
>>I would like to know if anyone out there has had
>>experience (good or bad) with the caching disk controllers made
>>by DPT (Distributed Processing Technology).
>
>And how about a comparison between caching (ESDI) disk controllers
>and the SCSI disk controller made by Adaptec, supported in
>SCO Xenix 2.3GT (and supposedly in SCO Unix 3.2)?
>
>Which has the best performance/price ratio?

What I would like to see is a comparison of a caching disk controller (say a
DPT with 2MB of RAM) and a normal controller (say a WD1006) *plus* the same
amount of memory allocated to block buffers.
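
Something along these lines would do, run on an otherwise idle machine
under each configuration (the device name and counts are placeholders):

	# A: WD1006, kernel relinked with ~2MB of buffers (NBUF=2048)
	# B: DPT with 2MB of cache RAM, kernel at its stock buffer size
	sync
	/bin/time dd if=/dev/hd0a of=/dev/null bs=4k count=2000	# cold pass
	/bin/time dd if=/dev/hd0a of=/dev/null bs=4k count=2000	# warm pass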

I get the impression from the articles I read that the reviewer simply puts
in the caching controller, steps back, and marvels at the improved response
:-) 

It seems obvious that adding 2MB of buffers is going to help somewhat. 
But where is the best place to put the memory, on the controller or in the 
kernel?

If it's accessible to the kernel then presumably you save a bit of time not
having to move it from the controller to a block buffer (over a slow AT
bus).

If it's on the controller then perhaps you can save some overhead by having
the controller manage it.

I presume it reduces to a comparison of the efficiencies of the two block 
buffering algorithms and the time to transfer between kernel memory and the
controller memory.

One wild idea would be to implement a controller with dual-ported memory and
have it interact with the kernel to implement a shared buffer cache. Unix
just asks for a block and it auto-magically appears in the appropriate
buffer after some period of time. Maybe someone will do this on a 32-bit EISA
card!


-- 
Stuart.Lynne@wimsey.bc.ca uunet!van-bc!sl 604-937-7532(voice) 604-939-4768(fax)

zeeff@b-tech.ann-arbor.mi.us (Jon Zeeff) (06/08/89)

I'm interested in the performance of various controller/disk combinations.
Can people run the following and provide me with:

the time (sys and real) on an unloaded system
your operating system vendor + version
drive type / manufacturer
interface type / manufacturer
CPU type and speed

/bin/time dd if=/dev/dsk/0s1  of=/dev/null bs=4k count=2000

/dev/dsk/0s1 can be changed but should be a block device.  I'd be 
interested in non '386 system results also.  I will summarize the 
results to the net.  
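
One caveat: with a caching controller or a large kernel buffer pool, a
second pass over the same blocks may come mostly from cache, so cold
and warm numbers can differ wildly.  If you have the time, run it
twice back to back and report both:

/bin/time dd if=/dev/dsk/0s1 of=/dev/null bs=4k count=2000	# cold
/bin/time dd if=/dev/dsk/0s1 of=/dev/null bs=4k count=2000	# warm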

-- 
  Jon Zeeff			zeeff@b-tech.ann-arbor.mi.us
  Ann Arbor, MI			sharkey!b-tech!zeeff

paul@csnz.co.nz (Paul Gillingwater) (06/13/89)

In article <1216@swusrgrp.UUCP> jeff@swusrgrp.UUCP (Jeff Tye sys adm) writes:
>In article <10@aostul.UUCP>, steveb@aostul.UUCP (Steve Bogner) writes:
>> 
>> I am trying to put together a very high performance SCO Xenix
>> system and would like to know if anyone out there has had
>> experience (good or bad) with the caching disk controllers made
>> by DPT (Distributed Processing Technology).  They say you can
>> have a hardware cache of up to 16 MB, reducing disk access to .6
>> ms. 
>
>I've been using and selling the DPT caching disk controllers for over
>a year and a half. I highly recommend the product and the company. The
>performance benefits are outstanding. I like them so much that I propose
>them with every system I sell. The difference is like night and day.
>
Here's another tip for those who want to get the very best performance.
First, I assume you want to run large disks, e.g. 600 Mb plus.  Hint:
don't buy one big disk.  Buy two 300 Mb disk drives, and put them
on different controllers, and when I say different, I mean different,
like use an ESDI for your root partition and a SCSI for your application
data.  You can put two drives on one controller, but you can only access
one of them at a time, whereas with two controllers, you can get much
better throughput - especially if you have to swap out tasks.
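
As a sketch, the layout I mean looks like this (the device names here
are hypothetical - they vary with the drivers you install):

	mount /dev/root /	# root filesystem on the ESDI controller
	mount /dev/sd0d /u	# application data on the SCSI controller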

So why not two ESDI controllers?  Simple - when you have your multi-port
serial cards, printers, tape backups, VGAs, mouse etc. in the system,
you tend to run out of IRQs very quickly.

Make sure you use fast caches too!   But be VERY sure that your
operating system will support them. 
-- 
Paul Gillingwater, Computer Sciences of New Zealand Limited
Bang: ..!uunet!dsiramd!csnz!paul    Domain: paul@csnz.co.nz
Call Magic Tower BBS V21/23/22/22bis 24 hrs +0064 4 767 326

jpp@slxsys.specialix.co.uk (John Pettitt) (06/13/89)

From article <134@unifax.UUCP>, by sl@unifax.UUCP (Stuart Lynne):
+ What I would like to see is a comparison of a caching disk controller (say a
+ DPT with 2MB of RAM) and a normal controller (say a WD1006) *plus* the same
+ amount of memory allocated to block buffers.
+ 
+ I get the impression from the articles I read that the reviewer simply puts
+ in the caching controller, steps back, and marvels at the improved response
+ :-) 
+ 
+ It seems obvious that adding 2MB of buffers is going to help somewhat. 
+ But where is the best place to put the memory, on the controller or in the 
+ kernel?

From the tests I ran the DPT wins over adding 2 MB of buffers to /xenix
(don't know about `real' unix).  The improvement gained by having more 
buffers in /xenix was not great.  I think this was mostly due to the 
caching code being rather old and not that well written (putting on
flame proof suit :-).   The basic problem seemed to be the cache write
back code:  When lots of buffers are added to xenix it fills almost all of
them with dirty data, then spends several seconds writing them all back, and
the real time response goes out the window.

The DPT is far better balanced in this respect and so gives a much more
even throughput.
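
The xenix burst is easy to demonstrate for yourself: dirty a few
megabytes of buffers, then time an unrelated small read while the
flush is in progress (the paths here are just placeholders):

	dd if=/dev/zero of=/tmp/big bs=4k count=1024 &	# dirty ~4MB of buffers
	sleep 3						# let the write back begin
	/bin/time cat /etc/termcap > /dev/null		# this read waits behind the flush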
+ 
+ I presume it reduces to a comparison of the efficiencies of the two block 
+ buffering algorithms and the time to transfer between kernel memory and the
+ controller memory.
+ 
+ One wild idea would be to implement a controller with dual-ported memory and
+ have it interact with the kernel to implement a shared buffer cache. Unix
+ just asks for a block and it auto-magically appears in the appropriate
+ buffer after some period of time. Maybe someone will do this on a 32-bit EISA
+ card!

Arrrrrrrgh, and how do you hope to track the changes in xenix?
+ -- 
+ Stuart.Lynne@wimsey.bc.ca uunet!van-bc!sl 604-937-7532(voice) 604-939-4768(fax)
-- 
John Pettitt, Specialix, Giggs Hill Rd, Thames Ditton, Surrey, U.K., KT7 0TR
{backbone}!ukc!slxsys!jpp     jpp@slxinc.uucp     jpp@slxsys.specialix.co.uk
Tel: +44-1-941-2564       Fax: +44-1-941-4098         Telex: 918110 SPECIX G
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

pcg@aber-cs.UUCP (Piercarlo Grandi) (06/13/89)

In article <134@unifax.UUCP> sl@unifax.UUCP (Stuart Lynne) writes:
    
    It seems obvious that adding 2MB of buffers is going to help somewhat. 
    But where is the best place to put the memory, on the controller or in the 
    kernel?

Extensive research (on mainframes, mostly) shows that caching is better done
by the OS in main memory. It is also vastly more flexible. Even more
important, caching controllers become married to their discs, and need (if
safety has any importance) battery backup of the cache.
    
    I get the impression from the articles I read that the reviewer simply puts
    in the caching controller, steps back, and marvels at the improved response
    :-)

Precisely; even reviewers are often just glorified naive users, and don't
bother twiddling the cache size. I remember reading that on a "typical mini"
a unix buffer cache of 2 megs gives you a hit rate of almost 90%. Using large
buffer caches in NFS clients (25% of memory) *and* servers (I have configured
caches of up to 6 megs) cuts network traffic and client response times
wonderfully.
-- 
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

pcg@aber-cs.UUCP (Piercarlo Grandi) (06/14/89)

In article <4038@slxsys.specialix.co.uk> jpp@slxsys.specialix.co.uk (John Pettitt) writes:

    + It seems obvious that adding 2MB of buffers is going to help somewhat. 
    + But where is the best place to put the memory, on the controller or in the 
    + kernel?
    
    From the tests I ran the DPT wins over adding 2 MB of buffers to /xenix
    (don't know about `real' unix).  The improvement gained by having more
    buffers in /xenix was not great.  I think this was mostly due to the
    caching code being rather old and not that well written (putting on
    flame proof suit :-).

Well, there is not much you can do to improve the caching algorithm once you
have hashed cache access. If Xenix 2.3.2 does not have it, tough; all BSDs
and later System V releases do have it.

    The basic problem seemed to be the cache write back code:  When lots of
    buffers are added to xenix it fills almost all of them with dirty data,
    then spends several seconds writing them all back, and the real time
    response goes out the window.

But Unix flushes the cache to limit data loss in the case of a crash.
In particular, "hardened" filesystems implement directory and inode
modifications with write-thru instead of write-back, which kills the
performance of dump restores when there are many small files.

    The DPT is far better balanced in this respect and so gives a much more
    even throughput.

Either the DPT has battery backing for its 2MB of cache, or it is a very
dangerous proposition. Even if it has non-volatile memory, it will delay,
possibly by a lot, the moment at which data is actually written to the disc,
which makes error reporting even more imprecise than it already is; and I
hope it has an explicit flush command, for when you want to be sure that
disc contents actually reflect what you think is on them.

In practice controller cache also loses to main memory cache when you have
two controllers, which is virtually a must for high performance multiuser
systems (given that most controllers, SCSI ones excepted, are not
multithreaded).
-- 
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

jpp@slxsys.specialix.co.uk (John Pettitt) (06/14/89)

From article <1015@aber-cs.UUCP>, by pcg@aber-cs.UUCP (Piercarlo Grandi):
> In article <4038@slxsys.specialix.co.uk> jpp@slxsys.specialix.co.uk (John Pettitt) writes:
>     From the tests I ran the DPT wins over adding 2 MB of buffers to /xenix
>     (don't know about `real' unix).  The improvement gained by having more
>     buffers in /xenix was not great.  I think this was mostly due to the
>     caching code being rather old and not that well written (putting on
>     flame proof suit :-).
> 
> Well, there is not much you can do to improve the caching algorithm once you
> have hashed cache access. If Xenix 2.3.2 does not have it, tough; all BSDs
> and later SystemVs do have it.

Xenix does have hashed cache access.
The point I was making was not about cache access methods but about cache 
write back strategies.
> 
>     The basic problem seemed to be the cache write back code:  When lots of
>     buffers are added to xenix it fills almost all of them with dirty data,
>     then spends several seconds writing them all back, and the real time
>     response goes out the window.
> 
> But Unix does flush the cache to have less loss of data in the case of
> crashes. In particular, "hardened" filesystems implement directory and inode
> modifications with write-thru, instead of write-back, which kills the
> performance of dump restores when there are many small files.

You have missed the point.
The problem was that xenix would sit around for up to 2 or 3 seconds and then
try to write over a meg to the disk in one go.  If you happened to want a file
that was on a block at the other end of the disk you had to wait, and wait,
and wait.
> 
>     The DPT is far better balanced in this respect and so gives a much more
>     even throughput.
> 
> Either the DPT has battery backing for its 2MB of cache, or it is a very
> dangerous proposition. Even if it has non-volatile memory, it will delay,
> possibly by a lot, the moment at which data is actually written to the disc,
> which makes error reporting even more imprecise than it already is; and I
> hope it has an explicit flush command, for when you want to be sure that
> disc contents actually reflect what you think is on them.

Non-volatile ram in the controller buys you very little; the write back
window on the DPT is only .25 of a second (from the write reaching the
controller to the time the sector is added to the controller-to-disk
transfer queue).

And yes it does have a flush command.

Given that XENIX (and that's what the question was about) can wait 2 or 3 
seconds, the added danger is minimal - if you are that worried, buy a UPS.

> In practice you lose with cache in the controller vs. main memory also when
> you have two controllers, which is virtually a must (given that most are not
> multithreading, except SCSI ones) for high performance multiuser systems.

Before making statements that caching controllers lose, you should check the
facts.

1) My posting was based on real, timed benchmarks. 

2) Why do you need two controllers ?  I thought that the point of having
a cache was to avoid that sort of kludge hardware solution.

3) The DPT 3011/70 has 10 (count 'em) LEDs to indicate what it's doing,
so it must be a `real computer' (it's got flashing lights :-) :-) :-)

4) The DPT is easy to use: you plug it in and it works - no drivers, no broken
installs.  Given that XENIX users (for the most part) would not know where to
start with any other solution, DPT have a good product.

> Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
> Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
> Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk
-- 
John Pettitt, Specialix, Giggs Hill Rd, Thames Ditton, Surrey, U.K., KT7 0TR
{backbone}!ukc!slxsys!jpp    jpp%slxinc@uunet.uu.net     jpp@specialix.co.uk
Tel: +44-1-941-2564       Fax: +44-1-941-4098         Telex: 918110 SPECIX G
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

steve@nuchat.UUCP (Steve Nuchia) (06/15/89)

There are exceptions to the general rule that more main memory for
the unix block cache beats caching controllers and ram disks.  The
exceptions arise when there is something wrong with your computer
system that, for one reason or another, you can't fix while you
*can* add the special-purpose device.

Examples:
	1)	Limited memory address space: PDP-11s are the classic example,
		limited to 4MB by the Qbus, so a couple of extra MB on
		the disk controller or a bank-switched ram disk can really help.
		Also applies to 286s.

	2)	Depending on how the system is balanced, it may make sense
		to use slow, cheap ram on the disk controller or for a
		ram disk when you can't afford fast, expensive main memory.

	3)	You've got a binary distribution of "The Unix (tm) Operating
		System" with the slow(tm) file system.  A large sparse cache
		in the disk controller may help, particularly when an
		rm -r starts the disk arm in its washer-walk mode.

	4)	You've got a binary-only driver that is too slow to run
		at a reasonable interleave.  A controller that caches
		a track (or a cylinder) at a crack and feeds it to you
		as fast as you can digest it (spoofing the interface
		your driver expects) wins.

	5)	The "general rule" applies to the mythical "typical job mix";
		there are applications in which a well-implemented ram disk
		is just the thing.

None of these situations *should* happen, but they do.  I'm suffering
from 2, 3, and 4 for instance.
-- 
Steve Nuchia	      South Coast Computing Services
uunet!nuchat!steve    POB 890952  Houston, Texas  77289
(713) 964 2462	      Consultation & Systems, Support for PD Software.

rfarris@serene.UUCP (Rick Farris) (07/22/89)

In article <10870@nuchat.UUCP> steve@nuchat.UUCP (Steve Nuchia) writes:
> There are exceptions to the general rule that more main memory for
> the unix block cache beats caching controllers and ram disks.

I've been watching this discussion for several days now, and I've
noticed that no one has mentioned the biggest advantage of the DPT
controller:  It has an on-board computer!  We're talking
multi-processing, folks.  The main cpu can write to the controller's
cache, and then the cpu on the controller can do the elevator writes,
etc., *at the same time the main cpu is doing other things.*


Rick Farris   RF Engineering  POB M  Del Mar, CA  92014   voice (619) 259-6793
rfarris@serene.uu.net      ...!uunet!serene!rfarris       serene.UUCP 259-7757