[comp.periphs.scsi] Controller Cache vs. Software Cache

jt34@prism.gatech.EDU (THOMPSON,JOHN C) (06/06/91)

How does the performance of a SCSI host adaptor with built-in caching
hardware compare to that of a software caching program or OS-level caching?
Which is faster? Is on-board drive caching faster still? Is there any
definitive research on this subject? Is there a source on the net for a
SCSI peripheral benchmark program/source code? Thanks

-- 
THOMPSON,JOHN C
Georgia Institute of Technology, Atlanta Georgia, 30332
uucp:	  ...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!prism!jt34
Internet: jt34@prism.gatech.edu

gerry@zds-ux.UUCP (Gerry Gleason) (06/13/91)

In article <30738@hydra.gatech.EDU> jt34@prism.gatech.EDU (THOMPSON,JOHN C) writes:
>How does the performance of a SCSI host adaptor with built-in caching
>hardware compare to that of a software caching program or OS-level caching?
>Which is faster? Is on-board drive caching faster still? Is there any
>definitive research on this subject? Is there a source on the net for a
>SCSI peripheral benchmark program/source code? Thanks

Logically, software caching must be faster (assuming reasonable
implementations in both cases) in the case of a cache hit, because it
can simply hand over the data rather than performing an I/O
operation.  On the other hand, there is one type of hardware caching
that does make sense, read-ahead track buffering, but even this can
be handled to some extent in software, and it's better done in the
drive itself if you're going to do it at all (and some drives do).
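
To make the hit-path point concrete, here is a minimal sketch of a
software block cache's read path.  All the names (cached_read, disk_read,
and so on) are made up for illustration; this isn't taken from any real
driver:

#include <string.h>
#include <stdlib.h>

#define BLOCK_SIZE 512
#define NBUCKETS   64

struct cbuf {
    long         blkno;             /* disk block number */
    char         data[BLOCK_SIZE];
    struct cbuf *next;              /* hash chain */
};

static struct cbuf *hash[NBUCKETS];

extern void disk_read(long blkno, char *buf);  /* the real I/O */

/* On a hit this is a table lookup and a memcpy -- no bus cycle,
 * no seek, no rotational delay.  Only a miss pays for real I/O. */
void cached_read(long blkno, char *buf)
{
    struct cbuf **hp = &hash[blkno % NBUCKETS];
    struct cbuf  *b;

    for (b = *hp; b != NULL; b = b->next)
        if (b->blkno == blkno) {            /* hit: no I/O at all */
            memcpy(buf, b->data, BLOCK_SIZE);
            return;
        }

    b = (struct cbuf *) malloc(sizeof *b);  /* miss: go to the disk   */
    b->blkno = blkno;                       /* (a real cache would    */
    b->next  = *hp;                         /* evict a buffer here,   */
    *hp = b;                                /* not malloc forever)    */
    disk_read(blkno, b->data);
    memcpy(buf, b->data, BLOCK_SIZE);
}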

Now, this doesn't mean there aren't other reasons for wanting a cache
on the controller; for example, so you can implement special multi-drive
features such as mirroring, arrays, etc.  If the controller designer
does it right, they could introduce a simple caching controller and
later provide these advanced features as a firmware upgrade.

Gerry Gleason

jt34@prism.gatech.EDU (THOMPSON,JOHN C) (06/13/91)

You mention several interesting points regarding drive arrays.  To take
full advantage of a drive array you need the array working on multiple
requests simultaneously.  SCSI-2 allows for command queuing and overlapped
commands, just the kind of thing needed to maximize performance from a
disk array.  The problem as I see it is that most OS device drivers do not
yet support SCSI-2 and hence would limit the potential effectiveness of an
array.  Rather than changing the device drivers for every OS, might it
be easier to implement these features with a smart caching host adaptor?
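
To see what tagged command queuing buys you, compare the issue loop below
with the usual issue-one-wait-one driver.  This is only a sketch of the
concept; the structures and routines (scsi_issue, scsi_done,
build_read_cdb) are hypothetical, not any real adaptor's interface:

#define MAX_TAGS 8

struct scsi_cmd {
    unsigned char  tag;          /* queue tag for this command */
    unsigned char  cdb[10];      /* command descriptor block   */
    char          *buf;
};

static struct scsi_cmd queue[MAX_TAGS];

extern void build_read_cdb(unsigned char *cdb, long blkno);
extern void scsi_issue(struct scsi_cmd *c);   /* hand to adaptor  */
extern int  scsi_done(unsigned char tag);     /* completion check */

/* Instead of issue-wait-issue-wait, fire off up to MAX_TAGS reads
 * before reaping any of them.  A SCSI-2 target (or an array
 * controller behind it) may reorder the tagged commands to cut
 * seeks, and member drives can work on them in parallel. */
void read_blocks(long *blknos, char **bufs, int n)
{
    int i;

    for (i = 0; i < n && i < MAX_TAGS; i++) {
        queue[i].tag = (unsigned char) i;
        queue[i].buf = bufs[i];
        build_read_cdb(queue[i].cdb, blknos[i]);
        scsi_issue(&queue[i]);
    }
    for (i = 0; i < n && i < MAX_TAGS; i++)
        while (!scsi_done(queue[i].tag))
            ;                    /* busy-wait only for the demo */
}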

-- 
THOMPSON,JOHN C
Georgia Institute of Technology, Atlanta Georgia, 30332
uucp:	  ...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!prism!jt34
Internet: jt34@prism.gatech.edu

harv@rat.uucp (Patrick Harvey) (06/16/91)

In article <633@zds-ux.UUCP> you write:
>In article <30738@hydra.gatech.EDU> jt34@prism.gatech.EDU (THOMPSON,JOHN C) writes:
>>How does the performance of a SCSI host adaptor with built-in caching
>>hardware compare to that of a software caching program or OS-level caching?
>>Which is faster? Is on-board drive caching faster still? Is there any
>>definitive research on this subject? Is there a source on the net for a
>>SCSI peripheral benchmark program/source code? Thanks
>
>Logically, software caching must be faster (assuming reasonable
>implementations in both cases) in the case of a cache hit, because it
>can simply hand over the data rather than performing an I/O
>operation.  On the other hand, there is one type of hardware caching
>that does make sense, read-ahead track buffering, but even this can
>be handled to some extent in software, and it's better done in the
>drive itself if you're going to do it at all (and some drives do).
>
>Now, this doesn't mean there aren't other reasons for wanting a cache
>on the controller; for example, so you can implement special multi-drive
>features such as mirroring, arrays, etc.  If the controller designer
>does it right, they could introduce a simple caching controller and
>later provide these advanced features as a firmware upgrade.
>
>Gerry Gleason

Almost any time you have bottlenecks between processing nodes, installing
some kind of cache can potentially speed up the application.  Whether a
hardware cache is better than a software cache can depend on what
application set you are running.  If you don't want the host CPU spending
time on caching when it could be executing an application, then a hardware
cache makes sense.  Also, a hardware cache, if implemented with a suitable
processor, can perform some interesting heuristics to achieve an
impressively good hit rate.  The software cache definitely has the most
direct route from cache to host, but it steals resources such as host
memory and CPU cycles that would be better used for running the host's
applications.
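
One way to put rough numbers on that tradeoff is the usual
effective-access-time figure.  The values below are invented purely for
illustration:

    t_avg = h * t_hit + (1 - h) * t_disk

    e.g. with h = 0.90 hit rate, t_hit = 0.5 ms (adaptor RAM, still
    crossing the bus), t_disk = 20 ms (seek + rotation + transfer):

    t_avg = 0.90 * 0.5 + 0.10 * 20 = 2.45 ms

A software cache's t_hit is essentially a memory copy, so its average
comes out lower still; the price is exactly the host memory and CPU
cycles mentioned above.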

The best solution might be some kind of cache controller that resides on
the motherboard (perhaps an IDE cache controller) with its own memory and
supports a high-performance method of moving data into host memory without
going through the ISA or EISA buses.

... It happens that we were at Comdex with just such a device.  For more
info contact:

Peter Sorrells
VLSI Technology
(602) 752-6163

vlsisj!phx!sorrells@decwrl.dec.com

iverson@bang.uucp (Tim Iverson) (06/16/91)

In article <633@zds-ux.UUCP> gerry@zds-ux.UUCP (Gerry Gleason) writes:
>Logically, software caching must be faster (assuming reasonable

Not *must be* faster, but certainly *could be*.  Personally, I prefer the
software angle - using a hardware cache is too much like throwing money at
the problem.

>implementations in both cases) in the case of a cache hit, because it
>can simply hand over the data rather than performing an I/O

No.  Good software caching beats hardware caching because of inside
information - the OS has a much better idea of what it would like to keep
around than the host adapter ever could.  Unfortunately, few OSes take
advantage of this info.  For PCs (Unix/DOS/NetWare), smart software
caching is simply not done; at best, simple LRU is used.
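
For reference, "simple LRU" is nothing more than the sketch below
(hypothetical names again).  The interesting part is what it leaves out -
there is no hook anywhere for the OS to say "this is an inode block,
keep it":

struct cbuf {
    long         blkno;
    struct cbuf *prev, *next;   /* doubly-linked LRU list */
};

static struct cbuf *mru, *lru;  /* most/least recently used ends */

/* Every reference moves the buffer to the MRU end; every miss
 * evicts from the LRU end.  An inode block and a once-read data
 * block are treated identically - precisely the inside
 * information the OS has and plain LRU throws away. */
void touch(struct cbuf *b)
{
    if (b == mru)
        return;
    if (b->prev) b->prev->next = b->next;       /* unlink */
    if (b->next) b->next->prev = b->prev;
    if (b == lru) lru = b->prev;
    b->prev = NULL;                             /* relink at head */
    b->next = mru;
    if (mru) mru->prev = b;
    mru = b;
    if (lru == NULL) lru = b;                   /* first insertion */
}

struct cbuf *victim(void)       /* buffer to reuse on a miss */
{
    return lru;                 /* oldest always loses */
}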

Hardware caching adapters have two very strong points: they're generally
easier for the end user to add than software, and they offload processing
from the main CPU.  Unless the software cache does a good job (simple LRU
isn't enough), these two points will win every time, but they cost big $$.

>operation.  On the other hand, there is one type of hardware caching
>that does make sense, read-ahead track buffering, but even this can
>be handled to some extent in software, and it's better done in the
>drive itself if you're going to do it at all (and some drives do).

The best place to put any cache is on the other side of an expensively
crossed data path, so that a hit never has to cross it.  If you have lots
of active devices on your SCSI bus (an expensive path), then even though
the drive's built-in controller performs a read-ahead, bus contention may
make it moot.

>Now, this doesn't mean there aren't other reasons for wanting a cache
>on the controller; for example, so you can implement special multi-drive
>features such as mirroring, arrays, etc.  If the controller designer

A cache might make the designer's job easier in these cases, but it
certainly is not required.

>Gerry Gleason

- Tim Iverson
  iverson@xstor.com -/- uunet!xstor!iverson