[mod.computers.vax] CI

HELLER@cs.umass.edu (Stride 440 User) (01/22/87)

    We have 4 VAX 750s (on a cluster with 5 other 750s and a pair of 780s)
and three DMA/UNIBUS graphics/image-processing devices. These three devices
are on three of the VAXen. Our researchers run large LISP systems (with
low-level code in C or FORTRAN) to do computer vision research. Two or three
users running one of these systems are enough to bring a VAX 750 to its
knees. The problem is: these users would like to display results on one or
another of the graphics devices, but often they are on the "wrong" system
(they want to use device X, which is on VAX A, but they are on VAX
C...). While it is possible to save the results to disk, exit, and connect to
another VAX, this would tend to have everybody on one machine, which is just
too much for a VAX 750 to handle. One of the graphics devices (a Grinnell
GMR-270 w/DR11-W) is presently set up with a DECNet hack (using the Ethernet
the VAXen also have) to make it accessible from any machine. This is slow
and has problems (DECNet overhead precludes some operations). It also forces
us to use small I/O buffers (1400 bytes/700 words). The other two graphics
devices (a COMTAL VISION ONE/20 and a Gould DeAnza IP8500) want big I/O
buffers (16384 for the COMTAL and 65535 for the IP8500). DECNet/Ethernet
only allows about 1400 bytes for a user buffer.

    We would like to use the CI hardware to transfer data between the VAXen
and make the graphics devices either appear on, or be accessible from, any node
on the cluster.  The problem is, DEC doesn't document how to do this.  The
"best" available is reading the microfiche of the DECNet/CI hack, which
doesn't really help much (it has lots of weirdness specific to DECNet's own
use, and is not helpful for what we want to do).

    Has anyone out there in Info-VAX-land figured out how to move large
chunks of data between cluster nodes (and no, I don't want to use a disk
buffer - this is for high-speed DMA graphics devices!)? I suspect I will
need to write some sort of class driver for the PAA0: (low-level cluster I/O
driver) port driver (much like the DECNet "class" driver). Probably also
some sort of server-process and maybe a dummy device w/ACP, for user
programs to talk to.

		Robert Heller (VISIONS research group)
ARPANet:	Heller@UMass-CS.CSNET
BITNET:		Heller@UMass.BITNET
BIX:		Heller
GEnie:		RHeller
FidoNet:	101/27 (Dave's Fido, Gardner, MA)
CompuServe	71450,3432
Local PV VAXen:	COINS::HELLER
UCC Cyber/DG:	Heller@CS

MHJohnson@HI-MULTICS.ARPA (Mark Johnson) (01/24/87)

  Looking at the DECNET documentation for the CI connection indicates
that it is SLOWER to transfer data w/ DECNET on the CI than through the
ethernet.  A lot of stuff about extra overhead due to the smaller packet
size.  I would first look at increasing the buffer sizes on the ethernet
instead.  One way (if you can get agreement across the net...)  to do it
is to increase the buffer size and segment buffer size with NCP.  Our
systems had the lower limit (576) instead of the more reasonable
ethernet limit (1498).  This increases throughput quite a bit but must
be coordinated across the net.  See page 3-18 of the Networking Manual
for more information about adjusting the size.
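
  [Editor's aside, not from the original post: Mark's NCP suggestion would
look roughly like the following. The 1498 value is the one he quotes; the
exact parameter names and the rule that executor buffer sizes must agree
across all nodes should be checked against the Networking Manual before
changing anything.]

```
$ RUN SYS$SYSTEM:NCP
NCP> DEFINE EXECUTOR BUFFER SIZE 1498
NCP> DEFINE EXECUTOR SEGMENT BUFFER SIZE 1498
NCP> EXIT
```

  (DEFINE changes the permanent database; the running executor has to be
restarted, or SET with the network turned off, for the new sizes to take
effect. Every node on the Ethernet must be changed in coordination.)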

Of course, you could have code running on both CPUs that talks its own
protocol across the ethernet as well.  The device driver book has the
details about that too.  About CPU to CPU transfers w/ the CI, I have no
idea how that works but I would start looking at the MSCP served disk
code to see what VMS does and copy that.  Good Luck!
  --Mark