[net.arch] Cray-XMP v/s VP-200

sharma@uicsg.UUCP (05/11/84)



It seems that the most important factor in the VP-200 outperforming
the Cray-XMP is the Cray's lack of a good vectorizing compiler.
Hardware performance is also an issue, but that factor is expected
to be constant and has been observed to be between 1.0 and
about 1.5.

Preliminary results of an experiment comparing the two machines
have been published in:
	IEEE Trans. on Computers, April '84 (pp. 374-375)


The present Cray-XMP compiler has two problems, the first of
which is the significant one here.
	1. The Cray vectorizing compiler cannot automatically
	   handle loops with conditional statements in the loop
	   body. The VP-200 does a good job on such loops.
	2. The present Cray software cannot make use of the two CPUs
	   in the machine concurrently. Adequate software is expected
	   to be available soon.
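
The kind of loop at issue in point 1 can be sketched as follows (in
Python, purely as a stand-in for the Fortran these compilers actually
see; neither machine ran anything like this). The conditional in the
loop body is a control dependence; a compiler that handles it, as the
VP-200's reportedly does, in effect converts it into a data dependence
using a mask vector, so every element can be processed in lockstep.

```python
# A loop with a conditional in its body, and its mask-based
# reformulation.  Hypothetical illustration only.

def scalar_loop(a, b):
    """Element-wise: take a[i] where it is positive, else b[i]."""
    c = []
    for i in range(len(a)):
        if a[i] > 0.0:       # control dependence: a branch per element
            c.append(a[i])
        else:
            c.append(b[i])
    return c

def masked_vector_form(a, b):
    """The same computation with the branch turned into a mask,
    so all elements can be processed in lockstep."""
    mask = [x > 0.0 for x in a]              # vector compare
    return [x if m else y                    # vector merge under mask
            for m, x, y in zip(mask, a, b)]

a = [1.0, -2.0, 3.0]
b = [9.0, 9.0, 9.0]
assert scalar_loop(a, b) == masked_vector_form(a, b)
```

A compiler that cannot make this transformation falls back to scalar
code for the whole loop, which is where the factor-of-2 slowdown
mentioned below comes from.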

There are two important hardware features that also contribute to
the edge that the Fujitsu machine has.
	1. The VP-200 has a vector-register to vector-pipe data
	   path width of 128 bits - twice that of the Cray-XMP.
	   I wonder why? - they couldn't have left the data-path
	   and functional-pipeline bandwidths on the Cray unmatched.
	2. The VP-200 has a main memory of 256 Mbytes. The Cray
	   has only 32 Mbytes and uses an SSD (Solid-State Disk,
	   or Device) as a staging memory.

The clock rates are not too different: 7.5 nsec for the VP-200 and 9.5 nsec
for the Cray - a difference of about 20%.

The point to note is that Cray performance falls by a factor of
at least 2 when a loop contains a conditional statement - solely
because the vectorizer fails.


					- Madhumitra Sharma
					Computer Systems Group,
					Univ. of Illinois, Urbana.


UUCP address : ihnp4!uiucdcs!uicsg!sharma

rcd@opus.UUCP (Dick Dunn) (05/19/84)

>It seems that the most important factor in the VP-200 outperforming
>the Cray-XMP is the Cray's lack of a good vectorizing compiler.

This may well be accurate, but it still hurts a bit to read this.  If you
look at the situation from a very global view, you see:
	problems stated in vector notation, with the analytic solutions
	carried out using vector algebra up to the point of getting ready
	to do the calculations, then...

	the calculations are reformulated to get rid of all of the vector
	notation and describe them as explicit iterations, obscuring the
	real intent and effacing the distinction between iterations in
	which the order of calculation matters and those in which it does
	not, following which...

	the description of the calculations is submitted to a compiler
	which has to perform a task just epsilon this side of AI to
	reconstruct the original vector form of the calculations!

No, I'm not so naive as to think that we can just toss out FORTRAN and
replace it with a wonderful language which has notation for vector
operations built into it - but perhaps if such a language (or even a
modification to an existing language) were created, it could gain a
foothold.  The non-vector programming language is clearly a bad bottleneck
between the vector problem and the vector machine.
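
The round trip being complained about can be made concrete with a small
sketch (Python here, purely for illustration - the point applies to
FORTRAN on a vector machine). The same matrix-vector product y = Ax is
written once as the explicit iterations the programmer must supply, and
once in a form that keeps the vector operations visible:

```python
# The same calculation, y = A x, written two ways.

def matvec_iterative(A, x):
    """Explicit iterations: the vector structure is buried, and a
    vectorizing compiler must rediscover that the inner loop is a
    dot product whose order of summation does not matter."""
    y = []
    for i in range(len(A)):
        s = 0.0
        for j in range(len(x)):
            s += A[i][j] * x[j]
        y.append(s)
    return y

def matvec_vector_form(A, x):
    """The vector notation kept intact: y_i = row_i . x."""
    dot = lambda u, v: sum(u_k * v_k for u_k, v_k in zip(u, v))
    return [dot(row, x) for row in A]

A = [[1.0, 2.0], [3.0, 4.0]]
x = [1.0, 1.0]
assert matvec_iterative(A, x) == matvec_vector_form(A, x)
```

The second form is what the analysis started with; the first is what
the compiler is handed and must reverse-engineer.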

[On the other hand, these may just be the rantings of a programming-
language nut who's had to work in FORTRAN recently for the first time in
seven years...]
-- 
...A friend of the devil is a friend of mine.		Dick Dunn
{hao,ucbvax,allegra}!nbires!rcd				(303) 444-5710 x3086

chris@umcp-cs.UUCP (05/20/84)

Well golly gee whiz, why not just use APL?
-- 
In-Real-Life: Chris Torek, Univ of MD Comp Sci (301) 454-7690
UUCP:	{seismo,allegra,brl-bmd}!umcp-cs!chris
CSNet:	chris@umcp-cs		ARPA:	chris@maryland

gnu@sun.uucp (John Gilmore) (05/22/84)

In fact, Analogics (the major builder of array processors) has an ongoing
project called "The APL Machine".  They are building an APL system that
uses the array processor.  The last time I saw it demonstrated they had
only the monadic functions working, but it would run them at 0.5 Mflops
or so.  The AP is controlled by a 68000 board, and the user interface is
an IBM PC over a serial line.

Tim Budd at U. Arizona has also done some research on compiling APL for a vector
machine.  (Note that the APL Machine project is an interpreter, not a compiler.)

I could conceive of Cray or another (maybe more software-oriented)
supercomputer company buying rights to the Analogics product on
supercomputers, leaving Analogics the market for supermicros with APs.
Actually, having a supermicro product exactly compatible with the Cray
product would probably result in a lot of supermicro sales to people
preparing programs for the Cray.

Ever since the CDC STAR, the APL world has been wondering why no
supercomputer manufacturer would bother to write an APL.  Maybe we'll
find out within the next few years whether it's worthwhile.

ags@pucc-i (Seaman) (05/22/84)

>  The task of vectorizing a Fortran program is not as difficult as you may 
>  imagine.  Systems have been developed that can do the job quite well.
>  In some cases these programs are claimed to be better than even 
>  hand-coded versions.  In general, however, they are able to extract 
>  up to about 70-80 percent of the maximum parallelism attainable for 
>  the program.

There is much more to vectorization than you think.  I have seen code
generated by two of these programs:  the Vector and Array Syntax
Translator (VAST) and the Kuck Analyzer Program (KAP) for the CYBER 205.
Both programs are reasonably good at SYNTACTIC VECTORIZATION (recognizing
and translating vectorizable DO loops), although the resulting vector
code can be further improved by any competent programmer.

Neither program can do SEMANTIC VECTORIZATION (transforming or replacing
entire algorithms to make parallel computation possible).  I have seen
cases where semantic vectorization improved program performance by two
orders of magnitude, which is far beyond what can be achieved by vectorizing
preprocessors.
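
The distinction can be illustrated with a running sum (a sketch in
Python, not actual CYBER 205 code). The obvious loop carries a
dependence from one iteration to the next, so no amount of syntactic
translation will vectorize it; semantic vectorization replaces the
algorithm itself - here, hypothetically, with a log-step doubling scan
whose every step is an independent element-wise vector add:

```python
def prefix_sum_serial(a):
    """s[i] = a[0] + ... + a[i].  The loop-carried dependence
    (s[i] needs s[i-1]) defeats syntactic vectorization."""
    s = list(a)
    for i in range(1, len(s)):
        s[i] = s[i - 1] + s[i]
    return s

def prefix_sum_scan(a):
    """Same result by a different algorithm: log-step doubling.
    Each pass is an element-wise vector add of the array with a
    shifted copy of itself, so it maps onto vector hardware."""
    s = list(a)
    step = 1
    while step < len(s):
        s = [s[i] + (s[i - step] if i >= step else 0.0)
             for i in range(len(s))]
        step *= 2
    return s

a = [1.0, 2.0, 3.0, 4.0, 5.0]
assert prefix_sum_serial(a) == prefix_sum_scan(a)
```

No preprocessor that merely rewrites the loop syntax will find the
second form; a person who understands the computation can.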
-- 

Dave Seaman
..!pur-ee!pucc-i:ags

"Against people who give vent to their loquacity 
by extraneous bombastic circumlocution."