[comp.arch] Gould NP1

uusgth@sw1e.UUCP (04/03/87)

Regarding my earlier posting .....

      >The NP1 is an ECL, gate-array processor sitting on a very
      >fast bus (154 megabytes per second). With 2 cpu's on the
      >same bus, coupled with a math accelerator unit, it does 12

       --------------------CORRECTION, should have said 12 MIPS

      >per processor, so 2 cpu configuration yields 24 MIPS------
      >MIPS sustained at a base price of about $400,000. With a

      ------ that's about $17,000 per MIP -----------------------

      >multiple, coupled bus option, 8 cpu's can co-exist for close
      >to 100 MIPS, addressing 2 gigabytes of 54 nanosec memory.

	I do appreciate the comments I have gotten so far on this posting.
       I didn't mean to leave out anything about the other super-minis;
       to the contrary, I wanted to get some discussion started. I'd
       encourage anyone who can to put some info out about Pyramid, Alliant,
       HP's new model 930, etc.

   Tom Helton			Support Group
   ..{ihnp4}!	| | |\ | | \/   Southwestern Bell Telephone Co.	
  sw1e!uusgth	|_| | \| | /\   One Bell Ctr 24W5, StL, MO 63101	

eric@hippo.UUCP (04/03/87)

	I've had some personal experience benchmarking the Pyramid
9820, so I'll try and comment on it here.

	The 9820 is a dual-processor system, which measures in at right
around 13 to 14 MIPS for the total system. (I'm using 1 MIP = 1 VAX
11/780.) The system includes arithmetic accelerators for both CPUs.
The system has a 64 Kbyte data cache and a 16 Kbyte instruction cache
per processor. The total system can be expanded to 128 Mbytes of memory
(fully accessible by both processors). It uses a 40 Mbyte/sec message-based
bus to intelligent RISC-based I/O controllers. The disk controller
(called an IOP) is capable of supporting 11 Mbytes/sec sustained
throughput with a 21 Mbyte/sec burst rate.  The TPE
(tape/printer/ethernet) controller also supports one of the highest
throughputs I have seen for either tape or ethernet (who cares about
printer throughput!). The ITP (terminal processor) is capable of
sustaining all 16 lines at 9600 baud, both output and input.

	The 9820 supports both BSD 4.2 and System V.2 under its "dual
universe" environment. (Refer to the sales literature for a full
description of this.)

	Entry level price for a 9820 with 16 Mbytes of memory, 32
ports, UNIX, a 470 MB disk, and a 1600 bpi tape is $309K.

	I would like to second Ron's comment that a few fast processors
are better than lots of small processors. Two reasons for this: first,
if you have a single big process, having lots of smaller processors
doesn't help, unless the big process has been broken into smaller
processes.  Even with a mix of smaller jobs, queuing theory tells us
that if you have a system with n servers of speed 1, and another system
with a single server of speed n, given the same process mix, the system
with a single server will more efficiently process the queue. This is
even before we start worrying about contention for resources,
synchronization of processors, etc.
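Eric's queueing claim is easy to check with the standard closed-form results: for the same arrival rate and total capacity, an M/M/1 server of speed n gives a lower mean response time than n unit-speed servers in an M/M/n system. A minimal sketch, using the textbook Erlang-C formula for the M/M/n wait (the function names and example rates below are mine, not from the posting):

```python
from math import factorial

def mm1_response(lam, mu):
    # Mean time in system for an M/M/1 queue: W = 1/(mu - lam).
    return 1.0 / (mu - lam)

def mmn_response(lam, mu, n):
    # Mean time in system for an M/M/n queue, via the Erlang-C formula.
    rho = lam / (n * mu)          # per-server utilization
    a = lam / mu                  # offered load in Erlangs
    tail = a**n / (factorial(n) * (1.0 - rho))
    p_wait = tail / (sum(a**k / factorial(k) for k in range(n)) + tail)
    return p_wait / (n * mu - lam) + 1.0 / mu

# Same arrival rate, same total capacity: one speed-4 server
# versus four speed-1 servers.
lam, mu, n = 3.0, 1.0, 4
fast_single = mm1_response(lam, n * mu)
slow_many = mmn_response(lam, mu, n)
print(fast_single, slow_many)   # the single fast server wins
```

At 75% utilization the single fast server comes out roughly a third faster here, and that is before the contention and synchronization overheads eric mentions.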

-- 

					eric
					...!ptsfa!hippo!eric

bzs@bu-cs.UUCP (04/04/87)

I believe that a factor almost more important than comparing where
current machines are is to investigate where their vendors are going.
The "mips" game is changing so rapidly that I for one tend to demand
some outline of a vendor's game plan for the next (very) few years.

A rule of thumb I am using right now is that vendors must be able to
present to me the following clearly defined paths:

	1. Workstations - dozens of mips
	2. Super-minis - 100ish mips (with small numbers of processors)
	3. Larger scale parallel systems - to 1000 mips or more, w/o voodoo.
	4. Mainframes, super-computers - hard to say, Cray-II's are
	a good baseline, as in "when will you achieve Cray performance
	for 1/X the cost?". I/O and Floating Point performance is the
	issue here as much as raw CPU system performance.

Some good examples:

Sun and Iris (MIPS) are indicating clear paths to double-digit mips
in the very near future in the workstation arena.

Systems like the Elxsi and Encore look like they have a clear path
to three-digits for their "mainline" products in the near future.

Encore's DARPA-sponsored project promises to deliver four digits
(1000+ mips) within two years on a generic (i.e. UNIX-based) parallel
processor system.

Alliant seems to be moving towards "cheap" crayitude (a mere couple of
million $$; that would be cheap, and I am not being sarcastic.) Some of
the new IBM3090 series are also moving in the right direction
(although the software still leaves lots to be desired.)

I think the real problem will soon be finding ways to keep them busy.

I heard that Shearson-Lehman has two Crays? So much for some
three-letter companies' "business" strategies and loss of interest
in high-performance machines.

Oh well, it's refreshing every so often to retune your brain.

	-Barry Shein, Boston University

Disclaimer: I have no affiliation with any of the above companies
except as the owner of 4 Encores, a multitude of SUNs and 2 3090s.

grunwald@uiucdcsm.UUCP (04/04/87)

Queuing theory isn't reality, though. I'm running CSIM-based simulations on
an Encore Multimax. We have 10 CPUs. I can grab 4 or 5 of them in the
middle of the day and still leave plenty for everyone else.
	I can run a single job on each CPU, and I never block. I never
context switch except for paging. I avoid the overhead of having to swap
between processes & the commensurate clearing of my working set from
memory.
	When you have a set of 70 -> 150 simulations to run & you have to
put up with a time-sharing system, there's a lot to be said for multiple
CPUs.

mash@mips.UUCP (04/06/87)

In article <6123@bu-cs.BU.EDU> bzs@bu-cs.BU.EDU (Barry Shein) writes:
>
>I believe that a factor almost more important than comparing where
>current machines are is to investigate where their vendors are going.
>The "mips" game is changing so rapidly that I for one tend to demand
>some outline of a vendor's game plan for the next (very) few years.
>
>A rule of thumb I am using right now is that vendors must be able to
>present to me the following clearly defined paths:
>
....a reasonable chart of where things will be soon.....

>I think the real problem will soon be finding ways to keep them busy.

Thank goodness that's the one problem we won't have!
At least in some sectors of this business, there's an infinite
appetite for performance out there. [Some of the CAD guys are great
examples: tell them you've doubled performance, and they say
"Oh good! we can simulate bigger circuits now....but it still
takes too long: when can we have 4 or 8X?" ]

-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	{decvax,ucbvax,ihnp4}!decwrl!mips!mash, DDD:  	408-720-1700, x253
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

eugene@pioneer.UUCP (04/06/87)

In article <6123@bu-cs.BU.EDU> bzs@bu-cs.BU.EDU (Barry Shein) writes:
>
>I believe that a factor almost more important than comparing where
>current machines are is to investigate where their vendors are going.
>The "mips" game is changing so rapidly that I for one tend to demand
>some outline of a vendor's game plan for the next (very) few years.

Someone posted that "you should go out and survey" what users are doing
now.  I mailed back that this is wrong.  If you do this, you are
guaranteed to be TWO generations behind what users need.  You should
ask users what they PLAN to (would like to) do next.

I also ask people for vision and perspective.  I'm really disillusioned with
ELXSI; I think they wasted too much time with EMBOS.  Alliant has always
been too slow scalar-wise.  Convex is trying to keep up, but Wallach has
to get scalar speed up.  Several other companies look interesting.  SCS
got sidetracked on CTSS, then COS.  Multiflow looks interesting, but
will probably suffer from scalar speeds, too.  There's too little
software on the Connection Machine to make it more than a curiosity.
The same could be said of the Hypercubes.  (Hypercomputing, anyone?)

>A rule of thumb I am using right now is that vendors must be able to
>present to me the following clearly defined paths:
>
>	1. Workstations - dozens of mips
>	2. Super-minis - 100ish mips (with small numbers of processors)
>	3. Larger scale parallel systems - to 1000 mips or more, w/o voodoo.
>	4. Mainframes, super-computers - hard to say, Cray-II's are
>	a good baseline, as in "when will you achieve Cray performance
>	for 1/X the cost?". I/O and Floating Point performance is the
>	issue here as much as raw CPU system performance.

Note you have mostly prefixes and adjectives.  These are meaningless.  I
suggest an interview with Enrico Clementi entitled "Supercomputer is
just a marketing term."  Note you really don't have micros or minis,
they are gone.  I would assert super-minis and mainframes are basically
going away.  There will just be computers: something like workstations and
computing akin to supers (I don't like this latter prefix).  As
quantitative evidence, I point you to any of Gordon Bell's recent papers
on trends.  Just call them computers and put them on the same scale.
Worry about $$s later.

>Some good examples:  #terrible examples
>
>Sun and Iris (MIPS) are indicating clear paths to double-digit mips
>in the very near future in the workstation arena.
>
>Systems like the Elxsi and Encore look like they have a clear path
>to three-digits for their "mainline" products in the near future.

The problem with all these new engines is that their scalar units are too
slow.  Cray has nothing to worry about.

>the new IBM3090 series are also moving in the right direction
>(although the software still leaves lots to be desired.)

I don't know.  The last time I looked, the 3090 Fortran compiler was better
than anything I've seen.  This includes Convex's, Alliant's, and Fujitsu's.
You must mean the OS ;-).

>I think the real problem will soon be finding ways to keep them busy.

True.  On several levels: the load leveling aspect, the application
level, and so forth.

>Oh well, it's refreshing every so often to retune your brain.
>
>	-Barry Shein, Boston University
Sounds like you need M A X  H E A D R O O M to me.

Actually, we should discuss the concept of balance and matching in
architecture.  Brian Reid posted a really nice piece a couple of years
ago on balance in the then-new Mac.

From the Rock of Ages Home for Retired Hackers:

--eugene miya
  NASA Ames Research Center
  eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {hplabs,hao,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene

bzs@bu-cs.UUCP (04/07/87)

[Note: John and I have locked horns on this once before, maybe it would
be better conducted on INFO-FUTURES@BU-CS.BU.EDU?]

>Thank goodness that's the one problem we won't have!
>At least in some sectors of this business, there's an infinite
>appetite for performance out there. [Some of the CAD guys are great
>examples: tell them you've doubled performance, and they say
>"Oh good! we can simulate bigger circuits now....but it still
>takes too long: when can we have 4 or 8X?" ]
>
>-- 
>-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>

Well, I'm not speaking about 4X or 8X but, perhaps, 50..1000X; I think
somewhere in there lies a qualitative difference.

My claim that we may find ourselves with an embarrassment of riches
when it comes to raw processing power is aimed as much at massive
parallelism and all the software (that we use now) which can't exploit
it as at the fact that the software we now use will probably soon start
running so fast that a 2X speedup may cease to be much of an issue.

Before people start presenting existence proofs ("there exists a
program at site A which could benefit from a 10,000X speedup of
hardware"), consider one important premise of mine:

In the beginning (let's say the start of the small computer
revolution, around 1970) no one was happy with the cycles they were
getting.

Probably around a year or two ago, a sizeable percentage of folks,
if presented with the following questionnaire:

	You have $XX,XXX to spend on your computer, would
	you spend it on:

	A) Doubling the CPU performance?
	B) A piece of software to ease your life?

would start answering 'B' (in the old days they would almost always
answer 'A' and figure out some way to get by without 'B').

So, although there will probably always exist a community that is
cycle hungry, I claim that community is shrinking rapidly in numbers.
This is compounded by the fact that the entire community is growing,
mostly in the low-cycle-hunger category.

Don't get me wrong, I don't think we're "there" yet. But when I see
high-double-digit MIPS personal computers just over the horizon, I
can't help but wonder how close we are coming to, say, 90% of the
computing community saying "oh, it's plenty fast, I just wish there
was some software to keep it busy".

	-Barry Shein, Boston University

tihor@acf4.UUCP (04/07/87)

But remember, many of the nice things that fall into category B
are just spiffy at eating cycles while being friendly.

Which I think is a perfectly fine investment but doesn't imply that we
need to think about closing the computer hardware section of the patent 
office.

Passim: Note that the classic existence proof of the need for cycles is the
need to simulate the next generation of (bigger/faster/friendlier) systems
on the current generation.