[comp.arch] Prisma is gone

mslater@cup.portal.com (Michael Z Slater) (11/22/89)

I've heard from a few sources now that Prisma, which was attempting to
build a gallium arsenide SPARC machine, has had their plug pulled by
the VCs.  Anyone know what happened?  Does this have implications for
SPARC's "scalability" argument, or were the problems independent of
SPARC?

Michael Slater, Microprocessor Report   mslater@cup.portal.com

dmk@dmk3b1.UUCP (David Keaton) (11/22/89)

In article <24317@cup.portal.com> mslater@cup.portal.com (Michael Z Slater) writes:
>I've heard from a few sources now that Prisma, which was attempting to
>build a gallium arsenide SPARC machine, has had their plug pulled by
>the VCs.  Anyone know what happened?  Does this have implications for
>SPARC's "scalability" argument, or were the problems independent of
>SPARC?
>
>Michael Slater, Microprocessor Report   mslater@cup.portal.com

     The problems were related to technology and funding.  They were
independent of SPARC.
--
					David Keaton
					dmk%dmk3b1@uunet.uu.net
					uunet!dmk3b1!dmk

michell@cs.utah.edu (Nick Michell) (11/29/89)

I'm interested in GaAs, although I don't know much about Prisma.
If the failure was technology related, was that due to the use
of parts from Gigabit - high speed, but also high power and only
MSI density (not to mention the fact that Gigabit is in financial
trouble)?  The leading start-up in GaAs seems to be Vitesse, which
trades off some speed for lower power and higher density (currently,
around 15,000 gates).

On a related subject, DARPA has funded a number of GaAs RISC chips,
which have been reported in various conferences and academic publications.
This research has, at least so far, resulted in no commercial spin-offs.

Is GaAs just not up to it?  Is the technology too immature still?
It certainly appears to me that the current Vitesse technology would
do fine for a RISC chip set.  It appears from recent EE Times articles
that at least Convex and Solbourne think selective use of GaAs is worthwhile.
Any comments, netland?

/Nick Michell
 michell@cs.utah.edu

rro@bizet.CS.ColoState.Edu (Rod Oldehoeft) (11/29/89)

In article <1989Nov28.104128.8045@hellgate.utah.edu> michell@cs.utah.edu (Nick Michell) writes:
>I'm interested in GaAs, although I don't know much about Prisma.
>If the failure was technology related, was that due to the use
>of parts from Gigabit - high speed, but also high power and only
>MSI density (not to mention the fact that Gigabit is in financial
>trouble)?


Pete Wilson from Prisma spoke here yesterday in the dept. colloquium
series.  He used many of the overheads from the Hot Chips Symposium,
but was also able to relate recent items.  Apparently the purchase of
Gigabit by Cray Computer made people nervous about continuing
availability of parts.  He also detailed many hard problems they had
to solve, which slowed progress.  At the end they had a design for a
non-GaAs multiprocessor, which didn't impress the VC people.  At this
time Prisma has a nice, cheap water-cooling method they'd like to
license, as well as an excellent SPARC C compiler.

I recommend Pete to anyone interested in a fine discussion of the
Prisma architecture.


Rod Oldehoeft                    Email: rro@CS.ColoState.EDU
Computer Science Department      Voice: 303/491-5792
Colorado State University        Fax:   303/491-2293
Fort Collins, CO  80523

scarter@gryphon.COM (Scott Carter) (11/30/89)

In article <1989Nov28.104128.8045@hellgate.utah.edu> michell@cs.utah.edu (Nick Michell) writes:
>I'm interested in GaAs, although I don't know much about Prisma.
>If the failure was technology related, was that due to the use
>of parts from Gigabit - high speed, but also high power and only
>MSI density (not to mention the fact that Gigabit is in financial
>trouble)?  The leading start-up in GaAs seems to be Vitesse, which
>trades off some speed for lower power and higher density (currently,
>around 15,000 gates).
>
>On a related subject, DARPA has funded a number of GaAs RISC chips,
>which have been reported in various conferences and academic publications.
>This research has, at least so far, resulted in no commercial spin-offs.

DARPA has so far funded only development of what might be called "bare" CPUs
and SRAMs.  Several more part types are going to be needed (the McDonnell
Douglas part needed a branch target cache, FPU, MMU/cache controller, operand
memory pipeline controller, and a glue chip called the system controller.  I
imagine the TI chip would need about the same).  While for development/demo
purposes one can use e.g. 10K ECL for the glue, cache RAMs, etc. (even then
it's not easy), if your "production" system is going to be mostly ECL it
might as well be all ECL (see your BIT or Motorola rep).  The big advantages
of GaAs for embedded military systems (speed/power product, military temp
range, radiation hardness) mostly disappear if the system isn't all GaAs and
CMOS (would anybody like to develop a Mil-spec BiCMOS RAM, please?).

>
>Is GaAs just not up to it?  Is the technology too immature still?
>It certainly appears to me that the current Vitesse technology would
>do fine for a RISC chip set.  It appears from recent EE Times articles
>that at least Convex and Solbourne think selective use of GaAs is worthwhile.
>Any comments, netland?
>
I think the _published_ Vitesse technology isn't quite there.  The current
McDonnell Douglas CPU is about 22K transistors, and that works out to be
about the bottom edge of what you need to do a workable integer execution
unit.  As it is, you give up a lot (maybe a GaAs Acorn?).  For an Enterprise-
(R6000) class machine it's not clear to me that GaAs really gives you that much
over ECL.  For a Cray-grade machine where speed-power product is critical, then
yes.

>/Nick Michell
> michell@cs.utah.edu

gillies@m.cs.uiuc.edu (12/05/89)

> GaAs is still poking around in niche markets years after the pioneers,
> without ever having entered a regime of exponential capacity growth and
> inverse-exponential price drop.
>
>      Eric S. Raymond = eric@snark.uu.net    (mad mastermind of TMN-Netnews)


Be careful what you say, since CAD software is still in its infancy
when it comes to automatically adding fault tolerance to VLSI designs.
But work in this area is moving forward rapidly.