[comp.arch] Yield of core-MIPS chips MIPSCo yield? && Other Issues

news@omepd (News Account) (03/12/88)

--------
From: mcg@iwarpo3.intel.com (Steve McGeady)
Path: iwarpo3!mcg

In article <1806@obiwan.mips.COM> mark@mips.COM (Mark G. Johnson) writes:
>Second, a question.  The other DARPA core-MIPS paper at the ISSCC
>(a 200-MIPS GaAs bipolar device from Texas Instruments) devoted a
>segment of the oral presentation to chip yield.  They were quite
>pleased to reveal their exact percentage yields to date (on this
>DARPA-funded project) and to give their yield projections for the
>next 12 months or so.
>
>Could somebody from GE tell us what the yield is on the GE DARPA
>core-MIPS chip?  TI's data included (a) # of core-MIPS chips built
>to date; (b) # of them that are fully-functional, (c) trendline
>predictions of (b)/(a) for the near future.


This opens a very interesting can of worms.  I would like to ask MIPSCo
what *their* chip yield is.

And lest I be accused of pursuing a hidden agenda, here it is:

	This newsgroup (comp.arch) tends to focus on a very narrow band
	of issues concerning the success (intellectually and commercially)
	of architectures and their implementations.  The predominant
	issues discussed here are performance, "capability", and
	architectural elegance.  All too seldom do we discuss issues
	that, ultimately, are more important to the success of an
	architecture:

		-> bug-free, working silicon
		-> yield (affects price, availability)
		-> system integration issues
			-> HW interface complexity
			-> availability of compatible interface chips
		-> support tools
			-> availability of compilers, tools
			-> development environment
			-> software support
			-> debug capabilities (ICE, SW debug, others)
		-> hardware support
			-> demonstration designs

We've heard much (too much, some would say) from MIPSCo regarding the
raw performance of their processor.  I, for one, would be interested in
hearing some other questions answered, for instance:

	1) Who manufactures your silicon, on what process, what yield
	   do you get, and how does/will this influence chip prices?
	   [I suppose this begs the question of whether MIPSCo is
	   primarily a chip, board, or system vendor - I have heard
	   all three as the answer at various points in the last few
	   years].

	2) Without a captive silicon manufacturing establishment, how
	   can your silicon-foundry-provided 2-micron CMOS technology
	   effectively compete with sub-micron technology from
	   manufacturers with captive silicon development technology?

	3) What is the availability of development tools (compilers,
	   assemblers, debuggers) on VAXen, SUN's, and PC's?

	4) What is the complexity of integrating a MIPSCo chip set
	   into a system?  What amount and kind of support HW is needed?



As a more general question to this group, what does it take to make an
architecture successful?

It can't be elegance of design, for (e.g.) the 80386 and the MIPSco processor
are each somewhat inelegant in their own ways (for those who don't wish to
fill in the blanks, segmentation and register model in the former case,
exposure of pipeline and other implementation details in the latter);

It can't be raw speed, for slower processors (e.g. the 68020 and the 80386)
are still consistently out-selling faster ones;

It can't be ease of UNIX ports, because the original 68000 UNIX ports (e.g.
Sun-1 and Apollo) were exercises in frustration (usually due to memory
management limitations);

And it certainly can't be volume of posting in comp.arch, or MIPSco would
have gone public long since.

My stab at an answer to this question:

The answer is that there are many things that contribute, including
marketing (both hype and non-hype), but (and forgive me if this sounds
circular) what makes an architecture successful is VOLUME.  An architecture
is successful if you sell a lot of the chips implementing it.  This is
because volume lowers the price of the part, encourages third parties to
develop software for it, and in general increases familiarity with the
architecture, lowering the learning curve for new designs and the "fear
factor" many designers have.

Put another way, I would much rather have designed a processor that found its
way into every anti-skid braking system, microwave oven and laser-printer
than to have designed a processor that was used only in a particular
$100,000 software engineering workstation.


I've rambled on long enough.  Now, to Mr. Johnson's question about MIPSCo
yield - I'm very interested in the answer, and I thank him for giving me an
excuse to raise the issue.

S. McGeady
Intel Corp.


p.s. - If you counter with 'what is Intel's yield on (e.g.) the 386', I will
	have to answer that this is considered one of Intel's most proprietary
	pieces of information, and I couldn't reveal it even if I knew the
	answer, which I don't.  I can say, however, that we shipped 1,000,000
	386's in 1987, and expect to ship more than that in 1988, ugly
	architecture and all.

jimv@radix (Jim Valerio) (03/12/88)


In article <2904@omepd> mcg@iwarpo3.UUCP (Steve McGeady) asks:
>what does it take to make an architecture successful?

The observation that's always been quoted to me, and seems to have some
basis in fact, is:

	Good computers don't sell.

Of course, there are so many bad computers out there that this observation
could be an artifact of the random probability of any computer succeeding. :-)

Less cynically, it seems pretty clear to me that name brands consistently win
out over fads.  There's something about sizable investments that makes many
people turn conservative.
--
Jim Valerio	jimv%radix@omepd.intel.com, {verdix,omepd}!radix!jimv

marc@oahu.cs.ucla.edu (Marc Tremblay) (03/15/88)

In article <2904@omepd> mcg@iwarpo3.UUCP (Steve McGeady) writes:
>
>We've heard much (too much, some would say) from MIPSCo regarding the
>raw performance of their processor.  I, for one, would be interested in
>hearing some other questions answered, for instance:
>
>	1) Who manufactures your silicon, on what process, what yield
>	   do you get, and how does/will this influence chip prices?

As far as I know MIPSCo has agreements with three companies to 
manufacture and sell their processor. Each of the companies has
rights to both the 12.5 MHz and 16.7 MHz version.
The three companies are:
	i)   LSI Logic Corp 
	ii)  Integrated Device Technology Inc.
	iii) Performance Semiconductor Corp.

>	2) Without a captive silicon manufacturing establishment, how
>	   can your silicon-foundry-provided 2-micron CMOS technology
>	   effectively compete with sub-micron technology from
>	   manufacturers with captive silicon development technology?

Performance Semiconductor Corp. has a high-speed CMOS process
using submicron technology.

>	4) What is the complexity of integrating a MIPSCo chip set
>	   into a system?  What amount and kind of support HW is needed?

I would also like to know more about that one. (MIPS guys?)

					Marc Tremblay
					marc@CS.UCLA.EDU
					...!(ihnp4,ucbvax)!ucla-cs!marc
					Computer Science Department, UCLA

hansen@mips.COM (Craig Hansen) (03/16/88)

In article <10355@shemp.CS.UCLA.EDU>, marc@oahu.cs.ucla.edu (Marc Tremblay) writes:
> In article <2904@omepd> mcg@iwarpo3.UUCP (Steve McGeady) writes:
> >We've heard much (too much, some would say) from MIPSCo regarding the
> >raw performance of their processor.  I, for one, would be interested in
> >hearing some other questions answered, for instance:
> >	1) Who manufactures your silicon, on what process, what yield
> >	   do you get, and how does/will this influence chip prices?
> As far as I know MIPSCo has agreements with three companies to 
> manufacture and sell their processor. Each of the companies has
> rights to both the 12.5 MHz and 16.7 MHz version.
> The three companies are:
> 	i)   LSI Logic Corp 
> 	ii)  Integrated Device Technology Inc.
> 	iii) Performance Semiconductor Corp.

These companies also have rights to future, higher-performance designs.

> >	2) Without a captive silicon manufacturing establishment, how
> >	   can your silicon-foundry-provided 2-micron CMOS technology
> >	   effectively compete with sub-micron technology from
> >	   manufacturers with captive silicon development technology?
> Performance Semiconductor Corp. has a high-speed CMOS process
> using submicron technology.

All three companies have competitive CMOS technology.

> >	4) What is the complexity of integrating a MIPSCo chip set
> >	   into a system?  What amount and kind of support HW is needed?
> I would also like to know more about that one. (MIPS guys?)

For all the belly-aching about multiplexed busses and multiphase
clocks, integrating a MIPS chip set is easier than integrating most
microprocessors, particularly when you consider that a
_high-performance_ micro otherwise needs an external cache
hand-crafted by the system designer, whereas with the MIPS chip the
cache is formed from some latches and buffers and off-the-shelf,
standard static RAMs, available in quantity from multiple vendors.
All the customized hardware needed to use the SRAMs as caches is on
the chip.  [The SPARC chip set also requires a hand-crafted MMU
design; we have the MMU on chip.]  The support HW required is:

	1) a 2x frequency oscillator
	2) a tapped delay line
	3) some '373 latches
	4) some 1804 buffers
	5) some fast SRAM

The number of latches and buffers and the amount of SRAM depend on the
size of the caches you choose; anything from 4KB to 64KB for each of the
instruction and data caches is permitted in the current parts.
This can all be hooked up in cookbook fashion; we provide design data
that shows how it all goes together.  Because of the processor's high
speed and its write-through caches, writes to memory occur rather
frequently, so we also provide a set of gate arrays that form a 4-deep
FIFO between the fast processor and a slower main memory system.
The use of these gate arrays is optional, but they're easy to use and
improve performance by about 10% over a one-deep write buffer.
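
To see why the buffer depth matters, here is a toy C model of the write
path (an illustrative sketch only -- the store frequency and memory
timing below are assumed numbers, not measurements of our parts):

/* Toy model: a write-through processor feeding a write buffer that
 * drains to slow main memory.  Assumptions for illustration: a store
 * about every 8th cycle, memory retiring one write every 4 cycles.
 */
#include <stdio.h>
#include <stdlib.h>

#define CYCLES   1000000L
#define MEM_TIME 4              /* memory cycles per retired write */

static long stall_cycles(int depth)
{
    long cyc, stalled = 0;
    int queued = 0, drain = 0, pending = 0;

    srand(1);                   /* reset generator so runs are comparable */
    for (cyc = 0; cyc < CYCLES; cyc++) {
        /* memory side: retire one buffered write every MEM_TIME cycles */
        if (queued > 0 && ++drain == MEM_TIME) {
            drain = 0;
            queued--;
        }
        /* CPU side: issue an instruction unless stalled on a store */
        if (!pending && rand() % 8 == 0)
            pending = 1;        /* this instruction is a store */
        if (pending) {
            if (queued < depth) {
                queued++;       /* store enters the buffer; CPU goes on */
                pending = 0;
            } else
                stalled++;      /* buffer full: CPU waits */
        }
    }
    return stalled;
}

int main(void)
{
    printf("1-deep buffer: %ld stall cycles\n", stall_cycles(1));
    printf("4-deep FIFO:   %ld stall cycles\n", stall_cycles(4));
    return 0;
}

With these assumed numbers, the 1-deep buffer stalls whenever a second
store arrives before memory has retired the first, while the 4-deep
FIFO absorbs nearly all such bursts.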

-- 
Craig Hansen
Manager, Architecture Development
MIPS Computer Systems, Inc.
...{ames,decwrl,prls}!mips!hansen or hansen@mips.com   408-991-0234

sedwards@esunix.UUCP (Scott Edwards) (03/16/88)

From article <2904@omepd>, by news@omepd (News Account):
> --------
> From: mcg@iwarpo3.intel.com (Steve McGeady)
> Path: iwarpo3!mcg
> 
> 	This newsgroup (comp.arch) tends to focus on a very narrow band
> 	of issues concerning the success (intellectually and commercially)
> 	of architectures and their implementations.  The predominant
> 	issues discussed here are performance, "capability", and
> 	architectural elegance.  All too seldom do we discuss issues
> 	that, ultimately are more important to the success of an
> 	architecture:
>
[ a lot of valid points deleted ]
> 
> S. McGeady
> Intel Corp.
> 

IMHO this newsgroup (comp.arch) tends to focus on a very narrow band of
RISCy business.  I agree with Steve: there is more to computer architecture
than raw cpu mips (vips, vups, ticks, clicks, clocks, whatever :-).  I think
RISCs are great for specific applications where the instruction set is
optimized for that specific application and someone is willing to pay the
extra $$$$$ for more fast memory and optimizing compilers.  What about
architectures designed for software reliability, like SWARD and the iAPX 432?
(Yes, I know it was slow, but perhaps that was just the implementation.)
Is anyone still working in this direction?  Or that dataflow stuff (didn't
NEC have some sort of graphics processor?) - is it dead and buried?  Or is
*EVERYBODY* (except me) convinced that RISC is the wave of the future?
Less is more, yesterday's technology today! ;-)


-- Scott

*****************************************************************************
*  My opinions are not shared by my  *  On the first part of the journey,   *
*  employer or anyone else with half *  I was looking at all the life...    *
*  a brain (or even a whole one).    *		                 - America  *
*****************************************************************************

mch@computing-maths.cardiff.ac.uk (Major Kano) (03/17/88)

In article <2904@omepd> mcg@iwarpo3.UUCP (Steve McGeady) writes:
>
>This opens a very interesting can of worms.  I would like to ask MIPSCo
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(* Absolutely *)
(* stuff deleted *)
>And lest I be accused of pursuing a hidden agenda, here it is:
>
>	This newsgroup (comp.arch) tends to focus on a very narrow band
>	of issues concerning the success (intellectually and commercially)
>	of architectures and their implementations.  The predominant
>	issues discussed here are performance, "capability", and
>	architectural elegance.  All too seldom do we discuss issues
>	that, ultimately are more important to the success of an
>	architecture:
>
>		-> bug-free, working silicon
>		-> yield (affects price, availability)
(*
   And how. One of the first naiveties I had to unlearn at College was
   the idea that a fantastic architecture is automatically practical; in
   the real world it may not be, given the available process technologies.
*)
>		-> system integration issues
>			-> HW interface complexity
>			-> availability of compatible interface chips
>		-> support tools
>			-> availability of compilers, tools
>			-> development environment
>			-> software support
>			-> debug capabilities (ICE, SW debug, others)
>		-> hardware support
>			-> demonstration designs
(* Interesting; good points *)
(* Stuff deleted *)
>It can't be elegance of design, for (e.g.) the 80386 and the MIPSco processor
>are each somewhat inelegant in their own ways (for those who don't wish to
>fill in the blanks, segmentation and register model in the former case,
                     ^^^^^^^^^^^^              ^^^^^

                     ** WHAT THE $@#@++%%& HELL ?!? **


   Wow ! And I thought Ray Coles (who writes for Practical Computing, a UK
   magazine) had it in for Intel !

   Agreeing, as I do, that the register model doesn't provide enough
   registers, and that even the '386 isn't regular enough, I thought that
   the feature of the '386 that made it so TECHNICALLY advanced
   (IBM compatibility notwithstanding :-) ** WAS ** its memory management
   and protection model. The 32-bit within-segment addresses are what
   people have been waiting for for ages. I would quibble that only
   16-bit selectors are available, but I defy anyone to come up, in the
   near or intermediate future, with an Intel-style memory model that is
   better than Intel's, without opening up a whole can of voracious
   memory-eating killer-worms at the descriptor table level. If you don't
   know what I mean by that, pretend for a moment that YOU were the
   person who had to come up with the byte/page granularity kludge in
   order to make 4GB segments fit in a DT entry; there's a sketch of the
   arithmetic below.
   (Maybe that person (or team) could comment themselves, if they use the net.)

   What does everyone think ? As a computer architecture junkie, I would
   be very interested, as the Intel segments seem (to me) to come in for
   a lot of flak every so often; I just didn't expect it to come from
   an Intel employee.
*)
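(*
   Here, for the curious, is roughly how that granularity arithmetic
   works, as I understand the 386 descriptor format. A sketch only:
   the real descriptor scatters the limit bits about, so treat the
   field handling as illustrative.
*)

/* Effective 80386 segment limit from the 20-bit descriptor limit
 * field and the granularity (G) bit.  With G=0 the limit counts
 * bytes (segments up to 1MB); with G=1 it counts 4KB pages, so the
 * same 20-bit field can describe a 4GB segment.
 */
#include <stdio.h>

unsigned long effective_limit(unsigned long limit, int g)
{
    limit &= 0xFFFFFUL;                 /* only 20 bits are stored   */
    if (g)
        return (limit << 12) | 0xFFFUL; /* page granular: 4KB units  */
    return limit;                       /* byte granular             */
}

int main(void)
{
    printf("G=0, limit=0xFFFFF -> top offset 0x%lX (1MB-1)\n",
           effective_limit(0xFFFFFUL, 0));
    printf("G=1, limit=0xFFFFF -> top offset 0x%lX (4GB-1)\n",
           effective_limit(0xFFFFFUL, 1));
    return 0;
}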
(* Stuff deleted *)
>than to have designed a processor that was used only in a particular
>$100,000 software engineering workstation.
(*
   WOW ! That is the sort of processor I personally WOULD like to
   work on once I've had enough experience; although I would also like
   to see such a CPU/chip set sold everywhere, of course.
*)
>
>I've rambled on long enough.  Now, to Mr. Johnson's answer about MIPSCo
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(* So have I. Hope someone comments on all this. Bye ! *)
(* Stuff deleted *)

-mch

PS: I would be grateful to anyone who POSTS replies or flames or whatever
about this if they would also e-mail me, as I go home for the Easter Vacation
in two days' time, and our system manager does expire articles every so often!
Thanks in advance, -mch
-- 
Martin C. Howe, University College Cardiff | "You actually program in 'C'
mch@vax1.computing-maths.cardiff.ac.uk.    |  WITHOUT regular eye-tests ?!"
-------------------------------------------+-----+------------------------------
My cats know more about UCC's opinions than I do.| MOSH! In the name of ANTHRAX!

wcs@ho95e.ATT.COM (Bill.Stewart.<ho95c>) (03/29/88)

In article <751@esunix.UUCP> sedwards@esunix.UUCP (Scott Edwards) writes:
:RISC's are great for specific applications where the instruction set is
:optimized for that specific application and someone is willing to pay the
:extra $$$$$ for more, fast memory and optimizing compilers.  What about
:architectures designed for software reliability like SWARD and iapx432

You've got it backwards - the theory of CISC is that you really need
binary-coded-decimal-graphics-interpreters because your COBOL programs
draw a lot of pie charts, and need them in hardware to go fast.
RISC says make the hardware do the few simple things *everybody* needs,
and do them elegantly and real fast.  That way it's easy to write
good compilers, and fast enough to do your custom work in software.

The main arguments between CISC and RISC people are whether simple and
elegant is enough faster to justify the lack of complex features, and
whether it really *is* easier to write compilers good enough that you
don't need hand-coded assembler.  RISC architectures are probably more
reliable, since there's less complexity.

You *always* need optimizing compilers - compare Microsoft C 5.0 with the
earliest 8086 compilers - it's taken a long time to produce decent CISC code.
As far as memory speed is concerned, note that *data* memory is
accessed at about the same rate, because you're doing the same work,
and most calculations are to registers rather than RAM.
Code memory speed has to be higher on RISCs, since you generate more
code, but that's what instruction caches are for.
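
A back-of-envelope version of that argument, in C (every number here is
an assumption for illustration, not a measurement):

/* Main-memory traffic for instruction fetch, per instruction
 * executed, under assumed sizes and hit rate.
 */
#include <stdio.h>

int main(void)
{
    double cisc_bytes = 3.0;    /* assumed average CISC instruction size */
    double risc_bytes = 4.0;    /* fixed 32-bit RISC instructions        */
    double hit_rate   = 0.97;   /* assumed I-cache hit rate              */

    printf("CISC, no cache:   %.2f bytes/instr from memory\n", cisc_bytes);
    printf("RISC, no cache:   %.2f bytes/instr from memory\n", risc_bytes);
    printf("RISC, with cache: %.2f bytes/instr from memory\n",
           risc_bytes * (1.0 - hit_rate));
    return 0;
}

The point: even though the RISC fetches more code bytes, a decent I-cache
cuts its main-memory fetch traffic to a small fraction of what an uncached
CISC needs.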

You also mentioned graphics processors and the like.  Certainly when
you have very specialized applications that you use a lot, it's worth
building custom hardware.  AT&T's DSP-32 digital signal processor does
8 MFLOPS of add and multiply, plus I/O, because speech processing needs
fast crunching; it can get away with minimal interrupt-handling and
small address spaces because it's not trying to be a general-purpose
chip.  Similarly, several of the commercial graphics chips do BITBLTs
real fast, but typically use an 80186 or 68000 to deal with communications.
But for general applications, like hacking integers and characters,
RISC is usually better.
-- 
#				Thanks;
# Bill Stewart, AT&T Bell Labs 2G218, Holmdel NJ 1-201-949-0705 ihnp4!ho95c!wcs
# So we got out our parsers and debuggers and lexical analyzers and various 
# implements of destruction and went off to clean up the tty driver...

davidsen@steinmetz.steinmetz.ge.com (William E. Davidsen Jr) (03/30/88)

In article <2089@ho95e.ATT.COM> wcs@ho95e.UUCP (46323-Bill.Stewart.<ho95c>,2G218,x0705,) writes:

| You've got it backwards - the theory of CISC is that you really need
| binary-coded-decimal-graphics-interpreters because your COBOL programs
| draw a lot of pie charts, and need them in hardware to go fast.
| RISC says make the hardware do the few simple things *everybody* needs,
| and do them elegantly and real fast.  That way it's easy to write
| good compilers, and fast enough to do your custom work in software.

Actually a CISC processor is in reality two parts: a RISC processor
which executes a simple native language, and a silicon compiler which
takes a pseudo assembler (what we see as the native instruction set) and
translates to the native RISC instructions (microcode).

Seriously, when I see people claim that they are taking a pseudocode
generated by <your favorite compiler> and doing a translate and optimize
so they can run it on RISC, I wonder why all the fuss? In a few more
years, perhaps ten, we will know how to build a translator in silicon
which does a better and faster job of translation, NOP fills, etc, than
we are now doing in software. I think we will still do a better job with
software at that time, but it may not be cost effective.

IBM sort of used this approach with the PC/370, modifying a 68000 to
translate 370 opcodes into the same microcode that the 68000
conventional opcodes use.
-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

jesup@pawl14.pawl.rpi.edu (Randell E. Jesup) (03/31/88)

In article <10170@steinmetz.steinmetz.ge.com> davidsen@crdos1.UUCP (bill davidsen) writes:
>Actually a CISC processor is in reality two parts: a RISC processor
>which executes a simple native language, and a silicon compiler which
>takes a pseudo assembler (what we see as the native instruction set) and
>translates to the native RISC instructions (microcode).

	Well, I wouldn't put it that way.  More like a silicon interpreter
than a compiler: it takes the higher-level language one instruction at a
time, interprets what it means, does it, and repeats.  A compiler would read
it in, then write out the microcode version of it for later execution.
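
	To put the distinction in C (the "CISC" instruction and the
micro-ops below are invented for illustration):

#include <stdio.h>

enum uop { U_LOAD, U_ADD, U_STORE };    /* the "RISC" core's operations */

/* one made-up CISC instruction: ADD mem,reg (read-modify-write) */
static const enum uop add_mem[] = { U_LOAD, U_ADD, U_STORE };

static void do_uop(enum uop u)
{
    static const char *name[] = { "load", "add", "store" };
    printf("  uop: %s\n", name[u]);
}

/* interpreter: decode the CISC op and run its micro-ops NOW,
 * re-decoding every time the instruction is encountered       */
static void interpret(void)
{
    int i;
    printf("interpreting ADD mem,reg:\n");
    for (i = 0; i < 3; i++)
        do_uop(add_mem[i]);
}

/* translator: decode once, write the micro-ops out for later
 * execution, so re-execution pays no decode cost              */
static int translate(enum uop *out)
{
    int i;
    for (i = 0; i < 3; i++)
        out[i] = add_mem[i];
    return 3;
}

int main(void)
{
    enum uop buf[8];
    int i, n;

    interpret();                /* decode + execute, every time    */

    n = translate(buf);         /* decode once ...                 */
    printf("running translated code:\n");
    for (i = 0; i < n; i++)     /* ... then execute as often as    */
        do_uop(buf[i]);         /* you like                        */
    return 0;
}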

>Seriously, when I see people claim that they are taking a pseudocode
>generated by <your favorite compiler> and doing a translate and optimize
>so they can run it on RISC, I wonder why all the fuss? In a few more
>years, perhaps ten, we will know how to build a translator in silicon
>which does a better and faster job of translation, NOP fills, etc, than
>we are now doing in software. I think we will still do a better job with
>software at that time, but it may not be cost effective.

	CISC vs RISC is all a matter of where the balance lies.  Neither is
'better', each is better for specific cases.  The balance lies in the
application it's designed for, the process used for the chip, the speed and
size and layout of external memory, cost considerations, etc, etc.  In ten
years we will have smarter and better CISCs, and faster RISCs.  The balance
will shift back and forth, but I doubt either will be eliminated.

>IBM sort of used this approach with the PC/370, modifying a 68000 to
>translate 370 opcodes into the same microcode that the 68000
>conventional opcodes use.

	Actually, I'd thought they re-microcoded the 68000's to understand
370 code instead of 68000 code (running a different internal interpreter
program, effectively).  No translation is done.

	I think one of the reasons RISC became popular is that on-chip
references (to microcode) are no longer immensely faster than off-chip
references, due to things like pipelining, and chip technology now allows
denser, faster chips and RAMs.


	On another point:  Does anyone at Mips want to comment on the
R-3000?  (Announced a few days ago, I believe)

     //	Randell Jesup			      Lunge Software Development
    //	Dedicated Amiga Programmer            13 Frear Ave, Troy, NY 12180
 \\//	beowulf!lunge!jesup@steinmetz.UUCP    (518) 272-2942
  \/    (uunet!steinmetz!beowulf!lunge!jesup) BIX: rjesup

(-: The Few, The Proud, The Architects of the RPM40 40MIPS CMOS Micro :-)

david@titan.rice.edu (David Callahan) (03/31/88)

In article <10170@steinmetz.steinmetz.ge.com> davidsen@crdos1.UUCP (bill davidsen) writes:
>Seriously, when I see people claim that they are taking a pseudocode
>generated by <your favorite compiler> and doing a translate and optimize
>so they can run it on RISC, I wonder why all the fuss? In a few more
>years, perhaps ten, we will know how to build a translator in silicon
>which does a better and faster job of translation, NOP fills, etc, than
>we are now doing in software. I think we will still do a better job with
>software at that time, but it may not be cost effective.
>	bill davidsen		(wedu@ge-crd.arpa)
>  {uunet | philabs | seismo}!steinmetz!crdos1!davidsen
>"Stupidity, like virtue, is its own reward" -me

Seriously, when people advocate doing things at run-time which we
already know can be done effectively at compile time (once per program!),
I wonder.  The lessons from RISC and VLIW/TRACE seem to be: if you can
move a runtime decision (like instruction scheduling) into the compiler,
not only does the hardware become simpler and hence faster, but that's one
less thing to do at run-time.  If we could build silicon to do a
``better and faster job ...'' then we probably will be able to build
one helluva compiler :-).
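
For instance, filling a load delay slot is a decision the compiler makes
exactly once per program.  A toy version in C (the instruction format and
the program are invented, and the dependence checks are simplified):

/* Toy compile-time pass: on a machine with a one-cycle load delay
 * (like the MIPS R2000), the instruction after a load must not use
 * the loaded register.  The compiler can often move an independent
 * instruction into that slot instead of a NOP.
 */
#include <stdio.h>

struct insn {
    const char *text;
    int is_load;
    int dest;           /* register written (-1 if none) */
    int src1, src2;     /* registers read   (-1 if none) */
};

static int reads(const struct insn *i, int reg)
{
    return reg >= 0 && (i->src1 == reg || i->src2 == reg);
}

int main(void)
{
    /* lw r1,0(r4) / add r2,r1,r5 / sub r3,r6,r7:
     * the add needs r1 one cycle too soon */
    struct insn prog[] = {
        { "lw   r1,0(r4)", 1, 1, 4, -1 },
        { "add  r2,r1,r5", 0, 2, 1,  5 },
        { "sub  r3,r6,r7", 0, 3, 6,  7 },
    };
    int n = sizeof prog / sizeof prog[0];
    int i;

    for (i = 0; i + 2 < n; i++) {
        struct insn *ld = &prog[i], *a = &prog[i+1], *b = &prog[i+2];
        /* b may move into the delay slot if it doesn't use the
         * loaded value and doesn't conflict with a */
        if (ld->is_load && reads(a, ld->dest)
            && !reads(b, ld->dest)      /* b independent of the load */
            && !reads(b, a->dest)       /* b doesn't read a's result */
            && !reads(a, b->dest)       /* a doesn't read b's result */
            && a->dest != b->dest) {    /* no output conflict        */
            struct insn t = *a; *a = *b; *b = t;
        }
    }
    for (i = 0; i < n; i++)
        printf("%s\n", prog[i].text);
    return 0;
}

Run it and the sub comes out in the delay slot, the add one cycle later --
a decision made once, at compile time, with no hardware interlock needed.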


David Callahan
Rice University

mcg@omepd (Steven McGeady) (04/02/88)

Three weeks ago I posted an article with the above title, asking, among
other things, that we discuss in this group a broader range of issues
regarding processor architectures and their success than just (e.g.)
condition codes or not, scoreboarding or not, or UNIX-applicability or
not.  Other than a few small factual responses to the yield and
silicon foundry question, none of the topics I suggested have been
raised, and this discussion has devolved into another tedious RISC vs.
CISC debate!

The questions I initially asked were (briefly):

	1) Who manufactures MIPSco's silicon?, with details.

This was partially answered, both by MIPSco and by others, with the names
of the foundries ("strategic partners") that MIPSco uses; other information,
such as yield, was understandably not provided.

	2) How does MIPSco keep apace of silicon technology?

The argument was presented by MIPSco that their foundries and strategic
partners have competitive silicon technology which they can exploit.
This begs the question to some extent of whether a foundry can be
competitive with captive technology development, but that's likely to be
a matter of opinion.

	3) What is the availability of development tools (compilers,
	   assemblers, debuggers) on VAXen, SUN's, and PC's?

Absolutely NO ONE here has addressed this.  John Mashey did politely address
it in private mail, but nobody else seems to care.  John pointed out that
MIPSco's compilers (a major part of their competitive posture) are available
only on MIPSco boxes.  This harks back to the days of captive development
systems for microprocessors, which I thought had finally been vanquished.
I wonder why there is no discussion of this here.

	4) What is the complexity of integrating a MIPSCo chip set
	   into a system?  What amount and kind of support HW is needed?

I got a brief response in private mail from MIPSco saying "No harder than
anybody else".  Being a software type, I don't know all the right probing
questions to ask, but surely there are others out there who either: a)
have experience with this; or b) are curious and know the right questions.
(E.g. memory bus i'face: how wide, deep, long, strong?; cache: how fast,
expensive, hard, necessary?)

	5) What does it take to make an architecture successful?

I posited a (somewhat circular) answer to this that I thought would
engender some discussion.  I am disappointed.  If you all ignore this
message too, I will assume that my interest in these questions is
perverse and crawl back into my hole.


Finally, one respondent picked up on my reference to some "somewhat
inelegant" aspects of the 386 (along with some inelegant aspects of the
MIPSco chip).  I pointed this out only to be (or appear to be) somewhat
objective.  Let there be no mistake: the 80386 pays my (meager)
paycheck every month, and even though I'm working on unrelated
architectures directed at different problem sets, and regardless of how
I truly feel about the 386 architecture (which you won't be able to
find out without knowing me much better), I am daily grateful to the
386 and its progenitors for the continued rise in the value of my Intel
stock holdings.

And, in case you hadn't figured it out, this and all other communications
by me are my opinion alone, and in no way represent the opinion of anyone
else, least of all my employer, and are presented without the explicit review
or approval of that employer, something that will probably get me in big
trouble someday.


S. McGeady
Intel Corp.
mcg@iwarp.intel.com, mcg@omepd.intel.com,
...!intelca!omepd!mcg, ...!tektronix!ogcvax!omepd!mcg