[comp.arch] Japanese 32-bit micro can be a 68020 or 80386

karl@sugar.UUCP (Karl Lehenbauer) (05/17/88)

There is an article in this week's "PC Week" magazine about a 32-bit micro
developed in Japan by a joint venture of a couple of major players there
(sorry, I don't have the article at hand) that has a writable control store 
and thus can have different microprograms loaded into it to emulate the 80386,
68020 and others, apparently as a means of getting around microcode copyright 
issues that have prevented Japanese manufacturers from cloning those processors.
It's called VM, for Virtual Microprocessor.

Even the immediate implications are stunning, I think.  One thing is that 
those of us who have always wanted to diddle microcode may soon be able to, 
tho' there won't be much software or software compatibility for those who
do (the Microprogramming Construction Set?).  It would be a boon to
researchers and others trying to implement oddball language architectures
and get them to execute efficiently on a CISC-type machine (Smalltalk, LISP, 
etc.)

Another thing is that it could be more desirable to have a microprogrammable
machine than not, for the wider compatibility it could offer (a lot of work
required to implement that range of compatibility, though.)  

Imagine a version of the microprogrammable chip in which the operating system 
could context switch among trusted sets of microprograms.  Weird.

Imagine a virtual personal computer which, on exactly the same hardware, 
could be a DOS-compatible 80386 workstation or a 68020 workstation, with 
microcode and software being the only differences.  How about a
Forth machine?  A Vax?

If tangible benefits beyond end-running microcode copyright issues are
provided, reasonably soon, American uP manufacturers will need to develop 
their own microprogrammable machines, unless they're certain of their RISC 
architectures, because people will buy them.  If the processor is only
used as a means of getting around the copyright problem, it's no big deal 
to us as microprocessor consumers which one we use, beyond price,
performance and reliability, although I imagine it is regarded as a 
very big deal by Motorola and Intel and any other manufacturers selling
processors that this processor can emulate.
-- 
"Now here's something you're really going to like!" -- Rocket J. Squirrel
..!{bellcore!tness1,uunet!nuchat}!sugar!karl, Unix BBS (713) 438-5018

guy@gorodish.Sun.COM (Guy Harris) (05/18/88)

> Even the immediate implications are stunning, I think.  One thing is that 
> those of us who have always wanted to diddle microcode may soon be able to, 
> tho' there won't be much software or software compatibility for those who
> do (the Microprogramming Construction Set?).  It would be a boon to
> researchers and others trying to implement oddball language architectures
> and get them to execute efficiently on a CISC-type machine (Smalltalk, LISP, 
> etc.)

Well, maybe.  An earlier article in "comp.arch" claimed:

> From: mdr@reed.UUCP (Mike Rutenberg)
> Newsgroups: comp.arch,comp.lang.smalltalk
> Subject: Re: hardware for late-binding languages
> Keywords: Smalltalk
> Date: 17 May 88 05:37:19 GMT
> Organization: Reed College, Portland OR

> The tendency among Smalltalk implementations (a very late bound language)
> is to run on standard CPUs like 68020s or 80386s.

> You might propose that a custom instruction set or hardware support
> would make the implementation faster.  Specific hardware support may
> help a given implementation, but you then have to build the next
> generation of that machine if the performance win is going to continue
> with you.

> If you do your own custom hardware to support a language, you have to
> do it all, both the software and the hardware.  You can't spend as much
> time building fast software and you don't get the automatic win
> that occurs when somebody *else* spends the millions to do something
> like an mc88000.

> It looks to me that you get the fastest language machines by concentrating
> on building fast software that will work on the fastest standard CPUs.

I'm not sure that the "buckets of microcode" or, if you will, the "use the
microcode as a fast machine language and write a big interpreter in this
machine language" approach is necessarily the only way to get those languages
to run efficiently, or even necessarily the best way.

> Imagine a version of the microprogrammable chip in which the operating
> system could context switch among trusted sets of microprograms.  Weird.

Weird, maybe, but did not the Burroughs 1700 series do this a long time ago?

ohbuchi@unc.cs.unc.edu (Ryutarou Ohbuchi) (05/18/88)

Hmmmm. karl@sugar.UUCP (Karl Lehenbauer) writes:

>There is an article in this week's "PC Week" magazine about a 32-bit micro
>developed in Japan by a joint venture of a couple of major players there
>(sorry, I don't have the article at hand) that has a writable control store 
>and thus can have different microprograms loaded into it to emulate the 80386,
>68020 and others, apparently as a means of getting around microcode copyright 
>issues that have prevented Japanese manafacturers from cloning those processors.
>It's called VM, for Virtual Microprocessor.

<Various stuff, such as the possibility of a microprogrammable processor 
for researchers and hackers, the necessity of a counter by Intel/Motorola, 
etc., deleted>

Well.... I think it sounds interesting, in the sense that it makes you
wonder who in the Japanese industry has lost their mind.   Of course many
of them show strange behavior, strange enough to design and implement
an 'Ultimate General-purpose-register-based CISC'ish architecture 
-- TRON -- as the Japanese (de facto) industry standard uP.  But a
dynamically microprogrammable processor, as a joint venture?  To
counter the 80x86/680x0?  Hmmm... :-&

Or it may be the PC Week writer who's made a mistake; the writer
may have misinterpreted a news release on the TRON microprocessor(s)
(there are several versions from several companies/joint ventures).
(Or something else.)

I wish to know the true story.

Even if it is true, I strongly doubt the utility of such a processor as a 
joint venture (which must be aiming at some standard hardware basis), and
as a research vehicle (toy) for Japanese and American researchers/hackers.

I'm Japanese, but have been out of the country for almost two years, so my 
intuition about the sanity of the Japanese industry may be totally outdated...

==============================================================================
Any opinion expressed here is my own.
------------------------------------------------------------------------------
Ryutarou Ohbuchi	"Life's rock."   "Climb now, work later."
ohbuchi@cs.unc.edu	<on csnet>
Department of Computer Science, University of North Carolina at Chapel Hill
==============================================================================

mwm@eris (Mike (I'm in love with my car) Meyer) (05/18/88)

In article <2006@sugar.UUCP> karl@sugar.UUCP (Karl Lehenbauer) writes:
<(sorry, I don't have the article at hand) that has a writable control store 
<and thus can have different microprograms loaded into it to emulate the 80386,
<68020 and others, apparently as a means of getting around microcode copyright 

This isn't anything new. I've even worked on machines that had
writeable control store for the nanocode level, so you could implement
whatever you wanted for a micromachine (wanna run the 68020 microcode
on it? Sure - no problem.... :-).

<Even the immediate implications are stunning, I think.  One thing is that 
<those of us who have always wanted to diddle microcode may soon be able to, 
<tho' there won't be much software or software compatibility for those who

Check out the PDP-11/60. Or almost any VAX. Or ask DEC about the WCS
board for the micro-11 (11/03? The one sold by Heath for a while).
I've even got a computer architecture text that discusses putting a
butterfly FFT into the WCS on the 11/60. You might even ask BBN about
the C/70.
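(For the curious, the "butterfly" is just the FFT's inner-loop kernel --
exactly the sort of tight code people dropped into WCS.  A minimal sketch
in C, with invented names, not anything from an actual 11/60 listing:)

	/* One radix-2 decimation-in-time FFT butterfly.  A microcoded
	 * version wins by keeping this whole kernel in control store
	 * instead of fetching macroinstructions every cycle. */
	typedef struct { double re, im; } cplx;

	static void butterfly(cplx *a, cplx *b, cplx w)
	{
	    cplx t;

	    t.re = w.re * b->re - w.im * b->im;    /* t = w * b */
	    t.im = w.re * b->im + w.im * b->re;
	    b->re = a->re - t.re;                  /* b = a - t */
	    b->im = a->im - t.im;
	    a->re += t.re;                         /* a = a + t */
	    a->im += t.im;
	}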

<Imagine a version of the microprogrammable chip in which the operating system 
<could context switch among trusted sets of microprograms.  Weird.

Burroughs has been doing that for years. You get to design the target
machine when you design a compiler. Doing that leads to oddities like
instruction sets with a push but no pop (you push args, then subtract
from the stack pointer to "unpop" them all at once).
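(In C terms, the calling sequence looks something like this hypothetical
sketch -- not actual Burroughs code.  The stack grows upward, so push
bumps the pointer and a single subtract discards everything:)

	/* Hypothetical machine state: an upward-growing stack. */
	static int stack[64];
	static int *sp = stack;

	static void caller(void (*f)(void), int a1, int a2, int a3)
	{
	    *sp++ = a1;    /* push */
	    *sp++ = a2;    /* push */
	    *sp++ = a3;    /* push */
	    f();           /* call; callee indexes down from sp */
	    sp -= 3;       /* no pop: just subtract from the stack pointer */
	}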

<Imagine a virtual personal computer which, on exactly the same hardware, 
<could be a DOS-compatible 80386 workstation or a 68020 workstation, with 
<microcode and software being the only differences.  How about a
<Forth machine?  A Vax?

At what cost? You'll notice that Burroughs has been losing ground to
DEC, and DEC has been losing ground to various RISC-machine
companies. And the magic machine with the WCS for both micro and nano
code is no longer sold at all.

For instance, can anyone provide performance figures for the virtual
microprocessor in 68020 mode? Assuming I did the right magic on a
daughter board, could I plug it into a Sun 3/280 and get the same
performance as I do from the 68020?

	<mike
--
The weather is here, I wish you were beautiful.		Mike Meyer
My thoughts aren't too clear, but don't run away.	mwm@berkeley.edu
My girlfriend's a bore, my job is too dutiful.		ucbvax!mwm
Hell nobody's perfect, would you like to play?		mwm@ucbjade.BITNET

lindsay@k.gp.cs.cmu.edu (Donald Lindsay) (05/18/88)

In article <2006@sugar.UUCP> karl@sugar.UUCP (Karl Lehenbauer) writes:
>There is an article in this week's "PC Week" magazine about a 32-bit micro
>developed in Japan ...
>...that has a writable control store 
>and thus can have different microprograms loaded into it to emulate the
>80386, 68020 and others ...
>It would be a boon to
>researchers and others trying to implement oddball language architectures
>and get them to execute efficiently on a CISC-type machine (Smalltalk, LISP, 
>etc.)

Microprogrammable hardware has been for sale for forever. (Well, almost
forever. I helped evaluate one in 1969. ) The only new aspect here is
having the writable control store on-chip.

It's flexible, all right, but it doesn't necessarily run preexisting
instruction sets as fast as dedicated designs can. It will also be slow when
doing things that it wasn't quite designed for. (I remember a Microdata (?)
box that was programmed to be a PDP-11. Throughput was awful, simply because
the box did not want to believe in three-bit fields.)

There used to be three big wins. One was that the microcode often had access
to extra registers. Another was that there were fewer instruction fetches,
either because your macroinstruction set was application specific, or else
because your most crucial subroutines were written in microcode.

Neither of these is as big a win anymore. The newer machines have more
registers, and the Harvard machines don't do their instruction fetches over
the data bus.

The third (and least) win came when the low-level muck-with-the-bits coding
allowed some result to come out of the hardware in fewer clocks.  This win
can still happen, but it tends to happen when you have special data types,
or when the hardware supports e.g. tags. Most of these wins are the
territory of the Lisp engines and the graphics engines.


-- 
	Don		lindsay@k.gp.cs.cmu.edu    CMU Computer Science

mash@mips.COM (John Mashey) (05/18/88)

In article <53583@sun.uucp> guy@gorodish.Sun.COM (Guy Harris) writes:
>> Even the immediate implications are stunning, I think.  One thing is that 
>> those of us who have always wanted to diddle microcode may soon be able to.

>Well, maybe.  An earlier article in "comp.arch" claimed:
...
>I'm not sure that the "buckets of microcode" or, if you will, the "use the
>microcode as a fast machine language and write a big interpreter in this
>machine language" approach is necessarily the only way to get those languages
>to run efficiently, or even necessarily the best way.

>> Imagine a version of the microprogrammable chip in which the operating
>> system could context switch among trusted sets of microprograms.  Weird.

>Weird, maybe, but did not the Burroughs 1700 series do this a long time ago?
Yes, to handle support for different languages.

Also, at least some of the XEROX D-machines were heavily microcoded;
note that Smalltalk, for example, runs pretty well on 68020s + current
RISCs.

Robert F. Rosin once gave a great talk at Bell Labs (about 8-10
years ago), with a subject like "user-microcoding considered harmful",
with a lot of history.  He particularly described his experiences
at SUNY waiting for the heavily-microcodable
Nanodata QM-1 (like waiting for Godot), and what finally happened to
them.....most ended up emulating PDP-11s running UNIX....

This is not to say that microcode is NECESSARILY bad; in many
important machines it's been a useful emulation aid, and a useful
design mechanism.  On the other hand, these days, if you want to
run multiple instruction sets:
	a) In the microprocessor domain, add-in boards are awfully cheap.
	b) And even without extra hardware, fast processors can sometimes
	do a fine job, via software techniques.  Take a look at
	Insignia or Phoenix X86 emulation software, for example.
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	{ames,decwrl,prls,pyramid}!mips!mash  OR  mash@mips.com
DDD:  	408-991-0253 or 408-720-1700, x253
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

colwell@mfci.UUCP (Robert Colwell) (05/18/88)

In article <53583@sun.uucp> guy@gorodish.Sun.COM (Guy Harris) writes:
== Even the immediate implications are stunning, I think.  One thing is that 
== those of us who have always wanted to diddle microcode may soon be able to, 
== tho' there won't be much software or software compatibility for those who
== do (the Microprogramming Construction Set?).  It would be a boon to
== researchers and others trying to implement oddball language architectures
== and get them to execute efficiently on a CISC-type machine (Smalltalk, LISP, 
== etc.)
=
=Well, maybe.  An earlier article in "comp.arch" claimed:
=
== From: mdr@reed.UUCP (Mike Rutenberg)
== If you do your own custom hardware to support a language, you have to
== do it all, both the software and the hardware.  You can't spend as much
== time building fast software and you don't get the automatic win
== that occurs when somebody *else* spends the millions to do something
== like an mc88000.
=
== It looks to me that you get the fastest language machines by concentrating
== on building fast software that will work on the fastest standard CPUs.
=
=I'm not sure that the "buckets of microcode" or, if you will, the "use the
=microcode as a fast machine language and write a big interpreter in this
=machine language" approach is necessarily the only way to get those languages
=to run efficiently, or even necessarily the best way.
=
== Imagine a version of the microprogrammable chip in which the operating
== system could context switch among trusted sets of microprograms.  Weird.
=
=Weird, maybe, but did not the Burroughs 1700 series do this a long time ago?

Some of you guys may also remember the Perq, from Three Rivers
Computer/Perq Systems.  It had a user-writable microstore in a
roll-your-own bit-slice CPU, and I think one of the reasons for its
demise was what Mike mentioned above -- it's really hard to compete
with the microprocessor guys on performance (and that's ignoring the
price difference).  At any given micro incarnation, one can probably
construct something out of existing less-dense technology and beat
it, but it's a hard game to play.  And if you're also having to come
up with new OS revisions, new compilers, and also maintain your
benchmarking and marketing, it's even tougher.  A Perq was capable of
very high performance when judiciously microcoded, but that didn't
sell enough of them to make up for all the disadvantages of
proprietary hardware and compilers.

One other thing.  The micros I worked on had "nanocode", a la the 68K
series, and if somebody had handed us a large enough pot of money, I
suspect that we'd have been able to let them do another version of
the chip with their own custom microcode.  But the performance would
not have been that much higher -- at the nanocode level of a
microprocessor, there are very significant constraints on what is
possible (chip bus contention, vertical microcode decoding, state
machine restrictions).  Unless you designed your micro for an unusual
degree of orthogonality *at the functional-unit level*, you may be
pretty disappointed at what hand-microcoding at that level could buy
you.  And if you DID design it that way, I'd bet that your basic
performance is going to suffer substantially as compared to your
counterpart designing the 68XXX.

Bob Colwell            mfci!colwell@uunet.uucp
Multiflow Computer
175 N. Main St.
Branford, CT 06405     203-488-6090

kds@mipos2.intel.com (Ken Shoemaker ~) (05/19/88)

I believe the microarchitectures of the 68k and the 80*86 are significantly
different for some very good reasons (like, the nature of the instruction sets
is different).  But beyond this, an obvious way of doing much the same thing
is just to have an interpreter of the appropriate instruction set as a
macro program running on a RISC, since that is essentially what you would be
doing by trying to have a microcode program simulating some other instruction
set on yet another general purpose machine.  But unless the new machine
(RISC or microcodable one) were significantly faster than both
target machines, it couldn't be faster than the custom hardware implementation
of the specific machine.
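(For concreteness, such an interpreter is just a fetch/decode/execute loop
written as ordinary code.  A minimal sketch of the idea -- a made-up
two-instruction machine, not any real 86 or 68k decode:)

	#include <stdio.h>

	#define OP_ADD  0              /* reg[a] += reg[b] */
	#define OP_HALT 1

	static int reg[8];             /* emulated register file */

	static void interp(const unsigned char *pc)
	{
	    for (;;) {
	        switch (*pc++) {       /* fetch and decode */
	        case OP_ADD: {
	            int a = *pc++, b = *pc++;
	            reg[a] += reg[b];  /* execute */
	            break;
	        }
	        case OP_HALT:
	            return;
	        }
	    }
	}

	int main(void)
	{
	    unsigned char prog[] = { OP_ADD, 0, 1, OP_HALT };

	    reg[0] = 2;
	    reg[1] = 3;
	    interp(prog);
	    printf("reg0 = %d\n", reg[0]);    /* prints reg0 = 5 */
	    return 0;
	}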

-------------
You don't have to break many eggs to hate omlets -- Ian Shoales

Ken Shoemaker, Microprocessor Design, Intel Corp., Santa Clara, California
uucp: ...{hplabs|decwrl|amdcad|qantel|pur-ee|scgvaxd|oliveb}!intelca!mipos3!kds

rmb384@leah.Albany.Edu (Robert M. Bownes III) (05/19/88)

	Talking about re[micro/nano]coding:

		Didn't IBM remicrocode a couple of 68K dice to look like
a restricted version of the 370? I remember reading about this quite a while 
back. Anyone remember any of the details?

	Bob

-- 
Bob Bownes, Aka Keptin Comrade Dr Bobwrench III	|  If I didn't say it, It
bownesrm@beowulf.uucp  (518)-482-8798		|  must be true.
{steinmetz,brspyr1,sun!sunbow}!beowulf!bownesrm	|	- me, tonite -

tmg@nyit.UUCP (Tom Genereaux) (05/19/88)

I have seen adverts for a system that appears to be similar from
WISC Machines in CA.  Sorry, I don't have prices or much more information
than that, but at least part of the design was done by Phil Koopman.
Any additional information would be appreciated.
			Tom Genereaux
			NYIT Computer Graphics Lab
			philabs!nyit!tmg

david@sun.uucp (David DiGiacomo) (05/19/88)

In article <2006@sugar.UUCP> karl@sugar.UUCP (Karl Lehenbauer) writes:
>There is an article in this week's "PC Week" magazine about a 32-bit micro
>developed in Japan by a joint venture of a couple of major players there
>(sorry, I don't have the article at hand) that has a writable control store 
>and thus can have different microprograms loaded into it to emulate the 80386,
>68020 and others, apparently as a means of getting around microcode copyright 
>issues that have prevented Japanese manafacturers from cloning those processors.
>It's called VM, for Virtual Microprocessor.

The impression I got from reading (vague) articles in EE Times, InfoWorld
and other places is that the VM Systems part will *not* have writable
control store.  As far as I can tell, Shima is designing a 386 clone which
uses PLAs for instruction decoding and sequencing.  The theory seems to be
that this will completely avoid the microcode copyright issue -- writable
control store would just transfer the liability from the IC vendor to the
system vendor.

It's hard to believe that the same micro-machine could efficiently emulate
both the 386 and 68020, microcode or no microcode.

-- 
David DiGiacomo, Sun Microsystems, Mt. View, CA  sun!david david@sun.com

alexande@drivax.UUCP (Mark Alexander) (05/19/88)

In article <2006@sugar.UUCP> karl@sugar.UUCP (Karl Lehenbauer) writes about:
>a 32-bit micro developed in Japan ... that has a writable control store 
>and thus can have different microprograms loaded into it to emulate the 80386,
>68020 and others, apparently as a means of getting around microcode copyright 
>issues...

The May 2 Infoworld article on the VM 8600S (as it's called) says this
chip uses a PLA instead of microcode.  The article goes on to say that
it can be "made compatible" with the 68000, the V60, etc., by changing
the PLA.  It doesn't sound like it can be changed on the fly like a
writable control store, unfortunately.
-- 
Mark Alexander	(UUCP: amdahl!drivax!alexande)
"Bob-ism: the Faith that changes to meet YOUR needs." --Bob (as heard on PHC)

faustus@ic.Berkeley.EDU (Wayne A. Christopher) (05/19/88)

In article <53780@sun.uucp>, david@sun.uucp (David DiGiacomo) writes:
> ... As far as I can tell, Shima is designing a 386 clone which
> uses PLAs for instruction decoding and sequencing.  The theory seems to be
> that this will completely avoid the microcode copyright issue...

Hmm, that's really interesting.  If you reverse engineer the microcode,
feed it to a logic synthesis tool (like a PLA optimizer and generator),
and put that on your chip, isn't that just like taking excerpts from
somebody's book and rearranging them, changing the names of all the
characters, and making global substitutions like "do not" for "don't",
then selling it under your own name?  What does copyright law say about
this?

	Wayne

Michael_MPR_Slater@cup.portal.com (05/19/88)

The chip you are referring to is VM Technology's VM8600S.  My understanding is
that the chip does NOT have a writable control store; it is designed so that
the instruction set can be easily modified by recoding the PLAs, but it is not
user-modifiable.

Calling it a 386-compatible is also misleading.  It is 386-real-mode compatible
--not protected mode-- in that it does not have the MMU functions.  Thus, it is
not useful as a 386 replacement; it's more of a fast 8086 replacement.

The designer of the chip is Masa Shima, who designed the 8080, Z80, and Z8000.
He recently left as head of Intel's Japanese R&D facility to start VM, in
partnership with K. Nishi, head of ASCII Corp and former VP of Microsoft in
Japan.

As usual, the trade press screwed up this story.  The Microprocessor Report
newsletter has the real story.  (I suppose I should admit that I'm editor and
publisher.  I'll gladly send a free sample issue on request.  The newsletter is
written for designers of microprocessor-based hardware; in the past, it has

Michael_MPR_Slater@cup.portal.com (05/19/88)

(This is the end of my previous message, which was prematurely terminated.)

If you are interested in a sample copy of Microprocessor Report, send your
request via email or US mail:

Michael Slater, Microprocessor Report, 550 California Ave, Suite 320,
Palo Alto, CA 94306  (415) 494-2677

mslater@cup.portal.com    BIX: mslater   MCI: mslater

abali@phao.eng.ohio-state.edu (Bulent Abali) (05/19/88)

In article <803@leah.Albany.Edu> rmb384@leah.Albany.Edu (Robert M. Bownes III) writes:
>
>	Talking about re[micro/nano]coding:
>
>		Didn't IBM remicrocode a couple of 68K dice to look like
>a restricted version of the 370? I remember reading about this quite a while 
>back. Anyone remember any of the details?
>
>	Bob
 
  IBM's XT/370 and AT/370 personal computers have 2 remicrocoded
  68000's which implement a subset of the 370 instruction set.

  Bulent

davidsen@steinmetz.ge.com (William E. Davidsen Jr) (05/19/88)

The VAX has loadable control store (at least the 11/780) as an option.
In all the years we've had them, to my knowledge only one person ever
used the feature. He coded an FFT instruction, and wrote a master's
thesis about it. Note that no one else seems to have used the store, or
even his neat instruction.

As I recall he got about 4:1 improvement in performance, but that was
many years ago and he may have gotten more.
-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {uunet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

jesup@cbmvax.UUCP (Randell Jesup) (05/19/88)

In article <2206@winchester.mips.COM> mash@winchester.UUCP (John Mashey) writes:
>This is not to say that microcode is NECESSARILY bad; in many
>important machines it's been a useful emulation aid, and a useful
>design mechanism.

	I think user microcoding is a mostly dead issue.  If one wants to
run another machine's code, it is pretty easy to write an interpreter for
one of the many fast RISCs out there, and get almost as good (or better)
performance than any possible user-microcoded machine.

	In fact, CORE ISA compilers compile to a somewhat CISCy language
(CORE), which is then translated to RPM-40, or any of several other RISC
native languages.  CORE even has things like double-word shifts and rotates,
fpsqrt, etc.

Randell Jesup		{uunet|ihnp4|allegra}!cbmvax!jesup
(-:  The proud, the few, the designers of the GE RPM-40!  :-)

stevenso@hplabsb.UUCP (David Stevenson) (05/20/88)

In article <53780@sun.uucp>, david@sun.uucp (David DiGiacomo) writes:
>                 As far as I can tell, Shima is designing a 386 clone which
> uses PLAs for instruction decoding and sequencing. 

When I worked at Zilog, I had the opportunity to study the implementation
and documentation of Shima's last USA-based microprocessor (the Z8000 for
those of you with long memories).  The documentation flowcharts looked very
much like microcode (which registers, buses, etc. were enabled during each
state transition), but the chip's logic layout was very PLA-ish (in order
to conserve silicon area, not for copyright purposes).  If you want to
see what that chip looked like, see David Patterson's article in the
Scientific American on (gasp!) "Microcode." [Those were the pre-RISC days.]

David Stevenson
HP Labs
hplabs.hp.com

stevew@nsc.nsc.com (Steve Wilson) (05/20/88)

>I'm not sure that the "buckets of microcode" or, if you will, the "use the
>microcode as a fast machine language and write a big interpreter in this
>machine language" approach is necessarily the only way to get those languages
>to run efficiently, or even necessarily the best way.
>
>> Imagine a version of the microprogrammable chip in which the operating
>> system could context switch among trusted sets of microprograms.  Weird.
>
>Weird, maybe, but did not the Burroughs 1700 series do this a long time ago?

Yes!  The 1700/1800/1900 series machines switched interpreters as a
function of which language the machine was emulating for a given 
program.  Even the OS (called an MCP, for you Burroughs fans)
was mostly interpreted, except for the interrupt handling routines.
Note that it was also fairly SLOW!  The base hardware ran at something
like 6 MIPS and the instructions being executed were fairly simple.
I think some time ago I saw a Datamation estimate of 200 Kips performance.

Another limitation of the Burroughs architecture was that memory
management was also done in software!  This is probably about
the only machine in computer science history that got slower as
you added more memory!  The OS had to periodically scan the memory
segments to build a map of what was available.  More memory meant
a longer scan time.
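(The effect is easy to see in a sketch -- hypothetical structures, not
actual MCP code.  The scan is linear in the number of segments, so doubling
the memory doubles the walk:)

	/* Hypothetical segment-table walk, O(number of segments):
	 * a scan like this touches every descriptor to find the
	 * free space, so more memory means a longer scan. */
	struct seg { unsigned long base, len; int in_use; };

	unsigned long free_space(const struct seg *tab, int nsegs)
	{
	    unsigned long total = 0;
	    int i;

	    for (i = 0; i < nsegs; i++)       /* whole-table scan */
	        if (!tab[i].in_use)
	            total += tab[i].len;
	    return total;
	}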

In summary, the B1700s were neat machines, but probably are inherently
limited when top-end performance is considered desirable.


[Standard disclaimer goes here!]       Steve Wilson,
                                       National Semiconductor

wtr@moss.ATT.COM (05/20/88)

In article <3527@pasteur.Berkeley.Edu> faustus@ic.Berkeley.EDU (Wayne A. Christopher) writes:
>
>Hmm, that's really interesting.  If you reverse engineer the microcode,
>feed it to a logic synthesis tool (like a PLA optimizer and generator),
>and put that on your chip, isn't that just like taking excerpts from
>somebody's book and rearranging them, changing the names of all the
>characters, and making global substitutions like "do not" for "don't",
>then selling it under your own name?  What does copyright law say about
>this?
>

well, not really.  i don't BELIEVE (CMA) that this is strictly a case
of "reverse-engineering" like the one you described.

the case we have here would be more like reading a set of Cliff's 
Notes, with basic character descriptions and plot summary, and then
composing a novel with the same basic structure/plot/ending.  (i will
admit that there is a possible legal issue about character name
usage ;-)

the engineers have taken a set of high level specifications, and
have built a PLA machine that behaves the exact same way as the
original processor.  phoenix did the same basic thing with ibm's
roms (take that itsey bitsey machine company!)

the legal question may arise if the engineers in question had direct
contact with the original microcode during their development. 
however, what if the VM folks claim that their machine is a
'hardware emulator' or some such? (microcode emulator?)  sounds like
litigation to me!  gawd! imagine if apple had been involved in this.

=====================================================================
Bill Rankin
Bell Labs, Whippany NJ
(201) 386-4154 (cornet 232)

email address:		...![ ihnp4 ulysses cbosgd allegra ]!moss!wtr
			...![ ihnp4 cbosgd akgua watmath  ]!clyde!wtr
=====================================================================

PS- why is it that companies all seem to think that the best way to
make money is to sue the pants off their competitors instead of
producing a better/cheaper product?

this is getting worse than malpractice suits.

mac3n@babbage.acc.virginia.edu (Alex Colvin) (05/21/88)

> set on yet another general purpose machine.  But unless the new machine
> (RISC or microcodable one) were significantly faster than both
> target machines, it couldn't be faster than the custom hardware implementation
> of the specific machine.

There used to be a PDP-9 at Dartmouth, as well as a pile of DG Novas.  It
was believed that the 9, which was a faster machine, could emulate a Nova
faster than the Nova could run.  On the other hand, Novas are almost
microcode and good at this sort of thing, so a Nova might be able to
emulate a 9 faster than the 9 could run.  By going through more emulation,
you ought to be able to get asymptotic speedups ;-|

As for writeable control stores, much of the early RISC work was done by
"microcode refugees", who found that they could do better by just compiling
directly to microcode and making that the instruction set.

henry@utzoo.uucp (Henry Spencer) (05/22/88)

Let's see, what's a microprogrammable machine?  Well, it uses a simple
and often rather bizarre instruction set, running one instruction per
cycle, to interpret a different instruction set.  Now, name three important
differences between that and something like a Mips machine, except that
the Mips machine can run programs in "native" mode too.  The original
Stanford MIPS project looked a whole lot like a microengine running
microcode out of main memory, and I seem to recall seeing it explicitly
explained in those terms.  The MipsCo machine looks rather less like a
microengine, mind you.  (I'd imagine there are good reasons for the change,
since MipsCo doesn't seem to choose the color of the box without systematic
evaluation of the alternatives... :-))

You can make some gains by specializing the hardware for emulating other
machines.  On the other hand, running the programs "native" is usually
faster, so why bother?  Microprogramming looks, right now, like an idea
whose time has come AND GONE.

Michael_MPR_Slater@cup.portal.com (05/23/88)

The chip is definitely real, and is designed by Masa Shima of 8080/Z80/Z8000
fame, but it does NOT allow the user to modify the microcode.  PC Week got
a bit confused.  It seems that there was an article in a Tokyo general-
circulation newspaper, which got translated and then quoted in Infoworld and
PC Week.  I called VM in Tsukuba to get the real story.  Note that the chip
is NOT fully 386 compatible -- it does not implement the MMU functions.

Michael Slater, Editor and Publisher, Microprocessor Report
550 California Ave., Suite 320, Palo Alto, CA 94306 (415) 494-2677
mslater@cup.portal.com    mci: mslater  bix: mslater

Michael_MPR_Slater@cup.portal.com (05/23/88)

The recoding of the 68000 for the 370 instruction set was done by Nick
Tredennick, who is now with NexGen Microsystems in Sunnyvale (Santa Clara?).

Michael Slater

Michael_MPR_Slater@cup.portal.com (05/23/88)

Copyright law is very confused about this.  The NEC/Intel case may explore this
issue in part.

Michael Slater        mslater@cup.portal.com

johnz@lxviiik.uucp (John Zolnowsky) (05/23/88)

From article <5762@cup.portal.com>, by Michael_MPR_Slater@cup.portal.com:
> The recoding of the 68000 for the 370 instruction set was done by Nick
> Tredennick, who is now with NexGen Microsystems in Sunnyvale (Santa Clara?).
>	- Michael Slater

This is a confusion of two distinct projects.

A modified 68000 chassis was microcoded by a team of IBM engineers from
Binghampton, NY.  The resulting processor, together with a vanilla 68000
and a derivative of the Intel floating point unit, went into the XT/370
product which appeared in 1983.

While the above project was in progress, Nick Tredennick (who did microcode
the 68000) had left for IBM Yorktown, NY, where the micro-370 project was
launched.  This project used a design methodology similar to that of the
68000 project itself, but never resulted in a product.

				- John Zolnowsky

bobw@wdl1.UUCP (Robert Lee Wilson Jr.) (05/25/88)

Presumably the new chip does at least something in a hard-wired way:
microcode can't take you all the way back to the big bang, and the PLA/ALU
don't generally talk directly to pins.  The 8*86 and 68K
families are so different at bottom levels that it is hard to imagine any
PLA changes which could let the underlying hard-wired stuff be even a
little efficient at both games.  Buffers are wired to make either
big-endian or little-endian memory connections efficient, but not both.
Conceivably either microcode playing games with registers as they are
stored/loaded, or else PLA paths, could reconcile these differences, but it
seems likely to cost cycles (even for the PLA implementation) and either
a lot of your PLA or else lots of microcode space.
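(For instance, every emulated memory access on the wrong-endian side eats
something like the following -- plain C, just to show the extra work per
32-bit load or store:)

	/* The per-access cost of reconciling byte order in software:
	 * a full byte swap on every cross-endian 32-bit load/store. */
	unsigned long swap32(unsigned long x)
	{
	    return ((x & 0x000000ffUL) << 24) |
	           ((x & 0x0000ff00UL) <<  8) |
	           ((x & 0x00ff0000UL) >>  8) |
	           ((x & 0xff000000UL) >> 24);
	}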

If the goal were a chip which, in one piece of silicon by reloading
microcode, could emulate either family, then maybe the performance
penalties would be offset by the versatility. When all you get is that the
same fundamental die can play both games, but two different pieces of
silicon are required, that level of versatility is lost.

This sounds like it might be a neat legal move around Intel's holding the
386 so closely, which might work for a while and even have advantages for
those of us who are Intel customers, but it hardly sounds like it offers
much technical advantage!

Bob Wilson

(I'm not allowed to have opinions, much less to claim they represent my
employer!)

kds@blabla.intel.com (Ken Shoemaker) (05/25/88)

I also heard of someone (at Stanford, I believe) who remicrocoded parts
of the VAX to get address traces for cache simulations.  Certainly not
really generally applicable, but an interesting research tool...

You don't have to break many eggs to hate omlets -- Ian Shoales

Ken Shoemaker, Microprocessor Design, Intel Corp., Santa Clara, California
uucp: ...{hplabs|decwrl|amdcad|qantel|pur-ee|scgvaxd|oliveb}!intelca!mipos3!kds