[net.micro] ANOTHER 32-BIT MACHINE???

FRANK@sri-vax.ARPA (Victor Frank) (03/15/85)

                 ROOM FOR ANOTHER 32-BIT CPU?

     I have followed with great interest the debates between Henry
Spencer (henry@utzoo), Richard Mateosian (National), and others at
National and Motorola on the relative (dis)advantages of National's
and Motorola's 32-bit CPUs.

     Now I read in the March 1 Electronic Products (pg. 25) that Hitachi is
working on a general-purpose 1.3-um CMOS 32-bit microprocessor with
5-MIPS operation and full 32-bit address and data paths.  The 6.5 x 9 mm chip
is said to contain more than 300,000 transistors, a proprietary pipelined
architecture, high-speed cache memory, a 200-kbit ROM with 50-ns cycle time,
and a 32-bit ALU.

     The article (by Barbara Tuck) says that Hitachi's Micro 32 will
probably not be a real product until late 1986, and that Hitachi is doing
extensive market research before defining the architecture and operating
characteristics.  The article mentions 3 options open to Hitachi:

     1.  Make it compatible with the 68000 (subject to Motorola's OK)
     2.  Make it compatible with the 68020 (subject to Motorola's OK)
     3.  Use a proprietary architecture (a very real possibility).

     On the horizon, or already (not) available, are proprietary chips from
HP, AT&T, and others.  Zilog and Intel have announced 32-bit CPUs that
will eventually be available to the public.  NCR has the 32000, and maybe
Fairchild and Texas Instruments will be in there too.

     My question: by early 1987, will there be any market for another
32-bit chip?  With yet another instruction set?  I suspect that four
different basic chip types (National, Motorola, Zilog, & Intel) are plenty!

     I just wish these chip manufacturers would learn something from the
16-bit battlefield:

     1.  Time is money
     2.  The best is not always chosen
     3.  However, the first is not always chosen either
     4.  Without software, your product is crippled
     5.  If you want to restrict distribution/information on your
         device, you'd better be able to build, sell, & service your
         own computers & write the software too 
     6.  Before IBM came out with the PC, Intel was giving away 8088s
         to engineers.  Hobbyists were using them.  There were magazine
         articles using them.  I think that was an investment that paid off!
     7.  32 programmers/engineers working on a problem for one month does
         not equal one programmer/engineer working for 32 months--but then
         who can wait 32 months (or even 12 months) in this business?

     My opinion is that Motorola's instruction set is pretty neat, and 
National's is probably not far behind.  These guys at Hitachi would have to
go a long way to improve on them.  By 1987 they may be out in the cold with
a proprietary architecture.  If they are pin-for-pin and instruction-for-
instruction compatible with either the 68020 or the 32532 they might have a
prosperous future.  They could probably survive with an IBM 370 or VAX-11
instruction set too, but not in the same market.

     I trust that there are representatives of the major CPU manufacturers
on the net.  Am I premature in concluding that the 32-bit race has already
been won (by Motorola and National)?  Come on, AMD, Fairchild, TI, Zilog, 
NCR & Intel, let's hear your side!!!

Victor Frank, Editor
68796 Hacker's Newsletter

------

BillW@SU-SCORE.ARPA (William Chops Westfield) (03/16/85)

Hmm.  There are two questions.

Is there room for another general-purpose 32-bit microprocessor?

	One would hope not, but I'm afraid that there is.  Everyone
	is out to copy the PDP-11 and/or VAX, which is too bad since
	the 11 doesn't scale well to larger memory or register files,
	and the VAX doesn't have all that wonderful an instruction
	set anyway (a less-than-successful 11 copy there also).
	Software (even for existing 16-bit CPUs) is in such a sorry
	state that having more processors won't make it much worse.
	What with the newer compiler strategies, it is relatively
	easy to re-write your back end to produce bad code for any
	number of different processors...

Is there room for another 32-bit microprocessor?

	Sure, there are a lot of things that could be done that would
	be useful, or at least interesting.  I don't think that any
	of the current 32-bit chips does a very good job.  After all,
	they are mostly 16-bit machines with wider busses and ALUs.
	I'd like to see a commercially available 32-bit chip with
	a RISC instruction set, or with user-writable microcode,
	or one with tagged operands (Lisp machine on a chip?)

Semiwhimsically
Bill Westfield

davet@oakhill.UUCP (Dave Trissel) (03/17/85)

In article <9254@brl-tgr.ARPA> FRANK@sri-vax.ARPA (Victor Frank) writes:
>
>                 ROOM FOR ANOTHER 32-BIT CPU?
>
>
>     My opinion is that Motorola's instruction set is pretty neat, and 
>National's is probably not far behind.  These guys at Hitachi would have to
>go a long ways to improve on them.  By 1987 they may be out in the cold with
>a proprietary architecture.  If they are pin-for-pin and instruction-for-
>instruction compatible with either the 68020 or the 32532 they might have a
>prosperous future.  They could probably survive with an IBM 370 or VAX-11
>instruction set too, but not in the same market.
>

Some good observations.  Is there even a single person on the net who
knows about the T-90000?  After the MC68000 became famous, Toshiba decided
to build (in their opinion) a better and faster processor all their own
using SOS technology (silicon on sapphire).  At the time I figured that even
if they could do it (SOS had never been proven in mass-produced quantities)
it could not possibly "catch on" due to lack of the massive backing required
for software development, hardware emulators, and lots of interested potential
customers.  At that time Japan had almost no original software products to
speak of, so it seemed highly unlikely that they could produce an entirely
new OS with utilities, languages, etc. for an all-new and different
instruction set.

Microprocessor acceptance depends on many things but one sure requirement is
having attributes which will attract the backing of a large group of
supporters.

Motorola Semiconductor Inc.            Dave Trissel
Austin, Texas       {ihnp4,seismo,gatech}!ut-sally!oakhill!davet

keithd@cadovax.UUCP (Keith Doyle) (03/22/85)

[.................]
>Microprocessor acceptance depends on many things but one sure requirement is
>having attributes which will attract the backing of a large group of
>supporters.

>Motorola Semiconductor Inc.            Dave Trissel

Well, unfortunately, while Motorola seems to attract much of the software
supporters, it seems to be Intel who attracts much of the hardware
supporters, and many times it's the hardware supporters who decide what
the software people have to deal with.

Keith Doyle
(a hardware/software person who's afraid he might get stuck having to
learn 286 assembly code, now that he's gotten to know and love the 68k)
#  {ucbvax,ihnp4,decvax}!trwrb!cadovax!keithd

clif@intelca.UUCP (Clif Purkiser) (03/23/85)

> 
>                  ROOM FOR ANOTHER 32-BIT CPU?


> 
>      I trust that there are representatives of the major CPU manufacturers
> on the net.  Am I premature in concluding that the 32-bit race has already
> been won (by Motorola and National)?  Come on, AMD, Fairchild, TI, Zilog, 
> NCR & Intel, let's hear your side!!!
> 
> Victor Frank, Editor
> 68796 Hacker's Newsletter

	I have been good about not jumping into the architectural fray on
net.micro and net.arch.   But ...  I couldn't let Victor's comments that the
32-bit race has been won by Motorola and National escape without being
commented on.

	I strongly believe that the starting gate has just opened on the
32-bit race, and only two of the four or five major players have entered.
Clearly Intel, with the most popular 16-bit architecture (70%+) and by far
the largest software base (> $6 billion, growing at the rate of $5
billion/year), has to be a major player in the 32-bit microprocessor race.

	I suspect that one or two Japanese manufacturers will also be
major participants in the 32-bit market, probably NEC, possibly others.

	However, I suspect that Victor will be right that only two or
three manufacturers will dominate this market.

	The important thing to realize is that the 32-bit market is very
small now, and even though it will grow tremendously, 16-bit microprocessors
will still be the largest dollar volume for many years.  (Dataquest estimates
that 120 million 16-bit microprocessors will be sold in 1988, but only
1 million 32-bit micros.)

	Thus, just because a semiconductor manufacturer does not have a 32-bit
microprocessor today doesn't mean they are out of the race.  Intel's entry
into the 32-bit microprocessor market will be the 80386; 386 samples will be
available later this year.

	In a somewhat unique approach for the high-tech industries, we have
avoided telling the entire world about the 80386.  Even though it would be
fun to debate the relative merits of the 386 vs. the competition in net.micro
and net.arch, I think it is a little irresponsible for companies to
pre-announce products before their availability.  All you end up
with is a lot of upset customers who believed your ads for vaporware or
vaporsilicon.

	About all I can say about the 386 is that it is a full 32 bit
microprocessor, (with four gigabyte segments) which is totally binary
compatible with the iAPX86 family.  It will be faster and more powerful,
with a higher level of integration, than any other microprocessor.
In the future there will be plenty of articles and manuals
describing more of the chip's details.

	But meanwhile, don't be foolish and count Intel out of the race.

-- 
Clif Purkiser, Intel, Santa Clara, Ca.
HIGH PERFORMANCE MICROPROCESSORS
{pur-ee,hplabs,amd,scgvaxd,dual,idi,omsvax}!intelca!clif
	
{standard disclaimer about how these views are mine and may not reflect
the views of Intel, my boss , or USNET goes here. }

henry@utzoo.UUCP (Henry Spencer) (03/25/85)

> 	... all I can say about the 386 is that it is a full 32 bit 
> microprocessor, (with four gigabyte segments) which is totally binary 
> compatible with the iAPX86 family.

That statement is self-contradictory.  Binary compatibility with the
8086 is fundamentally incompatible with a full 32-bit architecture.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

jchapman@watcgl.UUCP (john chapman) (03/26/85)

> > 	... all I can say about the 386 is that it is a full 32 bit 
> > microprocessor, (with four gigabyte segments) which is totally binary 
> > compatible with the iAPX86 family.
> 
> That statement is self-contradictory.  Binary compatibility with the
> 8086 is fundamentally incompatible with a full 32-bit architecture.

 why is it contradictory? the 386 could have a compatibility mode
 to execute iAPX86 programs (similar to VAX compatibility mode -
 or does being able to run pdp-11 programs mean the VAX isn't a
 true 32-bit machine?).
 
 John Chapman
 ....!watmath!watcgl!jchapman

henry@utzoo.UUCP (Henry Spencer) (03/27/85)

> > > 	... all I can say about the 386 is that it is a full 32 bit 
> > > microprocessor, (with four gigabyte segments) which is totally binary 
> > > compatible with the iAPX86 family.
> > 
> > That statement is self-contradictory.  Binary compatibility with the
> > 8086 is fundamentally incompatible with a full 32-bit architecture.
> 
>  why is it contradictory? the 386 could have a compatibility mode
>  to execute iAPX86 programs (similar to VAX compatibility mode -
>  or does being able to run pdp-11 programs mean the VAX isn't a
>  true 32-bit machine?).

When it's running in compatibility mode, the VAX is most assuredly not a
32-bit machine; it acts like a 16-bit machine, to wit the pdp11.  Or
rather, a pdp11 subset.

Besides, "totally binary compatible" doesn't sound like a compatibility
mode to me.  Much more likely, especially considering the source (Intel),
is that it's the same old sickening story of backward compatibility
with all previous mistakes, right back to the 4004.  (Really.  The new
x86 chips are 8086 compatible, the 8086 had a lot of 8080 compatibility,
the 8080 was source-compatible with the 8008, and the 8008 was pretty
much an 8-bit 4004.  Isn't it thrilling to know that you're programming
a machine descended from a souped-up calculator?  Such roots, such a
sense of history, such a feeling of nausea...)
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

jchapman@watcgl.UUCP (john chapman) (03/28/85)

> > > > 	... all I can say about the 386 is that it is a full 32 bit 
> > > > microprocessor, (with four gigabyte segments) which is totally binary 
> > > > compatible with the iAPX86 family.
> > > 
> > > That statement is self-contradictory.  Binary compatibility with the
> > > 8086 is fundamentally incompatible with a full 32-bit architecture.
> > 
> >  why is it contradictory? the 386 could have a compatibility mode
> >  to execute iAPX86 programs (similar to VAX compatibility mode -
> >  or does being able to run pdp-11 programs mean the VAX isn't a
> >  true 32-bit machine?).
> 
> When it's running in compatibility mode, the VAX is most assuredly not a
> 32-bit machine; it acts like a 16-bit machine, to wit the pdp11.  Or
> rather, a pdp11 subset.
> 
> Besides, "totally binary compatible" doesn't sound like a compatibility
> mode to me.  Much more likely, especially considering the source (Intel),
> is that it's the same old sickening story of backward compatibility
> with all previous mistakes, right back to the 4004.  (Really.  The new
> x86 chips are 8086 compatible, the 8086 had a lot of 8080 compatibility,
> the 8080 was source-compatible with the 8008, and the 8008 was pretty
> much an 8-bit 4004.  Isn't it thrilling to know that you're programming
> a machine descended from a souped-up calculator?  Such roots, such a
> sense of history, such a feeling of nausea...)
 
 Well, this seems a bit picky to me, but ....
 Your statement was that binary compatibility with an 8086 is
 fundamentally incompatible with a full 32-bit architecture.  I don't see
 any difference between this and the VAX/PDP-11 example I made, and I
 seriously doubt that because the VAX occasionally executes binary
 code (which is my understanding of compatibility mode {please correct
 me if I'm wrong} - i.e. "totally binary compatible") from a 16-bit
 architecture that very many people would claim it is not a true 32-bit
 architecture.  Does 32-bit mean you're not allowed to execute 16-bit
 instructions or something?  I mean, it seems reasonable to me that
 a 32-bit machine would also have a complete set of 16-bit instructions
 as well!
 
 Personally I think the 8086 resembles a 360 in architectural style
 as much as an 8080 (maybe that's why IBM liked it so much :-> ).
 Why, if the 8086 is so poor, are there so many of them?  Because
 Intel delivers a LOT faster than the other companies.  Five years ago
 I was shopping around for a replacement for my Z80, and the 68000
 was only promises (trying to order one from my local dealer was
 very interesting - they had been announced).  Similarly with the
 16032 - I was drooling when that was announced; I went out and
 bought the programmer's manual and really liked what I saw - but
 could I get a working system?  On the other hand, the 8086 was
 available; I could get the CPU and other support boards, and
 there was software available - an off-the-shelf system that works,
 has software, and is readily available is preferable to a superior
 system that is unavailable and unsupported (from the individual's
 point of view).  Where can I get a 68020/32032 system today for
 a reasonable price, including some system and support software?
 Sure, the architecture doesn't show much imagination - I don't
 like it myself - but how many people program in assembler?  I
 would expect that the architecture (with the possible exception
 of 64K segments for data) would have zero noticeable impact on most
 people - they are either programming in HLLs or running application
 packages.
 
 I can get msdos (not a great system but it works), an assembler,
 linker, pascal compiler, fortran compiler, modula compiler and
 a screen editor for under $1000 - what will software support
 for other machines cost? 
 
 John Chapman
 Computer Graphics Lab
 University of Waterloo
 ....!watmath!watcgl!jchapman
 
 Disclaimer: the above does not represent the views of anyone
             important, institutional, or otherwise worth
             harassing for fiscal recompense.

henry@utzoo.UUCP (Henry Spencer) (03/29/85)

My statement was indeed that binary compatibility with an 8086 is
fundamentally incompatible with a full 32-bit architecture.  The 8086
is not a 32-bit architecture, and does not extend gracefully into one.
An 8086-compatible machine and a full 32-bit architecture are two
different cpus; whether they happen to be on the same silicon, with a
mode bit switching between them, is quite irrelevant to how useful
the combination is.  Use of 8086 compatibility and use of full 32-bit
architecture cannot occur simultaneously, even though Intel is trying
to fake you out into believing they can.

>  seriously doubt that because the vax occasionally executes binary
>  code (which is my understanding of compatibility mode {please correct
>  me if I'm wrong} - i.e. "totally binary compatible") from a 16 bit
>  architecture that very many people would claim it is not a true 32
>  bit architecture.

My point was, when running in pdp11 compatibility mode (which, by the
way, is NOT "totally binary compatible", because they left some things
out), the VAX is *not* a 32-bit architecture.  Just because a pdp11
program happens to be running on a VAX does not mean the pdp11 program
suddenly has 32-bit addressing; no way.  That pdp11 program sees a
16-bit architecture just like the pdp11.  Except possibly for speed,
it sees no difference between the VAX and, say, an 11/34.

> Does 32 bit mean you're not allowed to execute 16
>  bit instructions or something? I mean it seems reasonable to me that
>  a 32 bit machine would also have a complete set of 16 bit instructions
>  as well!

"Full 32-bit architecture", in my book, means addressing as well as
32-bit arithmetic.  This means that the entire instruction set has to
be duplicated, since the current 8086 addressing semantics are very
much tied to 16 bits.  I think it unlikely to the point of ridicule
that Intel is going to do that.
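Henry's point that "the current 8086 addressing semantics are very much tied to 16 bits" can be made concrete with a little arithmetic. A minimal sketch of the real-mode segment:offset model under discussion (Python used purely for illustration):

```python
# Real-mode 8086 addressing: physical = (segment << 4) + offset.
# Both fields are 16 bits wide, so any one segment register can only
# reach a 64 KB window of the 1 MB physical space at a time.

def phys(segment: int, offset: int) -> int:
    """20-bit physical address from a 16-bit seg:off pair (wraps at 1 MB)."""
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return ((segment << 4) + offset) & 0xFFFFF

# Many seg:off pairs alias the same physical byte:
assert phys(0x1234, 0x0010) == phys(0x1235, 0x0000)
# The farthest byte reachable without reloading a segment register
# is 64K-1 past the segment base:
assert phys(0x1000, 0xFFFF) - phys(0x1000, 0x0000) == 0xFFFF
```

Widening the offset to 32 bits changes this formula, which is the sense in which 8086 binary semantics and flat 32-bit addressing cannot both hold at the same instant.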
 
> Why, if the 8086 is so poor, are there so many of them?  Because
> Intel delivers a LOT faster than the other companies.

Have you tried getting delivery on, say, an 80286?  In volume?  Intel
delivered the 8086 very quickly because it was such a mediocre chip.
They made a conscious decision (well, they may not have considered it
in quite these terms...) to go for quantity rather than quality.  They
are now paying the penalty, as large software gets harder and harder
to cram into an 80*86 architecture.

>  Sure, the architecture doesn't show much imagination, I don't
>  like it myself, but how many people program in assembler - I
>  would expect that the architecture (with the possible exception
>  of 64K segments for data) would have zero noticeable impact on most
>  people - they are either programming in HLLs or running application
>  packages.

The impact it has is that software package X either is totally unavailable
or is very late, because the producers had trouble making it fit the 8086.
"Possible exception" my foot -- the 64k data segments are the major botch
of the machine, and by far the hardest thing to fix.  If you think it's
easy to hide this under an HLL, without major performance impact, you
should try implementing an 8086 compiler some time.
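One way to see the compiler burden being described: "far" pointers built from seg:off pairs can name the same byte yet compare unequal bit-for-bit, so a compiler hiding >64K objects must normalize pointers or emit extra code around every comparison and every piece of pointer arithmetic. A rough sketch of the aliasing (an illustrative model, not actual 8086 code):

```python
# Illustrative model of 8086 "far pointer" aliasing: (seg, off) pairs
# that differ bit-for-bit can still address the same physical byte.

def phys(seg: int, off: int) -> int:
    """20-bit physical address from a seg:off pair (wraps at 1 MB)."""
    return ((seg << 4) + off) & 0xFFFFF

def normalize(seg: int, off: int) -> tuple:
    """Canonical form: fold all but the low 4 bits of the offset into the segment."""
    p = phys(seg, off)
    return ((p >> 4) & 0xFFFF, p & 0xF)

a = (0x1000, 0x0100)
b = (0x1010, 0x0000)
assert a != b                          # naive bitwise comparison says "different"
assert phys(*a) == phys(*b)            # yet both name the same byte
assert normalize(*a) == normalize(*b)  # normalization restores equality
```

The normalization step is exactly the kind of hidden per-operation cost that makes >64K data awkward to bury under an HLL.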
 
> I can get msdos (not a great system but it works), an assembler,
> linker, pascal compiler, fortran compiler, modula compiler and
> a screen editor for under $1000 - what will software support
> for other machines cost? 

Tinplate is cheaper than steel, too.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

baba@spar.UUCP (Baba ROM DOS) (04/01/85)

> My statement was indeed that binary compatibility with an 8086 is
> fundamentally incompatible with a full 32-bit architecture.  The 8086
> is not a 32-bit architecture, and does not extend gracefully into one.
> An 8086-compatible machine and a full 32-bit architecture are two
> different cpus; whether they happen to be on the same silicon, with a
> mode bit switching between them, is quite irrelevant to how useful
> the combination is.  Use of 8086 compatibility and use of full 32-bit
> architecture cannot occur simultaneously, even though Intel is trying
> to fake you out into believing they can.

Technical information and even a few advance data sheets for the 386 have been
circulating for over a year now.  Henry obviously needs no such data in
order to pontificate on its architecture.  Some of the rest of you may
be interested in what I've been able to find out.  The 386 has essentially 
the same (pathetic) register architecture as the 8086, except that each
register is 32 bits long.  In 8086 mode or 286 mode, it will only use
the bottom 16 bits of each.  The ALU is presumably also partitioned into 
two 16 bit slices.  The tightly coupled CPU/MMU can change mode on the fly
on the basis of information in the segment descriptor for the code segment.

In 386 mode, each segment is to provide a linear 32-bit address space,
as opposed to the ghastly 64k segments of the 8086 and 286.
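The register widening described above (each 8086 register stretched to 32 bits, with the 16-bit modes seeing only the low half) can be pictured as follows; this is a sketch of the visibility rules as reported, not of actual silicon behavior:

```python
# What each mode "sees" of a widened register, per the description above:
# a 32-bit EAX in native mode, its low 16 bits as AX in 8086/286 mode,
# and the old 8-bit halves AH/AL still carved out of AX.

EAX = 0xDEADBEEF              # full 32-bit register (386 native mode)
AX = EAX & 0xFFFF             # 8086/286 mode sees only the bottom 16 bits
AH, AL = AX >> 8, AX & 0xFF   # legacy 8-bit halves

assert AX == 0xBEEF
assert (AH, AL) == (0xBE, 0xEF)
```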

It is really not all that difficult to build a multi-mode CPU, but it
does tend to add a lot of circuitry in areas that you'd prefer to keep 
clean, and the added complexity increases the probability of design error,
without contributing anything to the native-mode performance.  It remains
to be seen just what performance Intel can deliver, and when.

					Baba ROM DOS

david@daisy.UUCP (David Schachter) (04/21/85)

Henry Spencer of U. of Toronto Zoology makes several points with which I
disagree.  My information is obtained from two Intel seminars on the 80386
I have attended and it is not covered by any confidentiality agreement.

1) Mr. Spencer claims the 80386 must use 8086 addressing constructs.  This is
not correct.  The 80386 can run programs in 8086 mode (just like the 80286)
or in 'native' mode.  In 'native' mode, programs can consist of mixed segments
of '286 and '386 code.  '286 code segments follow the '286 address model:
a 16 bit selector selects one of 16,384 segments and a 16 bit offset selects
a byte out of 64 kB.  '386 code segments follow the '386 address model: a 16
bit selector selects one of 16,384 segments and a 32 bit offset selects a byte
out of 4 GB.  The mapping of selectors to segments is done by two tables: the
Global Descriptor Table and the Local Descriptor Table, either of which may
be changed at any time.  Typically, the GDT doesn't change much and the LDT
is changed as part of each context switch.  Thus each task has an address space
of 2^16 * 2^32 = 2^48 bytes.  The physical address space is 16 MB which should
be sufficient until 1988.  By then, one hopes, Intel will have a "386B" or
something.
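The arithmetic behind the figures quoted above checks out directly (a quick sketch; the 13-index-bit breakdown of the selector is an inference from the 16,384-segment figure, not something stated in the posting):

```python
# Checking the quoted '286/'386 address-model arithmetic:
# a 16-bit selector, 16,384 segments across two descriptor tables,
# 64 KB per '286 segment, 4 GB per '386 segment, 2^48 bytes per task.

SELECTOR_BITS = 16       # 13 index bits + 1 table bit + 2 privilege bits (inferred)
SEGMENTS = 2 * 2**13     # GDT entries + LDT entries
OFFSET_286 = 2**16       # 16-bit offset: 64 KB per segment
OFFSET_386 = 2**32       # 32-bit offset: 4 GB per segment

assert SEGMENTS == 16384
assert OFFSET_286 == 64 * 1024
assert OFFSET_386 == 4 * 2**30
assert 2**SELECTOR_BITS * OFFSET_386 == 2**48   # per-task space, as quoted
```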

Note that 8086 code can usually be run even in native mode.  Only certain
types of address arithmetic (which the 8086 supports but you were warned not
to use) won't work.

2) Mr. Spencer suggests that delivery on the 80286 is poor.  This is untrue.
My company has no problems getting production delivery of 6 MHz '286s from
Intel.  We have found Intel to be a most reliable and cooperative vendor.
Delivery by Intel of production quantities of various VLSI chips has always
occurred, for us, in a timely fashion after delivery of sample chips.  And
sample chips are delivered on the date promised.  There was an early glitch
with delivery of '286s, in December of 1983.  Intel warned us about it some
months ahead of time and we were able to compensate.  (Not only did Intel
warn us, they also told us why the glitch occurred and what they were doing
to insure against a re-occurrence.)

3) Mr. Spencer states "They [Intel] made a conscious decision (well, they
may not have considered it in quite those terms) to go for quantity rather
than quality."  Again, this is untrue.  The former head of the design team
for the Intel 80186 is a founder and employee of my company.  Many other
members of the 80186 design team are now employees of Daisy as is one of the
lead microcoders for the 80286 and one of the members of the 8087 team.  I
have talked with most of them, complaining about the 8086 architecture.  The
universal answer is that Intel optimized time-to-market and compatibility.
Intel succeeded.

My company chose Intel because, at the time, only Intel had development
machines, a good compiler, debugger, and other software, strong support, and
the ability to deliver chips.  As of this writing, Motorola is sampling the
68020 and National is sampling the 32032 and some day, both of these fine
companies will deliver their beautifully architectured chips.  ("Archi-
tectured"?!)  But Intel is delivering '286s now!  I don't know about you
but Daisy's customers want deliveries, not promises.  In non-partisan CAE
benchmarks, the '286 at 6 MHz beats a bit-slice version of the 68000, the
68010, and various 68000s time and time again.  Intel created a part with
an ugly architecture that runs fast and delivered it when the market needed
it.  (Intel, according to a survey by one of the professional research
companies, has about 70 percent of the high-end market.  Moto and National
share most of the remaining 30 percent.)

4) Mr. Spencer claims "software package X is either totally unavailable or
is very late, because the producers had trouble making it fit the 8086."  Tell
that to the developers of Lotus 1-2-3, DBase II, Wordstar (ugly but verrrry
profitable), and so on.  Writing software for the 80*86 is no more difficult
than for other systems, not to any practical extent.  Porting 80*86 software
to other systems is easy.  Porting software from other systems to the 80*86
is often difficult, unless you have a good compiler.  (Even then, for top
performance, you will have to tweak the code.)

The largest number of software packages is available for the Apple ][.  The
IBM PC, based on the 80*86 architecture, is in second place.  The IBM 370 is,
I believe, in third place.  Somewhere much farther down the list are the
Motorola and National architectures.

5) Mr. Spencer states that although machines with artistically nicer
architectures cost more than machines based on the abysmal 80*86 architecture,
"Tinplate is cheaper than steel."  His statement is true but not particularly
relevant.  You may be willing to wait for perfection but your competitors
aren't!  I'd rather drive a Civic now than wait to save the money for a
Cadillac with tail fins and power seats.  The end user is not going to notice
the beauty of the underlying machine architecture.  All s/he cares about is
"can it do the job", "when can I get it", and "how much does it cost?"  (And,
for smarter end users, "who supports it?")


In conclusion, Mr. Spencer, in deriding the importance of backwards
compatibility, shows ignorance of fundamental market forces.  (Oh, I should be a
professor-- but could I retain any self-respect?)  IBM has succeeded because
it supported its products to the hilt (like Intel) and, in particular, because
it maintained backwards compatibility.  Apple has succeeded because newer
versions of the Apple ][ maintain compatibility with older versions.  (Mac is
still only a small share of Apple revenue, according to published reports.)
And Intel has succeeded because its microprocessors are always backwards
compatible, to some extent.  (Usually in architecture if not in software.)
Much early 8086 software was terrible-- it was merely translated 8080 software.
(Written in PL/M and re-compiled with PL/M-86 or written in ASM and converted
with CONV-86.)  But at least it was available!  Motorola and National had no
similar capability and it has cost them dearly.  Who will win?  If I knew
that, I would be a lot richer.  But Intel has done quite well with the 8088,
the 8086, and the 80286.  The improved architecture of the 80386, while still
not as good as the National architecture, is sure to be a winner-- because it
is COMPATIBLE with the 8086 and the 80286-- just as VAX compatibility with
the PDP-11 was an important factor in the early success of the VAX and gave
DEC time to solidify its market position.

                                          -- David Schachter

[The expressions expressed herein are my own and are not the responsibility of
Daisy Systems Corporation.  I have no financial interest in Intel Corporation,
its affiliates, competitors, or stockholders.]

Addendum: Architectural purity is fun but you can't make money off it.  "Good
enough is perfect."  Losing your customers (or grad students) because you
waited for a better chip is inefficient, unless you have divine guidance.

jbn@wdl1.UUCP (04/23/85)

     Actually, the major reason that Intel seems to run away with the market
is that Intel gets the support chips out the door along with the processor.
The M68000 is nice.  But the matching MMU came out years after the CPU.  The
FPU is now being sampled.  Motorola's new math coprocessor sounds really great.
Maybe next year I'll be able to get it in a SUN.  The year after, I might
be able to get it in a MAC-class machine.  By 1988 it might make sense to market
a software package that required it.  But not sooner.  Intel has interface
chips for the Multibus.  If Motorola has VMEbus interface chips, the VMEbus
board vendors aren't using them in a big way (possibly for good reason;
see my previous posting about FCOs for Omnibyte CPU cards.)  Motorola just
doesn't put enough priority on getting the support chips out the door.
    It's really frustrating.  Motorola makes some really great products.
Here's a dream product.  Every part needed has been announced. 

	The Mighty MAC

	2MB RAM (16 1Mb chips).
	M68020 @ 24 MHz with matching MMU and FPU.
	Hyperdrive-type hard disk.
	DSDD 3.5" removable disks.
	MAC-compatible but with multiple processes.

Think I can get one for Christmas 1986?

						John Nagle

henry@utzoo.UUCP (Henry Spencer) (04/26/85)

> Mr. Spencer claims the 80386 must use 8086 addressing constructs. ...

Not quite in so many words.  I claimed that the thing cannot simultaneously
be 100% 8086 compatible and still have large (>64KB) contiguous chunks of
memory.  Your comments confirm this:  you either run in 8086 mode, with
8086 addressing, or run in native mode and lose complete 8086 compatibility.
Actually, I'm not complaining so much about this tradeoff, as about the
Intel marketing hype that tries to sweep it under the rug.

> Mr. Spencer suggests that delivery on the 80286 is poor.  This is untrue.
> My company has no problems getting production delivery of 6 MHz '286s...

Since I personally wouldn't touch an 80anything with a ten-foot pole, I
obviously have no personal experience with 80286 delivery.  But there's
certainly been a remarkable chorus of "we're late because Intel's late"
from the industry.

> Mr. Spencer states "They [Intel] made a conscious decision (well, they
> may not have considered it in quite those terms) to go for quantity rather
> than quality."  Again, this is untrue.  The former head of the design team
> for the Intel 80186 is a founder and employee of my company.  Many other
> members of the 80186 design team are now employees of Daisy as is one of the
> lead microcoders for the 80286 and one of the members of the 8087 team.  I
> have talked with most of them, complaining about the 8086 architecture.  The
> universal answer is that Intel optimized time-to-market and compatibility.
> Intel succeeded.

Precisely!  They succeeded in a strategy that stressed time-to-market
(getting in before the competition, to increase *quantity* sold) and
compatibility (making it look like previous products, to make it more
attractive and increase *quantity* sold) rather than paying attention
to quality and giving it a decent-sized address space and a civilized
register structure.

Note here that I am not knocking Intel for getting a mediocre product
to market quickly, instead of taking the time to produce something
better.  The relative sales figures clearly demonstrate that their
decision was reasonable.  I just wish Intel would stop pretending that
their out-the-door-fast chip is every bit as good as the products from
people who *did* invest the time in improving the architecture.

Note also that I am not contending that Intel's chips themselves were
shoddy.  The quality issues of which I speak are matters of architecture,
not design bugs or fabrication problems.

> ... As of this writing, Motorola is sampling the
> 68020 and National is sampling the 32032 and some day, both of these fine
> companies will deliver their beautifully architectured chips.
> ("Architectured"?!)  But Intel is delivering '286s now!  ...

Motorola has been delivering 68000s and 68010s for rather a long time,
well before Intel started delivering 80286s.  Let us not even *mention*
when deliveries of the 80386 will start; I was reading about the wonders
of the 80286 quite a while ago, and it's just starting to show up.

> My company chose Intel because, at the time, only Intel had [support and
> delivery]...

Similarly, I'm not criticizing *you* for choosing stew now rather than
filet mignon this evening.  But some people would choose differently.

> ...  In non-partisan CAE
> benchmarks, the '286 at 6 MHz beats a bit-slice version of the 68000, the
> 68010, and various 68000s time and time again.  

By any chance, were these "non-partisan CAE benchmarks" such that none
of them needed arrays larger than 64KB?

> 4) Mr. Spencer claims "software package X is either totally unavailable or
> is very late, because the producers had trouble making it fit the 8086."  Tell
> that to the developers of Lotus 1-2-3, DBase II, Wordstar (ugly but verrrry
> profitable), and so on. 

Then ask them how much earlier the stuff would have been done with a
better architecture.  Especially one that was hospitable to a high-level
language.  (Don't claim the 80*86 is hospitable to HLLs until you have
computed the performance hit taken by "large model" code.)  Certainly
the people *I* know who've done applications work on an 80*86 have not
been happy about it.

> The biggest number of software packages are available for the Apple ][.  The
> IBM PC, based on the 80*86 architecture, is in second place.  The IBM 370 is,
> I believe, in third place.  Somewhere much farther down the list are the
> Motorola and National architectures.

Since the Apple II is in first place, are we to conclude that the 6502
is a better processor than the 80*86?  Or that writing software for the
(truly vile) 6502 is easier?  All this argument demonstrates is that if
you want to make money, you make whatever sacrifices are necessary to
get your software to run on popular abominations rather than on well-
designed machines that don't sell as well.

> 5) Mr. Spencer states that although machines with artistically nicer
> architectures cost more than machines based on the abysmal 80*86 architecture,
> "Tinplate is cheaper than steel."  His statement is true but not particularly
> relevant.  You may be willing to wait for perfection but your competitors
> aren't!  I'd rather drive a Civic now than wait to save the money for a
> Cadillac with tail fins and power seats.

But if you want to move heavy loads, you'll save up for a truck rather
than wrecking the suspension in your Civic.  My statement *is* relevant:
tinplate is generally available sooner than steel, but it's not as useful.
By all means, use tinplate if it's good enough... but remember that things
change, and your future needs may require messy and expensive reinforcing.

> The end user is not going to notice
> the beauty of the underlying machine architecture.  All s/he cares about is
> "can it do the job", "when can I get it", and "how much does it cost?"  (And,
> for smarter end users, "who supports it?")

*I* certainly would charge more for future support of software that was
constrained to fit on an 80*86.  Not to mention initial development.

> In conclusion, Mr. Spencer, in deriding the importance of backwards compat-
> ibility, shows ignorance of fundamental market forces.

Not recognizing the drawbacks of backwards compatibility shows ignorance
of the horrendous problems that "backward compatibility with all previous
mistakes" can cause your company.  Ask IBM just how smart it was to let
the upper bits of an address be used for other things, or ask DEC just
how much fun they're having supporting ever-growing software products
on machines with only 16-bit addresses.  They'll tell you what backward
compatibility can be like.

> (Oh, I should be a
> professor-- but could I retain any self-respect?)

Gee, so should I.  I'm a software developer and maintainer by profession,
not an educator.

> Addendum: Architectural purity is fun but you can't make money off it.  "Good
> enough is perfect."  Losing your customers (or grad students) because you
> waited for a better chip is inefficient, unless you have divine guidance.

My impression was that Motorola's cleaner architecture is making them
quite a bit of money, actually, and that Intel is losing the important
high end of the market.  ("Important" because that's where the whole
market will be before too very long.)  "Not good enough is really the pits."
Increasing your next quarter's profits at the expense of next year's is
not just inefficient, but potentially ruinous.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

agn@cmu-cs-k.ARPA (Andreas Nowatzyk) (04/27/85)

Here are my 2 cents' worth of experience with the 8086 and 80286:

Machine: a top-of-the-line CAD workstation running large circuit
         design tools on large designs (say > 60K gates).

Old configuration: 8086 @ 10 MHz, 1 wait-state for main memory via a
                   modified Multibus, 1.5 Mbyte RAM, 80 Mbyte disk

after "upgrade":   80286 (unknown clock/wait states) with a high speed
		   data path to 1.75 Mbyte RAM (avoids Multibus), same disk

A benchmark of a representative cross-section of all commonly used tools
on a representative design (for us) was run on both configurations:
Old configuration: 6h 31min   after "upgrade": 4h 10min.

The "upgrade" occurred mid-December '84 and, to date, the vendor has not been
able to get their software running without frequent and unavoidable system
crashes (that is, you can't do anything but wait for them to fix another
bug).  Most of these bugs come from the fact that the 8086 had no memory
management/protection, and the 80286 MMU is now catching all the references
to illegal RAM locations that went unnoticed on the 8086.  Because the code
is large, there are tons of them.  So much for compatibility.
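The failure mode described above can be sketched in C (all names here are hypothetical; this simulates the limit check in software purely for illustration): without protection an out-of-range reference silently reads whatever is there, while an MMU-style limit check faults.

```c
#include <stddef.h>

/* Sketch of why code with wild references "worked" on the 8086 but
 * crashes under the 80286 MMU.  All names are hypothetical. */

#define SEG_LIMIT 16    /* pretend this segment is 16 bytes long */

static int memory[64];  /* backing store; cells past SEG_LIMIT are junk */

/* 8086-style access: no limit check, an out-of-range reference just
 * returns whatever happens to be at that address. */
static int read_unchecked(size_t off) {
    return memory[off];
}

/* 80286-style access: the MMU checks the offset against the segment
 * limit and faults (modeled here as an error flag) on violation. */
static int read_checked(size_t off, int *ok) {
    if (off >= SEG_LIMIT) { *ok = 0; return 0; }  /* protection fault */
    *ok = 1;
    return memory[off];
}
```

The buggy program never changed; only the hardware's willingness to tolerate its illegal references did.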

The second source of problems and low performance is also directly related
to the poor 8086 architecture: the 64Kbyte segment limit imposes lots of
restrictions on the data structures (say, only x names/page, y wires/bus or
z blocks/design).  We ran into a lot of those limits without being limited by
the amount of memory in the machine.  These address limitations also explain
why dozens of temporary files are used: while compiling a 60K gate design
(that is large for the system), a dozen temporary files were allocated to
hold various symbol tables etc.  Most of these files were only a few tens of
kilobytes long, and the total size of all of them would have fit into the
free memory easily: no wonder that it takes 12h to compile.

Bottom line: the 8086, 80186 and 80286 deserve no place in a discussion
about state-of-the-art 32-bit processors, and certainly should not be used
to lend credibility to the 80386.

  --  Andreas             Arpa:  Andreas.Nowatzyk@cmu-cs-k.arpa
                          usenet:   ...!seismo!cmu-cs-k!agn

sean@ukma.UUCP (Sean Casey) (04/29/85)

In article <385@wdl1.UUCP>, jbn@wdl1.UUCP writes:
>
>     It's really frustrating.  Motorola makes some really great products.
> Here's a dream product.  Every part needed has been announced. 
> 
> 	The Mighty MAC
> 
> 	2MB RAM (16 1Mb chips).
> 	M68020 @ 24Mhz with matching MPU and FPU.
> 	Hyperdrive-type hard disk.
> 	DSDD 3.5" removable disks.
> 	MAC-compatible but with multiple processes.
> 
> Think I can get one for Christmas 1986?
> 
> 						John Nagle

A friend of a friend at Motorola told me that they had some samples running
at 40 MHz.  Make it scream!

-- 
--- Sean Casey
---
--- UUCP:	{hasmed,cbosgd}!ukma!sean  or  ucbvax!anlams!ukma!sean
--- ARPA:	ukma!sean<@ANL-MCS>  or  sean%ukma.uucp@anl-mcs.arpa

		"We're all bozos on this bus."

henry@utzoo.UUCP (Henry Spencer) (04/29/85)

>      Actually, the major reason that Intel seems to run away with the market
> is that Intel gets the support chips out the door along with the processor.

Have you talked to an 80186 user who wants hardware floating-point lately?
I hear there are, uh, some problems...

(This does not invalidate your basic comment, that Intel does a better job
on support chips than Motorola, which is true.  If you want a really odd
case, consider National, which has had pretty-much-clean support chips for
the 32000 series for some time, and is just starting to ship more-or-less
clean 32000 CPUs.)

(He who claims that National has had clean ["Rev N"] 32000 CPUs for N
months now has not talked to customers who tried to buy Rev N chips.
I'm told it's getting easier, but that's a very recent development.)
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

sambo@ukma.UUCP (Inventor of micro-S) (04/30/85)

Never having had the need to write huge programs, I wonder how often one
really comes across the need for larger segments.  I agree that this limit
is certainly not the state of the art, but is it really such a bad idea for
the majority of applications?  I do not wish to discuss any other aspect of
the iAPX86 architecture at this point in time.

clif@intelca.UUCP (Clif Purkiser) (05/01/85)

	It's good to see that some things never change, like the almost religious
architecture wars which are a fixture in net.micro.  

	Since I'm a product marketing engineer for the 386, I won't bother to
inject my obviously biased :) views on the iAPX vs 68K architecture.  However,
I would like to state for the record that the 386 is not an announced part.
Therefore, Mr Spencer's statements about it are generally SPECULATION and not
facts.  I find it unfortunate that he blasts a new CPU before he even knows
the facts about it, just because it is from Intel.
(I couldn't find any record of Mr Spencer signing a non-disclosure agreement
on the 386.)

	Obviously, Henry is within his rights to flame about the 8086 and
80286, but I think he is premature to nail Intel on the 386. 

	As a Mac owner, and largely an Apple fan, I found it interesting that
the Mac is really a segmented machine, except that they restrict segments
to 32K!!  So, I really don't think segmentation is bad, just segments that
aren't as large as you want.

	Clif Purkiser

-- 
Clif Purkiser, Intel, Santa Clara, Ca.
HIGH PERFORMANCE MICROPROCESSORS
{pur-ee,hplabs,amd,scgvaxd,dual,idi,omsvax}!intelca!clif
	
{standard disclaimer about how these views are mine and may not reflect
the views of Intel, my boss, or USENET goes here. }

phil@amdcad.UUCP (Phil Ngai) (05/03/85)

In article <563@intelca.UUCP>, clif@intelca.UUCP (Clif Purkiser) writes:
> 	As a Mac owner, and largely an Apple fan, I found it interesting that
> the Mac is really a segmented machine, except that they restrict segments
> to 32K!!  So, I really don't think segmentation is bad, just segments that
> aren't as large as you want.
> 
> -- 
> Clif Purkiser, Intel, Santa Clara, Ca.

This is a point that I think bears repeating: segments are not bad, small
segments are bad.  To say that the 68000 is superior because it has a flat
address space is incorrect; what is really meant is that it has a large
address space.  Although I know little about the 386, I expect that it will
have 4 gigabyte segments, and that will stop the complaints about its segments.

Say, Clif, when is the 386 coming out? Is it still scheduled for the second
half of 1985? (i.e. 12/31/85)

-- 
 I speak for myself and no one else.

 Phil Ngai (408) 749-5720
 UUCP: {ucbvax,decwrl,ihnp4,allegra}!amdcad!phil
 ARPA: amdcad!phil@decwrl.ARPA

jbn@wdl1.UUCP (05/03/85)

      Don't think of the 386 as an extension to the 8086.  Think of it
as a new machine with an 8086 emulation capability.  Early VAXen had
PDP-11 emulation capability, not that it was ever very good.

henry@utzoo.UUCP (Henry Spencer) (05/03/85)

> ... the 386 is not an announced part. 
> Therefore, Mr Spencer's statements about it are generally SPECULATION and not
> facts.   I find it unfortunate that he blasts a new CPU before he even knows
> the facts about it, just because it is from Intel.
> (I couldn't find any record of Mr Spencer signing a non-disclosure agreement
> on the 386.)

Clif is correct that I have not signed non-disclosure and therefore have
no access to inside information.  I am grudgingly starting to suspect that
I may have flamed the 386 prematurely.  Not necessarily "unjustifiably"
or "incorrectly", mind you, just "prematurely".

Properly chastised and repentant (well, somewhat, sort of), I shall refrain
from further flames against the 386 until I have more information.  Then I
will either be (a) surprised and delighted because Intel has actually built
a decent cpu, or (b) confirmed in my nasty prejudices once again.  From the
information I *have* heard, I strongly suspect (b)... but we'll see.

> 	As a Mac owner, and largely an Apple fan, I found it interesting that
> the Mac is really a segmented machine, except that they restrict segments
> to 32K!!

A friend of mine characterized the Mac's software as "RT-11 with windows".
Blah.  What a waste.

> So, I really don't think segmentation is bad, just segments that
> aren't as large as you want.

Segmentation is probably a Good Thing, although it is relatively unpopular
these days, but not making the segments big enough is a disastrous mistake
that more than cancels its good points.
-- 
				Henry Spencer @ U of Toronto Zoology
				{allegra,ihnp4,linus,decvax}!utzoo!henry

ech@spuxll.UUCP (Ned Horvath) (05/04/85)

clif@intelca.UUCP (Clif Purkiser) writes:

> 	As a Mac owner, and largely an Apple fan, I found it interesting that
> the Mac is really a segmented machine, except that they restrict segments
> to 32K!!  So, I really don't think segmentation is bad, just segments that
> aren't as large as you want.

Whoops, I can't let that go by: there is nothing in the architecture of the
68k, or its deployment in the Mac, that limits one to 32k text segments.
That is strictly a limitation of the Pascal compiler, which prefers to
generate only PC-relative text references (which use signed 16-bit offsets).
In turn, that allows segments up to 32k to move around in memory without
being relocated (as long as you don't try to cache absolute code pointers!).
In fact, one does need to exercise some minimal care -- code only floats if
you cut it loose.  The SUMacC system creates arbitrary-sized segments
(although all hell breaks loose if the thing is allowed to move -- the compiler
there DOES compile absolute addresses which get relocated at launch time).

Indeed, intersegment calls in the Mac, even using Pascal, can be to
absolutely anywhere in a 2^24 byte namespace; the particular trick they use is
a jump table pointed to by a dedicated address register, thus
	jsr offset(a5)
actually calls the instruction
	jmp [absolute address of entry]
the last being a full 24 bit address.
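For those who don't think in 68k assembly, the indirection can be sketched in C (all names hypothetical; this illustrates the jump-table idea, not Apple's actual Segment Loader):

```c
/* Sketch of the Mac-style jump table: intersegment calls go through
 * a table of entry addresses, so code segments can move in memory
 * without patching every call site.  All names are hypothetical;
 * on the real machine the table hangs off register A5. */

typedef int (*entry_fn)(int);

static int double_it(int x) { return 2 * x; }
static int negate(int x)    { return -x; }

/* One slot per intersegment entry point; relocating a segment only
 * means rewriting these slots, never the callers. */
static entry_fn jump_table[] = { double_it, negate };

/* A call site knows only an index into the table -- the analogue of
 * jsr offset(a5) -- and the indirection supplies the current address. */
static int call_via_table(int slot, int arg) {
    return jump_table[slot](arg);
}
```

Moving a segment then amounts to updating its jump-table slots, which is why the callers themselves never need relocation.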

My sole affiliation with Motorola is as a satisfied end-user and programmer
of 68k-based products (Mac & CT Miniframe).

=Ned=

campbell@DECWRL.ARPA (05/05/85)

It's not (usually) code space that gets you, it's data space.  I've never
written a program that ended up being more than 64K of object code, but
I've written plenty that could use LOTS more than 64K of data space.
The 64K segment limit is one reason no one's produced a decent LISP for the
iAPX/86 family.

- Larry Campbell
  The Boston Software Works, Inc.
  120 Fulton St., Boston MA 02109
UUCP: {decvax, security, linus, mit-eddie}!genrad!enmasse!maynard!campbell
ARPA: decvax!genrad!enmasse!maynard!campbell@DECWRL.ARPA

johnl@ima.UUCP (05/08/85)

> Never having had the need to write huge programs, how often does one really
> come across the need for larger segments?

All the time, really.  On the 8086 family, the biggest problem is that the
8086's segment architecture does not fit at all well with the addressing
models in the sorts of programming languages that people use, for example C.
Most C compilers put static and automatic data in one segment, because the
code generated that way is much better than it would be if everything were
in a separate segment, and pointers had to remember what segment their
pointees reside in, or if (much worse) there were different types of pointers
for "pointer to stack," "pointer to local static," "pointer to global static,"
"pointer to stack passed from some other routine," and so forth.

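In C-ish terms, the baggage a segment-aware pointer carries might look like this (hypothetical struct and helpers, a sketch rather than any real compiler's representation):

```c
#include <stdint.h>

/* Hypothetical "far" pointer: it must carry its segment along with
 * the offset, unlike a near pointer, which is a single 16-bit word. */
struct far_ptr {
    uint16_t seg;   /* which 64K window the pointee lives in */
    uint16_t off;   /* offset within that window */
};

/* 8086 real-mode address formation: physical = seg * 16 + off. */
static uint32_t to_physical(struct far_ptr p) {
    return ((uint32_t)p.seg << 4) + p.off;
}

/* Different seg:off pairs can name the same byte, so even comparing
 * two far pointers honestly means normalizing first -- one of the
 * headaches a compiler faces when every pointer might be far. */
static int far_equal(struct far_ptr a, struct far_ptr b) {
    return to_physical(a) == to_physical(b);
}
```

A near pointer is one 16-bit word and compares in a single instruction; the far flavor is twice the size and needs the arithmetic above just to test equality.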
The 8086 architecture is, as far as I can tell, unique among modern computers
because there is no way at all to treat address space as a linear array.  (You 
can simulate it with lots of instructions, but that's not what I mean.) [No, I 
take it back, the PDP-11 has the segmentation problem even worse, but 
everybody admits the 11's addressing is a mess.] The VAX's addressing is 
segmented, too, but the segments are large and, more important, you can pass 
around and dereference pointers easily and quickly without having to worry 
about which segment a pointer points into.  What we all hate about the 8086 is 
that you have to be thinking about segments every second when you're writing 
programs.  
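A small sketch of the linear-array problem (C with fixed-width types; the helpers are hypothetical): the 16-bit offset wraps within its segment, so walking a large object needs explicit "huge"-style renormalization to fake linearity.

```c
#include <stdint.h>

/* Plain offset arithmetic wraps inside the 64K segment... */
static uint16_t next_offset(uint16_t off) {
    return (uint16_t)(off + 1);   /* 0xFFFF + 1 wraps to 0, same segment */
}

/* ...so stepping through a >64K object needs a "huge" step that folds
 * the overflow into the segment part to keep the address advancing. */
static void huge_step(uint16_t *seg, uint16_t *off) {
    if (*off == 0xFFFF) {
        *seg += 0x1000;   /* 0x1000 paragraphs of 16 bytes == 64K bytes */
        *off = 0;
    } else {
        (*off)++;
    }
}

/* 8086 real-mode address formation, for checking the bookkeeping. */
static uint32_t physical(uint16_t seg, uint16_t off) {
    return ((uint32_t)seg << 4) + off;
}
```

Every pointer increment in a "huge" model pays for a test like this, which is exactly the "thinking about segments every second" complaint above.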

Finally, on the 286 at least, segmentation is slow.  Loading and dereferencing
a pointer within the current data segment takes about 10 cycles, while loading
and dereferencing a segment+offset pointer takes about 26.  Ugh and a half.


John Levine, Javelin Software, Cambridge MA 617-494-1400
{ decvax!cca | think | ihnp4 | cbosgd }!ima!johnl, Levine@YALE.ARPA