[comp.sys.ibm.pc] Intel Microprocessors

chkg@ptsfa.UUCP (Chuck Gentry) (01/01/70)

In article <16456@toto.uucp> dbercel@sun.UUCP (Danielle Bercel, MIS Systems Programming) writes:
>
>The Intel 4004 was followed by the 8008 and then the 8080. Zilog
>followed with the Z80 and Motorola came up with the 6500 (01?).
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Motorola came up with the 6800.  MOS Technology, later bought out
by Commodore, brought out the 650X. (One chip, I think the 6501, was
a plug replacement for the 6800.  I'm not sure if it was ever produced.)
The popular chip was the 6502.

As a side note, the Intel 4004 was followed by the 4040, an improved version.

>
>
>danielle

Chuck Gentry
{seismo,lll-lcc,ihnp4,qantel}!ptsfa!chkg

alexande@drivax.UUCP (Mark Alexander) (01/01/70)

In article <6950@steinmetz.steinmetz.UUCP> davidsen@crdos1.UUCP (bill davidsen) writes:
> 3) Intel programs are MUCH smaller than 68k programs, due to the many
>one byte instructions (yes I know the instruction set is irregular).

This may not always be true.  It all depends on a number of factors:
the memory model being used, the application, the compiler, etc.

No doubt small model 8086 assembly language programs are smaller than
equivalent 68K assembly programs.  But I've found that large model
8086 programs can be much larger.  A large program like a multi-user
operating system, written in a high-level language like C, may turn out
to be almost the same size on the two processors.  (It was true with
our operating system, at least.  Your mileage may vary.)
-- 
Mark Alexander	...{hplabs,seismo,sun,ihnp4}!amdahl!drivax!alexande
"Bob-ism: the Faith that changes to meet YOUR needs." -- Bob

farren@hoptoad.uucp (Mike Farren) (01/01/70)

Once again, I am astounded by the provincialism exhibited by both the
Intel camp and the Motorola camp.  Certainly, the 680X0 parts have a
much nicer instruction set than the 80X86 parts.  Yes, the 8X87 math
chips may well have an edge on the Motorola equivalents.  Well, so what?
Both chips are being used in a lot of systems, both chips have nifty
software available for them, both chips are doing USEFUL WORK!

Flaming about which chip is better, especially after the fact, strikes
me as one of the biggest wastes of time and energy I know of.  Like it
or not, IBM is using Intel, and that represents a whole lot of computers
in the world.  Apple et al. are using Motorola, and that also represents
a whole lot of computers out there.  As for me, I'll program either one.
I don't care; it all brings in income and produces working programs.




-- 
----------------
                 "... if the church put in half the time on covetousness
Mike Farren      that it does on lust, this would be a better world ..."
hoptoad!farren       Garrison Keillor, "Lake Wobegon Days"

jru@etn-rad.UUCP (John Unekis) (01/01/70)

In article <7042@steinmetz.steinmetz.UUCP> davidsen@crdos1.UUCP (bill davidsen) writes:
>....
>I will quote some figures from Byte magazine, July 1987 issue. I
>personally do not doubt the Intel benchmarks at all. If they chose to
>select benchmarks which show the best points of their product, can you
>...
>test		68010	68020	68020	80386
>		7.8MHz	16MHz	12.5MHz	16MHz
>		1w/s	1w/s	0w/s	0.5w/s
>		--	881@8	881@12	287@16
>
>Fibonacci	 264.0	  71.6	  70.2	   3.1	time in sec
>Float		 230.0	   4.2	   2.9	   5.4
>Sieve		  64.7	  14.9	  12.8	   6.0
>Sort		 111.3	  19.8	  12.6	   9.7
>Savage		1884.3	   8.8	  24.8	  35.1
>
>Whetstones	 574.0	2114.0	2702.0	3703.7	whetstones / sec
>
...
First off, those were Dhrystones, not Whetstones; one is an integer test,
the other is floating point.

       I notice that you conveniently forgot the worst 
       Intel column
	 80286
	 8Mhz
	 1w/s
	 ----
Fib       950
Float     116.36
Sieve     26.71
Sort      46.53
Savage    1103.0

 FLAME ON!!!----

I thought that the results of these tests looked particularly bogus,
so I looked up the BYTE magazine article to see what was going on.
The most obvious discrepancy was in the fibonacci sequence routine, so 
I typed in their code verbatim and ran it on two machines for comparison.

On a Motorola 68020 with 68881 at 12 MHz the test ran in 69.1 seconds.
On an Intel 80386/80287 in full 32-bit mode at 16 MHz it ran in 60.3 seconds.
(Both systems had 1 w/s memory and ran UNIX System V.)
Even with a 25% clock speed advantage the Intel processor was only ~12%
faster.
The machine used in the Byte article must either have been a gallium arsenide
100 MHz custom chip, or the authors are guilty of a gross clerical error.
Perhaps they forgot to read the minute portion of the time on their stopwatch.

I would hate to impugn the reputation of a magazine like BYTE by suggesting
that they would publish a test whose results were deliberately falsified,
but it is true that the best way to make INTEL look better than Motorola
is to LIE.

jru@etn-rad.UUCP (John Unekis) (08/04/87)

In article <880@bdmrrr.bdm.com> davis@bdmrrr.bdm.com (Arthur Davis x4675) writes:
>
>
>If you have moved your code to a large model, I hope you have changed
>your malloc calls to _fmalloc (and free to _ffree).  You can get some
>strange results using malloc in a far environment.  One result you won't
>get is the compiler message "Oh gosh, you really shouldn't use malloc in
>a large model".  Not to start an argument with anyone, but it is for
>reasons such as these that I love 68000-family architectures.  Good luck.

        Let's face it: if IBM hadn't made the mistake of using an Intel
        processor in the PC, Intel wouldn't be in the microprocessor
        business today. The one thing that I don't understand is why,
        given the internal errors in the 80386 chip mask and the fact
        that the "braindamaged" (Microsoft's own word) architecture of
        the 80286 is forcing IBM to create a non-MSDOS-compatible
        operating system (that's right, OS/2 will not run MSDOS
        applications in protected mode), why, why, why didn't IBM
        make use of their opportunity to escape from the Intel tar pit
        and use the MC68020 to make the PS/2 into a REAL computer?

        Oh, well. I suppose that as long as I'm making a wish list
        I might as well include peace on earth and an end to world hunger.
        But it sure would be nice to have a home computer that wasn't
        constantly tripping over its own segments. Maybe I should have
        bought a Macintosh.

bobmon@iucs.UUCP (RAMontante [condition that I not be identified]) (08/06/87)

In article <234@etn-rad.UUCP> jru@etn-rad.UUCP (0000-John Unekis) writes:
>        Let`s face it, If IBM hadn`t made the mistake of using an Intel
>	processor in the PC, Intel wouldn`t be in the Microprocessor
>	business today. The one thing that I don`t understand is why,
>	given the internal errors in the 80386 chip mask and the fact
>	that the "braindamaged"(Microsoft's own word) architecture of
>	the 80286 is forcing IBM to create a non-MSDOS-compatible 
>	operating system (that's right, OS/2 will not run MSDOS
>	applications in protected mode) , why, why, why, didn't IBM
>	make use of their opportunity to escape from the Intel tar pit
>	and use the MC68020 to make the PS/2 into a REAL computer?

[flameflameflame]
Braindamaged architectures for braindamaged companies... within the limits
of my knowledge of IBM's S/360 and S/370 architectures (very narrow
limits :-), the 80x86 architectures look strikingly similar -- like
1/25th scale model cars are similar to the real thing.  I think IBM leans
toward this architecture partly because it's familiar/comfortable/the-product-
of-minds-just-as-bent-as-their-own.  The 680x0 family is a lot closer to
the PDP-11 style of architecture, which is probably reason enough to terminate
any IBM designer heretical enough to suggest its use.

And there's always the canard about "compatibility".  The code may not be
directly portable, but at least the use of the 80286/80386 means that all
the 8080/8088 addressing doesn't have to be re-thought-out intelligently,
as it would if it were moved into Flat-Address-Space-Land.

Disclaimer: If I expressed these opinions in comp.arch, people who know what
they're talking about would have me for breakfast.

~-~-~-~-~				"Have you hugged ME today?"
RAMontante
Computer Science Dept.		might be -->	bobmon@iucs.cs.indiana.edu
Indiana University		or maybe -->	montante@silver.BACS.indiana.edu

matt@hprndli.HP (Matt Wakeley) (08/06/87)

>	Let`s face it, If IBM hadn`t made the mistake of using an Intel
>	processor in the PC, Intel wouldn`t be in the Microprocessor
>	business today. The one thing that I don`t understand is why,
>	given the internal errors in the 80386 chip mask and the fact
>	that the "braindamaged"(Microsoft's own word) architecture of
>	the 80286 is forcing IBM to create a non-MSDOS-compatible 
>	operating system (that's right, OS/2 will not run MSDOS
>	applications in protected mode) , why, why, why, didn't IBM
>	make use of their opportunity to escape from the Intel tar pit
>	and use the MC68020 to make the PS/2 into a REAL computer?


why, why, why? Because IBM has bought (at least part of) Intel.  Besides,
when has IBM ever done anything smart?

burton@parcvax.Xerox.COM (Philip M. Burton) (08/07/87)

Guys, you are missing the point. IBM obviously chose the 8086 because it was
an easy upgrade from the 8080.  In fact, DOS 1.0 had specific features that
resembled CP/M-80.  DOS 1.0, crude as it was, was an easier upgrade path than
CP/M-80.


-- 
Philip Burton
Xerox Corporation    408 737 4635
 ... usual disclaimers apply ...

cbenda@unccvax.UUCP (carl m benda) (08/07/87)

In article <234@etn-rad.UUCP>, jru@etn-rad.UUCP (John Unekis) writes:
> In article <880@bdmrrr.bdm.com> davis@bdmrrr.bdm.com (Arthur Davis x4675) writes:
> 	make use of their opportunity to escape from the Intel tar pit
> 	and use the MC68020 to make the PS/2 into a REAL computer?
> 

Well John, can you say memory management?  Why is it that a 4 meg Mac
can't run a real multitasking operating system?  It has the wondrous
68000 microprocessor.  Awww, guess what? It can't manage its own memory.
Segmentation allows the Intel chips to manage their memory.  I suggest
you put a MAC with 4 meg next to an AT with 4 meg of memory running
Microport System V or SCO Xenix and place your bets as to which machine
can start compiling 4 different programs simultaneously.  You see John,
not even the 'powerful' 68020 can run UNIX without the aid of a WHOLE
different chip to aid it, namely the 68851 MMU.  Just ask computer engineers
which chip set they'd rather design a computer with.  The MAC II isn't
even a real computer because it can not run a real multi-tasking OS,
at least not without an extra $600.00 chip (the 68851).

One final note, the 386 has a 1meg segment limit.  What a limit.
What size segmentation would you like?

/Carl
...decvax!mcnc!unccvax!cbenda

dbercel@toto.uucp (Danielle Bercel, MIS Systems Programming) (08/07/87)

>In article <234@etn-rad.UUCP> jru@etn-rad.UUCP (0000-John Unekis) writes:
>        Let`s face it, If IBM hadn`t made the mistake of using an Intel
>	processor in the PC, Intel wouldn`t be in the Microprocessor
>	business today. 

Long before IBM was in the PC business, AND before there was an
Apple or any of the existing companies/computers around today,
Intel produced the 4004. It was this chip that started the
entire personal computer industry off and running. It was the
first real CPU on a chip, albeit not very powerful.

The Intel 4004 was followed by the 8008 and then the 8080. Zilog
followed with the Z80 and Motorola came up with the 6500 (01?).
After that, the industry began forming along the lines we are
familiar with today.

For those of you not old enough to remember when personal computing
got started it was 1973-74. Then after Intel came out with the 8-bit
CPU, the New Mexico company MITS began selling (as a kit!) the MITS
Altair 8800. This was now 1975-76. This is where the S100 bus came
into the picture and why some people still look for S100 products.

At the time, 4K of dynamic RAM was a *LOT* of memory and many
people used Teletypes for terminals. CRTs as we know them were
kits, poor quality, and typically ran at 110 baud. 300 baud was
considered fast. No joke. I remember sitting at my first 300 baud
terminal and saying, "How can I read it? It's moving too fast."

Once the Altair 8800 became popular all the stuff we take for
granted today began taking shape. The period of 1975 - 1978
was the time of birth for a lot that we use today. Bill Gates's
implementation of BASIC (which he did for the Altair) was to 
become the standard and led directly into Microsoft. Apple
was formed and as an alternative to the Altair 8800 it was
very attractive indeed. 

Anyway, that's enough history. The point being, with or
without IBM Intel would have done just fine.

danielle

tang@Shasta.STANFORD.EDU (Kit Tang) (08/07/87)

> > 	make use of their opportunity to escape from the Intel tar pit
> > 	and use the MC68020 to make the PS/2 into a REAL computer?
> 

> One final note, the 386 has a 1meg segment limit.  What a limit.
                                ^ 4G (I think)
      Yes, one segment on the 386 is as big as the 68020's whole address space.

> What size segmentation would you like?
      The current virtual address space the 386 can address is 64T, but
      that huge address space has to be accessed through segments (4G max
      each).  How about a single segment of 64T?
> 
> /Carl
> ...decvax!mcnc!unccvax!cbenda

-mkt

german@uxc.cso.uiuc.edu (08/07/87)

IBM has used the 68000 in some products.  The one that comes to mind is the IBM
Instruments CS9000.  It was a pretty nice system for some laboratory 
applications.

         Greg German (german@uxc.CSO.UIUC.EDU) (217-333-8293)
US Mail: Univ of Illinois, CSO, 1304 W Springfield Ave, Urbana, IL  61801
Office:  181 Digital Computer Lab.

lmg@sfmin.UUCP (08/07/87)

> 
> One final note, the 386 has a 1meg segment limit.  What a limit.
> What size segmentation would you like?
> 
> /Carl
> ...decvax!mcnc!unccvax!cbenda

The segment limit is 4 Gigabytes, not 1 Megabyte, in the 386.

					Larry Geary
					ihnp4!attunix!lmg

mike@ivory.SanDiego.NCR.COM (Michael Lodman) (08/07/87)

In article <789@unccvax.UUCP> cbenda@unccvax.UUCP (carl m benda) writes:
>Just ask computer engineers which chip set they'd rather design 
>a computer with.

Ok, I'll bite. If it's between the Motorola and the Intel chip sets, I'd 
pick the Motorola 68000 family every time.


-- 
Michael Lodman  (619) 485-3335
Advanced Development NCR Corporation E&M San Diego
mike.lodman@ivory.SanDiego.NCR.COM 
{sdcsvax,cbatt,dcdwest,nosc.ARPA,ihnp4}!ncr-sd!ivory!lodman

dave@leo.UUCP ( Dave Hill) (08/07/87)

In article <789@unccvax.UUCP>, cbenda@unccvax.UUCP (carl m benda) writes:
> Just ask computer engineers which chip set they'd rather design a
> computer with. 


   The 68020 anytime anywhere.  Just thought you'd like to know.


-- 

		"Be scientific; be genetic!"

{allegra!hplabs!felix, ihnp4!trwrb!felix, seismo!rlgvax}!ccicpg!leo!dave

jru@etn-rad.UUCP (John Unekis) (08/07/87)

In article <789@unccvax.UUCP> cbenda@unccvax.UUCP (carl m benda) writes:
>In article <234@etn-rad.UUCP>, jru@etn-rad.UUCP (John Unekis) writes:
>> In article <880@bdmrrr.bdm.com> davis@bdmrrr.bdm.com (Arthur Davis x4675) writes:
>> 	make use of their opportunity to escape from the Intel tar pit
>> 	and use the MC68020 to make the PS/2 into a REAL computer?
>> 
>
>Well John, can you say memory management?  Why is it that a 4 meg Mac
>can't run a real multitasking operating system?  It has the wonderous
>68000 microprocessor.  Awww guess what? it can't manage its own memory.
>Segmentation allows the Intel chips to manage their memory.  I suggest
>you put a MAC with 4 meg next to an AT with 4 meg of memory running
>Microport system V or SCO Xenix and place your bets as to which machine
>can start compiling 4 different programs simultaneously.  You see John,

   This type of sleazy logic is typical of braindamaged Intel victims.
   I picked a MAC for my comparison with a PC specifically because they
   are both single-tasking machines. If you want to compare a UNIX-based
   68xxx machine with a UNIX-based 80x86, try a Sun 3, which makes a
   UNIX-based AT look like the toy it really is.

>not even the 'powerful' 68020 can run UNIX without the aid of a WHOLE
>different chip to aid it namely the 68851 MMU. 

    Only ONE chip? To run UNIX on any processor requires DOZENS of support
    chips. There is the chip set to support the state machine for the
    BUS protocol, the (optional) floating point processor, the memory itself,
    etc., etc. ... Oh, but the 80386 can run UNIX all by itself, can it?
    When was the last time you had the cover off your computer? Or maybe
    you did know these things and just forgot to mention them (more intel
    sleaze).

>Just ask computer engineers
>which chip set they'd rather design a computer with. 
    I have worked with dozens of computer engineers, and none of them that
    I know of consider Intel to be a serious contender in the super micro 
    arena. After going for years without ANY intel processor with a 32 bit
    instruction set, they finally cough out the 80386 and guess what...
    it has errors in the chip mask that cause it to generate random bit
    errors when it does 32 bit math. WAY TO GO INTEL ! This is almost as
    confidence inspiring as their 82586 ethernet chip (which is probably
    up to rev Z by now).

The MAC II isn't
>even a real computer because it can not run a real multi-tasking OS.
>at least not without and extra 600.00 chip (68851)

    If we are discussing Intel vs. Motorola, then the limits of the
    MAC II are pretty much irrelevant. There are dozens of vendors 
    that supply myriad systems using the 68xxx class processor running
    multitasking operating systems. There are Motorola, Plexus, Heurikon,
    Ironics, Sony, Masscomp, Stratus, ...(too many to list here), running
    operating systems like VERSAdos, PDOS, OS-9, UNIX System V, UNIX
    BSD 4.3, VOS, etc.
    How many vendors are there for 80386 systems? Well, there's Intel,
    and IBM, and Intel, and IBM (clones don't count). And multitasking
    operating systems? Well, there's UNIX, and UNIX, and UNIX. 
    What about real-time applications? The 68xxx has PDOS, OS-9, and 
    VERSAdos. The 80386 ... do I hear crickets chirping?


    And on the subject of memory management- the whole point of an
    MMU is to allow each task to run in its own memory space independently,
    this allows the task to behave as if it owned all the memory in
    the system, without being aware that other tasks are there. It also 
    provides security by keeping tasks from trouncing each other's
    memory.

    In the 68020 world, you load the MMU with the start of a memory 
    area in user space (let's say address 0), then where the memory 
    starts in real address space, and how big it is. From then on the
    program runs as if it had the whole memory (up to 4 gigabytes)
    to itself. No segment changes are ever needed. And for security, no
    user-state process can modify the MMU, so no user process can
    ever physically modify another's memory.

    In the Intel scheme, you first have to decide whether you are running
    in protected mode or not. If not, then you are constantly having to juggle
    your segment registers to be able to address memory across 64K boundaries,
    and there is no protection whatsoever to keep a user task from waltzing
    into another's memory. If you decide to go into protected mode, then you
    must first set up a global segment table in supervisor state to tell
    you where to find the segment tables for each task. Then, within the 
    context of the global table, you set up a local segment table for each
    task. This does not free you from segmentation, however. You must 
    still juggle the segment registers, only now each one holds an index
    into a segment table. The index gets you a segment descriptor, which
    contains the base address of a segment; that base is added to the
    offset register to get to real memory. Piece of shi... sorry, I meant
    piece of cake.

    The Intel memory management is to Motorola Memory Management what the
    three stooges are to the Bolshoi ballet.

>
>One final note, the 386 has a 1meg segment limit.  What a limit.

  The application that we are dealing with at my facility is digital
  image processing. Typical images are 2048 by 2048 pixel elements, 
  where each pixel is 8 bits deep. That's four megabytes for each
  image. And some operations require us to combine two images. Trying
  to do this on an Intel processor is like running with your shoelaces
  tied together.
>What size segmentation would you like?
>
  I would like no segments at all, A flat linear address space that
  is practically infinite. Which is what I get with Motorola.


  REMEMBER CAMPERS -

  When you buy Motorola, you get a 32-bit micro with a mainframe 
  hidden inside.
  When you buy Intel, you get an 8-bit 8080 chip disguised as a
  32-bit microprocessor.


  ---------------------------------------------------------------
  The opinions above are practically perfect in every way. Naturally
  they belong to me.    ihnp4!wlbr!etn-rad!jru

boykin@custom.UUCP (Joseph Boykin) (08/08/87)

In article <412@parcvax.Xerox.COM>, burton@parcvax.Xerox.COM (Philip M. Burton) writes:
> 
> Guys, you are missing the point. IBM obviously chose the 8086 because it was
> an easy upgrade from the 8080.  In fact DOS 1.0 had specific features that resembled
> CP/M 80.  DOS 1.0, crude as it was, was aneasier upgrade than CPM/80.

Humm, I may be wrong but you may also be missing the point.  DOS 1.0
(and its predecessors) was written as a CP/M lookalike by Seattle
Computer Products (SCP) (and some consultants).  At the time SCP
came out with their 8086 S-100 board there was no operating system
available, so they wrote one to look (somewhat) like the most
popular micro-computer OS available (CP/M).  Microsoft
bought DOS when IBM came to them (and DRI) looking for an OS
for their new computer.

By the way, I have Q-DOS (what DOS was originally called!)
version 0.17 on 8 1/2" disks, can anyone out there beat that?!?

Joe Boykin
Custom Software Systems
...necntc!custom!boykin

dave@sdeggo.UUCP (David L. Smith) (08/08/87)

In article <789@unccvax.UUCP>, cbenda@unccvax.UUCP (carl m benda) writes:
> In article <234@etn-rad.UUCP>, jru@etn-rad.UUCP (John Unekis) writes:
> > 	make use of their opportunity to escape from the Intel tar pit
> > 	and use the MC68020 to make the PS/2 into a REAL computer?
> Well John, can you say memory management?  Why is it that a 4 meg Mac
> can't run a real multitasking operating system?  It has the wonderous
> 68000 microprocessor.  Awww guess what? it can't manage its own memory.
> Segmentation allows the Intel chips to manage their memory.  

Can we say segments are a pain in the ass?  Have you tried doing any work
on an 80286 running Microport?  Assuming char * == int breaks half the known
code in existence, and all thanks to those damned segments.  Can we say "No
arrays larger than 64K"?  This is another _major_ pain.  Also, all that
segment loading and unloading slows things down quite a bit.

>I suggest
> you put a MAC with 4 meg next to an AT with 4 meg of memory running
> Microport system V or SCO Xenix and place your bets as to which machine
> can start compiling 4 different programs simultaneously.  You see John,
> not even the 'powerful' 68020 can run UNIX without the aid of a WHOLE
> different chip to aid it namely the 68851 MMU.  Just ask computer engineers
> which chip set they'd rather design a computer with.  
Put the AT next to a Sun and see which one compiles more programs off the
net.
Ask any programmer.  Most of them will tell you that working with an
80x8x chip is a lot less fun than working with a 680x0.  You see, Carl, the
system designer only has to do his job once.  If it's a toss-up between
making it easier for the programmers, or making it easier for the designers,
it should be made easier for the programmers.  After all, it's the
programmers that the designers are making the machine for, is it not?

> 
> One final note, the 386 has a 1meg segment limit.  What a limit.
> What size segmentation would you like?
I think it's 1G.  That just goes to show you that Intel finally wised up
and basically did away with segments.
> 
> /Carl
> ...decvax!mcnc!unccvax!cbenda


-- 
David L. Smith
{sdcsvax!sdamos,ihnp4!jack!man, hp-sdd!crash}!sdeggo!dave
sdeggo!dave@sdamos.ucsd.edu 
Microport Unix!  More bugs for the buck!

randys@mipon3.intel.com (Randy Steck) (08/08/87)

After due deliberation, I just have to say something.

I think that this discussion is really missing several vital points.

First, the reason that IBM chose the 8088 in the first place for the PC was
because they could build a PC around it without even trying hard.  It just
was not possible at the time to get interface chips for the 68000 or any
other 16-bit processor.  This, along with the 8-bit data paths, allowed
IBM to make PCs very cheaply and to essentially take over the PC
market.  In other words, they made a good marketing decision.

Second, IBM has used 68K processors in equipment that they have built for
external sale as well as internal use.  An example of this was given in a
Byte article a couple of years ago.  It was a measurement system using the
68K.  I think that the conclusion that any perceived similarity between the
370 and the 86 family justifies the use of the 8086 is unreasonable.

Third, segments are a reasonable programming paradigm.  Intel has said
this before to the derision of the world at large.  The most that one
can fault Intel for is the choice of size for these segments (but what
could one reasonably expect in 1978?).  Larger segments, as found on
some mainframe computers, would simplify the task of programming the
part.  Memory management (a la 286) is a fall-out from the segmented
addressing of the 86 architecture.

Fourth, I must disagree with the statement that the introduction of the
PS/2 was an opportunity missed by IBM to "get out of the tar pit of
Intel processors".  Would you really like to see a system introduced as
a follow-on to the PC that would not run all of the code already
written for the PC?  I am sure that the marketeers at IBM get a big
chuckle every time someone suggests such an idea.

I am a design engineer at Intel working on 32-bit processors and math
coprocessors.  I personally believe that there are things that could be
better in the 86 architecture, but please don't assert that Motorola
did everything right in their processors either!  If architectures are
as perfect as the marketing people say they are (those for any company,
not just Intel and Moto), then why do newer parts keep getting
designed?  Performance is only one of the reasons.

If you really want an Intel processor to flame, then you should learn about
the 432. (Yes, I worked on that one too).  It was not a commercial success
but served its purpose.  

I can't wait until I get a 386/486 on my desk.  Then I won't have to
rely on the silly little VAX for everything.

Randy Steck 
Intel Corp.
Disclaimer: Do you really think Intel agrees with anything I say?

davidsen@steinmetz.steinmetz.UUCP (William E. Davidsen Jr) (08/09/87)

Several reasons IBM chose the Intel 8088:
 1) it allowed an upgrade from CP/M, the most popular o/s in use at that
time. There are translate programs which will take a CP/M-80 program and
produce a MS-DOS program (assembler).

 2) There were a number of problems with the 68k in terms of support
chips (not really available) and the cost of a 16-bit memory path.

 3) Intel programs are MUCH smaller than 68k programs, due to the many
one-byte instructions (yes, I know the instruction set is irregular).
The cost of memory has come down. When the PC was introduced it had
64k, and that was considered enough for all but power users. The program
size difference is still there (Sun/3 vs Xenix/386 program size), and
with a pipelined machine this allows somewhat slower memory to be used.

Flames to /dev/null! I have both types of processor and am clarifying
the points, not making a personal choice.

-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {chinet | philabs | sesimo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

burton@parcvax.Xerox.COM (Philip M. Burton) (08/09/87)

In article <174200061@uxc.cso.uiuc.edu> german@uxc.cso.uiuc.edu writes:
>
>
>IBM has used the 68000 in some products.  The one the comes to mind is the IBM
>Instruments CS9000.  It was a pretty nice system for some laboratory 
>applications.


IBM also used a slightly modified version of the 68000 to build a "370 on a
chip."  I recall that this was done about 1980, using two 68000's with
modified instruction sets.  I'm not sure if this ever made it to a product,
but I read about it in Electronics or Electronics Design, or a similar
mag.



-- 
Philip Burton
Xerox Corporation    408 737 4635
 ... usual disclaimers apply ...

iverson@cory.Berkeley.EDU (Tim Iverson) (08/09/87)

In article <4680001@hprndli.HP> matt@hprndli.HP (Matt Wakeley) writes:
>why, why, why? Because IBM has bought (at least part of) Intel.  Besides,
>when has IBM ever done anything smart?

Well, I have never known them to make a technically innovative decision
(they're worse than conservative), but their marketing is way cool.


- Tim Iverson
  iverson@cory.Berkeley.EDU
  ucbvax!cory!iverson

scotty@l5comp.UUCP (Scott Turner) (08/10/87)

Sheesh, years later and the battle still rages.

There is ONE reason that Intel and the 80X8X chips are still here: a very
sharp marketing person at Intel did the right thing when IBM came by one
day, and the same person at Motorola didn't. Motorola decided to ask too
much, and Intel realized what a boon it would be to say "Yeah, we can meet
that price!"

Now they're up to the 80386 and we hear all this wonderful stuff about big
address spaces and tons of 32-bit registers etc. etc. etc., and that they will
sell 200,000 of them this year. But the dirty truth is that while the 80386
has all this stuff, fully 95% of them will be used to run 808X software,
which makes the question of what use this stuff is moot.

Why? Intel goofed once again in its architectural design. The chip IS NOT
fully protected while running in protected mode. A user-level program can
bring the chip, and all the other users on that chip, down quite easily.
And there's not a DAMN thing the OS writer can do about it.

I'm sure everyone here has seen write-ups on the herculean efforts that
Microsoft is putting in in order to make the Intel chip secure. And yes, if
a program follows all the rules the chip WILL be secure. But if a program
trips over a bug (gee, where did that damn bug come from? Couldn't be that
someone got confused over those crazy mnemonics, could it?) and slaughters a
few key things, the whole thing will still come apart at the seams.

A person also mentioned stacking a 68000 based Mac against an AT and seeing
who could run 4 compilers.

How about we change that challenge? Let's take the FIRST GENERATION 8086 and
stack it against a FIRST GENERATION 68000: let's take the new PS/2 model 20
and compare it against an Amiga 1000. To be fair we'll run both with the
most recent versions of OS software supplied by the machine's maker. We'll
also only use "off-the-shelf" compilers and hardware.

For the model 20 we'll use PC-DOS 3.3 and Turbo C. Ooops, that damn Turbo C
comes on 5.25" disks, don't it (it does); hmmph, well I'll just get the dealer
to move it over for me. We also get all the RAM the model 20 can handle
directly, 1 meg.

For the Amiga we'll use AmigaDOS 1.2; alas, things get unfair at this point.
PC-DOS 3.3 is what, the 10th revision of their OS, and ADOS 1.2 is the 3rd?
Hmm, maybe we should run PC-DOS 2.0? Naw, can't buy that anymore, IBM
doesn't support it. We'll buy Aztec C for the Amiga's compiler. We'll slap
on 4 meg of RAM so we won't get cramped. :) But the price of the 8 meg is
so tempting, what the hell, make it 8 meg to go! Too bad the model 20 can't
do that too, sigh.

We drag 'em back home and set them up. With the Amiga we can sit down and
fire up 4 Manx compilers and an editor or two (or three, we bought 8 meg after
all :). With the model 20 we fire up ONE Turbo C and then start frantically
digging through the PC DOS manuals looking for the 'RUN' command...

Game over, Amiga wins.

As for those that think 8080 compatibility was a BIG thing, don't forget that
the 8088 is NOT OBJECT CODE COMPATIBLE with the 8080! You had to run your
SOURCE code through a converter and then assemble it for the 8088. The same
kind of converters exist to go 8080 -> 68000.

As for the segmentation-is-neat argument... I think all I need do is point
at the most recent bug in Turbo C's linker that keeps people from using
data hunks larger than 64k. Even without the bug, how come the programmer
has to worry over NEAR and FAR? Why is it that IRQ handlers have to do
NO-NO's like writing to their code segments (something you can't do on the
80286/80386 in protected mode BTW) so that the IRQ handler can find out
where the data segment is? And let's not forget how much fun paragraphs are!
:) If segments are so neat how come they get in the way so much? And when
am I going to start seeing some BENEFIT from them in my Intel coding
chores?

Scott Turner
-- 
UUCP-stick: stride!l5comp!scotty | If you want to injure my goldfish just make
UUCP-auto: scotty@l5comp.UUCP    | sure I don't run up a vet bill.
GEnie: JST			 | "The bombs drop in 5 minutes" R. Reagan
		"Pirated software? Just say *NO*!" S. Turner

jwhitnel@csib.UUCP (Jerry Whitnell) (08/10/87)

In article <789@unccvax.UUCP| cbenda@unccvax.UUCP (carl m benda) writes:
|In article <234@etn-rad.UUCP|, jru@etn-rad.UUCP (John Unekis) writes:
|| In article <880@bdmrrr.bdm.com| davis@bdmrrr.bdm.com (Arthur Davis x4675) writes:
|| 	make use of their opportunity to escape from the Intel tar pit
|| 	and use the MC68020 to make the PS/2 into a REAL computer?
|| 
|
|Well John, can you say memory management?  Why is it that a 4 meg Mac
|can't run a real multitasking operating system?

The latest rumors have it that Apple will be announcing a multitasking Finder
soon (possibly this month).  Since they have had one in beta test for several
months now (fact, not rumor), I'd believe this rumor.  It'll be amusing to see
Apple have a multitasking OS before IBM (ignoring UNIX for the moment).
BTW, memory management is NOT necessary to have multitasking.  It just makes
the world a little safer.

|It has the wonderous
|68000 microprocessor.  Awww guess what? it can't manage its own memory.
|Segmentation allows the Intel chips to manage their memory.  I suggest
|you put a MAC with 4 meg next to an AT with 4 meg of memory running
|Microport system V or SCO Xenix and place your bets as to which machine
|can start compiling 4 different programs simultaneously. 

But how fast does it compile?  The whole point of computers is not just
what they can do, but do they help you get your job done faster?  I'd give
up multitasking in an instant for a very fast compiler.  Because I spend
most of my time waiting for the compile to complete so I can test the latest
bugs I put in my program (:-)).  And the compiler on my Mac is far faster than
anything I've seen on an AT class machine (including Turbo C).

| ...
|What size segmentation would you like?
I don't like segments at all.  Segments mean segment registers, which means
extra code to keep track of the registers and extra bugs caused by not
setting segment registers correctly.

|
|/Carl
|...decvax!mcnc!unccvax!cbenda

tang@Shasta.STANFORD.EDU (Kit Tang) (08/11/87)

In article <174200061@uxc.cso.uiuc.edu>, german@uxc.cso.uiuc.edu writes:
> IBM has used the 68000 in some products.  The one the comes to mind is the IBM
> Instruments CS9000.  It was a pretty nice system for some laboratory 
> applications.

So, how successful is that series?  It's based on the 68000 :-)

-mkt

tang@Shasta.STANFORD.EDU (Kit Tang) (08/11/87)

In article <320@l5comp.UUCP>, scotty@l5comp.UUCP (Scott Turner) writes:
> A person also mentioned stacking a 68000 based Mac against an AT and seeing
> who could run 4 compilers.
> 
> How about we change that challenge? Let's take the FIRST GENERATION 8086 and
> stack it against a FIRST GENERATION 68000? Let's take the new PS/2 model 20
> and compare it against an Amiga 1000? To be fair we'll run both with the
> most recent versions of OS software supplied by each machine's maker. We'll
> also only use "off-the-shelf" compilers and hardware.

Well, I don't know about the PS/2 model 20 (does this model exist?) vs the
Amiga 1000.  But if you are interested in Mac II vs PS/2 model 80, there is
an article on this topic in a recent issue of Byte Magazine.
on this topic in a recent issue of Byte Magazine.

-mkt

jru@etn-rad.UUCP (John Unekis) (08/11/87)

In article <414@parcvax.Xerox.COM> burton@parcvax.xerox.com.UUCP (Philip M. Burton) writes:
>
>IBM also used a slightly modified version of the 68000 to build a "370 on a
>chip"  I recall that this was done about 1980, using two 68000's with
>modified instruction sets.  I'm not sure if this ever made it to a product,
>but I read about it in Electronics or Electronics Design, or a similar
>mag.
...
   Yes, it was announced as a product, the PC AT/370. It didn't live long 
   though. It used 2 MC68000 chips with the instruction set completely redone
   to be a clone of the 370 (they had to start with the 68K because of the
   370's 32 bit word and 16 general purpose registers). This was not a 
   Motorola product, and the chips bore no resemblance in function to the
   original MC68000. It failed as a product because it achieved a blinding
   speed of 0.25 MIPS, and used the widely unaccepted operating system VM.
   Somehow even IBM couldn't convince people that they wanted to give up 
   the user friendly interface of a PC for a mainframe imitator at half
   the CPU speed. I wonder if this bodes ill for the new VM based IBM
   system 9370 which they are trying to call the 'VAX-KILLER' Chuckle-chuckle-
   snort-giggle.

-------------------------------------------------------------------
ihnp4!wlbr!etn-rad!jru

rich@etn-rad.UUCP (Rich Pettit) (08/11/87)

In article <320@l5comp.UUCP> scotty@l5comp.UUCP (Scott Turner) writes:
>Sheesh, years later and the battle still rages.
>..Blah blah blah....ridiculous comparisons...blah blah...stupid analogies

Hey, I took a Cray with 3 Gigabytes, loaded CP/M with a BASIC interpreter,
and even IT couldn't stand up next to that Amiga. We have a couple of guys
in the department that have Amigas at home. They come in bleary-eyed with
singed beards mumbling some incomprehensibilities about matrix inversions
and "far-away transforms" or something. I don't know, are those things
nuclear powered or what ?

Oh and by the way....you're right. Those Intel processors are a laugh.


		"...I laugh all the way to the bank."
				-- Liberace

-- 
      Richard L. Pettit, Jr.   Software Engineer    IR&D  Eaton Inc., IMSD
    31717 La Tienda Dr.      Box 5009 MS #208     Westlake Village, CA 91359

               { ihnp4,voder,trwrb,scgvaxd,jplgodo }!wlbr!etn-rad!rich

paul@aucs.UUCP (08/11/87)

In article <1221@leo.UUCP> dave@leo.UUCP ( Dave Hill) writes:
>In article <789@unccvax.UUCP>, cbenda@unccvax.UUCP (carl m benda) writes:
>> Just ask computer engineers which chip set they'd rather design a
>> computer with. 
>
>   The 68020 anytime anywhere.  Just thought you'd like to know.
>
Just thought I'd add my two cents worth.  I've been programming in assembly
language on and off now since 1976.  In fact, I even enjoyed it.  When it
comes to the 808x chip, there are several useful features for the assembly
language programmer.  However, the hassles that segments cause outweigh
any other feature considerably.  In my opinion, the 808x chips were designed
the way they were to make porting of 8080 software easier.  From a marketing
standpoint, it was certainly the right move.  But if Intel had asked enough
programmers about the design, I'm pretty sure I know what the answer would have been.
No doubt about it...  Give me a linear address space any day.


Paul H. Steele      UUCP:      {uunet|watmath|utai|garfield}!dalcs!aucs!Paul
Acadia University   BITNET:    Paul@Acadia
Wolfville, NS       Internet:  Paul%Acadia.BITNET@WISCVM.WISC.EDU
CANADA  B0P 1X0     PHONEnet:  (902) 542-2201x587

cmaag@csd4.milw.wisc.edu (Christopher N Maag) (08/11/87)

In article <320@l5comp.UUCP>, scotty@l5comp.UUCP (Scott Turner) writes:
> A person also mentioned stacking a 68000 based Mac against an AT and seeing
> who could run 4 compilers.
> 
> How about we change that challenge? Let's take the FIRST GENERATION 8086 and
> stack it against a FIRST GENERATION 68000? Let's take the new PS/2 model 20
> and compare it against an Amiga 1000? To be fair we'll run both with the
> most recent versions of OS software supplied by each machine's maker. We'll
> also only use "off-the-shelf" compilers and hardware.
>

Oh yeah??  Well *my* microprocessor can beat up *your* microprocessor!!!  
So there!!! Nyaa, nyaaa.

---------------------------------------------------------------------------
Path: uwmcsd1!csd4.milw.wisc.edu!cmaag
From: cmaag@csd4.milw.wisc.edu (Christopher Maag)

 {seismo|nike|ucbvax|harvard|rutgers!ihnp4}!uwvax!uwmcsd1!uwmcsd4!cmaag 
----------------------------------------------------------------------------

mrk@gvgspd.UUCP (Michael R. Kesti) (08/11/87)

In article <320@l5comp.UUCP> scotty@l5comp.UUCP (Scott Turner) writes:
>For the model 20 we'll use PC-DOS 3.3 and Turbo C. Ooops, that damn TurboC
>comes on 5.25 disks don't it (it does), hmmph, well I'll just get the dealer
>to move it over for me.

Hate to join the fray here, but I just HAD to point out that when I bought
my Turbo C, a card included in the books pointed out that Borland was willing
to ship a copy on 3.5 disks upon receipt of the 5.25s and $15.  It
*would* be much nicer to be able to buy them at your dealer, and the $15
seems a little unreasonable, but 3.5s *ARE* available from Borland.


-- 
===================================================================
Michael Kesti		Grass Valley Group, Inc.
P.O. Box 1114   	Grass Valley, CA  95945
UUCP:	...!tektronix!gvgpsa!gvgspd!mrk

doug@edge.UUCP (Doug Pardee) (08/11/87)

> Guys, you are missing the point. IBM obviously chose the 8086 because it was
> an easy upgrade from the 8080.

Sigh.  Try this for a better explanation:  what other choice did they have?

The PC wasn't designed yesterday; it was designed in '80-'81.  The only real
choices were an 8-bit CPU (probably Z-80 like every other micro on the
market at the time) or the 16-bit iAPX86.  I think most of you would agree
that as warty as the '86 is, it's better than the Z8000.  And National's
16-bit offering of the day was already dying, due to be replaced with the
32000 line (called the 16000 line back then).

What about the 68000, you say?  Back then, Motorola had this bug up their
behinds about the 68000 being for use in *real* computers.  They wouldn't
talk to anyone who wanted to use them in micros.  They actively blocked any
attempt to use them in micros.  Ask the folks at Digital Acoustics, who
developed a 68000 add-in board for the Apple II at about that time.  Sure,
Motorola finally caught on, and eventually we got Macs and Amigas and STs.
But that was years later.
-- 
Doug Pardee, Edge Computer Corp; ihnp4!mot!edge!doug, seismo!ism780c!edge!doug

perkins@bnrmtv.UUCP (Henry Perkins) (08/12/87)

In article <320@l5comp.UUCP>, scotty@l5comp.UUCP (Scott Turner) writes:
> As for those that think 8080 compatibility was a BIG thing, don't forget that
> the 8088 is NOT OBJECT CODE COMPATIBLE with the 8080! You had to run your
> SOURCE code through a converter and then assemble it for the 8088.

That's not necessary.  Most 8080 instructions translate directly
to single 8086 instruction equivalents.  You don't need to have
the source code available to do the porting.
-- 
{hplabs,amdahl,3comvax}!bnrmtv!perkins        --Henry Perkins

It is better never to have been born.  But who among us has such luck?
One in a million, perhaps.

kevin@iisat.UUCP (Kevin Davies) (08/12/87)

In article <1166@csib.UUCP>, jwhitnel@csib.UUCP (Jerry Whitnell) writes:
> In article <789@unccvax.UUCP| cbenda@unccvax.UUCP (carl m benda) writes:
> |It has the wonderous
> |68000 microprocessor.  Awww guess what? it can't manage its own memory.
> |Segmentation allows the Intel chips to manage their memory.  I suggest
> |you put a MAC with 4 meg next to an AT with 4 meg of memory running
> |Microport system V or SCO Xenix and place your bets as to which machine
> |can start compiling 4 different programs simultaneously. 
> 
> But how fast does it compile?  The whole point of computers is not just
> what they can do, but do they help you get your job done faster?  I'd give
> up multitasking in an instant for a very fast compiler.  Because I spend
> most of my time waiting for the compile to complete so I can test the latest
> bugs I put in my program (:-)).  And the compiler on my Mac is far faster than
> anything I've seen on an AT class machine (including Turbo C).

The main point here is what you've said.. "...get your job done...".
In some instances, it's no good having the latest & fastest compiler
on machine X if you're designing something which is to be used in a
multi-user environment. Like a database where you have 2 people entering
information and you have another searching the database. Here, neither
Turbo C nor the Mac will give you much headway.. especially when it
comes to locking files, and file security.

The machine you use and the software you run on it will be decided
by what applications are to be done and the surrounding circumstances.
I agree, a Mac compiler probably is faster... depending on what you
want it for.

> 
> | ...
> |What size segmentation would you like?
> I don't like segments at all.  Segments mean segment registers, which means
> extra code to keep track of the registers and extra bugs caused by not
> setting segment registers correctly.

That's why you have compilers, to help speed software development etc.
People don't go 'round producing their own compilers etc. (at least
I don't think so).  This is why compilers have options so that
they can deal with the type of program you will be compiling. The headache
with the segment registers is done _once_. After that, you get on
with the task at hand (save for bugs in the compiler itself :-).
> 
> |
> |/Carl
> |...decvax!mcnc!unccvax!cbenda

One thing to remember about multi-user environments (i.e., Unix/Xenix):
they like to see PLENTY of memory.
And for the record, I have worked on 68000 series and DEC-Vax
assembly... (but not Intel...)
-- 
Kevin Davies		International Information Service (IIS)
UUCP:  {mnetor,seismo,utai,watmath}!dalcs!iisat!kevin
----------------------------------------

brad@looking.UUCP (08/12/87)

You folks astound me.  What world do you live in?

In the real world, there are lots of other constraints in choosing a
chip than what the architecture looks like to software.

The appearance of the architecture is important, but it's only one of
several constraints.  It is for the other constraints that IBM selected the
8088, and I have yet to see a serious argument as to how they could have
selected the 68000 at the time.
-- 
Brad Templeton, Looking Glass Software Ltd. - Waterloo, Ontario 519/884-7473

dave@micropen (David F. Carlson) (08/12/87)

In article <789@unccvax.UUCP>, cbenda@unccvax.UUCP (carl m benda) writes:
> In article <234@etn-rad.UUCP>, jru@etn-rad.UUCP (John Unekis) writes:
> > In article <880@bdmrrr.bdm.com> davis@bdmrrr.bdm.com (Arthur Davis x4675) writes:
> > 	make use of their opportunity to escape from the Intel tar pit
> > 	and use the MC68020 to make the PS/2 into a REAL computer?
> > 
> 
> Well John, can you say memory management?  Why is it that a 4 meg Mac
... more spate on how much people like Intel over Motorola omitted ...
> 
> /Carl
> ...decvax!mcnc!unccvax!cbenda

Carl, sarcasm sucks.  First of all, Intel chose a segmented architecture for
the 8086/8088 before the 80286 with its memory management was more than a
glimmer in a designer's eye.  It has been my opinion that Intel hoped to
capitalize on their successful 8080/5 CP/M-running processors with a 16-
bit model that the market seemed to require.  At the time no one could even
imagine why anyone would *ever* need more than 64K on a microcomputer.  
People could use bank switched memory if they did.  That was CP/M and the
market as it was in the late seventies.  So Intel made an upgrade from the
eight bit world, but the expense of maintaining full instruction set
compatibility was too great (as it was from the 680X to the 680X0 for
Motorola).  Thus, a chip that allowed 256K of simultaneously addressable
memory (via CS, DS, SS, and ES) seemed like a good compromise:  256K was
still more memory than anyone would ever need.


To tell anyone that "computer engineers" prefer Intel over Motorola is a
very silly thing indeed.  There has been in my knowledge of the market no
person on the software side of the world that "prefers" a segmented space
to a linear space.  (Although some old PDP persons might argue...)

-- 
David F. Carlson, Micropen, Inc.
...!{seismo}!rochester!ur-valhalla!micropen!dave

"The faster I go, the behinder I get." --Lewis Carroll

lib@mtuxj.UUCP (EDSX.LIB) (08/12/87)

Someone (forgot who) points out that the Intel microprocessors
are capable of multitasking, while the 68020 is not without added
hardware.

Seems like I read in several different places (including USENET) that
multitasking is now standard on all Macintoshes. OS/2 is at
least six months away, more likely a year.

I'll pass on Intel, thank you.



-- 
   +-----+     *                           _\o' \---\/====/             The
   | ___ |     | /\                 `\|/'   ~\\___\ /  _/ ___..'     Archdruid
   | | | |      `7l==    *   *   *  -=O=-     \_  /   / -~_--'          ~~~   
...| |.| |....../_l.................'/|\`...._/_`--_/_/-'~......................

cbenda@unccvax.UUCP (carl m benda) (08/13/87)

In article <1166@csib.UUCP>, jwhitnel@csib.UUCP (Jerry Whitnell) writes:
> Apple have a multitasking OS before IBM (ignoreing UNIX for the moment).
> BTW, memory managment is NOT necessary to have multitasking.  It just makes
> the world a little safer.

I don't think that you can just disregard XENIX from SCO, for example, which
is a multi-user, multi-tasking OS that will run on the PC XT.  By the 
way, how many users will the Apple OS support on a single MAC?

> most of my time waiting for the compile to complete so I can test the latest
> bugs I put in my program (:-)).  And the compiler on my Mac is far faster then
> anything I've seen on an AT class machine (including Turbo C).

Numbers please... the ALR I just purchased will, using Turbo C, compile at a
rate of 15,000 lines per minute.  In my opinion, 'far faster' implies at 
least 30,000-50,000 lines per minute.  Okay, I'll be the first in line for
a Mac Plus, which is more expensive than my ALR, if it will compile C code
at 50k lines per minute.

/Carl
...decvax!mcnc!unccvax!cbenda

tim@cit-vax.Caltech.Edu (Timothy L. Kay) (08/13/87)

In article jru@etn-rad.UUCP (0000-John Unekis) writes:
>In article burton@parcvax.xerox.com.UUCP (Philip M. Burton) writes:
>>
>>IBM also used a slightly modified version of the 68000 to build a "370 on a
>>chip"  I recall that this was done about 1980, using two 68000's with
>>modified instruction sets.  I'm not sure if this ever made it to a product,
>...
>   Yes, it was announced as a product, the PC AT/370. It didn't live long 
>   though. It used 2 MC68000 chips with the instruction set completely redone
>   to be a clone of the 370 (they had to start with the 68K because of the

I have used an XT/370 for some work.  It not only suffers from terribly
poor performance, but it has other problems as well.  They ported VM, but
in the process they destroyed its capability to do multitasking.
Furthermore, they used an old version of VM, so lots of the more recent
functionality is missing.  They only support 4 megabytes of memory.
Also, the version I had only emulated a 3277 type terminal, which is
horrible.  THEY EVEN EMULATED THE FACT THAT THE 3277 LOSES INTERRUPTS.

It is true and impressive that the 370 instruction set was implemented
using re-microcoded 68000's.  What was not mentioned was that the
floating point was implemented using a re-microcoded 8087, which talked
to the 68000's!

The xt/370 and at/370 are typical examples of IBM trying to solve problems
by compatibility rather than innovation.  They made a mess.

>   system 9370 which they are trying to call the 'VAX-KILLER' Chuckle-chuckle-
>   snort-giggle.

I have heard that the 9370 is selling like hot-cakes.  It isn't simply a
compatible system.  It happens to be a very good one as far as
cost/performance.  It also runs Unix, and it has ASCII ports.

hooper@leadsv.UUCP (Ken Hooper) (08/14/87)

  As with most such `religious' arguments, the `goodness' of a 68020 vs. an
80386 is dependent on the circumstances. If I were designing a cheap, high
volume microcontroller, I wouldn't choose either. But if the task were a
general purpose, multi-tasking personal computer, neither would I choose a
68008 or 8088 (at least now). At the time the 8088 was designed, 64k was a
lot of memory, and 1M was incredible, so while you may fault them for
short-sightedness, they certainly aren't `brain-damaged'. Because Motorola
was late to the marketplace, they were able to learn from Intel's bold step.
(BTW I thought the real innovation in the 8086 was the pipelined architecture.)
  Enter IBM. At the time the IBM-PC was designed, the only competition to the
8086/8 was the Z8000. That decision may have been wrong, but when you're on
a schedule you can't wait for Motorola to get around to shipping a yet-to-
be-seen product. As for their recent choice to stay with the Intel line in the
PS/2; what would you have them do? Switch to the architecture their major
competitors already have several years experience with? Hurt the value of their
own Intel property? Alienate all their previous PC customers? Their decision
was and is perfectly reasonable from their point of view, and most of their
customers'.
  The people on this net are far from typical users. We can hardly expect the
industry to revolve around us. I personally find the 68000 a more pleasant
architecture to design with, but I'd give the 80x8x line the benefit of more
compact code and higher performance in general. But the choice of a processor
is affected by many more factors than ease of assembly coding and Whetstones
per second. As for arguments about whether a PS/2 model 80 is faster than a
Macintosh II, that's an almost totally unrelated subject, which is in turn
largely unrelated to which one you should buy.
						Ken Hooper

bobc@LBI.UUCP (08/14/87)

In article <892@looking.UUCP>, brad@looking.UUCP (Brad Templeton) writes:
> 
> You folks astound me.  What world do you live in?
> 
> In the real world, there are lots of other constraints in choosing a
> chip than what the architecture looks like to software.
> 
> The appearance of the architecture is important, but it's only one of
> several constraints.  It is for the other constraints that IBM selected the
> 8088, and I have yet to see a serious argument as to how they could have
> selected the 68000 at the time.
> -- 
> Brad Templeton, Looking Glass Software Ltd. - Waterloo, Ontario 519/884-7473

	Users are constantly told to evaluate their needs, find out which
software packages meet those requirements, and buy the machine that runs
the software.

	Clearly then it is users who dictate market conditions for
choice of software.

	Clearly it is programmers, who generate the software, who dictate 
what the hardware architecture should and should not provide.

	The one headache we could all have lived without, is segment registers,
and nothing Intel can say or do will ever change that fact.

	In fact IBM chose Intel because they were a good takeover target
at the time. Not because Intel had any handle on the marketplace at large.

	In one fell swoop, IBM released an inferior machine that became a
standard, and saved Intel from going broke. Aren't we forever grateful
to BIG BLUE.

Oh BIG BLUE, if only you would have known the Frankenstein you created!

..!dlb!ERidani!LBI!bobc

burton@parcvax.Xerox.COM (Philip M. Burton) (08/17/87)

In article <79@LBI.UUCP> bobc@LBI.UUCP (Robert Cain) writes:
>> Brad Templeton, Looking Glass Software Ltd. - Waterloo, Ontario 519/884-7473
>
>	Clearly then it is users who dictate market conditions for
>choice of software.
>
>	Clearly it is programmers, who generate the software, who dictate 
>what the hardware architecture should and should not provide.

No, that's all horsepucky, at least for the successful programmers.  They
write not to the architecture, but to the installed base and sales figures.

Programmers always have and always will write to IBM because of its installed
base, and its ability to penetrate a market.  (Remember the "PC" market in
1981.  Z80 was IT, and it ran CP/M.  Is either a factor today?)

Programmers MAY write to other systems if the above conditions can be met.
So, we have programmers writing for Mac's and Amigas.  But who writes for
the Coleco Adam or the Microsoft MSX system, or even for UNIX (sorry, no
flames, I love UNIX, but there's lots more application software for PC's
running DOS than running UNIX.)

-- 
Philip Burton
Xerox Corporation    408 737 4635
 ... usual disclaimers apply ...

farren@hoptoad.UUCP (08/17/87)

In article <79@LBI.UUCP> bobc@LBI.UUCP (Robert Cain) writes:
>
>	Clearly it is programmers, who generate the software, who dictate 
>what the hardware architecture should and should not provide.

Bull puckey.  As one with 15 years of experience, half in hardware and half 
in software, I think this idea should slowly (no! quickly!) sink into the
ooze and go away.  It is VERY rare to find a software person who really 
understands the issues involved in hardware design, especially those 
involved in VLSI design.  If you don't know the constraints, you can't
make *effective* recommendations.

>	The one headache we could all have lived without, is segment registers,
>and nothing Intel can say or do will ever change that fact.

With the state of VLSI technology in 1975, I don't know that Intel had
that many options.  The 8086 was designed at a time when the microprocessor
market was still minuscule.  Some measure of 8080 compatibility was required
due to the marketing issues involved.  Segment registers allowed Intel to
produce a chip that was a clear step ahead from the available chips, while
still remaining well within the available VLSI fabrication capabilities of
the time.

>	In fact IBM chose Intel because they were a good takeover target
>at the time. Not because Intel had any handle on the marketplace at large.

If Intel were a good takeover target, then why didn't IBM take them over?
They've certainly shown little hesitation to do so with other companies that
fell within their marketing requirements.  Rolm, for example.

>	In one fell swoop, IBM released an inferior machine that became a
>standard, and saved Intel from going broke. Aren't we forever grateful
>to BIG BLUE.

Justify this statement.  I don't think you can.  Intel may not have done 
as well as it has, but its strengths across the entire product line would
have pretty much assured that it didn't go broke.

>Oh BIG BLUE, if only you would have known the Frankenstein you created!

Well, all in all, it seems to have been a pretty benign Frankenstein.  Just
ask the thousands (millions?) of people who have been, and are, using PCs,
8088 and all, to do real and productive work.  THEY don't care whether it's
an 8088, a 68000, or a Cray X-MP inside the box, just that it gets a job
done.



-- 
----------------
                 "... if the church put in half the time on covetousness
Mike Farren      that it does on lust, this would be a better world ..."
hoptoad!farren       Garrison Keillor, "Lake Wobegon Days"

guest@vu-vlsi.UUCP (visitors) (08/18/87)

In article <2764@hoptoad.uucp> farren@hoptoad.UUCP (Mike Farren) writes:
>Well, all in all, it seems to have been a pretty benign Frankenstein.  Just
>ask the thousands (millions?) of people who have been, and are, using PCs,
>8088 and all, to do real and productive work.  THEY don't care whether it's
>an 8088, a 68000, or a Cray X-MP inside the box, just that it gets a job
>done.

You can say that again!  Especially if you do little low-level programming,
you really couldn't care less what is inside of the thing.  As long
as it gets the job done, correctly in some cases, in a reasonable amount of
time.

 
==============================================================================
| Mark Schaffer        | BITNET: 164485913@vuvaxcom                          |
| Villanova University | UUCP:   ...{ihnp4!psuvax1,burdvax,cbmvax,pyrnj,bpa} |
| (Go Wildcats!)       |           !vu-vlsi!excalibur!164485913              |
==============================================================================
 
  please respond/reply to the above addresses and not to guest@vu-vlsi.UUCP

root@hobbes.UUCP (08/18/87)

+---- Robert Cain writes the following in article <79@LBI.UUCP> ----
| Brad Templeton writes:
| > In the real world, there are lots of other constraints in choosing a
| > chip than what the architecture looks like to software.

AMEN!

| 	The one headache we could all have lived without, is segment registers,
| and nothing Intel can say or do will ever change that fact.

Hindsight is always best.  Granted, segment registers can be a pain, but
AT THE TIME, they were very useful!  What were *you* doing with micros in
September of 1980 (when the PC was starting design life)?  CP/M was THE OS
for micros, 64K was HUGE - I managed several 6 user systems which ran
NorthStar BASIC with 2 floppy disk drives, a Z80, and 64K!  Actually, it
only had 56K, and of that, 8K was unused by the OS for anything!  The OS
was so configured as to allow each user a 4 or 5K workspace and rarely did
the students ever run out of room!  (How many of you can write a program
which solves 5 linear equations with 5 unknowns while not using more than
5K?  Including input and output routines?  I just tried it with C on a
Gould - the executable was 10K...  :-)

What has this to do with segments?  CP/M 3.0 and MP/M-II both used bank
selected memory (a 64K chunk was made up of (say) 56K of "unique" memory
and 8K of "shared" memory, sort of like the LIM memory boards in the PC do
today).  With this very hardware-dependent method, the OS was able to use
as much memory as was needed, just not very easily.  (I.e., to load a program
into bank 5, one had to switch to bank 0 where the OS was located, read in
the desired program, and copy 4K of it to a data buffer in the 8K of
"shared" memory.  When this was done, you had to switch to bank 5, copy the
4K to the right place in this bank, and go back to bank 0 to repeat this
process until everything was copied.)

Along came the 8086 family.  WOW!  DIRECT access to 1 Meg of memory!  No
more bank switching.  Compared to the current Minis out there (the pdp-11
family; Vaxen were still very new) this was hot stuff!  The pdp family was
limited to 256K, 512K, or 1Mb for various machines smaller than the 11/70.
Unix kernels were running about 80-100K in size (v6, and v7), so this CPU
was right in the middle of things.

The development machines of the time were mostly S-100 boxes.  The reliable
ones used static memory.  64K cost $795!  256KB cost $1695.  The dynamic
memory boards had timing problems when used with anything other than 2 to
4MHz Z80s, and they cost $350 per 64K ($1300/256K).  A full-blown 1 Meg
8086 system with two 8" floppies and a (gasp) 10MB hard disk would set you
back about $12,000.  I know, I built a few.  The 10MB Morrow hard disks
(19" RETMA rack mount, no less) were going for the low price of $2,950!
(Prices taken from BYTE, Vol. 7, No. 1, January 1982.)

Where were the others?  Perusing the Bible of the time, Adam Osborne's
series "An Introduction to Microcomputers", Volume 2, "Some Real
Microprocessors", 1980 update, and culling the recent traffic on
comp.sys.{intel,motorola,nsc} and the BYTEs from Oct to Dec 1981, I
find that:

  Fairchild had the F8, but it was geared for the embedded-processor
market.
  National had the SC/MP, but it, too, was suited for only simple
applications.  National's new CPU was nowhere close to ready - the 16032
wasn't readily available till 1982, and then it had so many bugs in it that
the only thing offered for it was a Forth OS kernel.
  Moto's new 68K was looking real good, but the peripheral chips were
nonexistent.  It wasn't even listed in the Osborne book!  On the other
hand, Dual was advertising an S-100 68K system with 256K and v7 Unix for $8295
(PLUS hard disk, monitors...).  Moto was marketing their chip in the "mini"
niche, and didn't want to be bothered by these hobbyists and their micros;
after all, wasn't the good old reliable 6800 good enough for them?  (The
MC6800 is a contemporary of Intel's 8080A.  In the Jameco/ACP/Jade...  ads
in BYTE you could get 6 MHz 8088s for $40; the ads had prices for 6800s but
NO mention of the 68K!  This in 1981, after the PC had been released!)
  Apple had shown that the 6502 from MOS Tech was a good processor; MOS had
a range of peripheral chips available, as well as TEN different versions
of the basic CPU!  But if IBM had gone with the MOS 6502 series, they
would have been the ones playing catch-up and johnny-come-lately.  Not a
position IBM ever wants to admit publicly.

| 	In fact IBM chose Intel because they were a good takeover target
| at the time.

Where are your facts to back this up?  Or is this an armchair assumption?
Sure, IBM bought Intel stock; wouldn't you, when a sizable percentage of your
income depended on their product?

|              Not because Intel had any handle on the marketplace at large.

Intel didn't have a handle...?  You're kidding, right?  EVERY CP/M system out
there (8080, 8085, or Z80) has Intel parts in it!  Intel CPUs, DMA chips,
USARTs, DRAM controllers, floppy controllers, and other "glue".  Many of
the new 68K boxes (CompuTex...) were built on Multibus (an Intel bus) cards!
They had their fingers in almost everyone's pie, and they knew it.

| 	In one fell swoop, IBM released an inferior  machine that became a
| standard

Sure, by TODAY'S standards it is inferior, but in 1980 it was a step above
the other stuff that was AVAILABLE.

| Oh BIG BLUE if only you would have known the Frankinstein you created!

They have sold much more than 5 MILLION of these beasts - figure a standard
35% markup from wholesale to retail, and you still get several billion dollars
in income!  Quite a profitable Frankenstein!  And that's what drives IBM:
profits!  Not state of the art, not segments, not CPU speed, but $$$$.

Sorry about SHOUTING so much, but let's try to realize that there is a BIG
difference between 1987 and 1980 in terms of computer power and knowledge.
In its time, Multics (which was a segmented OS) was at the forefront of
OS research, and segments were A Good Thing.  They looked promising and
were an easy extension from the 8080 for Intel.  If we remember that
hardware always lags the current state of the art in computer programming
*theory*, then we realize that the current crop of CPUs (68020, 80386, 32332)
is mirroring the theory of yesterday as well.  What's on the horizon?  The
68030, 80486, and the 32532.  And by the time we get them, they too will
be "old theory".  But without them, we wouldn't get anywhere!

Here's to tomorrow's CPUs; let's keep on improving them!

-- 
John Plocher uwvax!geowhiz!uwspan!plocher  plocher%uwspan.UUCP@uwvax.CS.WISC.EDU

gerard@tscs.UUCP (08/18/87)

In article <1924@Shasta.STANFORD.EDU> tang@Shasta.STANFORD.EDU (Kit Tang) writes:
>>> 	make use of their opportunity to escape from the Intel tar pit
>>> 	and use the MC68020 to make the PS/2 into a REAL computer?
>> 
>
>> One final note, the 386 has a 1meg segment limit.  What a limit.
>                                ^ 4G (I think)
>      Yes, one segment in the 386 is the same as the whole 68020.
>
>> What size segmentation would you like?
>      The current virtual address space the 386 can address is 64T, but
>      the huge address space has to be accessed through segment (4G max).
>      How about a single segment of 64T ?

Yes, the 80386 can address up to 4 gigabytes of physical RAM, and 64 terabytes
of virtual address space.

I don't think 4 gigabytes is the limiting factor here!

How many memory chips is 4 Gigabytes anyway?

Chip Size	  64Kx1		 256Kx1		1024Kx1
# Required	524,288		131,072		 32,768

Note: This does not include parity or ECC.

If 256K chips were $1.75 each (cheap) and the board and support logic were
free, that would be: $229,376.00 (hardly micro costs).

If it took only 5 seconds to install each chip, it would take 7.6 days of
continuous work to install 131,072 256K chips.

If a disk drive was large enough and transferred data at a rate of 1.25M
bytes/second (maximum xfer rate of SMD or ESDI drives), and you were the
only process on that drive: 55 minutes transfer time for 4 gigabytes.
I could see the message "core dumped - go to lunch", or on a system with
other users, "core dumped - come back tomorrow" :-).

Stacked on top of each other, a stack of 131,072 256K chips would be
1,547 feet 4.5 inches, or .29 miles, high (not counting pins).

131,072 256K chips in sockets would occupy 327.6 square feet of board space,
or 864 IBM AT sized PC boards (no support circuitry).

The power supply for 131,072 256K RAMs would require a 5 volt supply
capable of between 39,322 and 131,072 watts.  Not to mention how much air
conditioning would be required to keep it cool.

By the way, anyone have 64+ Terabytes of swap space?

Not to sell 4G physical and 64T virtual address space short, it is nice to
have limits that are extremely high.  This might make you think you have no
limits at all!  I think we need higher-density RAMs and larger/faster disk
storage devices before we could even think of reaching 4G of main memory.

Yes, an on-chip MMU does reduce the chip count, which is nice for the engineer.
A programmer, on the other hand, would welcome an additional eight 32-bit
registers.  Yes, the MC68020 has twice the number of 32-bit general-purpose
registers as the 80386.  If you consider the usage of the registers on each
machine, the 68020 comes out even further ahead.  The 68020 also has 4
gigabytes of physical address space.  The MC68030 does have a built-in MMU.

For a good laugh, you might like to read a Motorola publication entitled
"Motorola MC68020 Benchmark Report" (Publication BR322).  It is interesting
to see how Intel cheated on some of its benchmarks against Motorola chips.
I wouldn't want to buy a used car from Intel :-).  I tend to believe Motorola's
claims; if they were false, the lawyers at Intel would have a field day.
Also included in this report are the listings of the benchmark programs.

At least the 68020 can multiply 32 bit numbers :-).

------------------------------------------------------------------------------
Stephen Gerard  -  Total Support Computer Systems  -  Tampa  -  (813) 876-5990
UUCP: ...{codas, gatech}!usfvax2!tscs!gerard
US-MAIL: Post Office Box 15395 - Tampa, Florida  33684-5395

wtm@neoucom.UUCP (08/18/87)

<< various stuff about 80x8[8|6] vs. 680x0 and IBM, etc. >>

Anybody remember the truly awful IBM 9000 so-called laboratory
computer?  I'm not sure what it had inside; it seems like it might have
been a 68K chip.  It hit the market at about the same time as the
original PeeCee.

Regardless of the PeeCee, virtually everybody I knew that came in
contact with the IBM 9000 was pretty annoyed with it.  It was
totally incompatible with everything else in the world, including
everything from IBM itself.  IBM refused to sell us software.  They
sold it to us to ostensibly run our liquid chromatographs, but
waited two years to deliver the LC software.  Actually, IBM did
deliver some LC software, but it would only digitize a single
channel.  They had promised that the machine would be able to
handle two channels.  They never did manage
to deliver the assembler for it, which is why I'm not sure what
chip was really inside.  We struggled along trying to write our
software in a really rotten port of Microsoft MBASIC.  It had the
feature of yielding 2*8=16 while something like 1*16=15.75!!  The
internal representation was supposed to support 6 digits of
precision: oh well.

The 9000 also featured a color printer mechanism made by IDS (now
Data Products) that was unbelievably unreliable.  (Anybody ever
have to endure an IDS Paper Tiger?)  IBM has since sold their
laboratory instrument division to Nicolet and mercifully buried
the 9000.  Nicolet **gave** us a manual operating station to run
the LCs so that the IBM computer could be removed.  The IBM is now
used mainly to prop the door to the room open on hot days.

Oh, I just heard that it was definitely based on the 68000.  The
person looking over my shoulder also complained that the darn RAM
chips were soldered in, so they couldn't even be used for something
worthwhile like an Aboveboard.

--Bill
(wtm@neoucom.UUCP)

davidsen@steinmetz.steinmetz.UUCP (William E. Davidsen Jr) (08/19/87)

In article <138@tscs.UUCP> gerard@tscs.UUCP (system administrator) writes:
|For a good laugh, you might like to read a Motorola publication entitled
|"Motorola MC68020 Benchmark Report" (Publication BR322).  It is interesting
|to see how Intel cheated on some of it's benchmarks against Motorola chips.
|I wouldn't want to buy a used car from Intel :-).  I tend to believe Motorola's
|claims, as if they were false, the lawyers at Intel would have a field day.
|Also included in this report, are the listings of the benchmark programs.

I will quote some figures from Byte magazine, July 1987 issue. I
personally do not doubt the Intel benchmarks at all. If they chose to
select benchmarks which show the best points of their product, can you
seriously believe that Motorola is so dumb that they don't do the same
thing? 

test		68010	68020	68020	80386
		7.8MHz	16MHz	12.5MHz	16MHz
		1w/s	1w/s	0w/s	0.5w/s
		--	881@8	881@12	287@16

Fibonacci	 264.0	  71.6	  70.2	   3.1	time in sec
Float		 230.0	   4.2	   2.9	   5.4
Sieve		  64.7	  14.9	  12.8	   6.0
Sort		 111.3	  19.8	  12.6	   9.7
Savage		1884.3	   8.8	  24.8	  35.1

Whetstones	 574.0	2114.0	2702.0	3703.7	whetstones / sec

The only comment I would make on this is that the 80287 was used rather
than the 80387. This typically would improve the f.p. performance by at
least 2:1. The 80387 has much faster trig functions (I have seen figures
indicating 10:1), which would improve the Savage benchmark performance.
-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {chinet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

Isaac_K_Rabinovitch@cup.portal.com (08/19/87)

John Plocher uwvax!geowhiz!uwspan!plocher is to be thanked for an interesting
and informative summary of chip history.  I have to disagree with his
opinions on a couple of points.

First, the Z80 is not an Intel chip, it's Zilog, a company started by Intel
renegades.  I seem to recall that when CP/M was *the* micro OS, the Z80 was
*the* chip.

Second, while it's helpful of Plocher to debunk the conspiracy theories spouted
by people angry at IBM, it's also worth thinking about how the IBM PC and its
kin are standing in the way of progress in micro OSes and applications, and
flaming or excusing IBM for its decisions of 7 years ago won't change that.
Perhaps it would be worth talking about what we're gonna do about the problem,
rather than rehashing its politics.

phil@amdcad.AMD.COM (Phil Ngai) (08/20/87)

In article <182@hobbes.UUCP> root@hobbes.UUCP (John Plocher) writes:
>Hindsight is always best.  Granted, segment registers can be a pain, but
>AT THE TIME, they were very useful!  What were *you* doing with micros in
>September of 1980 (when the PC was starting design life)?

John does a decent job of defending IBM's choice of Intel's 8088 for
the PC. I'd like to add a few notes on the environment in which Intel
designed the 8088. (Of course, I hate small segments as much as anyone
but I suspect a lot of the youngsters on the net don't have enough
sense of history to understand why things are the way they are.)

These quotes are from William Davidow's book _Marketing High
Technology_.  He was senior vice president of sales and marketing for
Intel and has a Ph.D from Stanford. 

 "Les Vadasz and I had been co-general managers of the microprocessor
division in 1976 when the 8086 was being planned. At the time we
decided to make the product an extension of the then-successful 8080
family. That created some design problems, but they were more than
counterbalanced, in our opinion, by the resulting access to a large
existing software library. 
  The 8086 was introduced to the market in 1978. As the first
high-performance, fully supported 16-bit microprocessor, it had
quickly gained the top position in the market, capturing the lead from
older and less capable products supplied by TI and National."

I did a lab project in 1978 using the then new and expensive 16,384
bit dynamic RAM. I needed an 8-bit memory but the RAMs were so
expensive I could only afford one and I had to multiplex and
demultiplex 8 bits into 1 bit. From 1980, when I started working,
until about 1983, 65,536 bit dynamic RAMs were state of the art. 

Our era of cheap megabyte memories started a very short time ago.  The
8086 was designed over 11 years ago, in a time when 4,096 bytes of
static memory took up an entire S-100 card and dynamic memory was
regarded as unreliable.  The designers also had not had our experience
of more than a decade of watching memory sizes double every few years,
and offering a megabyte of address space must have seemed like quite a
bold idea.

Hey Clif, why don't you defend your own company?
-- 
I speak for myself, not the company.

Phil Ngai, {ucbvax,decwrl,allegra}!amdcad!phil or amdcad!phil@decwrl.dec.com

Isaac_K_Rabinovitch@cup.portal.com (08/20/87)

Re:  the 4-gig segments on the 80386.

In 1992 Fry's will be selling 1-gigabit chips for $25.  Don't laugh.  Remember
when everyone thought a megabyte was *huge*?

madsen@vijit.UUCP (Dave Madsen) (08/21/87)

In article <4633@iucs.UUCP>, bobmon@iucs.UUCP (RAMontante [condition that I not be identified]) writes:
| In article <234@etn-rad.UUCP> jru@etn-rad.UUCP (0000-John Unekis) writes:
| >        Let`s face it, If IBM hadn`t made the mistake of using an Intel
| >	[deleted]
| >	... "braindamaged"(Microsoft's own word) architecture of
| >	the 80286 ...
| 
| [flameflameflame]
| Braindamaged architectures for braindamaged companies... within the limits
| of my knowledge of IBM's S/360 and S/370 architectures (very narrow
| limits :-), the 80x86 architectures look strikingly similar -- like
| 1/25th scale model cars are similar to the real thing.  ...
| [deleted]
| RAMontante
| Computer Science Dept.	might be -->	bobmon@iucs.cs.indiana.edu
| Indiana University		or maybe -->	montante@silver.BACS.indiana.edu

I have been working with the IBM 360 style of architecture since the 360/50
and on into today.  However, I'm not painted Blue.  Nevertheless, I happen to
like that particular assembler and instruction set.  It is far more orthogonal
and flexible than the 80x86 stuff, and far more high-level.
The comparison made above is unfair to the 360.  I have worked on "micro"
architectures as well, including the (now ancient and defunct) IBM 1130, 8080,
Z80, etc., and find BY FAR that the most distasteful is the 80x86 stuff.
They are all "family", but as you go BACKWARDS they become easier
to work on.  I think that this is a clear case of "compatibility blues".
Most of today's problems have simply outgrown that architecture, and trying
to add "features" on top only makes the whole thing more unwieldy.
I would personally say that much elegance is derived from simplicity (or vice
versa?).  Would you say that the 80286 is either simple or elegant?
The problem is compounded by the goofiest assembler/linker that I'VE ever
worked with in my entire life.  I LIKE assembler (please, no religious wars;
I use and like both high- and low-level languages), and consider myself a
"bit-fiddler".  Nevertheless, I find that I AVOID working in 80x86 assembler
whenever possible.

I see many people slamming IBM lately.  While I VERY DEFINITELY do not like
some of IBM's recent products, I think some caution is in order here;
it is not an us-or-them situation.  We need to identify what ACTIONS and
PRODUCTS we feel are more or less in our own / our industry's best interest.
Berating the whole company is useless and doesn't accomplish anything.
(Unless we're paid to do it!)   <-- A joke.

(BTW, I work for an IBM competitor  :-) )

I read somewhere that technology moves ahead in 4 steps:
1)  An original idea or prototype.
2)  Idea developed into a usable product.  Refinements.
3)  Further refinements as the product matures.  There are diminishing 
    returns on each successive refinement.
4)  New idea breakthrough.  The new idea could supplant the old, but there
    is resistance from the people with an interest in the mature 
    technology.  Note that this interest need not be monetary in nature.
Nowhere are these steps seen more rapidly than in the area of computer
hardware.  Much of the contention in the industry is due to the fact that
hardly has one breakthrough happened before the next one arrives.  The
topics of debate change, but the furor does not.

[Now off the soapbox].


Dave Madsen   ---dcm

ihnp4!vijit!madsen    or    gargoyle.uchicago.edu!vijit!madsen

I sure can't help what my employer says; they never ask me first!

greg@gryphon.CTS.COM (Greg Laskin) (08/21/87)

In article <79@LBI.UUCP> bobc@LBI.UUCP (Robert Cain) writes:
>	In fact IBM chose Intel because they were a good takeover target
>at the time. Not because Intel had any handle on the marketplace at large.
>
>	In one fell swoop, IBM released an inferior  machine that became a
>standard , and saved Intel from going broke. Aren't we forever greatful
>to BIG BLUE.

IBM selected the 8088, in part, because Intel could manufacture sufficient
quantities of the part to meet IBM's requirement.  It's possible that they
moved away from the 8086 and 68000 to avoid impacting other areas of
their product line.

Several YEARS after the introduction of the PC, Intel was short of cash
because of the general malaise in the semiconductor market, and IBM
bought a substantial amount of Intel stock to keep Intel going and
maintain their supply of parts.  IBM has since sold off most of this
stock.

IBM has always meted out technology, incorporating just enough in their
product line to maintain their market position.  Intel seems to do
better at MARKETING microcomputer products than Motorola, an area in
which IBM also dominates its markets.

That said, it's OK to bash IBM all you want, because they really don't
care what you think; but you really ought to try to keep your facts
straight.



-- 
Greg Laskin   
"When everybody's talking and nobody's listening, how can we decide?"
INTERNET:     greg@gryphon.CTS.COM
UUCP:         {hplabs!hp-sdd, sdcsvax, ihnp4}!crash!gryphon!greg
UUCP:         {philabs, scgvaxd}!cadovax!gryphon!greg

rod@intelca.UUCP (Rod Skinner) (08/24/87)

>I will quote some figures from Byte magazine, July 1987 issue. I
>personally do not doubt the Intel benchmarks at all. If they chose to
>select benchmarks which show the best points of their product, can you
>seriously believe that Motorola is so dumb that they don't do the same
>thing? 

According to COMPAQ's documentation, the 287 is executing at 8MHz, with
the appropriate logic to slow down the CPU access to the 287.  The Model
80 numbers from the August issue of BYTE have been added (as well as the
corrections discovered; even impartial benchmarkers make mistakes).
COMPAQ Fibo was 53.1 seconds and the Float was 4.41 seconds.


FROM BYTE:
                                        vvvvv   vvvvv   vvvvv

test		68010	68020	68020	68020	80386	80386
		7.8MHz	16MHz	12.5MHz	15.7MHz	16MHz	16MHz
					MAC-II	COMPAQ	IBM
		1w/s	1w/s	0w/s	1w/s	0.5w/s	1w/s
		--	881@8	881@12	881@15	287@8	387@16

Fibonacci	 264.0	  71.6	  70.2	  83.7	  53.1	57.4	time in sec
Float		 230.0	   4.2	   2.9	   2.7	   4.4	 0.5
Sieve		  64.7	  14.9	  12.8	  16.7	   6.0	 6.5
Sort		 111.3	  19.8	  12.6	  22.4	   9.7	 9.5
Savage		1884.3	   8.8	  24.8	   5.4	  35.1	19.2

Dhrystones	 574.0	2114.0	2702.0	2083.0	3703.7	3125.0	dhrys/sec

>The only comment I would make on this is that the 80287 was used rather
>than the 80387. This typically would improve the f.p. performance by at
>least 2:1. The 80387 has much faster trig functions (I have seen figures
>indicating 10:1), which would improve the Savage benchmark performance.
>-- 
>	bill davidsen		(wedu@ge-crd.arpa)
>  {chinet | philabs | seismo}!steinmetz!crdos1!davidsen
>"Stupidity, like virtue, is its own reward" -me

The Float numbers that BYTE measured were very close to the 10x numbers
that you might see moving from the 287-8 to a 387-16.  The Savage numbers,
however, are puzzling.  The numbers are not even "frequency scaled".  So
when we pulled out our version of the Metaware High C compiler and executed
the Savage benchmark, our numbers did not match.  Without optimizations,
the number was 6.0 seconds on the Model 80, only using the "387" switch.

The same Savage benchmark executing on the Intel 386/24 Multibus I board
with UNIX/386 System V and the Greenhills C 1.8.2H compiler took 3.9 seconds.
The difference in these two compilers is their ability to generate inline
transcendentals versus calling a subroutine.  The Metaware High C compiler
makes a call to process the exponential used in the Savage code while the
Greenhills compiler processes it inline.

The last time I saw the Mot Benchmark Report BR322, it was dated October 1986
and used information that Intel published in April 1985.  Not what I would 
call current information.

Rod Skinner
Santa Clara Microcomputer Division
Intel Corp.   MS SC4-40
3065 Bowers Avenue
Santa Clara, CA 95051 
{mipos3,hplabs,oliveb,qantel}!intelca!rod

doug@edge.UUCP (Doug Pardee) (08/25/87)

> First, the Z80 is not an Intel chip, it's Zilog, a company started by Intel
> renegades.  I seem to recall that when CP/M was *the* micro OS, the Z80 was
> *the* chip.

Indeed so.  And the reason the Z-80 was so popular was that it had the 7-bit
on-chip refresh counter, so that you could easily design a system which used
those huge huge huge unbelievably enormous 16K dynamic RAMs instead of the
usual 1K static RAMs!
-- 
Doug Pardee, Edge Computer Corp; ihnp4!mot!edge!doug, seismo!ism780c!edge!doug

peter@aucs.UUCP (Peter Steele) (08/25/87)

I would think the "fairest" benchmarks should compare the following chips:

    68000 <--> 8086
    68008 <--> 8088
    68010 <--> 80186
    68020 <--> 80286
    68030 <--> 80386

And from what I hear, the 68030 blows the 80386 out of the water...


Peter W. Steele     UUCP     : {uunet|watmath|utai|garfield}!dalcs!aucs!Peter
Acadia University   BITNET   : Peter@Acadia
Wolfville, N.S.     Internet : Peter%Acadia.BITNET@WISCVM.WISC.EDU
Canada  B0P 1X0     PHONEnet : (902) 542-2201x121

jwhitnel@csib.UUCP (Jerry Whitnell) (08/25/87)

In article <257@etn-rad.UUCP| jru@etn-rad.UUCP (0000-John Unekis) writes:
| FLAME ON!!!----
|
| ...
|
|I would hate to impugn the reputation of a Magazine like BYTE by suggesting
|that they would publish a test where the results were deliberatley falsified,
|but it is true that the best way to make INTEL look better than Motorola
|is to LIE.

It looks like BYTE uses the standard Intel method of benchmarking.  Take the
best 80386 compiler you can find, then take a mediocre '020 compiler, run
them against each other, and call it a CPU vs. CPU test.  For two independent
sets of benchmarks, see IEEE Micro Vol. 6, No. 4, pp. 53-58, "A Benchmark
Comparison of 32-bit Microprocessors", and IEEE Micro Vol. 7, No. 3, "A
Synthetic Instruction Mix for Evaluating Microprocessor Performance".  The
former shows (using the EDN benchmarks) that the '020 is about 1.75 times
faster than the 386.  I've posted a summary of the results from these
articles to comp.sys.mac.


Jerry Whitnell                           It's a damn poor mind that can only
Communication Solutions, Inc.            think of one way to spell a word.
						-- Andrew Jackson

anton@utai.UUCP (08/26/87)

In article <416@aucs.UUCP> peter@aucs.UUCP (Peter Steele) writes:
>I would think the "fairest" benchmarks should compare the following chips:
>
>    68000 <--> 8086
>    68008 <--> 8088
>    68010 <--> 80186
>    68020 <--> 80286
>    68030 <--> 80386
>
>And from what I hear, the 68030 blows the 80386 out of the water...
>
>
>Peter W. Steele     UUCP     : {uunet|watmath|utai|garfield}!dalcs!aucs!Peter
>Acadia University   BITNET   : Peter@Acadia
>Wolfville, N.S.     Internet : Peter%Acadia.BITNET@WISCVM.WISC.EDU
>Canada  B0P 1X0     PHONEnet : (902) 542-2201x121


I cannot see the logic behind this statement.

I THINK that we should compare current production hardware.  I THINK that
we should compare similarly priced systems like the new Macs and 386
machines, because that is what can be had right now and not in the year
2001.  

In conclusion, have you noticed that the micro wars ended abruptly when
the 386 benchmarks were published?  I guess this is because people who
have something to prove have no more fiction to go on.  What we have
in the preceding message is a futile attempt to throw some sand,
in the form of the 68030, into our eyes.

feg@clyde.UUCP (08/28/87)

In article <4048@utai.UUCP>, anton@utai.UUCP (Anton Geshelin) writes:
> In article <416@aucs.UUCP> peter@aucs.UUCP (Peter Steele) writes:
> >I would think the "fairest" benchmarks should compare the following chips:
> >    68000 <--> 8086
> >    68008 <--> 8088
> >    68010 <--> 80186
> >    68020 <--> 80286
> >    68030 <--> 80386
> >And from what I hear, the 68030 blows the 80386 out of the water...
> 
> I cannot see the logic behind this statement.
> I THINK that we should compare current production hardware.  I THINK that
> we should compare similarly priced systems like the new Macs and 386
> machines, because that is what can be had right now and not in the year
> 2001.  
> What we have in the preceeding message is a futile attempt to throw some sand
> in the form of 68030 into our eyes.

  Amen. We could go like this forever. 68040, 68050, 68060, 80486, 80586
  ad infinitum, ad nauseam.  But there are millions of "brain-damaged" PC's
  and hundreds of thousands of Macs.  As several others have mentioned,
  the overriding factor in Intel's design was prior software overhang.
  They, like IBM before them, had their eyes on that.  People-cost in
  software is a far greater aggregate than any goodies made available
  by technical smarts in a CPU that breaks that software inventory.
  The smartest decision IBM ever made was to force all succeeding
  computer designs to run preceding software, beginning with the
  360.  Whether we programmer hackers like it or not, that is going
  to be the way of the future.  Whose micro PC has recently been
  fixed to run the other guy's software? Was it the Mac or the IBM PC?
  You get one guess.

Forrest Gehrke

peter@aucs.UUCP (08/28/87)

in article <4048@utai.UUCP>, anton@utai.UUCP (Anton Geshelin) says:
} 
} In article <416@aucs.UUCP> peter@aucs.UUCP (Peter Steele) writes:
}>I would think the "fairest" benchmarks should compare the following chips:
}>
}>    68000 <--> 8086
}>    68008 <--> 8088
}>    68010 <--> 80186
}>    68020 <--> 80286
}>    68030 <--> 80386
}>
}>And from what I hear, the 68030 blows the 80386 out of the water...
}>
...
} have something to proove have no more fiction to go on.  What we have
} in the preceeding message is a futile attempt to throw some sand
} in the form of 68030 into our eyes.

I posted this because I read it somewhere a while back.  I was
only agreeing, and I had no sand in my hands when I posted it.


Peter W. Steele     UUCP     : {uunet|watmath|utai|garfield}!dalcs!aucs!Peter
Acadia University   BITNET   : Peter@Acadia
Wolfville, N.S.     Internet : Peter%Acadia.BITNET@WISCVM.WISC.EDU
Canada  B0P 1X0     PHONEnet : (902) 542-2201x121

davidsen@steinmetz.steinmetz.UUCP (William E. Davidsen Jr) (08/28/87)

In article <257@etn-rad.UUCP> jru@etn-rad.UUCP (0000-John Unekis) writes:
|In article <7042@steinmetz.steinmetz.UUCP> davidsen@crdos1.UUCP (bill davidsen) writes:
|>....
|>I will quote some figures from Byte magazine, July 1987 issue. I
................ and I did
|first off - those were dhrystones, not whetstones, one is an integer test,
|the other is floating point.
................ true. I had been doing a whetstone test before reading
................ my mail and my fingers didn't switch gears.
|
|       I notice that you conveniently forgot the worst 
|       Intel column
|	 80286
|	 8Mhz
|	 1w/s
|	 ----
|Fib       950
|Float     116.36
|Seive     26.71
|Sort      46.53
|Savage    1103.0
................ Actually I meant to leave out the 68010 results, too.
The original article was about 68020 vs. 80386. The results were not
germane to the point I was making.
... blithering deleted

|I would hate to impugn the reputation of a Magazine like BYTE by suggesting
|that they would publish a test where the results were deliberatley falsified,
|but it is true that the best way to make INTEL look better than Motorola
|is to LIE.
.... my original point was that the benchmarks are not lies just because
Intel chose them carefully, and I stand by it.
Obviously you would rather slander BYTE than admit that the 80386 was
faster in these benchmarks. Moreover you were so eager to disagree with
the results that you totally missed the point, which is the validity of
the benchmarks, not "my CPU is better than your CPU". I have both CPUs,
and I can't seem to care about benchmarks.
-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {chinet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

davidsen@steinmetz.steinmetz.UUCP (William E. Davidsen Jr) (08/28/87)

In article <416@aucs.UUCP> peter@aucs.UUCP (Peter Steele) writes:
|I would think the "fairest" benchmarks should compare the following chips:
|
|    68000 <--> 8086
|    68008 <--> 8088
....fair but probably meaningless
|    68010 <--> 80186
....the 80186 will be quite a bit faster than the 8086, due to new
instructions. The 68010 was mainly released to allow demand paging, and
it is not much of an improvement. This is probably not very fair to the
68010.
|    68020 <--> 80286
....this is a 32-bit bus vs. a 16-bit bus, and linear vs. segmented
addressing. I think the 80386 is a better comparison in this case.
|    68030 <--> 80386
....I would expect the 68030 to be faster. When the 486 comes out it is
supposed to have separate data and instruction buses, including address,
so it *should* be a better comparison.
|
|And from what I hear, the 68030 blows the 80386 out of the water...
....the only benchmarks I have seen indicate that the 68030 is about
40-60% faster with the same speed memory. I'm not sure what the
price/performance ratio is.
|
|Peter W. Steele     UUCP     : {uunet|watmath|utai|garfield}!dalcs!aucs!Peter

It's very hard to compare these processors exactly, since their features
don't match. For many things the 80386 is better because it's so much
cheaper as a package. The <$3000 AT style machines will run Xenix/386
and still stay under $5k for hardware and software for a "personal
machine" sized system with 2-3MB memory and a few serial ports for uucp.
-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {chinet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

jru@etn-rad.UUCP (John Unekis) (08/29/87)

>
>In conclusion, have you noticed that the micro wars ended abruptly when
>the 386 benchmarks were published? I guess this is because people who
>have something to prove have no more fiction to go on.  What we have
>in the preceding message is a futile attempt to throw some sand
>in the form of 68030 into our eyes.
...
  I see that you did not notice my counter-posting to the benchmarks.
  The tests in BYTE magazine produced extremely misleading results, to the
  point of being completely falsified. What was timed by the authors in BYTE
  magazine was the delta time between the start and stop messages their
  program printed out. Since they were running under a special 386 protected
  mode software package, the start message was probably significantly delayed.
  This gave the impression that the software ran in seconds, when the actual
  elapsed time was probably over a minute. To confirm this I ran their
  fibonacci sequence routine on a 16 MHz 80386/80287 and on a 12 MHz 68020
  with 68881. The 80386 was running UNIX V in 32 bit mode, and so was the
  68020. The timing was done with the UNIX time utility, which measures actual
  CPU usage, versus the BYTE article, which used a hand-held stopwatch.

  Under these much more accurate conditions the fibonacci test ran
  in 69.1 sec on the 12 MHz 68020/68881 and in 60.3 sec on the 16 MHz
  80386/80287. Even with a 25% clock speed advantage the 80386 was
  only 12% faster. (BYTE said the 80386 would do it in 3.1 sec.)

  The conclusions are that accurate benchmarks are hard to perform,
  that you should always double check results on other machines,
  and that any test which shows that much of a performance lead 
  for INTEL has obviously been RIGGED.

zentrale@rmi.UUCP (08/30/87)

In article <7145@steinmetz.steinmetz.UUCP> davidsen@crdos1.UUCP (bill davidsen) writes:
[...]
: don't match. For many things the 80386 is better because it's so much
: cheaper as a package. The <$3000 AT style machines will run Xenix/386
: and still stay under $5k for hardware and software for a "personal
: machine" sized system with 2-3MB memory and a few serial ports for uucp.
[...]
       ... and e.g. a SUN 3/50 with 4 MB (w/o HD)    < $5000
	   140 MB HD/60MB Streamer               < $2500
incl. fortran, pascal, c, windowing, tcp/ip, ethernet .....
So "other" workstations come in at a reasonable price class.

: -- 
: 	bill davidsen		(wedu@ge-crd.arpa)
:   {chinet | philabs | seismo}!steinmetz!crdos1!davidsen
: "Stupidity, like virtue, is its own reward" -me

*****************************************************************
* addresses:  uucp   rmohr@rmi.uucp       rmohr@unido.bitnet    *
*****************************************************************

jru@etn-rad.UUCP (John Unekis) (08/31/87)

In article <7144@steinmetz.steinmetz.UUCP> davidsen@crdos1.UUCP (bill davidsen) writes:
>In article <257@etn-rad.UUCP> jru@etn-rad.UUCP (0000-John Unekis) writes:
>|In article <7042@steinmetz.steinmetz.UUCP> davidsen@crdos1.UUCP (bill davidsen) writes:
>|>....
>Obviously you would rather slander BYTE than admit that the 80386 was
>faster in these benchmarks. Moreover you were so eager to disagree with
>the results that you totally missed the point, which is the validity of
>the benchmarks, 
....... 
No, I did not miss the point; you deleted from my posting the results
showing the times I got when I reran the BYTE benchmarks under
controlled conditions. My objection to the BYTE benchmarks was that
they did not measure actual CPU usage; they used a stopwatch to
measure the time between printed messages from their test program. The
80386 benchmarks were run under a special 32 bit protected mode software 
environment which I believe interfered with the printing of these messages
and thus invalidated the timing of the test. 

I reran the BYTE fibonacci sequence test under controlled conditions, and
instead of the 20 to 1 advantage that BYTE claimed for the 80386, an
accurately measured (using the UNIX time command) test showed that a
16 MHz 80386 was only 12% faster than a 12 MHz 68020.

I therefore have posted the results of this counter-benchmark in the hope
of showing how badly inaccurate benchmarks can distort performance figures.

I routinely use several CPUs in my work, including the 80386, 8088, 80286,
68020, 32032 and some lesser-known processors. None of them has any
advantage that would give it a greater than one order of magnitude speed
edge in real-life situations. I prefer the 68020 to the Intel family
because it does not require me to deal with segmentation.

A little light-hearted hand-to-hand verbal combat in defense of one's 
favorite CPU is all in good fun. But I refuse to sit still when grossly
inaccurate tests are presented as gospel truth.

---------------------------------------------------------------------

disclaimer: You can't blame anyone but me for what I do.
ihnp4!wlbr!etn-rad!jru

dbercel@toto.UUCP (08/31/87)

In article <12990@clyde.ATT.COM> feg@clyde.ATT.COM (Forrest Gehrke) writes:
>
>  .................................................... People-cost in
>  software is a far greater aggregate than any goodies made available
>  by technical smarts in the CPU, which breaks that software inventory.
>
>Forrest Gehrke

Indeed, I recall reading an IBM report that said for every penny ($0.01)
they spend on hardware R & D they had to spend 100M ($100,000,000.00) in
software R & D.

danielle

davidsen@steinmetz.UUCP (09/01/87)

In article <701@rmi.UUCP> rmohr@rmi.UUCP (Rupert Mohr) writes:
|In article <7145@steinmetz.steinmetz.UUCP> davidsen@crdos1.UUCP (bill davidsen) writes:
|[...]
|: don't match. For many things the 80386 is better because it's so much
|: cheaper as a package. The <$3000 AT style machines will run Xenix/386
|: and still stay under $5k for hardware and software for a "personal
|: machine" sized system with 2-3MB memory and a few serial ports for uucp.
|[...]
|       ... and e.g. a SUN 3/50 with 4 MB (w/o HD)    < $5000
|	   140 MB HD/60MB Streamer               < $2500
|incl. fortran, pascal, c, windowing, tcp/ip, ethernet .....
|So "other" workstations come in at a reasonable price class.

This does not appear to be quite true. The price includes a "right to
use" license. Average users do not have an ethernet tap from which to
download their software. It also does not include the manuals, and
update service.

If you are at a site which has lots of Suns on an enet, the prices are
correct. For people who have to buy software and manuals, I believe
you're off by about $1500 at least. Sun is quite possibly the cheapest
major brand BSD system, but it is quite pricey for a personal UNIX
system, which, I believe, was the original question.
-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {chinet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

davidsen@steinmetz.UUCP (09/01/87)

In article <265@etn-rad.UUCP> jru@etn-rad.UUCP (0000-John Unekis) writes:
|In article <7144@steinmetz.steinmetz.UUCP> davidsen@crdos1.UUCP (bill davidsen) writes:
|>In article <257@etn-rad.UUCP> jru@etn-rad.UUCP (0000-John Unekis) writes:
|>|In article <7042@steinmetz.steinmetz.UUCP> davidsen@crdos1.UUCP (bill davidsen) writes:

|[...]
|I reran the BYTE fibonacci sequence test under controlled conditions, and
|instead of the 20 to 1 advantage that BYTE claimed for the 80386, an
|accurately measured(using UNIX time command)test showed that a 16Mhz 80386 was
|only 12% faster than a 12Mhz 68020. 

My original posting has been expired, but I believe that the 12MHz
machine also outran the 16MHz 68020, due to no wait states. I hope to be
able to run the benchmark on two Suns and two 386's at known CPU speeds
and wait states. I'll post the results here and leave the topic of
benchmarking alone until then.
-- 
	bill davidsen		(wedu@ge-crd.arpa)
  {chinet | philabs | seismo}!steinmetz!crdos1!davidsen
"Stupidity, like virtue, is its own reward" -me

phil@amdcad.UUCP (09/01/87)

In article <16481@toto.uucp> dbercel@sun.UUCP (Danielle Bercel, MIS Systems Programming) writes:
>Indeed, I recall reading an IBM report that said for every penny ($0.01)
>they spend on hardware R & D they had to spend 100M ($100,000,000.00) in
>software R & D.

Is this more misinformation from the woman who told us that Motorola
invented the 6502? Let's see, IBM is about $50 billion annual gross
revenue. Even if they spent all of it on software R&D, the claimed
ratio would put their total annual hardware R&D investment at $5.

Yup, sounds like more misinformation. Danielle, you ought to get your
memory checked. 

-- 
I speak for myself, not the company.

Phil Ngai, {ucbvax,decwrl,allegra}!amdcad!phil or amdcad!phil@decwrl.dec.com

wrp@krebs.acc.virginia.edu (Wm Pearson) (09/02/87)

In article <701@rmi.UUCP> rmohr@rmi.UUCP (Rupert Mohr) writes:
>In article <7145@steinmetz.steinmetz.UUCP> davidsen@crdos1.UUCP (bill davidsen) writes:
>: don't match. For many things the 80386 is better because it's so much
>: cheaper as a package. The <$3000 AT style machines will run Xenix/386
>       ... and e.g. e SUN 3/50 with 4 MB(w/o) HD < $ 5000
>	   140 MB HD/60MB Streamer               < $ 2500 *****
						===============
>
	I would sure like to know where to get a 140 MB hard disk for
a Sun 3/50 for <$2500 (forget the tape).  Sun charges us $7900 - 20%.

Bill Pearson
wrp@virginia.EDU

merchant@dartvax.UUCP (Peter Merchant) (09/02/87)

In article <18165@amdcad.AMD.COM>, Phil Ngai writes:
> ... Let's see, IBM is about $50 billion annual gross
> revenue. If they spent it all on software R&D that would give a total
> annual investment of $5 on hardware R&D. 

Yup.  That sounds about right...
:^D :^D :^D :^D :^D :^D :^D :^D
--
"Oh Lord,                                 Peter Merchant (merchant@dartvax.UUCP)
 I am so tired.
 How long can this go on?"

jru@etn-rad.UUCP (John Unekis) (09/03/87)

In article <7181@steinmetz.steinmetz.UUCP> davidsen@crdos1.UUCP (bill davidsen) writes:
>In article <265@etn-rad.UUCP> jru@etn-rad.UUCP (0000-John Unekis) writes:
>|In article <7144@steinmetz.steinmetz.UUCP> davidsen@crdos1.UUCP (bill davidsen) writes:
>|>In article <257@etn-rad.UUCP> jru@etn-rad.UUCP (0000-John Unekis) writes:
>|>|In article <7042@steinmetz.steinmetz.UUCP> davidsen@crdos1.UUCP (bill davidsen) writes:
>
>|[...]
>|I reran the BYTE fibonacci sequence test under controlled conditions, and
>|instead of the 20 to 1 advantage that BYTE claimed for the 80386, an
>|accurately measured(using UNIX time command)test showed that a 16Mhz 80386 was
>|only 12% faster than a 12Mhz 68020. 
>
>My original posting has been expired, but I believe that the 12MHz
>machine also outran the 16MHz 68020, due to no wait states. I hope to be
>able to run the benchmark on two Suns and two 386's at known CPU speeds
>and wait states. I'll post the results here and leave the topic of
>benchmarking alone until then.
>-- 
Well Bill,

   It looks like we have quite a run of articles here. If you would check the
   Sept. 1987 issue of BYTE, you will notice that they have:

   A) Apologized for multiple errors in their benchmark procedure.

   B) Redone the benchmarks under more accurate conditions.

   C) Admitted that the 3.1 sec fibonacci sequence takes up to
      57 seconds to run on their 80386.

   Why don't we move this discussion to email and give the net a rest.
   I'm at {ihnp4 or voder}!wlbr!etn-rad!jru

   --------------------------------------------------------------------
   Disclaimer: these opinions are mine, all mine

caf@omen.UUCP (Chuck Forsberg WA7KGX) (09/04/87)

In article <16481@toto.uucp> dbercel@sun.UUCP (Danielle Bercel, MIS Systems Programming) writes:
:Indeed, I recall reading an IBM report that said for every penny ($0.01)
:they spend on hardware R & D they had to spend 100M ($100,000,000.00) in
:software R & D.
:
:danielle

They didn't spend that $$$ cleaning up DOS bugs.