[comp.arch] IBM PC prehistory

seeger@manatee.cis.ufl.edu (F. L. Charles Seeger III) (12/22/89)

In article <33896@mips.mips.COM> keith@mips.COM (Keith Garrett) writes:
|In article <1546@aber-cs.UUCP> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
|>In case you have not understood, I liked the Z8000 a lot. If only IBM had
|>chosen it instead of the 8088/8086... If only Zilog had managed to do an MMU
|>and restartable instructions soon enough...
|the MMU was fairly early, and the virtual memory changes turned out to be
|relatively easy. the development tools were very late. i think the lack of
|tools, and a weak marketing effort are what did in the z8000.

I think that IBM chose Intel for largely business reasons, like availability
of second sources for parts and the fact that Zilog was owned by another major
corporation that was a potential competitor.  They could pretty much count on
dominating a small company like Intel.  Could be wrong, though.  The IBM PC
has been *much* more successful than the original IBM designers ever thought
that it would be.  It was designed very quickly, so the choice may have been
the preference of one of the designers.  Does anyone with inside IBM
information care to enlighten us?  Also, is that story about Gary Kildall
blowing off a meeting with the IBMers to go flying true?

Chuck
--
  Charles Seeger    E301 CSE Building        +1 904 335 8053
  CIS Department    University of Florida
  seeger@ufl.edu    Gainesville, FL 32611

johnl@esegue.segue.boston.ma.us (John R. Levine) (12/22/89)

In article <21559@uflorida.cis.ufl.EDU> seeger@manatee.cis.ufl.edu (F. L. Charles Seeger III) writes:
>I think that IBM chose Intel for largely business reasons, like availability
>of second sources for parts ...

My understanding is that the PC was originally supposed to be a CP/M Z-80 box.
Fairly late in the design they decided to go with a 16-bit chip and at the
time the 8088 was the only 16 bit processor that would run on an 8 bit bus.
Who knows, if the 68008 were ready they might have used that.
-- 
John R. Levine, Segue Software, POB 349, Cambridge MA 02238, +1 617 864 9650
johnl@esegue.segue.boston.ma.us, {ima|lotus|spdcc}!esegue!johnl
"Now, we are all jelly doughnuts."

gillies@p.cs.uiuc.edu (12/24/89)

> Fairly late in the design they decided to go with a 16-bit chip and at the
> time the 8088 was the only 16 bit processor that would run on an 8 bit bus.

My understanding was that 8086's cost $50+, but 8088's cost $25-$30,
so they adopted the 8088.

crisp@mips.COM (Richard Crisp) (12/24/89)

The story I heard was that the 68K was under serious consideration. MOTO
was unwilling to commit to the volumes that IBM wanted (the 68K was just getting
the bugs out and was starting to build a little volume; the 68010 was in
design). The 8086/8088 was simply more mature by about 6 months to 1 year.
BTW, the 68010 was designed to some extent to be used in the XT/370 machine
that you may be familiar with. There was a more radically modified version
of the 68K that was also used in that box. I seem to remember the 68010 being
called the "Cascadilla Minor" with the more radically modified version (never
sold to anyone but IBM) being called the "Cascadilla Major".
Maybe someone else that worked at MOTO at that time remembers more details.

-- 
Just the facts Ma'am

bruceh@brushwud.sgi.com (Bruce R. Holloway) (12/27/89)

In article <76700097@p.cs.uiuc.edu>, gillies@p.cs.uiuc.edu writes:
> 
> My understanding was that 8086's cost $50+, but 8088's cost $25-$30,
> so they adopted the 8088.

As I recall, both the '86 & the '88 have multiplexed address & data busses,
so they are identical packages, and they must have almost identical dies.
I can't believe Intel priced them so differently.

The reason IBM chose the 8-bit model was to keep the expansion bus only
8 bits wide.  This saved them on connectors & buffers & drivers, & made
every board that plugged in cheaper as well.  It made memory expandable
in smaller chunks, too.

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (12/27/89)

In article <47021@sgi.sgi.com> bruceh@brushwud.sgi.com (Bruce R. Holloway) writes:

| The reason IBM chose the 8-bit model was to keep the expansion bus only
| 8 bits wide.  This saved them on connectors & buffers & drivers, & made
| every board that plugged in cheaper as well.  It made memory expandable
| in smaller chunks, too.

  Since I don't have access to IBM thinking at that time I can't say
your last statement is wrong, but since small memory parts were
available at that time (and cheaply), it would be easy to make smaller
expansion boards. I doubt that IBM was worried about how SMALL they
could go.

  I'm most surprised at the parity. That was IBM's big contribution to
PC technology. Up to then all the PCs were just 8 bits, and most of us
who wanted reliability ran static RAM instead of dynamic. I still have
an S100 system with about 1.5MB of CMOS static in 4Kx8 packages as I
recall. Same pinout as a 32k ROM, allowing installation of firmware as
desired.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
"The world is filled with fools. They blindly follow their so-called
'reason' in the face of the church and common sense. Any fool can see
that the world is flat!" - anon

mjt@mcnc.org (Michael Tighe) (12/27/89)

In article <21559@uflorida.cis.ufl.EDU> seeger@manatee.cis.ufl.edu (F. L. Charles Seeger III) writes:

> I think that IBM chose Intel for largely business reasons...

At the time, IBM owned a large percentage of Intel stock (> 20%). Because
of this, it is unlikely they would have gone to another chip maker.

-- 
Michael Tighe, mjt@ncsc.org

crisp@mips.COM (Richard Crisp) (12/28/89)

Gee I thought that IBM bought the Intel stock after they adopted the Intel
                                              ^^^^^
architecture. I seem to remember the big stock deal having been made in 1983.
Perhaps someone else remembers more precisely.

-- 
Just the facts Ma'am

a186@mindlink.UUCP (Harvey Taylor) (12/28/89)

In <244@dg.dg.com>, rec@dg.dg.com (Robert Cousins) writes:
|
| On a slightly different note, since we are talking about ancient
| history, what was the name of the guy at Seattle Computer who wrote
| MS-DOS before it was sold to Microsoft?
|
   Does Tim Paterson ring a bell? I heard the guy talk at a PCCFA fair
 in Vancouver (BC) in about 1984. Very interesting, precise & motor-mouth
 speaker. As I recall he just wanted a CP/M for the 8086, which I guess
 is where the MSDOS = CPM on steroids signature originated. ;-)

 "Don't worry about the bum on the skids;
                   Take a look at the bum on the plush." -Utah Philips
      Harvey Taylor      Meta Media Productions
       uunet!van-bc!rsoft!mindlink!Harvey_Taylor
               a186@mindlink.UUCP

bruceh@brushwud.sgi.com (Bruce R. Holloway) (12/28/89)

In article <1957@crdos1.crd.ge.COM>, davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:
> 
>   Since I don't have access to IBM thinking at that time I can't say
> your last statement is wrong, but since small memory parts were
> available at that time (and cheaply), it would be easy to make smaller
> expansion boards. I doubt that IBM was worried about how SMALL they
> could go.
> 
>   I'm most surprised at the parity. That was IBM's big contribution to
> PC technology. Up to then all the PCs were just 8 bits, and most of us
> who wanted reliability ran static RAM instead of dynamic. I still have
> an S100 system with about 1.5MB of CMOS static in 4Kx8 packages as I
> recall. Same pinout as a 32k ROM, allowing installation of firmware as
> desired.

Oh my, you are indeed not the kind of customer they were aiming at!

I didn't mean that IBM cared about implementing memory with 4Kx8's or
'374's or whatever, only that back when 64Kx8 or 256Kx8 of DRAM was
still somewhat expensive, it would be a significant economic advantage
to be able to buy only that much, instead of having to spring for 64Kx16
or 256Kx16.  Still speculation on my part--I never worked for 'em.

bruceh@brushwud.sgi.com (Bruce R. Holloway) (12/28/89)

In article <5946@alvin.mcnc.org>, mjt@mcnc.org (Michael Tighe) writes:
> In article <21559@uflorida.cis.ufl.EDU> seeger@manatee.cis.ufl.edu (F. L. Charles Seeger III) writes:
> 
> > I think that IBM chose Intel for largely business reasons...
> 
> At the time, IBM owned a large percentage of Intel stock (> 20%). Because
> of this, it is unlikely they would have gone to another chip maker.

Of course, IBM bought that interest in anticipation of contributing
substantially to the growth of Intel.

rec@dg.dg.com (Robert Cousins) (12/28/89)

In article <1957@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:
>  I'm most surprised at the parity. That was IBM's big contribution to
>PC technology. Up to then all the PCs were just 8 bits, and most of us
>who wanted reliability ran static RAM instead of dynamic. I still have
>an S100 system with about 1.5MB of CMOS static in 4Kx8 packages as I
>recall. Same pinout as a 32k ROM, allowing installation of firmware as
>desired.
>-- 
>bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
>"The world is filled with fools. They blindly follow their so-called
>'reason' in the face of the church and common sense. Any fool can see
>that the world is flat!" - anon

There were a number of S-100 machines which had parity or ECC on them.
Piceon sold an ECC memory card which was pretty nice as I recall.  Aside
from that, a large number of other machines had parity memory. Some names
which come to mind include CSI, MuSys, Earth, and CCS.

Your machine sounds as if it is a Compupro.  Bill Godbout was always
arguing for static RAM over DRAM.  Is Compupro still around? They had
one of the first systems which could run either CP/M or CP/M 86 software.
Their CPU card had both an 8085 and an 8088 on it and was rigged to allow
the CPUs to alternate. Rumor has it that it was this platform which was
used by IBM to develop the code for the PC ROMs and by Microsoft to develop
MS-DOS for the PC.  The rumor continues that several of the peripheral chip
choices on the PC (such as the floppy disk controller chip) were made to
be compatible with the Compupro.
 
On a slightly different note, since we are talking about ancient history,
what was the name of the guy at Seattle Computer who wrote MS-DOS before
it was sold to Microsoft?

Robert Cousins
Dept. Mgr, Workstation Dev't.
Data General Corp.

Speaking for myself alone.

dwm@pinocchio.Encore.COM (Dave Mitchell) (12/29/89)

In article <244@dg.dg.com> uunet!dg!rec (Robert Cousins) writes:
>
>On a slightly different note, since we are talking about ancient history,
>what was the name of the guy at Seattle Computer who wrote MS-DOS before
>it was sold to Microsoft?
>
    From "Operating Systems", 2nd ed., H.M.Deitel,
    Addison-Wesley, ISBN 0-201-18038-3, pp. 633-634:  (typos are mine)

	"In 1979, Tim Paterson of Seattle Computer Products, a company
	that produced memory boards, needed software to test an
	8086-based product.  [ ... ]  Paterson developed his 86-DOS
	operating system to mimic CP/M because CP/M had become the de
	facto standard microcomputer operating system for
	8080/8086-based systems."

    And in reference to an earlier question in this thread:

	"IBM considered both the 8086 and 8088 16-bit microprocessors
	available from Intel.  The 8086 handles data internally and
	externally, 16 bits at a time.  The 8088 handles internal
	transfers 16 bits at a time, but communicates with peripherals
	8 bits at a time.  IBM chose the Intel 8088 microprocessor
	rather than the more powerful 8086 because the 8088 was
	considerably less expensive and because most of the peripherals
	available at the time communicated 8 bits at a time."

Dave Mitchell	Usenet:		...!{bu-cs,decvax,necntc,talcott}!encore!dwm
		Internet:	dwm@encore.com                   *<8-O) Yow!

johne@hpvcfs1.HP.COM (John Eaton) (12/29/89)

<<<<
< Gee I thought that IBM bought the Intel stock after they adopted the Intel
<                                               ^^^^^
< architecture. I seem to remember the big stock deal having been made in 1983.
< Perhaps someone else remembers more precisely.
----------
It was after. The story I heard was that it was a preemptive strike to prevent
AT&T from coming in and buying it. Remember this was back when everyone thought
the phone company was about to come in and clean up in the PC market.

Parity bits were a good marketing idea. People were still worried about alpha
particles and it positioned the PC as a "Business" machine as opposed to all
the home and school machines currently on the market.

8088 vs 8086 was probably decided based on granularity. Your base system
only needed 9 vs 18 chips and you only needed to run 8 data lines to your
cards. The yahoo who decided to use address line A0 in decoding the first
IBM I/O cards should be taken out and shot.  While they may not have wanted
to design the 8086 in, they should not have done anything like that to design
it out. That decision prevented a clean transition to the 286 and beyond for
the PC bus that we are still paying for today.


John Eaton
!hpvcfs1!johne
 

gerry@zds-ux.UUCP (Gerry Gleason) (12/29/89)

In article <47076@sgi.sgi.com> bruceh@brushwud.sgi.com (Bruce R. Holloway) writes:
>In article <1957@crdos1.crd.ge.COM>, davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:
>>   Since I don't have access to IBM thinking at that time I can't say
>> your last statement is wrong, but since small memory parts were
>> available at that time (and cheaply), it would be easy to make smaller
>> expansion boards. I doubt that IBM was worried about how SMALL they
>> could go.

>I didn't mean that IBM cared about implementing memory with 4Kx8's or
>'374's or whatever, only that back when 64Kx8 or 256Kx8 of DRAM was
>still somewhat expensive, it would be a significant economic advantage
>to be able to buy only that much, instead of having to spring for 64Kx16
>or 256Kx16.  Still speculation on my part--I never worked for 'em.

Only that at the time, 64kx1 DRAM's were pretty new, and the original
PC's were socketted for 16kx1 chips, so the memory increment argument
does not work.

I don't know why, but there was a significant price differential between
8088's and 8086's.

Gerry Gleason

filbo@gorn.santa-cruz.ca.us (Bela Lubkin) (12/29/89)

In article <1957@crdos1.crd.ge.COM> Wm E Davidsen Jr writes:
>  I'm most surprised at the parity. That was IBM's big contribution to
>PC technology. Up to then all the PCs were just 8 bits, and most of us
>who wanted reliability ran static RAM instead of dynamic. I still have
>an S100 system with about 1.5MB of CMOS static in 4Kx8 packages as I
>recall. Same pinout as a 32k ROM, allowing installation of firmware as
>desired.

I speak in the past tense here because I don't know if the current
implementation has improved any, and I haven't gotten a parity error on
a PClone in over three years:

I'm not convinced that parity RAM benefitted reliability in early IBM
PCs.  The software implementation was so poor that the parity hardware
actually degraded reliability.  A parity error caused an instant,
irrecoverable crash.  After a parity error you had to power down the
machine and could not even save a memory image for later
analysis/extraction of important data.  As I see it, the presence of
parity memory effectively made the system 9/8 as likely to suffer a bad
bit, since there was 9/8 as much memory to go bad!

Software support for parity (and for uncorrectable ECC errors) should be
much more flexible.  At the least, it should offer options like: reboot;
run memory test on affected area; save memory image; "correct" parity
bit and continue; continue with parity checking disabled.  These would
allow at least some chance to recover important work-in-progress.
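For context, the detection itself is trivial; it was the response policy that was missing. A toy sketch of per-byte parity as the PC used it (illustrative, not the actual hardware):

```python
# Minimal sketch of per-byte parity as on the PC: a 9th bit is stored so
# that any single-bit flip among the 9 stored bits is detected (never
# located or corrected).
def parity_bit(byte):
    """Parity bit chosen so the total count of 1s across 9 bits is even."""
    return bin(byte & 0xFF).count("1") & 1

def parity_ok(byte, stored_bit):
    """False whenever exactly one of the nine stored bits has flipped."""
    return parity_bit(byte) == stored_bit

b = 0b10110100
p = parity_bit(b)
assert parity_ok(b, p)                  # intact byte passes
assert not parity_ok(b ^ 0b0100, p)     # any single flipped bit is caught
assert parity_ok(b ^ 0b0110, p)         # but a double flip slips through
```

Since the hardware can only say "something flipped," everything interesting has to happen in the NMI handler, which is exactly where the PC's implementation fell down.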

Of course, under a "real" operating system, the OS would know which
task's memory was affected; would hold that task while signaling other
tasks to do an orderly shutdown; would then restart that task and allow
it to attempt to shut down while keeping a journal of changes it makes
to files, in case it had gone insane.  In some cases the OS could
relocate the affected memory block (particularly in virtual memory
systems) and run memory tests on the bad area.  If it found a stuck bit,
it would know which bit to correct in the affected task and would be
able to restart it without any loss.

How is parity handled in larger systems?  I know about ECC; are there
any larger systems that use just parity, but attempt to handle it more
reasonably?  How do larger systems handle ECC correction failures?

In article <73@zds-ux.UUCP> Gerry Gleason writes:
>I don't know why, but there was a significant price differential between
>8088's and 8086's.

This was probably a marketing decision: perceived value of a "16-bit"
chip is higher than an 8-bit chip -- even if the distinction is ONLY at
the bus level.

Witness the 80386 and 80386-SX.  You can't tell me that the day it
started shipping, Intel could produce a 386SX more cheaply than a 386.
98% of the silicon is (logically) identical; long-term production yields
should be comparable; but the 386SX hadn't had time to mature.  Yet
initial list prices for the 386SX were considerably lower than the
comparable 386-16.  Intel sets prices by perceived value, not costs.

Bela Lubkin    * *    //  filbo@gorn.santa-cruz.ca.us  CI$: 73047,1112 (slow)
     @       * *     //  belal@sco.com  ..ucbvax!ucscc!{gorn!filbo,sco!belal}
R Pentomino    *   \X/  Filbo @ Pyrzqxgl +408-476-4633 and XBBS +408-476-4945

gillies@p.cs.uiuc.edu (12/30/89)

/* Written  4:40 pm  Dec 26, 1989 by bruceh@brushwud.sgi.com in comp.arch */
> As I recall, both the '86 & the '88 have multiplexed address & data busses,
> so they are identical packages, and they must have almost identical dies.
> I can't believe Intel priced them so differently.

If you look back at the history of the introduction of 16/8-bit
processors (8086/88 & 68000/8), in both cases the 8-bit processor cost
about HALF as much as the prevailing cost for the 16-bit processor,
when the 8-bit processor was introduced.

This is because the 8-bit processor is a "microcontroller", and the
16-bit processor is a "CPU".  Pricing has very little to do with
production costs -- it has more to do with the marketplace and the
demand for the processor.  The 8-bit processor is certainly *not* a
state-of-the-art processor.  Think:  How much does a 386SX cost
compared to a vanilla '386?

stevew@wyse.wyse.com (Steve Wilson xttemp dept303) (12/30/89)

In article <73@zds-ux.UUCP> gerry@zds-ux.UUCP (Gerry Gleason) writes:
>>other stuff deleted so the mailer would let this posting through... 
>
>Only that at the time, 64kx1 DRAM's were pretty new, and the original
>PC's were socketted for 16kx1 chips, so the memory increment argument
>does not work.
>
>I don't know why, but there was a significant price differential between
>8088's and 8086's.
>
>Gerry Gleason

Gerry,

I think the argument about 16 bits wide versus 8 bits wide is the crux
of the matter.  Memory was expensive (I still have an IBM PC-1 with 16K
DRAMs running in it, along with 64K's and 256K's... what a technology
mix ;-)) and certainly was a consideration.

By going to x-8 instead of x-16 a memory upgrade was cheaper, and anything
receiving the bus was cheaper to implement.  Also don't forget that
the whole world was still 8 bit oriented due to the proliferation of 
8 bit S100 systems using Z80's and such.  The user community didn't 
expect 16 bits yet. 

Steve Wilson

The above are my opinions only, not those of my employer. 

bmw@isgtec.UUCP (Bruce Walker) (12/30/89)

In article <73@zds-ux.UUCP> gerry@zds-ux.UUCP (Gerry Gleason) writes:
> In article <47076@sgi.sgi.com> bruceh@brushwud.sgi.com (Bruce R. Holloway) writes:
> >... back when 64Kx8 or 256Kx8 of DRAM was
> >still somewhat expensive, it would be a significant economic advantage
> >to be able to buy only that much, instead of having to spring for 64Kx16
> >or 256Kx16.  Still speculation on my part--I never worked for 'em.
> 
> Only that at the time, 64kx1 DRAM's were pretty new, and the original
> PC's were socketted for 16kx1 chips, so the memory increment argument
> does not work.

The argument is valid because the original PC was shipped with one row
of 16Kx1's soldered in.  Later, when OS requirements changed, the
sockets were filled to 32K, 48K and *even* 64K of 16K chips! :-)

The same argument influences the design of the 32 bit PC's today.
256Kx1 chips are very popular for "entry-level" machines.  One
row of 256K's is of course 1 Meg, which is "perfect" for an AT-style
architecture.
-- 
Bruce Walker                            ...uunet!utai!lsuc!isgtec!bmw
"Just say, ``No!'' to bugs."                        isgtec!bmw@censor
ISG Technologies Inc. 3030 Orlando Dr. Mississauga. Ont. Can. L4V 1S8

bzs@world.std.com (Barry Shein) (12/31/89)

>> I think that IBM chose Intel for largely business reasons...
>
>At the time, IBM owned a large percentage of Intel stock (> 20%). Because
>of this, it is unlikely they would have gone to another chip maker.

Sounds conspiratorial and circular. Why did they own 20% of Intel?
Possibly because someone at IBM thought they had good products and
would prosper? If IBM didn't like Intel's chips well enough to use
them themselves, mightn't they have sold that stock and moved the funds
to, say, Moto or whoever they thought was doing it right? Etc.

I suspect the real story hasn't come out here, these reasons sound
like hindsight.

-- 
        -Barry Shein

Software Tool & Die, Purveyors to the Trade         | bzs@world.std.com
1330 Beacon St, Brookline, MA 02146, (617) 739-0202 | {xylogics,uunet}world!bzs

bpendlet@bambam.UUCP (Bob Pendleton) (01/03/90)

I evaluated the 68K, Z8000, and 8086/8088 at Sperry Univac sometime
around '78 or '79. The 68K was flaky, the Z8000 was close to being
non-existent, and the 8086/8088 was a good solid machine. I was following
these machines quite closely in those days.

About the time I'd guess that IBM was picking a processor for the PC
Intel announced a plastic package 8088 for $15 in large quantities.
Before that the 8088 in ceramic cost ~$30. The 8088 was by far the
cheapest "16 bit" processor you could buy. 


That's my guess for choosing the 8088. The only other things I can
think of are that the 8088 has the most "8s" in its name of any
processor available at the time :-) and 8088 sounds like a souped up
8080. (Which it is, mostly. :-))

			Bob P.
-- 
              Bob Pendleton, speaking only for myself.
UUCP Address:  decwrl!esunix!bpendlet or utah-cs!esunix!bpendlet

                      X: Tools, not rules.

gnb@bby.oz.au (Gregory N. Bond) (01/03/90)

In article <121.filbo@gorn.santa-cruz.ca.us> filbo@gorn.santa-cruz.ca.us (Bela Lubkin) writes:

   How is parity handled in larger systems?  I know about ECC; are there
   any larger systems that use just parity, but attempt to handle it more
   reasonably?  How do larger systems handle ECC correction failures?

Well, on Sun 3/50s, it panics and reboots.  Hardly a _large_ system,
but get much larger and they tend to have ECC.

Suns with ECC print a diag when correcting; I have never seen an
uncorrectable error.
--
Gregory Bond, Burdett Buckeridge & Young Ltd, Melbourne, Australia
Internet: gnb@melba.bby.oz.au    non-MX: gnb%melba.bby.oz@uunet.uu.net
Uucp: {uunet,pyramid,ubc-cs,ukc,mcvax,prlb2,nttlab...}!munnari!melba.bby.oz!gnb

adrianc@cobblers.UK.Sun.COM (01/03/90)

In article <GNB.90Jan3115709@baby.bby.oz.au>, gnb@bby.oz.au (Gregory N. Bond) writes:
> In article <121.filbo@gorn.santa-cruz.ca.us> filbo@gorn.santa-cruz.ca.us (Bela Lubkin) writes:
> 
>    How is parity handled in larger systems?  I know about ECC; are there
>    any larger systems that use just parity, but attempt to handle it more
>    reasonably?  How do larger systems handle ECC correction failures?
> 
> Well, on Sun 3/50s, it panics and reboots.  Hardly a _large_ system,
> but get much larger and they tend to have ECC.

The latest range of Sun machines (Sun 3/80, SPARCstation-1, SPARCsystem-300
series) have synchronous parity reporting and correction. The 3/470 and
SPARCserver-490 have ECC. The Sun386i has standard parity.
That's all we have on the price list nowadays.

Synchronous parity means that the parity error is reported to the CPU
at the same time as the memory reference, rather than as a high-priority
interrupt an unknown number of cycles later (as was done in older Suns).
The kernel treats the parity error rather like a page fault. It looks to see
if that page exists, unmodified, on disk and if it can it gets the page into
a new memory page, remaps the virtual address space and restarts the
instruction that caused the parity error. It also writes test patterns to
the location to see if it is a hard error or a soft error. If hard then
the page is removed from use until the next reboot. An error message warns
that a recovery was made. If the page was a modified one then the process
that referenced it is killed and the page is checked as before, a message
warns that a process has been killed. If the page is part of the kernel
then there isn't much you can do so the machine panics after attempting
to print an error message.
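That decision tree amounts to something like the following (a paraphrase of the description above, not actual Sun kernel code; the page flags are assumed names):

```python
# Paraphrase of the synchronous-parity recovery policy described above.
# `page` is an assumed record of what the kernel knows about the faulting
# page; the return value names the action taken.
def parity_recovery(page):
    if page["in_kernel"]:
        return "panic"                  # no safe way to continue
    if not page["modified"] and page["on_disk"]:
        return "refetch-and-restart"    # reload the page, restart the insn
    return "kill-process"               # dirty page: its contents are lost

assert parity_recovery({"in_kernel": False, "modified": False,
                        "on_disk": True}) == "refetch-and-restart"
```

The hard/soft test-pattern step then decides separately whether the physical page goes back into the free pool or is retired until reboot.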

The end result is that synchronous parity is much better than normal parity
but not as good (or expensive) as ECC. There is only about 1 Mbyte of memory
(where the kernel sits) that will panic the machine if it gets an error
and this is independent of whether the machine has 4 Mbytes (entry 3/80)
or 224 Mbytes (SPARCserver-390 loaded with 4 Mbit DRAMs one day..).

It's a neat trick but there is a slight performance cost in that the
3/80 has an extra wait state.

By the way, the ECL SPARC chip has parity on its cache with parity checking
on all system data paths and register files in both the integer and
floating point units. It uses synchronous parity-error traps to recover
from parity errors in unmodified cache locations.

Regards Adrian

Adrian Cockcroft -  adrian.cockcroft@uk.sun.com or adrian.cockcroft@sun.co.uk
Sun Microsystems, Merlin Place, Milton Road, Cambridge CB4 4DP, UK
Phone +44 223 420421 - Fax +44 223 420257

gordonl@microsoft.UUCP (Gordon LETWIN) (01/04/90)

In article <1989Dec30.235854.14254@world.std.com>, bzs@world.std.com (Barry Shein) writes:
> 
> >> I think that IBM chose Intel for largely business reasons...
> >
> >At the time, IBM owned a large percentage of Intel stock (> 20%). Because
> >of this, it is unlikely they would have gone to another chip maker.
> 

This is wrong... IBM bought a piece of Intel long after they chose the
8088 for the IBM PC.  I seem to recall that they've since sold their
stake, as well.

Although I'm not an authority on the subject, I'm pretty sure that the
reason IBM used the 8088 is because of Intel's clever marketing policy.
They designed the 8086 first, and then produced an 8-bit version, the 8088.
The 8088 is actually slightly more complex than the 8086 because of the
add-on circuitry to "narrow" the bus interface, but they sold it much
cheaper on the theory that once the low price gets people "tied in" to the
instruction set, they'll upgrade to the much higher profit margin 8086.

I think that IBM used the 8088 because it was very much cheaper than
the other 16-bit chips at the time (IBM originally was thinking of something
like an 8085 - an 8-bit machine) and it also reduced system cost with
its 8-bit bus.

	gordon letwin
	microsoft

henry@utzoo.uucp (Henry Spencer) (01/04/90)

In article <GNB.90Jan3115709@baby.bby.oz.au> gnb@bby.oz.au (Gregory N. Bond) writes:
>Suns with ECC print a diag when correcting; I have never seen an
>uncorrectable error.

In general, memory chips have gotten spectacularly more reliable since
the Bad Old Days of the 16Kb RAMs that got everyone very interested in
ECC and suchlike.
-- 
1972: Saturn V #15 flight-ready|     Henry Spencer at U of Toronto Zoology
1990: birds nesting in engines | uunet!attcan!utzoo!henry henry@zoo.toronto.edu

conrad@tlc.tlc.com (Conrad Dost) (01/04/90)

Gordon Letwin writes :

> Although I'm not an authority on the subject, I'm pretty sure that the
> reason IBM used the 8088 is ...

Just as important were the large number of support chips and the ease of porting
Z80 assembly code (CP/M programs) to the 8086 architecture.
-- 
- Conrad Dost, Total Logic Corp.
  12 South 1st Street, #808, San Jose, CA 95113 USA, (408)295-1792
  conrad@tlc.com, apple!motcsd!tlc!conrad

rec@dg.dg.com (Robert Cousins) (01/04/90)

In article <10131@microsoft.UUCP> gordonl@microsoft.UUCP (Gordon LETWIN) writes:
>In article <1989Dec30.235854.14254@world.std.com>, bzs@world.std.com (Barry Shein) writes:
>> >> I think that IBM chose Intel for largely business reasons...
>> >At the time, IBM owned a large percentage of Intel stock (> 20%). Because
>> >of this, it is unlikely they would have gone to another chip maker.
>This is wrong... IBM bought a piece of Intel long after they chose the
>8088 for the IBM PC.  I seem to recall that they've since sold their
>stake, as well.
>Although I'm not an authority on the subject, I'm pretty sure that the
>reason IBM used the 8088 is because of Intel's clever marketing policy.
>They designed the 8086 first, and then produced an 8-bit version, the 8088.
>The 8088 is actually slightly more complex than the 8086 because of the
>add-on circuitry to "narrow" the bus interface, but they sold it much
>cheaper on the theory that once the low price gets people "tied in" to the
>instruction set, they'll upgrade to the much higher profit margin 8086.
>I think that IBM used the 8088 because it was very much cheaper than
>the other 16-bit chips at the time (IBM originally was thinking of something
>like an 8085 - an 8-bit machine) and it also reduced system cost with
>its 8-bit bus.
>	gordon letwin
>	microsoft

Actually, the options available at the time were:

	Z80 (IBM marketed an S100 Z80 CP/M machine in Europe for 4 months)
	808[05] (No real advantage over the Z80 and some real disadvantages)
	808[68] (Could address more memory, had no real installed base)
	68000   (Not really very solid yet as I recall)
	Z8000   (Not shipping in quantity for larger address space)
	TI 9900 (Not really an option)
	6809	(Fast, ready, but not that much better than the Z80)

One mistake which will go down in history is that Motorola chose
to pitch the 6809 over the 68k to IBM.  This whole slice of
computer history which will probably be viewed in the future as
a turning point which altered the entire industry forever would
have been changed had any one of a list of things been different:

	Motorola pitched the 68K instead of the 6809.
	The Z800 had been ready on schedule.
	The Z8000 had been a real product at that time.
	The NSC 16000 had been ready 2 years earlier.
	The 808[68] had been buggier.
	Compupro had not chosen to support both the 8088 and 8085 on a 
		single system.

This is obviously my personal opinion (pronounced blithering).

Robert Cousins.
Dept. Mgr, Workstation Dev't.
Data General Corp.

Speaking for myself alone.	

ron@woan.austin.ibm.com (Ronald S. Woan) (01/05/90)

In article <5946@alvin.mcnc.org>, mjt@mcnc.org (Michael Tighe) writes:
|>In article <21559@uflorida.cis.ufl.EDU> seeger@manatee.cis.ufl.edu 
|>(F. L. Charles Seeger III) writes:
|>> I think that IBM chose Intel for largely business reasons...
|>At the time, IBM owned a large percentage of Intel stock (> 20%). Because
|>of this, it is unlikely they would have gone to another chip maker.

I wasn't part of IBM back then, but I do know that the stake in
Intel (~19%, plus an option to buy more?) was purchased at a much later date,
so the real reason appears to have been cost. The 8088 was the
cheapest thing with 16-bit internal registers that was available in
quantity at the time.

					Ron

+-----All Views Expressed Are My Own And Are Not Necessarily Shared By------+
+------------------------------My Employer----------------------------------+
+ Ronald S. Woan  (IBM VNET)WOAN AT AUSTIN, (AUSTIN)ron@woan.austin.ibm.com +
+ outside of IBM       @cs.utexas.edu:ibmchs!auschs!woan.austin.ibm.com!ron +

hl.rogers@ofc.Columbia.NCR.COM (hl.rogers) (01/06/90)

<Actually, the options available at the time were:
<
<	Z80 (IBM marketed an S100 Z80 CP/M machine in Europe for 4 months)
<	808[05] (No real advantage over the Z80 and some real disadvantages)
<	808[68] (Could address more memory, had no real installed base)
<	68000   (Not really very solid yet as I recall)
<	Z8000   (Not shipping in quantity for larger address space)
<	TI 9900 (Not really an option)
<	6809	(Fast, ready, but not that much better than the Z80)
<
There were also some successful bit-slice micros available
from National and AMD.  These micros had 4 years of shipping
experience during this time.  Of course, this approach was
simply too expensive for the personal computer concept, as 
was the 68K and 9900.  It really all boiled down to cost and
schedule, or maybe schedule and cost.
-- 
---
HL Rogers    (hl.rogers@ncrcae.Columbia.NCR.COM)
Me?  Speak for my company??  HA!
"Call 202/653-1800 for a good time!" - John Matrow, 1989

jacka@aspen.IAG.HP.COM (Jack C. Armstrong) (01/09/90)

>Also, is that story about Gary Kildal
>blowing off a meeting with the IBMers to go flying true?

No.  However, it is true that he failed to ask how high (on the way up) when
IBM asked him to jump.  Meanwhile, in Seattle, P.T. Gates did just that, 
making a deal for someone else's software.  The whole truth is a little
more complicated, but not much.  

kenobi%lightsabre@Sun.COM (Rick Kwan - Sun Intercon) (01/09/90)

In article <250@dg.dg.com> uunet!dg!rec (Robert Cousins) writes:
>One mistake which will go down in history is that Motorola chose
>to pitch the 6809 over the 68k to IBM.  This whole slice of
>computer history which will probably be viewed in the future as
>a turning point which altered the entire industry forever would
>have been changed had any one of a list of things been different:
>
>	(...interesting what-if list deleted...)

I have often wondered what would have happened if IBM had chosen the
68000 instead of the 8088/8086, and tailored their own simple OS to
run on it?  I think IBM could very easily have produced such a thing.
They certainly had the expertise.

Had they done their own proprietary 68000 PC, my guess
is that:
    1.	68000/UNIX would have found it very difficult to sell against
	a 68000/simple-IBM-OS.  (Not enough perceived product
	differentiation.)
    2.	The companies producing 68000/UNIX boxes would have been a
	trickle, instead of the horde that arose.
    3.	The Intel 80x86 architecture would have no strong supporters;
	in fact, there may not have been an 80286.
    4.	There would be no significant "open systems" voice.
    5.	Start-ups like Apollo or Sun Microsystems would not have
	arisen.
    6.	Apple would not have a Macintosh-like machine.
    7.	Other 68000-based systems such as those from Amiga and
	Atari would not have existed either.
    8.	There would have been no perceived workstation vs. high-end
	PC war.
    9.	I would be working for a different company in a different
	industry (but probably still banging a keyboard).  (Perhaps
	I'd be running a variant of TSO on OS/MVT on a PC ;-)

Thanks, IBM, for my latest job opportunities.

	Rick Kwan
	Sun Microsystems - Intercontinental Operations
	kenobi@sun.com

"Travellin' through hyperspace ain't like dustin' crops, boy."
					--Han Solo

amos@nsc.nsc.com (Amos Shapir) (01/09/90)

In article <129994@sun.Eng.Sun.COM> kenobi@sun.UUCP (Rick Kwan - Sun Intercon) writes:
>I have often wondered what would happen if IBM had chosen the 68000
>instead of 8088/8086, and tailored a their own simple OS to run on
>it?
...
>    1.	68000/UNIX would have found it very difficult to sell against
>	a 68000/simple-IBM-OS.  (Not enough perceived product
>	differentiation.)

Quite the contrary - Unix made its way into the PC market *despite* the
fact that it needed a 68k-like architecture, not *because* of it.  Its
main advantage over DOS is the sophisticated user interface; the fact
that it's hard to put UNIX on an 86-like architecture was an undesirable
side effect.  If IBM had chosen an architecture that was easier to
put UNIX on, there would have been a lot more hardware to spread UNIX
around on, and the domination of UNIX-based workstations would have
started much earlier.

-- 
	Amos Shapir				My other CPU is a NS32532
National Semiconductor, 2900 semiconductor Dr., Santa Clara, CA 95052-8090
amos@nsc.com or amos%taux01@nsc.com 

mrc@Tomobiki-Cho.CAC.Washington.EDU (Mark Crispin) (01/09/90)

In article <13504@nsc.nsc.com> amos@nsc.nsc.com (Amos Shapir) writes:
>Quite the contrary - Unix made its way into the PC market *despite* the fact
>that it needed 68k-like architecture, not *because* of it.  Its main
>advantage over DOS is the sophisticated user interface; the fact that
>it's hard to put UNIX on a 86-like architecture was an undesirable
>side effect.  If IBM had chosen an architecture that was easier to
>put UNIX on, there would have been a lot more hardware to spread UNIX
>around on, and the domination of UNIX-based workstation would have started
>much earlier.

Is this guy on drugs or what?
 _____     ____ ---+---   /-\   Mark Crispin           Atheist & Proud
 _|_|_  _|_ ||  ___|__   /  /   6158 Lariat Loop NE    R90/6 pilot
|_|_|_| /|\-++- |=====| /  /    Bainbridge Island, WA  "Gaijin! Gaijin!"
 --|--   | |||| |_____|   / \   USA  98110-2098        "Gaijin ha doko ka?"
  /|\    | |/\| _______  /   \  +1 (206) 842-2385      "Niichan ha gaijin."
 / | \   | |__| /     \ /     \ mrc@CAC.Washington.EDU "Chigau. Gaijin ja nai.
kisha no kisha ga kisha de kisha-shita                  Omae ha gaijin darou."
sumomo mo momo, momo mo momo, momo ni mo iroiro aru    "Iie, boku ha nihonjin."
uraniwa ni wa niwa, niwa ni wa niwa niwatori ga iru    "Souka. Yappari gaijin!"

peter@ficc.uu.net (Peter da Silva) (01/09/90)

> 	6809	(Fast, ready, but not that much better than the Z80)

I guess you had to be there, but I remember at the time thinking that IBM
should have gone with the 6809 and used bank selection. Segments just looked
like bank select on-chip, and Cromemco and others were doing lots of good
stuff with bank selection on the Z80 and 8085.

And the 6809 is considerably better than the Z80. Internally it's as much of
a 16-bit processor as the 8088. Only 16 bits of address space, true. But is
that so important when those extra bits of address are hidden in segment
registers? Certainly the PDP-11 was still quite viable for small jobs, and
6809 code was more compact than PDP-11 code.

I hate to say nice things about Radio Shack, but look at some of the good
stuff that came out for the Coco. OS/9, for one. Cocos with megabytes of
RAM, for another.
-- 
 _--_|\  Peter da Silva. +1 713 274 5180. <peter@ficc.uu.net>.
/      \ Also <peter@ficc.lonestar.org> or <peter@sugar.lonestar.org>
\_.--._/
      v  "Have you hugged your wolf today?" `-_-'

dgr@hpfcso.HP.COM (Dave Roberts) (01/10/90)

Mark Crispin writes:

>In article <13504@nsc.nsc.com> amos@nsc.nsc.com (Amos Shapir) writes:
>>Quite the contrary - Unix made its way into the PC market *despite* the fact
>>that it needed 68k-like architecture, not *because* of it.  Its main
>>advantage over DOS is the sophisticated user interface; the fact that
>>it's hard to put UNIX on a 86-like architecture was an undesirable
>>side effect.  If IBM had chosen an architecture that was easier to
>>put UNIX on, there would have been a lot more hardware to spread UNIX
>>around on, and the domination of UNIX-based workstation would have started
>>much earlier.
>
>Is this guy on drugs or what?
> [ASCII-art signature deleted]
>----------


Not at all, Mark, but from your signature insert, I'd say you were.

- D. Roberts

pcg@aber-cs.UUCP (Piercarlo Grandi) (01/10/90)

In article <13504@nsc.nsc.com> amos@nsc.nsc.com (Amos Shapir) writes:
    In article <129994@sun.Eng.Sun.COM> kenobi@sun.UUCP (Rick Kwan - Sun Intercon) writes:
    >I have often wondered what would happen if IBM had chosen the 68000
    >instead of 8088/8086, and tailored a their own simple OS to run on
    >it?
    ...
    >    1.	68000/UNIX would have found it very difficult to sell against
    >	a 68000/simple-IBM-OS.  (Not enough perceived product
    >	differentiation.)
    
    Quite the contrary - Unix made its way into the PC market *despite* the fact
    that it needed 68k-like architecture, not *because* of it.  Its main
    advantage over DOS is the sophisticated user interface; the fact that
    it's hard to put UNIX on a 86-like architecture was an undesirable
    side effect.

Here I have a small difference. The big problem with *existing*
68k machines is that they all have different MMUs, thus making an
ABI, or a standardized Unix, that much more difficult. Even
worse, most 68k machines, even 68020 VME boards, have no MMU,
because it must be added externally.

This is because Motorola took a long time to produce a 68K that
could support page faults (the 68010), was too late with a working
MMU, and then did not put the MMU on chip until the 68030. The
overriding advantage of the 286 and 386 is that they have an
on-chip MMU, and thus Unix can run on any 286 or 386 machine
around.

Try, for example, to port Unix to the Macintosh or the Atari ST,
which would, but for the lack of an MMU, be perfectly capable of
running UNIX. The Apple Lisa did have a (third party) UNIX
because it did have some MMU. If the Macintosh or Atari ST had
an MMU, third party Unixes for them would have long since been
available.

The idea is that the millions of DOS machines out there have no
need for, and essentially no use for, an MMU, but the fact that
it is incorporated in the CPU means that it is there, and the
manufacturer cannot omit it.

The more incredible thing is not only that all the 386s running
DOS out there are essentially emulating an 8080 on steroids,
and thus don't usually bother to use the MMU, but that even
Unix does not really take advantage of the Multics-like
features of the 286/386 MMU at all, using the 286/386 as either
a glorified PDP or VAX.
-- 
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

johnl@esegue.segue.boston.ma.us (John R. Levine) (01/11/90)

In article <1576@aber-cs.UUCP> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
>The more incredible thing is not only that all the 386 running
>DOS out there are essentially emulating an 8080 on steroids,
>and thus don't ussually bother to use the MMU, but that even
>Unix does not really take advantage of the Multics like
>features of the 286/386 MMU at all, using the 286/386 as either
>a glorified PDP or VAX.

Some of us who have spent years fighting the Intel segmented architecture
are pleased as punch to be able to ignore it on the 386.  Please don't say
what a swell idea 286 segmentation is until you've tried to make a large
program work on an 8086 or 286, breaking all of the data up into 64K
chunks.  Even on the 386, a segmented address is 48 bits, a large and
inconvenient size that no language (except PL/I, sort of) really supports.

On both the 286 and 386 dereferencing a segmented address is about 7 times
slower than a non-segmented address, so to get reasonably fast segmented
code you have to pollute your program with directives to the compiler
telling it which addresses are inter-segment and which are intra-segment.
The heck with it.
-- 
John R. Levine, Segue Software, POB 349, Cambridge MA 02238, +1 617 864 9650
johnl@esegue.segue.boston.ma.us, {ima|lotus|spdcc}!esegue!johnl
"Now, we are all jelly doughnuts."

peter@ficc.uu.net (Peter da Silva) (01/11/90)

> If the MacIntosh or Atari ST had
> an MMU third party Unxies for them would have long since been
> available.

OS/9. MINIX. Just allocate some registers as base registers and use
reg+offset addressing modes. This is no more crippled than the 8086
is all the time.
-- 
 _--_|\  Peter da Silva. +1 713 274 5180. <peter@ficc.uu.net>.
/      \
\_.--._/ Xenix Support -- it's not just a job, it's an adventure!
      v  "Have you hugged your wolf today?" `-_-'

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (01/11/90)

In article <1576@aber-cs.UUCP> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:

| The more incredible thing is not only that all the 386 running
| DOS out there are essentially emulating an 8080 on steroids,
| and thus don't ussually bother to use the MMU, but that even
| Unix does not really take advantage of the Multics like
| features of the 286/386 MMU at all, using the 286/386 as either
| a glorified PDP or VAX.

  I talked with David {can't remember} at Honeywell late last year, and
he said that Multics "could probably" be ported to the 386, and guessed
four man-years to do it. I suspect that this will never be done, and it's
a shame. Most of Multics would run nicely with four rings instead of
eight, except (I believe) the debugger.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)

"I'm a left-handed vegitarian, and my hobbies are judo and the number three"
                                     Babs Wilcox, _Don't Get Even, Get Odd_

pkr@maddog.sgi.com (Phil Ronzone) (01/12/90)

In article <1576@aber-cs.UUCP> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
>Here I have a small difference. The big problem with *existing*
>68k machines is that they all different MMUs, thus making an
>ABI, or a standardized Unix, that much more difficult. Even
>worse, most 68k machines, even 68020 VME boards, have no MMU,
>because it must be added externally.
>
>This is because Motorola took a long time to get a page fault
>supporting 68K (the 68010), and was too late with a working
>MMU, and then did not put the MMU on chip until the 68030. The
>overriding advantage of the 286 and 386 is that they have an
>on-chip MMU and thus Unix can run on any 286 or 386 machine
>around.
>
>Try for example to port Unix to the MacIntosh, or the Atari ST,
>which would, but for the lack of MMU, perfectly capable of
>running UNIX. The Apple Lisa did have a (third party) UNIX
>because it did have some MMU.


When I worked at UniSoft, I probably spec'ed the porting effort
for over 100 different 68XXXX machines. A different MMU does
NOT mean a different ABI.

The most critical item is the segment size -- since the binary
sections (text, data, shared memory) are aligned on the segment
boundary. Page size tends to be almost of no consequence.

Selecting a segment size of 128K allowed for an 80-90%
common binary among all the various UniSoft ports.

In general, it was only the REALLY grungy discrete MMU
designs that were "incompatible".


------Me and my dyslexic keyboard----------------------------------------------
Phil Ronzone   Manager Secure UNIX           pkr@sgi.COM   {decwrl,sun}!sgi!pkr
Silicon Graphics, Inc.               "I never vote, it only encourages 'em ..."
-----In honor of Minas, no spell checker was run on this posting---------------

daveh@cbmvax.commodore.com (Dave Haynie) (01/12/90)

in article <1576@aber-cs.UUCP>, pcg@aber-cs.UUCP (Piercarlo Grandi) says:
> Summary: Too bad the 68k did not have a standard MMU

> Here I have a small difference. The big problem with *existing*
> 68k machines is that they all different MMUs, thus making an
> ABI, or a standardized Unix, that much more difficult. Even
> worse, most 68k machines, even 68020 VME boards, have no MMU,
> because it must be added externally.

The MMU is certainly a difference when it comes to implementing UNIX on a
680x0 system.  But that's an OS concern; it shouldn't have anything to do
with an ABI standard.  That is, after all, Application Binary Interface.
Under 680x0 systems, MMU code is supervisor (kernel) mode only; it can't be
run at the user mode level under any circumstance.

> The overriding advantage of the 286 and 386 is that they have 
> an on-chip MMU and thus Unix can run on any 286 or 386 machine
> around.

I would think the '386 user would much rather have a real '386 UNIX, using
paging and all, rather than a 16 bit, segmented UNIX that you're stuck with
on a '286 machine.  A '286 UNIX might have pleased someone used to UNIX on
a PDP-11, but who's happy with that anymore?

> Try for example to port Unix to the MacIntosh, or the Atari ST,
> which would, but for the lack of MMU, perfectly capable of
> running UNIX. 

Actually, a 68000 doesn't provide support for virtual memory either.  For
the most part, UNIX is only real for 680x0 systems on 68020 and up CPUs,
just like it's only real for '386 systems.  Sure, a crippled UNIX will run
on the '286, and it will run on the 68010 with an MMU, like the AT&T UNIX
PC and the Tektronix lab computer.

The SAME version of UNIX will run on most '386 machines, but that has
nothing to do with them being '386 machines; it's because they're MS-DOS
machines.  So the UNIX implementor knows what the hardware is going to be
like down to the level of what serial chip's driving the serial port and
how the keyboard is driven.  For 680x0 systems, just as for Sparc or MIPS
or 88k or anything else, a specific UNIX for a specific platform is
necessary.  But given implementations that conform to the standard ABI, the
same application program will run on Amiga UNIX, Macintosh UNIX, Sun UNIX,
or even Atari ST UNIX once Atari has a 68020 or 68030 box.

-- 
Dave Haynie Commodore-Amiga (Systems Engineering) "The Crew That Never Rests"
   {uunet|pyramid|rutgers}!cbmvax!daveh      PLINK: hazy     BIX: hazy
                    Too much of everything is just enough

rec@dg.dg.com (Robert Cousins) (01/13/90)

In article <129994@sun.Eng.Sun.COM> kenobi@sun.UUCP (Rick Kwan - Sun Intercon) writes:
>I have often wondered what would happen if IBM had chosen the 68000
>instead of 8088/8086, and tailored a their own simple OS to run on
>it?  I think IBM could very easily have produced such a thing.  They
>certainly had the expertise.
>
>	Rick Kwan
>	Sun Microsystems - Intercontinental Operations
>	kenobi@sun.com
>
>"Travellin' through hyperspace ain't like dustin' crops, boy."
>					--Han Solo

Well, IBM did make a 68k based machine, the IBM 9000 from IBM Instruments.
If memory serves me correctly, it was written up in BYTE at approximately
the same time as the PC was becoming popular.  Does anyone care to 
throw in more info?  
 
I'm still hunting for more info on the S-100 based Z80 machine IBM sold
in Europe for a while before the PC was announced.  My experience with
Z80s was that they almost always ran faster than 8088s on the software
of that day.  This must have made for some interesting sales calls
when a salesman was trying to sell a slower "16-bit" machine.

Robert Cousins
Dept. Mgr, Workstation Dev't.

Speaking for myself alone.

pkr@maddog.sgi.com (Phil Ronzone) (01/13/90)

In article <9308@cbmvax.commodore.com> daveh@cbmvax.commodore.com (Dave Haynie) writes:
>The MMU is certainly a difference when it comes to implementing UNIX on a
>680x0 system.  But that's an OS concern, it shouldn't have anything to do
>with an ABI standard.  That is, after all, Applications Binary Interface. 
>Under 680x0 systems, MMU code is supervisor (kernel) mode only; it can't be
>run at the user mode level under any circumstance. 


Not so. Page size and, above all, segment size ARE visible to the binary.
Loadable libraries and shared memory see MMU characteristics as well
(or see the programmed characteristics of the MMU).


------Me and my dyslexic keyboard----------------------------------------------
Phil Ronzone   Manager Secure UNIX           pkr@sgi.COM   {decwrl,sun}!sgi!pkr
Silicon Graphics, Inc.               "I never vote, it only encourages 'em ..."
-----In honor of Minas, no spell checker was run on this posting---------------

dricejb@drilex.UUCP (Craig Jackson drilex1) (01/13/90)

All these discussions of issues of MMUs in a possible 68k IBM PC seem moot:
I think it very unlikely that such a machine would have had an MMU at all.
The Macintosh is instructive here: it has no MMU, the ROMs aren't too
far away in the address space (by today's standards) and the I/O bus isn't
fully decoded.  (There are lots of locations above 8MB which access the SIO
chips, etc.)

Any IBM 68k PC would have been designed several years before the Mac, but
with much the same cost and producibility goals, so it is unlikely that
these features would have been more sophisticated.

Admittedly, one can code on the raw 68k in a manner which allows
relocatability.  The Mac certainly does; evidently, so does OS/9.
Relocatability isn't essential for a multi-tasking OS--look at the Amiga.
But relocatability would be essential for almost any Unix-like operating
system, and I would suggest that an MMU is necessary for anything which
wants to implement fork() while allowing two tasks to occupy memory at once.

What's more, once you code a 68k for relocatability in the absence of an
MMU, it begins to look much more like a segmented architecture.

My conclusions from all this:

If IBM had chosen the 68k, the most common 68k operating system today would
have been some sort of DOS.  Today's high-end 68k-DOS boxes would have an
MMU, and would be using it to simulate a single large virtual address
space.  Unix boxes would have been made out of both 68ks and 286s, and 286s
may indeed have been touted for their MMUs.  The workstation community
would have grown up slightly sooner, primarily due to the lower costs of
68k-related parts early on.  The DOS world might actually be stronger than
today, because of fewer address-space troubles.
-- 
Craig Jackson
dricejb@drilex.dri.mgh.com
{bbn,axiom,redsox,atexnet,ka3ovk}!drilex!{dricej,dricejb}

kyriazis@ptolemy0.rdrc.rpi.edu (George Kyriazis) (01/15/90)

In article <9308@cbmvax.commodore.com> daveh@cbmvax.commodore.com (Dave Haynie) writes:
>like it's only real for '386 systems.  Sure, a crippled UNIX will run on
>the '286, and it will run on the 68010 with an MMU, like the AT&T UNIX PC
				  ^^^^^^^^^^^^^^^^^
>and the Tektronics lab computer. 
>

May I remind you that a Sun-2 contains a 68010?  I don't see any
difference between its UNIX and a Sun-3 UNIX.  Do you?
Otherwise, I agree with everything else you say.



  George Kyriazis
  kyriazis@turing.cs.rpi.edu
  kyriazis@rdrc.rpi.edu
------------------------------

jhallen@wpi.wpi.edu (Joseph H Allen) (01/15/90)

In article <7413@drilex.UUCP> dricejb@drilex.UUCP (Craig Jackson drilex1) writes:
>All these discussions of issues of MMUs in a possible 68k IBM PC seem moot:
>I think it very unlikely that such a machine would have had an MMU at all.
>The Macintosh is instructive here: it has no MMU, the ROMs aren't too
>far away in the address space (by today's standards) and the I/O bus isn't
>fully decoded.  (There are lots of locations above 8MB which access the SIO
>chips, etc.)

The Radio Shack Model 16 didn't have an MMU, but it did have an offset and
limit register.  These were enough to allow it to run multiuser Xenix and
other multiuser operating systems (RM/COS).  It would certainly have been
cheap enough to do it this way back then.

The Macintosh doesn't need it- it's a special purpose machine which is only
supposed to run canned apple software :)

The real question is:  Why didn't anyone make a multitasking OS for the PC?
The segments provide a simple mechanism for relocation.  It would not be a
terribly great development system, but at least for programs which already
work, it would be fine.

The answer, of course, is that MS-DOS made it easy to port DBASE and WordStar
from CP/M and people didn't care to look very far ahead.

mcdonald@aries.scs.uiuc.edu (Doug McDonald) (01/15/90)

In article <256@dg.dg.com> uunet!dg!rec (Robert Cousins) writes:
>In article <129994@sun.Eng.Sun.COM> kenobi@sun.UUCP (Rick Kwan - Sun Intercon) writes:
>>I have often wondered what would happen if IBM had chosen the 68000
>>instead of 8088/8086, and tailored a their own simple OS to run on
>>it?  I think IBM could very easily have produced such a thing.  They
>>certainly had the expertise.
>>
>>	Rick Kwan
>>	Sun Microsystems - Intercontinental Operations
>
>Well, IBM did make a 68k based machine, the IBM 9000 from IBM Instruments.
>If memory serves me correctly, it was written up in BYTE at approximately
>the same time as the PC was becoming popular.  Does anyone care to 
>throw in more info?  
> 
       ^^

You have the word wrong. It is "up", as in "throw up", which is
what people did when looking at one of those. We had a couple.
UTTER GARBAGE. I watched IBM demo them - their machine wouldn't run
for quite a while, and they never were able to get it to take
data from the device it was connected to, nor to print anything.

The delivered products were worse than useless.

The problems probably had little if anything to do with the processor,
and a lot to do with the programming. Also, this was a rather
independent IBM division.

Doug McDonald

bpendlet@bambam.UUCP (Bob Pendleton) (01/15/90)

From article <7413@drilex.UUCP>, by dricejb@drilex.UUCP (Craig Jackson drilex1):

> But relocatibility
> would be essential for almost any Unix-like operating system, and I would
> suggest that an MMU is necessary for anything which wants to implement
> fork() while allowing two tasks to occupy memory at once.
> 
> What's more, once you code a 68k for relocatibility in the absence of an
> MMU, it begins to look much more like a segmented architecture.
> 
> My conclusions from all this:

My conclusion is that you've never heard of a relocating loader. I
didn't say linker, I said loader. It is pretty easy for a linker to
build executable modules that contain no absolute addresses. The
loader can then decide where to put the module in memory and back-patch
the absolute addresses into the in-memory version of the module.

The back patching operation can be pretty quick, but it does slow down
program loading and you pay for table space in the disk copy of the
program. The point is that you can relocate programs at load time with
no runtime penalty, no weird coding style for relocatability, and no
MMU. 

			Bob P.
-- 
              Bob Pendleton, speaking only for myself.
UUCP Address:  decwrl!esunix!bpendlet or utah-cs!esunix!bpendlet

                      X: Tools, not rules.

johnl@esegue.segue.boston.ma.us (John R. Levine) (01/16/90)

In article <256@dg.dg.com> uunet!dg!rec (Robert Cousins) writes:
>Well, IBM did make a 68k based machine, the IBM 9000 from IBM Instruments.
>If memory serves me correctly, it was written up in BYTE at approximately
>the same time as the PC was becoming popular.  Does anyone care to 
>throw in more info?  

I know people who tried and failed to make Unix run on the 9000.  The problem
is that IBM Instruments was a little company in Connecticut that made gas
chromatographs, and the 9000 was designed to be a lab machine to control
instruments.

But then IBM bought them, apparently to have a toehold in the lab computer
instrument market, so here was this 68K box coming out with the IBM logo on
it.  Everyone got all excited, because it looked like it could be IBM's first
workstation.  (The RT project had barely begun at that point.)

Unfortunately, the 9000 was a kludgy and amateurish design, hard to build,
hard to maintain, and extremely hard to program.  Sort of like a Sun 1 in a
more attractive box.  It sank without a trace, and deserved to.  IBM disbanded
IBM Instruments shortly afterwards.
-- 
John R. Levine, Segue Software, POB 349, Cambridge MA 02238, +1 617 864 9650
johnl@esegue.segue.boston.ma.us, {ima|lotus|spdcc}!esegue!johnl
"Now, we are all jelly doughnuts."

sms@WLV.IMSD.CONTEL.COM (Steven M. Schultz) (01/16/90)

In article <380@bambam.UUCP> bpendlet@bambam.UUCP (Bob Pendleton) writes:
>My conclusion is that you've never heard of a relocating loader. I
>didn't say linker, I said loader. It is pretty easy for a linker to
>build executable modules that contain no absolute addresses...
>The back patching operation can be pretty quick, but it does slow down
>program loading...
>The point is that you can relocate programs at load time with
>no runtime penalty, no weird coding style for relocatability, and no
>MMU. 

	and of course the same thing can be done when a process is
	swapped out and reloaded later at a different memory location?

	a relocating loader sounds like only a part (the easy one) of
	the solution.  there's more to do than just loading the program
	the 1st time.

	Steven

johnl@esegue.segue.boston.ma.us (John R. Levine) (01/16/90)

In article <380@bambam.UUCP> bpendlet@bambam.UUCP (Bob Pendleton) writes:
>> I would suggest that an MMU is necessary for anything which wants to 
>> implement fork() while allowing two tasks to occupy memory at once.

>My conclusion is that you've never heard of a relocating loader.

Don't be so rude, he's right.  When you do a fork, you create two copies
of the same program.  In the absence of an MMU, either they have to run at
different addresses or you have to do the old mini-Unix hack of only
swapping in one program at a time.

Coding a program so that it can be moved while it's running is a lot harder
than a relocating loader.  Either you have to tag every single pointer that
the program uses, which is impractical in C, or else you need a coding style
that reserves some registers as base pointers and makes all memory
references relative to the base pointers.  It's been done, but it's pretty
gross and depends critically on the good will and non-bugginess of the
compilers.
-- 
John R. Levine, Segue Software, POB 349, Cambridge MA 02238, +1 617 864 9650
johnl@esegue.segue.boston.ma.us, {ima|lotus|spdcc}!esegue!johnl
"Now, we are all jelly doughnuts."

willy@idca.tds.PHILIPS.nl (Willy Konijnenberg) (01/16/90)

In article <380@bambam.UUCP> bpendlet@bambam.UUCP (Bob Pendleton) writes:

|From article <7413@drilex.UUCP>, by dricejb@drilex.UUCP (Craig Jackson drilex1):
|> But relocatibility
|> would be essential for almost any Unix-like operating system, and I would
|> suggest that an MMU is necessary for anything which wants to implement
|> fork() while allowing two tasks to occupy memory at once.
|> 
|> What's more, once you code a 68k for relocatibility in the absence of an
|> MMU, it begins to look much more like a segmented architecture.
|> 
|> My conclusions from all this:
|
|My conclusion is that you've never heard of a relocating loader. I
|didn't say linker, I said loader. It is pretty easy for a linker to
|build executable modules that contain no absolute addresses. The
|loader can then decide where to put the module in memory and back
|patch the absolute addresses into the in memory version of the module. 
|
|The back patching operation can be pretty quick, but it does slow down
|program loading and you pay for table space in the disk copy of the
|program. The point is that you can relocate programs at load time with
|no runtime penalty, no weird coding style for relocatability, and no
|MMU. 

In response to this, in article <44106@wlbr.IMSD.CONTEL.COM>
sms@WLV.IMSD.CONTEL.COM.UUCP (Steven M. Schultz) writes:
|	and of course the same thing can be done when a process is
|	swapped out and reloaded later at a different memory location?
|
|	a relocating loader sounds like only a part (the easy one) of
|	the solution.  there's more to do than just loading the program
|	the 1st time.

You might want to have a look at the MINIX port to the Atari ST.
I think that is an implementation of the scheme that Bob is talking about.
An ST has no MMU, programs are relocated during the exec().
I don't think you should try to think of relocating a program once
it has been running for a while. You have no way of knowing what it is
doing with pointers.
When you run a unix-like system, there is one additional point where
this scheme slows the system down, in addition to the relocation work
during program load.
As Craig noted, when the program fork()s, you have two programs that need
to be located at the same virtual (== physical, with no MMU) address
to run, so on every context switch you must check whether the program is
at the proper place and, if not, swap things around (in memory, not necessarily
to disk), which dramatically increases context switch overhead.
Fortunately, this is normally not much of a problem, since usually a program
does an exec() shortly after the fork() and this exec() can fix the problem.

This scheme is not very elegant, but it allows one to run a unix system
on hardware like ST, Mac and Amiga.

-- 
	Willy Konijnenberg		<willy@idca.tds.philips.nl>

des@elberton.inmos.co.uk (David Shepherd) (01/16/90)

In article <256@dg.dg.com> uunet!dg!rec (Robert Cousins) writes:
>Well, IBM did make a 68k based machine, the IBM 9000 from IBM Instruments.
>If memory serves me correctly, it was written up in BYTE at approximately
>the same time as the PC was becoming popular.  Does anyone care to 
>throw in more info?  

I remember someone saying that they had met someone from IBM who was
surprised to be told that IBM PCs had 8088s and 80286s in them, not
68ks, since all the ones he had used were 68k based.

david shepherd
INMOS ltd

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (01/16/90)

In article <1990Jan15.181550.2397@esegue.segue.boston.ma.us> johnl@esegue.segue.boston.ma.us (John R. Levine) writes:

| I know people who tried and failed to make Unix run on the 9000.  The problem
| is that IBM Instruments was a little company in Connecticut that made gas
| chromatographs, and the 9000 was designed to be a lab machine to control
| instruments.

  There was a version of Xenix for it; I was asked to install it for the
people who had the 9000. It was breathtakingly slow. I don't know whether
the software was production, beta, or ESP, but the installation was fairly
simple. The performance was not up to an XT running PC/ix.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
            "Stupidity, like virtue, is its own reward" -me

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (01/16/90)

In article <380@bambam.UUCP> bpendlet@bambam.UUCP (Bob Pendleton) writes:

| The back patching operation can be pretty quick, but it does slow down
| program loading and you pay for table space in the disk copy of the
| program. The point is that you can relocate programs at load time with
| no runtime penalty, no weird coding style for relocatability, and no
| MMU. 

  This is a good point, but it doesn't solve the basic problem of no
MMU, which is protecting processes from one another. Without an MMU the
segmented architecture, at least in small mode, is better. A well
behaved compiler will never generate anything which modifies the segment
registers, so a program will stay within its own memory, even if it
gets a pointer messed up.

  Note that this (a) doesn't protect against deliberate bad behavior,
just against a program getting a pointer hosed, and (b) limits the
addressable memory to one segment (two for separate i and d space). One
example of this is PC/ix (SysIII) running on an XT. Running development
on a production machine was never a problem, but it would not stand up
to hackers with access to assembler, of course.

  BTW: I think CP/M used something very similar to a relocating loader
to build the system for a given size. There was a bitmap of addresses
which needed to be relocated. I remember (vaguely) doing two absolute
assemblies and running a bitmap generator to "diff" the two and write
the map so you could relocate later. Much easier with a relocating
assembler, of course.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
            "Stupidity, like virtue, is its own reward" -me

abm88@ecs.soton.ac.uk (Morley A.B.) (01/17/90)

mcdonald@aries.scs.uiuc.edu (Doug McDonald) writes:

>In article <256@dg.dg.com> uunet!dg!rec (Robert Cousins) writes:
>>In article <129994@sun.Eng.Sun.COM> kenobi@sun.UUCP (Rick Kwan - Sun Intercon) writes:
>>>I have often wondered what would happen if IBM had chosen the 68000
>>>instead of 8088/8086, and tailored a their own simple OS to run on
>>>it?  I think IBM could very easily have produced such a thing.  They
>>>certainly had the expertise.
>>>
>>>	Rick Kwan
>>>	Sun Microsystems - Intercontinental Operations
>>
>>Well, IBM did make a 68k based machine, the IBM 9000 from IBM Instruments.
>>If memory serves me correctly, it was written up in BYTE at approximately
>>the same time as the PC was becoming popular.  Does anyone care to 
>>throw in more info?  
>> 
>       ^^

>You have the word wrong. It is "up", as in "throw up", which is
>what people did when looking at one of those. We had a couple.
>UTTER GARBAGE. I watched IBM demo them - their machine wouldn't run
>for quite a while, and they never were able to get it to take
>data from the device it was connected to, nor to print anything.

>The delivered products were worse than useless.

>The problems probably had little if anything to do with the processor,
>and a lot to do with the programming. Also, this was a rather
>independent IBM division.

>Doug McDonald

Ha ha ha! I remember that Personal Computer World reviewed it and said
they thought it would be the next IBM PC!

gerten@uklirb.UUCP (Rainer Gerten) (01/17/90)

In article <1990Jan16.055625.8255> (John R. Levine) writes:
>Coding a program so that it can be moved while it's running is a lot harder
>than a relocating loader.  Either you have to tag every single pointer that
>the program uses, which is impractical in C, or else you need a coding style
>that reserves some registers as base pointers and make all memory references
>relative to the base pointers.  
 ^^^^^^^^^^^^^^^^^^^^

This sounds quite like the segment registers on the 8086 and co. For
use in a non-protected environment, the "base register" approach to
relocatable code isn't that bad, if the segments are big enough. With
structured programming in mind, it is also a good way to cut the code
into small pieces (based on separate modules, as in Modula). The need
for big code pieces therefore isn't that urgent. The problem with the
Intel architecture is not only the architecture itself, it's mainly
the design of the compilers (look at C, where you have 6 (?) memory
models). Why aren't the compilers clever enough to decide on the right
kind of addressing themselves (like, for example, Turbo Pascal with its
units)? I know it's a problem for a C compiler to decide whether a
function is used only inside a module or outside it too. But that
problem comes from the language design! It's true that the architecture
of the 8086 and its descendants is not that good for C, but in languages
like Modula or Ada it is no problem.
On the side of data structures, it is a big problem to handle big
arrays in the style most C compilers do: assigning a huge amount
of linear memory and walking through it with "pointer++". But how
often are such arrays needed (in structured programming, you use
more often STRUCTURED data like b-trees and so on), so why are some
people crying (that's not aimed at you, John)? With more sophisticated
languages and compilers, problems like these are not that much a
question of the architecture.
With the above, I am not defending the segmented-register architecture,
nor flaming linear addressing; I just want to point out that a combination
of looking at both would be more sensible.

Rainer Gerten
University of Kaiserslautern
mail: gerten@uklirb.informatik.uni-kl.de

bpendlet@bambam.UUCP (Bob Pendleton) (01/18/90)

From article <1990Jan16.055625.8255@esegue.segue.boston.ma.us>, by johnl@esegue.segue.boston.ma.us (John R. Levine):
> In article <380@bambam.UUCP> bpendlet@bambam.UUCP (Bob Pendleton) writes:
>>> I would suggest that an MMU is necessary for anything which wants to 
>>> implement fork() while allowing two tasks to occupy memory at once.
> 
>>My conclusion is that you've never heard of a relocating loader.
> 
> Don't be so rude, he's right.

When I wrote that line it seemed perfectly OK. But rereading I see
that it IS rude as hell. I'm sorry. I did not mean to be rude.

I didn't mean to be wrong either, but I am. Since you cannot identify
every pointer created in a running program (because of aliasing), you
can't adjust the values of the pointers when you copy them.

			Bob P.

-- 
              Bob Pendleton, speaking only for myself.
UUCP Address:  decwrl!esunix!bpendlet or utah-cs!esunix!bpendlet

                      X: Tools, not rules.

gillies@p.cs.uiuc.edu (01/18/90)

We used to use IBM 9000's (IBM "Advantage") for a graphics course
here.  They ran Xenix o.k., and GKS connected to a 1K*1K*4 bit
graphics card....

joel@cfctech.UUCP (Joel Lessenberry) (01/19/90)

In article <1990Jan15.181550.2397@esegue.segue.boston.ma.us> johnl@esegue.segue.boston.ma.us (John R. Levine) writes:
>In article <256@dg.dg.com> uunet!dg!rec (Robert Cousins) writes:
>>Well, IBM did make a 68k based machine, the IBM 9000 from IBM Instruments.
>>If memory serves me correctly, it was written up in BYTE at approximately
>>the same time as the PC was becoming popular.  Does anyone care to 
>>throw in more info?  
>

Excuse me... that is, IBM made two 68K based machines. The original
IBM Displaywriter was 68K based; I installed CP/M-86 and dBASE II on
quite a few. Rumor had it that a hard-disk based Displaywriter was
one of the options IBM was looking at for the original PC.

 Joel Lessenberry, Distributed Systems | +1 313 948 3342
 joel@cfctech.UUCP                     | Chrysler Financial Corp.
 joel%cfctech.uucp@mailgw.cc.umich.edu | MIS, Technical Services
 {sharkey|mailrus}!cfctech!joel        | 2777 Franklin, Sfld, MI