[net.arch] What if IBM had chosen the 68000? Not what you think (Re: 386 Family Products)

brad@looking.UUCP (Brad Templeton) (11/19/85)

>
>    Think what the world would be like now if IBM had decided to go with
>    the Motorola family of chips for the PC series.  WOW!!  We would
>    really have some systems out there.  IBM chose Intel for business,
>    not technical, reasons.  I don't think Motorola would have sold IBM
>    twelve percent of their stock.  Besides, IBM and Motorola compete
>    (or will be shortly) in a number of areas.

Ok, just what would have happened under these circumstances?  I won't
say that this is gospel truth, but there is some evidence for it:

1) The 68000 was only 16 bits at the time, no 68008 was to be had for
   several years.  This would have resulted in either special bus
   multiplexing hardware (slow) or a 16 bit bus.  This all adds up to
   *cost*.  The PC then would cost what the IBM-AT costs now.  The
   higher cost equipment means fewer people buy the machine, and very
   few non-business customers buy it.  How many hobbyists have ATs?
   Result, little hacking in the mass market.

2) CP/M Software (8080) is given no place to migrate.  CP/M programs and
   6502 programs all have a high degree of processor loyalty that C programs
   for 16 bit CPU's don't.  You *can't* port a cp/m program to a 68000
   without a total rewrite.  (This may be a good thing!)  What this means
   is that CP/M doesn't die, and maintains strength the same way the Apple
   ][ and Commodore Architectures hang on.  The result: CP/M and the 6502
   are the only serious contenders against IBM.

   [This is the most serious consequence.  In order to advance the industry
   to a new generation of architectures, you must *kill* the previous
   generation.  This only gets done if previous generation software can
   be easily moved up.  To do this, you need to have some level of
   compatibility with the old stuff.  In the case of the 8 bit generation,
   only object level would do.  In later generations, source level will
   do.  If you really want to advance the industry, you should go back
   in time and push for a nice chip with a 6502 emulation mode.]

3) 68000 programs are a lot larger than 8086 programs.  A lot of programs
   that might have shown up don't fit.  On the plus side, this means a
   bit of a push for larger memory, but only to achieve the same results.

4) Unix on micros is delivered a real blow.  Chances are the IBM 68000
   has no memory management.  It's expensive and slows things down.
   This means no Unix on this one.  Sure there will be Unix for more
   expensive 68000 boxes with no MMUs, but they will always be there.
   Other multi-tasking systems that need an MMU like QNX are also hurt.

If your goal is to make most people use a "nice" architecture (where "nice"
is subjective but usually means "easy to get programs running under") then
you must do three things:

	1) Have a nice architecture!
	2) Get people to stop using the old (not-nice) architectures
	3) Get people to use the nice architecture

#1 is engineering.  #2 and #3 are marketing.  To reach your goal, they
CAN'T be ignored.  You just can't wish them away.
==============

As an aside, I won't argue that it's time for the 8086 to go.  64K Segments
are getting me down.  But I am sure it was the right chip to choose in
1980, when IBM-PC design decisions were made.  Almost 5 years is a pretty good
lifetime in this biz.
-- 
Brad Templeton, Looking Glass Software Ltd. - Waterloo, Ontario 519/884-7473

wdm@ecn-pc.UUCP (Tex) (11/19/85)

In article <456@looking.UUCP> brad@looking.UUCP (Brad Templeton) writes:
>>
>>    Think what the world would be like now if IBM had decided to go with
>>    the Motorola family of chips for the PC series.  WOW!!  We would
>>    really have some systems out there.  
>
>Ok, just what would have happened under these circumstances?  I won't
>say that this is gospel truth, but there is some evidence for it:
>
>1) The 68000 was only 16 bits at the time, no 68008 was to be had for
>   several years.  This would have resulted in either special bus
>   multiplexing hardware (slow) or a 16 bit bus.  This all adds up to
>   *cost*.  The PC then would cost what the IBM-AT costs now.  The
>   higher cost equipment means fewer people buy the machine, and very
>   few non-business customers buy it.  How many hobbyists have ATs?
>   Result, little hacking in the mass market.

     Get serious!!  Does the Macintosh cost as much as the AT?  The Amiga?
     The ST?  No, of course not.  Face it, the bus interface hardware accounts
     for a tiny fraction of the overall cost.  Given the cost/performance
     ratio, adding a sixteen bit bus would make a lot of sense.

>
>2) CP/M Software (8080) is given no place to migrate.  CP/M programs and
>   6502 programs all have a high degree of processor loyalty that C programs
>   for 16 bit CPU's don't.  You *can't* port a cp/m program to a 68000
>   without a total rewrite.  (This may be a good thing!)  What this means
>   is that CP/M doesn't die, and maintains strength the same way the Apple
>   ][ and Commodore Architectures hang on.  The result: CP/M and the 6502
>   are the only serious contenders against IBM.

    Is a sizable percentage of ms-dos software old cp/m software?  It
    would surprise me if it were.  Or maybe I should say it would
    surprise me if it weren't rewritten in a major way, seeing as how
    the operating systems are not at all alike.  I would rather they
    had been rewritten for the 68000 environment.
>
>   [This is the most serious consequence.  In order to advance the industry
>   to a new generation of architectures, you must *kill* the previous
>   generation.  

    I guess that explains why there are no ibm 370-type systems around
    anymore.  What?  You say there are?  Well I guess that means we haven't
    advanced since the mid-60's.  There is absolutely no need to "kill"
    off the previous generation of architectures.  In fact, this comment
    contradicts your point 2.

>   This only gets done if previous generation software can
>   be easily moved up.  To do this, you need to have some level of
>   compatibility with the old stuff.  In the case of the 8 bit generation,
>   only object level would do.  In later generations, source level will
>   do.  If you really want to advance the industry, you should go back
>   in time and push for a nice chip with a 6502 emulation mode.]

    Are you schizo or what?  Make up your mind!  I never said emulation
    was desirable.

>
>3) 68000 programs are a lot larger than 8086 programs.  A lot of programs
>   that might have shown up don't fit.  On the plus side, this means a
>   bit of a push for larger memory, but only to achieve the same results.

    If you mean there are a lot of large 68k application programs out there,
    then I agree.  If you mean that a program for a given task is larger on
    the 68k than on the 808/8/6 then I disagree totally.  Look at any
    of a number of benchmarks that have been written.  

>
>4) Unix on micros is delivered a real blow.  Chances are the IBM 68000
>   has no memory management.  It's expensive and slows things down.
>   This means no Unix on this one.  Sure there will be Unix for more
>   expensive 68000 boxes with no MMUs, but they will always be there.
>   Other multi-tasking systems that need an MMU like QNX are also hurt.

    The IBM PC has no memory management.  What is your point here?  It makes
    no sense.  You are incorrect in saying that because this imaginary
    machine has no mmu, there would be no unix (as you point out
    in the next sentence) since there are versions of unix right now
    that do not need memory management.  Anyway, who said our ibm 68000
    has to run unix?  Why not os-9, or any of a bunch of operating systems?
>
>If your goal is to make most people use a "nice" architecture (where "nice"
>is subjective but usually means "easy to get programs running under") then
>you must do three things:
>
>	1) Have a nice architecture!
 
    Ok, I suggest the 68000 here.

>	2) Get people to stop using the old (not-nice) architectures

    Not all that many people were using the 808X before IBM anyway.
	
>	3) Get people to use the nice architecture

    Anything that IBM sells is going to be used, if they support it like
    they did the PC.
>
>#1 is engineering.  #2 and #3 are marketing.  To reach your goal, they
>CAN'T be ignored.  You just can't wish them away.

    Who is wishing them away?  My scenario is what if IBM chose the 68000
    instead of the 8088 for their PC.  Am I to believe that they wouldn't have
    marketed it as ardently as they did the 8088-based PC?  

>==============
>
>As an aside, I won't argue that it's time for the 8086 to go.  64K Segments
>are getting me down.  But I am sure it was the right chip to choose in
>1980, when IBM-PC design decisions were made.  

     Then how about presenting some real arguments to support your position?
     Arguments like, the ibm 68000 probably would not have had Unix support, 
     or false arguments like the 68000 uses more memory to achieve the same
     task, do little to support what you are trying to say.

>-- 
>Brad Templeton, Looking Glass Software Ltd. - Waterloo, Ontario 519/884-7473

gemini@homxb.UUCP (Rick Richardson) (11/20/85)

>
>>As an aside, I won't argue that it's time for the 8086 to go.  64K Segments
>>are getting me down.  But I am sure it was the right chip to choose in
>>1980, when IBM-PC design decisions were made.  
>
>     Then how about presenting some real arguments to support your position? 
>     Arguments like, the ibm 68000 probably would not have had Unix support, 
>     or false arguments like the 68000 uses more memory to achieve the same
>     task, do little to support what you are trying to say.
>
>>-- 
>>Brad Templeton, Looking Glass Software Ltd. - Waterloo, Ontario 519/884-7473

I didn't really want to get dragged into this, but a comparison of the sizes
of executables (using size(1), and only adding up .text) of the stuff in
/bin and /usr/bin on a 68k UNIX (Sun-2) versus a 286 UNIX SYS V shows that the
286 binaries are only 65% of the size of the 68k binaries.  I think Brad's
argument *is* valid.
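
For anyone who wants to repeat the measurement, the adding-up part is
trivial -- something like the sketch below, assuming BSD-style size(1)
output with the text size as the first number on each line ("sumtext" is
just a name I made up):

    #include <stdio.h>

    /*
     * Sum the .text sizes printed by size(1).  Lines that don't begin
     * with a number (the header, error messages) are simply skipped.
     *
     * Usage:  size /bin/* /usr/bin/* | sumtext
     */
    int main(void)
    {
        char line[256];
        long text, total = 0;

        while (fgets(line, sizeof line, stdin) != NULL)
            if (sscanf(line, "%ld", &text) == 1)
                total += text;
        printf("total text: %ld bytes\n", total);
        return 0;
    }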

Rick Richardson, PC Research, Inc., ihnp4!castor!rer (201) 834-1378

caf@omen.UUCP (Chuck Forsberg WA7KGX) (11/20/85)

In article <456@looking.UUCP> brad@looking.UUCP (Brad Templeton) writes:
>Ok, just what would have happened under these circumstances?  I won't
>say that this is gospel truth, but there is some evidence for it:
>
>1) The 68000 was only 16 bits at the time, no 68008 was to be had for
>   several years.  This would have resulted in either special bus
>   multiplexing hardware (slow) or a 16 bit bus.  This all adds up to
>   *cost*.  The PC then would cost what the IBM-AT costs now.  The
>   higher cost equipment means fewer people buy the machine, and very
>   few non-business customers buy it.  How many hobbyists have ATs?
>   Result, little hacking in the mass market.

Only the memory need be 16 bits wide; programmed I/O boards only need 8
bits.  Besides, the Tandy 2000 and AT&T 6300 don't seem overpriced even
tho they have a 16 bit bus.

>2) CP/M Software (8080) is given no place to migrate.  CP/M programs and
>   6502 programs all have a high degree of processor loyalty that C programs
>   for 16 bit CPU's don't.  You *can't* port a cp/m program to a 68000
>   without a total rewrite.  (This may be a good thing!)  What this means
>   is that CP/M doesn't die, and maintains strength the same way the Apple
>   ][ and Commodore Architectures hang on.  The result: CP/M and the 6502
>   are the only serious contenders against IBM.

With a 68000-based PC, you might have seen more PDP-11 programs ported.
And the p-system might have been more important (remember the p-system?).
I'd gladly have translated 8080 source code into C or 68k source code if it
meant never having to use a segment register ...

>   [This is the most serious consequence.  In order to advance the industry
>   to a new generation of architectures, you must *kill* the previous
>   generation.  This only gets done if previous generation software can
>   be easily moved up.  To do this, you need to have some level of
>   compatibility with the old stuff.  In the case of the 8 bit generation,
>   only object level would do.  In later generations, source level will
>   do.  If you really want to advance the industry, you should go back
>   in time and push for a nice chip with a 6502 emulation mode.]

I don't see that the PC killed CP/M, at least not for the first few years.

>3) 68000 programs are a lot larger than 8086 programs.  A lot of programs
>   that might have shown up don't fit.  On the plus side, this means a
>   bit of a push for larger memory, but only to achieve the same results.

68000 programs tend to be larger partly because there isn't an overriding
need to limit their size.  A 68k program that uses 16 bit ints and 16 bit
offsets isn't that much larger than an 8086 small model program.  And,
a 32 bit 68k program tends to be smaller than an 8086 large model program.
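
To pin down what "16 bit ints and 16 bit offsets" means, a made-up fragment
(not a benchmark, just an illustration):

    /*
     * Declared this way, the arithmetic and the indexing are all 16-bit
     * quantities -- the case where 68000 code and 8086 small-model code
     * come out close in size.  Programs that work in 32-bit ints and
     * 32-bit addresses throughout make the 68000 carry the extra width
     * as extension words in its instructions, while the 8086 has to
     * build long pointers out of segment/offset pairs, which is the
     * large-model case.
     */
    short sum(short *v, short n)
    {
        short i, s = 0;

        for (i = 0; i < n; i++)
            s += v[i];
        return s;
    }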

>4) Unix on micros is delivered a real blow.  Chances are the IBM 68000
>   has no memory management.  It's expensive and slows things down.
>   This means no Unix on this one.  Sure there will be Unix for more
>   expensive 68000 boxes with no MMUs, but they will always be there.
>   Other multi-tasking systems that need an MMU like QNX are also hurt.

OS9, Mini-Unix and Idris all operate on machines without memory management.

-- 
  Chuck Forsberg WA7KGX   ...!tektronix!reed!omen!caf   CIS:70715,131
Omen Technology Inc     17505-V NW Sauvie Island Road Portland OR 97231
Home of Professional-YAM, the most powerful COMM program for the IBM PC
Voice: 503-621-3406     Modem: 503-621-3746 (Hit CR's for speed detect)
omen Any ACU 1200 1-503-621-3746 se:--se: link ord: Giznoid in:--in: uucp

jkpachl@watdaisy.UUCP (Jan Pachl) (11/20/85)

As far as I know, the main reason why IBM did not choose 68000 for the PC
was that at the time there was no second source for the 68000 chips;
certainly a very prudent decision.

Does anybody know anything more about this point?
 
                                               Jan Pachl

broehl@watdcsu.UUCP (Bernie Roehl) (11/20/85)

In article <428@ecn-pc.UUCP> wdm@ecn-pc.UUCP (Tex) writes:

>>3) 68000 programs are a lot larger than 8086 programs.
>    If you mean there are a lot of large 68k application programs out there,
>    then I agree.  If you mean that a program for a given task is larger on
>    the 68k than on the 808/8/6 then I disagree totally.

You can disagree all you like; the fact remains that 68k programs are
a lot larger than 808(6,8) programs.

Code for the 8086 tends to run about 30% to 40% smaller than the same
code for the 68000 (at least for compiled code; hand-assembled stuff is
almost impossible to compare).

Note that I am *not* defending the 8086 architecture (though it isn't *that*
bad), nor am I saying it's better than the 68000.  I'm just saying that code
for the 68000 tends to be a lot bulkier than code for the 8086.

brad@looking.UUCP (Brad Templeton) (11/20/85)

In article <428@ecn-pc.UUCP> wdm@ecn-pc.UUCP (Tex) writes:
>In article <456@looking.UUCP> brad@looking.UUCP (Brad Templeton) writes:
>>Ok, just what would have happened under these circumstances?  I won't
>>say that this is gospel truth, but there is some evidence for it:
>>
>>1) The 68000 was only 16 bits at the time, no 68008 was to be had for
>>   several years.  This would have resulted in either special bus
>>   multiplexing hardware (slow) or a 16 bit bus.  This all adds up to
>>   *cost*.  The PC then would cost what the IBM-AT costs now.  The
>>   higher cost equipment means fewer people buy the machine, and very
>>   few non-business customers buy it.  How many hobbyists have ATs?
>>   Result, little hacking in the mass market.
>
>     Get serious!!  Does the Macintosh cost as much as the AT?  The Amiga?
>     The ST?  No, of course not.  Face it, the bus interface hardware accounts 
>     for a tiny fraction of the overall cost.  Given the cost/performance
>     ratio, adding a sixteen bit bus would make a lot of sense.
>
Please don't tell me to "get serious."  I am quite serious.  IBM is not
Atari.  IBM doesn't feel its place is to make the cheapest micro.  Their
goal is to make the best supported one - that's their reputation.
The IBM-AT has a 16 bit processor and support chips, 16 bit bus with 8 slots,
1.2 meg floppy, nice keyboard and a disk controller card.  The same computer
with a 68000 in it would cost just as much as, or more than, the 286 version.
How can you seriously suggest they would charge less?
>>
>>2) CP/M Software (8080) is given no place to migrate.  CP/M programs and
>>   6502 programs all have a high degree of processor loyalty that C programs
>>   for 16 bit CPU's don't.  You *can't* port a cp/m program to a 68000
>>   without a total rewrite.  (This may be a good thing!)  What this means
>>   is that CP/M doesn't die, and maintains strength the same way the Apple
>>   ][ and Commodore Architectures hang on.  The result: CP/M and the 6502
>>   are the only serious contenders against IBM.
>
>    Is a sizable percentage of ms-dos software old cp/m software?  It
>    would surprise me if it were.  Or maybe I should say it would
>    surprise me if it weren't rewritten in a major way, seeing as how
>    the operating systems are not at all alike.  I would rather they
>    had been rewritten for the 68000 environment.

No, lots of new software (mostly in HLLs) has been written for ms-dos.
The point I made is that all the wide-selling CP/M software was available
quickly for the 8086, providing a quick migration.  The software was
not rewritten; it was ported, which is to say they made adjustments instead
of totally reworking the project.
>>
>>   [This is the most serious consequence.  In order to advance the industry
>>   to a new generation of architectures, you must *kill* the previous
>>   generation.  
>
>    I guess that explains why there are no ibm 370-type systems around
>    anymore.  What?  You say there are?  Well I guess that means we haven't
>    advanced since the mid-60's.  There is absolutely no need to "kill"
>    off the previous generation of architectures.  In fact, this comment
>    contradicts your point 2.

You miss the point.  The 360 has been killed, in that there are almost none
of them out there.  You can still run 360 and 370 programs on the more
recent machines, but the software headache, "I have to get this to run
on the 360 if I want it to sell," is gone.  By "kill" I mean eliminate the
old processors, not the ability to run their software.
>
>>
>>3) 68000 programs are a lot larger than 8086 programs.  A lot of programs
>>   that might have shown up don't fit.  On the plus side, this means a
>>   bit of a push for larger memory, but only to achieve the same results.
>
>    If you mean there are a lot of large 68k application programs out there,
>    then I agree.  If you mean that a program for a given task is larger on
>    the 68k than on the 808/8/6 then I disagree totally.  Look at any
>    of a number of benchmarks that have been written.  

See other notes.  Please show your benchmarks.  I have yet to see a 68000
Unix box that runs reasonably with much less than 1 meg of ram.  I used
to own a Z-8000 Unix machine (ONYX) that was a very reasonable single user
system with 256K and a 3 user system with 512K.  Coherent ran at PDP-11
levels on an IBM-PC with an 8088!  Xenix on the 286 performs at a similar
level to 68000 systems costing more than twice as much, and can do it with
less memory.  In fact, since we know IBM isn't a "price cutter", isn't it
amazing that they make by far the cheapest machine for Unix? (forgetting
the clones, for the time being)
>
>>
>>4) Unix on micros is delivered a real blow.  Chances are the IBM 68000
>>   has no memory management.  It's expensive and slows things down.
>>   This means no Unix on this one.  Sure there will be Unix for more
>>   expensive 68000 boxes with no MMUs, but they will always be there.
>>   Other multi-tasking systems that need an MMU like QNX are also hurt.
>
>    The IBM PC has no memory management.  What is your point here?  It makes
Incorrect
>    no sense.  You are incorrect in saying that because this imaginary
>    machine has no mmu, there would be no unix (as you point out
>    in the next sentence) since there are versions of unix right now
>    that do not need memory management.  Anyway, who said our ibm 68000
Show me these versions of Unix...  If they exist, don't they have to
make very harsh restrictions on user code, ie. making it position independent?

>    has to run unix?  Why not os-9, or any of a bunch of operating systems?
I was making a point.  Perhaps OS-9 would do the job.  I was just saying,
"If you value unix, here's something to consider."
>>
>>If your goal is to make most people use a "nice" architecture (where "nice"
>>is subjective but usually means "easy to get programs running under") then
>>you must do three things:
>>
>>	1) Have a nice architecture!
> 
>    Ok, I suggest the 68000 here.
>
>>	2) Get people to stop using the old (not-nice) architectures
>
>    Not all that many people were using the 808X before IBM anyway.
The 8080 (Z-80) and the 6502 were almost the whole industry prior to IBM.
Today there are more 6502 computers sold than any other, including IBM.
>	
>>	3) Get people to use the nice architecture
>
>    Anything that IBM sells is going to be used, if they support it like
>    they did the PC.
>>
>>#1 is engineering.  #2 and #3 are marketing.  To reach your goal, they
>>CAN'T be ignored.  You just can't wish them away.
>
>    Who is wishing them away?  My scenario is what if IBM chose the 68000
>    instead of the 8088 for their PC.  Am I to believe that they wouldn't have
>    marketed it as ardently as they did the 8088-based PC?  

I'm not saying it would have been bad to see IBM choose the 68000.  In fact,
they *did* market a 68000 machine at the same time as the PC.  They aimed it
mostly at the "lab" market.  That's one reason it didn't sell well.
The others are (surprise, surprise) that it cost like an AT and couldn't
run any of the old software!  It couldn't compete with the price of the 8088
machine.

It wasn't pure goodness for IBM to choose the 8086.  But it would not have
been all sweetness and light if they had chosen the 68000, and that's the
notion I am trying to correct.
-- 
Brad Templeton, Looking Glass Software Ltd. - Waterloo, Ontario 519/884-7473

broehl@watdcsu.UUCP (Bernie Roehl) (11/21/85)

In article <7490@watdaisy.UUCP> jkpachl@watdaisy.UUCP (Jan Pachl) writes:
>As far as I know, the main reason why IBM did not choose 68000 for the PC
>was that at the time there was no second source for the 68000 chips;
>certainly a very prudent decision.
>
>Does anybody know anything more about this point?
> 
>                                               Jan Pachl

As far as I remember, there were at least three reasons for choosing the
8088:

   1.  The 68000 was not second-sourced.
   2.  The 8088 could use an 8-bit data bus, and was thus less expensive.
   3.  There was a large software base instantly available simply by
       reassembling 8080 CP/M programs (which is why DOS 1.00 was so
       much like CP/M).

In retrospect, it was a good choice; the PC succeeded.  (And don't say it's
just because it had IBM's logo on the front; those three little letters
didn't help the PCjr, and didn't help IBM's first PC much either (how many
people even remember the 5100?  It had BASIC and APL in ROM, as I recall,
along with 64K of RAM and a proprietary processor).  IBM may be big and
well known, and that may be enough in the mainframe world, but consumers
don't care about prestige as much as they do about things like support and
price/performance).

I agree, it would have been interesting if IBM had gone 68k; however, I
suspect that Brad Templeton may be right about what would have happened
as a result.

jpn@teddy.UUCP (11/22/85)

In article <428@ecn-pc.UUCP> wdm@ecn-pc.UUCP (Tex) writes:
>    Is a sizable percentage of ms-dos software old cp/m software?  It
>    would surprise me if it were.  Or maybe I should say it would
>    surprise me if it weren't rewritten in a major way, seeing as how
>    the operating systems are not at all alike.  I would rather they
>    had been rewritten for the 68000 environment.

Not anymore, but I think you will find that for the first year or so, the
only applications available for the PC were ones ported from CP/M, using
automatic 8080-to-8086 assembly language translators.  MSDOS 1.X looked an
AWFUL LOT like CP/M - the program segment prefix and all the low DOS
interrupts support a CP/M-like environment.
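
The resemblance is easy to see if you lay the program segment prefix out next
to CP/M's page zero.  Roughly this, from memory (treat the exact details as
approximate):

    /*
     * Rough map of the MSDOS 1.x program segment prefix -- an
     * illustration from memory, not a reference.  The entries marked
     * CP/M are what a translated 8080 program expects in page zero.
     */
    #define PSP_INT20    0x00   /* INT 20h exit   (CP/M: 0000h warm boot)  */
    #define PSP_MEMTOP   0x02   /* segment past end of available memory    */
    #define PSP_CALL5    0x05   /* DOS call entry (CP/M: 0005h BDOS entry) */
    #define PSP_FCB1     0x5C   /* default FCB 1  (CP/M: 005Ch)            */
    #define PSP_FCB2     0x6C   /* default FCB 2  (CP/M: 006Ch)            */
    #define PSP_CMDTAIL  0x80   /* command tail, default DTA (CP/M: 0080h) */
    #define PSP_PROGRAM  0x100  /* .COM loads here (CP/M: 0100h, the TPA)  */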

mf1@ukc.UUCP (Michael Fischer) (11/23/85)

If one remembers, IBM didn't choose anything.  When the IBM micro
was announced, IBM officials were falling all over themselves to deny that
they would support the machine at all.

I'm sure I'll be corrected if I have my micro-mythology wrong, but I recall
that the IBM-PC was created by a friend of the son of the District Manager
of IBM in Boca Raton, Florida, who managed to get IBM to reluctantly put
their name on it as an experiment. Even if this "origin myth" is wrong,
the IBM-PC was an outside product sold to IBM.  IBM bought shares in Intel
somewhat later, after the machine sold.

jso@edison.UUCP (John Owens) (11/27/85)

> In article <428@ecn-pc.UUCP> wdm@ecn-pc.UUCP (Tex) writes:
> >In article <456@looking.UUCP> brad@looking.UUCP (Brad Templeton) writes:
> >>4) Unix on micros is delivered a real blow.  Chances are the IBM 68000
> >>   has no memory management.  It's expensive and slows things down.
> >>   This means no Unix on this one.  Sure there will be Unix for more
> >>   expensive 68000 boxes with no MMUs, but they will always be there.
> >>   Other multi-tasking systems that need an MMU like QNX are also hurt.
> >
> >    The IBM PC has no memory management.  What is your point here?  It makes
> Incorrect
> >    no sense.  You are incorrect in saying that because this imaginary
> >    machine has no mmu, there would be no unix (as you point out
> >    in the next sentence) since there are versions of unix right now
> >    that do not need memory management.  Anyway, who said our ibm 68000
> Show me these versions of Unix...  If they exist, don't they have to
> make very harsh restrictions on user code, ie. making it position independent?
> 
> -- 
> Brad Templeton, Looking Glass Software Ltd. - Waterloo, Ontario 519/884-7473

First, the IBM PC *DOES NOT* have memory management.  If you think it
does, please let me know how you define memory management, and what the
hardware for this is on the PC.

There are many versions of UNIX that don't require memory management.
Venix/86 from Mark of the Unicorn (marketed by Unisource) runs quite
well on the PC, and supports UNIX at about the level of a PDP-11 with
separate I&D space (11/44,11/70, and some of the newer ones (11/73?)).

On the PDP-11, you would have a 16 bit virtual address, which in
"normal" mode provided a linear address space, and in "separate" mode,
provided *two* linear virtual address spaces, one for references
through the PC (instructions and PC-relative data, usually constants),
and one for any other references.  This provided for 64K of code space,
and 64K of combined data/stack space.  Sounds a lot like one of the
808x memory models.... (Compact?)

The Venix C compiler uses small model for "normal" programs, and the
model above for "separate" programs.  Note that programs need never
modify or even know the value of their segment registers.  This isn't
any more "position independent" than MS-DOS programs, which can be
loaded anywhere - that's the whole point of segment registers (besides
just being a kludge to get 20-bit addresses).
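
For those who haven't met them: a segment register contributes nothing but a
base address.  A toy illustration (not anybody's loader code):

    /*
     * Physical address = segment * 16 + offset, giving 20 bits.
     * Since a small-model program deals only in 16-bit offsets, the
     * loader can place it anywhere simply by picking CS and DS; the
     * program itself never needs to know.
     */
    unsigned long physical(unsigned int seg, unsigned int off)
    {
        return ((unsigned long) seg << 4) + off;
    }

    /*
     * e.g. physical(0x1234, 0x0010) is 0x12350; load the identical code
     * with a segment of 0x2000 and it runs unchanged at 0x20010.
     */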

Of course there's no absolute inter-process security, since anyone can
just set a segment register to whatever they want, but for practical
(non-secure, just reliable under use) purposes, this is fine - no one
codes in assembler on a UNIX system anyway, except for special
purposes.

Trivia for V7 buffs:  Venix/86 implements the (root-only) phys() call
by setting ES to the value you pass in.

I'm sure the other PC UNIXes (Xenix, PC/IX, etc.) use similar
mechanisms.  Does anyone know if Xenix/286 even uses the 286 protected
mode?

(In addition to the PC, UNIX runs on other machines without memory
 management, including some bare (no MMU) 68000 machines.)

-- 

			   John Owens
General Electric Company		Phone:	(804) 978-5726
Factory Automated Products Division	Compuserve: 76317,2354
	       decvax!mcnc!ncsu!uvacs
...!{		 gatech!allegra!uvacs	}!edison!jso
			  ihnp4!houxm

gemini@homxb.UUCP (Rick Richardson) (11/30/85)

> From: jso@edison.UUCP (John Owens)
> There are many versions of UNIX that don't require memory management.
> Venix/86 from Mark of the Unicorn (marketed by Unisource) runs quite
> well on the PC, and supports UNIX at about the level of a PDP-11 with
> separate I&D space (11/44,11/70, and some of the newer ones (11/73?)).
> 
That's Venix/86 by Venturcom, marketed by Unisource.
> 
> I'm sure the other PC UNIXes (Xenix, PC/IX, etc.) use similar
> mechanisms.  Does anyone know if Xenix/286 even uses the 286 protected
> mode?

Both "Venix/286 System V Release 2.0" and Xenix/286 run in protected mode.
-Rick Richardson, PC Research, Inc. ..!houxm!castor!rer

farren@well.UUCP (Mike Farren) (12/04/85)

In article <617@edison.UUCP>, jso@edison.UUCP (John Owens) writes:
> 
> (In addition to the PC, UNIX runs on other machines without memory
>  management, including some bare (no MMU) 68000 machines.)
> 
   Yep. UniSoft had a version which ran on the Lisa (Macintosh XL), and I
know of no MMU in there.
  
   I would like to point out that protecting processes from one another is
a real nice idea, but in no way is it necessary.  Concurrent processes     
controlled only by a software memory management scheme have been around
for a long, long time.  With a lot of care (and a little bit of luck), it
really isn't THAT much of a problem...
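
For the curious, the usual trick looks something like this load-time fixup
(a toy sketch with invented names, not UniSoft's loader):

    #include <stddef.h>

    /*
     * With no MMU there is no run-time address translation, so the
     * loader patches every absolute address the linker flagged, then
     * lets the process run wherever it happened to land.
     */
    struct image {
        char          *base;     /* where this image was actually loaded */
        unsigned long *reloc;    /* offsets of words needing fixup       */
        size_t         nreloc;
    };

    void relocate(struct image *im)
    {
        size_t i;

        for (i = 0; i < im->nreloc; i++) {
            unsigned long *p = (unsigned long *) (im->base + im->reloc[i]);
            *p += (unsigned long) im->base;      /* add the load address */
        }
    }

    /*
     * After that, the only "protection" is that every process promises
     * to stay inside its own image -- the care (and luck) noted above.
     */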

-- 
           Mike Farren
           uucp: {dual, hplabs}!well!farren
           Fido: Sci-Fido, Fidonode 125/84, (415)655-0667
           USnail: 390 Alcatraz Ave., Oakland, CA 94618

esf00@amdahl.UUCP (Elliott S. Frank) (12/09/85)

[...urp...]

Some more history that seems to have gotten overlooked....

Long before the IBM PC hit your local Byte shop (remember them...?) I
was involved in the design and implementation of a microprocessor-based
system.

We ended up with an 8086 and a homebrew bus, since:

1) we could use the Intel (8085-based) MDS (the "blue box") for 8086
development and support.

2) the 8086 had software (specifically, a Pascal compiler and an assembler)
that we could use.  We were not (then) concerned with software quality.
The concern was having tools available for the development engineers to use.

3) the 8086 could use 8080/8085/Z-80 (8-bit!!!) peripheral chips.

4) we could get everything (software, development support tools, etc.)
in the timeframe that we needed to support our [very aggressive] product
schedule.

At the time, the 68000 did not have  e v e r y t h i n g   available.
By the time the IBM PC hit the streets, the 8086 and its tools had
been proven in a large number of embedded 16-bit microprocessor products.
The 8088 was the 8086 with the 8-bit peripheral/16-bit processor
bus problem "solved".  The IBM PC represented the packaging of the most
common solution available at the time.


Elliott S Frank    ...!{ihnp4,hplabs,amd,nsc}!amdahl!esf00     (408) 746-6384

[the above opinions are strictly mine, if anyone's]

tim@ism780c.UUCP (Tim Smith) (12/12/85)

In article <318@well.UUCP> farren@well.UUCP (Mike Farren) writes:
>In article <617@edison.UUCP>, jso@edison.UUCP (John Owens) writes:
>> 
>> (In addition to the PC, UNIX runs on other machines without memory
>>  management, including some bare (no MMU) 68000 machines.)
>> 
>   Yep. UniSoft had a version which ran on the Lisa (Macintosh XL), and I
>know of no MMU in there.
>  

The interview with the Macintosh designers in the February 1984 BYTE
implies that the Lisa has an MMU.
-- 
Tim Smith       sdcrdcf!ism780c!tim || ima!ism780!tim || ihnp4!cithep!tim