[comp.sys.ibm.pc] 640K limit

ray@philmtl.philips.ca (Ray Dunn) (01/13/90)

In a recent article:
>passed the 640K limit of DOS

I see this over and over again, and would like to try to put it to bed...

MSDos does *not* have a 640K limit, it has a 1Meg limit in its addressing.

640K is only an artifact of the architecture of the PC which *normally* uses
the memory map above 640K for RAM and ROM on option cards and BIOS.  If you
have for example a video card which does not occupy the A000 page with its
video memory, it is possible to map memory there to be used by DOS.

So long as the BIOS recognises it at initialization time, and reports it to
DOS, DOS will use it.

As an example, the Philips P3345 386SX based machine, which allows RAM to be
mapped into any 64K segment of the A000 through E000 pages, lets you
configure 720K of RAM available to DOS so long as you don't have a VGA
video card.
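
If you want to see what your own BIOS reported, INT 12h hands back the
conventional memory size DOS was given.  A rough sketch, untested, assuming
a Turbo C/MSC style int86() from <dos.h>:

    #include <stdio.h>
    #include <dos.h>

    int main(void)
    {
        union REGS r;

        /* INT 12h: AX returns the base memory size in KB, as counted
           by the BIOS at power-on and later handed to DOS. */
        int86(0x12, &r, &r);
        printf("BIOS reports %u KB of conventional memory\n", r.x.ax);
        return 0;
    }

On a machine set up as described above it should print something larger
than 640.
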
-- 
Ray Dunn.                    | UUCP: ray@philmt.philips.ca
Philips Electronics Ltd.     |       ..!{uunet|philapd|philabs}!philmtl!ray
600 Dr Frederik Philips Blvd | TEL : (514) 744-8200  Ext : 2347 (Phonemail)
St Laurent. Quebec.  H4M 2S9 | FAX : (514) 744-6455  TLX : 05-824090

news@blackbird.afit.af.mil (News System Account) (01/14/90)

In article <4668.25aed7f2@uwovax.uwo.ca> 23031_676@uwovax.uwo.ca writes:
>elementary question:  Like many others, I have purchased a 1 mb system (NEC 
>286), yet it is not clear that the extra 360 can be used for anything. Modal
>response is yes/no(!) DesQ and other software manuals speak about use of 
>extended memory over 1 mb, but not between 640 and 1 mb. Is there any way, 
>for any kinds of software, of getting any use out of this memory?

IBM really messed things up when they designed the original PC (hindsight
is wonderful!) - they didn't think that anyone would ever need more than
640k of memory.  The original PC came with 64k!  They thought it would be 
safe to put all the video screen memory mappings, ROMs, and all that 
stuff up above the 640k line.

Nowadays with memory capacities STARTING at 1M, the stuff between 640k and
1M poses a slight problem.  With an 80286, the best that can be done is to
find the holes that aren't being used for video memory or ROMs and use a
program such as LOADHI (from Quarterdeck Software - it comes bundled with
QEMM) to put device drivers and TSRs up there so they don't take up
much of the main 640k.

The 80386 has some special hardware that can be used to 'fool' the processor
into thinking that the memory between 640k and 1M is actually somewhere
else - usually above 1M.  That way, programs like DESQview (with QEMM-386)
can use it to run normal programs.

Hope this clears up some of the confusion!

Ed Williams

bcw@rti.UUCP (Bruce Wright) (01/15/90)

In article <4668.25aed7f2@uwovax.uwo.ca>, 23031_676@uwovax.uwo.ca writes:
> The technical report is fine for hackers, but please try to answer this more
> elementary question:  Like many others, I have purchased a 1 mb system (NEC 
> 286), yet it is not clear that the extra 360 can be used for anything. Modal
> response is yes/no(!) DesQ and other software manuals speak about use of 
> extended memory over 1 mb, but not between 640 and 1 mb. Is there any way, 
> for any kinds of software, of getting any use out of this memory?

It depends on what the manufacturer did with the extra memory.  It is
possible to arrange to map the memory above 1MB, so that it looks like
normal memory above 1MB.  It's also possible to use it to copy the ROM-
BIOS into RAM, since RAM tends to be faster than ROM (this tends to be
done more on 80386 than on 80286 machines).  The only thing that
almost no compatibles do (since it means that they are no longer very
compatible) is to map the memory between 640K and 1MB.  I am not
familiar with the machine you have (I've seen them but not used them),
so you should ask the manufacturer or the dealer where you bought the
machine.  There may be a technical manual which explains how memory on
your machine works - many manufacturers produce such a manual.

							Bruce C. Wright

poffen@molehill (Russ Poffenberger) (01/15/90)

In article <1468@blackbird.afit.af.mil> ewilliam@galaxy-43.UUCP (Edward M. Williams) writes:
>In article <4668.25aed7f2@uwovax.uwo.ca> 23031_676@uwovax.uwo.ca writes:
>>elementary question:  Like many others, I have purchased a 1 mb system (NEC 
>>286), yet it is not clear that the extra 360 can be used for anything. Modal
>>response is yes/no(!) DesQ and other software manuals speak about use of 
>>extended memory over 1 mb, but not between 640 and 1 mb. Is there any way, 
>>for any kinds of software, of getting any use out of this memory?
>
>IBM really messed things up when they designed the original PC (hindsight
>is wonderful!) - they didn't think that anyone would ever need more than
>640k of memory.  The original PC came with 64k!  They thought it would be 
>safe to put all the video screen memory mappings, ROMs, and all that 
>stuff up above the 640k line.
>
>Nowadays with memory capacities STARTING at 1M, the stuff between 640k and
>1M poses a slight problem.  With an 80286, the best that can be done is to
>find the holes that aren't being used for video memory or ROMs and use a
>program such as LOADHI (from Quarterdeck Software - it comes bundled with
>QEMM) to put device drivers and TSRs up there so they don't take up
>much of the main 640k.
>
>The 80386 has some special hardware that can be used to 'fool' the processor
>into thinking that the memory between 640k and 1M is actually somewhere
>else - usually above 1M.  That way, programs like DESQview (with QEMM-386)
>can use it to run normal programs.
>
>Hope this clears up some of the confusion!
>

Well, I think you confused it a little more by saying only 386 systems can map
memory above 1m. The truth is that MOST modern 286 systems these days have
"Split Memory Addressing". By using some clever address mapping schemes in the
RAM addressing hardware, the memory that normally falls between 640K and 1024K
is re-mapped to appear above 1024K as extended memory (384K worth).

If your system displays extended memory during cold boot, then this system
has this capability.
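
You can also ask the BIOS directly instead of watching the boot screen:
INT 15h function 88h returns the extended memory size it found.  A rough
sketch, untested, assuming a Turbo C/MSC style int86() from <dos.h>:

    #include <stdio.h>
    #include <dos.h>

    int main(void)
    {
        union REGS r;

        r.h.ah = 0x88;                 /* get extended memory size */
        int86(0x15, &r, &r);
        printf("BIOS reports %u KB of extended memory\n", r.x.ax);
        return 0;
    }

A 1M 286 with split memory addressing should report 384 here.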

Russ Poffenberger               DOMAIN: poffen@sj.ate.slb.com
Schlumberger Technologies       UUCP:   {uunet,decwrl,amdahl}!sjsca4!poffen
1601 Technology Drive		CIS:	72401,276
San Jose, Ca. 95110
(408)437-5254

thomasr@cpqhou.UUCP (Thomas Rush) (01/15/90)

In article <1468@blackbird.afit.af.mil>, news@blackbird.afit.af.mil (News System Account) writes:

>                  The original PC came with 64k!

	The original IBM PC came with 16K.  The BIOS did not allow
more than 64K of RAM, if I remember correctly.

	Anyone got an original IBM cassette tape drive for sale?
Mine is starting to wear out....

thomas.
uunet!cpqhou!thomasr
Look, anything after one sentence is questionable.

tat@pccuts.pcc.amdahl.com (Tom Thackrey) (01/16/90)

In article <1468@blackbird.afit.af.mil> ewilliam@galaxy-43.UUCP (Edward M. Williams) writes:
 >IBM really messed things up when they designed the original PC (hindsight
 >is wonderful!) - they didn't think that anyone would ever need more than
 >640k of memory.  The original PC came with 64k!  They thought it would be 
 >safe to put all the video screen memory mappings, ROMs, and all that 
 >stuff up above the 640k line.
Actually the original PC came with as little as 16K with a max of 64K on
the mother board and I think you could only go to 256K total.  Remember,
when the PC came out a BIG PC had 64K.  Even huge mainframes of 1982 were
in the 16MB range.  Also, I think a 64K memory card for the original PC
cost something like $500.
-- 
Tom Thackrey sun!amdahl!tat00

[ The opinions expressed herein are mine alone. ]

phil@pepsi.amd.com (Phil Ngai) (01/16/90)

In article <1468@blackbird.afit.af.mil> ewilliam@galaxy-43.UUCP (Edward M. Williams) writes:
|
|IBM really messed things up when they designed the original PC (hindsight
|is wonderful!) - they didn't think that anyone would ever need more than
|640k of memory.  The original PC came with 64k!  They thought it would be 
|safe to put all the video screen memory mappings, ROMs, and all that 
|stuff up above the 640k line.
|
|Nowadays with memory capacities STARTING at 1M, the stuff between 640k and

Don't be ridiculous. The 8088 can't ADDRESS more than 1 mega. Just where
did you expect IBM to put the IO stuff?
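
To spell out the arithmetic: a real-mode address is segment*16 + offset,
squeezed onto 20 address pins, so nothing you load into the registers can
reach past the megabyte.  A toy illustration (plain C, nothing DOS-specific):

    #include <stdio.h>

    /* 8086/8088 physical address: 16-bit segment * 16 + 16-bit offset,
       truncated to the 20 address lines on the chip. */
    unsigned long phys(unsigned seg, unsigned off)
    {
        return ((unsigned long)seg * 16UL + off) & 0xFFFFFUL;
    }

    int main(void)
    {
        printf("A000:0000 -> %05lXh\n", phys(0xA000, 0x0000)); /* video area   */
        printf("FFFF:000F -> %05lXh\n", phys(0xFFFF, 0x000F)); /* top of 1 meg */
        printf("FFFF:FFFF -> %05lXh\n", phys(0xFFFF, 0xFFFF)); /* wraps around */
        return 0;
    }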

The problem is all the lazy software houses who haven't bothered to move
to OS/2 and take advantage of the 16 meg protected mode even though the
286 can handle it just fine.

--
Phil Ngai, phil@diablo.amd.com		{uunet,decwrl,ucbvax}!amdcad!phil
Peace through strength.

phil@pepsi.amd.com (Phil Ngai) (01/16/90)

In article <1990Jan15.030306.19993@sj.ate.slb.com> poffen@sj.ate.slb.com (Russ Poffenberger) writes:
|Well, I think you confused it a little more by saying only 386 systems can map
|memory above 1m. The truth is that MOST modern 286 systems these days have
|"Split Memory Addressing". By using some clever address mapping schemes in the
|RAM addressing hardware, the memory that normally falls between 640K and 1024K
|is re-mapped to appear above 1024 as extended memory. (384K worth)

Some of the newer chips sets even allow you to use the memory as EMS.
However, I doubt the original poster is capable of understanding
concepts like remapping. The best answer for him is to go ask the guy
that sold him the board.

--
Phil Ngai, phil@diablo.amd.com		{uunet,decwrl,ucbvax}!amdcad!phil
Peace through strength.

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (01/16/90)

In article <28808@amdcad.AMD.COM> phil@pepsi.AMD.COM (Phil Ngai) writes:

| The problem is all the lazy software houses who haven't bothered to move
| to OS/2 and take advantage of the 16 meg protected mode even though the
| 286 can handle it just fine.

  Wait a minute. Why would any company trying to make a profit go to
OS/2? There are so few users that they would be lucky to cover the cost
of a new version of the compiler, much less cover their packaging,
stocking, and labor costs. And be lucky to get enough ongoing
income to cover the maintenance.

  This is not a case of lazy, just "no market." Someone told me that
they sold more copies of their educational program for CP/M than OS/2!
That's not a market. If this lady wasn't motivated by a desire to make
social changes she would not have bothered.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
            "Stupidity, like virtue, is its own reward" -me

acm@grendal.Sun.COM (Andrew MacRae) (01/17/90)

In article <28808@amdcad.AMD.COM> phil@pepsi.AMD.COM (Phil Ngai) writes:
>Don't be ridiculous. The 8088 can't ADDRESS more than 1 mega. Just where
>did you expect IBM to put the IO stuff?
>
>The problem is all the lazy software houses who haven't bothered to move
>to OS/2 and take advantage of the 16 meg protected mode even though the
>286 can handle it just fine.

Simple.  IBM/MicroSoft *should* have used soft pointers to the I/O memory
areas.  They made the same mistake that was made with CP/M, hardcoding
areas of memory for specific uses.  For those who don't remember, CP/M
was hardcoded so that the kernal *had* to reside in the top 4kb of a 64kb
address space, forever limiting its applications to 60kb in size.

phil@pepsi.amd.com (Phil Ngai) (01/17/90)

In article <2021@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
|In article <28808@amdcad.AMD.COM> phil@pepsi.AMD.COM (Phil Ngai) writes:
|
|| The problem is all the lazy software houses who haven't bothered to move
|| to OS/2 and take advantage of the 16 meg protected mode even though the
|| 286 can handle it just fine.
|
|  Wait a minute. Why would any company trying to make a profit go to
|OS/2? There are so few users that they would be lucky to cover the cost

Because it's a good way to get 16 megabytes and multi-tasking and when
all those 286 users figure that out, they'll jump for it. Also, once
more applications are available (Pagemaker IS available) and dealers
start figuring out how to sell plug-n-play systems that look like Macs,
they'll make a lot of money.

Have you figured out yet why Apple sued Microsoft?
Have you ever seen the screen of Microsoft Windows 3.0?

--
Phil Ngai, phil@diablo.amd.com		{uunet,decwrl,ucbvax}!amdcad!phil
Peace through strength.

solomon@rice.edu (Richard L. Solomon) (01/17/90)

In article <729@jethro.Corp.Sun.COM> acm@sun.UUCP (Andrew MacRae) writes:
>In article <28808@amdcad.AMD.COM> phil@pepsi.AMD.COM (Phil Ngai) writes:
>>Don't be ridiculous. The 8088 can't ADDRESS more than 1 mega. Just where
>>did you expect IBM to put the IO stuff?
>>
>>The problem is all the lazy software houses who haven't bothered to move
>>to OS/2 and take advantage of the 16 meg protected mode even though the
>>286 can handle it just fine.
>
>Simple.  IBM/MicroSoft *should* have used soft pointers to the I/O memory
>areas.

	NO....they SHOULD have mapped the I/O in the I/O ADDRESS SPACE where
it belongs......there are 64kB of I/O addresses, anyone use that much?
(Excluding VidRAM and EMS which obviously could legitimately be in the
memory address space.)  

>        They made the same mistake that was made with CP/M, hardcoding
>areas of memory for specific uses.  For those who don't remember, CP/M
>was hardcoded so that the kernal *had* to reside in the top 4kb of a 64kb
>address space, forever limiting its applications to 60kb in size.

	I beg to differ!  CP/M requires that its "kernal" (and we use that
term lightly :) ) reside at the top of the address space.  The actual
location of the CCP (user interface) was coded with just such a "soft pointer"
as suggested.  Admittedly the BDOS (2.5k) and the BIOS header table must
follow immediately after the CCP, giving a "kernal" of minimally some 5k.
Note that all of this info is for CP/M 2.2.  Beyond that, only the first
256 BYTES of memory were hardcoded - DRI was not so devious as to map
things into the MIDDLE of an address space so as to force you to buy
CP/M 9000000.......
	Also, most early CP/M systems had 16 or 32kB of memory....their
apps were limited to correspondingly less memory  :)

Richard Solomon
solomon@owlnet.rice.edu

TGOLDIN@amherst.bitnet (01/17/90)

In article <948@philmtl.philips.ca>, ray@philmtl.philips.ca (Ray Dunn) writes:
> In a recent article:
>>passed the 640K limit of DOS
> 
> I see this over and over again, and would like to try to put it to bed...
> 
> MSDos does *not* have a 640K limit, it has a 1Meg limit in its addressing.
> 
> 640K is only an artifact of the architecture of the PC which *normally* uses
> the memory map above 640K for RAM and ROM on option cards and BIOS.  

There is a software product called 386 MAX that will allow you to map extended
or expanded memory into the 640K to 1Meg range.  This is especially useful when
you have memory resident programs and drivers that are taking up too much of
the 640K conventional memory.

toma@tekgvs.LABS.TEK.COM (Tom Almy) (01/17/90)

In article <729@jethro.Corp.Sun.COM> acm@sun.UUCP (Andrew MacRae) writes:
>Simple.  IBM/MicroSoft *should* have used soft pointers to the I/O memory
>areas.  They made the same mistake that was made with CP/M, hardcoding
>areas of memory for specific uses.  For those who don't remember, CP/M
>was hardcoded so that the kernal *had* to reside in the top 4kb of a 64kb
>address space, forever limiting its applications to 60kb in size.


Not true! It could reside anywhere in the 64kb address space, and if the
address space happened to be larger (I don't know how?) you could still
put it anywhere because you accessed DOS with "CALL 5". You had to be
trickier with BIOS calls, but with CP/M 3.0 you could do BIOS calls
with "CALL 5" as well. Also CP/M 3.0 was capable of bank switching
(if the hardware supported it) so that the bulk of the OS, as well as 
disk buffers could be placed outside the user 64k address space. 
It would be like placing MS/DOS in expanded memory.

Tom Almy
toma@tekgvs.labs.tek.com
Standard Disclaimers Apply

johnl@esegue.segue.boston.ma.us (John R. Levine) (01/17/90)

In article <4308@brazos.Rice.edu> solomon@screech.rice.edu (Richard L. Solomon) writes:
>>Simple.  IBM/MicroSoft *should* have used soft pointers to the I/O memory
>>areas.
>	NO....they SHOULD have mapped the I/O in the I/O ADDRESS SPACE where
>it belongs......there are 64kB of I/O addresses, anyone use that much?
>(Excluding VidRAM and EMS which obviously could legitimately be in the
>memory address space.)  

Hmmn, what did IBM actually put in the upper 384K?  Well, there's video
RAM, you want to leave that addressable for performance reasons.  Then
there's expanded memory, ditto, if you can't address it you might as well
use a RAM disk.  Then there's the system BIOS ROM and device BIOS ROMs,
again they really have to be addressable because they contain executable
code.  Gee, it looks like everything that's memory mapped really has to
be.  Oh well, it's a shame the 68008 wasn't ready in 1980.
-- 
John R. Levine, Segue Software, POB 349, Cambridge MA 02238, +1 617 864 9650
johnl@esegue.segue.boston.ma.us, {ima|lotus|spdcc}!esegue!johnl
"Now, we are all jelly doughnuts."

phil@pepsi.amd.com (Phil Ngai) (01/17/90)

In article <4308@brazos.Rice.edu> solomon@screech.rice.edu (Richard L. Solomon) writes:
|	NO....they SHOULD have mapped the I/O in the I/O ADDRESS SPACE where
|it belongs......there are 64kB of I/O addresses, anyone use that much?
|(Excluding VidRAM and EMS which obviously could legitimately be in the
|memory address space.)  

By "IO" Video RAM was exactly what I meant. The IO that you are thinking
about, such as serial ports, are ALREADY in the IO space.

|	I beg to differ!  CP/M requires that its "kernal" (and we use that
|term lightly :) ) reside at the top of the address space.  The actual

No doubt, since you can't spell it.

Anyway, I will say it again: protected mode is the way to go. This will
be available in Windows 3.0 or OS/2. Since these platforms provide GUIs
which mimic Apple's, they will be very popular and DOS along with its
640K limit will become less and less important.

--
Phil Ngai, phil@diablo.amd.com		{uunet,decwrl,ucbvax}!amdcad!phil
Peace through strength.

bcw@rti.UUCP (Bruce Wright) (01/17/90)

In article <2021@crdos1.crd.ge.COM>, davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:
>   Wait a minute. Why would any company trying to make a profit go to
> OS/2? There are so few users that they would be lucky to cover the cost
> of a new version of the compiler, much less cover their packaging,
> stocking, and labor costs. And be lucky to get enough ongoing
> income to cover the maintenance.
> 
>   This is not a case of lazy, just "no market." Someone told me that
> they sold more copies of their educational program for CP/M than OS/2!
> That's not a market. If this lady wasn't motivated by a desire to make
> social changes she would not have bothered.

I'm no great fan of OS/2, but I think you are greatly overstating the 
case.  OS/2 is produced and sold as a high-end operating system,
competing with things like single-user Unix systems on workstations and
high-end PC's.  It isn't likely to find its way into things like home 
computers, which is where educational software primarily winds up.

On the other hand, unless a product is either impossible (or at least
very difficult) to do under DOS, or unless porting the product to OS/2
is essentially trivial, there's no particular reason to exclude OS/2
from high-end products.  The question would be whether to implement for
OS/2 or Unix for such high-end products ... and for this sort of product,
you may be in a position where the user will decide to use the product
_first_ and then buy the OS.  Some CAD and desktop publishing situations
come to mind.

The main advantage of OS/2 over Unix is that in this sort of market, 
it's more of a standard system, in the sense that any two OS/2 systems 
are more alike than the different Unix variants are like each other.
This means that you don't have to deal with a different version of your
product for each version of the OS.  But Unix is (slowly) getting its act 
together in bringing the different versions together, and it does have a 
larger installed base (for now).

It's not quite obvious to me which will win this particular battle, or
even if there will be a clear winner.  (Though I'm sure that everyone
will agree afterwards that whatever the result is, it was obvious and
everyone should have seen it).  It's going to depend on how soon all
the players can get their collective acts together, and whether Unix
can survive the rather bad black eye it has given itself in many big
companies by its plethora of incompatible versions and the chaotic nature
of the Unix market.

But it is pretty obvious that it will be a LONG time before OS/2 -OR- Unix
ever becomes the operating system of choice for home users ... or possibly
even for the "bread-and-butter" business users as opposed to the high-
end systems mentioned above.

						Bruce C. Wright

ted@helios.ucsc.edu (Ted Cantrall) (01/18/90)

In article <1990Jan17.031934.3374@esegue.segue.boston.ma.us> johnl@esegue.segue.boston.ma.us (John R. Levine) writes:
>In article <4308@brazos.Rice.edu> solomon@screech.rice.edu (Richard L. Solomon) writes:
>>>Simple.  IBM/MicroSoft *should* have used soft pointers to the I/O memory
>>>areas.
>>	NO....they SHOULD have mapped the I/O in the I/O ADDRESS SPACE where
--
How would things have worked out if IBM had put this 384k block at the bottom
of the memory (0-384). That would have left no constrictions on upward
expansion except the 8088. And that problem would have been remedied by
the 80286.		-ted-


-------------------------------------------------------------------------------
ted@helios.ucsc.edu         | "The opinions are mine...
(408)459-2110               |    ...the facts are public domain."
-------------------------------------------------------------------------------

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (01/18/90)

In article <729@jethro.Corp.Sun.COM> acm@sun.UUCP (Andrew MacRae) writes:
| Simple.  IBM/MicroSoft *should* have used soft pointers to the I/O memory
| areas.  They made the same mistake that was made with CP/M, hardcoding
| areas of memory for specific uses.  For those who don't remember, CP/M
| was hardcoded so that the kernal *had* to reside in the top 4kb of a 64kb
| address space, forever limiting its applications to 60kb in size.

  1) your first statement is not true.
  2) your conclusion is meaningless

  CP/M required that the o/s sit at the top of available memory, not the
top of 64k. Early systems usually didn't have 64k; 16k of memory on 4k
static memory boards costing $200 each was about standard at the
beginning. It was also possible to put the o/s and most of the BIOS in
an alternate bank, allowing 63.5k of user memory (the highest and lowest
"page" of 256 bytes were dedicated).

  Since the o/s ran on a CPU with only 64k addressing, and the o/s was
at least 4k unless you played the games described above, applications
were limited in size no matter where the o/s was located. There never
was an equivalent of an extended memory standard to allow bank switching
(which is *very* similar to EMS).
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
            "Stupidity, like virtue, is its own reward" -me

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) (01/18/90)

In article <28824@amdcad.AMD.COM> phil@pepsi.AMD.COM (Phil Ngai) writes:
| In article <2021@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
| |    [  stuff  ]
| |  Wait a minute. Why would any company trying to make a profit go to
| |OS/2? There are so few users that they would be lucky to cover the cost
| 
| Because it's a good way to get 16 megabytes and multi-tasking and when
| all those 286 users figure that out, they'll jump for it. 

  I don't buy this at all. Users want to get things done, and they are
not about to buy a program because it needs more MB or does
multi-tasking. Unless the application does something that can't be
done on a smaller (cheaper) machine, neither the personal user nor the
company will spend the money to get a bigger, slower, o/s and add memory
to run it.
|                                                           Also, once
| more applications are available (Pagemaker IS available) and dealers
| start figuring out how to sell plug-n-play systems that look like Macs,
| they'll make a lot of money.

  The dealers around here are managing to sell applications which look
like Macs, and they seem to do it with apps which run on existing 640k
machines. There are some windows apps, and a few EMS, but I have yet to
see an app which really *needed* os/2, and which was so much better than
what was currently running that people *had to have it*. I have yet to
see a dealer who made as much as 10% of his/her sales in os/2 as opposed
to DOS.

  Until we get a "killer app" (I think Jim Seymore used that term first)
people won't rush into os/2. And with unix interfaces like Motif, and X,
and millions of existing unix boxes to run applications, there are good
reasons for software vendors to chase that market first. I suspect that
there are 100 times as many Sun users as os/2 (just to name one vendor),
and these users are used to paying three to ten times as much for
software as the PC users.
| 
| Have you figured out yet why Apple sued Microsoft?
| Have you ever seen the screen of Microsoft Windows 3.0?

  I'm sorry, I miss how this ties to the discussion of the future of
os/2. Is there a connection?
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
            "Stupidity, like virtue, is its own reward" -me

alien@cpoint.UUCP (Alien Wells) (01/18/90)

In article <2026@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
>  I don't buy this at all. Users want to get things done, and they are
>not about to buy a program because it needs more MB or does
>multi-tasking. Unless the application does something that can't be
>done on a smaller (cheaper) machine, neither the personal user nor the
>company will spend the money to get a bigger, slower, o/s and add memory
>to run it.

Gee ... this sort of sounds like the 64K CP/M & Apple II vs 640K DOS 
arguments ... doesn't it ...

>  Until we get a "killer app" (I think Jim Seymore used that term first)
>people won't rush into os/2.

I see ... and just what "killer app" was it that made the PC?  I had thought
the PC was based on spreadsheets and word processors, and those were all 
available for CP/M machines and Apple IIs.  Granted, 1-2-3 was somewhat
better than CP/M Visicalc, but no more than OS/2 1-2-3 release 3.0 is better 
than DOS 1-2-3 release 2.2 is (and do you really want to promote incompatible, 
home-brew DOS extenders with DOS 1-2-3 release 3.0? ...), and 1-2-3 over 
Visicalc was certainly less of a change than Beta reviewers of 1-2-3/G say it 
is over 1-2-3 3.0 ... (See the current PC Week ...)

>And with unix interfaces like Motif, and X,
>and millions of existing unix boxes to run applications, there are good
>reasons for software vendors to chase that market first. I suspect that
>there are 100 times as many Sun users as os/2 (just to name one vendor),
>and these users are used to paying three to ten times as much for
>software as the PC users.

Ah ... yeah.  Right.  From the Wednesday 17 January Wall Street Journal:
	[This year's expected sales of] 150,000 Sun workstations.
This is across ALL of Sun's lines.  This is compared to an expected 1,000,000
Macintoshes and many, many million PC compatibles going out the door this
year.

And besides, I thought that Sun was in the Open-Look (ie: piss on Motif)
camp?  The nice thing about Unix standards is there are so many of them
to choose from ...

Finally, if you want quantity of units and shrink-wrapping - you want to hit 
the binary compatible 386 market (ie: the PCs running Unix).  What sort of
standard graphical shell do we have there?  Gee, I'm running on one now and
I don't have ANY.  I've seen X (a co-worker insists on using it to get
multi-tasking in separate windows) and I wouldn't consider the XT-like
performance of text under X.  (Gee ... is that where the name X-windows came 
from? ;-)  I thought you valued speed (from the rest of your posting).  OS/2 
and PM are BLINDING in comparison with Unix and X!  X-windows REQUIRES a 
graphics co-processor on the graphics board that implements the entire X 
protocol in hardware to be acceptable.  I will grant that these are coming, 
but they will cost ...
-- 
--------|	I die ... you die ... we all die ...
Alien   |   		- the Heavy Metal movie
--------|     decvax!frog!cpoint!alien      bu-cs!mirror!frog!cpoint!alien

Ralf.Brown@B.GP.CS.CMU.EDU (01/18/90)

In article <10344@saturn.ucsc.edu>, ted@helios.ucsc.edu (Ted Cantrall) wrote:
}How would things have worked out if IBM had put this 384k block at the bottom
}of the memory (0-384). That would have left no constrictions on upward
}expansion except the 8088. And that problem would have been remedied by
}the 80286.              -ted-

Not really, since to get past 1M on the 80286, you have to switch to protected
mode, in which the segment registers have a totally different interpretation.
Most software would have to have been rewritten, anyway.
--
UUCP: {ucbvax,harvard}!cs.cmu.edu!ralf -=- 412-268-3053 (school) -=- FAX: ask
ARPA: ralf@cs.cmu.edu  BIT: ralf%cs.cmu.edu@CMUCCVMA  FIDO: Ralf Brown 1:129/46
"How to Prove It" by Dana Angluin              Disclaimer? I claimed something?
14. proof by importance:
    A large body of useful consequences all follow from the proposition in
    question.

emmo@moncam.co.uk (Dave Emmerson) (01/18/90)

In article <28808@amdcad.AMD.COM|, phil@pepsi.amd.com (Phil Ngai) writes:
| In article <1468@blackbird.afit.af.mil| ewilliam@galaxy-43.UUCP (Edward M. Williams) writes:
| |
| |IBM really messed things up when they designed the original PC (hindsight
| |is wonderful!) - they didn't think that anyone would ever need more than
| |640k of memory.  The original PC came with 64k!  They thought it would be 
| |safe to put all the video screen memory mappings, ROMs, and all that 
| |stuff up above the 640k line.
| |
| |Nowadays with memory capacities STARTING at 1M, the stuff between 640k and
| 
| Don't be ridiculous. The 8088 can't ADDRESS more than 1 mega. Just where
| did you expect IBM to put the IO stuff?
| 

How about at 1000:0 .. 4000:0, with 16K system RAM at 0000:0, and as
much as your processor can address from 4000:0 .. infinity.
I'm not too familiar with the 8086, but I do recall that some processors 
expect to find a hard reset vector at the top of their addressing range,
and this has to be in ROM. Even that is no excuse, properly written
relocatable soft/firmware could easily have accommodated the BIOS area
being moved. The bottom line is that everything else has followed on from
the choice of processor. That's not to say that things wouldn't have
been worse if the illustrious PC had been based on a Z80 or a 6502.
So many later decisions were made on the basis of how the mass of existing
software would cope, and of course it wouldn't have; the newer, improved
PCs would have had to start from scratch. Personally I'm more inclined
to lay blame at the feet of the major software house(s) of the late 70's -
early 80's. 
It's all purely academic of course, but I guess we all need to vent some
steam now and then, and MesSyDOS sure generates some!

Dave E.

emmo@moncam.co.uk (Dave Emmerson) (01/19/90)

In article <4308@brazos.Rice.edu>, solomon@rice.edu (Richard L. Solomon) writes:
> In article <729@jethro.Corp.Sun.COM> acm@sun.UUCP (Andrew MacRae) writes:
> >In article <28808@amdcad.AMD.COM> phil@pepsi.AMD.COM (Phil Ngai) writes:
[deleted]
> 	NO....they SHOULD have mapped the I/O in the I/O ADDRESS SPACE where
> it belongs......there are 64kB of I/O addresses, anyone use that much?
> (Excluding VidRAM and EMS which obviously could legitimately be in the
> memory address space.)  
> 
Umm, actually they did. It's the IO BIOS's which are causing the problems,
not the IO port addressing. The port allocations are only occasionally a
problem, and usually a soluble one on all but the cheapest boards. 
That aside, the mess caused by the allocated BIOS ROM area *could* have
been solved (and still can be) if the right people got their heads
together. It's really only a matter of being able to load and run software
in a linear space beginning just ABOVE the bios area, and donating the
entire area below the bios's to DOS as system memory. That should have
happened with the 80286. 
I doubt it will ever happen now, we're expected to go for OS2 when we
get really cheesed off with MeSsyDOS.
At least UN*X isn't running out of steam yet, and I can remember a
box which had a huge 1K RAM being around when the PC was conceived...

Dave E.

cs4g6ag@maccs.dcss.mcmaster.ca (Stephen M. Dunn) (01/19/90)

In article <4308@brazos.Rice.edu> solomon@screech.rice.edu (Richard L. Solomon) writes:
$In article <729@jethro.Corp.Sun.COM> acm@sun.UUCP (Andrew MacRae) writes:
$>In article <28808@amdcad.AMD.COM> phil@pepsi.AMD.COM (Phil Ngai) writes:
$>>Don't be ridiculous. The 8088 can't ADDRESS more than 1 mega. Just where
$>>did you expect IBM to put the IO stuff?
$>Simple.  IBM/MicroSoft *should* have used soft pointers to the I/O memory
$>areas.
$	NO....they SHOULD have mapped the I/O in the I/O ADDRESS SPACE where
$it belongs......there are 64kB of I/O addresses, anyone use that much?
$(Excluding VidRAM and EMS which obviously could legitimately be in the
$memory address space.)  

   Actually, have a look at the IBM PC memory map.  Up to segment A000 is
reserved for RAM.  A000-AFFF was reserved from day one for more capable
graphics cards (and has been used since EGA).  B000-BFFF is reserved for
display adapters (initially, B000-B0FF for the monochrome card, and B800-
BBFF for the CGA, with the rest to be used later - and it has been).
C000-F3FF is reserved for BIOS extensions - as in ROMs that belong in the
address space (this has also been used for RAM buffers on some cards,
such as IBM's Token Ring adapter).  F400-FFFF is for system ROM, IBM's
ROM BASIC, and the ROM BIOS.  The only ways to cut this down would be to
decrease the amount of space available for BIOS extensions (yeah, this
could be done, at the cost of more frequent collisions between cards)
or to have left less than 128K for display adapters and enforced the
sort of scheme used on VGA cards.  Oh, back to the token ring type of
card, which has its own RAM (up to 64K) on board:  if the I/O space
were to be reduced, it would have to use conventional memory for its
buffers.  Net gain for networks (which are becoming more and more
prevalent):  zero.
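
Laid out as a table (this just restates the reserved ranges described above;
what a particular card actually occupies within them varies):

    #include <stdio.h>

    struct region { unsigned start, end; const char *use; };

    /* the reserved ranges described above, by segment value */
    static struct region upper_map[] = {
        { 0xA000, 0xAFFF, "graphics adapters (EGA and later)"    },
        { 0xB000, 0xBFFF, "display adapters (mono, CGA, ...)"    },
        { 0xC000, 0xF3FF, "BIOS extensions, adapter RAM buffers" },
        { 0xF400, 0xFFFF, "system ROM, ROM BASIC, ROM BIOS"      },
    };

    int main(void)
    {
        int i;
        for (i = 0; i < 4; i++)
            printf("%04X-%04X  %s\n",
                   upper_map[i].start, upper_map[i].end, upper_map[i].use);
        return 0;
    }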

   So to get more memory, we would have had more awkward video cards and
more frequent problems when installing a new card in the system.  Worth
it?  Perhaps.  Don't get me wrong - I'm not trying to be a staunch IBM
defender (I don't think they came up with the best PC they could have, and
I don't think Microsoft came up with the best OS they could have, either);
I'm just pointing out that there isn't as much slack in there as one
might think there is.

$	Also, most early CP/M systems had 16 or 32kB of memory....their
$apps were limited to correspondingly less memory  :)

   Most early MS-DOS systems had 16 or 64K of memory ... perhaps IBM
didn't see their little computer growing so quickly that 10-40 times
its current memory would no longer be enough?  Or perhaps they were
under the impression that there would be an easier path to expansion
than Intel provided, so that users didn't have to recompile their
applications in order for them to take advantage of more advanced
processors (although I'm not sure how this would happen, exactly,
given how the limitations of the 8086/8088 come to pass).
-- 
Stephen M. Dunn                               cs4g6ag@maccs.dcss.mcmaster.ca
          <std_disclaimer.h> = "\nI'm only an undergraduate!!!\n";
****************************************************************************
       "I want to look at life - In the available light" - Neil Peart

cs4g6ag@maccs.dcss.mcmaster.ca (Stephen M. Dunn) (01/20/90)

In article <10344@saturn.ucsc.edu> ted@helios.ucsc.edu (Ted Cantrall) writes:
$How would things have worked out if IBM had put this 384k block at the bottom
$of the memory (0-384). That would have left no constrictions on upward
$expansion except the 8088. And that problem would have been remedied by
$the 80286.		-ted-

   It would have given us another 65 520 (64K-16) bytes when the 80286 came
out.  Don't forget that in order to use the 286's 16M address space, you must
be in protected mode.  In protected mode, segments have different meanings.
Instead of the meaning we're used to on the 8086/8088, they actually point
to a descriptor in memory giving information about the segment (such as
its address, length, privilege level, etc.)  So code written to run under
the 8086 processor will run fine in 80286 real mode, but not in protected
mode.
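
Where the 65 520 comes from: with the A20 address line enabled, FFFF:0010
through FFFF:FFFF no longer wraps around at 1M on a 286, even in real mode.
A little arithmetic check (untested, plain C):

    #include <stdio.h>

    int main(void)
    {
        /* On a 286 with A20 enabled, FFFF:xxxx no longer wraps at 1M. */
        unsigned long first = 0xFFFF0UL + 0x0010;   /* 100000h, just above 1M */
        unsigned long last  = 0xFFFF0UL + 0xFFFF;   /* 10FFEFh                */

        printf("%lu bytes reachable above 1M from real mode\n",
               last - first + 1);                   /* 65520 = 64K - 16 */
        return 0;
    }
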
-- 
Stephen M. Dunn                               cs4g6ag@maccs.dcss.mcmaster.ca
          <std_disclaimer.h> = "\nI'm only an undergraduate!!!\n";
****************************************************************************
       "I want to look at life - In the available light" - Neil Peart

zech@leadsv.UUCP (Bill Zech) (01/20/90)

In article <10344@saturn.ucsc.edu>, ted@helios.ucsc.edu (Ted Cantrall) writes:
.> In article <1990Jan17.031934.3374@esegue.segue.boston.ma.us> johnl@esegue.segue.boston.ma.us (John R. Levine) writes:
.> >In article <4308@brazos.Rice.edu> solomon@screech.rice.edu (Richard L. Solomon) writes:
.> >>>Simple.  IBM/MicroSoft *should* have used soft pointers to the I/O memory
.> >>>areas.
.> >>	NO....they SHOULD have mapped the I/O in the I/O ADDRESS SPACE where
.> --
.> How would things have worked out if IBM had put this 384k block at
.> the bottom
.> of the memory (0-384). That would have left no constrictions on upward
.> expansion except the 8088. And that problem would have been remedied by
.> the 80286.		-ted-
.> 

When the RESET line is asserted to the 8088, CS is forced to FFFF and
IP is cleared to zero.  So you must have some kind of read-only
memory at FFFF:0.

Some computers (the old Motorola EXORmacs comes to mind) had some
logic that modified the memory map temporarily during reset, so 
that ROM would appear where needed during bootup, then get out
of the way.  

The PC was designed as a simple machine, and most everything on
it is the minimum needed to get the job done.

- Bill

phil@pepsi.amd.com (Phil Ngai) (01/20/90)

In article <2026@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
|  I don't buy this at all. Users want to get things done, and they are
|not about to buy a program because it needs more MB or does
|multi-tasking. Unless the application does something that can't be

I think they would. If you have a choice between DOS Pagemaker which
makes you wait several minutes while saving a file and OS/2 Pagemaker
which starts off a thread to do the saving so you can do something
else, which would you choose?

|done on a smaller (cheaper) machine, neither the personal user nor the
|company will spend the money to get a bigger, slower, o/s and add memory
|to run it.

I would consider the difference between something like Wordstar and Word
for Windows to be worth the extra hardware cost. The DOT is buying
40,000 sets of Windows applications. True, it's taxpayer dollars.

|  The dealers around here are managing to sell applications which look
|like Macs, and they seem to do it with apps which run on existing 640k

Sure, once you get into the application. As long as you don't run out of
memory. Surely you don't think DOS is easy to use or user-friendly?

|There are some windows apps, and a few EMS, but I have yet to
|see an app which really *needed* os/2, and which was so much better than
|what was currently running that people *had to have it*. I have yet to
|see a dealer who made as much as 10% of his/her sales in os/2 as opposed
|to DOS.

No, it is still under development but I'm excited about the prospects.

|people won't rush into os/2. And with unix interfaces like Motif, and X,
|and millions of existing unix boxes to run applications, there are good
|reasons for software vendors to chase that market first. I suspect that

"Millions of existing unix boxes"? What kind of drugs are you smoking?
How many can run the same binaries? Or will you need Sun-3 binaries,
Sun-4 binaries, DECstation binaries, MIPS binaries, etc?

|there are 100 times as many Sun users as os/2 (just to name one vendor),

But what's the ratio of Sun-4 to PC compatibles?

|and these users are used to paying three to ten times as much for
|software as the PC users.

Well, sure you can try to marry a rich girl and be well off, but there's
a flaw in that strategy. The vendors which offer the most cost
effective solutions will win big. Have you ever priced Interleaf on a
Sun-4 compared to a 386 PC? The people who charge lots of bucks for Unix
applications (are there ANY cheap Unix applications?) may be making
money now but they will be sitting on the sidelines as time goes by.

|| Have you figured out yet why Apple sued Microsoft?
|| Have you ever seen the screen of Microsoft Windows 3.0?
|
|  I'm sorry, I miss how this ties to the discussion of the future of
|os/2. Is there a connection?

Yes, because OS/2 and Windows 3.0 look a lot like a Mac. Apple didn't
sue because they are SOBs, they sued because they saw a very serious
threat to their sales. Maybe you don't have enough vision to understand
but Apple sure does.

--
Phil Ngai, phil@diablo.amd.com		{uunet,decwrl,ucbvax}!amdcad!phil
Peace through strength.

phil@pepsi.amd.com (Phil Ngai) (01/20/90)

In article <2025@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
|There never
|was an equivalent of an extended memory standard to allow bank switching
|(which is *very* similar to EMS).

Let's not confuse eXtended Memory(XMS) with Expanded Memory (EMS).

--
Phil Ngai, phil@diablo.amd.com		{uunet,decwrl,ucbvax}!amdcad!phil
Peace through strength.

leonard@bucket.UUCP (Leonard Erickson) (01/20/90)

davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr) writes:

>In article <28808@amdcad.AMD.COM> phil@pepsi.AMD.COM (Phil Ngai) writes:

>| The problem is all the lazy software houses who haven't bothered to move
>| to OS/2 and take advantage of the 16 meg protected mode even though the
>| 286 can handle it just fine.

>  Wait a minute. Why would any company trying to make a profit go to
>OS/2? There are so few users that they would be lucky to cover the cost
>of a new version of the compiler, much less cover their packaging,
>stocking, and labor costs. And be lucky to get enough ongoing
>income to cover the maintenance.

Also, don't forget that while 286 and 386 machines are "sexy", the most
common platform is still the 8088/8086! See what happens when you suggest
to a customer that you write the software for OS/2 instead of for DOS. He
likes it right up to the point where you tell him that it'll only run on
286 machines with several meg of RAM.

The company I work for has been buying *nothing* but 286 machines and
386 machines for several years now. That means that about the middle
of last year we got to the point where we had more 286/386 machines
than PCs and XTs (and I'm talking True Blue PCs and XTs!). 

You have no idea how limited you can be by this sort of thing. Just
replacing them with 286 machines with 1 meg is going to take a minimum
of 2-3 years unless we make some sort of breakthrough in convincing the
people that have to sign the capital request (100 286 machines cost a
*lot* if you aren't willing to go with "Joe's clones" or the like).

Hell, we've got machines that aren't even *reliable* and it's been an uphill
fight to get them to let us buy replacements. "But they can still use
them, can't they?" <sigh> Sure. The machine only screws up once or twice
a week. And they rarely lose anything except time and patience. But there's
no way we are going to waste the money on *motherboard* problems on a PC.
That was what it took to convince them. All the micro support people
flatly told our new manager that repairs other than board swaps were
a waste of time and money on a PC. And since there aren't any spare
motherboards, there's no cost-effective means of getting any...

THAT is why OS/2 won't catch on very fast. You have to wait for a
company's investment in machines that can't run it to go away.
-- 
Leonard Erickson		...!tektronix!reed!percival!bucket!leonard
CIS: [70465,203]
"I'm all in favor of keeping dangerous weapons out of the hands of fools.
Let's start with typewriters." -- Solomon Short

al@escom.com (Al Donaldson) (01/22/90)

In article <28842@amdcad.AMD.COM>, phil@pepsi.amd.com (Phil Ngai) writes:
> Let's not confuse eXtended Memory(XMS) with Expanded Memory (EMS).

Right.  We could also call exPanded Memory (PMS) but that doesn't change
the fact that only two letters (out of 14) differ between the two names.  
Whoever came up with those names should be shot. 

Part of this venom was cooked up in trying to install an Everex RAM 3000
memory board.  Yeah, they describe the different systems, sort of, but
the possibility for name confusion creates a lot of problems.  For example,
the reader reading `extended' as `expanded', or wondering if someone
(engineer, typist, tech editor, whoever) made a mistake and really meant
`extended' but wrote down `expanded'.  At least I didn't come across any
occurrences of `expended memory' or `extanded memory'.  :-)

And all I really needed was a half meg more memory to run MINIX on...

Al

hwajin@wrs.wrs.com (Hwa Jin Bae) (01/22/90)

In article <28841@amdcad.AMD.COM> phil@pepsi.AMD.COM (Phil Ngai) writes:
>How many can run the same binaries? Or will you need Sun-3 binaries,
>Sun-4 binaries, DECstation binaries, MIPS binaries, etc?

I doubt OS/2, if ported to machines of different processor architecture,
will be able to support the same binaries with reasonable efficiency (one
can indeed imagine an interpretive binary definition but it would be
too slow).  So I fail to see the point of this argument.
-- 
hwajin@wrs.com
Wind River Systems, Emeryville CA

bcw@rti.UUCP (Bruce Wright) (01/23/90)

In article <370@marvin.moncam.co.uk>, emmo@moncam.co.uk (Dave Emmerson) writes:
> In article <28808@amdcad.AMD.COM|, phil@pepsi.amd.com (Phil Ngai) writes:
> | In article <1468@blackbird.afit.af.mil| ewilliam@galaxy-43.UUCP (Edward M. Williams) writes:
> | |
> | |Nowadays with memory capacities STARTING at 1M, the stuff between 640k and
> | 
> | Don't be ridiculous. The 8088 can't ADDRESS more than 1 mega. Just where
> | did you expect IBM to put the IO stuff?
> 
> How about at 1000:0 .. 4000:0, with 16K system RAM at 0000:0, and as
> much as your processor can address from 4000:0 .. infinity.

Since the largest address the 8086/8088 can generate is FFFF:F (offsets
higher than this wrap around to 0 on the 8086), this means that the
machine you are proposing would be able to address 768K of memory from
4000:0 up, and another 64K of "system memory" at 0000:0 (the segment
address 1000:0 refers to the second 64K of memory on the machine).

Even if you add these two numbers together, you have only 832K of memory
which although nicer than 640K is still not that big by modern standards.
You could achieve the same effect by having normal RAM extend from 0000:0
to D000:0 (instead of A000:0) and would still have the amount of I/O and
BIOS space you suggest (192K), without the complications.  But you would
also lose the flexibility of the extensible BIOS area which runs from
C000:0 to about F000:0, which is the most obvious place to economize;
or you would have to limit video RAM whose reserved area runs from A000:0
to BFFF:F on the PC.  Some PC's did do things like this, and consequently
could have more than 640K of RAM (the best-known was probably the DEC
Rainbow, which could address 896K of RAM), but they did so at the expense
of flexibility.
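
For anyone checking the arithmetic on the proposal (a hypothetical sketch
of that map, nothing any real machine used):

    #include <stdio.h>

    int main(void)
    {
        /* the proposed layout: RAM below the I/O hole at 1000:0..4000:0,
           RAM again from 4000:0 to the top of the megabyte */
        unsigned long low_ram  = 0x10000UL;               /*  64K below the hole */
        unsigned long high_ram = 0x100000UL - 0x40000UL;  /* 768K above it       */

        printf("usable RAM = %luK\n", (low_ram + high_ram) / 1024UL);  /* 832K */
        return 0;
    }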

You can't get any more blood from the 8086 stone by doing this kind of
thing.  About the only way to do it would have been to include bank
switching hardware in the original system.

> I'm not too familiar with the 8086, but I do recall that some processors 
> expect to find a hard reset vector at the top of their addressing range,
> and this has to be in ROM. Even that is no excuse, properly written
> relocatable soft/firmware could easily have accommodated the BIOS area
> being moved. The bottom line is that everything else has followed on from
> the choice of processor.

The 8086 does expect to find a reset vector at the top of memory (FFFF:0).
You can of course circumvent this by catching the reset line and building
your own circuit to handle it differently (including mapping into a "shadow
ROM" at some other address that "goes away" after the boot sequence), but
this costs money for the extra design and manufacture.  Why bother unless
you have a clear reason?  At the time, 640K seemed like an awful *lot* of
memory;  also, it's not at all clear that a significant amount could be
gained by techniques like a "shadow ROM" - how big is the boot code anyway?
Well under 64K on the PC (more like 2-8K depending on the machine).  It's
most often done only when you _have_ to get the code at a particular address
which is inconvenient once the O/S is actually running. 

Since you wouldn't gain much additional memory by moving the BIOS area,
why should the effort be made to make it relocatable or use shadow ROM's?

Your comment about everything else following from the choice of the
processor is, as you can see, quite true.

> That's not to say that things wouldn't have
> been worse if the illustrious PC had been based on a Z80 or a 6502.

Of course, many earlier machines were ... but it's doubtful that the
original IBM PC would have been as popular or that its descendants would
have lasted so long.  It's not at all true that the IBM name guarantees
success - look at the PCjr or the 5100.

> Personally I'm more inclined
> to lay blame at the feet of the major software house(s) of the late 70's -
> early 80's. 

Probably true inasmuch as you talk about the problems with DOS - there
was no excuse even at the time for some of the design deficiencies of
Messy DOS (especially the early versions!).  But the problems with the
addressability of the 8086/8088 are cast in stone.

						Bruce C. Wright

phil@pepsi.amd.com (Phil Ngai) (01/23/90)

In article <10344@saturn.ucsc.edu> ted@helios.ucsc.edu (Ted Cantrall) writes:
|How would things have worked out if IBM had put this 384k block at the bottom
|of the memory (0-384). That would have left no constrictions on upward
|expansion except the 8088. And that problem would have been remedied by
|the 80286.		-ted-

Ha ha. You have to run in protected mode to get more than 1 meg on the
286. As long as you're doing that, may as well go to OS/2.

--
Phil Ngai, phil@diablo.amd.com		{uunet,decwrl,ucbvax}!amdcad!phil
Peace through strength.

dhinds@portia.Stanford.EDU (David Hinds) (01/25/90)

In article <1990Jan24.174315.13698@ux1.cso.uiuc.edu>, mcdonald@aries.scs.uiuc.edu (Doug McDonald) writes:
> Yes, but why did they have to botch it? One tiny, simple change
> in it, that would not increase (it probably would DECREASE) the
> gate count, and it would have been MUCH better: make the segment
> registers a simple extension of the address - that is, the
> 8086 would have used only 4 bits of the segment registers,
> but the 80286 would use 8 and the 80386 all 16. 
> 
> A second change to the 8086, that would have added two instructions, 
> would have been to add "add carry to segment register" and
> "subtract carry from segment register" instructions.
> 
> IT would still get flamed for segments, but at least the 80286 
> would have been able to run 8086 code directly.
> 
    This is wrong.  Segments are not such a bad way of increasing the
amount of addressable memory.  Remember, the 8086 does 16-bit arithmetic,
and segments were what made 1M of address space possible.  Having segment
registers just be the top address lines would have been a disastrous mess.
It would force things to be aligned to 64K boundaries to fully exploit the
range of a 16-bit address offset.  Note that the same basic segmentation
concept is preserved even in the 80386's protected mode.  Yes, the meaning
of the value loaded in a segment register is different, but the philosophy
is the same.  A segment can start at a nearly arbitrary location in memory,
paragraph-aligned for the 8086 or byte-aligned on the 80386, and all address
references are segment-relative.  The advantage of the 80386 is that the
range of offsets is bigger - not that the segmentation concept has been
removed.  As the meaning of a segment register value is different in the
protected modes of the 80286 and 80386, there is no way that Intel could
have made the 8086 segment registers transparently compatible.
    I find all these complaints about the 640K barrier to be very circular.
Given the design of the 8086, it seems to be a reasonable choice to allocate
memory above 640K for system use.  Is there any more reasonable place to
put the stuff?  IBM had sufficient foresight to allocate enough space for
system use, that we are only now running out of room.  Remember that the
"below 640K crunch" is also an "above 640K crunch".
    I don't think we can be critical of Intel for making the 8086 "so"
limited.  Ten years (or was it longer) ago when the chip came out, 1 meg of
memory was more than a lot of mainframes had, and more than a PC user
dreamed of.  The level of machine language compatibility between the 8086
and the protected modes of the '286 and '386 is extremely high for things
that don't muck around with segment registers.  I think the lack of a
suitable operating system for the 80386 is a much more fitting target for
complaints.  And before anyone says "just use Unix", I don't think the
present state of Unix ports is such that they are appropriate for the 
general PC community.  Most people don't want the responsibilities of a
system administrator just to keep a PC running.  And most people also don't
want to spend as much on their operating system as they paid for their
computer.

 - David Hinds
   dhinds@portia.stanford.edu

bcw@rti.UUCP (Bruce Wright) (01/25/90)

In article <1990Jan24.174315.13698@ux1.cso.uiuc.edu>, mcdonald@aries.scs.uiuc.edu (Doug McDonald) writes:
> In article <21990004@hpvcfs1.HP.COM> johne@hpvcfs1.HP.COM (John Eaton) writes:
> >In order to understand some of the limitations of the PC you must remember
> >how the 8086 came to be. Intel was on a roll with the 4004/8008/8080 and
> >decided to go all out and design a micro that really had some POWER. It
> >was to be something that could challenge the minis and become the chip
> >of the 80's. I believe it was originally called the "432". But they still
> >needed to keep business coming in the door until this wonder chip was ready
> >so they decided to do a quick and dirty enhancement of the 8080. This was the
> >8086. It's only purpose was to keep the 8080 family alive until Intel could
> >deliver its real processor. Its hard to fault the designers for any design
> >decisions that were appropriate for it's expected lifespan.
> >
> Yes, but why did they have to botch it? One tiny, simple change
> in it, that would not increase (it probably would DECREASE) the
> gate count, and it would have been MUCH better: make the segment
> registers a simple extension of the address - that is, the
> 8086 would have used only 4 bits of the segment registers,
> but the 80286 would use 8 and the 80386 all 16. 
> 
> A second change to the 8086, that would have added two instructions, 
> would have been to add "add carry to segment register" and
> "subtract carry from segment register" instructions.

The main problem with the first suggestion is that unless there are other
changes in the machine code, it makes writing position-independent routines
more difficult.  Things like .COM files exploit the fact that you don't
need to do any fixups to run a .COM file at any segment address (if the 
program follows some fairly simple and obvious rules).  However, they
could have gotten much of the benefit of the 8086 segments without *quite*
so many of the drawbacks by making the segment values multiples of 64 or
256 bytes rather than multiples of 16.  This would not waste a great deal
of memory due to fragmentation, and would allow a _much_ larger address
space.   The main problem is that at the time, the memory lost due to
fragmentation probably looked bigger than it does today.
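
The numbers behind that: a 16-bit segment register times the paragraph
size is the reach of the address space.  A trivial sketch (just the
arithmetic, nothing more):

    #include <stdio.h>

    int main(void)
    {
        static const unsigned long grain[] = { 16UL, 64UL, 256UL };
        int i;

        /* 65536 possible segment values * paragraph size = address space */
        for (i = 0; i < 3; i++)
            printf("%3lu-byte paragraphs -> %5luK address space\n",
                   grain[i], 65536UL * grain[i] / 1024UL);
        return 0;
    }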

Probably they should have done something along this line, possibly
making all the memory address instructions relative or having both
relative and absolute forms.  But if you aren't careful you might
end up designing a 68000, which wouldn't do now would it? :-)

The second suggestion is a pretty good idea, but it would have complicated
the design (there's no need to have anything like counters or adders for
dealing with the segment registers, except as necessary for the address
generation hardware).

						Bruce C. Wright

emmo@moncam.co.uk (Dave Emmerson) (02/03/90)

In article <28874@amdcad.AMD.COM|, phil@pepsi.amd.com (Phil Ngai) writes:
| In article <10344@saturn.ucsc.edu| ted@helios.ucsc.edu (Ted Cantrall) writes:
| |How would things have worked out if IBM had put this 384k block at the bottom
| |of the memory (0-384). That would have left no constrictions on upward
| |expansion except the 8088. And that problem would have been remedied by
| |the 80286.		-ted-
| 
| Ha ha. You have to run in protected mode to get more than 1 meg on the
| 286. As long as you're doing that, may as well go to OS/2.
| 

BUT IT NEED NOT HAVE BEEN THAT WAY.
Which other 2nd/3rd/4th generation processor has this b****y stupid 
'protected' mode? 

Please nobody mention the 68020's memory management, that's not at all
the same thing, nothing like it!

WHEN I can get the tools I use under DOS at the price they sell for for
DOS I'll give OS2 some thought. I hope I live that long...

I keep saying I'm going to drop this thread, but....

Dave.

Elbereth@moncam.co.uk (Dave Emmerson) (02/25/90)

In article <Dme8D3w160w@gendep>,  writes:
> emmo@moncam.co.uk (Dave Emmerson) writes:
> 
> > Which other 2nd/3rd/4th generation processor has this b****y stupid 
> > 'protected' mode? 
> 
> [deleted]
> So basically what it comes down to is that Motorola is working with
> design ideas that are about 6 years younger than Intel's.  But what
> would happen if, say, the world turned to 64-bit microprocessors?
> Motorola's 32-bit linear address space couldn't be retained without
> (you guessed it) something resembling Intel segmentation.  The
> newer Intel processors would be in pretty much the same boat, even
> in protected mode, for the same reasons.
> [more deletions]

I generally agree with most of the article, but the above extract just 
doesn't hold water. 64 bit processors have been around for some time
btw, ask AMD.
A 64 bit register/address 68xx0 would cope very easily with all current
software without ANY extra mode instructions being needed. If you are
familiar with 680x0 assembler, consider what happens when you do a 16
bit load into an address register. The MSB automatically gets extended
into the upper 16 bits, right? So, literally all that's needed is for
32bit loads into 64bit address registers to follow the precedent.
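
In C terms that rule is just a sign extension; carried up another step it
would look like this (a hypothetical sketch in modern C spelling, nothing
to do with any real part):

    #include <stdio.h>
    #include <stdint.h>

    /* 68000 style: a 16-bit move to an address register sign-extends to 32 */
    int32_t load16_into_a32(int16_t v) { return (int32_t)v; }

    /* the same precedent carried up: a 32-bit load into a 64-bit register */
    int64_t load32_into_a64(int32_t v) { return (int64_t)v; }

    int main(void)
    {
        /* a "negative" (high-bit-set) value keeps its meaning either way */
        printf("%08lX\n",   (unsigned long)(uint32_t)load16_into_a32(-2));
        printf("%016llX\n", (unsigned long long)(uint64_t)load32_into_a64(-2));
        return 0;
    }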

Such a powerful processor would in any case very likely have more
registers than its predecessors, so while that mode is applied to the
bottom 8 A & D regs, the new ones would be true 64 bit registers AF..8
and DF..8 to give new programmes access to the processor's real power.

This is all speculation of course, the *facts* I have had from MOT are
covered by a non-disclosure agreement, and cover a different aspect
entirely, but I can tell you that the 680x0 is only in its infancy,
even now, unless they eventually kill it off to promote the 88xxx, and
that won't be for a while yet, the 88xxx isn't well enough established.

If you're *really* interested, try to wangle an invite to one of the
seminars, it can be quite enlightening, and gives you some idea which
way things are going in the next few years.

ATB

Dave E.