[comp.sys.ibm.pc] Segmented vs. linear architectures

friedl@vsi.UUCP (Stephen J. Friedl) (02/21/88)

Netpeople,

     I am sure this has been debated before and I certainly don't
want to start a net war.  Just about everybody has an opinion
about the segmentation-vs-linear architectures question, and in
many camps it seems to be a strongly religious question.  I happen
to dislike segmentation quite a bit -- I used to hack on an old
Z8000 machine -- but it is largely out of ignorance about the
reasoning behind it.  I cannot believe that there are *no* good
reasons for segmentation and that *nobody* could ever find a use
for this way of doing things.

     I'm not looking to pick a processor for a product and I'm
not really looking to be "converted", but I would really like to
hear from dedicated segmentationalists on the reasoning behind
this.  No flames please, no "If God wanted non-segmentation he
would have put all our fingers on one hand" arguments :-), just
good technical thoughts.  Also, people who have strong well-
informed feelings against segmentation are welcome to respond as
well.

     Any responses should be sent by e-mail (remember, I don't
want to start a net war).  Others interested in this may send a
note and I'll mail them a summary of what I hear.

     Thanks much,
     Steve
-- 
Life : Stephen J. Friedl @ V-Systems Inc/Santa Ana, CA    *Hi Mom*
CSNet: friedl%vsi.uucp@kent.edu
uucp : {kentvax, uunet, attmail, ihnp4!amdcad!uport}!vsi!friedl

AHS@PSUVM.BITNET (02/22/88)

Friedl

I prefer to program my utilities in assembler (and my applications in APL).

Hooking extensions into the OS of an 8085 (a 64k linear-memory microprocessor) was
not fun because I had to constantly recompile to relocate the code in memory if
another extension was already there.  (I mean extensions such as:  keyboard
remapper, data compressor, keyboard macro, xmodem, special disk access,
command-line editor, etc.).

For me, the 8088 (a segmented 80x86 microprocessor) was a great relief because
each COM program owns a 64k address space of its own, with offsets starting at
0000.  That is, you program as if memory were empty and your code always begins
at offset 0100 (the first 100h bytes hold the PSP).  The OS then takes care of
loading an *exact* copy of the program somewhere in memory, at the first free
spot among the 65,536 paragraph boundaries in the 1-Meg address space at which
a COM program's segment can start.  There is no need to recompile (or
dynamically patch) for each new position of the program in memory, because the
OS loads a byte-for-byte identical image into the segment: with segments and
COM programs, there is no need to dynamically patch the code as *must* be done
in a linear memory scheme.
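
Here is a minimal sketch of what makes a COM image relocation-free (assumed
A86/MASM-style syntax, not code taken from any actual utility): the only
addresses in it are 16-bit offsets within whatever segment DOS picks at load
time, so the identical bytes run at any load address.

        org     100h            ; DOS loads the COM image at offset 100h of its segment
start:  mov     dx, offset msg  ; a near offset only -- valid at any load segment
        mov     ah, 09h         ; DOS function 09h: print '$'-terminated string at DS:DX
        int     21h             ; (DS already equals CS in a COM program)
        mov     ax, 4C00h       ; DOS function 4Ch: terminate with return code 0
        int     21h
msg     db      'same image runs at any load address$'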

These are the reasons why I am very fond of segments: they totally eliminate
the relocation problem without any loss of loading speed, or of execution
speed, or of flexibility, and without any programming effort.

Note that I program in assembler, and that a 64k assembler *program* is a huge
program that I doubt many people have ever written.  Note also that a COM
program can access the full address space of the 8086/8088 (i.e., 1 Meg) and
can therefore use for *data* *ALL* the free memory, even above 640k if there is
memory there (such as the video memory).  Accessing this memory outside the
program's own 64k is exactly what the DS and ES segment registers are for.
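
As a small sketch of that DS/ES point (assumed A86/MASM-style syntax; segment
B800h assumes a color text mode is active, purely for illustration): the code
and its near data stay in the 64k segment DOS chose, while ES reaches memory
far outside it.

        org     100h
        mov     ax, 0B800h      ; color text-mode video memory (hardware assumption)
        mov     es, ax          ; ES now addresses memory outside the program's 64k
        xor     di, di
        mov     al, 'A'
        mov     es:[di], al     ; character byte of the top-left screen cell
        inc     di
        mov     al, 07h
        mov     es:[di], al     ; attribute byte: light grey on black
        mov     ax, 4C00h       ; terminate via DOS
        int     21h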

As a last note, I program using the A86/D86 assembler/debugger, which is
designed for writing COM programs from modular blocks of assembly code (or
libraries of source code).  (It does not use libraries of OBJ modules, because
it assembles source faster than a linker can link OBJ modules, and with the
added benefit that source modules can be edited (or set by equates) for exact
array sizes, while OBJ modules cannot be edited.)  Since the assembler deals
only with COM files, it is free of all the complications and speed degradation
involved in generating linkable code (i.e., OBJ), which is then used, in
conjunction with an OBJ library, either to create EXE programs or to be linked
into OBJ modules from other languages that need assembly modules for speed or
flexibility.

Note also that with an 80386, the size of a segment can be as large as the full
address space.  That should take care of people writing compilers who need
large blocks of linear memory to make their programming life easier.

If one does not program in assembler, the benefits of segments are harder to
use and appreciate.

Michel

PS:  Wordstar, DBase, TurboPascal, TurboBasic, and probably TurboC were written
in assembler.

--e-o-f--

campbell@maynard.BSW.COM (Larry Campbell) (02/23/88)

From article <34208AHS@PSUVM>, by AHS@PSUVM.BITNET:

> I prefer to program my utilities in assembler (and my applications in APL)...

Assembler is for cave men.
APL is for Martians.
-- 
Larry Campbell                                The Boston Software Works, Inc.
Internet: campbell@maynard.bsw.com          120 Fulton Street, Boston MA 02109
uucp: {husc6,mirror,think}!maynard!campbell         +1 617 367 6846

jamesa@amadeus.TEK.COM (James Akiyama) (02/25/88)

First, I should probably mention that I'm not sold on segmentation.  I have
examined both architectures and believe that each has advantages in certain
applications.

Intel's segmentation turns out to be an effective method of providing a
relatively low-cost memory management scheme.  Privileges can be assigned
through the segments, resulting in certain memory locations being protected.
Under this scheme, potential privilege violations need only be checked when
crossing segment boundaries--resulting in faster execution and/or lower cost
than schemes that require a privilege test on every access.  This type of
protection does have the drawback of not being entirely transparent to the
software: since memory areas with different privileges must live in different
segments, the software must load the appropriate segment register before
accessing them.  Other protection schemes exist which are more transparent but
also, generally, more expensive (in dollars and/or speed penalties).  Also,
these software idiosyncrasies are often handled in the operating system, making
them relatively "transparent" to applications.
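
As a concrete illustration of how a privilege gets attached to a segment (the
descriptor layout is Intel's documented 80286/80386 format; the label and the
base/limit values below are only an example), the DPL field in the access byte
is what gets checked when a program loads a selector for this descriptor into
a segment register, rather than on every individual memory reference:

user_data_desc:                 ; one 8-byte entry in a descriptor table
        dw      0FFFFh          ; segment limit, bits 15:0 (a 64K data segment)
        dw      0000h           ; segment base, bits 15:0
        db      00h             ; segment base, bits 23:16
        db      0F2h            ; access byte: present, DPL = 3, writable data segment
        dw      0               ; reserved on the 80286 (flags/limit/base high on the 80386)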

Relative execution speed between the Intel segmented architecture and a linear
architecture (such as Motorola's) is also very dependent on the application.  I
think you'll find that the two chip families (80X86 and 680X0) offer comparable
speeds--oftentimes speed is more related to the support hardware, compiler, or
operating system.

I think the biggest drawback of the 8086 and 80286 segments is their 64K size
limit.  This is what causes all the memory model headaches.  It was, for the
most part, fixed in the 80386 if you are willing to lose downward compatibility
with the older processors.
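
As a sketch of the kind of bookkeeping that 64K limit forces on data larger
than one segment (assumed MASM-style syntax; the routine name and register
convention are only an illustration): stepping a far pointer past a segment
boundary means adjusting the segment register yourself, which is exactly the
work the larger 80386 segments remove.

        ; advance the far pointer DX:AX (segment:offset) by the byte count in CX,
        ; then renormalize so the offset never runs off the end of its segment
advance_far:
        add     ax, cx          ; add the count to the 16-bit offset
        jnc     no_carry
        add     dx, 1000h       ; offset overflowed 64K: skip ahead 4096 paragraphs
no_carry:
        mov     bx, ax
        mov     cl, 4
        shr     bx, cl          ; whole paragraphs contained in the new offset
        add     dx, bx          ; fold them into the segment (16 bytes per paragraph)
        and     ax, 000Fh       ; keep only the within-paragraph residue in the offset
        ret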

In summary, which architecture is appropriate is very dependent on your
particular application.

					James E. Akiyama
					Tektronix, Inc.

nather@ut-sally.UUCP (Ed Nather) (02/25/88)

In article <1032@amadeus.TEK.COM>, jamesa@amadeus.TEK.COM (James Akiyama) writes:
> 
> I think the biggest drawback of the 8086 and 80286 segments is their 64K size
> limit.  This is what causes all the memory model headaches.  It was, for the
> most part, fixed in the 80386 if you are willing to lose downward compatibility
> with the older processors.
> 
> In summary, which architecture is appropriate is very dependent on your
> particular application.
> 

I agree.  I've just completed the "port" of a real-time application program,
originally written for the Nova computers, to the 808x series.  The display
is live (animated) and requires a circular data buffer, which is a real
pain to implement at display refresh speeds.  However, by making the buffer
exactly 64K long and assigning it to a segment of its own, I was able to let
the segmenting hardware do the "wrap" for me, at no cost in speed.
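
A minimal sketch of the trick (illustrative syntax and names only; BUFSEG and
DATA_PORT are placeholders, not the actual hardware): because the buffer is
exactly 64K long and sits in a segment of its own, the 16-bit index register
rolls over from 0FFFFh back to 0 by itself, so the inner loop needs no bounds
check at all.

BUFSEG    equ   8000h           ; hypothetical paragraph address of the free 64K buffer
DATA_PORT equ   300h            ; hypothetical I/O port being sampled

        mov     ax, BUFSEG
        mov     es, ax          ; the whole segment is the circular buffer
        xor     di, di          ; DI is the circular write index
        cld                     ; make STOSB count upward
        mov     dx, DATA_PORT
fill:   in      al, dx          ; read one sample
        stosb                   ; ES:[DI] = AL, then DI = DI + 1 (wraps mod 64K)
        jmp     fill            ; no compare-and-reset of DI is ever needed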

Specialized, sure.  But as the man said ...


-- 
Ed Nather
Astronomy Dept, U of Texas @ Austin
{allegra,ihnp4}!{noao,ut-sally}!utastro!nather
nather@astro.AS.UTEXAS.EDU