[net.micro.pc] 80286 microprocessor problems

wizard@intelca.UUCP (Kevin Supinger) (11/30/84)

	Here we go again with more astounding ignorance.
I suggest that you reread the 80286 programmer's reference manual.
The 80286 was not intended as a processor for simple minds, such
as some other so-called 32-bit architectures.  The protection
mechanism works very well when used properly and provides the
control that is needed by application and system programmers.

	1) The concept of segment descriptors may at first be
	hard to grasp, but it simply replaces the real-address
	segment value with a descriptor index.  The upper 13 bits
	of the selector index into the Global or Local descriptor
	table.  Of the remaining bits, two give the requested
	privilege level and one indicates whether the descriptor
	is local or global.  Since each descriptor consists of 8
	bytes, the selector is in effect loaded with the byte
	offset of the desired descriptor, so no shifting of
	addresses is needed.
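
The selector layout described above can be sketched in C (a sketch only;
the struct and field names here are mine, not Intel's):

```c
#include <assert.h>

/* Decompose an 80286 selector into its fields (names are mine):
   bits 15..3  index of an 8-byte descriptor within the table,
   bit  2      table indicator, 0 = global (GDT), 1 = local (LDT),
   bits 1..0   requested privilege level (RPL). */
struct selector_fields {
    unsigned index;
    unsigned ti;
    unsigned rpl;
};

struct selector_fields decode_selector(unsigned short sel)
{
    struct selector_fields f;
    f.index = sel >> 3;       /* upper 13 bits */
    f.ti    = (sel >> 2) & 1;
    f.rpl   = sel & 3;
    /* Because descriptors are 8 bytes, sel with its low 3 bits masked
       off is already the byte offset of the descriptor in its table,
       which is why no shifting is needed. */
    assert(f.index * 8u == (sel & ~7u));
    return f;
}
```

For example, selector 0x000B names descriptor 1 in the GDT at RPL 3.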

	2) Each descriptor contains a linear 24-bit segment base
	address that gives byte granularity to the starting address.
	The segment limit is 16 bits, or 64K, in size.  An access
	rights byte tells the 80286 the type of descriptor, which
	allows the same 8 bytes to describe call gates, interrupt
	gates, data, code, etc.  One point that is most misunderstood
	is the operation of the 286 when a segment register is
	changed in protected mode.  When a new value is placed in a
	segment register, the 286 automatically fetches all the
	descriptor information into internal storage.  This allows
	fast protection and access verification with minimal
	execution impact.  Also, with the pipelined bus and the
	predecoding of 3 instructions for the execution unit, the
	80286 is still very fast even in protected mode.
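
A minimal C sketch of the descriptor contents just described (the layout
is flattened for illustration; a real descriptor packs these fields into
its 8 bytes):

```c
#include <assert.h>

/* Sketch of an 80286 segment descriptor's contents; field layout
   simplified, the actual part packs these into 8 bytes. */
struct descriptor {
    unsigned long base;    /* 24-bit segment base, byte granularity  */
    unsigned int  limit;   /* 16-bit limit, so segments are <= 64K   */
    unsigned char access;  /* access-rights byte: type, DPL, present */
};

/* What the internally cached descriptor buys on each reference:
   a limit check plus a 24-bit linear address, base + offset. */
long linear_address(const struct descriptor *d, unsigned int offset)
{
    if (offset > d->limit)
        return -1;  /* a real 286 raises a protection fault here */
    return (long)(d->base + offset);
}
```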

	3) As far as the ENTER and LEAVE instructions go, these
	instructions, when used properly, facilitate writing compact
	and very fast code, most notably "C" code.
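
At lexical nesting level 0, ENTER and LEAVE encode the standard C
prologue and epilogue in one instruction each.  A toy C model of that
stack behavior (my sketch, on a made-up word-addressed stack, not
anything from the manual):

```c
#include <assert.h>

/* Toy model of ENTER n,0 and LEAVE on a word-addressed stack. */
unsigned short stack_mem[1024];
unsigned short sp = 1024, bp = 0;

void enter(unsigned short locals)        /* ENTER locals,0 */
{
    stack_mem[--sp] = bp;                /* push BP     */
    bp = sp;                             /* mov  BP, SP */
    sp -= locals;                        /* sub  SP, n  */
}

void leave(void)                         /* LEAVE */
{
    sp = bp;                             /* mov  SP, BP */
    bp = stack_mem[sp++];                /* pop  BP     */
}
```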

Conclusion: I think that most software writers should get a better
grip on the 80286 architecture.  I myself at first thought it to be
too complex, but after programming on the IBM AT in protected mode
and coming to understand the 80286's operation, I wouldn't settle
for anything less than this processor.

The views of the author do not necessarily reflect Intel's viewpoint.

guy@rlgvax.UUCP (Guy Harris) (12/04/84)

> 	Here we go again with more astounding ignorance.
> I suggest that you reread the 80286 programmer's reference manual.
> The 80286 was not intended as a processor for simple minds, such
> as some other so-called 32-bit architectures.

OK, does that mean you can do a compiler that can generate large-model
code that runs as efficiently or more efficiently on the '286 than on
machines with 32-bit registers?  If not, why should developers waste their
time programming around a machine which isn't 100% at home with large
address spaces?  If so, where is this compiler, and why are several
presumably knowledgeable compiler writers quoted as saying that writing
a compiler to generate good large-model code for the 8086 family is a
bitch?

> Conclusion: I think that most software writers should get a better
> grip on the 80286 architecture.  I myself at first thought it to be
> too complex.

Why should software writers have to get a better grip on a particular
chip's architecture?  Why should chip makers pump out chips that require
people other than the OS and compiler developers to deal with the gory
details?  People developing a program whose purpose isn't dealing with
the hardware should be free to spend their time making the program serve
its purpose better, not making it deal more efficiently with the problems
the chip provides.  That's why applications are written in high-level
languages.

	Guy Harris
	{seismo,ihnp4,allegra}!rlgvax!guy

gnu@sun.uucp (John Gilmore) (12/04/84)

> intelca!wizard sez:
> 	Here we go again with more astounding ignorance.
> I suggest that you reread the 80286 programmer's reference manual.
> The 80286 was not intended as a processor for simple minds...

I'll just leave that one alone.

> as some other so-called 32-bit architectures.  The protection mechanism
> works very well when used properly and provides the control that is
> needed by application and system programmers. 

I believe the original author wanted to write programs that dealt with
more than 64K of data.  That is, he didn't want to be protected, he
wanted to get at his data.  Given that, the question is:  how can a
compiler generate code to subscript a (say) 128Kbyte array, such as a
bitmap of a 1024x1024 screen, without somehow knowing how the operating
system is going to allocate segment descriptors?  On the 8086, it was
easy:  You take the address you want to get to, shift it down 4 bits,
and load that into a segment register.  Then use the low 4 bits to get
at the element you want within that segment.  On the 286, though, when
you load that segment register, it goes indirect through a descriptor
which is NOT under the compiler's control.
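
That 8086 recipe can be written out in C (a sketch; the bus forms the
20-bit physical address as segment * 16 + offset):

```c
#include <assert.h>

/* The 8086 real-mode trick: shift the target address right 4 bits
   into a segment register and keep the low 4 bits as the offset. */
unsigned short seg_for(unsigned long addr) { return (unsigned short)(addr >> 4); }
unsigned short off_for(unsigned long addr) { return (unsigned short)(addr & 0xF); }

/* How the hardware recombines the pair into a physical address. */
unsigned long physical(unsigned short seg, unsigned short off)
{
    return ((unsigned long)seg << 4) + off;
}
```

On the 286 in protected mode the same segment-register load instead
selects a descriptor, which is exactly why the trick breaks.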

Please believe that some of us understand how segment descriptors work,
and even understand the magic process that wastes a dozen cycles every
time you load one doing a subscript in the inner loop.  Can you just
give us a short listing of what assembler code should be generated to
subscript an array FOO of 128K bytes with index I, that is:

	static char foo[128L*1024], c;
	long i;

	c = foo[i];

Please then show us the code to generate if 'static' is replaced by
'auto', that is, the array (say, a Z-buffer) is on the stack.

> 	3) As far as the ENTER and LEAVE instructions go, these
> 	instructions, when used properly, facilitate writing compact
> 	and very fast code, most notably "C" code.

The original question was how to use them with a stack bigger than
64K.  ("The display pointers are *only 16 bits*...").  I think stacks
bigger than 64K don't work on the 286 anyway, so there's no problem, by
definition.
--

The opinions expressed here are not those of Intel.

louie@umd5.UUCP (12/05/84)

In article <1838@sun.uucp> gnu@sun.uucp (John Gilmore) writes:
>
>I believe the original author wanted to write programs that dealt with
>more than 64K of data.  That is, he didn't want to be protected, he
>wanted to get at his data.  Given that, the question is:  how can a
>compiler generate code to subscript a (say) 128Kbyte array, such as a
>bitmap of a 1024x1024 screen, without somehow knowing how the operating
>system is going to allocate segment descriptors?  On the 8086, it was
>easy:  You take the address you want to get to, shift it down 4 bits,
>and load that into a segment register.  Then use the low 4 bits to get
>at the element you want within that segment.

This is EASY??  With microprocessors like the 68000 and the 32000 available,
with large linear address spaces, why bother?  Will this hardware-induced
software rot never end?

Louis A. Mamakos
Computer Science Center - Systems Programming
University of Maryland, College Park

Internet: louie@umd5.arpa
UUCP: ..!seismo!cvl!umd5!louie