[net.micro] 64k segmentation

bernie (04/07/83)

The fact is that 64k segments just aren't that much of a problem.  Even on
the 8086 it's easy to have programs that exceed the 64k boundary in a clean
and painless fashion; indeed, you can have programs close to a meg in size
without difficulty.  Granted, there's a little extra overhead since the
inter-segment jumps have to indirect through a location containing a CS:IP
pair... still, that's a relatively small price to pay.  A good high-level
language makes the whole thing transparent to the user anyway.
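A rough C sketch of what that indirect location holds (the struct and helper
names here are purely illustrative): the 8086's indirect far jump reads a
4-byte cell with the target offset in the low word and the target segment in
the high word, and lands at (CS << 4) + IP.

    #include <stdio.h>

    struct far_target {
        unsigned short ip;   /* new instruction pointer (offset), low word */
        unsigned short cs;   /* new code segment, high word */
    };

    /* 20-bit physical address the jump lands on: (CS << 4) + IP */
    static unsigned long land_addr(struct far_target t)
    {
        return ((unsigned long)t.cs << 4) + t.ip;
    }

    int main(void)
    {
        /* e.g. a routine at segment 0x3000, offset 0x0120 -- well past
           the first 64k of code */
        struct far_target t = { 0x0120, 0x3000 };
        printf("jump lands at %05lX\n", land_addr(t));
        return 0;
    }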
				--Bernie Roehl
				...decvax!watmath!watarts!bernie

dyer (04/08/83)

Generally, those who complain about 64k segments aren't complaining
about CODE segmentation, since this is easy to get around.  Rather,
the problem is support for large data objects within a segmented
architecture.

For example, most of the new crop of C compilers for the 8086/88
have dismissed this problem as "too hard" (in the best UNIX tradition)
and so give you access to only 64k of data.  Even those
few C compilers which store pointers as 32-bit segment/offset pairs
keep the segment fixed, while the offset wraps around at the 64k limit.
And it's really hard to expect them to do much else without incurring
lots of overhead.
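To make the wrap-around concrete, here is a small illustration (not from
the original article; plain C, no real far pointers involved) of a pointer
stored as a segment:offset pair where only the 16-bit offset is ever
adjusted:

    #include <stdio.h>

    int main(void)
    {
        unsigned short seg = 0x2000;
        unsigned short off = 0xFFFE;

        unsigned long before = ((unsigned long)seg << 4) + off;
        off += 4;                      /* 16-bit arithmetic wraps to 0x0002 */
        unsigned long after  = ((unsigned long)seg << 4) + off;

        /* after < before: the "pointer" jumped backwards by almost 64k */
        printf("before %05lX, after %05lX\n", before, after);
        return 0;
    }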

The moral is: give a compiler writer a reasonable architecture, and you'll
get reasonable behavior.  Otherwise...

dmk (04/08/83)

     There may be some misunderstanding here.  The limitation
on inter-segment jumps is that WHEN YOU DO THEM INDIRECTLY,
you have to do it through memory instead of a register.  However,
you don't have to do an indirect jump to get an inter-segment jump.
You can do it in immediate mode, just like the intra-segment
jumps.  So it's even easier than the previous article makes
it appear.

                        David Keaton
                        nmtvax!keaton
                        (and lanl-a!dmk)

mark (04/08/83)

Oh?  If I want to have a 100K array, or to be able to dynamically
allocate gobs of small pieces of memory amounting to 1 MB or more,
you're going to make references to a[i] or p->f transparent and
reasonably efficient?

Granted, overlays can be transparent for program code, but what about data?
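To spell out the cost being pointed at here (this sketch and its names are
only illustrative), every a[i] into an array larger than 64k has to be
turned into a segment:offset pair before the hardware can use it:

    #include <stdio.h>

    #define ELEM_SIZE 2UL   /* say, an array of 16-bit ints */

    struct seg_off {
        unsigned short seg;
        unsigned short off;
    };

    /* Turn a linear element index into the seg:off pair the 8086 needs,
       assuming the array starts at paragraph base_seg, offset 0. */
    static struct seg_off locate(unsigned short base_seg, unsigned long i)
    {
        unsigned long byte = i * ELEM_SIZE;
        struct seg_off p;
        p.seg = base_seg + (unsigned short)(byte >> 4);  /* whole paragraphs */
        p.off = (unsigned short)(byte & 0xF);            /* small remainder */
        return p;
    }

    int main(void)
    {
        /* element 60,000 of a 2-byte array: 120,000 bytes in, past any
           single 64k segment */
        struct seg_off p = locate(0x1000, 60000UL);
        printf("seg %04X off %04X\n", p.seg, p.off);
        return 0;
    }

That's a multiply, a shift, and a mask on every reference, plus a segment
register load when the result is actually used -- hardly "transparent and
reasonably efficient".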

johnl (04/09/83)


    "The fact is that 64k segments just aren't that much of a problem."

Sorry, pal, it's just not so.  Certainly, dealing with segmented code is
not too hard.  The limit of 64K per procedure is entirely reasonable, and
the calling sequences found on such segmented machines as the 8086 and
the HP3000 are perfectly workable.

Segmented data, though, is a disaster.  Defining an array that is bigger
than 64K is, to say the least, convoluted.  On the 8086, you have to know
that segment numbers are physical addresses shifted right by 4 bits (the
hardware shifts them back left and adds the offset) and do lots of shifting
and masking yourself.  Unfortunately, when you trade up to an 80286, the
segmentation scheme changes (if you want to use the protection features)
and all your programs break.  You could put each item in a different
segment, but on the 286 there are segment tables which will become quite
large.
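One workaround that avoids segment arithmetic entirely (sketched here from
common practice, not from the article) is to carve the big array into
chunks that each fit comfortably in one segment and index with a divide and
a remainder; the same source then works whether a segment value is a
paragraph number (8086) or a descriptor table index (80286):

    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK   8192UL      /* elements per chunk; well under 64k bytes */
    #define TOTAL   100000UL    /* a "100K" array of ints */
    #define NCHUNKS ((TOTAL + CHUNK - 1) / CHUNK)

    static int *chunk[NCHUNKS]; /* one allocation (one segment) per chunk */

    static int *elem(unsigned long i)   /* address of element i */
    {
        return &chunk[i / CHUNK][i % CHUNK];
    }

    int main(void)
    {
        unsigned long i;
        for (i = 0; i < NCHUNKS; i++) {
            chunk[i] = malloc(CHUNK * sizeof(int));
            if (!chunk[i])
                return 1;
        }
        *elem(99999UL) = 42;    /* well past any single 64k segment */
        printf("%d\n", *elem(99999UL));
        return 0;
    }

The price is a divide on every access, which is exactly the kind of
overhead a flat-address-space machine never asks you to pay.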

For systems with many tiny processes, e.g. process control, bank
teller machines, and vending equipment, segmented architectures are fine.
For general purpose systems, they're horrible, which is why there are 100
68000 ports of Unix for each 8086 port.

John Levine, decvax!yale-co!jrl, ucbvax!cbosgd!ima!johnl,
{research|alice|rabbit|floyd|amd70}!ima!johnl