[net.micro.amiga] C compiler comparison

dillon@CORY.BERKELEY.EDU (Matt Dillon) (10/02/86)

	Manx code is smaller because it defaults to 16-bit ints.
If you use it in a comparable mode (the +l option for 32-bit ints), I think
Lattice wins.  This, by the way, is why Manx code is usually a *little*
faster.  Lattice bungled their library for 3.03 and below, so a lot of
library code gets pulled in even when you reference supposedly short routines.

	However, if you do not use Lattice's current library (for instance, 
my SHELL and the utilities I posted use my own library), the difference in
executable sizes is negligible.

	Lattice's 3.04 library is supposedly much improved and no longer
causes these huge executables.

----
	What I don't like about Manx is the fact that they decided not to
use the Amiga's standard for object modules, libraries, etc., which causes
headaches.  The new linker, BLINK, is quite a bit faster than Manx's
linker anyway.

	According to Jay Denebeim (sysop of the BBS which carries Blink),
Lattice's 3.04 compiler will have PC relative addressing modes for calls
and jumps, and relative addressing through an address register for
static/global variables.  He pointed out that this would mean that I could
make my shell and any other program re-entrant without any problem.  This
will also, incidentally, probably make the code more compact than Manx's,
as Lattice already has some great optimization hacks in the compiler.

					-Matt

dillon@CORY.BERKELEY.EDU (Matt Dillon) (10/05/86)

>From: ewhac@well.UUCP (Leo 'Bols Ewhac' Schwab)
>	Also probably due to their use of 16 bit ints, which are, in
>general, twice as fast as equivalent 32 bit operations.

	Not twice as fast, since '32 bit operations' does not imply 32
bit memory access *all* the time.  I seem to remember going through all
this before (and being corrected on some points):

	32 bit register operations are only slightly slower than 16 bit
register operations.  Memory operations are, of course, twice as slow, but
if you include instruction fetch as well, the overall slowdown is not all
that much.  Also, note that generic code performs far more register
operations than memory (data fetch) operations.  Thus, the overall
speedup from using 16-bit ints isn't as much as you might think.
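Since the two compilers disagree on the width of a bare "int", code meant to build under both is safer pinning the width itself.  A minimal sketch (the typedef names are mine, not from either compiler's headers):

```c
/* Under Manx's default, "int" is 16 bits; under Lattice (or Manx +l)
 * it is 32.  Width-explicit typedefs keep struct layouts and
 * arithmetic identical under either compiler.  (The names here are
 * illustrative; modern code would take them from <stdint.h>.) */
typedef short int16;    /* 16 bits on the 68000 under both compilers */
typedef long  int32;    /* 32 bits on the 68000 under both compilers */

/* A sum that must not depend on which compiler built it: a bare
 * "int" accumulator would overflow at 32767 under 16-bit ints. */
int32 sum_thousands(int16 n)
{
    int32 total = 0;
    int16 i;
    for (i = 0; i < n; i++)
        total += 1000L;     /* L suffix forces long arithmetic */
    return total;
}
```

sum_thousands(100) yields 100000 under either compiler; with a plain int accumulator and 16-bit ints, it would wrap.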


>>5.  Manx DOES NOT USE THE STANDARD AMIGA FILE FORMAT.  This
>>    is why the Manx compiler refused to work under AmigaDOS
>>    1.2 (and perhaps still refuses, I don't know.)  Manx
>>    made up their own object file format and wrote their own
>>    linker.  Object produced with Manx will not be compatible
>>    with the Commodore assembler or Commodore libraries.
>>
>	Jim Goodnow, author of the MANX compiler, had this to say about the
>Alink object format in the Amiga conference on The WELL:
>--------
>Actually, the Metacomco format is fairly limited since it doesn't
>support any type of expression and is only a slight superset of
>the load format. There are a lot of things we do with our load format
>that can't be done within the limitations of the Metacomco format.
>On the other hand, it is fairly simple to convert the Metacomco
>format to the Aztec format and there will be a utility for that
>purpose, that will convert object modules and/or libraries.
>The object format we use has actually been evolved over several years
>and is virtually identical for the 8080, 6502, 8086 and 68000 systems
>that we support.

	I'm interested in exactly what the Metacomco format doesn't have
that is needed.  One thing it lacks has to do with relative and address
base calculations: I hear a new hunk type will be added by Lattice in 3.04
and supported by Blink.  Is there anything else the current format doesn't
have?

	In diverging from the 'standard' object and library formats,
Manx users must do a little work to get the latest revision of the Amiga
libraries (a conversion program would make this trivial).


>-------- In a later message, Jim also writes:
>There are several problems. The biggest, is that a number of the items
>in a hunk do not have a size. For example, a symbol item consists of a
>number of chunks consisting of a name length, name, and offset. When the
>loader loads your program it doesn't care about your symbols, but it can't
>just seek past them, it has to look at each name size, skip the name and offset
>and then look at the next name size. If you have a lot of symbols, that can make a
>big difference. A similar problem comes when you want to look at external
>references. The first pass of a linker just wants to look at symbols and
>their offsets to get everything resolved. It would have been nice if each
>module had a header that told you where in the file the symbols were located
>so you could go right to them and get them.

	Here I agree with the comment... a provision should have been made
for the equivalent of UNIX RANLIB (create a random-access library).  The
point about the symbol size (not being able to skip entire symbol sections)
is justified, but those sections are generally so small that the file
buffering reads them in anyway; I don't think you would see much speed
improvement there.
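For illustration, here is roughly what the walk Jim describes looks like in C.  The chunk layout (length in longwords, name, value, zero terminator) follows his description above, not the exact hunk_symbol specification:

```c
#include <stddef.h>
#include <stdint.h>

/* A symbol block is a series of chunks -- name length (in 32-bit
 * longwords), the name itself, then a value -- ended by a zero
 * length.  With no total byte count up front, even a loader that
 * doesn't care about symbols must visit every chunk to get past
 * the block. */
size_t skip_symbols(const uint32_t *p)
{
    size_t words = 0;
    while (p[words] != 0) {
        uint32_t name_len = p[words];   /* name length in longwords */
        words += 1 + name_len + 1;      /* length word + name + value */
    }
    return words + 1;                   /* include the terminator */
}
```

A RANLIB-style header giving the block's total size would reduce this whole loop to one seek.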

>>You don't need LSE at all to run Lattice C;  it's just a
>>text editor.  I use MicroEMACS 3.6, which is public 
>>domain.  MAKE can be had for free in the public domain,
>>so don't buy Manx just for MAKE and don't buy Lattice
>>MAKE either.  
>>
>	Agreed.  But having 'make' is still nice.
>	I've also observed that some people on the net actually *like* the
>lattice compiler, saying that the new version has advantages over MANX.
>
>	Let me put it to you this way:  Which would you rather type?  This:
>
>1> df1:lc1 -idf1:include/ -oram: foo
>1> df1:lc2 -v ram:foo
>1> alink df1:lib/lstartup.obj+ram:foo.o library df1:lib/lc.lib+df1:lib/amiga.lib to ram:foo faster
>
>	or this:
>
>1> cc foo.c
>1> ln foo.o -lc -o foo
>
>	I realize I may be exaggerating the situation a bit, but I think
>MANX is *infinitely* easier to use.  And no flames about using batch files
>to run the Lettuce compiler; my disks grind quite enough, thank you.
>
>	What can I say?  I like MANX.


	Gosh, I guess the MAKE I'm using with Lattice is just a ghost.  As
for 'cc', there is a version of that for Lattice also (though I just use
some nice aliases from my shell to do all the dirty work).
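The sort of wrapper both camps end up with can be sketched as a tiny front end that expands one command into the three-step Lattice sequence quoted above.  The paths are copied from Leo's example; a real wrapper would also execute the commands rather than just compose them:

```c
#include <stdio.h>

/* Hypothetical 'cc'-style front end for Lattice: expand a base name
 * into the lc1 / lc2 / alink command lines from Leo's example.
 * This only composes the strings; running them is left to the
 * caller. */
int build_commands(const char *base, char cmds[3][256])
{
    sprintf(cmds[0], "df1:lc1 -idf1:include/ -oram: %s", base);
    sprintf(cmds[1], "df1:lc2 -v ram:%s", base);
    sprintf(cmds[2],
            "alink df1:lib/lstartup.obj+ram:%s.o library "
            "df1:lib/lc.lib+df1:lib/amiga.lib to ram:%s faster",
            base, base);
    return 3;
}
```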

Points for Manx:
	-Based on 16-bit ints making for more compact code.
	-Compilation is faster (this is the biggest point)
	-has options to save and restore the symbol table so you don't need
	 to include all those include files all the time.
	-optional in-line assembly
	-optional assembly generation (?)
	-comes with a vi-like editor (?)
	-comes with a make utility
	-stdio library is well done

Points not for Manx:
	-Doesn't use the Amiga standard for modules
	-The Manx linker is not any faster than Blink (so much for RanLibs).
	-There are PD MAKEs and CCs for Lattice
	-Doesn't have error reporting as in-depth as Lattice's.
	-32-bit ints seem to be a hack (not the major optimization point)
	-Code compaction gained with 16-bit ints is completely lost with 32.


Points for Lattice:
	-Adheres to the Amiga standard
	-very nice error reporting

Points not for Lattice:
	-two-pass compiler; executables for both passes total 200K+ (takes
	 a long time to load)
	-current released stdio library not in very good shape.
	-doesn't have equivalent save/restore state
	-doesn't directly support the FFP library for floating point


Points of NO INFORMATION: (that I do not have any info on)
	-Does Manx directly support the FFP library?
	-Is the Manx compiler one pass or two? executable size?
	-how large is a symbol table dump compared to the size of
	 the #include files dumped?

	-do utilities exist in PD to convert standard libraries/object
	 modules to Manx and back?

	-has anyone done any speed tests Lattice vs Manx which DO NOT
	 include the time it takes to load the executables? (this is
	 for compilation).  Also, don't include manx's ability to load
	 symbol table dumps.. What I'm interested in is a comparison
	 of actual compilation speeds.


I agree that Lattice has a bit of catching up to do in terms of speed,
but think that Manx is off track by going off standard.

					-Matt

higgin@cbmvax.cbm.UUCP (Paul Higginbottom) (10/05/86)

In article <8610050658.AA12992@cory.Berkeley.EDU> dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
>...
>Points for Manx:
>	-Based on 16-bit ints making for more compact code.
>	-Compilation is faster (this is the biggest point)
>	-has options to save and restore the symbol table so you don't need
>	 to include all those include files all the time.
>	-optional in-line assembly
>	-optional assembly generation (?)
>	-comes with a vi-like editor (?)
>	-comes with a make utility
>	-stdio library is well done
>
>Points not for Manx:
>	-Doesn't use the Amiga standard for modules

This has never made the slightest bit of difference to me.  I only use C,
so I'm hardly worried about mixing object files.  Also, you mentioned that
this would make it harder to have the latest libraries because they're
released in MetaComco format - well, how many versions have been
released?  Two?  Three MAYBE?  There's no "disadvantage" here, Matt.

>	-The Manx linker is not any faster than Blink (so much for RanLibs).

That's not a DISADVANTAGE - you're just writing points about areas where
Lattice is "AS GOOD AS" Manx.

>	-there are PD MAKE's and CC's for Lattice

So?  This is a negative for Manx?

>	-Doesn't have error reporting as in-depth as lattice.

Definitely agree, Manx should be improved on this score.  The latest beta
version IS beta, er, better.

>	-32 bit ints seems to be a hack (not the major optimization point)
>	-Code compaction gained with 16 bits ints completely lost with 32.

Definitely agree, the 32 bit mode of the compiler is not only weak, it
has BUGS.  But then, the Manx system was DESIGNED to only use 16 bit ints,
and I think anyone who buys Manx and only uses the 32 bit option is
foolish.  I have used it just to compile things I know will need 32 bit
ints (i.e., generally that means sloppy code that interchanges pointers
and ints - do these people run their compilers in SILENT mode?  Surely
they must get tons of warnings.)

>...
>Points of NO INFORMATION: (that I do not have any info on)
>	-Does Manx directly support the FFP library?

Yes, it's pretty nice - the "float" type uses mathffp, and the double
(which in 3.20a was unimplemented and was basically treated internally like
a float except for storage and passing size) can now be treated as
mathieee, or Manx double precision (if you don't want to have to rely
on a library being around or the load time off disk), or 68881 support
(I don't know how this works).

>	-Is the Manx compiler one pass or two? executable size?

The compiler LITERALLY produces an assembler source file, and 'cc' invokes
the assembler with the temporary source file (which can be kept around
with an option).  The temp file can be made wherever you want (I use RAM:
of course).  The compiler is about 74K (v3.30c) and the assembler is
about 42K (impressive, and the assembler now supports all the assembler
directives for compatibility).

>	-how large is a symbol table dump compared to the size of
>	 The #include files dumped?

Well, I don't know if this helps, but here's how I run.  I have a
symbol table file of EVERY SINGLE INCLUDE Amiga has in RAM, and it's 114K.
The ram driver under 1.2 appears to be INCREDIBLY fast, and the compiler seems
to take about a tenth of a second to load it in.

>	-do utilities exist in PD to convert standard libraries/object
>	 modules to Manx and back?

You can convert TO Manx, but not back (i.e., upward compatible).  HOWEVER!!!
The next release of Manx will allow you to FREELY INTERMIX Amiga and Manx
object code, and link libraries.

>	-has anyone done any speed tests Lattice vs Manx which DO NOT
>	 include the time it takes to load the executables? (this is
>	 for compilation).  Also, don't include manx's ability to load
>	 symbol table dumps.. What I'm interested in is a comparison
>	 of actual compilation speeds.

Well, this isn't an ideal speed example, but it might give you an idea.
Consider the following:

	cd ram:
	edit in test.c which is the standard hello world program
	cc test (1.5 seconds - that includes compile and assemble)
	ln test.o -lc (5 seconds)

>I agree that Lattice has a bit of catching up to do in terms of speed,
>but think that Manx is off track by going off standard.
>
>					-Matt

The standard that Manx lacks is completely irrelevant for me,
because a) it doesn't affect my FULL-TIME development, and b) if a standard
is bad, why should one use it (e.g., IFF)?

Lattice has a TON of catching up to do.  I read recent praises about
Lattice's "slick optimizing ability", but I've gone through tons of Manx
compiler output, and some of the code it produces is nothing short of
astonishing: math operations converted into bit shifts when constants are
powers of two, knowledge of "easy" constants like 0, 1, -1, etc.  It uses
short relative addressing wherever possible.
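The power-of-two trick Paul mentions is ordinary strength reduction; the identity it rests on is easy to check in source form (the function names are mine):

```c
/* For unsigned x and a constant power of two, x * 2^k == x << k and
 * x / 2^k == x >> k.  A shift costs a handful of cycles on the
 * 68000 where MULU and DIVU cost on the order of 70 and 140, so the
 * compiler rewrites the multiply or divide as a shift. */
unsigned mul8(unsigned x)  { return x * 8;  }   /* emitted as x << 3 */
unsigned div16(unsigned x) { return x / 16; }   /* emitted as x >> 4 */
```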

My development system by the way, is an Amiga with Comspec's 2Mb AX2000
add-on, a MicroForge hard drive, and one external floppy.  A lot of stuff
goes into ram at powerup but all source/object remains on the hard drive.
My system is usually faster than a VAX-750 I've used (even UNLOADED).

My only gripe with Manx is that the 32 bit option isn't perfect.

	Paul Higginbottom.

Disclaimer: I do not work for Manx or Commodore and my opinions are my own.

phils@tekigm.UUCP (Phil Staub) (10/06/86)

In article <8610021606.AA18976@cory.Berkeley.EDU> dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
>
>	Manx code is smaller due to the fact that it defaults to 16-bit ints.
>If you use it in a comparable mode (+l option for 32-bit ints), I think
>Lattice wins.  This, by the way, is why Manx code is usually a *little*
>faster.  Lattice bungled their library for 3.03 and below, and thus a lot
>is generally included if you reference even supposedly short routines.
>
>	However, if you do not use Lattice's current library (for instance, 
>my SHELL and the utilities I posted use my own library), the difference in
>executable sizes is negligible.
>
>	Lattice's 3.04 library is supposedly much improved and no longer
>causes these huge executables.
>

I use the Manx compiler with the +l option almost exclusively, and typically
find that (at least on some of the code supplied on the Fish disks) I still
only wind up with about half the amount of code as the Lattice version. I
suspect this is due to the library problems you mentioned. I normally find
only minimal code size increase from 16 to 32 bit ints.

>----
>	What I don't like about Manx is the fact that they decided not to
>use the Amiga's standard for object modules, libraries, etc... which causes
>headaches.  the new linker, BLINK, is quite a bit faster than Manx's
>linker anyway.
>

I agree it would have been nice if the Manx format had been
compatible, but that would have made things too easy!  8-)

>	Talking to Jay Denebeim (sysop for the BBS w/carries Blink), Lattice's
>3.04 compiler will have PC relative addressing modes for calls and jumps, and
>relative addressing through an address register for static/global variables.
>He pointed out that this would mean that I could make my shell and any other
>program re-entrant without any problem.  This will also, incidently, probably
>make the code more compact than Manx as Lattice already has some great
>optimization hacks in the compiler.
>

Manx 3.20a already uses these optimizations if small code and data models
are used (the default).

Phil Staub
Tektronix, Inc.
ISI Engineering
P.O. Box 3500
Vancouver, Washington 98668
C1-904, (206) 253-5634
..tektronix!tekigm!phils

john13@garfield.UUCP (10/06/86)

[]

I haven't really used the Manx compiler for anything more than compiling
sources downloaded from the net, and compiling my Big Project (a calculator
personalized to fit my own needs), but I've used Lattice quite a bit,
and for the same basic purposes, so I guess I shouldn't be afraid to comment.

I was *amazed* when I first compiled a program under Manx that I had also
compiled under Lattice. The Big Project executable was 29K and change.
Manx executable, after the source has been *lengthened* considerably, is
just over 11K. Matt commented on the short/long integer issue, as it relates
to the size of executables - the 11K is compiled with the 32 bit integers,
until I work out a better way to handle big factorials. Without the option,
the file is only a few hundred bytes shorter.

This discrepancy is not just due to the stuff Lattice puts in at compile time.
Back in the early stages, I reduced the size of the executable under Lattice
by 1K, just by replacing "if A then func1(); if B then func2();" with
"if A then func = func1; if B then func = func2; func();".  1K just to
add in *one* function call?  If anyone has reliable figures for the amount
of memory taken by specific compiler actions, I'd like to see them.
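For reference, John's rewrite reconstructed as compilable C (the names are stand-ins; the saving came from emitting the call sequence once instead of at two sites):

```c
/* Stand-ins for the two routines John selects between. */
static int func1(void) { return 1; }
static int func2(void) { return 2; }

/* Before: two direct call sites --
 *     if (A) func1();  if (B) func2();
 * After: choose a function pointer, then one indirect call site. */
int dispatch(int A, int B)
{
    int (*func)(void) = 0;
    if (A) func = func1;
    if (B) func = func2;
    return func ? func() : -1;      /* -1: neither condition held */
}
```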

Also, I haven't run gfxmem with a Manx link yet, but with Alink the memory
is just chewed right up!  There isn't enough on a 512K Amiga to keep all the
important DOS commands, lc.lib, and amiga.lib in memory while doing a link.
Using Manx, there is no need for the DOS commands in ram: (it's a full
Workbench), and I've had no problems fitting c.lib, m.lib, source, all
quad files, and executables in memory during the full process.  All that
has to be loaded are the compiler, assembler, and linker.  Just for 
comparison, I can compile and link the Big Project in 51 seconds with
Manx. Lattice? Take a coffee break in the 4-5 minutes it takes. What are
the experiences of other people who have used both?

As to Lattice being the "standard" by which Amiga sources are to be
measured: the number of sources I have downloaded or found on Fish
and Amicus disks that have hitchlessly compiled with Lattice is.......
ZERO. The vt100 emulator that I am using right now caused a _Task Held_
message and guru on the first pass through window.c.  Manx didn't
even burp on it (yes, I incorporated the Lattice #define and all the
posted bug fixes before trying Lattice).  Freedraw, remember that way back
when? The number of errors that generated was unbelievable. Etc, etc.
Admittedly I haven't tried compiling *every* source I have...I gave up in
disgust after about the twentieth.  (I had a letter all typed and ready to
go to Mr. Wecker, pleading for help on the vt100; I decided not to post it
that day, and the next day I got to try Manx.)

BTW, my Lattice didn't include (oh those puns) [L|A]Startup.obj. The listing
of files on the disk was at odds with the documentation. Stdio.h and others
were in the include/lattice directory, from which they had to be rescued
if *any* program was to work properly. First out of the factory? No, a
relatively recent 3.03.

I'm open to comments and criticisms regarding the merits of both compilers.
However, my mind is not likely to be changed, largely because of the 2
months I spent twiddling my thumbs while the sources I had downloaded
sat around getting moldy.

Disclaimer: All preceding opinions are my own as an Amiga user, not those
of my employer or this university.

Plea: Anyone have a path to cbmvax that WORKS?

Wish: That we see lots more ray trace pictures posted! Lots'n'lots!

John Russell
UUCP:	{akgua,allegra,cbosgd,ihnp4,utcsri}!garfield!john13
CDNNET:	john13@garfield.mun.cdn

swalton@well.UUCP (Stephen R. Walton) (10/07/86)

In article <8610050658.AA12992@cory.Berkeley.EDU> dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
> (...a fairly accurate Manx vs. Lattice comparison omitted, then)
>
>
>Points of NO INFORMATION: (that I do not have any info on)
>	-Does Manx directly support the FFP library?

   Yes.  As a result, Manx is nearly 10 times faster than Lattice on
floating point calculations.

>	-Is the Manx compiler one pass or two? executable size?

    The compiler is two pass;  the second pass is an assembler which takes
most (all,in the new release) of the constructs accepted by the Metacomco
assembler.  Executables, even with int=32 bits, are always significantly
smaller.

>	-how large is a symbol table dump compared to the size of
>	 The #include files dumped?
>
    They're fairly large, but they load MUCH faster than the #include files.
>	-do utilities exist in PD to convert standard libraries/object
>	 modules to Manx and back?
    For the reasons Jim Goodnow gave, a standard -> Manx converter is easy,
but a Manx -> standard converter is not.  There is no PD converter;  one
comes with the commercial Aztec package.  Jim also promises that the new
version of Manx (now shipping) will have a linker which will read standard
object files.
>	-has anyone done any speed tests Lattice vs Manx which DO NOT
>	 include the time it takes to load the executables? (this is
>	 for compilation).  Also, don't include manx's ability to load
>	 symbol table dumps.. What I'm interested in is a comparison
>	 of actual compilation speeds.
      Do you have any suggestions on how to do this?  Also, it seems a bit
unfair to exclude executable size and Manx's table dump capability.  When
benchmarking two compilers, only one of which optimizes, we don't turn
off the optimizer to get a "fair" comparison.  I'd do the test, but I
can't fit Lattice into RAM: :-)
>I agree that Lattice has a bit of catching up to do in terms of speed,
>but think that Manx is off track by going off standard.
>
>					-Matt
     One further note:  I compiled Matt's original Shell program with Manx.
I added some #define's in shell.h to get rid of most of his xstdio routines
so that the only routine I needed from MY.LIB was xprintf, and that only to
get fprintf() to an AmigaDOS file handle.  (Memory allocated with malloc()
with Manx is automatically free()'d upon return from main() or upon a call
to exit()).  The resulting executable was 23K plus a bit long.  I believe
this compares favorably to what Matt gets with Lattice and MY.LIB.  One has
to wonder about a company whose library can be bettered by one guy working
in his spare time...
					Steve Walton, representing myself

dillon@CORY (Matt Dillon) (10/07/86)

>Phil Staub writes
>I use the Manx compiler with the +l option almost exclusively, and typically
>find that (at least on some of the code supplied on the Fish disks) I still
>only wind up with about half the amount of code as the Lattice version. I
>suspect this is due to the library problems you mentioned. I normally find
>only minimal code size increase from 16 to 32 bit ints.

	This is exclusively due to Lattice's current LC.LIB.  However, from
people who've compiled my shell w/ Manx, the code *is* smaller, but only
by about 10-20%.

>>	Talking to Jay Denebeim (sysop for the BBS w/carries Blink), Lattice's
>>3.04 compiler will have PC relative addressing modes for calls and jumps, and
>>relative addressing through an address register for static/global variables.
>>He pointed out that this would mean that I could make my shell and any other
>>program re-entrant without any problem.  This will also, incidently, probably
>>make the code more compact than Manx as Lattice already has some great
>>optimization hacks in the compiler.
>>
>
>Manx 3.20a already uses these optimizations if small code and data models
>are used (the default).

	I used Lattice on an IBM system... couldn't stand the FOUR memory
models.  But on an 8086/8 you don't have much choice since the registers
aren't 32 bits and you have that idiotic segmentation scheme.  One of the
reasons C on the 680x0 is *so* much better is that you don't really have
to worry about code size.  The pointer problem that occurred on the 8086/8
doesn't exist.

	The 68000 and 68010 do, however, limit the address space for 
PC relative operations.  Does MANX's flag for PC-RELATIVE addressing
on absolute calls require that the code be smaller than 32K? Or will
it employ some sort of relative jump table for calls beyond the addressing
range?

					-Matt

higgin@cbmvax.cbm.UUCP (Paul Higginbottom) (10/09/86)

In article <8610071955.AA15436@cory.Berkeley.EDU> dillon@CORY (Matt Dillon) writes:
>...The 68000 and 68010 do, however, limit the address space for 
>PC relative operations.  Does MANX's flag for PC-RELATIVE addressing
>on absolute calls require that the code be smaller than 32K? Or will
>it employ some sort of relative jump table for calls beyond the addressing
>range?
>
>					-Matt

Manx's small model uses a jump table (the jumps can go anywhere, of course)
and lumps it in with the static data in one segment.  Thus the jump table
can only be up to 64K, which means 16K functions (HARDLY LIKELY!).
The small model is useless only if you're going to have more than 64K of
variables (also pretty unlikely).

	Paul.

Disclaimer: my opinions are my own, and I don't work for Commodore.

rokicki@navajo.STANFORD.EDU (Tomas Rokicki) (10/09/86)

[ / | \  cruisin' down . . . ]

Hi, Matt!  You write:
>
> 	This is exclusively due to Lattice's current LC.LIB.  However, from
> people who've compiled my shell w/ Manx, the code *is* smaller, but only
> by about 10-20%.
>

Hell, I'd be happy to get 10-20% anytime, anywhere.

> 	The 68000 and 68010 do, however, limit the address space for 
> PC relative operations.  Does MANX's flag for PC-RELATIVE addressing
> on absolute calls require that the code be smaller than 32K? Or will
> it employ some sort of relative jump table for calls beyond the addressing
> range?

The PC-relative flag (which is on by default) will handle code larger
than 32K correctly; this was of major importance for me with TeX.
(Which I've gotten down to a 141,220-byte executable; anyone know of any
smaller on *any* machine?)  It builds a jump table in the data
segment for those long jumps.

-tom

phils@tekigm.UUCP (Phil Staub) (10/09/86)

In article <8610071955.AA15436@cory.Berkeley.EDU> dillon@CORY (Matt Dillon) writes:
>by about 10-20%.
>
>>>	Talking to Jay Denebeim (sysop for the BBS w/carries Blink), Lattice's
>>>3.04 compiler will have PC relative addressing modes for calls and jumps, and
>>>relative addressing through an address register for static/global variables.
>>>He pointed out that this would mean that I could make my shell and any other
>>>program re-entrant without any problem.  This will also, incidently, probably
>>>make the code more compact than Manx as Lattice already has some great
>>>optimization hacks in the compiler.
>>>
>>
>>Manx 3.20a already uses these optimizations if small code and data models
>>are used (the default).
>
>	I used Lattice on an IBM system... couldn't stand the FOUR memory
>models.  But on an 8086/8 you don't have much choice since the registers
>aren't 32 bits and you have that idiotic segmentation scheme.  One of the
>reasons C on the 680x0 is *so* much better is that you don't really have
>to worry about code size.  The pointer problem that occured with 8086/8
>doesn't exist.  
>
>	The 68000 and 68010 do, however, limit the address space for 
>PC relative operations.  Does MANX's flag for PC-RELATIVE addressing
>on absolute calls require that the code be smaller than 32K? Or will
>it employ some sort of relative jump table for calls beyond the addressing
>range?
>
>					-Matt

Exactly.  A jump table is built in the data segment to handle the case of
an overgrown code segment (more than 32K), and even that is only necessary
where a PC-relative reference would reach beyond 32K (i.e., the table is
used only in those cases, not for all accesses).
Data segments, however, are limited to 64K in small model.  Data are accessed
by 16-bit offsets from a pointer to the *center* of the data segment.
I suspect that the new Lattice approach is similar, if not identical to this
method.
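The "center of the segment" detail is just signed-displacement arithmetic.  A sketch (not compiler output) of why parking A4 in the middle covers the whole 64K:

```c
#include <stdint.h>

#define SEG_SIZE (64L * 1024)

/* A4 is parked at the middle of the data segment... */
long a4_for_segment(long seg_base) { return seg_base + SEG_SIZE / 2; }

/* ...so a signed 16-bit displacement, as in the 68000's d16(A4)
 * addressing mode, spans -32768..+32767 and reaches every byte. */
long effective_address(long a4, int16_t disp) { return a4 + disp; }
```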

I fully agree with your comment about the 8086's requirement for four memory
models.  I've been there.  However, I'm not quite so uncomfortable with this
approach, for a couple of reasons.  First, of course, is the ability to use
any size code segment with the small code model (as described above), even
if it does require the overhead of the additional level of indirection.
For really large programs, you may be able to arrange modules to minimize
the references to routines more than 32K away, though I realize this would
be time consuming and (yeech) a manual operation.  Second, you can mix and
match large and small model code and data in any combinations you wish
(i.e., you don't have to have large code to get large data model).

Now the bad news (there had to be some).  The libraries were compiled using
small code, small data.  So what do you do if you want to use large data
model to get a data segment larger than 64K?  You have two options, one of
which is rather messy, the other only slightly messy.  The messier one is to
re-compile the libraries (which implies you have bought the commercial
version).  The less messy one is to just go ahead and use the libraries as is.
What the linker does in this case is to generate the data segment with 
A4 pointing to the middle of it, no matter how big it is. Of course, any
references from small data modules will have to be within 32K of the center
of the data segment. This may require some tweaking (the messy part) to get
modules linked in the right order, but at least it *should* work, unless
you've got more than 64k of "small" data, in which case I suppose you have
to revert to the first solution.

Yes, I know, this is not an ideal solution. I feel that an ideal solution
would have to be a) completely transparent to the user, and b) always "know"
when to do what to generate the best (read "smallest and fastest") possible
code. (I'm sure there are other criteria I'm missing, but those two are
tough enough to meet). But for (perhaps) 95% or more of applications, there 
may well be no problem. 

Phil Staub
Tektronix, Inc.
ISI Engineering
P.O. Box 3500
Vancouver, Washington 98668
C1-904, (206) 253-5634
..tektronix!tekigm!phils

walker@sas.UUCP (Doug Walker) (10/09/86)

In article <8610050658.AA12992@cory.Berkeley.EDU> dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
>	-The Manx linker is not any faster than Blink (so much for RanLibs).

Actually, several people have told me that BLink is significantly faster than
the Manx linker.  I haven't actually tried it, though.

rokicki@navajo.STANFORD.EDU (Tomas Rokicki) (10/11/86)

In article <962@tekigm.UUCP>, phils@tekigm.UUCP (Phil Staub) writes:
> Data segments, however, are limited to 64k in small model. Data are accessed
> by 16-bit offsets from a pointer to the *center* of the data segment.

  . . . he goes on about using the large data model with the
        libraries which are compiled with the small data model . . .

Usually a better solution, and one I use, is to allocate my large
data structures with AllocMem() (or malloc()).  This way, not
only can that large data segment be split into chunks, making it
more likely to load into fragmented memory, but the compilation
is straightforward.  64K is a lot of data segment . . .
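Tom's approach in miniature: move a hypothetical large table off the static data segment and onto the heap, so only a pointer stays in the (64K-limited) segment.  The table name and size are made up for illustration:

```c
#include <stdlib.h>

#define TABLE_BYTES (100L * 1024)   /* bigger than a 64K data segment */

/* "static char table[TABLE_BYTES];" would overflow the small-model
 * data segment by itself.  Allocated at run time, only the pointer
 * (4 bytes on the 68000) occupies the segment, and the OS can place
 * the block wherever free memory happens to be. */
static char *table;

int init_table(void)
{
    table = malloc(TABLE_BYTES);
    return table != NULL;           /* 1 on success */
}
```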

-tom

hamilton@uiucuxc.CSO.UIUC.EDU (10/15/86)

>The newest beta test version of Lattice (3.10) has a -L option on the 'lc' 
>command which allows you to invoke BLINK directly from lc.  The user interface 
>is generally cleaned up immensely.  It also supports 16-bit offsets for both
>code and data.  I certainly have no problems with remaining with Lattice.

    well, that's fine for you.  and the folks who have beta A-Live's
are in no hurry to switch to digiview.  until i can buy this 3.10,
my choice is between manx and the old lattice.

	wayne hamilton
	U of Il and US Army Corps of Engineers CERL
UUCP:	{ihnp4,pur-ee,convex}!uiucdcs!uiucuxc!hamilton
ARPA:	hamilton%uiucuxc@a.cs.uiuc.edu	USMail:	Box 476, Urbana, IL 61801
CSNET:	hamilton%uiucuxc@uiuc.csnet	Phone:	(217)333-8703
CIS:    [73047,544]			PLink: w hamilton

papa@bacall.UUCP (Marco Papa) (10/18/86)

> 
> >The newest beta test version of Lattice (3.10) has a -L option on the 'lc' 
> >command which allows you to invoke BLINK directly from lc.  The user interface 

>     well, that's fine for you.  and the folks who have beta A-Live's
> are in no hurry to switch to digiview.  until i can buy this 3.10,
> my choice is between manx and the old lattice.
> 
> 	wayne hamilton
> 	U of Il and US Army Corps of Engineers CERL

Well, I stopped waiting and bought MANX.  It took me about 1 week to convert 
A-Talk 1.1 from Lattice 3.03 to Manx 3.20A.  I used the lc32 libraries, so
I am still using 32-bit ints.  The main purpose for me was to gain speed
and decrease code size.  The second goal was clearly achieved: code size
went down from 145K to 98K.  This should go down even more when I start
using 16-bit ints.  In terms of speed, screen refresh seems to be much
faster; A-Talk seems to keep up with 9600 baud with no loss of data.  One
thing I miss is overlays, but the Manx update should provide that.

Converting the code was no big deal - mainly adding some extra casts to
stop MANX from complaining and changing the use of stci_d to sscanf.  I had
more problems converting the single assembly file from Metacomco's Assembler
to the MANX assembler, since their rules and keywords are different.

All in all, I am clearly satisfied with the switch.

-- Marco Papa
   Felsina Software