[comp.sys.amiga] Small C code

mwm@eris.UUCP (04/02/87)

In article <8704020726.AA05684@cory.Berkeley.EDU> dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
>Also, I've noticed that 32 bit ints don't really increase
>code size all that much.  What really makes the difference is the small
>code and data model (using relative address for globals).
>
>					-Matt


Damn straight. By switching to small code/data and stripping the
linked binary (or using NODEBUG on blink), Lattice generates code that's
less than 10% larger than MANX (assuming the figures I have for the
size of the MANX version of what I'm working on are correct).

What peeves me is that the NODEBUG option for BLINK wasn't documented.
I've got a paper by the Software Distillery people describing how to
get small binaries out of Lattice. That's where I found out about
NODEBUG. Applying most of the rest of it would break mg's portability
to other things.
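The Distillery recipe boils down to a link line something like this (keyword spellings from memory, so double-check against your BLINK docs):

```text
blink FROM lib:c.o+main.o TO myprog LIBRARY lib:lc.lib+lib:amiga.lib NODEBUG
```

NODEBUG is the piece that strips the debug hunks out of the final binary.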

BTW, Matt, exactly what makes writing software for both compilers so
inherently inefficient? Mg doesn't have many problems with that,
though trying to stay portable to BSD Unix (first among many others)
causes problems.

	Thanx,
	<mike
--
Here's a song about absolutely nothing.			Mike Meyer        
It's not about me, not about anyone else,		ucbvax!mwm        
Not about love, not about being young.			mwm@berkeley.edu  
Not about anything else, either.			mwm@ucbjade.BITNET

dillon@CORY.BERKELEY.EDU.UUCP (04/03/87)

>What peeves me is that the NODEBUG option for BLINK wasn't documented.
>I've got a paper by the Software Distillery people describing how to
>get small binaries out of Lattice. That's where I found out about
>NODEBUG. Applying most of the rest of it would break mg's portability
>to other things.
>
>BTW, Matt, exactly what makes writing software for both compilers so
>inherently inefficient? Mg doesn't have many problems with that,
>though trying to stay portable to BSD Unix (first among many others)
>causes problems.
>
>	Thanx,
>	<mike

	Well, actually, ND is documented when you run Blink with no
arguments (or was it with '?'... I forget).

	'standard' C programs aren't difficult to write for both compilers,
but when you get into other things, like my MWB, which requires assembly
language, it gets quite different.  For instance, I have to specify the
large code and data model using the FAR and NEAR directives in the assembly,
which is incompatible with the C-A assembler.

	As far as the compiler goes, with 2Meg of RAM I have it entirely in
VD0: (thanks Perry!), and have cc aliased to 'cc +L +Ivd0:clib/symbols'...
that is, I've pre-compiled all the symbols in every Amiga include file that
exists.  Needless to say, I can still put #include statements in the source
without losing efficiency, but I have no way of knowing if I forgot any
#includes, since I get all the symbols whether I ask for them or not.

	So the problem is more one of explaining the environment to
people than of actually coding changes.  But you can see the trouble someone
might have to go through to get Aztec code to compile with Lattice...
adding #include files, modifying assembly language, taking out dependencies
on certain Aztec library functions (though I personally don't usually use
those functions), and dealing with differences in floating point.  I
hesitate to distribute source because I *know* I'm going to get dozens of
letters from people trying to compile it.


				-Matt