[comp.sys.amiga] Using an Amiga for C programming

schwager@m.cs.uiuc.edu (02/02/90)

> /* ---------- "Using an Amiga for C programming" ---------- */
> 
> I am interested in using my Amiga for C programming and would like to know how
> much my machine would need to be expanded to make something like the Aztec
> compiler usable. Is a 1 Meg, twin floppy A500 sufficient (currently only 512K,
> 1 floppy but will be upgraded soon) or do I really need even more memory and a
> hard disk?
> 
> Thanks in advance
> 

This is what I've got: a 1 Meg, two-floppy A500.  I put my c.lib in my
Recoverable Ram Disk.  I also put my makefile (using DMake, Matt Dillon's
make... throw away Aztec's make!!!) and my .o files in there.  My
startup disk contains the include directory, and I make sure my precompiled
header file has everything I need.  It's also in my RRD.  After I boot up,
I pop out the startup disk and pop in my work disk.  Using rez (not ares or
resident; those two don't handle it so well...), I make cc, as, and dmake
resident.  ln can be made resident too, but then you're running into tight
memory on larger jobs, and ln doesn't take as long to fire up, nor do you
need to call it for every changed file the way you do cc and as.  If you
can afford the space, then rez ln as well.  My disk in df1: contains all
these things (the commands you rez still need to be in the search path,
even after you've rez'd them).  It also contains special stuff like the
text editor.
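
For reference, the boot-time setup looks roughly like the fragment below.
This is a sketch, not my exact startup-sequence: I'm assuming the
recoverable ram disk mounts as VD0: (ASDG's does), and rez's exact
invocation may vary by version, so treat those lines as assumptions and
check the docs.

	; fragment of s:startup-sequence (sketch; names and rez usage assumed)
	copy df1:c.lib VD0:        ; c.lib goes into the recoverable ram disk
	copy df1:DMakefile VD0:    ; so does the makefile
	path df1:c add             ; rez'd commands must stay in the search path
	rez df1:c/cc               ; make the compiler resident (assumed usage)
	rez df1:c/as
	rez df1:c/dmake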

So now, the write-compile-test cycle goes like this:
	-edit file
	-write out to work disk in df0:
	-run dmake
	-dmake looks to see what's changed in df0: and calls cc and as on
		each changed file (a sketch of such a DMakefile follows
		below).  Since you're *only* reading from df0:, it's quick.
		Also, the more you divide up the program, the faster each
		little module compiles
	-compiling, assembling, and linking all take place in the RRD.
		Very fast!
	-bang head against wall
	-go back to beginning.
And I use three disks for everything: the boot disk, an auxiliary disk in
df1: that holds all the stuff I need to do the above, and finally my work
disk, which contains *only* the code for my project (the less loaded a disk
is, the faster it reads!).
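
To make the dmake step concrete, here's the shape of the makefile I mean.
I've written it in ordinary make style; DMake's real syntax has its own
wrinkles (see Matt's docs), and the cc and ln flags are the usual
Aztec-ish ones from memory, so take the details as assumptions:

	# sketch only: DMake's actual syntax differs; flags are assumed
	OBJ = VD0:main.o VD0:gadgets.o

	myprog : $(OBJ)
		ln -o VD0:myprog $(OBJ) -lc    # link entirely in the ram disk

	VD0:main.o : df0:main.c
		cc -o VD0:main.o df0:main.c    # read from df0:, write to VD0:

	VD0:gadgets.o : df0:gadgets.c
		cc -o VD0:gadgets.o df0:gadgets.c

The point is that every write lands in VD0:; df0: only ever gets read.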

At certain points, I checkpoint my work by copying the .o files back into
the df0:Objects directory.  DMake can be used for this, too.  Putting the
.o files into a separate directory keeps the top-level df0: directory as
uncluttered as possible, which makes reads faster.  I use dmake to install
stuff, too, after I've booted up and popped in the work disk.
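
The checkpoint rule itself can just be a copy; again, a sketch with
assumed names:

	# checkpoint: copy .o files from the ram disk back to the work disk
	# (#? is the AmigaDOS wildcard, like * on other systems)
	checkpoint :
		copy VD0:#?.o df0:Objects

Then a "dmake checkpoint" after a good build leaves a safe copy of
everything on the work disk.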

This setup took a while to evolve, but it's about as fast as I think I'll
ever be able to compile.  If I'm working on one module in particular, I'll
change things so that that one .c file is in the RRD, too, but care must be
taken so that it's not lost in a crash.  Actually, I do all my editing on a
Sun 3/50 and use Dnet to upload whatever file I want to compile.  It's
pretty quick at 19.2 kbaud!

All of this makes the setup the next best thing to a hard drive, but from
what I hear, you can't beat 'em.  With one, you wouldn't have to make your
commands resident or use a recoverable ram disk, so you'd have more room
to run your application (or debugger!).  Plus, rebooting after a crash is
a lot quicker.  Finally, there's none of this messing around and tuning
the system; everything's available on (a big) disk.  I wish I had one, but
two floppies and a Meg is certainly doable.  I wouldn't want to work with
less, though!

-Mike