[comp.sys.amiga] Feeping Creaturism

peter@nuchat.UUCP (Peter da Silva) (02/16/88)

I think that all of us, particularly those who are writing applications
programs for money, need to be aware of the lesson UNIX teaches about
the utility of having many small programs each of which does one thing
only... but does it well.

Oh, sure, it saves a couple of K of 'C' runtime and common code to put
a pipe device into a console handler, or a terminal program into a shell,
or a keyboard macro handler into a directory utility... but only if you
use the extra functions.

Otherwise it costs a couple of K, or a couple of dozen K, to have two or
three copies of a pipe device... a macro handler... a terminal program...
a command language processor... or whatever else you have to drag around
when you prefer another version of whatever it is that you're using.

I thought editor/assemblers went out back when 1K monitors stopped being
state-of-the-art... now integrated environments that combine a weird
non-standard Pascal and a weird non-standard Wordstar variant are the in
thing. I know IBM PC users who actually fired up Turbo Pascal for doing
text editing.

On the Amiga this sort of thing shouldn't be necessary. There's plenty of
room for you to have your favorite editor (I like vi, and I think Emacs
only has one use: an example of why you don't want 400 people designing
a program... but other people think vi fits that very niche) and your
language (let's hear it for BCPL, now) and let them work together.

So what language comes with the Amiga? AmigaBasic: a huge combined Basic
interpreter and editor that requires so many files at run-time that I don't
think anyone without a hard disk keeps it in their workbench. I hear it's
a pretty good Basic. When I get a hard disk I might actually fire it up.

Then there's Zing!: a directory utility, keyboard macro handler, and
who-knows-what-else package.

Oh well. There are, of course, exceptions. On the Amiga it looks like the
exceptions are actually the general rule. Let's keep it that way.

Boycott integrated packages. What do you need that stuff for, anyway? The
Amiga gives you an integrated *environment*...
-- 
-- a clone of Peter (have you hugged your wolf today) da Silva  `-_-'
-- normally  ...!hoptoad!academ!uhnix1!sugar!peter                U
-- Disclaimer: These aren't mere opinions... these are *values*.

pds@quintus.UUCP (Peter Schachte) (02/18/88)

In article <655@nuchat.UUCP>, peter@nuchat.UUCP (Peter da Silva) writes:
...
> I thought editor/assemblers went out back when 1K monitors stopped being
> state-of-the-art...
...
> Boycott integrated packages. What do you need that stuff for, anyway? The
> Amiga gives you an integrated *environment*...

I thought BATCH compilers went out of style a year or two ago (:-).
Integrated environments are all the rage in the IBM PC market now.  And
I think this is one case where the PC market is ahead of the Amiga.

Sure, the "different tools for different jobs" school has a lot to be
said for it.  I agree with the principle.  But until I can compile,
link, and load executable into my source level debugger in a second or
two with a standard editor-compiler-linker-debugger configuration, the
integrated systems are going to have an advantage that will be hard to
beat.  The separate tools are too inefficient; they repeat too much
work, reparsing, recompiling and relinking stuff that hasn't changed,
every time through the modify/test cycle.

There is a compromise position, though.  We could have IFF standards
for source and object code in different languages.  This standard would
keep the source and object together, and allow the editor to mark what
has changed, so the compiler can reuse the compiled code for procedures
that haven't changed.  Of course, they would have to handle changed
macros properly, which is not easy.  But it would save a lot of work.
This standard could also allow a format for a pre-parsed form.  This
would further speed up compilation.
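
Something like the following is what I have in mind; the chunk name and
fields here are pure invention on my part (nothing like this is
registered anywhere), just to make the idea concrete:

/* One chunk per procedure inside a hypothetical source FORM.      */
/* Every name in here is made up for illustration only.            */
struct ProcChunk {
    char id[4];      /* chunk ID, say "PROC"                          */
    long size;       /* chunk length, as in any IFF chunk             */
    long flags;      /* bit 0 set => text changed since last compile  */
    long srclen;     /* bytes of ASCII source text that follow        */
    /* srclen bytes of source text, then (optionally) the object
       code the compiler generated for this procedure last time */
};

An editor that understood the form would just set the flag bit when it
touched a procedure, and an incremental compiler would clear it again.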

Given this approach, you could supply your own editor, compiler, and
debugger, as long as they understood this format, and they would still
operate efficiently together.  Turnaround time would drop sharply.
Programmer productivity would climb dramatically.  More good public
domain and commercial software would be written, in a shorter period of
time.  Seeing the growing supply of good software for the Amiga, more
people would buy the machine.  Everyone would be happy, and all good
things would come to pass.

I've used a nice integrated Lisp environment with a structure editor,
compiler, interpreter, (source level) debugger, profiling tools, etc.
Changing code and retesting is almost instantaneous.  And I've been
programming the Amiga using emacs, a slow compiler, slow linker, and
many visits from the guru.  Turn around is a couple of minutes to try a
small experiment.  If the guru stops by, rebooting is another couple of
minutes.  I hope it's clear which I find to be the more productive
programming environment.

Of course the Lisp machine costs many times as much as my Amiga.  But
the "integrated" style of development, with fast turnaround times, is
certainly possible on the Amiga.  If the IBM PC can do it, the Amiga
certainly can!
-- 
-Peter Schachte
pds@quintus.uucp
...!sun!quintus!pds

dillon@CORY.BERKELEY.EDU (Matt Dillon) (02/19/88)

:Sure, the "different tools for different jobs" school has a lot to be
:said for it.  I agree with the principle.  But until I can compile,
:link, and load executable into my source level debugger in a second or
:two with a standard editor-compiler-linker-debugger configuration, the
:integrated systems are going to have an advantage that will be hard to
:beat.  The separate tools are too inefficient; they repeat too much
:work, reparsing, recompiling and relinking stuff that hasn't changed,
:every time through the modify/test cycle.

	I could think of several good words to describe your view, but
I'm afraid they would be unacceptable to the net.  To put it simply,
you are wrong in every sense of the word.  You blame the method instead
of the program.

:Given this approach, you could supply your own editor, compiler, and
:debugger, as long as they understood this format, and they would still
:operate efficiently together.  Turnaround time would drop sharply.
:Programmer productivity would climb dramatically.  More good public
:domain and commercial software would be written, in a shorter period of
:time.  Seeing the growing supply of good software for the Amiga, more
:people would buy the machine.  Everyone would be happy, and all good
:things would come to pass.

	What you are advocating would simply remove the load time
for each program.  Considering that RAM: on the Amiga goes at 800K/sec, the time
lost waiting for programs to load compared to the time the programs are actually
working is negligible.  This so-called 'reduction' in turnaround time has
nothing whatsoever to do with an integrated vs non-integrated environment, but
with the individual design of the programs in question.  Unless you use
radically different algorithms (example: incremental compilation vs normal
compilation), you will not see much of an increase in efficiency no matter
how integrated your environment is.

:compiler, interpreter, (source level) debugger, profiling tools, etc.
:Changing code and retesting is almost instantaneous.  And I've been
:programming the Amiga using emacs, a slow compiler, slow linker, and
:many visits from the guru.  Turn around is a couple of minutes to try a
:small experiment.  If the guru stops by, rebooting is another couple of
:minutes.  I hope it's clear which I find to be the more productive
:programming environment.

	You are blaming the lack of an integrated environment for the slow
turnaround time, and comparing it to your Lisp machine.  I would like to
point out that good Lisp machines usually have incremental compilers.
I would also like to point out that one does not need to be in an
integrated environment to use an incremental compiler.

	Integrated environments are for the birds.  Integrated design plays
havoc with software developers trying to upgrade or port programs to other
machines, and was originally conceived to combat even longer cycles on
machines which had, essentially, no operating systems.  With the advent of
better operating systems, many 'integrated' programs these days are actually
completely separate modules glued together, and thus hardly any more
efficient than loosely coupled systems.  The one advantage integrated systems
have is the ability to quickly flip from one module to another (though
frankly, it's just a mouse click in the environment I use).

	To be sure, such monsters as Lotus 1-2-3 and other giants running on
IBM PCs will allow you to quickly flip from one module to another, to move
data from one module to another, etc..., but one might wonder how easy it
is to move data between a module and an external program.  These giants, after
all, cannot do everything.  The user gets into a hole; he becomes dependent
on the integrated program in question and never progresses any faster than
the company wants him to progress.

	And then there is always the underlying hardware one has.  Ever try
running Lotus from a floppy?  So much for the fast integrated system...  Ever
try running it without extended memory?  I've used various $5K+ CAD systems
on the IBM that are 'integrated'... works fine if you have 2+ Megs in your
system, though there are limitations to what they can do... limitations in
the size of files you can edit in their integrated editor, limitations in how
big a circuit they can handle... limitations mainly due to the fact that the
stupid machine is not intelligently removing modules not currently being used.

	Now you've got me talking about program problems rather than conceptual
problems.  But you can see that many current-day integrated environments
suffer from problems too.

					-Matt

peter@nuchat.UUCP (Peter da Silva) (02/21/88)

In article <657@sandino.quintus.UUCP>, pds@quintus.UUCP (Peter Schachte) writes:
> In article <655@nuchat.UUCP>, peter@nuchat.UUCP (Peter da Silva) writes:

> > I thought editor/assemblers went out back when 1K monitors stopped being
> > state-of-the-art...

> > Boycott integrated packages. What do you need that stuff for, anyway? The
> > Amiga gives you an integrated *environment*...

> I thought BATCH compilers went out of style a year or two ago (:-).

I'm sure they went out of style, but then "in style" is usually not a good
reason for buying a product. After all, the IBM-PC is much more "in style"
than the Amiga...

> Integrated environments are all the rage in the IBM PC market now.  And
> I think this is one case where the PC market is ahead of the Amiga.

I think you're wrong... let's see why...

> Sure, the "different tools for different jobs" school has a lot to be
> said for it.  I agree with the principle.  But until I can compile,
> link, and load executable into my source level debugger in a second or
> two with a standard editor-compiler-linker-debugger configuration, the
> integrated systems are going to have an advantage that will be hard to
> beat.  The separate tools are too inefficient; they repeat too much
> work, reparsing, recompiling and relinking stuff that hasn't changed,
> every time through the modify/test cycle.

Let's turn that around...

But until I can *enter the environment*, *load the source*, compile, link, and
load executable without having to read all my source files off the disk,
the modular systems are going to have an advantage that's hard to beat. The
integrated systems are too inefficient; they require too much stuff to be
resident in RAM through your whole modify-crash-debug cycle. They require
you to reload the system after the crash stage. Modular tools work very well
with this simple-yet-brilliant tool called "make"...

Of course if you *have* the RAM you can always keep your whole modular
environment in there. With VD0: you don't even have to reload it after
a crash.

> There is a compromise position, though.  We could have IFF standards
> for source and object code in different languages.  This standard would
> keep the source and object together, and allow the editor to mark what
> has changed, so the compiler can reuse the compiled code for procedures
> that haven't changed.

Why reinvent the wheel? Make already does a good job of this.

>  Of course, they would have to handle changed macros properly, which
> is not easy.  But it would save a lot of work.

Easy as pie:

module1.o: module1.c macro1.h macro2.h
	cc +P module1.c

module2.o: module2.c macro1.h
	cc +P module2.c

> This standard could also allow a format for a pre-parsed form.  This
> would further speed up compilation.

Manx already supports this: you can precompile all your header files
and use make to properly maintain that as well.

> Given this approach, you could supply your own editor, compiler, and
> debugger, as long as they understood this format, and they would still
> operate efficiently together.  Turnaround time would drop sharply.

Now why didn't I think of that?

> Programmer productivity would climb dramatically.  More good public
> domain and commercial software would be written, in a shorter period of
> time.  Seeing the growing supply of good software for the Amiga, more
> people would buy the machine.  Everyone would be happy, and all good
> things would come to pass.

Hey, I'm working as hard as I can :->.

> I've used a nice integrated Lisp environment with a structure editor,
> compiler, interpreter, (source level) debugger, profiling tools, etc.

Lisp is a whole different kind of flying altogether. Could you also use
C, Fortran, and Modula-2 on the same machine?

> Changing code and retesting is almost instantaneous.  And I've been
> programming the Amiga using emacs, a slow compiler, slow linker, and
> many visits from the guru.  Turn around is a couple of minutes to try a
> small experiment.  If the guru stops by, rebooting is another couple of
> minutes.  I hope it's clear which I find to be the more productive
> programming environment.

Ever worked with UNIX? :->

Or do you just read news here? It's a whole different kind of flying
altogether.

> Of course the Lisp machine costs many times as much as my Amiga.  But
> the "integrated" style of development, with fast turnaround times, is
> certainly possible on the Amiga.  If the IBM PC can do it, the Amiga
> certainly can!

I haven't seen an integrated environment on the PC that was worth the disk
it came on. Turbo Pascal and M2SDS Modula-2 were disasters for all but the
smallest project. It's interesting to note that Turbo-C is *not* an integrated
package...

Perhaps you should try the Manx environment.
-- 
-- a clone of Peter (have you hugged your wolf today) da Silva  `-_-'
-- normally  ...!hoptoad!academ!uhnix1!sugar!peter                U
-- Disclaimer: These aren't mere opinions... these are *values*.

pds@quintus.UUCP (Peter Schachte) (02/23/88)

In article <8802181921.AA19069@cory.Berkeley.EDU>, dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
> :Sure, the "different tools for different jobs" school has a lot to be
> :said for it.  I agree with the principle.  But until I can compile,
> :link, and load executable into my source level debugger in a second or
> :two with a standard editor-compiler-linker-debugger configuration, the
> :integrated systems are going to have an advantage that will be hard to
> :beat.  The separate tools are too inefficient; they repeat too much
> :work, reparsing, recompiling and relinking stuff that hasn't changed,
> :every time through the modify/test cycle.
> 
> 	I could think of several good words to describe your view, but
> I'm afraid they would be unacceptable to the net.  To put it simply,
> you are wrong in every sense of the word.  You blame the method instead
> of the program.

Perhaps you're right, I AM blaming the method rather than the program.
Compilers and linkers COULD get around the problems I'm complaining
about, if they really wanted to.  But they don't.  It's not that the
method makes it impossible, just much more difficult.

In the following quote, you omitted an important part.  I proposed an
IFF form for program source code that would allow an editor to mark
what had changed since the last time the program was compiled.  This
would allow for incremental compilers.  I suppose a current-technology
compiler could make a copy of the source code it compiled most
recently, and diff it against the code it's being asked to compile to
determine what has changed.  It might even be worth it.  But I have yet
to see a compiler that does it.  I have yet to see (or even hear of) an
incremental compiler that was not integrated with an editor.  Do you
have any examples you can cite?  (This is not a rhetorical question:
I'd be very interested to hear of one).
> 
> :Given this approach, you could supply your own editor, compiler, and
> :debugger, as long as they understood this format, and they would still
> :operate efficiently together.  Turnaround time would drop sharply....
> 
> 	What you are advocating would simply remove the load time
> for each program....  Unless you use  
> radically different algorithms (example: incremental compilation vs normal
> compilation), you will not see much of an increase in efficiency no matter
> how integrated your environment is.

Of course.  Wasn't I clear?  I'm talking about an incremental compiler.
> 
> :compiler, interpreter, (source level) debugger, profiling tools, etc.
> :Changing code and retesting is almost instantaneous.  And I've been
> :programming the Amiga using emacs, a slow compiler, slow linker, and
> :many visits from the guru.  Turn around is a couple of minutes to try a
> :small experiment.  If the guru stops by, rebooting is another couple of
> :minutes.  I hope it's clear which I find to be the more productive
> :programming environment.
> 
> 	You are blaming the lack of an integrated environment for the slow
> turnaround time, and comparing it to your Lisp machine.  I would like to
> point out that good Lisp machines usually have incremental compilers.
> I would also like to point out that one does not need to be in an
> integrated environment to use an incremental compiler.
> 
Please give an example of an incremental compiler that is not part of
an integrated environment.

> 	Integrated environments are for the birds.  Integrated design plays
> havoc with software developers trying to upgrade or port programs to other
> machines, and was originally conceived to combat even longer cycles on
> machines which had, essentially, no operating systems....

Now YOU're talking about programs, rather than conceptual problems.  If
Turbo C or Saber-C (an integrated C interpreter environment on Suns)
programs can't easily be ported to other machines, that's the fault of
Turbo C and Saber-C, not the fault of the concept of an integrated
environment.

But I agree with all your points about integrated environments.  I'd
rather have the OS act as the integrator of my editor, my INCREMENTAL
compiler, and my INCREMENTAL linker, and my source-level debugger, too.
But I'd rather have an integrated environment with an incremental
compiler/assembler than a non-integrated non-incremental setup.  I'm
willing to be stuck with someone else's editor (and other tools) if it
means I can make a change and be testing it in a few seconds, rather
than minutes.

This all started when someone was (I believe) reacting to AssemPro.
They suggested that integrated environments are obsolete, and that no
one should buy such a thing.  I've never seen AssemPro, nor even seen an
ad for it, so I have no idea what it can do.  Perhaps it is truly a
bad package.  But if it will let an assembly language developer build
his code much more quickly, and still present reasonably portable code
(remember that Amiga C code isn't portable between Aztec and Lattice,
either), why should we dismiss it?  Sure, I'd rather use my own editor
(as long as I can customize it for the language I'm coding in, so that
it knows something about the syntax, comment conventions, etc.), and
rather have the flexibility of the OS to tie together the components of
the system.  But losing that is not such a terribly high price to pay
for a significant improvement in turnaround time.

Again, just to make sure I'm being clear (it seems my writing is not as
clear as it should be :-(), I'm not saying that I've seen any Amiga
integrated environments that I think are good things.  Nor am I saying
that I like the concept of an integrated environment.  All I'm saying is
that they have some potential advantages over separate components, since
individual tools within an integrated environment can share information
(parsed code, indications of what has changed, etc.)  more easily than
current purely text-based tools.  (Yes, yes, I know there are advantages
to purely text-based tools.  Let's not start up THIS war.  It just died
down in comp.lang.lisp.)  That is why I proposed a new IFF form for
source code.  This would allow for separate tools from different
suppliers to behave in useful ways as if they were integrated (e.g.,
incremental compilation).
-- 
-Peter Schachte
pds@quintus.uucp
...!sun!quintus!pds

dillon@CORY.BERKELEY.EDU (Matt Dillon) (02/23/88)

>down in comp.lang.lisp.)  That is why I proposed a new IFF form for
>source code.  This would allow for separate tools from different
>suppliers to behave in useful ways as if they were integrated (e.g.,
>incremental compilation).
>-- 
>-Peter Schachte
>pds@quintus.uucp
>....!sun!quintus!pds

	Sounds like the proper approach, but how about a slightly different
approach to the IFF... Specifically, the 'changes since last compile' 
should be a strictly temporary item, and thus would not be contained in
the actual source file.  The IFF should be a separate file in RAM: (or some
other temporary place) that can be referred to by the incremental compiler.
A crash simply means the compiler must recompile everything... no biggie.

	The advantage of doing that is that the source is simply standard
ascii... compatible with any text editor and transferable to any machine
(and readable on that machine).  I like to keep my source disks clean and
compact ... and virtually 100% full, and I don't want a lot of extraneous
information in all those 'text' files when I'm not working on them.

	In fact, forget about the IFF entirely... the incremental compiler
would keep its own copy of the source in RAM:... or memory.  When you tell it 
to compile, it simply figures out what changed between the master and its 
personal copy and recompiles just those parts.  The comparisons required
are essentially a diff, which takes virtually no time.  Only those files
that were modified are actually diff'd (you can figure it out simply by
looking at the time stamps).
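
	Just to show how cheap the time-stamp check is, here's a rough
sketch (I'm assuming a Unix-style stat() purely for illustration; on the
Amiga proper you'd Lock() the file and Examine() its FileInfoBlock
instead):

#include <sys/types.h>
#include <sys/stat.h>

/* Nonzero if the master source is newer than the compiler's private
   copy, i.e. if the two files are even worth diffing. */
int
worth_diffing(master, cached)
char *master, *cached;
{
    struct stat m, c;

    if (stat(master, &m) != 0)
        return 0;       /* no master file: nothing to compile       */
    if (stat(cached, &c) != 0)
        return 1;       /* no private copy yet: treat it all as new */
    return m.st_mtime > c.st_mtime;
}

	Only the files that pass this test get diff'd; everything else
keeps its old object code untouched.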

				-Matt

pds@quintus.UUCP (Peter Schachte) (02/25/88)

In article <8802230741.AA29070@cory.Berkeley.EDU>, dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
> >...I proposed a new IFF form for source code....  This would allow for ...
> > incremental compilation).
> 
> ...The IFF should be a separate file in RAM: (or some
> other temporary place) that can be referred to by the incremental compiler.
> 
> 	The advantage of doing that is that the source is compatible with
>  any text editor and transferable to any machine.  I like to keep my
>  source disks compact ...
> 
> 	In fact, forget about the IFF entirely... the incremental compiler
> would keep its own copy of the source in RAM:... or memory.
> 
> 				-Matt


I agree that it is desirable to keep the file small, and to make it easily
transferable to another machine, or a printer, or whatever.  But suppose
it were easy to move it into and out of IFF.  And further suppose that the
IFF file were maintained in a compressed form.  Knowing that the file is
source code makes it easy to do something like maintain it
pre-tokenized.  This would make the compile go (a little) faster, too.
And unless IFF added a lot of overhead, it should be pretty easy to
make the file SMALLER than the pure ascii file it stands for.

Adding memory of what has changed would be pretty easy, and pretty
small.  You could take the easy approach and just mark which procedures
have changed and which haven't.

Also, a smart program to put a source file into this IFF form could
check to see if there is an existing IFF file, and if so, do a quick
diff as it is updating the file, and at that point mark changed
procedures.  And an incremental compiler would wipe these "changed"
marks.  Then you could write a simple shell script to extract an ascii
file out of the IFF file, fire up DME, and then put it back into the
IFF file, marking the changes, when you're done editing.  Voila!  You
can still use DME unchanged.

Now all we need is an incremental compiler.
-- 
-Peter Schachte
pds@quintus.uucp
...!sun!quintus!pds

dillon@CORY.BERKELEY.EDU (Matt Dillon) (02/25/88)

:I agree that it is desirable to keep the file small, and to make it easily
:transferable to another machine, or a printer, or whatever.  But suppose
:it were easy to move it into and out of IFF.  And further suppose that the
:IFF file were maintained in a compressed form.  Knowing that the file is
:source code makes it easy to do something like maintain it
:pre-tokenized.  This would make the compile go (a little) faster, too.
:And unless IFF added a lot of overhead, it should be pretty easy to
:make the file SMALLER than the pure ascii file it stands for.

	I used to set my tabs to 4 in VI (on my UNIX account). (I *like*
	using the TAB key).

	It is quite easy to expand them to spaces before sending them to a
	printer (expand -4 file).

	I no longer have my tabs set to 4 in VI.

	It just got ridiculous.. every time I wanted to do something with
	the file other than VI it I had to expand it.

					-Matt

pds@quintus.UUCP (Peter Schachte) (02/26/88)

In article <670@nuchat.UUCP>, peter@nuchat.UUCP (Peter da Silva) writes:
> In article <657@sandino.quintus.UUCP>, pds@quintus.UUCP (Peter Schachte) writes:
> > In article <655@nuchat.UUCP>, peter@nuchat.UUCP (Peter da Silva) writes:
> > > I thought editor/assemblers went out back when 1K monitors stopped being
> > > state-of-the-art...
> > I thought BATCH compilers went out of style a year or two ago (:-).
> I'm sure they went out of style, but then "in style" is usually not a good
> reason for buying a product. After all, the IBM-PC is much more "in style"
> than the Amiga...
> 
> > ... Separate tools are too inefficient; they repeat too much
> > work, reparsing, recompiling and relinking stuff that hasn't changed,
> > every time through the modify/test cycle.
> 
> Let's turn that around...
> 
> But until I can *enter the environment*, *load the source*, compile, link, and
> load executable without having to read all my source files off the disk,
> the modular systems are going to have an advantage that's hard to beat. The
> integrated systems are too inefficient; they require too much stuff to be
> resident in RAM through your whole modify-crash-debug cycle. They require
> you to reload the system after the crash stage. Modular tools work very well
> with this simple-yet-brilliant tool called "make"...

Hey, I use 'make' too (thanks Fred).  But the level of granularity is
wrong.  I want to be recompiling PROCEDURES not FILES.  Sure, I could
put one procedure in each file, but it would be a real pain to change
things.  Often changes made in one procedure require changes in several
other procedures.  Perhaps a real EMACS with the TAGS package would make
that easier.  I don't know, I've never used TAGS.  But even besides
that, it would still be terribly inefficient, since the compiler would
then have to read in all the includes FOR EACH PROCEDURE.  If I change
10 procedures, I have to do 10 compiles.  Even ignoring the time to load
the compiler (which should be negligible when REZ comes out!), and
even with precompiled includes, this would still be slower than having
all that include information and linkage information already around, so
all we have to do is modify the few things that have changed, and
recompile the 10 procedures.  If they're small procedures, that
shouldn't take long.  More on this below....

> > I've used a nice integrated Lisp environment with a structure editor,
> > compiler, interpreter, (source level) debugger, profiling tools, etc.
> 
> Lisp is a whole different kind of flying altogether. Could you also use
> C, Fortran, and Modula-2 on the same machine?

Nope.  But Lisp isn't as different as you think.  The thing Lisp has
that C doesn't is dynamic typing.  That makes it difficult to pass data
structures between them.  And Lisp has the ability to create code on
the fly and run it.  C can't do that either.  So what?  Just because
the languages are different doesn't mean that the development styles
have to be.  There's nothing in my favorite development environment
(the Xerox Lisp environment, if you're interested) that couldn't be
done for C.  At least nothing really important.

> Ever worked with UNIX? :->
> 
> Or do you just read news here? It's a whole different kind of flying
> altogether.

Yea, I've been using UNIX for about 3 years, although not exclusively.
That doesn't mean I like it.  I find it too big and baroque.

> I haven't seen an integrated environment on the PC that was worth the disk
> it came on.

Neither have I. In fact I haven't seen any integrated PC environments.
Maybe none of them ARE any good.  But that doesn't mean that they
couldn't be.  That you'll have to prove.

> Perhaps you should try the Manx environment.

I might just do that.  I've been using Lattice 3.03 (came with my Amy),
and am trying to decide between the $75 upgrade to 4.0 and switching to
Manx.  SDB (if the reviews I see here are positive enough) might just
sway me to Manx.  (Unless Lattice has plans for a source-level debugger
of their own.  Any comments, Lattice?  :-)

But, according to the Transactor review of Lattice, which compared it
to Manx (3.4, I think), Manx compile times aren't all that much better.
How long does it take you to compile and link, say, a 400 line (not
counting comments) program?  I'd really like to know.  If it's in the
30-45 second range, that might be ok.

And after you compile and link your program, and you edit it and change
one variable name somewhere, how long does it take you to compile and
link again.  I thought so.  I still stick to my assertion that the usual
development style wastes an awful lot of work.  Remember the two rules
of computer science:

	(1)  Don't do it.
	(2)  Don't do it again.

Of course, sometimes you have to break rule (1).  But the usual scheme
breaks rule (2) all the time, too.  Let's break down our development
cycle into these steps:

	1.  editing
	2.  parsing
	3.  compiling, optimizing, and assembling
	4.  load and linking
	5.  debugging

Once a procedure is compiled, there's no reason to parse, compile, or
relocate it again, or change any other procedure's calls to it, unless
it (or a macro it uses) changes.  If a procedure changes, you only NEED
to parse, compile, and load it, and then go back-patch everyone else's
calls to it.  I can't believe this would take even a second for, say, a
50-line procedure (I'd welcome accurate figures from a compiler writer).
And if you use a structure editor (calm down.  it'll be okay.  no one
will MAKE you use it), you can drop the parsing phase, too.
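
To make the bookkeeping concrete, here's roughly what I imagine the
compiler keeping around per procedure (every name below is made up;
this is a sketch, not a design):

/* One record per procedure; all names invented for illustration. */
struct ProcInfo {
    char  *name;        /* procedure name                         */
    long   textstamp;   /* when its source text last changed      */
    long   objstamp;    /* when it was last compiled              */
    char **macros;      /* names of the macros/includes it uses   */
    int    nmacros;
};

/* Recompile only if the procedure's text, or any macro it uses,
   is newer than the object code we already have for it. */
int
needs_recompile(p, macro_stamp)
struct ProcInfo *p;
long (*macro_stamp)();  /* maps a macro name to its change time */
{
    int i;

    if (p->textstamp > p->objstamp)
        return 1;
    for (i = 0; i < p->nmacros; i++)
        if ((*macro_stamp)(p->macros[i]) > p->objstamp)
            return 1;
    return 0;
}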

I don't care whether my environment is "integrated" or not.  In fact,
I'd rather it weren't. I'd rather use the OS, and multiple processes, to
tie it all together.  As long as it's fast.  It's just that it seems
easier to have a system that takes the kind of short-cut I propose above
in an integrated environment.
-- 
-Peter Schachte
pds@quintus.uucp
...!sun!quintus!pds

andy@cbmvax.UUCP (Andy Finkel) (02/27/88)

In article <688@sandino.quintus.UUCP> pds@quintus.UUCP (Peter Schachte) writes:
>In article <8802230741.AA29070@cory.Berkeley.EDU>, dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
>> >...I proposed a new IFF form for source code....  This would allow for ...
>> > incremental compilation).
>Now all we need is an incremental compiler.

And an IFF.library.  And (possibly) an IFF handler.
-- 
andy finkel		{ihnp4|seismo|allegra}!cbmvax!andy 
Commodore-Amiga, Inc.

"Never test for an error condition you don't know how to handle."
		
Any expressed opinions are mine; but feel free to share.
I disclaim all responsibilities, all shapes, all sizes, all colors.

peter@nuchat.UUCP (Peter da Silva) (02/28/88)

In article <682@sandino.quintus.UUCP>, pds@quintus.UUCP (Peter Schachte) writes:
> In the following quote, you omitted an important part.  I proposed an
> IFF form for program source code that would allow an editor to mark
> what had changed since the last time the program was compiled.

No, no, no, a thousand times no. Look at the way the Mac is isolated in its
own pretty little room by its programming model. A large part of this is
due to the fact that source code on this baby includes a weird proprietary
chunk of data called the resource fork. The last thing the Amiga needs is
to duplicate more of the negative aspects of the Mac.

> This
> would allow for incremental compilers.  I suppose a current-technology
> compiler could make a copy of the source code it compiled most
> recently, and diff it against the code it's being asked to compile to
> determine what has changed.

What's wrong with looking at the file dates and recompiling the modules
that have changed since they were last compiled? If you're being well behaved
then you have your source highly modularised in a bunch of little files.
You only recompile the files that have changed. In fact the compiler only
looks at the files that have changed.

The program "make" does a pretty good job of this. The syntax is baroque, but
you only have to climb that hill once.

And for really large projects (megabytes of source) this is much better.

> It might even be worth it.  But I have yet
> to see a compiler that does it.  I have yet to see (or even hear of) an
> incremental compiler that was not integrated with an editor.  Do you
> have any examples you can cite?  (This is not a rhetorical question:
> I'd be very interested to hear of one).

See above. I hope most of us are already using it.

Another thing to note, if you use an integratedcompilerandeditor you lose
the ability to easily support packages containing code written in multiple
languages. I could have a package that included a device driver in BCPL, C,
or assembly, an IFF handler written in C, and a graphics editor written in
Modula. I'd have to run the IFF handler and the editor in parallel and talk
to them via messages, but that's totally cool on the Amiga. On the PC or under
UNIX where the object module format is more standardised I could link them all
together. And I can recompile the whole shebang, modified files only mind you,
by typing "make".

By the way, I'm the original guy who was griping... not just about AssemPro,
but about Turbo-this and Deluxe-that in general.
-- 
-- a clone of Peter (have you hugged your wolf today) da Silva  `-_-'
-- normally  ...!hoptoad!academ!uhnix1!sugar!peter                U
-- Disclaimer: These aren't mere opinions... these are *values*.

dillon@CORY.BERKELEY.EDU (Matt Dillon) (02/29/88)

>What's wrong with looking at the file dates and recompiling the modules
>that have changed since they were last compiled? If you're being well behaved
>then you have your source highly modularised in a bunch of little files.
>You only recompile the files that have changed. In fact the compiler only
>looks at the files that have changed.
>
>The program "make" does a pretty good job of this. The syntax is baroque, but
>you only have to climb that hill once.

	You gotta be kidding!  My code is modular, but not *that*
modular.  Breaking source up into thousands of little files makes it
unreadable.  This can hardly be compared with true incremental compiler
theory.  Besides, the whole point of the original argument was having
something better than the current edit-long_compile-link cycle.

	There is no disagreement that incremental compilers are the 
quickest and most convenient way to go.  The problem is that they are
usually very expensive to design, build, and thus buy.

					-Matt

>Another thing to note, if you use an integratedcompilerandeditor you lose
>the ability to easily support packages containing code written in multiple
>languages. I could have a package that included a device driver in BCPL, C,
>or assembly, an IFF handler written in C, and a graphics editor written in
>Modula. I'd have to run the IFF handler and the editor in parallel and talk

	True, but let's not get confused.  An integrated environment does
not an incremental compiler make.  They are two totally different concepts.
In real life, they are usually bundled together, but this is not a
requirement.  You can still have object modules with incremental
compilers and integrated environments (not all such environments currently
available support it, but this does not affect the theory).

						-Matt

thomson@utah-cs.UUCP (Richard A Thomson) (02/29/88)

In article <700@nuchat.UUCP> peter@nuchat.UUCP (Peter da Silva) writes:
>In article <682@sandino.quintus.UUCP>, pds@quintus.UUCP (Peter Schachte) writes:
>Look at the way the Mac is isolated in its
>own pretty little room by its programming model. A large part of this is
>due to the fact that source code on this baby includes a weird proprietary
>chunk of data called the resource fork. The last thing the Amiga needs is
>to duplicate more of the negative aspects of the Mac.

I don't really know what a resource fork is, but a terse explanation by my
friendly mac guru leads me to believe that it's not the demon you make it
out to be.  What's really so wrong with it?  Is source code transfer your
problem?  Apparently you just send the mac file as two separate ascii files;
one containing the source and the other containing the (possibly uuencoded?)
resources.  Perhaps I'm being a little dense, but what exactly is your
beef with this idea?
						Rich Thomson

peter@nuchat.UUCP (Peter da Silva) (02/29/88)

In article <692@sandino.quintus.UUCP>, pds@quintus.UUCP (Peter Schachte) writes:
> Hey, I use 'make' too (thanks Fred).  But the level of granularity is
> wrong.  I want to be recompiling PROCEDURES not FILES.
...
>                                                     the compiler would
> then have to read in all the includes FOR EACH PROCEDURE.  If I change
> 10 procedures, I have to do 10 compiles.  Even ignoring the time to load
> the compiler (which should be negligible when REZ comes out!), and
> even with precompiled includes, this would still be slower than having
> all that include information and linkage information already around, so
> all we have to do is modify the few things that have changed, and
> recompile the 10 procedures.

Since you have to have that info around anyway, why not just keep your
common precompiled include in RAM:. So all you do is recompile the 4 or 5
files those 10 procedures are in... since the info you'd be caching with
your incremental compiler is all in RAM anyway...

>                                                         Just because
> the languages are different doesn't mean that the development styles
> have to be.

Oh no, the development styles for interactive languages like Lisp, Basic,
Forth, and SmallTalk are quite different than for batch languages like
Fortran, Pascal, Modula, and 'C'. I've been on both sides of that fence.

> Yea, I've been using UNIX for about 3 years, although not exclusively.
> That doesn't mean I like it.  I find it too big and baroque.

So you never worked at understanding how to use it terribly efficiently,
nor why certain design decisions were made?

> But, according to the Transactor review of Lattice, which compared it
> to Manx (3.4, I think), Manx compile times aren't all that much better.
> How long does it take you to compile and link, say, a 400 line (not
> counting comments) program?  I'd really like to know.  If it's in the
> 30-45 second range, that might be ok.

hang on...

Compiling in RAM:, with libraries and includes in VD0:, no precompiled
includes (for this program they'd save about 15 seconds)... 150 lines
in... 36 seconds. Let me try something bigger... how about browser?

info.c                       945 rwed Today     20:45:25
pointer.c                    873 rwed Today     20:45:24
vollist.c                    733 rwed Today     20:45:17
menu.c                      7340 rwed Today     20:45:15
browser.c                  48862 rwed Today     20:45:07
toolmenu.c                  2621 rwed Today     20:44:54
copy.c                      5913 rwed Today     20:44:53

As you can see, I don't really practice what I preach about small files.
Let's time a full RAM: compile, eh?

20:47:00 at start. Doing a bit of blitting around...
20:49:46 when browser.c has finished compiling. 2:46 for ~50K of code.
20:51:00 when all the rest have compiled. Another 1:14.
20:51:47 and it's all linked. Total of 4 minutes and 47 seconds. I guess
I could take at most about a minute off for precompiled includes. I keep
promising myself I'll chop browser.c up one of these days. Judge for yourself
if this is fast enough.

> And after you compile and link your program, and you edit it and change
> one variable name somewhere, how long does it take you to compile and
> link again.

Maximum of 5 minutes for a sizable program. More like a minute if I was
being totally cool about modularization, and none of the files were over
the 6K of copy.c.

If you have an incremental compiler, that goes down to 47 seconds, absolute
minimum.

> 	1.  editing
> 	2.  parsing
> 	3.  compiling, optimizing, and assembling
> 	4.  load and linking
> 	5.  debugging

	6.  documenting. Don't forget that one.
-- 
-- a clone of Peter (have you hugged your wolf today) da Silva  `-_-'
-- normally  ...!hoptoad!academ!uhnix1!sugar!peter                U
-- Disclaimer: These aren't mere opinions... these are *values*.

pds@quintus.UUCP (Peter Schachte) (03/01/88)

In article <8802250823.AA09003@cory.Berkeley.EDU>, dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
>  But suppose
> :it were easy to move it into and out of IFF.  And further suppose that the
> :IFF file were maintained in a compressed form.
> 
> 	I used to set my tabs to 4 in VI (on my UNIX account). (I *like*
> 	using the TAB key).
> 
> 	It is quite easy to expand them to spaces before sending them to a
> 	printer (expand -4 file).
> 
> 	I no longer have my tabs set to 4 in VI.

Yea, I know what you mean.  This is one of the reasons I hate SCCS.  And
I guess what I'm proposing is kind of like SCCS, in that respect.  What
we really need is a file system that is smart enough to recognize
alternate file storage techniques, and expand the files when they are
opened (or as they're read), and squeeze them as they're written.
Someone mentioned the concept of an object-oriented operating system
here a while ago.  That's what we need:  each file can supply its own
procedures for opening it, getting bytes from it, writing to it, etc.
And it could supply special procedures, like to get the next C token....

Just musing....
-- 
-Peter Schachte
pds@quintus.uucp
...!sun!quintus!pds

pds@quintus.UUCP (Peter Schachte) (03/01/88)

In article <709@nuchat.UUCP>, peter@nuchat.UUCP (Peter da Silva) writes:
> Compiling in RAM:, with libraries and includes in VD0:, no precompiled
> includes (for this program they'd save about 15 seconds)... 150 lines
> in... 36 seconds. Let me try something bigger... how about browser?
> 
> info.c                       945 rwed Today     20:45:25
> pointer.c                    873 rwed Today     20:45:24
> vollist.c                    733 rwed Today     20:45:17
> menu.c                      7340 rwed Today     20:45:15
> browser.c                  48862 rwed Today     20:45:07
> toolmenu.c                  2621 rwed Today     20:44:54
> copy.c                      5913 rwed Today     20:44:53
> 
> As you can see, I don't really practice what I preach about small files.

Few people do.  For reasons I cited before:  too much work to make a
change to one procedure that requires changes to others.  That, and the
fact that one often adds "just this one more procedure" to the file.
It never seems to make sense to start a whole file for a single
procedure....

> Let's time a full RAM: compile, eh?
> 
> 20:47:00 at start. Doing a bit of blitting around...
> 20:49:46 when browser.c has finished compiling. 2:46 for ~50K of code.
> 20:51:00 when all the rest have compiled. Another 1:14.
> 20:51:47 and it's all linked. Total of 4 minutes and 47 seconds. I guess
> I could take at most about a minute off for precompiled includes. I keep
> promising myself I'll chop browser.c up one of these days. Judge for yourself
> if this is fast enough.

A bit slow, but not too bad IF I DON'T HAVE TO DO IT VERY OFTEN.

> > And after you compile and link your program, and you edit it and change
> > one variable name somewhere, how long does it take you to compile and
> > link again.
> 
> Maximum of 5 minutes for a sizable program. More like a minute if I was
> being totally cool about modularization, and none of the files were over
> the 6K of copy.c.
> 
> If you have an incremental compiler, that goes down to 47 seconds, absolute
> minimum.

Why?  I still claim that a small change that only affects two or three
procedures needn't take more than a few seconds to recompile and
relink.  Remember, I'm talking about an INCREMENTAL compiler and
INCREMENTAL linker.  They would only parse the few procedures that have
changed (which, let's assume, are on ram disk of some sort), and
compile them.  This can't take more than a second or two, can it?  If
so, why?  Let's assume you're generating machine code directly (why
waste the time of generating assembler?), so now all you have to do is
modify the executable to include the new machine code, and back-patch
in the new addresses.  That can't take long, either.  I know I'm
glossing over some issues of how to extend an existing executable.  But
in principle it should be possible, and remember I'm talking about what
is POSSIBLE.  An integrated environment (let me hasten to repeat:  I'm
not endorsing the idea of integrated environments, just pointing out
again that they have some advantages) would not have to produce an
executable until you ask for one.  It could compile right into memory,
back-patch, free up the memory consumed by the old version of changed
procedures, and fire up the debugger.  That would be FAST, much faster
than 47 seconds.  Where'd the 47 number come from, anyway?
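
One cheap way to do the back-patching, just as a sketch (all names
below are invented): compile every inter-procedure call to go through
a table of pointers, so "relinking" a changed procedure is a single
store into that table.

#define MAXPROCS 256

int (*calltab[MAXPROCS])();     /* one slot per known procedure */

/* Callers are compiled to do (*calltab[slot])(...) rather than a
   direct call, so swapping in a freshly compiled version is just: */
void
relink_proc(slot, newfn)
int slot;
int (*newfn)();
{
    calltab[slot] = newfn;      /* old code can be freed afterwards */
}

You pay one extra indirection per call, of course, but that's exactly
the kind of trade a development environment can make and a standard
linker can't.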

> 
> > 	1.  editing
> > 	2.  parsing
> > 	3.  compiling, optimizing, and assembling
> > 	4.  load and linking
> > 	5.  debugging
> 
> 	6.  documenting. Don't forget that one.

Actually, this should be 1.5, or even a part of 1.  Or maybe 0.

You mentioned that languages like Basic, FORTH, LISP, Smalltalk, etc.,
that are interactive are basically different from languages like C,
Modula-2, Pascal, Assembler, etc., that are batch-oriented.  Why?  It's
really the environments that are different, not so much the language.
Basic is pretty much an interactive FORTRAN environment.  Languages are
not batch, environments are.  There's no reason you can't have a C or
Pascal interpreter, and in fact such things exist.

In fact, you can even have an interactive development system without an
interpreter.  I've used a Lisp system that didn't have an interpreter,
only a compiler.  When you type in an expression, it compiles and
executes it.  This could be done for C, as well (and maybe it has).
This would make it MUCH easier to prototype things on the Amiga.  How
many times have you gone through the edit-compile-link-execute cycle
for short experiments?  Ones that really COULD have been run in not
much longer than it takes to type them?

It's not the language that makes development slow.  We CAN have speed,
and portability, and still have a decent development and
experimentation environment.  When the program is tested, THEN you
produce the stand-alone takes-5-minutes-to-compile-and-link version.
This should be possible.  At least in principle.

I hope JimG and JohnT are listening :-).
-- 
-Peter Schachte
pds@quintus.uucp
...!sun!quintus!pds

bts@sas.UUCP (Brian T. Schellenberger) (03/01/88)

[it has been suggested that incremental compiles share information
so only the changed stuff need be re-compiled]

I do this all the time on Unix and other non-integrated environments.
It's called  *make*.  You keep your individual file sizes small and
presto--incremental compiles of only the parts that need to be re-compiled.
-- 
                                                         --Brian.
(Brian T. Schellenberger)				 ...!mcnc!rti!sas!bts

DISCLAIMER:  Whereas Brian Schellenberger (hereinafter "the party of the first 

pds@quintus.UUCP (Peter Schachte) (03/03/88)

In article <367@sas.UUCP>, bts@sas.UUCP (Brian T. Schellenberger) writes:
> [it has been suggested that incremental compiles share information
> so only the changed stuff need be re-compiled]
> 
> I do this all the time on Unix and other non-integrated environments.
> It's called  *make*.

Make is a good thing.  Much better than not having it.  But it does not
make an incremental compiler out of an ordinary compiler.  Why?  Several
reasons.  Firstly, you'd have to keep each procedure in a separate file.
This is impractical (for reasons I've presented before).  And even if
you do this, you find that you have to fire up your compiler and read in
all your includes once for each procedure.  This is a lot of overhead
for a 10 line procedure (and 200 lines of includes!). And finally, when
you're all done compiling your 3 or 4 changed files, you have to link
everything FROM SCRATCH.  This is not insignificant.  In fact it could
well be the dominant factor in the time it takes you from finish of
editing to start of debugging.
-- 
-Peter Schachte
pds@quintus.uucp
...!sun!quintus!pds

peter@nuchat.UUCP (Peter da Silva) (03/03/88)

In article <714@sandino.quintus.UUCP>, pds@quintus.UUCP (Peter Schachte) writes:
> > > And after you compile and link your program, and you edit it and change
> > > one variable name somewhere, how long does it take you to compile and
> > > link again.

If I changed a variable name, and the incremental compiler *didn't* go
back and recompile quite a lot, I'd be real wary about trusting it. Changing
declarations tends to have far-reaching consequences.

> > Maximum of 5 minutes for a sizable program. More like a minute if I was
> > being totally cool about modularization, and none of the files were over
> > the 6K of copy.c.

And of course it took more than that to find the bug, make the change,
document it, look around for anything else that might be hosed, and then
after I've compiled it and run it, about 10% of the time, guru and reboot.

> > If you have an incremental compiler, that goes down to 47 seconds, absolute
> > minimum.

> Where'd the 47 number come from, anyway?

That was the amount of time Browser spent linking.

> > > 	1.  editing

This is the big time consumer. Really.
-- 
-- a clone of Peter (have you hugged your wolf today) da Silva  `-_-'
-- normally  ...!hoptoad!academ!uhnix1!sugar!peter                U
-- Disclaimer: These aren't mere opinions... these are *values*.

ain@s.cc.purdue.edu (Patrick White) (03/04/88)

In article <722@sandino.quintus.UUCP> pds@quintus.UUCP (Peter Schachte) writes:
>In article <367@sas.UUCP>, bts@sas.UUCP (Brian T. Schellenberger) writes:
>> [it has been suggested that incremental compiles share information
>> so only the changed stuff need be re-compiled]

   Sounds good to me Peter, why don't you write it..
   
   Oh, and be sure to include lots of ram with the package because it will
eat lots of ram... you know.. with enough ram I could rule the wo... scratch
that.. wrong saying...     with enough ram, I could keep everything in ram,
and with MANX's precompiled symbol tables, I wouldn't need to read in 200
lines of includes for a 10 line function...
   You know, I bet this would be *almost* as fast as your idea Peter.

-- Pat White
UUCP: k.cc.purdue.edu!ain  BITNET: PATWHITE@PURCCVM   PHONE: (317) 743-8421
U.S.  Mail:  320 Brown St. apt. 406,    West Lafayette, IN 47906

lsr@Apple.COM (Larry Rosenstein) (03/04/88)

In article <5295@utah-cs.UUCP> thomson@cs.utah.edu.UUCP (Richard A Thomson) writes:
>
>I don't really know what a resource fork is, but a terse explanation by my
>friendly mac guru leads me to believe that it's not the demon you make it
>out to be.  What's really so wrong with it?  Is source code transfer your
>problem?  Apparently you just send the mac file as two separate ascii files;

In fact all the development systems on the Mac provide a pure text resource
format, and a corresponding resource compiler for creating the actual
resource.  The MPW system even has a resource decompiler, which allows you
to create a resource graphically and still be able to get a textual form.

The resource fork provides a nice mechanism for storing typed data that can
be accessed by name or ID.  It is used on the Macintosh as an alternative to
hardwiring data into the application.  Originally, this was for the phrases
that appear in an application (for the purpose of translating to another
language), but it is also used for things like application preferences
(window position, size, color, ...).

There are a variety of programs that modify resources, which allow users to
customize some aspects of their applications.

-- 
		 Larry Rosenstein,  Object Specialist
 Apple Computer, Inc.  20525 Mariani Ave, MS 32E  Cupertino, CA 95014
	    AppleLink:Rosenstein1    domain:lsr@Apple.COM
		UUCP:{sun,voder,nsc,decwrl}!apple!lsr

peter@nuchat.UUCP (Peter da Silva) (03/05/88)

In article <5295@utah-cs.UUCP>, thomson@utah-cs.UUCP (Richard A Thomson) writes:
> In article <700@nuchat.UUCP> peter@nuchat.UUCP (Peter da Silva) writes:
> >In article <682@sandino.quintus.UUCP>, pds@quintus.UUCP (Peter Schachte) writes:
> >Look at the way the Mac is isolated in its
> >own pretty little room by its programming model. A large part of this is
> >due to the fact that source code on this baby includes a weird proprietary
> >chunk of data called the resource fork. The last thing the Amiga needs is
> >to duplicate more of the negative aspects of the Mac.

> What's really so wrong with it?  Is source code transfer your
> problem?

My problem is that what people do on the Mac is pretty much irrelevant to
what people do on every other computer in the world, and vice versa. The
two main reasons are that a program on the Mac has to be written essentially
as a device driver, and that it's not even entirely examinable unless you
have another Mac... because the resource fork contains information that is
integral to dealing with the file but which is unintelligible even when
it's available to people on other machines.

For example, if I write a well-behaved program on the Amiga to generate
successive generations of an artificial organism for playing with the idea
of evolution, someone on an IBM-PC could take that program and adapt it for
their machine simply by stripping out the Intuition stuff and putting in
IBM stuff. Similarly I can port Emacs from UNIX to the Amiga, as a well-
behaved application, simply by writing the screen-display code and compiling
the rest.

A well-behaved Mac application has a number of attributes that make this hard:

It has to call the operating system at least once every 100 milliseconds to
maintain desk accessories.

It has to be built around an "event loop" that handles all events in the system,
and passes down to other programs the ones it's not interested in.

It has to allow for arbitrary relocation of large parts of its data as the
system collects garbage.

It has to use the resource fork for text messages and displays... so unless
you have the resource fork in a human-readable form you can't even tell what
the program's supposed to say.

The idea of porting a Mac program to anything else, or porting anyone else's
programs to the Mac, is pretty much a fantasy. Unless you write a badly-behaved
Mac program or use A/UX.
-- 
-- a clone of Peter (have you hugged your wolf today) da Silva  `-_-'
-- normally  ...!hoptoad!academ!uhnix1!sugar!peter                U
-- Disclaimer: These aren't mere opinions... these are *values*.

dave@gtmvax.UUCP (Dave Hanna) (03/05/88)

In article <367@sas.UUCP>, bts@sas.UUCP (Brian T. Schellenberger) writes:
> [it has been suggested that incremental compiles share information
> so only the changed stuff need be re-compiled]
> 
> I do this all the time on Unix and other non-integrated environments.
> It's called  *make*.  You keep your individual file sizes small and
> presto--incremental compiles of only the parts that need to be re-compiled.
> -- 

While I am a great fan of *make*, and use it extensively, I think you're
kidding yourself if you think you can achieve incremental compilation
with it in an application of significant size.  It is incremental
only at the file level, not at the information level.  The application
I'm working on has 12 inter-related subsystems, the largest of which
has 66 .c files, all appropriately decomposed for functional cohesiveness.
Each .c file has an accompanying .h file, many of which #include
other .h files.  I may change a single #define or enum typedef in one 
of the .h files, but make is going to recompile every file that includes
that .h, directly or indirectly, whether the particular information that
I have changed is relevant in that file or not.  (A re-make of this project
may take 2+ hrs. on a VAX-8200 -- but that's okay - it used to take 12+
on our 68000-based UniSoft machines!)

You would have to interpret "small" (referring to file sizes) as 
meaning "containing a single piece of information"  in order for 
make to substitute for an incremental compiler.
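To make that concrete, here is a toy makefile (the file names are invented):

	# Toy makefile -- three modules sharing one header.
	OBJS = parse.o eval.o output.o

	prog: $(OBJS)
		cc -o prog $(OBJS)

	parse.o:  parse.c  defs.h
	eval.o:   eval.c   defs.h
	output.o: output.c defs.h

	# Touch defs.h to change one #define that only eval.c uses, and
	# make still rebuilds all three objects: it compares timestamps
	# of whole files, not the declarations each module actually uses.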

All of which should not be interpreted as meaning I am against make
and in favor of incremental compilers.  I suspect we would have to
give up too much flexibility and/or efficiency in an incremental
compiler, and I like make just the way it is.  I just think we
should be clear about what we're talking about.

	Dave Hanna

pds@quintus.UUCP (Peter Schachte) (03/08/88)

In article <2403@s.cc.purdue.edu>, ain@s.cc.purdue.edu (Patrick White) writes:
>    Sounds good to me Peter, why don't you write it..

I'd love to.  Got an extra year or two you don't need :-)?  I could
sure use the time.
>    
>    Oh, and be sure to include lots of ram with the package

Of course, keeping it in ram would be best, but you could always keep
it on (preferably hard) disk.

> ...with MANX's precompiled symbol tables, I wouldn't need to read in 200
> lines of includes for a 10 line function...
>    You know, I bet this would be *almost* as fast as your idea Peter.

No, you'll still have to link the *WHOLE* program from scratch.  I'm
beginning to come to the conclusion that with current compiler
technology, one can make compile time pretty small.  But there's
nothing we can do about long link times.  Before we go off and build an
incremental compiler, I think an incremental linker would be a bigger
gain.

Anybody given any thought to an incremental linker?  How hard would
this be?
-- 
-Peter Schachte
pds@quintus.uucp
...!sun!quintus!pds

peter@sugar.UUCP (Peter da Silva) (03/08/88)

An incremental compiler is nice. But I'm not going to give up my
collection of separate tools, each of which does its job well, or my
nice and simple text-format source to get it.

Make isn't perfect, but it's the best tool we have at this time to do the
job. It's a better solution than a huge does-everything-but-nothing-well
monolithic system. And it's a better solution than making my source files
tokenised, IFF, or any other weird variant thereof.
-- 
-- Peter da Silva  `-_-'  ...!hoptoad!academ!uhnix1!sugar!peter
-- Disclaimer: These U aren't mere opinions... these are *values*.

lsr@Apple.COM (Larry Rosenstein) (03/09/88)

Allow me to correct some errors.

>two main reasons are that a program on the Mac has to be written essentially
>as a device driver, and that it's not even entirely examinable unless you
>have another Mac... because the resource fork contains information that is
>integral to dealing with the file but which is unintelligible even when
>it's available to people on other machines.

This is wrong.  A program on the Mac is written much like a program on the
Amiga.  It is not a device driver.  Also, as people have mentioned, it is
possible to get a textual representation of the resources, which would look
much like the equivalent data definitions in an Amiga program.  (The
difference is that the defns are not compiled in and can easily be changed.)

>It has to call the operating system at least once every 100 milliseconds to
>maintain desk accessories.

It is not essential to do this.  It is courteous to keep DAs running, but
most applications do not do this while doing compute-intensive things.

>It has to be built around an "event loop" that handles all events in the
>system, and passes down to other programs the ones it's not interested in.

This is not much different than what you have to do on the Amiga in response
to certain messages.  On the Mac, you can take advantage of frameworks such
as MacApp, which handle these things automatically.  (A programmer using
MacApp has to do a lot less work than an Amiga programmer.)

>It has to allow for arbitrary relocation of large parts of its data as the
>system collects garbage.

The system does not collect garbage and it is not arbitrary.  It coalesces
the memory blocks to prevent heap fragmentation.  This allows the program to
make better use of the available memory.  The programmer can lock these
blocks as needed and can allocate non-relocatable blocks.
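In code, using a relocatable block looks roughly like this sketch (the block
size is arbitrary, and header names vary by compiler):

	#include <Types.h>
	#include <Memory.h>        /* header names vary by compiler */

	/* Sketch of a relocatable block: the Memory Manager may      */
	/* slide the block when it compacts the heap, so you lock it  */
	/* for any stretch where you hold a bare pointer into it, and */
	/* unlock it afterwards.                                      */
	void HandleExample(void)
	{
	    Handle h = NewHandle(1024L);     /* relocatable block     */

	    if (h == NULL)
	        return;                      /* allocation failed     */

	    HLock(h);                        /* pin it down           */
	    /* ... safe to use *h as an ordinary pointer here ...     */
	    HUnlock(h);                      /* let it float again    */

	    DisposHandle(h);
	}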

>It has to use the resource fork for text messages and displays... so unless
>you have the resource fork in a human-readable form you can't even tell what
>the program's supposed to say.

This is nonsense.  Using resources makes it easy to customize the messages
using a simple program, without recompiling the application.  If you don't
care about this, you just use strings.

>The idea of porting a Mac program to anything else, or porting anyone
>else's programs to the Mac, is pretty much a fantasy. Unless you write a
>badly-behaved Mac program or use A/UX.

People at Microsoft, Aldus, ... would disagree with you.  They ported their
programs between the Mac and Windows.  I have read where this was not a
major effort (provided you allow for this in the first place).

It seems to me that porting an Amiga program would be even more difficult,
unless that program was designed to be ported and didn't take advantage of
the unique Amiga features.

If you start out with a goal of writing a portable application, then you
will isolate the unique features of each machine so that they can be handled
separately.  Much of what goes on in a program deals with data structures and
algorithms, which are universal.

-- 
		 Larry Rosenstein,  Object Specialist
 Apple Computer, Inc.  20525 Mariani Ave, MS 32E  Cupertino, CA 95014
	    AppleLink:Rosenstein1    domain:lsr@Apple.COM
		UUCP:{sun,voder,nsc,decwrl}!apple!lsr

ali@polya.STANFORD.EDU (Ali Ozer) (03/09/88)

In article <7599@apple.Apple.Com> lsr@apple.UUCP (Larry Rosenstein) writes:
>>It has to call the operating system at least once every 100 milliseconds to
>>maintain desk accessories.  ...
>>It has to be built around an "event loop" that handles all events in the
>>system, and passes down to other programs the ones it's not interested in.
>>
>This is not much different than what you have to do on the Amiga in response
>to certain messages.  On the Mac, you can take advantage of frameworks such
>as MacApp, which handle these things automatically.  (A programmer using
>MacApp has to do a lot less work than an Amiga programmer.)

But one difference is that on the Mac your code *has* to check to see where
the user clicked, and depending on where the click was, you take appropriate
action --- such as moving the window (if clicked in the drag bar), or 
activating another window (if clicked in another window), or calling
the menu manager (if the click was in the menus), etc. I was
shocked to find this out when I was doing my share of Mac programming.
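That dispatch looks roughly like the sketch below (abbreviated, with the menu
handling elided; screenBits is the usual QuickDraw global):

	#include <Types.h>
	#include <Events.h>
	#include <Windows.h>
	#include <Menus.h>
	#include <Desk.h>

	/* Sketch: on a mouse-down the application itself must ask    */
	/* where the click landed and do the window management.       */
	void DoMouseDown(EventRecord *ev)
	{
	    WindowPtr win;

	    switch (FindWindow(ev->where, &win)) {
	    case inMenuBar:
	        /* MenuSelect(ev->where) and dispatch on the result */
	        break;
	    case inSysWindow:
	        SystemClick(ev, win);              /* click in a DA  */
	        break;
	    case inDrag:
	        DragWindow(win, ev->where, &screenBits.bounds);
	        break;
	    case inContent:
	        if (win != FrontWindow())
	            SelectWindow(win);             /* bring forward  */
	        break;
	    }
	}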

You mention MacApp --- MacApp unfortunately has a learning curve like
Everest --- I have friends who started learning it two years ago and
are finally beginning to create programs in it. Of course, even with
MacApp, it's still *your* application that does the grunge work,
while on the Amiga, the code to move/depth arrange/activate & deactivate
windows, etc., is all run as a separate task. Thus these events will occur no
matter what the overall system load is. (Intuition, the task responsible
for these events, runs at a high priority). You can drop out of your
event loop, enter another one, or do whatever you want --- the user can still
play around with the windows & screens without any slowdown.
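By contrast, a minimal Amiga loop only has to wait for the IDCMP classes it
asked for.  Roughly (a sketch using 1.x-style names, with compiler-specific
prototypes omitted):

	#include <exec/types.h>
	#include <intuition/intuition.h>

	/* Sketch: wait only for the IDCMP classes this window asked  */
	/* for -- here just CLOSEWINDOW.  Dragging, depth-arranging   */
	/* and activation are done by the Intuition task itself, so   */
	/* they keep working even while this loop is asleep.          */
	void EventLoop(struct Window *win)
	{
	    struct IntuiMessage *msg;
	    BOOL done = FALSE;

	    while (!done) {
	        Wait(1L << win->UserPort->mp_SigBit);   /* sleep until a message */
	        while ((msg = (struct IntuiMessage *)GetMsg(win->UserPort)) != NULL) {
	            if (msg->Class == CLOSEWINDOW)
	                done = TRUE;
	            ReplyMsg((struct Message *)msg);
	        }
	    }
	}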

Finally, there's the issue of "smart-refresh" windows --- That was a shocker
too --- To find that you have to refresh your window yourself! On the Amiga
you are not only relieved of checking for input events the system can deal
with, you are also relieved of having to redraw windows when they get 
revealed. The system will keep the covered areas in its own bitmaps
and restore contents whenever necessary. Of course, if you want, you can still
fall back to the "simple" refresh method of having the application redraw
the window whenever it's necessary to do so --- Except, on the Amiga,
where windows can be moved/depth arranged asynchronously with program
flow, this would cause problems and you'd have to pay attention to 
"refresh_window" type messages.

Having programmed both the Amiga and the Mac --- one thing is
apparent to me: The Amiga tries to offload as much runtime processing
from user programs as possible by making the programmer specify
many options at initialization time (which most of the time means compile
time). The Mac, on the other hand, lets you open a window real easily, in a
few lines of code, but then makes you do all the grunge work at runtime.
I guess the two are just different ways of looking at the world. In my
opinion, in a multitasking environment, Amiga's method makes much more
sense. 

Ali Ozer

dillon@CORY.BERKELEY.EDU (Matt Dillon) (03/09/88)

	Two excerpts from Larry Rosenstein's mac-misconceptions posting.
All his comments are, of course, true, though it is necessary to talk
about the degree of truth of two of them.

:>It has to be built around an "event loop" that handles all events in the
:>system, and passes down to other programs the ones it's not interested in.
:
:This is not much different than what you have to do on the Amiga in response

	As a general concept, just about every computer program I know
has some sort of event loop, no matter what computer it was written on.
From a Mac programmer's perspective, with little knowledge of the Amiga,
the Mac's event calls are sufficient.  From an Amiga programmer's
perspective, however, there are a huge number of deficiencies in the way the
Mac handles events.

:to certain messages.  On the Mac, you can take advantage of frameworks such
:as MacApp, which handle these things automatically.  (A programmer using
:MacApp has to do a lot less work than an Amiga programmer.)

	To a point.  As long as you restrict yourself to MacApp's 
capabilities.  One thing the Mac does have going for it is more 'system'
default constructs.  But apart from this I just don't see it.

:>The idea of porting a Mac program to anything else, or porting anyone
:>else's programs to the Mac, is pretty much a fantasy. Unless you write a
:>badly-behaved Mac program or use A/UX.
:
:People at Microsoft, Aldus, ... would disagree with you.  They ported their
:programs between the Mac and Windows.  I have read where this was not a
:major effort (provided you allow for this in the first place).

	As you said, provided you allow for this in the first place.  The
original message and response are assuming two different types of 
programs anyway, so point and counter-point don't exactly mesh.  Graphics-
oriented IBM programs are easy to port anyway because the IBM (PC) has no
kernel... at least nothing real.

	In fact, up until recently, compilers and assemblers on the IBM-PC
were woefully out of date.  Did you hear they finally upgraded MASM from,
what was it? compiled fortran? or was it compiled basic?  Can you believe
that?

:
:It seems to me that porting an Amiga program would be even more difficult,
:unless that program was designed to be ported and didn't take advantage of
:the unique Amiga features.

	Porting things *to* the Amiga is simple... very simple.  In fact,
I can compile some UNIX programs without a single modification.
The only major problems I've come up against (for UNIX programs) can be 
blamed on deficiencies in the compiler (Aztec 3.4a in my case) not
allowing more than 32K static BSS data per object module... you
wind up having to malloc() the space.
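The workaround amounts to something like this sketch (the size is just an
example):

	/* Sketch: the compiler rejects more than 32K of static data  */
	/* per object module, so instead of                            */
	/*                                                             */
	/*     static char table[100000];                              */
	/*                                                             */
	/* the space is grabbed at startup:                            */

	#include <stdlib.h>

	static char *table;

	int init_table(void)
	{
	    table = malloc(100000L);
	    return table != NULL;        /* caller checks for failure */
	}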

	The difficulty of porting things from the Amiga depends on the
application.  Usually, I just use the same source and write a compatibility
library to handle things like OpenWindow(), etc...  Some things are much
more difficult to port.  For instance, if you are using huge amounts of
asynchronous IO the system you are porting to usually doesn't have 
sufficient capability and you have to actually change your algorithm.
This is the problem I had porting DNET to UNIX.  I wound up rewriting it
essentially from scratch, keeping only select sections of the protocol.
That was the only way I could make it streamlined and efficient.

	Likewise for porting things FROM the Mac.... about equivalent to
porting something FROM the Amiga.  

:
:If you start out with a goal of writing a portable application, then you
:will isolate the unique features of each machine so that they can be handled
:separately.  Much of what goes on in a program deals with data structures and
:algorithms, which are universal.  

	This applies to any machine, and thus doesn't constitute any kind
of comparison or misgiving.

					-Matt

jwhitnel@csi.UUCP (Jerry Whitnell) (03/10/88)

In article <714@sandino.quintus.UUCP> pds@quintus.UUCP (Peter Schachte) writes:
|In article <709@nuchat.UUCP>, peter@nuchat.UUCP (Peter da Silva) writes:
|> Let's time a full RAM: compile, eh?
|> 
|> 20:47:00 at start. Doing a bit of blitting around...
|> 20:49:46 when browser.c has finished compiling. 2:46 for ~50K of code.
|> 20:51:00 when all the rest have compiled. Another 1:14.
|> 20:51:47 and it's all linked. Total of 4 minutes and 47 seconds. I guess
|> I could take at most about a minute off for precompiled includes. I keep
|> promising myself I'll chop browser.c up one of these days. Judge for yourself
|> if this is fast enough.
|
|A bit slow, but not too bad IF I DON'T HAVE TO DO IT VERY OFTEN.

Actually, very slow.  ~60K bytes of source is ~3000 lines (plus includes);
on my Mac Plus that would take < 30 seconds, from hard disk.  Even less on
my Mac II (with 1/2 MB of cache :-).

|
|
|Why?  I still claim that a small change that only affects two or three
|procedures needn't take more than a few seconds to recompile and
|relink.  Remember, I'm talking about an INCREMENTAL compiler and
|INCREMENTAL linker.  They would only parse the few procedures that have
|changed (which, let's assume, are on ram disk of some sort), and
|compile them.

The compiler I'm talking about above is LightspeedC.  It is not an incremental
compiler, but a full-file compiler.  Similar compilers exist on the IBM PC
(TurboC and QuickC).  The technology for writing simple, fast compilers
exists, so why write complicated compilers?

|  An integrated environment (let me hasten to repeat:  I'm
|not endorsing the idea of integrated environments, just pointing out
|again that they have some advantages) would not have to produce an
|executable until you ask for one.  It could compile right into memory,
|back-patch, free up the memory consumed by the old version of changed
|procedures, and fire up the debugger.  That would be FAST, much faster
|than 47 seconds.  Where'd the 47 number come from, anyway?

On the order of 30 seconds for a full recompile, less than 10 for changing
a single file (on a Mac Plus, somewhat faster on a Mac II).  With a
multitasking operating system such as MultiFinder or the Amiga OS, one
can run the program under test as a separate process and, when it completes
(assuming the machine is in a usable state), switch back to the
integrated environment.  One can switch at any time to look at source
code or make changes.  Not theory, fact on the Macintosh.  If you
haven't seen it, find some Mac hacker with LightspeedC (preferably one
with enough memory to run MultiFinder and a fast hard disk).  It will
make you jealous of the Mac for the first time :-).  Believe me, compared
to 5 minute compiles, it's the only way to program.
|
|It's not the language that makes development slow.  We CAN have speed,
|and portability, and still have a decent development and
|experimentation environment.  When the program is tested, THEN you
|produce the stand-alone takes-5-minutes-to-compile-and-link version.
|This should be possible.  At least in principle.

Oh, it is possible, and practical, and doable.  See LightspeedC as an
example.

|
|I hope JimG and JohnT are listening :-).
|-- 
|-Peter Schachte
|pds@quintus.uucp
|...!sun!quintus!pds


Jerry Whitnell				Been through Hell?
Communication Solutions, Inc.		What did you bring back for me?
						- A. Brilliant

g/^>/s//|/     Take that, inews!  Too many lines, indeed.

jwhitnel@csi.UUCP (Jerry Whitnell) (03/11/88)

In article <8803090725.AA05559@cory.Berkeley.EDU> dillon@CORY.BERKELEY.EDU (Matt Dillon) writes:
|
|	Two excerps from Larry Rosenstein's mac-misconceptions posting.
|All his comments are, of course, true, though it is necessary to talk 
|about the degree of trueness on two of them.
|
|:>The idea of porting a Mac program to anything else, or porting anyone
|:>else's programs to the Mac, is pretty much a fantasy. Unless you write a
|:>badly-behaved Mac program or use A/UX.
|:
|:People at Microsoft, Aldus, ... would disagree with you.  They ported their
|:programs between the Mac and Windows.  I have read where this was not a
|:major effort (provided you allow for this in the first place).
|
|	As you said, provided you allow for this in the first place.  The
|original message and response are assuming two different types of 
|programs anyway, so point and counter-point don't exactly mesh.  Graphic
|oriented IBM programs are easy to port anyway because the IBM (PC) has no
|kernel... at least nothing real.

IBM programs written for Windows are difficult to port to other environments
(other than the Mac) for the same reason that Mac programs are difficult to
port: Windows uses an event-driven model similar to the Mac's.  Graphics
programs that don't use Windows would be very difficult to port
because they depend on the hardware environment (EGA vs. CGA vs. VGA) and
not on any higher-level interface.

|
|	In fact, up until recently, compilers and assemblers on the IBM-PC
|were woefully out of date.  Did you hear they finally upgraded MASM from,
|what was it? compiled fortran? or was it compiled basic?  Can you believe
|that?

This happened a couple of years ago.

|
|:
|:It seems to me that porting an Amiga program would be even more difficult,
|:unless that program was designed to be ported and didn't take advantage of
|:the unique Amiga features.
|
|	Porting things *to* the Amiga is simple... very simple in fact.
|In fact, I can compile some UNIX programs without a single modification.
|The only major problems I've come up against (for UNIX programs) can be 
|blamed on deficiencies in the compiler (Aztec 3.4a in my case) not
|allowing more than 32K static BSS data per object module... you
|wind up having to malloc() the space.

I can do the same on the Macintosh, again with the same caveats.  All of
the Mac C compilers provide a stdio package that translates unix style
calls into Mac OS style calls.  The problems I've had are either compiler
related (poor implementation of the library, 32K static BSS, etc.) or
all-the-world-is-a-vax-itis.

|	Likewise for porting things FROM the Mac.... about equivalent to
|porting something FROM the Amiga.  

True.  BTW, there is a book out that discusses writing applications that
are portable among the four major windowing systems in the PC market
(Mac, Amiga, Atari and Microsoft Windows).  I don't have it here so I
don't have the title.  Will post it later if possible and if there's interest.

|
|
|					-Matt

Jerry Whitnell				Been through Hell?
Communication Solutions, Inc.		What did you bring back for me?
						- A. Brilliant

jwhitnel@csi.UUCP (Jerry Whitnell) (03/12/88)

In article <2122@polya.STANFORD.EDU> ali@polya.UUCP (Ali Ozer) writes:
|Finally, there's the issue of "smart-refresh" windows --- That was a shocker
|too --- To find that you have to refresh your window yourself! On the Amiga
|you are not only relieved of checking for input events the system can deal
|with, you are also relieved of having to redraw windows when they get 
|revealed. The system will keep the covered areas in its own bitmaps
|and restore contents whenever necessary. Of course, if you want, you can still
|fall back to the "simple" refresh method of having the application redraw
|the window whenever it's necessary to do so --- Except, on the Amiga,
|where windows can be moved/depth arranged asynchronously with program
|flow, this would cause problems and you'd have to pay attention to 
|"refresh_window" type messages.

The main reason for the non-smart-refresh windows on the Mac is memory.
Remember, the original Mac was 128K, of which only 80K was available for
programs to run in.  Hence saving away the bit map was a very expensive
use of that memory.  Even today on my Mac II, the bit map for a screen-size
window (640x480x8) is approximately 300K; 4 such windows and I've blown
the megabyte of memory the Mac comes with!  However, the programmer still
has the option of saving the bit map at window deactivation and restoring it
when the window is reactivated, but it is up to the programmer to decide to use
memory that way.  I'm not familiar with the Amiga, so I'd like to know: how
do you as a programmer handle the memory requirements for the bit maps of a
large number of windows?
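The arithmetic behind numbers like these is simple enough to sketch (ignoring
row rounding and window borders):

	/* Sketch: an obscured window's backing store is roughly      */
	/* width x height x depth bits.                                */
	long backing_bytes(long width, long height, long depth)
	{
	    return (width * height * depth) / 8;
	}

	/* 640 x 480 x 8 -> 307,200 bytes, about 300K (Mac II window)     */
	/* 640 x 200 x 2 ->  32,000 bytes, about  32K (Amiga Workbench)   */
	/* 640 x 200 x 1 ->  16,000 bytes, about  16K (monochrome screen) */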

|
|Having programmed both the Amiga and the Mac --- one thing is
|apparent to me: The Amiga tries to offload as much runtime processing
|from user programs as possible by making the programmer specify 
|many options at initialization time (which most of the time means compile
|time). The Mac, on the other hand, lets you open a window real easily, in a
|few lines of code, but then makes you do all the grunge work at runtime.
|I guess the two are just different ways of looking at the world. In my
|opinion, in a multitasking environment, Amiga's method makes much more
|sense. 

The grunge work, however, is a couple of hundred lines of code that can be
done once and reused in other applications.
|
|Ali Ozer



Jerry Whitnell				Been through Hell?
Communication Solutions, Inc.		What did you bring back for me?
						- A. Brilliant

dillon@CORY.BERKELEY.EDU (Matt Dillon) (03/12/88)

>port: Windows uses an event-driven model similar to the Mac's.  Graphics
>programs that don't use Windows would be very difficult to port
>because they depend on the hardware environment (EGA vs. CGA vs. VGA) and

	Wanna bet?  Most major commercial programs use ascii streams to a 
device driver to handle *ALL* graphics output, with minimal calls to the
operating system beyond that.  Even CAD systems...

	Games, on the other hand, usually go directly to the hardware.

>|	In fact, up until recently, compilers and assemblers on the IBM-PC
>|were woefully out of date.  Did you hear they finally upgraded MASM from,
>|what was it? compiled fortran? or was it compiled basic?  Can you believe
>|that?
>
>This happened a couple of years ago.

	About a year ago, I believe.  At least, that is when it was first
heard of at the company I work for.

			-Matt

peter@nuchat.UUCP (Peter da Silva) (03/12/88)

In article <7599@apple.Apple.Com>, lsr@Apple.COM (Larry Rosenstein) writes:
> Allow me to correct some errors.

Ready when you are.

> >two main reasons are that a program on the Mac has to be written essentially
> >as a device driver, and that it's not even entirely examinable unless you
> >have another Mac... because the resource fork contains information that is
> >integral to dealing with the file but which is unintelligible even when
> >it's available to people on other machines.

> This is wrong.  A program on the Mac is written much like a program on the
> Amiga.  It is not a device driver.

Actually, it's written exactly like a device driver on the Amiga.
Desk Accessories are even worse; they're written like UNIX device drivers.

The Amiga supports, directly, quite a few programming models.

(1) What the Mac people call the "Event Loop" model. Useful for completely
    interactive programs and device drivers. Even for highly interactive
    programs the similarity is only on the surface. The Amiga event loop is
    under the program's control... it can choose to get a particular set of
    events, and safely ignore all others.

(2) The normal Amiga message loop, which is similar to the event loop but
    allows extensive forays out of that mode. For example, a paint
    program. Once you tell it that you want it to do some extensive process
    it's permitted to go away and do that. You still have control of the
    machine and can do whatever you want with other programs.

(3) The normal UNIX filter model, where each program has one input and one
    output, both of which consist of a stream of ASCII characters.

(4) The batch model, where the program just goes away and does its thing,
    control-Cs permitting. For example, a 'C' compiler.

(5) The parallel model, where the program spawns off tasks to do jobs like
    user input and just sits back and spins its web. Like HaiCalc or
    SoundScape.

(6) The server model, where the program is really a collection of separate
    programs that talk to each other. SoundScape has aspects of this model,
    but the classic is Warrior Cycles.

> Also, as people have mentioned, it is
> possible to get a textual representation of the resources, which would look
> much like the equivalent data definitions in an Amiga program.  (The
> difference is that the defns are not compiled in and can easily be changed.)

Except that nobody ever distributes sources in this format. In fact, very often
the source code itself is distributed in encoded form. Every program in
comp.sources.mac is completely useless for anything but another Mac.

> >It has to call the operating system at least once every 100 milliseconds to
> >maintain desk accessories.

> It is not essential to do this.  It is courteous to keep DAs running, but
> most applications do not do this while doing compute-intensive things.

So while FooCalk is computing, you lose control of the machine. Great.

> >It has to be built around an "event loop" that handles all events in the
> >system, and passes down to other programs the ones it's not interested in.

> This is not much different than what you have to do on the Amiga in response
> to certain messages.

If you use that model.

> On the Mac, you can take advantage of frameworks such
> as MacApp, which handle these things automatically.  (A programmer using
> MacApp has to do a lot less work than an Amiga programmer.)

A lot more code, though. Can you even run modern programs on a Mac in less
than 1 Meg? I've multitasked a terminal program and Deluxe Music in half
that memory. Comes from every program carrying around a copy of stuff
that should have been in the O/S.

> >It has to allow for arbitrary relocation of large parts of its data as the
> >system collects garbage.

> The system does not collect garbage and it is not arbitrary.  It coalesces
> the memory blocks to prevent heap fragmentation.

Beautiful example of DoubleTalk (a new Mac Application). It's arbitrary in
the sense that this could occur at any time... the program has to assume
it's happened every time through the loop.

> >The idea of porting a Mac program to anything else, or porting anyone
> >else's programs to the Mac, is pretty much a fantasy. Unless you write a
> >badly-behaved Mac program or use A/UX.

> People at Microsoft, Aldus, ... would disagree with you.  They ported their
> programs between the Mac and Windows.  I have read where this was not a
> major effort (provided you allow for this in the first place).

I don't know about Aldus, but Microsoft is notorious in the Mac community
for badly-behaved programs that bend the User Interface rules and have to
be rewritten every time a new system comes out. Maybe they made their port
real easy by doing a poor job.

What this amounts to is that to port a program to the Mac you have to have
written the program around the Mac event loop in the first place, and emulated
that loop in the O/S you wrote the original version for.

> It seems to me that porting an Amiga program would be even more difficult,
> unless that program was designed to be ported and didn't take advantage of
> the unique Amiga features.

You mean like an operating system? We'll see how hard it is to port programs
when the time comes to do ports for our video-game. Some models would be hard
to port, to be sure. There aren't any other small real-time operating systems
that come as standard equipment on a personal computer.
-- 
-- a clone of Peter (have you hugged your wolf today) da Silva  `-_-'
-- normally  ...!hoptoad!academ!uhnix1!sugar!peter                U
-- Disclaimer: These aren't mere opinions... these are *values*.

ali@polya.STANFORD.EDU (Ali Ozer) (03/13/88)

In article <1451@csib.csi.UUCP> jwhitnel@csib.UUCP (Jerry Whitnell) writes:
> ...  I'm not familiar with the Amiga, so I'd like to know how 
>do you as a programmer handle the memory requirements for the bit maps of a 
>large number of windows?  

The screen that normally gets most crowded with windows is the Workbench
screen (the equivalent of the Finder), but this screen is normally 2 bitplanes
deep, so backups don't create a problem. A program wanting more or fewer
bitplanes or a different set of colors simply goes out and opens its own
screen. Thus a game program that can live in low-res but needs 32 colors
can open a 320x400x5 screen, while a word processor can open a 700x470x1
screen. The user can drag these screens up and down (like blackboards) to
switch between them. Different screens are just different memory blocks (the
system can read scanlines from anywhere in chip memory), so there is no
problem of backing up bitplanes when screens cover each other. And the normal
windows still remain on the 2-bitplane Workbench screen. Of course, a program
that can live with the Workbench configuration or that needs to open a window
on the Workbench (say a clock program) can still open its windows there.

>| ... The Mac, on the other hand, lets you open a window real easily, in a
>|few lines of code, but then makes you do all the grunge work at runtime.
>The grunge work, however, is a couple of 100 lines of code that can be
>done once and reused in other applications.  

The problem is, it's 100 lines of code that has to be executed, preferably
many times a second. And by every application that is running.

Ali Ozer, ali@polya.stanford.edu

jmpiazza@sunybcs.uucp (Joseph Piazza) (03/14/88)

In article <775@nuchat.UUCP> peter@nuchat.UUCP (Peter da Silva) writes:
>In article <7599@apple.Apple.Com>, lsr@Apple.COM (Larry Rosenstein) writes:
>> Allow me to correct some errors.
>
>Ready when you are.
>
>> People at Microsoft, Aldus, ... would disagree with you.  They ported their
>> programs between the Mac and Windows.  I have read where this was not a
>> major effort (provided you allow for this in the first place).
>
>I don't know about Aldus, but Microsoft is notorious in the Mac community
>for badly-behaved programs that bend the User Interface rules and have to
>be rewritten every time a new system comes out. Maybe they made their port
>real easy by doing a poor job.

	Sigh.  Are you suffering from a short memory? (i.e., Word 3.0)
I've seen the same copy of Word 1.0 work on a 512K Mac, Lisa 2 running MacWorks,
a Mac+, and the Mac SE.

	"... rewritten every time a new system comes out? ..."

	Nonsense.

	This subject is worn out.


Flip side,

	joe

lsr@Apple.COM (Larry Rosenstein) (03/17/88)

There were a few articles posted here discussing Macintosh programming.
Rather than following up, I intend to reply via E-Mail.  If anyone else has
comments or questions about Mac programming, MacApp, etc., send me mail and
I will reply.

-- 
		 Larry Rosenstein,  Object Specialist
 Apple Computer, Inc.  20525 Mariani Ave, MS 32E  Cupertino, CA 95014
	    AppleLink:Rosenstein1    domain:lsr@Apple.COM
		UUCP:{sun,voder,nsc,decwrl}!apple!lsr

jesup@pawl17.pawl.rpi.edu (Randell E. Jesup) (03/18/88)

In article <1451@csib.csi.UUCP> jwhitnel@csib.UUCP (Jerry Whitnell) writes:
[ Re: Mac, Amiga, and smart_refresh ]

>  However, the programmer still
>has the option of saving bit maps at window deactivation and restoring it
>when it is reactivated, but it is up to the programmer to decide to use
>memory that way.  I'm not familiar with the Amiga, so I'd like to know how
>do you as a programmer handle the memory requirements for the bit maps of a
>large number of windows?

	Amiga windows (in terms of refresh) are either SIMPLE (like the Mac's),
SMART (the system saves obscured areas for you), or SUPERBITMAP (refreshed
automatically from a large, off-screen bitmap).
	For SMART_REFRESH windows, all the saving of non-visible portions
is done automatically by the system (and the restoration as well).  The save
areas are dynamically allocated by the system, and freed when no longer needed.
If you have a LARGE number of SMART_REFRESH windows, and they cover each other,
it will use more memory.  In general, this isn't a problem.  Many
applications open 'screens', which are essentially entire separate displays,
each of which can have its own windows.  Screens require RAM also, dependent
on their size (x & y) and their color depth.  For example, the MicroEmacs
I use opens a 640x200 monochrome screen, which requires 16K.  If it opened
a window on the Workbench instead, and that window totally covered a
SMART_REFRESH window, that would require 32K.  However, if it covered a
SIMPLE_REFRESH window, or didn't overlap a SMART_REFRESH window, it would
require 0K.  Partial overlaps are in between.
	Choosing SMART_REFRESH vs SIMPLE_REFRESH has some tradeoffs.  One
is programming ease/complexity: SMART_REFRESH is VERY easy to handle, since
you can continue to do stuff without worrying whether you should check to see
if the window needs refreshing.  Your code is smaller, simpler, and easier to
maintain.  Also, for ports of things like UNIX software, it makes life
SO MUCH easier.  The downside is memory consumption.  On a high-res, many-color
display, overlapping windows will cost you quite a bit of memory.
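The choice is literally one flag when the window is opened.  A rough
1.x-style sketch (position, size and title are arbitrary):

	#include <exec/types.h>
	#include <intuition/intuition.h>

	/* Sketch: the refresh policy is chosen when the window is    */
	/* opened.  Swap SMART_REFRESH for SIMPLE_REFRESH and the     */
	/* program must handle REFRESHWINDOW messages itself instead  */
	/* of letting Intuition keep the obscured bits around.        */
	struct NewWindow nw = {
	    20, 20, 300, 100,            /* LeftEdge, TopEdge, Width, Height */
	    0, 1,                        /* DetailPen, BlockPen              */
	    CLOSEWINDOW,                 /* IDCMPFlags                       */
	    WINDOWCLOSE | WINDOWDRAG | WINDOWDEPTH | SMART_REFRESH | ACTIVATE,
	    NULL, NULL,                  /* FirstGadget, CheckMark           */
	    (UBYTE *)"Example",          /* Title                            */
	    NULL, NULL,                  /* Screen, BitMap                   */
	    0, 0, 0, 0,                  /* Min/Max sizes (not used here)    */
	    WBENCHSCREEN                 /* Type                             */
	};

	/* struct Window *win = OpenWindow(&nw);  ...  CloseWindow(win); */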

     //	Randell Jesup			      Lunge Software Development
    //	Dedicated Amiga Programmer            13 Frear Ave, Troy, NY 12180
 \\//	beowulf!lunge!jesup@steinmetz.UUCP    (518) 272-2942
  \/    (uunet!steinmetz!beowulf!lunge!jesup) BIX: rjesup

(-: The Few, The Proud, The Architects of the RPM40 40MIPS CMOS Micro :-)