[comp.sys.apple2] Re: HLLs vs. Assembly

Joe_Luzzi.FULLERTON_CC@QMBRIDGE.CALSTATE.EDU (Joe Luzzi) (04/06/91)

                                                               Time: 10:24 AM
                                                               Date: 12/21/90
Subject:  Re: HLLs vs. Assembly


=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

>>Don't feel sorry for my students.  They come out of that class with a major
>>prejudice against assembly removed.  By the time they're seniors they've
>>written a *lot* of C and Ada code, yet very little assembly.  They are
>>excited by the end of the quarter.  A good number of them exclaim happily
>>that "Gee, I really learned assembly language this quarter, it's nowhere
>>near as bad as I thought it was."  Too bad *YOU* never had this experience.

I agree 100%!  In most Computer Science curriculums, assembly is barely
touched upon and that's it.  I wish I had the opportunity Randy's students
had.

I think Computer Science students shouldn't graduate without adequate assembly
coding experience.  I'm not saying to drop the emphasis on high-level
languages, but to add more classes requiring assembly coding.  Programmers who
know assembly, I believe, are better programmers overall since they are more
in tune with how the computer works.

I'm not looking down on high-level languages; they're great!  I'd just like
to see Computer Science curriculums consider assembly language just as
important as Pascal, C, or Ada.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Joe M. Luzzi

Internet: Joe@CSU.FULLERTON.EDU
BITNET:   LJMLUZZ@CALSTATE.EDU

GENIE : JM.LUZZI
AOL   : JMLUZZI
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

gwyn@smoke.brl.mil (Doug Gwyn) (04/06/91)

In article <9104051936.AA16687@apple.com> Joe_Luzzi.FULLERTON_CC@QMBRIDGE.CALSTATE.EDU (Joe Luzzi) writes:
>I agree 100%!  ...
>I think Computer Science students shouldn't graduate without adequate assembly
>coding experience.  I'm not saying to drop the emphasis on high-level
>languages, but to add more classes requiring assembly coding.  Programmers who
>know assembly, I believe, are better programmers overall since they are more
>in tune with how the computer works.

That's commendable, but that's not what RHyde was saying.  He was maintaining
that applications generally SHOULD be developed in assembler.  If somebody
applied for a programming job here and indicated that he planned to use
assembler to accomplish most of his assigned tasks, I can assure you he would
not get the job.

rhyde@ucrmath.ucr.edu (randy hyde) (04/06/91)

>>>>>>
That's commendable, but that's not what RHyde was saying.  He was maintaining
that applications generally SHOULD be developed in assembler.  If somebody
applied for a programming job here and indicated that he planned to use
assembler to accomplish most of his assigned tasks, I can assure you he would
not get the job.
<<<<<<

Clearly you're missing an important point here.  Let me define the term
"application" from my perspective.  A *commercial application* is a high
volume piece of software which I can walk into some place like Egghead
and purchase over the counter -- one which sells in the tens of thousands
(or more).  A good sized subset of these applications *should* be developed
in assembly language for reasons I will shortly get to.

If somebody applied for a job where I worked and announced that they intended
to do the job in a way which was generally incompatible with the way everyone
else did it, they would not get the job.  I have hired people in the past
to work on various projects and had they announced their intention to use
assembly language, I wouldn't have hired them either; I needed them to work
in Pascal or C for various reasons.  They were not, however, working on
application programs that fell into the above category.

Concerning your not hiring such a person, that is very commendable.  You
(I assume "U.S. Army Ballistic Research Laboratory, APG, MD." implies that
you work at the ABRL) haven't developed any commercial quality software
products recently (that I've seen anyway), so ABRL is not under the same
performance pressures as, say, Electronic Arts.  The Army can afford machines
that run 10x faster; a typical PC user cannot.  Furthermore, the Army often
needs to purchase computers of various architectures; the average PC user
does not (they get locked into a Mac, PC, or [as you can tell by reading
posts in this newsgroup] an Apple IIgs).  Therefore, portability is a much
bigger concern to the DoD (hence Ada) than it is to a typical PC user.

Could you imagine AppleWorks running on an Apple IIe had it been written in
C?  Of course, AppleWorks GS running on a machine equipped with a TWGS (or
comparable device) might be bearable were it written in C, but imagine
how much better it is, written in assembly.

Someday machines will be so fast that a 10x performance difference between
assembly and C won't make much of a difference to the average end user.
When the assembly language program completes the task in .001 sec and the
C program completes it in .01 sec, performance will no longer be a good
reason to write commercial applications in assembly rather than in C.  Then
we'll be left with using assembly only for those applications to which
assembly is most appropriate because it is easier to express the algorithm
in assembly rather than C (or some other HLL).

Be ye forewarned, however, when that day arrives, all you C jockeys are
going to be on the defensive.  By then, people will be arguing for the use
of LISP, Prolog, ICON, APL, SETL, 4GLs, and other languages not in existence
yet for the same reasons you're claiming everyone should be using C (or Ada,
or Pascal).
*** Randy Hyde
 

MQUINN@UTCVM.BITNET (04/06/91)

On Fri, 5 Apr 91 11:39:40 LCL Joe Luzzi said:

>I agree 100%!  In most Computer Science curriculums, assembly is barely
>touched upon and that's it.  I wish I had the opportunity Randy's students
>had.

This is true.  I'm taking a VAX assembly course now.  It's great, but my
professor prevents us from doing as much as I'd like.  His attitude is, "Well,
you'll probably never use assembly (he calls it assemblER) language after you
get out of this class."                                ^^

>I think Computer Science students shouldn't graduate without adequate assembly
>coding experience.  I'm not saying to drop the emphasis on high-level
>languages, but to add more classes requiring assembly coding.  Programmers who
>know assembly, I believe, are better programmers overall since they are more
>in tune with how the computer works.

This is true too.  Knowing assembly, you know the limitations of the machine
you're working on (and sometimes machines you're not all that familiar with).
This is very important when developing software.  One person I work with
doesn't even know what assembly is (she barely knows what a HLL is either).
I have to spend half of my time at work explaining to her why something will
or won't work, because she knows nothing about the limitations of the machine.
Also, another person I work with just graduated with a degree in computer
science.  Computers are his life.  He knows all about HLLs, but he doesn't
understand why a lot of things work or why they don't work, because he doesn't
understand assembly language or the limitations of the hardware.
[By the way, he's a Mac guru.]  I have to explain to him how his Mac works,
and I hardly know anything about the Mac.  It's all because he was 'raised'
on nothing but HLLs, GUIs, and Macs.  Computer Science courses should begin
by explaining computers at the bit level and work their way up.  By
the time the students get to HLLs, data structures, files, etc., they
should have virtually no problems understanding those concepts, how they
came to be, and what the limitations are.

Unfortunately, most computer science courses are taught in the opposite
direction: HLL first, then, when you've had enough of it for several years,
they'll give you just a tainted taste of assembly.  That makes it very
difficult for students to understand all these intangible concepts without
first knowing how they came about.

>=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>Joe M. Luzzi
>
>Internet: Joe@CSU.FULLERTON.EDU
>BITNET:   LJMLUZZ@CALSTATE.EDU
>
>GENIE : JM.LUZZI
>AOL   : JMLUZZI
>=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

----------------------------------------
  BITNET--  mquinn@utcvm    <------------send files here
  pro-line-- mquinn@pro-gsplus.cts.com

toddpw@nntp-server.caltech.edu (Todd P. Whitesel) (04/06/91)

MQUINN@UTCVM.BITNET writes:

>This is true too.  Knowing assembly, you know the limitations of the machine
>you're working on (and sometimes machines you're not all that familiar with).
>This is very important when developing software.

Hear, hear. I taught myself a LOT from the ground up, by hacking apart the
][+ I had in high school -- the original Apple ][ is one of the most perfect
training machines ever designed. It's simple enough for a single person to
understand quite well in a reasonable amount of time, and it has lots of
sneaky little hacks (some worthy of patents) that make you really think
hard about real-life engineering trade-offs -- especially when you consider
the conditions Wozniak had to deal with when he was designing it (board area,
chip availability, NTSC -- itself an admirable hack, and so on).

Learning assembly on the Apple introduced other concepts: I/O redirection,
simple CLI and mini-assembler (I kludged a RAM resident one from INTBASIC
that I could BRUN), hooks to support a memory-resident language system,
ROM-resident routines for handling the built-in I/O hardware, driver-based
peripheral hardware (the Pascal and smartport protocols), and the fact
that you could drop down to the bottom of the system and figure out exactly
how things worked. This hands-on feel is totally absent from a lot of modern
machines and the trend bothers me.

When I got to Caltech, I breezed through the freshman digital electronics
course, and similarly aced the sophomore level microcomputer fundamentals
course, in which we did 80188 assembly and then built a speech digitizer
around an 80188. (My project kicked butt, and would have actually garnered
an A, had I not overloaded that term and been too burned out to finish the
final software assignment on time. I got an extension and when it was
finished the TA wanted me to get an A anyway, but the prof wouldn't make an
exception. I got dropped one letter grade and got a B+.)

About that time, I started seriously getting into C, and the assembly
experience I had gained proved extremely valuable.  I took to C like a
natural, and impressed the Microsoft interviewers with only ONE real
C program under my belt.  I got a better offer from Apple, though the
work there was boring once the group hired a permanent guy to do all
the interesting stuff.  This year, with considerably more C programming behind
me, I managed to get a summer offer from Microsoft while I was still at their
offices for the interview.  (buff, buff)

Why am I bragging like this? Because I believe I owe most if not all of it
to having learned assembly at an early age.

I'm a positive example of the consensus that seems to be evolving here, that
Assembly concepts should be a vital part of a good CS curriculum, and that
they should be taught early on, because most of the hard-to-learn HLL concepts
derive directly from Assembly concepts that are considerably easier to learn.

In my specific case, I taught myself at first, and then honed things while
taking the first levels of Caltech's well structured (but small) CS program.
Many of us took a course on provability and HLL's at the same time as the
assembler course, so we were learning fundamentals from both ends at once.
Unfortunately, there are no O/S design or Compiler design courses here, because
there are only 6 CS profs actually teaching.

>understand why a lot of things work or why they don't work, because he doesn't
>understand assembly language or the limitations of the hardware.
>[By the way, he's a Mac guru.]  I have to explain to him how his Mac works,

This is what really bothers me about the Mac -- it has produced an astounding
number of people who think they know a lot about computers but when something
really goes wrong they are soo clueless...

Todd Whitesel
toddpw @ tybalt.caltech.edu

P.S. maybe Doug Gwyn will tell us how much his computers cost and what super
studly software they are running, so we can all run out and buy it and never
use assembler again...
:)

2hnemarrow@kuhub.cc.ukans.edu (04/06/91)

> 
> Be ye forewarned, however, when that day arrives, all you C jockeys are
> going to be on the defensive.  By then, people will be arguing for the use
> of LISP, Prolog, ICON, APL, SETL, 4GLs, and other languages not in existence
> yet for the same reasons you're claiming everyone should be using C (or Ada,
> or Pascal).
> *** Randy Hyde
>  
No one will ever build a computer fast enough.

gwyn@smoke.brl.mil (Doug Gwyn) (04/07/91)

In article <13391@ucrmath.ucr.edu> rhyde@ucrmath.ucr.edu (randy hyde) writes:
>You (I assume "U.S. Army Ballistic Research Laboratory, APG, MD." implies
>that you work at the ABRL) haven't developed any commercial quality software
>products recently (that I've seen anyway), so ABRL is not under the same
>performance pressures as, say, Electronic Arts.  The Army can afford machines
>that run 10x faster; a typical PC user cannot.  Furthermore, the Army often
>needs to purchase computers of various architectures; the average PC user
>does not (they get locked into a Mac, PC, or [as you can tell by reading
>posts in this newsgroup] an Apple IIgs).  Therefore, portability is a much
>bigger concern to the DoD (hence Ada) than it is to a typical PC user.

You really shouldn't try to guess about an environment that you are not
familiar with.  I have personally developed interactive graphics systems
for a company whose sole livelihood depended on its ability to compete
in the commercial marketplace.  Those systems were primarily coded in
Fortran (also RatFor), even the tablet driver that was involved in
tracking input where 1/10 second was the absolute upper limit that human
physiology would allow for each complete cycle of input_coordinates-
process-update_display.  They ran on the DEC PDP-11/34a, which was less
powerful than almost any machine you can find these days.  The only
assembly code (other than that used in the vendor's operating system,
which was used solely for disk file I/O) was for that portion of the
dynamic memory allocation package that had to temporarily fudge the
memory management unit's segmentation registers, and the only reason for
using assembler there was that the Fortran run-time environment assumed
by the vendor's compiler was being temporarily violated while the MMU
registers were being fiddled.  The resulting systems did more things in
better ways than any competing product, with better interactive response.
Another system I was later involved with had as a major subcomponent a
real-time data acquisition process.  A contractor originally implemented
it with a critical function coded in assembler supposedly for speed.  A
later rewrite by another employee of the company I worked for relied on
careful analysis of the actual requirements, threw out the assembler
code (resorting entirely to C), simplified the rest of the implementation,
and ended up with better (and certainly more maintainable) performance
than the original implementation.

As to BRL, it is irresponsible of you to suggest that we are spending
taxpayer funds prolifically in order to avoid doing things right.  Quite
to the contrary, our mission requirements caused us to develop our own
interactive solid modeling system, which initially was based on small
PDP-11s and PDP-11/70s, which we continued to use well past the
availability of faster systems.  Naturally these applications too were
coded in C.  Therefore, it was possible to PORT them to newer hardware
when it eventually was advantageous to do so.  We have never found the
applications to run excessively slowly, and there is absolutely no basis
for your repeated claims of factors of 10 execution speed differences
between HLLs and assembler.  A few carefully identified "bottleneck"
sections of code were, for some systems, recoded in assembler to see if
it would help.  In no case was there more than a few percent improvement,
which makes this almost never worth doing.  We have better ways to invest
our limited software development resources.  Perhaps you don't?

>Be ye forewarned, however, when that day arrives, all you C jockeys are
>going to be on the defensive.  By then, people will be arguing for the use
>of LISP, Prolog, ICON, APL, SETL, 4GLs, and other languages not in existence
>yet for the same reasons you're claiming everyone should be using C (or Ada,
>or Pascal).

Unlike you, I have not made ridiculous claims that everybody ought to be
using any specific programming language.  I have said that assembler is
seldom justified, but, like ANY programming language, when it is justified
it should be used.  You seem to have decided that there is only one
criterion for programming, namely execution speed, and have further made
unwarranted claims for the gains that assembler could provide even there.
Considering the overall software development picture, most computing
professionals have long since discovered the advantages of suitable HLLs.

gwyn@smoke.brl.mil (Doug Gwyn) (04/07/91)

In article <1991Apr6.100927.21953@nntp-server.caltech.edu> toddpw@nntp-server.caltech.edu (Todd P. Whitesel) writes:
>P.S. maybe Doug Gwyn will tell us how much his computers cost and what super
>studly software they are running, so we can all run out and buy it and never
>use assembler again...

I personally have an 8MB Apple IIGS with TWGS and FPE, but certainly none
of my usual applications require resources like that.  I used to run some
of them on a 128KB Apple //e.  I do have one application under development
that will need most of these resources.  (It would be folly to develop it
in assembler since I may migrate to an IBM PC/AT clone some day just to be
able to obtain a decent selection of commercial applications for other
purposes -- I don't want to have to write EVERYthing myself!)

BRL has a variety of different computers, most of them perfectly ordinary.
Sun-3s predominate, with a lot of SGI Iris workstations and file servers.
There are still a few VAXes, Gould PowerNodes, and Alliant FX/8s in service;
our PDP-11s have mostly been converted into network gateway processors.
We do have two Crays, an X-MP/48 and a Cray-2, but most of us seldom if
ever use the Crays, which are administered on a "central site" philosophy
rather than in the open manner that the other computers are.  All systems
run some version of UNIX, all are networked (there is a separate network
for classified traffic), and most applications are available on all systems.
Most regular users have bitmap graphics interfaces, at least.  All major
applications of which I am aware are coded in either Fortran or C, with
occasional examples of heavy use of UNIX tools in shell scripts.  We
have long considered typical IBM PCs to be "toy" computers, primarily
because of the incredibly inept way they are utilized rather than because
of their theoretical level of computing power.

Note that occasionally code originally developed on a Cray-2 finds its
way to my Apple IIGS.  I assure you that it would not do so if it had
been coded in assembler, and probably it would never have done the needed
job even on the Cray-2.

toddpw@nntp-server.caltech.edu (Todd P. Whitesel) (04/08/91)

Thanks to Doug Gwyn, for explaining the systems he works with.

STOP THIS ARGUMENT.

Randy works almost entirely with microcomputers.

Doug works with nothing but mainframes.

If Randy started working on Doug's machines, he'd agree with Doug pretty fast.

AND VICE VERSA!!

Doug, I can personally vouch that 2-8X speed improvements can easily be gained
over Orca/C by recoding in 65816 assembly.

Todd Whitesel
toddpw @ tybalt.caltech.edu

stephens@latcs2.lat.oz.au (Philip J Stephens) (04/09/91)

Todd P. Whitesel writes:
>
>I'm a positive example of the consensus that seems to be evolving here, that
>Assembly concepts should be a vital part of a good CS curriculum, and that
>they should be taught early on, because most of the hard-to-learn HLL concepts
>derive directly from Assembly concepts that are considerably easier to learn.

  I totally agree with Todd.  I too taught myself 6502 assembly
language, when I was only in Year 10 of school.  By Year 12 I had
produced several hi-res arcade-type games, a hi-res character
generator, an Applesoft renumbering utility, a disk sector viewer, a
sector copy program -- all in 6502 assembly.  I started off using the
mini-assembler in Integer Basic before moving on to Lisa, Big Mac and
Orca/m respectively.  By the time I started Uni, I already knew the
hardware of my Apple ][+ inside out, and had studied Pascal in my
spare time.  I was hacking into games (NOTE: I don't mean I was
pirating, I mean I was modifying games to do different things, such as
providing unlimited lives etc).  Learning digital electronics was a
breeze, and learning other HLL was easy.  In short, I had done myself
a favour by getting into the guts of my Apple when I did.
  So quite aside from the advantages and disadvantages of programming
in assembly, it is a MUST for anyone who wants a REAL understanding of
the computers they are fiddling with.  You can't program effectively
in a HLL if you don't know the hardware you're working on, its
limitations and its features.  It's as simple as that.

<\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/><\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/>
<  Philip J. Stephens                ><   "Many views yield the truth."        >
<  Hons. student, Computer Science   ><   "Therefore, be not alone."           >
<  La Trobe University, Melbourne    ><   - Prime Song of the viggies, from    >
<  AUSTRALIA                         ><   THE ENIGMA SCORE by Sheri S Tepper   >
</\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\></\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\>

gwyn@smoke.brl.mil (Doug Gwyn) (04/10/91)

In article <1991Apr9.150402.563@latcs2.lat.oz.au> stephens@latcs2.lat.oz.au (Philip J Stephens) writes:
>You can't program effectively in a HLL if you don't know the hardware
>you're working on, its limitations and its features.

Wrong -- any decent HLL should be exploited in terms of the abstract
model of computation that it supports, most definitely not in terms
of any specific machine architecture.  Indeed, most errors we see in
the C newsgroup appear to spring from people holding to inappropriate
low-level models that they developed in terms of a specific machine,
which are not correct when applied to another machine.  The only
exception to this should be when writing extremely low-level systems
code such as device drivers that have to diddle memory-mapped I/O
locations just so, etc.  (You can't even write such code in most HLLs,
C being the most notable exception.)
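
To make that concrete, here is a minimal sketch of such code in C; the
address is the Apple II speaker soft switch, and the names are mine:

/* Touching a memory-mapped I/O location directly.  $C030 is the Apple II
   speaker soft switch; merely reading it toggles the speaker, so the
   access must not be optimized away -- which is exactly what "volatile"
   promises. */
#define SPEAKER (*(volatile unsigned char *) 0xC030)

void click(void)
{
	unsigned char junk = SPEAKER;	/* the read itself is the action */
	(void) junk;			/* silence "unused variable" warnings */
}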

There are a few situations in which certain basic information about
the system can help, for example in a Fortran program that accesses
a huge multidimensional array; if it accesses it in the wrong order,
performance might be poor due to virtual memory "thrashing".  But
knowing what machine opcodes and addressing modes exist is of no real
value to a Fortran programmer.
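
The same point can be sketched in C, which is row-major (the reverse of
Fortran's column-major layout); the array size here is arbitrary:

#define N 512
double a[N][N];			/* 2MB: big enough to interact with paging */

/* sweeps memory sequentially -- friendly to paging and caches */
void clear_right_order(void)
{
	int i, j;
	for (i = 0; i < N; i++)
		for (j = 0; j < N; j++)
			a[i][j] = 0.0;
}

/* jumps N*sizeof(double) bytes per access -- invites thrashing */
void clear_wrong_order(void)
{
	int i, j;
	for (j = 0; j < N; j++)
		for (i = 0; i < N; i++)
			a[i][j] = 0.0;
}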

lang@rex.cs.tulane.edu (Raymond Lang) (04/10/91)

In article <1991Apr9.150402.563@latcs2.lat.oz.au> stephens@latcs2.lat.oz.au (Philip J Stephens) writes:
>You can't program effectively in a HLL if you don't know the hardware
>you're working on, its limitations and its features.

I always thought one of the reasons for using a HLL was so you could
program without having to know the grungy details about the hardware.

Ray
lang@rex.cs.tulane.edu

MQUINN@UTCVM.BITNET (04/11/91)


On Wed, 10 Apr 91 12:16:26 GMT Raymond Lang said:
>In article <1991Apr9.150402.563@latcs2.lat.oz.au> stephens@latcs2.lat.oz.au
> (Philip J Stephens) writes:
>>You can't program effectively in a HLL if you don't know the hardware
>>you're working on, its limitations and its features.
>
>I always thought one of the reasons for using a HLL was so you could
>program without having to know the grungy details about the hardware.
>
>Ray
>lang@rex.cs.tulane.edu

It's true that you don't NEED to know the grungy details of the hardware
to program in a HLL, but knowing the grungy details will help you create
MUCH better and more efficient programs than if you didn't know assembly.
Also, I think one of the reasons HLLs were created was that not
EVERYTHING needs to be written in assembly.  Take Applesoft, for example.
I use that about as much as I use assembly.  My final projects never end up
in Applesoft, though.  Applesoft is a quick and dirty way to check whether
an idea works (so is the mini-assembler), but I certainly wouldn't want
my code to be in Applesoft.  It's good for speedy programming, but it has
severe limitations:  execution speed is pathetic, and there are some things
it just cannot do, specifically time-critical code, which can only be
implemented in assembly, and in C in SOME cases.

Imagine a chemist who didn't know the detailed properties of the basic
elements, although he knew what many different combinations of chemicals
did.  He may be able to do a lot with the knowledge he has, but he doesn't
understand WHY it works or HOW he can come up with something completely
different.  On the OTHER hand, consider a chemist who knows the physical
properties of all the elements and how and WHY they react with other
elements.  This chemist can, undoubtedly, do everything the other chemist
can plus a WHOLE lot more.

----------------------------------------
  BITNET--  mquinn@utcvm    <------------send files here
  pro-line-- mquinn@pro-gsplus.cts.com

rhyde@gibson.ucr.edu (randy hyde) (04/11/91)

>> Any HLL should be exploited in terms of its abstract model rather than the
>> low-level machine model (due to portability concerns)

Absolutely!  If you're going to use a HLL, you should exploit that HLL,
not attempt to write "assembly language" in that HLL.  This is what most
people do to a large degree.  That's why assembly language programmers
tend to produce better assembly language code than compilers, not
because of compiler technology, but because their programming paradigm
takes into account the low-level machine architecture.

Clearly if portability is your overriding concern and you *must* use a
HLL, getting down to the lowest level possible in C may produce
performance benefits on certain architectures but fail miserably on
others.  The 65816 vs. 68000 is a good example of this.

OTOH, if portability is not a concern (you're only going to market your
program on one machine) attempting to "optimize" C for the architecture
is almost as much work as writing in assembly in the first place, and you
still won't do as well.

rhyde@gibson.ucr.edu (randy hyde) (04/11/91)

>> I always thought that one of the reasons for using a HLL was to avoid the
>> grungy details of the architecture...

Sure.  Also to make it easier to learn how to talk to a computer.  However, if
you want to be a fully rounded programmer you really need to understand the
low-level details as well.

toddpw@nntp-server.caltech.edu (Todd P. Whitesel) (04/11/91)

lang@rex.cs.tulane.edu (Raymond Lang) writes:

>I always thought one of the reasons for using a HLL was so you could
>program without having to know the grungy details about the hardware.

Yes, but modifying the program to improve performance requires

	(a) better overall algorithms &/or data structures
	(b) knowledge about the hardware and what the compiler will produce

For instance, one of the unix machines I get to work on is a Convex XP-1;
it's screamingly fast, BUT there is a ferocious amount of overhead per I/O
transfer. A program that fread's 32K chunks will scream past a program that
uses fgetc() for any reasonable application that I know of. It also has a
vector processor, but the vectorizer can't optimize code like

double p[MAXDIM], dp[MAXDIM];	int i, j, dim;	/* dim computed at run time */
for (j=0; j<255; ++j)
	{
	for (i=0; i<dim; ++i)	/* variable loop bound */
		p[i] += dp[i];
	action(p);
	}

You'd think it could optimize the inner loop too, but it can't because dim is
a variable and not a constant. On the other hand,

double p[MAXDIM], dp[MAXDIM]; int i, j;
for (j=0; j<255; ++j)
	{
	for (i=0; i<MAXDIM; ++i)	/* constant loop bound */
		p[i] += dp[i];
	action(p);
	}

should allow vectorization of the inner loop.  What would be even faster (I'll
be trying this sometime over the next week, for a grade, so if anybody's
interested in the results I'll post them) is to write a vectorizable version
of the outer loop as well -- this requires knowledge of action() which in
my case is a simple quantization routine (it's a polygon tiler that performs
linear interpolation across scan lines).

The point of all this is that while HLL's insulate you from the grunge, they
often make it hard to obtain optimal performance. The set of fully portable
modifications that yield pure improvement on any machine is very small
compared to the set of modifications you can make when you have some
knowledge of the machine hardware and how the compiler will react to the
program (including the various optimizer switches you can invoke).
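
To put the earlier I/O point in concrete shape, here's roughly what the
chunked version looks like (the byte-counting task is just a placeholder):

#include <stdio.h>

#define CHUNK 32768		/* one big transfer instead of 32768 tiny ones */

long count_bytes(const char *name)
{
	static char buf[CHUNK];
	FILE *f = fopen(name, "rb");
	long total = 0;
	size_t n;

	if (f == NULL)
		return -1L;
	while ((n = fread(buf, 1, CHUNK, f)) > 0)	/* 32K per call */
		total += (long) n;
	fclose(f);
	return total;	/* an fgetc() loop pays the call overhead per byte */
}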

Writing portable code that compiles well on your machine is generally the
best way to go; only when performance or coding time is more important than
portability should assembly be used -- and it too can be made portable with
#ifdef's. This is in fact how I have the same graphics system compiled and
running on a GS (with a single low-level driver file) as well as on the Convex
XP-1 (also with a single low-level driver file). There is some GS-specific
assembly and type casting (Orca's math library is somewhat nonstandard) but
that is #ifdef'd to equivalent ANSI C code or dummy functions when compiled
under anything but __ORCAC__.
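
Schematically, the glue is something like the following sketch; __ORCAC__
is the real Orca/C macro, but the function names here are made up:

#ifdef __ORCAC__
/* GS build: low-level driver, implemented in 65816 assembly */
extern void gs_blit(int x, int y, int w, int h);
#define BLIT(x, y, w, h)	gs_blit(x, y, w, h)
#else
/* any other build: equivalent portable ANSI C routine */
extern void c_blit(int x, int y, int w, int h);
#define BLIT(x, y, w, h)	c_blit(x, y, w, h)
#endif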

Todd Whitesel
toddpw @ tybalt.caltech.edu

MQUINN@UTCVM.BITNET (04/11/91)

On Wed, 10 Apr 91 05:04:41 GMT Doug Gwyn said:
>In article <1991Apr9.150402.563@latcs2.lat.oz.au> stephens@latcs2.lat.oz.au
> (Philip J Stephens) writes:
>>You can't program effectively in a HLL if you don't know the hardware
>>you're working on, its limitations and its features.
>
>Wrong -- any decent HLL should be exploited in terms of the abstract
>model of computation that it supports, most definitely not in terms
>of any specific machine architecture.

If data structures and such were the only things related to computer
programming, then HLLs would be perfect.  There's more to programming than
stacks, queues, binary trees, etc.  If you don't know the maximum amount
of RAM, minimum amount of RAM, graphics resolutions, colors, bits per
pixel, CPU speed, sound capabilities, port I/O addresses, hardware buffer
sizes, hardware stacks, maximum I/O rate of ports, timing delays needed,
reserved RAM space, ROM addresses, ROM entry point routines, or the concept
of a hardware address (needed for pointer variables in HLLs), then you are
missing out on A LOT, and therefore not taking advantage of that knowledge
by applying it to your HLL code to make it work better with the
hardware (and I don't mean any specific hardware).

For a real example, if you know the paging size of a particular system,
you can set up arrays to take advantage of that size.  Let's assume a
page size of, say... 1k, just as an example.  You'd know that if you needed
an array somewhere around that size, it would be better to make it
just that size or a little smaller, because make it ONE byte larger and
another page has to be allocated, wasting 1023 bytes of RAM and denying one
more page to other users (or yourself).
I don't know much about paging, so let me give a better example, one that
might be more valid:

If you use dynamically allocated memory for an array instead of declaring
a fixed 200k array that may not even be 50% used, you might save 100k;
understanding how variables use RAM tells you that (see the sketch below).
There are just so many things like this (and better examples than this) that
will make you a more effective programmer by knowing assembly.  Even if you
never plan on writing a single application in assembly, just knowing it will
improve your programming in ALL languages.  Knowing assembly also gives you an
appreciation for what high-level languages do and WHY they are the way they
are, and also gives you ideas on exactly how the languages could be improved.
The list of advantages just goes on and on and on.  It's sort of like
experiencing another culture in a less advanced country than your
own:  it gives you an appreciation for your own country and helps you
understand how we are and why we are the way we are.
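
Going back to the array example, here's a quick C sketch of what I mean
(the numbers are made up):

#include <stdlib.h>

#define MAX_ENTRIES (200L * 1024L)

/* char table[MAX_ENTRIES]; -- the naive way: 200k claimed, used or not */

char *table;

/* grab only what this run actually needs; returns 0 if the machine refuses */
int make_table(long entries)
{
	table = malloc((size_t) entries);
	return table != NULL;
}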

Learning a HLL without learning assembly is very similar to the way they
used to teach math in my old high school:  athletic *COACHES*, who didn't
have the foggiest idea of math, were teaching the classes.  When someone
asked why a certain formula worked, the coach would just
say, don't worry about how it works, just memorize the formula.  Well,
just memorizing the formula will only get you so far.  UNDERSTANDING the
formula is a whole different story.  It helps you REMEMBER the formula,
and if you ever happen to forget the formula, if you understand how it
works, you can always re-derive it.

Many of the computer science professors around now are much like the
coaches I had for math in high school.  They don't completely understand
it themselves, and they're churning out more and more people like
themselves, while the real 'thinkers' are slowly vanishing.

This is part of the overall education problem in the U.S. today.  Too many
teachers just want you to memorize the facts and forget about trying to
understand them.

So, I guess the moral of my story is, "You're much better off knowing what's
under the hood and how it works, in case you get stalled out in the middle
of nowhere."  :)

>There are a few situations in which certain basic information about
>the system can help, for example in a Fortran program that accesses
>a huge multidimensional array; if it accesses it in the wrong order,
>performance might be poor due to virtual memory "thrashing".  But
>knowing what machine opcodes and addressing modes exist is of no real
>value to a Fortran programmer.

For this ONE particular example, knowing OPcodes isn't really necessary,
but knowing how memory is reserved for arrays certainly helps.  See my
above example.

I'm sure you could pick out MANY instances where it's not necessary to know
assembly, but in MANY instances it is either almost a necessity or
extremely helpful to know assembly.  There ARE quirks about machines that
you just can't work around without knowing assembly.  Sure, your code
will more than likely work without knowing assembly, but the more assembly
you know, the better your code is going to be.

----------------------------------------
  BITNET--  mquinn@utcvm    <------------send files here
  pro-line-- mquinn@pro-gsplus.cts.com

rhyde@ucrmath.ucr.edu (randy hyde) (04/11/91)

Another classic example of why you should know assembly in order to write
decent HLL code (in this case Pascal):

Most introductory CS texts equate IF..THEN..ELSE IF..THEN..ELSE IF... 
statements with CASE statements.  Many HLL programmers go through life
thinking they're equivalent.  Sure, logically they do the same thing; reality
is another story.  In general, they are implemented in completely different
ways.  The CASE statement uses the index variable as an index into an array
of addresses and transfers control to the corresponding location.  If..then..
else if... statements work in the obvious fashion.  Poor souls who think that
these two mechanisms are equivalent are missing out.  On the one hand, CASE
statements can be *Orders of Magnitude* faster than the IF..THEN..ELSE
approach (depending on the number of cases).  Unfortunately, the code generated
can also be *orders of magnitude* larger if the cases are widely separated.
The "abstract model" of the language doesn't consider this important point.
In the "abstract model", CASE is logically equivalent to the IF..THEN..ELSE IF
approach.
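
In C terms, a dense CASE boils down to something like the following sketch
(the handlers are hypothetical):

/* What the compiler builds for a dense CASE/switch: the selector indexes
   a table of addresses, so dispatch is one bounds check plus one indirect
   jump no matter how many cases there are.  An IF..THEN..ELSE IF chain
   instead compares the selector against each value in turn. */
typedef void (*handler)(void);

static void case0(void) { /* ... */ }
static void case1(void) { /* ... */ }
static void case2(void) { /* ... */ }
static void case3(void) { /* ... */ }

static handler jump_table[4] = { case0, case1, case2, case3 };

void dispatch(int sel)
{
	if (sel >= 0 && sel < 4)
		jump_table[sel]();
}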

Likewise, too many HLL programmers have *no idea* of the performance loss they
get hit with when they use 2-D or higher-dimension arrays (yeah, C programmers
know about this, but C isn't exactly a "high" level language; most properly
call it a medium-level language).  The HLL hides this little detail from
them.  Had they been forced to index into two-dimensional arrays at some
point in their lives, they would know about this.
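
For instance (sizes arbitrary), every a[i][j] hides a multiplication, which
you can see by walking the array with a pointer instead:

#define ROWS 64
#define COLS 100
double a[ROWS][COLS];

double sum_indexed(void)
{
	double s = 0.0;
	int i, j;
	for (i = 0; i < ROWS; i++)
		for (j = 0; j < COLS; j++)
			s += a[i][j];	/* base + (i*COLS + j)*sizeof(double) */
	return s;
}

double sum_pointer(void)
{
	const double *p = &a[0][0], *end = &a[0][0] + ROWS * COLS;
	double s = 0.0;
	while (p < end)
		s += *p++;	/* one increment per element, no multiply */
	return s;
}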

In one quarter (10 weeks) at Cal Poly or UCR I don't have anywhere near enough
time to teach assembly language properly to disinterested students.  Out of
30 students, most of them are going to wind up like the dissenting opinion
around here because they are not good enough to handle assembly language
(that is not to say that the dissenters around here aren't very good, I'm
just stating that these students aren't going to be very good and they'll be
the first to chime in and knock assembly in the future).  Maybe 5-10 students
will actually get good at it.  Faced with this, I make *damn* sure that if
they learn nothing else in the class, they learn exactly what that Pascal
compiler is doing for them in terms of Pascal->8086 code generation.  That
way, even if they never write another line of assembly code in their lives,
at least they're (presumably) aware of what the compiler is doing.  I consider
this the minimum necessary information (concerning assembly) for them to
possess.
*** Randy Hyde

whitewolf@gnh-starport.cts.com (Tae Song) (04/11/91)

|>
|> Be ye forewarned, however, when that day arrives, all you C jockeys are
|> going to be on the defensive.  By then, people will be arguing for the use
|> of LISP, Prolog, ICON, APL, SETL, 4GLs, and other languages not in existence
|> yet for the same reasons you're claiming everyone should be using C (or Ada,
|> or Pascal).
|> *** Randy Hyde
|>
|No one will ever build a computer fast enough.

Uh-uh... and 4K is all the memory you'll ever need.
 
whitewolf@gnh-starport!info-apple

stephens@latcs2.lat.oz.au (Philip J Stephens) (04/12/91)

Doug Gwyn writes:
>
>I wrote:
>>You can't program effectively in a HLL if you don't know the hardware
>>you're working on, its limitations and its features.
>
>Wrong -- any decent HLL should be exploited in terms of the abstract
>model of computation that it supports, most definitely not in terms
>of any specific machine architecture.

  Sorry, I didn't phrase that particular comment very well, did I?  I
agree that people should not be writing programs that only work
correctly on their machine, if they wish them to be portable to other
systems.  Unfortunately there is no such thing as a completely
portable program (although C programs under Unix tend to be pretty
good in that regard).
  However, portable programs aren't always the best in terms of speed
or efficiency.  It is sometimes necessary (or just plain desirable)
to use machine-dependent features for a particular implementation of a
program.  Obviously the best way to achieve this is to specify an
interface to machine-dependent routines and let the people who are
porting the program write the necessary code for those routines, which
honours the interface requirements.
  To do this, you must have knowledge of *your* machine.  It is silly
*not* to use machine-independent features when they can improve the
execution of a program such as a compiler, and the more people who are
capable of learning about the hardware of different machines when the
need arises, the better the software will be.
  C already allows this to happen, with its include file syntax.
Every machine must provide a library of modules to perform the basic
I/O, and all of these are machine-dependent.  But normal programmers are
not expected to know how to develop their own machine-dependent
library modules; they are expected to write all their libraries in C
using the existing libraries, and that results in more and more
inefficient programs.
  Curses is a good example of a library module that is unbearably slow
on *all* the Unix systems on which I've had the misfortune of needing it.
It really needs to be written in assembly for it to operate at a decent
speed (and no, I'm not confusing the speed of the Curses package with
the speed of the baud rate on the terminal line).  The package is
small enough in its functionality that an assembly version would not
be all that difficult to implement -- but how many people are actually
willing to do it?  I, for one, would love to tackle that project if I
had the time to do so.

<\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/><\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\>
<  Philip J. Stephens                ><   "Many views yield the truth."       >
<  Hons. student, Computer Science   ><   "Therefore, be not alone."          >
<  La Trobe University, Melbourne    ><   - Prime Song of the viggies, from   >
<  AUSTRALIA                         ><   THE ENIGMA SCORE by Sheri S Tepper  >
</\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\></\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/>

lang@rex.cs.tulane.edu (Raymond Lang) (04/13/91)

In <1991Apr10.203703.21010@nntp-server.caltech.edu> toddpw@nntp-server.caltech.edu (Todd P. Whitesel) writes:
>Yes, but modifying the program to improve performance requires

>       (a) better overall algorithms &/or data structures
>       (b) knowledge about the hardware and what the compiler will produce

I don't follow how (a) comes from experience in assembly language.


>[convex example deleted]

>The point of all this is that while HLL's insulate you from the grunge, they
>often make it hard to obtain optimal performance. The set of fully portable
>modifications that yield pure improvement on any machine is very small
>compared to the set of modifications you can make when you have some
>knowledge of the machine hardware and how the compiler will react to the
>program (including the various optimizer switches you can invoke).

I've had some _very_ limited experience with the Convex here at Tulane, and
you're right: you have to write the code a certain way in order for the
vectorizer to be able to do much with it. Also, your point about I/O was
well taken.

However, as I've been following this discussion over the past couple of
weeks (and I've found it very interesting), I've sensed an overriding concern
for efficiency. While that's probably understandable in a newsgroup dealing
with a 2.8 Mhz home computer, I believe many times other factors tip the
scales against programming in assembly. For example, I'm sure that with
the revolving door mentality many people have about jobs, a lot of software
development houses put a premium on well modularized code with clean
interfaces that is easy to modify even by someone who didn't write it.
I don't deny this is possible in assembly, but I'd certainly say this is
easier to achieve in principle with an HLL.


>Writing portable code that compiles well on your machine is generally the
>best way to go; only when performance or coding time is more important than
>portability should assembly be used

Seems to me an HLL would be better when coding time is limited. I would say
it a little stronger: only when performance is more important than
_every_other_consideration_ should assembly be used.

Knowledge of assembly is certainly useful, which I'm sure is why it's
part of almost all computer science curriculums. But frankly, I think
there are a lot of considerations that make it an inappropriate choice for
most projects.

Ray
lang@rex.cs.tulane.edu

toddpw@nntp-server.caltech.edu (Todd P. Whitesel) (04/13/91)

lang@rex.cs.tulane.edu (Raymond Lang) writes:

>In <1991Apr10.203703.21010@nntp-server.caltech.edu> toddpw@nntp-server.caltech.edu (Todd P. Whitesel) writes:
>>Yes, but modifying the program to improve performance requires

>>       (a) better overall algorithms &/or data structures
>>       (b) knowledge about the hardware and what the compiler will produce

>I don't follow how (a) comes from experience in assembly language.

It doesn't. I just didn't want to ignore that and have somebody waste all our
time calling me on it.

>>Writing portable code that compiles well on your machine is generally the
>>best way to go; only when performance or coding time is more important than
>>portability should assembly be used

>Seems to me an HLL would be better when coding time is limited. I would say
>it a little stronger: only when performance is more important than
>_every_other_consideration_ should assembly be used.

That's what I meant, when coding time is limited. Actually limited coding
time is sometimes a reason to throw a dash of assembly in the middle of a
project, so you don't have to spend lots of time figuring out the HLL
expression for what you want to do. This is only valid for projects that
aren't intended to be portable, of course.

>Knowledge of assembly is certainly useful, which I'm sure is why it's
>part of almost all computer science curriculums. But frankly, I think
>there are a lot of considerations that make it an inappropriate choice for
>most projects.

Few of us are arguing against that (and I certainly am not), at least not
any more.
What I am still arguing for is the idea that knowing assembly on ANY machine
gives you a big edge over other HLL programmers who are assembly-ignorant,
and that there are way too many assembly-ignorant programmers out there.

Todd Whitesel
toddpw @ tybalt.caltech.edu

psonnek@pro-mansion.cts.com (Patrick Sonnek) (04/13/91)

In-Reply-To: message from MQUINN@UTCVM.BITNET

Not needing to know opcodes?!?  Let's see that person figure out what's wrong
with their program when all they get is a core dump laid on their desk in
the morning!  :)    These HLL programmers are a real pain in the neck;
guess who they all come running to when their program blows up?  Like I
have nothing else to do!  I agree with the above posts that knowing
assembler will make anyone a better programmer; being able to properly
allocate arrays (you did get it right, mquinn!) and debug a program from a
core dump are VERY valuable skills.  Skills that a lot of programmers lack.
I guess I should look on the bright side: this apparent lack of skills
just makes me more valuable (can I have a raise now, boss?)

----
ProLine:  psonnek@pro-mansion    Sysop Pro-mansion: 507/726-6181
Internet: psonnek@pro-mansion.cts.com  MCImail      psonnek
UUCP:     crash!pro-mansion!psonnek
BITNET:   psonnek%pro-mansion.cts.com@nosc.mil
ARPA:     crash!pro-mansion!psonnek@nosc.mil

unknown@ucscb.UCSC.EDU (The Unknown User) (04/14/91)

In article <8560@crash.cts.com> psonnek@pro-mansion.cts.com (Patrick Sonnek) writes:
>In-Reply-To: message from MQUINN@UTCVM.BITNET
>
>Not needing to know opcodes?!?  Let's see that person figure out what's wrong
>with their program when all they get is a core dump laid on their desk in
>the morning!  :)

	Uhh, they could use DBX!  Of course I'm talking about UNIX systems,
but your comments seemed to be generic and not specifically talking about
Apple II programmers.
-- 
/unknown@ucscb.ucsc.edu Apple IIGS Forever! ULTIMA VI GS -mail me. CDs-mail me\
\          McIntosh Junior:  The Power to Crush the Other Kids.               /

tsouth@techbook.com (Todd South) (04/14/91)

In article <9104110129.AA11471@apple.com> MQUINN@UTCVM.BITNET writes:
>On Wed, 10 Apr 91 05:04:41 GMT Doug Gwyn said:
>>In article <1991Apr9.150402.563@latcs2.lat.oz.au> stephens@latcs2.lat.oz.au
>> (Philip J Stephens) writes:
>>>You can't program effectively in a HLL if you don't know the hardware
>>>you're working on, its limitations and its features.
>>
>>Wrong -- any decent HLL should be exploited in terms of the abstract
>>model of computation that it supports, most definitely not in terms
>>of any specific machine architecture.
>
>
>Knowing assembly also gives you an
>appreciation for what high-level languages do and WHY they are the way they
>are, and also gives you ideas on exactly how the languages could be improved.
>The list of advantages just goes on and on and on.  It's sort of like
>experiencing another culture in a less advanced country than your
>own:  it gives you an appreciation for your own country and helps you
>understand how we are and why we are the way we are.

Truer words have never been spoken.  Although I am not now an active
programmer in my job, I deal with the local school systems on a continual
basis and laugh at the kids who think that 2 years of Pascal in junior and
senior high will be all the programming knowledge they ever need to get
an understanding of today's computers.  In one school in particular, the
students were lucky enough to get a grant which purchased a number of
Amiga 1000s.  What does the teacher teach on a machine that was designed
in C?  Pascal!!!

>Learning a HLL without learning assembly is very similar to the way they
>used to teach math in my old high school:  athletic *COACHES*, who didn't
>have the foggiest idea of math, were teaching the classes.  When someone
>asked why a certain formula worked, the coach would just
>say, don't worry about how it works, just memorize the formula.  Well,
>just memorizing the formula will only get you so far.  UNDERSTANDING the
>formula is a whole different story.  It helps you REMEMBER the formula,
>and if you ever happen to forget the formula, if you understand how it
>works, you can always re-derive it.
>
>Many of the computer science professors around now are much like the
>coaches I had for math in high school.  They don't completely understand
>it themselves, and they're churning out more and more people like
>themselves, while the real 'thinkers' are slowly vanishing.
>
>This is part of the overall education problem in the U.S. today.  Too many
>teachers just want you to memorize the facts and forget about trying to
>understand them.

I would go further than this.  I feel that the attitudes have come from
a yuppie sickness that permeates the schools all over America.  Basically,
I like to call it the Business/Marketing syndrome.  So many kids today
are told that they basically just have to get a general degree in this
area and the jobs will jump at them from right and left.  Unfortunately,
they find that when EVERYONE of them gets the same degree, only the most
aggressive and/or assertive in the market with a good amount of natural
talent are the ones that actually lock up the jobs in these upper manage-
ment fields.  Now, with the majority of baby-boomers having locked up these
jobs, the youth of America are finding that they have to revert to mail
room jobs at UPS or becoming assistant managers in various _small_
establishments.  In Oregon alone, the past 5 years' worth of graduates in
Business/Marketing cannot find jobs in their field, and they would
probably have been great assets to the engineering world or other hands-on
job environments.

As for the CS major, I feel that some of the professors they have to learn
under were Bus./Marketing dropouts who found that they had to get a real
job eventually, so they went back to school, got the basic credentials
for the position, and hoped to God they would be able to struggle through
till they developed tenure and could sit back on easy street.  I have only
seen one really good course at the local community college, and the
instructor basically sat back and read the material verbatim from the
book (I wanted to brush up on calculus applications).  As for the larger
schools, the system administrators I have talked to are lucky as hell they
were hackers early on in life or they wouldn't even have a chance at a
serious position with follow-on or career potential.  The majority of
CS people that I have met eventually end up in nowhere jobs writing RPG II
instead of the state-of-the-art applications they may have dreamed of.
Many of the programmers I see in the community of Portland are good, but
usually have a group of workers that struggle to follow the lead of the
head of a team and form code into expectations of finally finishing
a project instead of finishing a PROGRAM!

>So, I guess the moral of my story is, "You're much better off knowing what's
>under the hood and how it works, in case you get stalled out in the middle
>of nowhere."  :)
>
>>There are a few situations in which certain basic information about
>>the system can help, for example in a Fortran program that accesses
>>a huge multidimensional array; if it accesses it in the wrong order,
>>performance might be poor due to virtual memory "thrashing".  But
>>knowing what machine opcodes and addressing modes exist is of no real
>>value to a Fortran programmer.
>
>For this ONE particular example, knowing OPcodes isn't really necessary,
>but knowing how memory is reserved for arrays certainly helps.  See my
>above example.
>
>I'm sure you could pick out MANY instances where it's not necessary to know
>assembly, but in MANY instances it is either almost a necessity or
>extremely helpful to know assembly.  There ARE quirks about machines that
>you just can't work around without knowing assembly.  Sure, your code
>will more than likely work without knowing assembly, but the more assembly
>you know, the better your code is going to be.
>
>  pro-line-- mquinn@pro-gsplus.cts.com

I see it as attitude.  The computer society within America has long since
changed from cutting edge to complacency.  HLL's are good if they are
used to improve concepts within the program, bad if they are used simply
as a faster way of doing something.  While a person needs to know assembly
in case they might use it, one would be better off if they learned assembly
with the attitude that learning the most basic components of a CPU will
allow them to understand how concepts within the program would be better
for that system and user thereof.

Todd South

-- 
--
tsouth@techbook.COM  ...!{tektronix!nosun,uunet}techbook!tsouth
Public Access UNIX at (503) 644-8135 (1200/2400) Voice: +1 503 646-8257
Public Access User --- Not affiliated with TECHbooks

reanor@speedy.cs.pitt.edu (Jeffrey Getzin) (04/15/91)

In article <9104060651.AA18946@apple.com> MQUINN@UTCVM.BITNET writes:
>On Fri, 5 Apr 91 11:39:40 LCL Joe Luzzi said:
>
>
>This is true.  I'm taking a VAX assembly course now.  It's great, but my
>professor prevents us from doing as much as I'd like.  His attitude is, "Well,
>you'll probably never use assembly (he calls it assemblER) language after you
>get out of this class."                                ^^
>
>
>This is true too.  Knowing assembly, you know the limitations of the machine
>you're working on ...  This is very important when developing software.  ...
>Also, another person I work with just graduated with a degree in computer
>science.  Computers are his life.  He knows all about HLLs, but he doesn't
>understand why a lot of things work or why they don't work, because he doesn't
>understand assembly language or the limitations of the hardware....
>It's all because he was 'raised'
>on nothing but HLLs, GUIs, and Macs.  Computer Science courses should begin
>by explaining computers at the bit level and work their way up.  By
>the time the students get to HLLs, data structures, files, etc., they
>should have virtually no problems understanding those concepts, how they
>came to be, and what the limitations are.
>
>Unfortunately, most computer science courses are taught in the opposite
>direction: HLL first, then, when you've had enough of it for several years,
>they'll give you just a tainted taste of assembly.  That makes it very
>difficult for students to understand all these intangible concepts without
>first knowing how they came about.
>
>
>----------------------------------------
>  BITNET--  mquinn@utcvm    <------------send files here
>  pro-line-- mquinn@pro-gsplus.cts.com

I strongly disagree.  I have been studying Computer Science for over six
years now, and I have seen many different approaches to education.  (One
program I was in actually started with formal language theory; can you
believe it?!)

Let me explain why I feel that starting with a HLL is vital:

First there's the "hook":  how do you get people interested in computer
science?  If there is nobody interested in computer science, there are no
computer science majors, and the field as a whole greatly suffers.  So it
is vital to get people interested in the field.

But most people who sign up for a computer course are only interested in
learning a little bit about the subject, and the more useful that is, the
better.   Very few people will EVER need to program in Assembly(-er) Language
during their entire lives.   HLL's, however, may prove invaluable in many
walks of life.   Therefore, the introductory courses MUST be useful in order
to attract people to the classes.

Second, Assembly(-er) Languages are machine-specific, and give surprisingly
little understanding about machines in general.  Sure, you'll learn about
registers and memory, but everything else is different.   Would you advise
learning 6502 Assembly(-er), with two index registers and an accumulator
(and correspondingly ugly code), that of a VAX 9000 with an amazingly
powerful instruction set, or that of a RISC machine with its simple
instructions, pipelines and zillions of registers?

The answer is that you can learn a lot from an Assembler Language, but not
enough, since it is all machine-specific, which in Computer Science is next
to useless.   Instead, I feel that a Computer Architecture course should
be a mandatory element in a Computer Science program, and it is there
that the question of "what is a computer" should be investigated.   All the
rest, I feel, should remain at a higher level where algorithms and data
structures are the important considerations, not how to bum another
three instructions off of your assembler language program.


			=*()*()* Jeff *()*()*=


Internet:  saintdor@vms.cis.pitt.edu

gwyn@smoke.brl.mil (Doug Gwyn) (04/16/91)

In article <9104101742.AA13905@apple.com> MQUINN@UTCVM.BITNET writes:
-Imagine a chemist who didn't know the detailed properties of the basic
-elements, although he knows what many different combinations of chemicals
-did.  He may be able to do a lot with the knowledge he has, but he doesn't
-understand WHY it works or HOW he can come up with something completely
-different.  Consider, on the OTHER hand, a chemist who knows the physical
-properties of all the elements and how and WHY they react with other
-elements.  This chemist can, undoubtedly, do everything the other chemist
-can, plus a WHOLE lot more.

In actual fact, not even the very best chemists attempt to predict
properties of compounds on the basis of properties of elements.
Chemistry is just not that simple.

gwyn@smoke.brl.mil (Doug Gwyn) (04/16/91)

In article <10408@pitt.UUCP> reanor@speedy.cs.pitt.edu.UUCP (Jeffrey Getzin) writes:
>Very few people will EVER need to program in Assembly(-er) Language
>during their entire lives.

That is true for most computers, including what I term "real" computers.
Unfortunately, however, the Apple II's puny processor offers only very
poor support for high-level languages, so there are occasions in the
life of any serious Apple II programmer when resorting to at least a
modest amount of assembler is called for.

>The answer is that you can learn a lot from an Assembler Language, but not
>enough, since it is all machine-specific, which in Computer Science is next
>to useless.   Instead, I feel that a Computer Architecture course should
>be a mandatory element in a Computer Science program, and it is there
>that the question of "what is a computer" should be investigated.

I fully agree.  Computer architecture is worthy of serious study, but any
particular machine language is not -- until such time, if ever, as you are
personally called upon to deal with it, as for example in establishing
the vectors for interrupt handlers or in providing nitty-gritty run-time
support for a high-level language.

MQUINN@UTCVM.BITNET (04/17/91)

On Mon, 15 Apr 91 07:20:37 GMT <info-apple-request@APPLE.COM> said:
>
>Let me explain why I feel that starting with a HLL is vital:
>
>First there's the "hook":   how to get people interested in computer science?
>If there is nobody interested in computer science, there are no computer
>science majors and the field on the whole greatly suffers.   So it is vital
>to get people interested in the field.

I would imagine that most people who enroll in computer science are already
interested in computers.  Still, you make a good point.

>But most people who sign up for a computer course are only interested in
>learning a little bit about the subject, and the more useful that is, the
>better.

I'm not so sure about this (but I don't completely disagree).  University
level computer courses are usually set up for a full computer education
(well, supposedly, anyway... it'd be full if they emphasized assembly more).
I'm sure that there are some places that offer courses that are only meant
to introduce people to computers with no intention of going any further,
but, to the best of my knowledge, most courses are intended for a full four
year education, at least.  It's those courses that should begin at the bit
level.  Although, a 'hook' as you put it, might not be a bad idea.  They
could have several GSs lined up in the front of a classroom with some FTA
demos running and then tell the students that they're going to show them how
they can accomplish the same thing :).  Then, after everyone is oohing and
aahing over what computers can really do, they'll be hooked and the
professor could get started on the bit level.  (Getting more specific
now...)  The professor could teach hexadecimal and the concept of memory and
addresses, then immediately (probably on the first day) demonstrate what can
be done with softswitches: turning on the graphics modes, clicking the
speaker, making a tiny assembly routine that generates a siren sound (about
10 lines of assembly).  Then the professor might want to write an identical
program, line for line, in BASIC to show how much faster assembly is than
BASIC -- proving that assembly is great for some things and getting the
students interested in it.
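
In case anyone wants to try it, the whole siren fits in something like the
following (an untested, off-the-top-of-my-head sketch; the only hardware
fact it leans on is that ANY access to the softswitch at $C030 clicks the
speaker):

              ORG  $300       ; assemble at $300, CALL 768 from BASIC
     SIREN    LDY  #$20       ; Y = half-period count (sets the pitch)
     SWEEP    LDX  #$40       ; X = speaker clicks at this pitch
     CYCLE    BIT  $C030      ; any access to $C030 clicks the speaker
              TYA             ; copy the half-period into A
     DELAY    SEC
              SBC  #$01       ; burn time proportional to Y
              BNE  DELAY
              DEX
              BNE  CYCLE      ; next click at the same pitch
              INY             ; longer delay = lower tone
              CPY  #$A0
              BNE  SWEEP      ; keep lowering the pitch, siren-style
              RTS             ; back to BASIC

That gives one falling sweep per CALL 768; wrap it in a loop (or run the
pitch both ways) for a proper siren.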

>          Very few people will EVER need to program in Assembly(-er) Language
>during their entire lives.   HLL's, however, may prove invaluable in many
>walks of life.

This is true.  Most people will probably never need to program in assembly,
but KNOWING assembly will greatly improve their concept of just about
everything having to do with computers.  I'm not arguing that HLLs are bad,
or that people should use Assembly rather than HLLs.  I'm arguing that people
should KNOW assembly and understand the bit level operations that go on in a
computer.  Just as a mechanic who understands how a car engine works is
better off (and so are his customers) than one who only knows how to operate
the computers that tell him what's wrong with the car.

>                 Therefore, the introductory courses MUST be useful in order
>to attract people to the classes.

I think assembly can be EXTREMELY attractive, if people would just teach it the
right way (maybe something similar to what I mentioned about the FTA demos).

>Second, Assembly(-er) Languages are machine-specific, and give surprisingly
>little understanding about machines in general.

They ARE machine specific, I agree, but... give surprisingly little
understanding about machines in general?????  This is DEFINITELY not true!
I learned assembly on an 8-bit Franklin (Apple ][+ compatible) early last
decade.  It DRAMATICALLY increased my understanding of how machines (ALL
MACHINES) work.  I learned how I/O works (I had absolutely NO idea before,
even though I was fluent in BASIC, on both the Apple II and on a TRS-80).
I didn't understand how the computer knew what to do when I typed CLS or
HOME.  I knew I didn't have a program running that was waiting for those
commands, and yet there was no program in memory (that I knew of) to handle
them.  I couldn't understand how information was written to and read from
disk or tape.

As soon as I got into assembly language, I immediately started to comprehend
how all this works.  Without knowledge of assembly language, I could do a lot
of stuff, but I was basically lost as to what was really going on.  When
I understood assembly, I finally found a use for POKE and PEEK, and used them
quite often.  I wrote mini machine language subroutines to increase the
efficiency of my BASIC programs.  I found the limitations of my computer
AND understood them, which made programming much less frustrating and even
increased my effectiveness in BASIC (even when I didn't use ML subs).
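
To give the flavor of those ML subs, here's the sort of thing I mean (a
hypothetical, untested sketch): a fast fill of text page 1, POKEd in at 768
($300) and invoked from Applesoft with CALL 768.

              ORG  $300
     CLRTXT   LDA  #$A0       ; normal-video space character
              LDX  #$00
     FILL     STA  $400,X     ; text page 1 lives at $400-$7FF
              STA  $500,X
              STA  $600,X
              STA  $700,X
              INX
              BNE  FILL       ; 256 passes blanket all four pages
              RTS

(Note it also stomps the 'screen holes' that peripheral cards use for
scratch space, so treat it as a demo rather than production code.)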

>Sure, you'll learn about
>registers and memory, but everything else is different.   Would you advise
>learning 6502 Assembly(-er), with two registers and an accumulator (and
>correspondingly ugly code), a VAX 9000 with an amazing powerful instruction
>set, or a RISC machine with its simple instructions, pipelines and zillions
>of registers?

I would DEFINITELY recommend learning 6502 assembly on an Apple II.  It's
quick and dirty, and it's easy to get to the 'development' interface
(CALL -151).  You can play with softswitches without worrying about crashing
the system and screwing something up and having to go through the reboot
cycle, etc.  You can start the mini-assembler, write to and read from memory
locations, list the RAM, list the ROM -- and since the memory size is so
small, it's very easy to comprehend the entire memory map.  I wouldn't limit
it to 6502, though.  I'd get them started on that, because it's by far the
easiest and fastest way to learn assembly, IMHO.  I'd then move them to,
say, an XT, introduce them to the DEBUG command, then get them into a full
featured assembler... then move them to, say, a VAX-11 or quite possibly an
IBM mainframe.  But after some quick and dirty lessons on the Apple II,
they'd understand assembly much better and learn it much faster on
mainframes.
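
For anyone who's never seen it, a first session at that interface runs
something like this (typed from memory, so take it as a sketch -- ] is the
Applesoft prompt, * is the monitor's):

     ]CALL -151                   (enter the system monitor)
     *300.30F                     (dump memory $300-$30F in hex)
     *300L                        (disassemble what's sitting at $300)
     *C030                        (examine $C030 -- one speaker click)
     *300:A9 00 8D 30 C0 60       (type bytes straight into RAM:
                                   LDA #$00 / STA $C030 / RTS)
     *300G                        (run it -- another click)

Nothing to compile, nothing to link; the machine is just THERE.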

>The answer is that you can learn a lot from an Assembler Language, but not
>enough, since it is all machine-specific, which in Computer Science is next
>to useless.

The fact that any ONE assembly language is machine specific means virtually
nothing.  The concepts are the same for all machines.
  -they all have RAM
    -RAM can be written to and read from
  -they all have ROM
    -ROM can be read from, looked at, but not changed.
  -they all have I/O
  -if any particular machine has more than one video mode, it has
   softswitches to switch between them.
  -if it can produce sound, the speaker must be clicked.

Hardly any of this is machine specific.  Sure, machine one may put a
particular softswitch or strobe or input register at a different address
than machine number two,
but knowing HOW they work is much more important than exactly where they are.
But, of course, if you're going to write a driver, it's necessary to know
the specifics of that one machine.  That doesn't mean that learning and
understanding the low level operations behind the high level structures
isn't helpful.
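
The Apple II keyboard makes the point nicely.  The ADDRESSES below are pure
Apple, but the PATTERN -- a status bit, a data register, a strobe you clear
after reading -- turns up on machine after machine (hypothetical, untested
sketch):

     KEYIN    LDA  $C000      ; keyboard data; bit 7 set = key waiting
              BPL  KEYIN      ; bit 7 clear, so no key yet -- keep polling
              BIT  $C010      ; touch $C010 to clear the keyboard strobe
              RTS             ; key (ASCII with the high bit set) is in A

Learn that shape once and every machine's I/O section reads like a dialect
of a language you already speak.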

>              Instead, I feel that a Computer Architecture course should
>be a mandatory element in a Computer Science program, and it is there
>that the question of "what is a computer" should be investigated.   All the
>rest, I feel, should remain at a higher level where algorithms and data
>structures are the important considerations, not how to bum another
>three instructions off of your assembler language program.

I feel that it should be mandatory too, but it's also very important to
understand what is really going on, AND how YOU can do it, behind the
'protective' user interface.  There's nothing wrong with shielding a USER
from knowing what's going on inside, but a PROGRAMMER should definitely know.

It looks like we're moving into an era where there will be two types of
'programmers'.  One type (the less educated one) will be more of a user than
a programmer... much like my boss, who uses a shell or script pseudo-language
to get things done.  She understands NOTHING of what's really going on.
There won't be much difference between these users and Hypercard
'programmers'.  Then there will also be educated programmers, who are REAL
programmers.  They will understand what's really going on.  They'll be able
to dip into a hex dump and fix something.  They'll be the ones to save the
pseudo-programmers' butts when they get an unexpected glitch in a program
and don't know what to do, because they were taught that they'd never have
any need for assembly language.  They'll be the ones who'll be able to write
a quick hack patch to something just to get it working until a more
structured version can be completed.  I could go on and on about the
advantages of it, but I think I'll stop here.

>			=*()*()* Jeff *()*()*=
>Internet:  saintdor@vms.cis.pitt.edu

----------------------------------------
  BITNET--  mquinn@utcvm    <------------send files here
  pro-line-- mquinn@pro-gsplus.cts.com

psonnek@pro-mansion.cts.com (Patrick Sonnek) (04/18/91)

In-Reply-To: message from MQUINN@UTCVM.BITNET

[Lots deleted......]

>They ARE machine specific, I agree, but... give surprisingly little
>understanding about machines in general?????  This is DEFINITELY not true!
>I learned assembly on an 8-bit Franklin (Apple ][+ compatible) early last
>decade.  It DRAMATICALLY increased my understanding of how machines (ALL
>MACHINES) work.  I learned how I/O works (I had absolutely NO idea before,
>even though I was fluent in BASIC, on both the Apple II and on a TRS-80).
>I didn't understand how the computer knew what to do when I typed CLS or
>HOME.  I knew I didn't have a program running that was waiting for those
>commands, and yet there was no program in memory (that I knew of) to handle
>them.  I couldn't understand how information was written to and read from
>disk or tape.

I got interested in Assembler during 2nd quarter, when I found out just how
much stuff I could do with just a little bit of code.  The clincher was when
I got an assignment for COBOL 4 that, if written in Assembler, would have
been a very short program.  But in COBOL, I ended up writing a small novel!!

>The fact that any ONE assembly language is machine specific means virtually
>nothing.  The concepts are the same for all machines.
>  -they all have RAM
>    -RAM can be written to and read from
>  -they all have ROM
>    -ROM can be read from, looked at, but not changed.
>  -they all have I/O
>  -if any particular machine has more than one video mode, it has
>   softswitches to switch between them.
>  -if it can produce sound, the speaker must be clicked.

>Hardly any of this is machine specific.  Sure, machine one may put a
>particular softswitch or strobe or input register at a different address
>than machine number two,
>but knowing HOW they work is much more important than exactly where they are.
>But, of course, if you're going to write a driver, it's necessary to know
>the specifics of that one machine.  That doesn't mean that learning and
>understanding the low level operations behind the high level structures
>isn't helpful.

I was taught only one assembler (BAL).  I've since easily taught myself two
others (TMS9900 and 6502), and I'm currently working on my fourth (80x86).

>It looks like we're moving into an era where there will be two types of
>'programmers'.  One type (the less educated one) will be more of a user than
>a programmer... much like my boss, who uses a shell or script pseudo-language
>to get things done.  She understands NOTHING of what's really going on.

We call these types 'coders' :-p

>There won't be much difference between these users and Hypercard
>'programmers'.  Then there will also be educated programmers, who are REAL
>programmers.  They will understand what's really going on.  They'll be able
>to dip into a hex dump and fix something.  They'll be the ones to save the
>pseudo-programmers' butts when they get an unexpected glitch in a program
>and don't know what to do, because they were taught that they'd never have
>any need for assembly language.  They'll be the ones who'll be able to write
>a quick hack patch to something just to get it working until a more
>structured version can be completed.  I could go on and on about the
>advantages of it, but I think I'll stop here.

The main reason I'm so overworked!

----
ProLine:  psonnek@pro-mansion    Sysop Pro-mansion: 507/726-6181
Internet: psonnek@pro-mansion.cts.com  MCImail      psonnek
UUCP:     crash!pro-mansion!psonnek
BITNET:   psonnek%pro-mansion.cts.com@nosc.mil
ARPA:     crash!pro-mansion!psonnek@nosc.mil

davewh@microsoft.UUCP (04/18/91)

 MQUINN@UTCVM.BITNET writes:

On Wed, 10 Apr 91 05:04:41 GMT Doug Gwyn said:
>In article <1991Apr9.150402.563@latcs2.lat.oz.au> stephens@latcs2.lat.oz.au
> (Philip J Stephens) writes:
>>You can't program effectively in a HLL if you don't know the hardware
>>you're working on, its limitations and its features.
>
>Wrong -- any decent HLL should be exploited in terms of the abstract
>model of computation that it supports, most definitely not in terms
>of any specific machine architecture.

Doug's right:

>If you don't know the maximum amount
>of RAM, minimum amount of RAM, graphics resolutions, colors, bits per
>pixel, CPU speed, sound capabilities, port I/O addresses, hardware buffer
>sizes, hardware stacks, maximum I/O rate of ports, timing delays needed,
>reserved RAM space, ROM addresses, ROM entry point routines, the concept
>of hardware address (needed for pointer variables in HLLs), then you are
>missing out on A LOT and therefore not taking advantage of that knowledge
>and applying it to your HLL code to make it fit better with the way the
>hardware works (and I don't mean any specific hardware).

All of the stuff you just mentioned should NOT be needed for
programming a general application. If you do "know" what it is today,
what happens next year when they come out with a new model? Look at
all the //e stuff that doesn't instantly take advantage of the GS's
capabilities. Take a look at all the Mac stuff that does take
advantage of the features of new models. I can take a program running
on a 2MB Mac with 1-bit video and run it on an 8MB Mac with 24-bit
video. The program code is the same. It doesn't need to know that the
hardware supports 24-bit video, because Quickdraw handles that. The
new features available are used right away. Don't believe me? Decode
a GIF on a 24-bit system and cut the image to the clipboard. Paste it
into some draw document. Now display that draw document on a 4 bit
system. It won't look as good, but it'll still look OK. The draw
program isn't doing anything to the bitmap - it just asks Quickdraw
to draw it.

>For a real example, if you know the paging size of a particular system,

Now you're mixing Apples and Unixes. One certainly does NOT need to
know any hardware facts about a unix system (namely, color res, I/O
space, etc). Knowing page sizes is somewhat helpful, but just knowing
the page size may not help if you happen to absolutely need
(pagesize+1) bytes.

On a Mac, with 32-bit QuickDraw, one simply asks for a color mix
before drawing. If that specific color can't be generated on this
machine (because it's running with 4-bit video and the color table is
full) then QuickDraw will draw with a dither of available colors to
give you the illusion of that color. It works quite well. The
programmer need not worry. Only in extreme cases will this totally
wipe out. In that event, the programmer probably couldn't do anything
about it anyway. (BTW- QuickDraw does a superb job at this while
X-Windows basically does not. In fact, X is lousy at dealing with
colors and color tables correctly. Knowing what's available hardly
helps with X.)

Remember that a properly designed system has the hardware abstracted
away from the applications programmer. When's the last time you
accessed the SCSI controller directly to open a file? In fact,
programmers who skirt the OS and go to the hardware directly on most
machines are asking for serious trouble. For the most part, you can
get away with such stunts only on the Apple // and not so much any
more on the GS.

>If you use dynamically allocated memory for an array instead of declaring
>a definite 200k array that may not even be used to 50%.

This has nothing to do with knowledge of assembly. This has to do
with how to program efficiently. They teach efficient programming in
HLL classes too...

>When someone
>asked a question on why a certain formula worked, the coach would just
>say, don't worry about how it works, just memorize the formula.  Well,
>Just memorizing the formula will just get you so far.  UNDERSTANDING the
>formula is a whole different story.

Very true. But knowing assembly doesn't teach you how the HLL works.
Knowing how a compiler works and how computers work is the best
approach. See my other post...

>Many of the computer science professors around now are much like the
>coaches I had for math in high school.  They don't completely understand
>it themselves and they're churning out more and more people like
>themselves, and the real 'thinkers' are slowly vanishing.

Depends on the school. Depends on the class. My first CS class was in
SCHEME and we centered on how to design a program. Other classes went
into how to break down and design large programs (how to divide the
work among several people, etc). My compiler class took the mystery
of compilers away. My computation structures class took away the
mystery of hardware and OS's.

Now, I program on a Compaq 486 and have no knowledge of its assembly
language. I have no idea how the IBM-compatible hardware works, and
I have little or no desire to learn it. I don't need to know it. I
can still program very well because I know how the compiler operates
and how the computer and OS function. I have no use for screen
resolutions or color tables, even though I'm doing Windows
programming.

>----------------------------------------
>  BITNET--  mquinn@utcvm    <------------send files here
>  pro-line-- mquinn@pro-gsplus.cts.com

Dave Whitney	Microsoft Corp, Work Group Apps  dcw@goldilocks.lcs.mit.edu or
I wrote Z-Link and BinSCII - send me bug reports. {...}!uunet!microsoft!davewh
I only work here. All opinions herein aren't Bill's, they're mine.
"We're samplin' - Yeah we're doin' it. We take good music an' we ruin it."
   -- "Rap Isn't Music"

gwyn@smoke.brl.mil (Doug Gwyn) (04/21/91)

In article <9104191901.AA13371@beaver.cs.washington.edu> davewh@microsoft.UUCP writes:
>In fact, X is lousy at dealing with colors and color tables correctly.
>Knowing what's available hardly helps with X.)

This brings to mind a recent example of why it is BAD to know too much
about your specific implementation.  The X11 version of the "sam" text
editor worked okay on Sun black-and-white workstations, but failed
utterly on Silicon Graphics color workstations.  Our local Xpert found
that the basic problem was that the programmer had assumed that 1-bit
deep "visuals" would always be available in X environments, and had
relied on that "fact" (which was true for the Sun implementation) in
writing the "sam" X terminal code.  After we removed that assumption,
we then got the code to work on the color workstation, but it was
painfully slow at times.  That problem turned out to be due to an
assumption (also true for the Sun implementation) that the X server
would provide an efficient implementation of the XOR operation.  We
also eliminated that assumption and now have a program that runs much
faster in all X environments with essentially the same complexity as
the original.

I only wish that programmers would be more concerned with portability
when they implement the first version of a program.

MQUINN@UTCVM.BITNET (04/22/91)

On Fri, 19 Apr 91 16:07:30 CDT <microsoft!davewh@CS.WASHINGTON.EDU> said:
> MQUINN@UTCVM.BITNET writes:
>
>On Wed, 10 Apr 91 05:04:41 GMT Doug Gwyn said:
>>In article <1991Apr9.150402.563@latcs2.lat.oz.au> stephens@latcs2.lat.oz.au
>> (Philip J Stephens) writes:
>>>You can't program effectively in a HLL if you don't know the hardware
>>>you're working on, its limitations and its features.
>>
>>Wrong -- any decent HLL should be exploited in terms of the abstract
>>model of computation that it supports, most definitely not in terms
>>of any specific machine architecture.
>
>Doug's right:

[about 100 lines of why it's not necessary to know assembly]

Let me give another real world example (with my boss again)...

My boss calls herself a programmer.  She uses a pseudo-language on both the
IBM and the Mac.  They're both pathetically slow (although that has nothing
to do with this).  Whenever she declares an integer variable, she ALWAYS
declares it as a FOUR BYTE variable... even when she KNOWS that its value
will always contain a number between 0 and 199 (a variable for the Y location
on a 320x200 res. screen).  She does that because she doesn't know that a
2 byte variable will do the job just as well (even better) and save RAM
space, disk space, load time, and most importantly... execution time in
animation loops that constantly recalculate that variable.  I could give 100
other examples with this one person alone, but I believe I've made my point.

The main point of all of this is, you're MUCH better off knowing Assembly,
on top of what you already know, than NOT knowing assembly.
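
For the curious, here's roughly what the extra width costs, written in 6502
terms since that's this group's home turf (a hypothetical sketch -- her
pseudo-language on the XT compiles for another CPU entirely, but the story
is the same on any small machine):

     VAL      EQU  $06        ; hypothetical zero-page counter

     ; bump a 2-byte (16-bit) integer:
     INC2     INC  VAL        ; low byte
              BNE  DONE2      ; no carry out?  finished
              INC  VAL+1      ; ripple into the high byte
     DONE2    RTS

     ; the same bump on a 4-byte (32-bit) integer:
     INC4     INC  VAL
              BNE  DONE4
              INC  VAL+1
              BNE  DONE4
              INC  VAL+2
              BNE  DONE4
              INC  VAL+3
     DONE4    RTS

Every extra byte of width is more code and more cycles on every single
update, which is exactly what an animation loop can't spare.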

---sorry, I accidentally deleted the first line in your sig.---
>I wrote Z-Link and BinSCII - send me bug reports. {...}!uunet!microsoft!davewh
>I only work here. All opinions herein aren't Bill's, they're mine.
>"We're samplin' - Yeah we're doin' it. We take good music an' we ruin it."
>   -- "Rap Isn't Music"

----------------------------------------
  BITNET--  mquinn@utcvm    <------------send files here
  pro-line-- mquinn@pro-gsplus.cts.com

ericmcg@pnet91.cts.com (Eric Mcgillicuddy) (04/27/91)

>with this).  Whenever she declares an integer variable, she ALWAYS declares
>it as a FOUR BYTE variable... even when she KNOWS that its value will always
>contain a number between 0 and 199 (a variable for the Y location on a
>320x200 res. screen).  She does that because she doesn't know that a 2 byte
>variable ...
>----------------------------------------
>  BITNET--  mquinn@utcvm    <------------send files here
>  pro-line-- mquinn@pro-gsplus.cts.com

You are incorrect.  You do not KNOW that the variable will always be less
than 256.  Two bytes might be enough, since 64K scan lines is unlikely in
the near future.  However, what if the output device is not a screen?  What
if this were outputting to a laser printer?  Or what if the output were
being sent to chained monitors with some hardware connection such that scan
lines greater than 65535 were possible?

Her programming structure makes fewer assumptions about the environment than
yours does, and is planned for the future.  You have programmed for today,
and screw the maintenance programmer who has to fix it in coming years.
Memory is cheap.  Speed is cheap.  (The only exceptions are the Apple II, in
both cases, and the GS, in the second case.)  It is worth the extra cost to
make allowance for a port to a Mega-Pixel display, which might just be NeXT
year's standard.

Eric McGillicuddy

UUCP: bkj386!pnet91!ericmcg
INET: ericmcg@pnet91.cts.com

MQUINN@UTCVM.BITNET (04/29/91)

On Fri, 26 Apr 91 23:15:06 GMT <info-apple-request@APPLE.COM> said:
>>with this).  Whenever she declares an integer variable, she ALWAYS declares
>>it as a FOUR BYTE variable... even when she KNOWS that its value will always
>>contain a number between 0 and 199 (a variable for the Y location on a
>>320x200 res. screen).  She does that because she doesn't know that a 2 byte
>>variable ...
>
>You are incorrect.  You do not KNOW that the variable will always be less
>than 256.

YOU are ABSOLUTELY, POSITIVELY, WRONG!!!
First of all, I -=>DO<=- know that it will always be less than 201 (not 256
as you stated).  This program will not (YES, I KNOW THIS) be ported to
anything higher than CGA in the near future (or the not-too-distant future).
It's intended to be run on a 4.77MHz XT with CGA graphics.  Speed in this
animation program is of utmost importance, considering the language AND the
computers are both extremely slow.  This program will almost definitely
never be upgraded.  It's a tutorial and will be useful as is for decades to
come.

>Two bytes might be enough, since 64K scan lines is unlikely in the near
>future.

Again, you're wrong.  Knowing how this compiler uses bytes for variables, I
know that it always assumes a sign bit.  In other words, one byte can contain
the range (-128..+127).  Therefore, we need TWO bytes to represent 200.
I looked in the documentation for this but couldn't find anything to suggest
that there's a way to use unsigned bytes for variables.

>However, what if the output device is not a screen?

But it IS.  What else would you use for animation?

>What if this were
>outputing to a laser printer? Or what if the output were being sent to chained
>monitors with some hardware connection such that scan lines greater than 65535
>were possible?

If this were a possibility, then I'd change the code, but it most certainly
is not and will not be.

>Her programming structure makes fewer assumptions about the environment than
>yours does and is planned for the future.

HA!   She doesn't know anything about planning code for the future.   We KNOW
the environment will be XTs.  The program is designed to work on virtually
any IBM compatible system, so the lowest common denominator that suits the
purpose is a 4.77MHz XT with CGA graphics.  Knowing that, MY code is better.

>You have programmed for today and
>screw the maintenance programmer that has to fix it in coming years. Memory is

As I've said before, this is a 'dead end' project designed to work on the
lowest common denominator.

>cheap. Speed is cheap. The only exceptions are the Apple II in both cases and
>the GS in the second case. It is worth the extra to make allowance for ports
>to a Mega-Pixel display which might just be NeXT year's standard.

A mega-pixel display will certainly be a standard in the future, but most
students at home, who are not 'computer people', will more than likely not
have one.  Our 'assumptions' about the lowest common denominator will almost
certainly work on old AND new machines alike.

>Eric McGillicuddy
>
>UUCP: bkj386!pnet91!ericmcg
>INET: ericmcg@pnet91.cts.com

----------------------------------------
  BITNET--  mquinn@utcvm    <------------send files here
  pro-line-- mquinn@pro-gsplus.cts.com