[comp.lang.misc] assembly programming preferable to HLL programming ?

orr@instable.UUCP (Orr Michael ) (12/03/86)

	Have you seen the article in "Computer Language" magazine
of Oct. '86 about a "Universal assembly language"?

    This article claims that it is (always/usually) better to
use assembly language rather than an HLL (any HLL).  This is based on
the following claims:

    1. The ONLY significant advantage of an HLL is a shorter CODING time.
    2. Design, documentation, and testing time are (almost) the same in both cases.
    3. The assembly program will run 2-5 times faster.
    4. So, after enough runs of the program, the coding time gap will be
       swallowed up.  From then on, the assembly program gains non-stop.

   Seems to me that this does not hold water.

    1. No mention of changes/maintenance issues anywhere.
    2. I strongly question ALL of the above assumptions.
    3. As one of the compiler writers for NS: if assembler programs,
	as a rule, ran 2 times faster than our compiler's output,
	I would be greatly surprised and FIX THE COMPILER !

   The author also suggests a "UNIVERSAL ASSEMBLER" to run on many machines.
I think FORTH already fits the bill, and has many other advantages.

			Any Comments,  netlanders ?

-- 
orr%nsta@nsc			 IBM's motto: Machines should work,
                                              People should think.
                                 Orr's remark: Neither do.
Disclaimer: Opinions, come home. All is forgiven. Papa.

ken@rochester.ARPA (SKY) (12/04/86)

I wonder who the author is and how much programming he has done.

Some relevant points:

HLLs make it easier to program good algorithms, with speedups that make
a factor of 2 look sick.

If you need to squeeze every last microsecond out of the program,
consider that normally 20% of the program takes 80% of the run time.
(Or was it 10/90?)  Find the hot spots and optimize the hell out of
them, in assembler if need be.  This mixed approach gets you the
advantages of HLLs most of the time and the efficiency you want.
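
To make that concrete, here is a minimal C sketch (the routine and its
name are just an illustration, not from any real project): the hot spot
is isolated behind an ordinary function call, so once the profiler
points at it you can swap in a hand-coded assembler version of that one
routine and leave the rest of the program alone.

    /* dotprod.c -- the one routine the profiler fingered.  It keeps a
       plain C calling sequence, so a hand-tuned assembler replacement
       can be linked in later without touching the callers. */

    long dot_product(a, b, n)
    int *a, *b, n;
    {
        long sum = 0;
        int i;

        for (i = 0; i < n; i++)
            sum += (long)a[i] * (long)b[i];
        return sum;
    }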

Finally, I'm writing this compiler. It's only a student project but
already 6000 lines and growing, 1000 of which I added in the last
week.  I don't know anybody in his right mind who would suggest writing
it in assembler these days.

A good book to read is Jon Bentley's "Writing Efficient Programs".

	Ken

dyer@atari.UUcp (Landon Dyer) (12/04/86)

>  1. The ONLY significant advantage of an HLL is a shorter CODING time.
>  2. Design, documentation, and testing time are (almost) the same in both cases.
>  3. The assembly program will run 2-5 times faster.
>  4. So, after enough runs of the program, the coding time gap will be
>     swallowed up.  From then on, the assembly program gains non-stop.
> 
> Seems to me that this does not hold water.
> 
>  1. No mention of changes/maintenance issues anywhere.
>  2. I strongly question ALL of the above assumptions.
>  3. As one of the compiler writers for NS: if assembler programs,
>     as a rule, ran 2 times faster than our compiler's output,
>     I would be greatly surprised and FIX THE COMPILER !

Generally, assembly-language programs will run faster (and will
be smaller) than compiler-generated code.  Depending on your
machine's architecture, though, you may never want to program in
its assembly language (e.g. IBM 801 and other RISCs).  Friendlier
machines like the 68000 are another story.

     1. No one cares how much you suffered.  The *only* thing that
	a user sees is the product's performance . . . or lack thereof.
	The user doesn't care what you wrote it in.  Once the product
	gets to market it stands on its own merit.

     2. It probably *is* more expensive to write a system in assembly
	than in an HLL.  Is it worth the effort?  What's the return?

     3. Given a piece of source code to compile, some present day
	compilers may well be able to out-perform a human (not for
	compilation speed, but for execution speed and space).

	But this is not what happens when a human writes in assembly
	language.  Many perfectly acceptable structures in assembly
	are illegal or difficult to express in an HLL.  The whole
	flavor of the project changes . . . not just the code generator.

	Humans are perfectly capable of using those "weird, high level"
	instructions that compiler writers claim are useless.  (Show me
	a 68000 C compiler that uses MOVEP, TAS or SNE!)
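
For what it's worth, the C-level idioms behind two of those instructions
are perfectly ordinary; whether a given compiler notices them is another
matter.  A small sketch of my own (not the output of any particular
compiler):

    /* SNE: materialize a "not equal" condition as a byte.  By hand you
       would compare and then SNE (and mask, if you need exactly 0/1). */
    int not_equal(a, b)
    int a, b;
    {
        return a != b;
    }

    /* TAS: an atomic test-and-set, the heart of a busy-wait lock.  The
       plain C below is NOT atomic as written; a human coding the 68000
       version would loop on a single TAS instruction. */
    char lock_byte;                 /* 0 = free, nonzero = held */

    void acquire()
    {
        while (lock_byte != 0)
            ;                       /* spin */
        lock_byte = 1;
    }

MOVEP, which does byte-at-a-time transfers to peripheral registers, has
no tidy C equivalent at all.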


Lotus 1-2-3 is non-portable, written mostly in assembly, and has made
its writers a *bundle* of money.  I don't suppose they're complaining
about how hard it was to write.

-- 

-Landon Dyer, Atari Corp.		        {sun,lll-lcc,imagen}!atari!dyer

/-----------------------------------------------\
| The views represented here do not necessarily | "If Business is War, then
| reflect those of Atari Corp., or even my own. |  I'm a Prisoner of Business!"
\-----------------------------------------------/

fouts@orville (Marty Fouts) (12/05/86)

I can't speak to the various claims of the author about coding of a single
application which only runs on a single processor, but I would like to point
out that the issue of moving code from one place to another seems to have
been ignored.

I don't think that a universal assembler is reasonable or even likely.
Machines vary widely, and contain concepts which don't even relate.  Some
machines (particularly multiprocessors) have a hierarchy of memory
requiring different machine instructions to access different classes of
memory, while others don't.  Some machines have particular register sets,
such as vector registers or register windows.

Transportable code is important in distributed environments.  I regularly
write "throwaway" code on a VAX which is routinely executed on Amdahls,
68K workstations and Crays.  Most of this code doesn't get executed enough
times before it is replaced to justify the tradeoff in development time
it would take to implement it in assembler for one machine, let alone for
the four that it runs on.

I'm not saying that all code is of this nature.  Some of the code I've written
for the Cray 2 has been profiled, and then had the hot spots reimplemented
in tightly crafted assembler code.  Other people have handcrafted entire
libraries and even complete applications to take advantage of the machine.
There are (arguably) some things for which assembler is better, but there
are many distributed, light duty, "throwaway" applications for which
assembly language crafting doesn't make sense.

Actually, as I write this, I think I will address some of the claims in
favor of assembler code.  The idea that it takes the same amount of effort
to maintain and debug assembler-based systems as HLL-based systems is not
borne out in operating system maintenance.

We have 4 Vaxen and a Cray 2 running native Un*x implementations, and a
dual processor Amdahl running MVS and UTS under VM.  We have as many people
doing Amdahl software support as we have supporting the rest of the systems
combined.

Ports of HLL applications are easier, even when the architecture is similar.
In one case I am aware of, a vendor chose to port an existing assembly-language
Fortran compiler to a new machine and also to implement a new Fortran compiler
in Pascal for both the existing and the new machine.  They have put less
(maybe less than half?) effort into the Pascal implementation than into the
assembly-language port, and have products of roughly equal quality.

I reject any claim that language A or language type X is better than language
B or language type Y for all purposes.  Some things make sense to do in
assembler and others in HLLs.  As HLL implementations get better, machines
become more difficult to understand, and distributed applications become
more prevalent, the number of things which it makes sense to do in assembler
gets smaller.

karl@haddock.UUCP (Karl Heuer) (12/05/86)

In article <646@instable.UUCP> orr@instable.UUCP (Orr Michael ) writes:
>    3. As one of the compiler writers for NS: if assembler programs,
>	as a rule, ran 2 times faster than our compiler's output,
>	I would be greatly surprised and FIX THE COMPILER !

Careful coding in assembler can indeed result in a doubling of speed (but
not uniformly).  There are some things that current compilers just don't
know how to optimize, or can't because of lack of information.  E.g., one
hand-optimization I recently did on a VAX was to convert a recursive routine
from standard form (CALLS instruction, args starting at 4(AP)) to a BSBB with
args in scratch registers and a brief CALLS-callable prolog.  I'd be quite
impressed with a mechanical optimizer that could match what I did.
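
In HLL terms the shape of that transformation is roughly the sketch below
(my own reconstruction of the idea, with a trivial stand-in routine, not
the actual code): keep a conventionally-callable entry point, and let the
recursion itself go through a leaner internal routine.

    /* The public entry point keeps the standard calling convention; the
       recursion bottoms out in a cheap internal helper.  In the VAX
       version the helper was entered with BSBB and took its arguments
       in scratch registers rather than on the stack. */

    static int fib_fast();

    int fib(n)                      /* the brief CALLS-callable prolog */
    int n;
    {
        return fib_fast(n);
    }

    static int fib_fast(n)          /* the recursive worker */
    int n;
    {
        if (n < 2)
            return n;
        return fib_fast(n - 1) + fib_fast(n - 2);
    }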

>   The author also suggests a "UNIVERSAL ASSEMBLER" to run on many machines.
>I think FORTH already fits the bill, and has many other advantages.

I suspect it's much easier to write a portable program in C than in FORTH.
Doesn't FORTH assume that integers and all flavors of pointer are identical?
(Many older C programs do too, but I think FORTH depends on it.)

Karl W. Z. Heuer (ima!haddock!karl or karl@haddock.isc.com), The Walking Lint

faustus@ucbcad.BERKELEY.EDU (Wayne A. Christopher) (12/05/86)

Maybe the situation is different for people who write widely-used
programs for PCs, like Lotus, but in a research environment the
critical factor isn't speed of execution, but speed of development.
Why pay a programmer to spend twice as long making the program more
efficient when you can spend a smaller amount buying a faster machine?
I know, this may not be the case very often yet, but the fact is that
the time has passed when programmers existed for their machines.  Now the
machines exist for the programmers, and we'd better be paying more
attention to how we can make things easier for the programmer than for the
machine.

	Wayne

mao@blipyramid.BLI.COM (Mike Olson) (12/08/86)

> in article 476, landon dyer ({sun,lll-lcc,imagen}!atari!dyer) writes:

> Generally, assembly-language programs will run faster (and will
> be smaller) than compiler-generated code.  Depending on your
> machine's architecture, though, you may never want to program in
> its assembly language (e.g. IBM 801 and other RISCs).  Friendlier
> machines like the 68000 are another story.

i agree completely.  i've done a significant amount of development work
in assembly language on a macintosh (68k), and i didn't have any real
trouble coding "high-level" constructs.  sneaky c data structures like
linked lists and unions are no harder to create or understand in a
sufficiently rugged assembly language than they are in c.  most of the
stuff that would be really difficult to implement repeatedly (disk i/o,
memory management, etc.) is supported by o/s calls (just like unix).
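
a linked list, for instance, is just a couple of fixed offsets from a
base address whichever way you write it.  a toy example of my own (not
from any real project), with the 68000 view of it in the comments:

    /* a singly linked list node.  with the node address in a0, "next"
       is 0(a0) and "value" is 4(a0) -- the assembler version is the same
       base-plus-offset arithmetic the compiler does for you. */
    struct node {
        struct node *next;          /* 0(a0) */
        long         value;         /* 4(a0) */
    };

    /* walk the list and total the values */
    long sum_list(p)
    struct node *p;
    {
        long total = 0;

        while (p != 0) {
            total += p->value;
            p = p->next;
        }
        return total;
    }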

> 	Humans are perfectly capable of using those "weird, high level"
> 	instructions that compiler writers claim are useless.  (Show me
> 	a 68000 C compiler that uses MOVEP, TAS or SNE!)

well spoken, landon.  one other point:  i think it's important for a human
to know that such instructions exist.  forget optimizers.  you'll write
better code if you understand your machine's instruction set.  every
assembly hacker knows the native integer size, byte ordering, and so forth,
for the machine he/she works on.  that's not true for every c programmer.

finally, development time probably *is* longer for assembly code, and
it's certainly non-portable.  on the other hand, when i want to write
a routine that sits on a digital input port and transforms incoming
data *instantly*, i'm not going to waste my time doing it in c.

					mike olson
					(...!ucbvax!blia!mao, i think)

all of my opinions have prior convictions, and are not to be trusted.

gaynor@topaz.RUTGERS.EDU (Silver) (12/09/86)

I think the key to being able to conveniently write assembly code lies
in a really bitching macro facility, with the general approach being
'define whatever weird constructs you like using the macro expander,
but, because YOU are supplying the underlying code, you can make them
as efficient as you want'.  Now, the program itself is not necessarily
non-portable, just the macro code.  Since most of the macro routines
are not too complex when singled out, they're easier to rewrite.

Silver.

uucp:  ...!topaz!gaynor  ...!topaz!remus!gaynor
arpa:  gaynor@topaz      silver@gold

Disclaimer: This posting contains the authors two cents, not the two
            cents of anybody else (they might notice the shortage).

nather@ut-sally.UUCP (Ed Nather) (12/09/86)

In 1975 I wrote a real-time data display and acquisition program for a DGC
Nova computer entirely in assembly language.  It took very close to a year
to get it debugged and working, despite the fact I had written essentially
the same program 6 times before.  (New peripherals became available.)
I did my best to use modular constructs and other good things, but the code 
is still very difficult to work on.  It runs fast.

I'm now re-writing the same program in C for an IBM PC, and I am amazed at
how much faster it is going.  I have it about 1/2 done and it's taken less
than a month.  I still did the time-critical parts in assembly (the display
updating, and the driver for the input port), because they were too slow in
C -- and I learned how GOD-AWFUL segments are ...

Because the job is much more quickly done in C, I am including a (simple)
full-screen editor for use as a logbook maintainer, and the program set-up
has a restricted full-screen editor that works in a window.  I would NEVER
include these goodies if I had to do the whole thing in assembly code.  It
would just be too much work.  

So, for what it's worth, writing in C will result in a better, more versatile
program that is easier to use, with no sacrifice in the basic operation or
essential speed.

It's easy to write poor, slow programs in any language.  This is the first
chance I've had to compare the SAME program written two different ways.  I
am now convinced about HLLs -- well, C anyway.
-- 
Ed Nather
Astronomy Dept, U of Texas @ Austin
{allegra,ihnp4}!{noao,ut-sally}!utastro!nather
nather@astro.AS.UTEXAS.EDU

rb@cci632.UUCP (Rex Ballard) (12/09/86)

In article <646@instable.UUCP> orr@instable.UUCP (Orr Michael ) writes:
>	Have you seen the article in "Computer Language" magazine
>of Oct. '86 about a "Universal assembly language"?
>    This article claims that it is (always/usually) better to
>use assembly language rather than an HLL (any HLL).  This is based on
>the following claims:
>
>    1. The ONLY significant advantage of an HLL is a shorter CODING time.

I wonder what HLL he was comparing.  I don't know of any assemblers that
do error checking to the extent that lint does.  I would guess that the author
also has a "bottom up" style of implementation, which means debugging and
testing the primitives before moving to higher levels.

>    2. Design, documentation, and testing time are (almost) the same in both cases.

Again, this depends on the development environment.  On a per-line basis,
given a fully debugged library, it can be possible to keep assembler costs
down.  Unfortunately, most assembler environments do not provide for unit
testing.  The end result is that even with good primitives and macros, one
is still faced with testing, say, 10,000 lines of assembler code on a target
machine which may have minimal debugging tools, or unit-testing C code on the
development machine and then testing the port to the target.

>    3. The assembly program will run 2-5 times faster.

This seems to be true when assembly routines are not re-entrant and module
interconnectivity (which registers are significant, ...) is not a critical
issue.  Unfortunately, good primitives are difficult to connect if they
use frames rather than registers.

>    4. So, after enough runs of the program, the coding time gap will be
>       swallowed up.  From then on, the assembly program gains non-stop.

Assuming that there is only one release, and no intent to upgrade the
system, this may be true.  Unfortunately, because of the "batch mode"
environment of assembler, not every path is always tested.

>   Seems to me that this does not hold water.
>    1. No mention of changes/maintenance issues anywhere.

Perhaps the author has not been required to work on a product that required
frequent enhancements.  More importantly, it would appear that the
author has never had to work on someone else's large system, written entirely
in assembler.

>   The author also suggests a "UNIVERSAL ASSEMBLER" to run on many machines.
>I think FORTH already fits the bill, & has many other advantages.

It sounds more like the old dream of UCSD Pascal P-code.  If FORTH had
some "standard entry vectors", it could possibly fit the bill, but the
biggest problem there is that portability is strictly at the source code
level, lacking intermediate "object code".

My observation of most "language/assembler" debates is that programmers
often defend the working environment of a language, rather than the merits
of the language itself.  An interactive environment like FORTH or Prolog
is preferable to a batch/target environment.  This does not mean I would
prefer a "batch mode FORTH" to an "interactive C" environment.

I can't help but think the author is defending a programming style, and
environment, rather than the language itself.

Finally, there is the issue of the "right tool for the right job".  I use
several different languages, ranging from assembler and C to Lex, Yacc,
and Prolog.  I wouldn't want to do a communications driver in Prolog, and
I wouldn't want to do an associative data base in assembler.  I also don't
use a hammer to drive screws, or a wrench to drive nails.

Rex B.

billw@navajo.STANFORD.EDU (William E. Westfield) (12/10/86)

Well, as long as it's under discussion anyway, does anyone have any
hints as to how to implement ELSEIF as an assembler macro (specifically
for the 8086 MASM v4)?

It turns out that IF, ELSE, and ENDIF are pretty simple:
  IF increments n, and generates a conditional jmp to BEGn.
  ELSE defines BEGn, adds 1 to n (call it m), and generates a jmp to BEGm.
  ENDIF just defines BEGn or BEGm.
  Another counter (x) is used to keep track of n for each level of nesting.

The problem is that ELSEIF needs to skip over an arbitrary number of n's,
which is unknown at the time the JMP is generated.  Anybody have any
ideas ?

ELSEIF should generate an unconditional JMP to the ENDIF, define BEGn, and
create a new conditional jmp to the next ELSEIF, ELSE, or ENDIF.  E.g.,
for an 8086:
ELSEIF(Condition,instruction) yields:
	JMP ENDIF
BEGn:	instruction
	JNcondition BEGm

I'd settle for code where the JMP ENDIF was actually a jmp to the next
unconditional jmp, which would eventually end up at ENDIF (hmm, I have
an idea I'll have to look at).  Anybody else have suggestions ?
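
For reference, here is the jump structure the whole IF/ELSEIF/ELSE/ENDIF
chain has to expand into, modelled with C gotos rather than MASM (the
labels play the part of BEGn/BEGm and ENDIF; the example itself is made
up).  The two unconditional "goto done" jumps are the ones ELSEIF and
ELSE have to emit without yet knowing the ENDIF label -- chaining each
one to the next unconditional jump also works, since control ends up at
ENDIF either way.

    int classify(x)
    int x;
    {
        int r;

        if (!(x < 0)) goto beg1;    /* IF: conditional jmp to BEG1      */
        r = -1;
        goto done;                  /* ELSEIF: jmp toward ENDIF         */
    beg1:
        if (!(x == 0)) goto beg2;   /* ELSEIF: new conditional jmp      */
        r = 0;
        goto done;                  /* ELSE: jmp toward ENDIF           */
    beg2:
        r = 1;
    done:                           /* ENDIF                            */
        return r;
    }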

BillW

rentsch@unc.UUCP (Tim Rentsch) (12/10/86)

From previous articles....
> > 1. The ONLY significant advantage of an HLL is a shorter CODING time.
> > 2. Design, documentation, and testing time are (almost) the same in both cases.
> > 3. The assembly program will run 2-5 times faster.
> > 4. So, after enough runs of the program, the coding time gap will be
> >     swallowed up.  From then on, the assembly program gains non-stop.
> 
> Generally, assembly-language programs will run faster (and will
> be smaller) than compiler-generated code.  Depending on your
> machine's architecture, though, you may never want to program in
> its assembly language (e.g. IBM 801 and other RISCs).  Friendlier
> machines like the 68000 are another story.
> 
>      1. No one cares how much you suffered.  The *only* thing that
> 	a user sees is the product's performance . . . or lack thereof.
> 	The user doesn't care what you wrote it in.  Once the product
> 	gets to market it stands on its own merit.
> 
>      2. It probably *is* more expensive to write a system in assembly
> 	than in an HLL.  Is it worth the effort?  What's the return?
> 
>      3. Given a piece of source code to compile, some present day
> 	compilers may well be able to out-perform a human (not for
> 	compilation speed, but for execution speed and space).

I can't believe I am reading this debate in 1986!  Where were you
guys during the 1970's and 80's?  Didn't you ever hear of structured
programming?  Sheesh!

Software engineering has taught us a few things which should by now
be common knowledge.  (I presumed everyone knew them.  I see I was
in error.)  For example:

	1. A given programmer writes the same number of lines of code
	   per day, independent of language.

	2. HLL code is about 5 times as functional, on a per-line
	   basis, as assembly language.

	3. The variability between programmers is much greater, on
	   the order of 25 to 1.

	4. "Most" of the time in a program is spent in a "small"
	   amount of the program.  Sometimes these numbers are
	   reported as 80% and 20%; sometimes 90% and 10%.

	5. It is very difficult to know ahead of time where the 10%
	   will be.

The lesson to be learned is to code your entire program in HLL, then
after it is running go back and recode the "important" parts in
assembly language.  To know what the important parts are, use a
performance monitor (e.g., statement level execution statistics).

It is true (in my experience, anyway) that human-generated assembly
language tends to be smaller than compiler-generated code.  But not
by a lot.  Programs running on contemporary machines (where a mere
eight chips is already one megabyte) tend to need space for DATA
rather than for CODE.  The space-time tradeoff between various data
representations is better expressed in HLL, unless it is at a
time-critical point in the program.  But remember, you don't know
when you write the program where those bottlenecks will be, so you
are still better off doing the initial version in a HLL, and then
recoding the important parts in assembly.

> 	But this is not what happens when a human writes in assembly
> 	language.  Many perfectly acceptable structures in assembly
> 	are illegal or difficult to express in an HLL.  The whole
> 	flavor of the project changes . . . not just the code generator.

Question: do those structures even need to be expressed in the HLL?
The answer is, No, they do not.  Once the program is working and the
bottlenecks re-written in assembly (10% of the total program,
remember?), the structures can be changed so that only the assembly
language primitive routines can access them, and the HLL structures
can be filled in with appropriate "dummy records".  The global
program structure remains intact, with only local changes to
representation of key data types and access methods.
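
A sketch of what I mean by "dummy records", in C (the names and the
16-byte size are purely illustrative): the HLL side sees an opaque block
of the right size and just passes it around, while the real layout is
known only to the assembly-language primitives.

    /* The HLL view of a data type whose representation was taken over
       by assembly primitives.  SYMENT_SIZE must match the layout the
       assembler routines use; nothing else about it is visible here. */

    #define SYMENT_SIZE 16

    struct syment {                 /* the "dummy record" */
        char opaque[SYMENT_SIZE];
    };

    /* the time-critical primitives, implemented in assembly language */
    extern void sym_insert();       /* sym_insert(table, name, value) */
    extern int  sym_lookup();       /* sym_lookup(table, name)        */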


> 	Humans are perfectly capable of using those "weird, high level"
> 	instructions that compiler writers claim are useless.  (Show me
> 	a 68000 C compiler that uses MOVEP, TAS or SNE!)

Those instructions are useless to compiler writers, and useful to
human assembly language programmers.  That still doesn't mean the
entire program should be written in assembly; the assembly language
part should still be the 10% that is necessary for the program to run
fast.  


> Lotus 1-2-3 is non-portable, written mostly in assembly, and has made
> its writers a *bundle* of money.  I don't suppose they're complaining
> about how hard it was to write.

That doesn't prove a thing.  Yes, it *can* be done.  So what?  It is
possible to simulate weather on a Turing machine, too -- and I'm
sure that if there were someone willing to pay for it, someone else would
do it.  But it still would be easier to write in another (HL) language.

So, c'mon, guys, get with it.  Join the present.  The assembly
language debate was settled in the last decade.  Let's also end it
here.

"Can you say 'BALR?'  Horrors!"

cheers,

txr

rentsch@unc.UUCP (Tim Rentsch) (12/10/86)

From previous articles....
> i've done a significant amount of development work
> in assembly language on a macintosh (68k), and i didn't have any real
> trouble coding "high-level" constructs.  sneaky c data structures like
> linked lists and unions are no harder to create or understand in a
> sufficiently rugged assembly language than they are in c.

Most programmers don't have trouble coding in assembly language, it's
just that they code faster in a HLL.  If the language provides
support for things like type checking, they code more reliably in a
HLL as well.  Assembly language does not provide these "luxuries".
("Sufficiently rugged assembly language"?  By this do you mean an
assembly language with control structures, data structures, and type
checking?  How is a "sufficiently rugged assembly language"
different from a HLL?)  

In this regard I should point out that C is only barely a HLL, and
should be nominated to replace FORTRAN as the world's most popular
assembly language.


> one other point:  i think it's important for a human
> to know that such instructions exist.  forget optimizers.  you'll write
> better code if you understand your machine's instruction set.  every
> assembly hacker knows the native integer size, byte ordering, and so forth,
> for the machine he/she works on.  that's not true for every c programmer.

The whole point is that HLL programmers DO NOT need to think about
such things, because most of the time it is not important.  Who
cares about all that stuff?  Sure, some of the time it matters, and
then you (or someone) needs to know it.  But most of the time (and
for most of the code) it does NOT matter, and what the compiler
generates is acceptable.


"Can you say 'BALR'?  Horrors!"

cheers,

txr

rentsch@unc.UUCP (Tim Rentsch) (12/10/86)

From previous article....
> I think the key to being able to conveniently write assembly code lies
> in a really bitching macro facility, with the general approach being
> 'define whatever weird constructs you like using the macro expander,
> but, because YOU are supplying the underlying code, you can make them
> as efficient as you want'.  Now, the program itself is not necessarily
> non-portable, just the macro code.  Since most of the macro routines
> are not too complex when singled out, they're easier to rewrite.

Gee, along with that macro facility, wouldn't it be nice to have a
facility for automatically passing parameters to a routine?  And
maybe some simple data-structuring macros?  Maybe a flexible
macro-call syntax which allows expressions to generate code?  And
how about some macros to do IF's and WHILE's?  Maybe some routine
calling macros, and some simple checks on argument validity?

Before laughing too hard, ask an assembly language programmer with
"a really bitching macro facility" if he has ever wanted those
things, or if he has tried to implement any himself.  Then tell me
(with a straight face) that the result is significantly different
from a HLL.

"Can you say 'BALR'?  Horrors!"

cheers,

txr

usenet@ucbvax.UUCP (12/11/86)

Summary: if time is really critical, use hardware

the people who favor assembly language all push how much faster it executes.
even if this is true, it isn't relevant.  you can implement the same
functions in hardware and do it even faster.  for example, i've developed
control systems (proportional plus integral control actions) for dc motors
in c, in assembly language, and in hardware (designing a digital circuit).
the c code took the least time and was slightly slower than the assembly
language code.  the hardware design took slightly less time to develop than
the assembly language code but was an order of magnitude faster.

assembly language has its place, but there aren't any universals as to what's
best.  most of my software needs to be highly transportable, so i generally
use FORTRAN.  it may not be elegant, but everyone has it.  i use c for
programs i'll be the only one to use.  assembly language is fine for real
time applications, although hardware implementations are preferable for
linear problems.  knowing how to do all of these (at least having a
"functional literacy" in all) is more useful than allowing prejudices to
interfere with real productivity.
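
a minimal sketch of the kind of p+i loop i'm describing (in c; the gain
constants and the i/o routines here are made-up placeholders, not taken
from the real system):

    /* proportional-plus-integral control loop.  read_speed() and
       write_drive() stand in for whatever a/d and d/a access the real
       hardware needs; kp, ki and the scaling are invented numbers. */

    extern int  read_speed();           /* sample the motor speed   */
    extern void write_drive();          /* write_drive(level)       */

    void pi_loop(setpoint)
    int setpoint;
    {
        long integral = 0;
        int  kp = 8, ki = 1;

        for (;;) {
            int error = setpoint - read_speed();

            integral += error;
            write_drive((int)((kp * (long)error + ki * integral) / 16));
        }
    }
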
From: dma@euler.Berkeley.EDU (Controls Wizard)
Path: euler.Berkeley.EDU!dma

stuart@bms-at.UUCP (Stuart D. Gathman) (12/12/86)

In article <386@unc.unc.UUCP>, rentsch@unc.UUCP (Tim Rentsch) writes:

> Gee, along with that macro facility, wouldn't it be nice to have a

	[ various high level features desired in an assembler deleted ]

> What's the difference between this and an HLL?

The difference is that the assembler would still give you detailed low level
control but would not be portable.
-- 
Stuart D. Gathman	<..!seismo!dgis!bms-at!stuart>

gaynor@topaz.UUCP (12/12/86)

In article <386@unc.unc.UUCP>, rentsch@unc.UUCP (Tim Rentsch) writes:
> From previous article.... [myself]
> > I think the key to being able to conveniently write assembly code lies
> > in a really bitching macro facility, with the general approach being
> > 'define whatever weird constructs you like using the macro expander,
> > but, because YOU are supplying the underlying code, you can make them
> > as efficient as you want'.  Now, the program itself is not necessarily
> > non-portable, just the macro code.  Since most of the macro routines
> > are not too complex when singled out, they're easier to rewrite.
> 
> Gee, along with that macro facility, wouldn't it be nice to have a
> facility for automatically passing parameters to a routine?  And
> maybe some simple data-structuring macros?  Maybe a flexible
> macro-call syntax which allows expressions to generate code?  And
> how about some macros to do IF's and WHILE's?  Maybe some routine
> calling macros, and some simple checks on argument validity?
> 
> Before laughing too hard, ask an assembly language programmer with
> "a really bitching macro facility" if he has ever wanted those
> things, or if he has tried to implement any himself.

>                                                       Then tell me
> (with a straight face) that the result is significantly different
> from a HLL.

It is not, which is what makes it reasonable...

Now, come on.  I'm NOT saying that programming in assembler is
preferable to programming in a given HLL - *read* the portions that
you've included!  The point I'm trying to get across is that, with the
proper support, programming in assembler (for whatever purpose and
reason) can be done in a portable and structured manner that makes the
task much easier.  Consider the following circumstance:

  You've just written a real CPU hog which will have to be able to
  port easily.  Because it's a hog, you want to write the hungry code
  in assembler for efficiency.  You DO rewrite the pertinent, say, 5%
  in assembler.  Now you try to port it to a different machine.

Do you want to rewrite all of the assembler code?

I personally wouldn't ever write in assembler (unless I'm being
grossly over-compensated), for all the previously stated reasons.

Silver.

/-------------------------------------------------\
| uucp:  ...!topaz!gaynor  ...!topaz!remus!gaynor |
|        ~~~~~~~~~~~~~~~~                         |
| arpa:  gaynor@topaz      silver@gold            |
\-------------------------------------------------/

ps
   alias hell /dev/null
   Send flames to hell (where they belong :-).

moraes@utcsri.UUCP (Mark A. Moraes) (12/15/86)

HLL vs Assembly => Sticks and stones

I thought that Dijkstra, Wirth, Brooks et al. had terminated this
argument a long time back.  Anyway, while it's on, here are some
pro-HLL points.

One BIG problem is learning all the assembly languages for all
those machines out there.  If you're talking about Universal
Assembly languages, what happens to originality in instruction
sets?  You're taking away all the fun from those of us who design
chips.  Try getting any CPU designers to agree on a standard (!)
Universal Assembly language and I suspect it will end up with
a definition the size of Ada.

Further, when a new CPU turns up, the main reason we can all start
getting useful work done immediately is that we port our
SLOW BIG HLL code across and re-compile (OK - so it's not that simple :-),
including OSs like UN*X.

It saves the time of having to learn 
the assembly language of a new chip. As a matter of fact, I
think this debate was more or less settled when UNIX was written in
C (most of it, anyway). 

 > in article 476, landon dyer ({sun,lll-lcc,imagen}!atari!dyer) writes:
 
 > Generally, assembly-language programs will run faster (and will
 > be smaller) than compiler-generated code.  Depending on your
 > machine's architecture, though, you may never want to program in
 > its assembly language (e.g. IBM 801 and other RISCs).  Friendlier
 > machines like the 68000 are another story.
 
Unfortunately, RISC instruction sets can be learnt quickly, IF you
have to program in ASSEMBLER.
 
 > 	Humans are perfectly capable of using those "weird, high level"
 > 	instructions that compiler writers claim are useless.  (Show me
 > 	a 68000 C compiler that uses MOVEP, TAS or SNE!)
 
By the time some of us humans learn what MOVEP, TAS and SNE are useful for,
and what their equivalents on, say the NS32332 or WE32000 are, the chips will
probably be outdated.

As someone who programs applications (CAD tools for VLSI), I find
portability across a large range of machines far more vital than
gaining a questionable 10-20% increase in speed.  As an example, all
I had to do to port a simulator from a VAX to an NS32332, both running
UNIX, was to re-compile the source.  It worked the first time.  I'm certain
enough people on the net will have had similar experiences.
I don't doubt that writers of compilers find the writing-speed/running-speed
trade-off much more difficult than I do.  But today, when machine time
and memory are generally cheap, even they are coming down on the
side of portability - more machines = more software sold = more money.

Sure, assembler is really nice for transforming data coming from some 
sensor or something, but do you also want to write the user interface in
assembler? It boils down to the old software engineering saying about
re-writing the time-critical parts in assembler.

I'm willing to stick my neck out and claim that debugging is a lot
easier in HLLs, even if they don't have such things as symbolic
debuggers (as long as they have a printf/WRITE/PRINT statement).
Profilers are usually relatively easy to write if you don't get one with
your source, and so are simple debugging tools.  This pays off in the
functionality of programs.

Insidious bugs are a lot easier to make in assembler.  In HLLs, with
such things as modularity and structured programming, it is actually
possible to get something near bug-free the first time round.  If not,
bugs don't lie dormant for long, and even when they do, they are
comparatively easy to correct.

Many of us tend to like assembler BECAUSE it is difficult, and more
of a challenge.  But there are a vast number of people who produce
useful programs in languages like C (which is comparatively low-level),
FORTRAN, Pascal, Euclid ......  If the high priests had their way, most of
these people would not be allowed near a computer - What!  Make programming
easy so that the masses can write their own programs - Blasphemy!

About Lotus 1-2-3, I sometimes wonder why we haven't seen it on the
'new' 68000 machines like the 520ST, Amiga etc. Jazz is nowhere near it.
Maybe it has something to do with 8088 assembly not running on the
68000 :-) :-) Ah well! Makes Sidecar a very profitable business.

Mark Moraes.

uucp:{ihnp4 decwrl utzoo uw-beaver decvax allegra linus}!utcsri!moraes
arpa:moraes%toronto@csnet-relay
csnet:moraes@toronto
moraes%utoronto.bitnet

bills@cca.CCA.COM (Bill Stackhouse) (01/07/87)

The variation between programmers is very important when discussing
Asm vs. HLL coding.  Only a small percentage of programmers can
consistently code at the same level of efficiency, and therefore
one part of a system may really scream and another may be a real pig
(often this is due more to using a poor algorithm, and may also
be true in an HLL system).

Compiler technology should allow all HLLs to have an optimizer that
can be used just prior to shipping a piece of code to make the
result equal to or better than hand-coded assembler.  If there is a
gripe, it is with the code generators that we are forced
to live with.

In general, I agree that HLLs are the only way to go, and I can think
of very few reasons to use more than 0.5% assembler in any large system.

Down with assemblers and all the old-generation languages like COBOL,
Fortran, etc.  Oops!  Sorry about that.


-- 
Bill Stackhouse
Cambridge, MA.
bills@cca.cca.com

spencert@rpics.RPI.EDU (Thomas Spencer) (01/08/87)

      I've lost most of the messages in this discussion.  Somebody claimed
that one should sometimes (often?  always?) program in assembly language
because HLL compilers use complicated subroutine calling conventions that
are often not needed.  It seems to me that the obvious thing to do is to
build smarter compilers that can make these optimizations.  What is the
state of the art on this issue?  I'll summarize any responses that I get
by mail.
                               -Tom Spencer

neff@hpvcla.HP.COM (Dave Neff) (01/09/87)

The new HP "RISC" machine gets much of its high-level-language
performance benefit from the fact that it has 32 general-purpose
registers.  The HLL compilers partition the usage of these registers
and use them both for local variables and for parameter
passing (unless all else fails).  When all else fails the optimizer
decides what data should be put on the stack.  If I recall correctly,
the conventions allow for up to 16 parameters passed to another
subroutine via registers.  In my opinion much of the performance
benefit of this architecture comes from the fact that the machine
has many general-purpose registers, and the compilers sound like
they do excellent optimization.

I have no direct experience with these machines, all the above
info comes from HP Journal articles filtered through my imperfect
memory.  By the way, the Unix based "RISC" machine has been shipping
for a month and seems to be selling well.

Dave Neff
hpfcla!hpvcla!neff

Disclaimer: This is not an ad.

lum@osupyr.UUCP (01/15/87)

In article <3910001@hpvcla.HP.COM> neff@hpvcla.HP.COM (Dave Neff) writes:
>The new HP "RISC" machine ... has 32 general purpose registers ... [and] up
>to 16 parameters [may be] passed ... via registers.  In my opinion much of
>the performance benefits of this architecture come from ... many general
>purpose registers and compilers [which] do excellent optimization.

That's probably true; the pdp-10 takes great advantage of "registered"
parameters, especially for monitor calls.  I've been thinking of an extended
LS (large system) architecture with word sizes of 48, 60, or 72.  LSX machines
may never be built, but they are interesting to think about.

The basic instruction format for the pdp-10 is:

     -----------------------------------------------------------------------
    |       op        |  ac   |@|  ix   |             base-addr             |
     -----------------------------------------------------------------------
     0      ...      8 9 ...12   14...17 18              ...              35

_op_, 9-bits, allows 512 operations (365 implemented ca '67, more since);
_ac_, 4-bits, specifies source/destination register; _@_, 1-bit, supports
indirect addressing (LS was intended (ca '65) as a giant LISP machine);
_ix_, 4-bits, supports indexed addressing; and _base-addr_, 18-bits,
specifies a local address (relative to the current section).

With this format (a 9-bit op-code, an indirect bit, two reserved(!) bits,
(n/4)-6-bit register addresses, and half-word local addresses), we get:
	  64 registers and    16M addresses for 48-bit words,
	 512 registers and  1024M addresses for 60-bit words, and
	4096 registers and 65536M addresses for 72-bit words!

Oh, what fun!  Wrap it up!  I'll take it!
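
For anyone who wants to play with the basic format, here is a little C
that picks the fields apart, with positions taken straight from the
diagram above.  It assumes the 36-bit word is held right-justified in an
unsigned long of at least 36 bits -- fine on a 36-bit or larger machine;
on a 32-bit machine you would keep the two halves separately.

    /* decode the standard pdp-10 instruction format; bit 0 of the
       pdp-10 word is the most significant bit, per the diagram */

    unsigned long op(w)   unsigned long w; { return (w >> 27) & 0777;    }
    unsigned long ac(w)   unsigned long w; { return (w >> 23) & 017;     }
    unsigned long ind(w)  unsigned long w; { return (w >> 22) & 01;      }
    unsigned long ix(w)   unsigned long w; { return (w >> 18) & 017;     }
    unsigned long addr(w) unsigned long w; { return  w        & 0777777; }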

Lum Johnson  lum@ohio-state.arpa  ..!cbosgd!osu-eddie!lum

franka@mmintl.UUCP (Frank Adams) (01/20/87)

In article <12037@cca.CCA.COM> bills@CCA.UUCP (Bill Stackhouse) writes:
>Compiler technology should allow all HLLs to have an optimizer that
>can be used just prior to shipping a piece of code to make the
>result equal to or better than hand-coded assembler.

Better make that just prior to final testing.  Better yet, use the optimizer
throughout the development process.  Changing the code generator creates/
uncovers new bugs.

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Multimate International    52 Oakland Ave North    E. Hartford, CT 06108