[comp.software-eng] Software Technology is NOT Primitive

crowl@cs.rochester.edu (Lawrence Crowl) (10/22/87)

Software technology is not in the primitive state that people so constantly
moan about.  First, software has a much more difficult task than hardware.  We
have more expectations from software than hardware, so the perceived state of
the technology is less than the actual state.  

Consider the "great advance" in memory chip size.  Most of the impressiveness
of the advance comes from replication of design effort.  Is a software engineer
given accolades when he calls a subroutine a million times instead of a
thousand?  Hardly.  Much of hardware design is replication, but very little of
software design is replication.  

In 1950, a processor had hundreds to thousands of gates and a large program had
a hundred to a thousand lines.  Today, a processor has (say) a hundred thousand
to a million gates, while a large program has a hundred thousand to a million
lines of code.  They track well, don't they?  But let's look closer.  In 1950, a
processor had a control unit, a few registers and an ALU while a program had a
simple routine to read cards, a simple routine to drive the printer, and a
simple core algorithm.  Today, a processor has a control unit, a few registers
and an ALU (note the less than radical change), while a program has a graphics
interface, a file manager, a recovery scheme and a performance monitor in
addition to the core algorithm.  There has been a great deal of change in the
tools and _functional_ capabilities of software systems.  

"But hardware systems have improved performance by four orders of magnitude
since 1950!"  True, but for many problems, you are better off running today's
algorithms on 1950's hardware than 1950's algorithms on today's hardware.  Do
not deny software its contribution to performance.  (Although to be fair, we
have already reached the theoretical maximum performance for many algorithms.)

In article <3349@uw-june.UUCP> bnfb@uw-june.UUCP (Bjorn Freeman-Benson) writes:
>However, one must consider that current software technology is at the level of
>individual transistors and resistors, and that we could use the step up to
>"7000 series ICs".  After that, custom and semi-custom would be great.

Well, I apply different analogies.  I think you will find these better related
to the actual scale of work.

transistors == machine language
7400 series == assembly language
msi         == higher level languages
lsi         == libraries
vlsi        == utility programs (ala Unix)
custom ICs  == inheritance and generics (needs more experience to say for sure)

It looks to me like software and hardware technology have tracked fairly well.
The cause for the difference in perception is that hardware has done the same
task cheaper and faster while software has performed an ever more difficult
task.  Because hardware has simpler measures, it has more apparent progress.
The actual progress is roughly equivalent.

-- 
  Lawrence Crowl		716-275-9499	University of Rochester
		      crowl@cs.rochester.edu	Computer Science Department
...!{allegra,decvax,rutgers}!rochester!crowl	Rochester, New York,  14627

jdu@ihopa.ATT.COM (John Unruh) (10/23/87)

In article <3471@sol.ARPA>, crowl@cs.rochester.edu (Lawrence Crowl) writes:
> Software technology is not in the primitive state that people so constantly
> moan about.  First, software has a much more difficult task than hardware.  We
> have more expectations from software than hardware, so the perceived state of
> the technology is less than the actual state.  
> 
> transistors == machine language
> 7400 series == assembly language
> msi         == higher level languages
> lsi         == libraries
> vlsi        == utility programs (ala Unix)
> custom ICs  == inheritance and generics (needs more experience to say for sure)
> 
> It looks to me like software and hardware technology have tracked fairly well.
> The cause for the difference in perception is that hardware has done the same
> task cheaper and faster while software has performed an ever more difficult
> task.  Because hardware has simpler measures, it has more apparent progress.
> The actual progress is roughly equivalent.
> 

I really like Lawrence's analysis.  He has a few vital points.

1.  Today's software projects attack extremely difficult tasks.  It is 
    unlikely that these tasks would be attempted in a circuit or in a
    mechanical system.

2.  Software technology has evolved.  I would not have built the same
    equivalence that he did, but his way of looking at things is certainly
    reasonable.

I have a few observations that I would like to add.

1.  I believe many of the problems attributed to
    the "software crisis" are a by product of the difficulty of the task
    itself, not of the fact that the solution is implemented in software.
    Often we run into problems because we do not understand the problem
    that we are trying to solve well enough.  Therefore, the program that
    is built doesn't solve the right problem.

2.  Often the people who wish to purchase a software system to improve the
    efficiency of their operation do not understand the true problem that
    they want to solve.  Introducing automation does not always make a
    business run more efficiently, largely because the change introduced
    in work methods by the automation changes what should be automated.
    I think those who specify what is to be done often don't get this
    right.  I am not sure it is possible to get it right in many cases.
    An example is the stories I have read about the introduction of robots
    on assembly lines.  Supposedly many of these installations have not
    worked out well on an economic basis.

3.  I am not entirely convinced by the arguments presented about where money
    is spent during the "software life cycle".  I have seen varying estimates
    up to 80% for maintenance.  I always wonder what maintenance means.  Does
    it include enhancements to the product because customer needs were not
    understood?  Does it include enhancements to address more markets?
    Does it include ports to new operating environments such as porting a PC
    program to the Mac environment?  These things are very different from
    bug fixes.  If things are counted this way, the newest IBM 370 series
    computers should probably be counted as engineering changes on the
    original 360.

4.  Program bugs after release are a serious problem.  Just read
    comp.micro.ibm.pc to see how many postings are related to bugs
    in standard packages.  I would hate to have to pay to send a free
    update diskette set to every registered owner of LOTUS 1-2-3
    (This is probably a trademark of LOTUS development).  I would still
    rather do this than put a white wire change into every IBM PC.
    With today's hardware, the answer to hardware problems may be to
    document them and program around them.  This is equivalent to a
    work around solution for software problems.  I will not make
    apologies for buggy software.  We need to make some improvements
    here.

5.  The theoretical base is better for hardware than for software.
    People have been studying the physics of electricity for MANY
    years, and many things are well understood.  One of the reasons
    many new hardware designs work well is because they are extensively
    simulated.  They can be simulated because the theoretical base allows
    software to simulate them.  Most of the work in theory supporting
    software that I have seen falls into two classes.  The first is the
    analysis of algorithms.  This area has been extremely fruitful.  We
    have been able to find good algorithms for many computations.  We also
    have been able to find theoretical upper bounds for performance and 
    see how close we have come to those bounds.  The second area is in the
    theoretical basis for computing in general.  In my mind, this area
    covers many basic problems such as the halting problem and proof of
    correctness.  The advances in this area have been significant, but they
    are very difficult for the working programmer to use.  They are more
    like Maxwell's equations for electromagnetic fields.  I don't know
    of any theoretical basis that supports software like circuit analysis
    supports circuits (both digital and analog) or like boolean algebra
    supports digital circuits.  We have no good way of doing automatic
    (or really semi-automatic) checks of software boundary conditions,
    or of driving software through many states.  Currently, we manually
    generate many tests, and then we manually run them.  It would be
    really nice if we could have some system generate and run tests
    and then we could analyze the results; a rough sketch of the idea
    follows this list.

6.  The state of the theory as listed in 5 also makes correctness by
    construction difficult.  I don't really think circuit designers 
    do everything like that anyway.
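
As a rough sketch of the kind of semi-automatic checking wished for in point 5,
consider a small C harness that generates many inputs mechanically and checks a
simple property of each result.  The routine under test (my_sort) and the
property checked are illustrative assumptions only:

    /* Generate random inputs, run the routine under test, and check an
       ordering property of the result instead of hand-writing every case. */
    #include <stdio.h>
    #include <stdlib.h>

    extern void my_sort(int *a, int n);    /* hypothetical routine under test */

    int main(void)
    {
        int a[100], n, i, trial;

        srand(1);                          /* fixed seed keeps failures reproducible */
        for (trial = 0; trial < 1000; trial++) {
            n = rand() % 100 + 1;          /* random length, 1..100 */
            for (i = 0; i < n; i++)
                a[i] = rand() % 1000;      /* random contents */
            my_sort(a, n);
            for (i = 1; i < n; i++)        /* check the ordering property */
                if (a[i - 1] > a[i]) {
                    printf("trial %d: out of order at position %d\n", trial, i);
                    return 1;
                }
        }
        printf("all %d generated tests passed\n", trial);
        return 0;
    }

Analyzing the failures, of course, is still left to the human.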

I hope I have managed to shed some light on the matter, and perhaps
to spur some thought.
                               John Unruh

shebs%defun.uucp@utah-cs.UUCP (Stanley T. Shebs) (10/23/87)

I think one of the reasons for the perception of software technology as
primitive is that the languages have not changed very much in 30 years.
C seems to be in the ascendant for research work, yet it is not too far
removed from Fortran (no language fanatic flames please).  Of course, the
*use* of the languages now is very different, as are the algorithms they
express, but it is an article of faith in some circles that progress in
software == use of higher-level languages.

							stan shebs
							shebs@cs.utah.edu

cik@l.cc.purdue.edu (Herman Rubin) (10/25/87)

Unfortunately, software technology seems concerned only with attempts to do a
highly restricted set of operations.  This has also recently gotten into 
hardware development.  I think that the ideas of the software and hardware
gurus can be likened to producing automobiles which can be programmed to
get you to an address you type in, but will not let you back the car out of
the garage into the driveway.  I suggest that the software developers first
consider the power of existing hardware, next the natural power of hardware
(what unrealized operations can be easily done in hardware but are not now
there), and finally the "current use."  FORTRAN, when it was produced, was
not intended for system subroutine libraries.  There are instructions present
on most machines which someone knowing the machine instructions will want to
use as a major part of a program, but which the HLL developers seem to ignore.

I suggest that a language intended for library development be approached by
the developers with the attitude that a machine cycle wasted is a personal
affront.  I think we will find that the resulting languages will be easier
to use and less artificial than the current ones.  Implementing an arbitrary
set of types is no more difficult for the user than the 5 or 6 that the
guru thinks of.  Allowing the user to put in his operations and specifying
their syntax is not much more difficult for the compiler than the present
situation.  For example, I consider it unreasonable to have any language
which does not allow fixed point arithmetic.  It may be said that this would
slow down the compiler.  However, every compiler I have had access to
is sufficiently slow and produces sufficiently bad code that it would be
hard to do worse.

I suggest that there be a major effort to produce an incomplete language
which is 
        1.  Not burdened by the obfuscated assembler terminology.

        2.  Easily extended by the user.

        3.  Designed with the idea that anything the user wants to do
should be efficiently representable in the extended language.

        4.  Restrictions on the use of machine resources should be as
non-existent as possible, and should be overridable by the user if at
all possible.  The restrictions on register usage in the C compilers
I have seen I consider a major felony.

        5.  Remember that the user knows what is wanted.  I hereby 
execrate any language designer who states "you don't want to do that"
as either a religious fanatic or sub-human :-).  Those who say that
something should not be allowed because "it might get you into trouble"
I consider even worse.

        6.  Realize that efficient code does not necessarily take longer
to produce than inefficient, and that there are procedures which are not
now being used because the programmer can see that the resources available 
to him will make the program sufficiently slow that there is no point in
doing the programming.

I think that if a reasonable language were produced, we will see that there
will be a new era in algorithm development, and that the hackers will be
competing in producing efficient software.
-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (ARPA or UUCP) or hrubin@purccvm.bitnet

KEN@ORION.BITNET (Kenneth Ng) (10/25/87)

>In 1950, a
>processor had a control unit, a few registers and an ALU while a program had a
>simple routine to read cards, a simple routine to drive the printer, and a
>simple core algorithm.  Today, a processor has a control unit, a few registers
>and an ALU (note the less than radical change), while a program has a graphics
>interface, a file manager, a recovery scheme and a performance monitor in
>addition to the core algorithm.  There has been a great deal of change in the
>tools and _functional_ capabilities of software systems.
>
Shouldn't you be taking the software program as a whole and the hardware
as a whole?  Saying the hardware today is just an ALU, registers and
memory is like saying all of today's software is an assignment, compare
and branch statement.  To compare your description of software today to
hardware today, try adding LRU caching, address lookaside buffers,
I/O processors, massive error detection and correction logic, ethernet
and other communication technology, paging, the entire virtual memory
schemes on a lot of machines, etc., etc, etc.
>  Lawrence Crowl                716-275-9499    University of Rochester
     
Kenneth Ng: ken@orion.bitnet
     
     
     

crds@ncoast.UUCP (Glenn A. Emelko) (10/26/87)

Lawrence (and all),

I have been involved at an engineering level in both hardware and software
design for about 10 years.  Across these years I have seen advances in both
industries, but I must say that I am more impressed with what I see in the
way of hardware technology.  In some ways, it seems, software has degraded
somewhat because of hardware progress.  This has been, and most likely shall
continue to be one of my pet peeves of the whole industry.

First of all, I can remember (as many of you can too) machines which had 4K
of memory, or 16K, and still had to have a rudimentary OS running which could
perform some tasks and still allow the user to load a program.  In those rough
times, people were very concerned and conscious about storage, both on disk, as
well as memory usage.  This forced software engineers to think creatively, to
tighten code, and to produce very powerful LITTLE algorithms that could do
many things and could be called from many other parts of the code.  Code was
many times designed to be recursive, if not reentrant, and it was expected
that the software engineer knew a lot about the machine and environment (in
terms of the hardware) that was hosting the program, which allowed for tricks
and little magical wonders to happen (laughing, in other words, the designer
took advantage of what was available).  In contrast, today, many designers
know nothing about the destination machine (it's a UNIX box?), and most are
not very concerned about how much memory, disk space, or anything else that
they use, and really couldn't care less if there are four subroutines in their
code which could be combined into one general-purpose, all-encompassing sub.

Furthermore, it seems, now that we have 3-MIPS and 5-MIPS and higher-MIPS
machines being sold over the counter at reasonable prices, very little time
is spent by the software engineer to really PUSH that machine to its gills.
In fact, I have noted many specific cases of generally sloppy, slow, and
out-and-out crude coding techniques passing as "state of the art," simply
because it is running on a machine that is so blindingly fast that it does
make that code seem reasonable.  Case in point:  I can recall designing a
sorting algorithm for a data management system which was very popular in the
early '80s for a specific Z80 based computer, and then one day pulling out
a copy of the "state of the art" data management/spread sheet program of '84,
which ran on an 8088 based system, and benchmarking them against each other
for about 20 different test cases (already sorted, sorted reverse, alpha only,
numeric only, fractional numbers, alpha numerics, and numerous data sets),
and watching the Z80 based computer beat the 8088 based system 10 to 1 at
WORST!  And as an added note, the sort on the Z80 was stable, and the one
on the 8088 was not!!!  Yes, I am talking about the "most popular" spreadsheet
of 1984, and a lowly Z80 machine ripping it to shreds!  Is this the current
state of software technology?  Yes, I believe so.

At the same time that I have shared these "beefs" about software development,
I have mentioned advances in hardware which allow the software guru to get
lazy and still be considered current.  We have processors today which can
out-run the processors of 1978 (10 years ago) by 20 to 1.  We have software
today which can probably be out-run by the software of 1978, even when we use
it on that powerful processor.  Sure, it's USER FRIENDLY, ERGONOMIC, etc., but
does it keep pace with the advances in the industry?  Why don't I see ANY
software advances that are even an order of magnitude above 1978?  This would
give me 200 times the power, considering the advances in hardware!!  I would
like to get my hands on a copy of "current software" which shows me this
capability, please email me info, or respond on the net.

Yes, these opinions are specifically mine, and reflect upon nothing other than
my own personal experiences, pet peeves, and general bullheadedness.  Feel
free to differ with any or all of the above, but at least get a laugh out of
it.  I find it funny myself.

Glenn A. Emelko
...cbosgd!hal!ncoast!crds

Somehow, this borrowed caption seems appropriate:

     "When I want your opinion, I'll give it to you."

stevel@haddock.ISC.COM (Steve Ludlum) (10/26/87)

In article <3471@sol.ARPA>, crowl@cs.rochester.edu (Lawrence Crowl) writes:
> It looks to me like software and hardware technology have tracked fairly well.
> The cause for the difference in perception is that hardware has done the same
> task cheaper and faster while software has performed an ever more difficult
> task.  Because hardware has simpler measures, it has more apparent progress.
> The actual progress is roughly equivalent.

Hardware doing the same thing faster?  Parallel processors are not just faster,
they are different.  Symbolic machines use different hardware techniques, i.e.
tags.  Laser disks do simply store more information, but the difference means
much more than just being able to do more of the same thing. Specialized
hardware designs such as DSPs are opening up new areas such as speech and
vision automation, oh and don't forget those little network controllers
on a chip.

Anyway all of the progress in software has just been taking care of a 
few more special cases :-)

ima!stevel 

crowl@cs.rochester.edu (Lawrence Crowl) (10/27/87)

In article <594@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>I suggest that the software developers first consider the power of existing
>hardware, next the natural power of hardware (what unrealized operations can
>be easily done in hardware but are not now there), and finally the "current
>use."

No, software developers should first consider the task to be solved.  The 
efficient use of the hardware is moot if the software does not meet the need.

>There are instructions present on most machines which someone knowing the
>machine instructions will want to use as a major part of a program, but which
>the HLL developers seem to ignore.  

The task of language developers is not (or should not be) to directly support
the hardware, but to provide an environment in which a programmer can
effectively express the solution to a problem.  In those cases where efficiency
matters, the language model is generally chosen to be efficiently realized on
conventional machines.  Catering to specific instructions on specific machines
is generally a loss because the resulting programs are useful only on that
machine.  Supporting common instructions directly in the language often means
presenting an inconsistent model.  For instance, the arithmetic right shift
provided by many machines provides division by two except when the number is
negative and odd.  Should languages be designed around this quirk?  I do not
think so.
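
A small C illustration of the quirk (a sketch which assumes the machine
implements >> on negative operands as an arithmetic shift and / as truncation
toward zero, as most current machines do; the language pins down neither):

    #include <stdio.h>

    int main(void)
    {
        int i;
        /* shift and divide agree except for negative odd values */
        for (i = -3; i <= 3; i++)
            printf("%2d:  i>>1 = %2d   i/2 = %2d\n", i, i >> 1, i / 2);
        return 0;
    }

For i = -3 such a machine prints a shift result of -2 but a division result
of -1.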

>I suggest that a language intended for library development be approached by
>the developers with the attitude that a machine cycle wasted is a personal
>affront.  I think we will find that the resulting languages will be easier
>to use and less artificial than the current ones.

I think that just the opposite is true.  Designing a language around
optimizing the use of a specific machine is likely to leave a language so
riddled with special case restrictions as to be hopelessly complex.

>Implementing an arbitrary set of types is no more difficult for the user than
>the 5 or 6 that the guru thinks of.

Who implements the arbitrary set of types?  The user?  The language designer?
If the language provides mechanisms to allow the user to implement types, then
the task of the language implementer is only difficult.  There are many issues
in value instantiation, etc. which must be dealt with in a language that allows
definition of arbitrary types.  If the implementer must implement the arbitrary
set of types, then the task is impossible.

>Allowing the user to put in his operations and specifying their syntax is not
>much more difficult for the compiler than the present situation.

User modifiable syntax is very difficult to define consistently and very
difficult to parse.  The consensus so far appears to be that it is not worth
the cost.

>For example, I consider it unreasonable to have any language which does not
>allow fixed point arithmetic.  It may be said that this would slow down the
>compiler.  However, every compiler I have had access to is sufficiently slow
>and produces sufficiently bad code that it would be hard to do worse.

If the target audience of the language does not need fixed point arithmetic,
the introduction of a fixed point type is counter productive.  How many
compilers have you looked at?  Some produce very good code.  Others are
atrocious.  It is far easier to criticize code generators than to provide a
generator that produces better code.

>I suggest that there be a major effort to produce an incomplete language
>which is 
>        1.  Not burdened by the obfuscated assembler terminology.

We already have this.

>        2.  Easily extended by the user.

What do you mean by "extended"?  Different notions have different costs and
benefits.

>        3.  Designed with the idea that anything the user wants to do should
>be efficiently representable in the extended language.

What do you mean?  Is the resulting representation efficient with respect to
execution time?  Is the resulting representation efficient with respect to
run-time storage requirements?  Is the representation of what the user wants
concisely represented in the source?  Choosing one of these options might lead
one to design C, Forth or Prolog, respectively.

>        4.  Restrictions on the use of machine resources should be as
>non-existent as possible, and should be overridable by the user if at all
>possible.  The restrictions on register usage in the C compilers I have seen I
>consider a major felony.

Many such restrictions are present to allow the resulting programs to be
time efficient.  This requirement is in conflict with what I suspect you want
for point 3 above.

>        5.  Remember that the user knows what is wanted.  I hereby execrate
>any language designer who states "you don't want to do that" as either a
>religious fanatic or sub-human :-).  Those who say that something should not
>be allowed because "it might get you into trouble" I consider even worse.

The user may know what is wanted, but translating that into code is not always
a simple task.  Consider assigning a boolean value to an integer.  Is this
something that "the user knows what he's doing" and the language should accept,
or is it something that "the user doesn't want to do" and may "get him in
trouble"?  Almost always it is not what the user wants.  If it is what the
user wants, the result is usually a non-portable, difficult-to-understand
program.  (The management usually does not want the latter even if the
programmer does.)

>        6.  Realize that efficient code does not necessarily take longer to
>produce than inefficient, ...

True, but the one in a thousand cases where this is true don't help much.
Efficient code almost always takes longer to produce than inefficient code.
You must invest development time to get efficiency.

>... and that there are procedures which are not now being used because the
>programmer can see that the resources available to him will make the program
>sufficiently slow that there is no point in doing the programming.

If that is the case, the procedure was either not worth much to begin with,
or not within the bounds of feasible computations given today's hardware.

>I think that if a reasonable language were produced, we will see that there
>will be a new era in algorithm development, and that the hackers will be
>competing in producing efficient software.

Algorithm development is independent of any specific language, so a new
language will probably have little effect on algorithms.  Hackers are already
competing in producing efficient software, so a new language will have little
effect here also.

>Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
-- 
  Lawrence Crowl		716-275-9499	University of Rochester
		      crowl@cs.rochester.edu	Computer Science Department
...!{allegra,decvax,rutgers}!rochester!crowl	Rochester, New York,  14627

shebs%defun.uucp@utah-cs.UUCP (Stanley T. Shebs) (10/27/87)

In article <594@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:

>I suggest that there be a major effort to produce an incomplete language
>which is 
>        1.  Not burdened by the obfuscated assembler terminology.
>
>        2.  Easily extended by the user.
>
>        3.  Designed with the idea that anything the user wants to do
>should be efficiently representable in the extended language.

Sounds like you want Forth.

>        4.  Restrictions on the use of machine resources should be as
>non-existent as possible, and should be overridable by the user if at
>all possible.  The restrictions on register usage in the C compilers
>I have seen I consider a major felony.

There are reasons for those restrictions, as is clear from studying one
or two compilers.  Some registers are needed for implementing protocols,
particularly in function calling.  Other registers are used for constant
values (0, 1, -1 are popular).  You should try writing a register allocator
before engaging in name-calling.
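
A small C reminder of what is actually being restricted.  The "register"
keyword is only a hint, and a compiler may honor just a few of the
declarations precisely because the calling convention, temporaries, and
constant values claim the rest (the routine below is purely illustrative):

    /* Asks for five registers; the compiler is free to grant fewer. */
    long dot(register long *x, register long *y, register int n)
    {
        register long s = 0;
        register int i;

        for (i = 0; i < n; i++)
            s += x[i] * y[i];
        return s;
    }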

>        5.  Remember that the user knows what is wanted.  I hereby 
>execrate any language designer who states "you don't want to do that"
>as either a religious fanatic or sub-human :-).  Those who say that
>something should not be allowed because "it might get you into trouble"
>I consider even worse.

By your rationale, every language should include PEEK and POKE, and the
hardware shouldn't generate any of those silly "segmentation violation"
traps.  Lisp systems should allow the programmer to acquire the address
of an object, even if it will be worthless one millisecond later when the
garbage collector kicks in.  I hardly think a responsible language designer
would include a capability that has proven from experience to be a source
of catastrophic and mysterious bugs, especially if the capability itself
is not particularly important.
 
>        6.  Realize that efficient code does not necessarily take longer
>to produce than inefficient, [...]

Efficient code *will* take longer, if only because you have to spend time 
on profiling and analysis.  Of course, you might be lucky and write efficient
code on the first try, but it doesn't happen very often.

>I think that if a reasonable language were produced, we will see that there
>will be a new era in algorithm development, and that the hackers will be
>competing in producing efficient software.

The whole proposal sounds like something out of the 50s or 60s, when debates
raged over the value of "automatic programming systems" (like Fortran!).

It is interesting to note that the proposal omits what is probably the #1
reason for HLLs: PORTABILITY.  It's all well and good to talk about exploiting
the machine, but my tricky code to exploit pipelined floating point operations
on the Vax will be utterly worthless on a Cray.  The prospect of rewriting
all my programs every couple years, and maintaining versions for each sort
of hardware, would be enough to make me go work in another field!

In fact, the view that software should be independent of hardware is one of
the great achievements of software technology.  The battle of abstraction vs
specialization has been going on for a long time, and abstraction has won.
The victory is rather recent; even ten years ago it was still generally 
assumed that operating systems and language implementations had to be written
in assembly language...

>Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907

							stan shebs
							shebs@cs.utah.edu

rober@weitek.UUCP (Don Rober) (10/27/87)

When was the last time you wrote a Data Management System; or a spreadsheet
program; or a transcendental package; or a compiler, linker, etc?

When was the last time you wrote a mail package from scratch, a grep program
and all of the other UNIX utilities?

While there is a long way to go, I think if you look at it, we've done okay.


-- 
----------------------------------------------------------------------------
Don Rober				UUCP: {pyramid, cae780}!weitek!rober
Weitek Corporation	1060 East Arques		Sunnyvale, CA 94086

karl@mumble.cis.ohio-state.edu (Karl Kleinpaste) (10/27/87)

crds@ncoast.UUCP writes:
   We have
   processors today which can out-run the processors of 1978 (10 years
   ago) by 20 to 1.  We have software today which can probably be
   out-run by the software of 1978, even when we use it on that
   powerful processor.  Sure, it's USER FRIENDLY, ERGONOMIC, etc., but
   does it keep pace with the advances in the industry?  Why don't I
   see ANY software advances that are even an order of magnitude above
   1978?  This would give me 200 times the power, considering the
   advances in hardware!!

In watching this discussion, it seems the only metric which people
want to apply to software is performance.  Raw performance, how many
widgets can you prefrobnicate in minimal picoseconds.  It seems to me
that this is not the right way to view software.

Aside from the fundamental problems of P?=?NP and similar questions, I
tend not to like looking at software from the sole standpoint of how
fast it goes.  Yes, speed is important - anything written by a
programmer should be designed to be fast, as fast as possible.  I
think I do much more than an adequate job of that in the code I write.
But I am not looking for pure speed in what I design; I am also
looking for new functions, new capabilities, a different way of doing
something, a different way of looking at an old problem.

Consider the case of bit-mapped window displays.  The hardware to
implement them is relatively simple.  A largish memory, an ability to
move large bit patterns fast, possibly a dedicated co-processor for
managing the whole beast.  The rest is software.  I think this is an
excellent example of a wonderful marriage between advances in hardware
and software.  Hardware provided the ability to do something in new
software.  The new software provides new capabilities and new power
for the user of the whole package.

Now consider the resulting power of that package.  I'm sitting in
front of a Sun3 using X.  The hardware provided here is exactly what
is needed to support the software concepts required.  But to me, the
resulting power of this box is really embodied in what the software is
doing.  On my screen right now, there are 10 windows.  All of them are
doing useful things.  Granted, some of them are not doing very
interesting things; I have an analog clock in one corner, and two load
meters in another.  But that still leaves 7 other windows doing real,
vital, practical work.  Even those two load meters are vital to my
work, because it's those meters that I will be watching to see if
something causes a performance spike, and that will cue me to go look
at the indicated system to see what's going wrong.  So in practical
terms, I have 9 simultaneous activities going on which are of value to
me.

That's roughly an order of magnitude improvement over the days of
1978, when I sat in front of a dumb CRT, connected to exactly one
system, doing exactly one thing.  The parallelism inherent in my brain
is being put to positive use.  I didn't even need parallel hardware to
do it.  An approximation of parallelism through timesharing is
sufficient for my mind to fill in the remaining gap.

I am not attacking or denigrating the advances in hardware in any way
whatever.  (My graduate work was in software performance increases,
after all, trying to take better advantage of existing recent hardware
improvements.)  But I think that software has come rather a long way
in the last 10 years.  It just hasn't come to us in terms of raw
performance.  It's exactly those areas of user-friendliness,
ergonomics, and expanded user capability that provide the final
improvement in the real power of the machine.
-=-
Karl

cik@l.cc.purdue.edu (Herman Rubin) (10/27/87)

In article <3603@sol.ARPA>, crowl@cs.rochester.edu (Lawrence Crowl) writes:
> In article <594@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
<< I suggest that the software developers first consider the power of existing
<< hardware, next the natural power of hardware (what unrealized operations can
<< be easily done in hardware but are not now there), and finally the "current
<< use."
> 
> No, software developers should first consider the task to be solved.  The 
> efficient use of the hardware is moot if the software does not meet the need.

Which of the tasks to be solved should the software developer consider?  On one
machine there may be hundreds or even tens of thousands of users.  Some of
these users may have dozens of different types of problems.  
> 
<< There are instructions present on most machines which someone knowing the
<< machine instructions will want to use as a major part of a program, but which
<< the HLL developers seem to ignore.  
> 
> The task of language developers is not (or should not be) to directly support
> the hardware, but to provide an environment in which a programmer can
> effectively express the solution to a problem.  In those cases where efficiency
> matters, the language model is generally chosen to be efficiently realized on
> conventional machines.  Catering to specific instructions on specific machines
> is generally a loss because the resulting programs are useful only on that
> machine.  Supporting common instructions directly in the language often means
> presenting an inconsistent model.  For instance, the arithmetic right shift
> provided by many machines provides division by two except when the number is
> negative and odd.  Should languages be designed around this quirk?  I do not
> think so.

Are we to be limited to those features which are portable to all machines?  If
we do this, the only arithmetic operations we can allow are fairly short
integer arithmetic; of the machines I have looked into in the past few years, no
two have the same floating-point arithmetic.  This includes the CDC6x00, VAX,
CYBER205, CRAY, PYRAMID, IBM360 and its relatives.  And should we use the same
algorithm on different machines?  I, for one, would question the intelligence
of those who would attempt to enforce such restrictions.  The languages should
not be designed around the quirks; they should be powerful enough to enable
the programmer to make use of the quirks.
> 
<< I suggest that a language intended for library development be approached by
<< the developers with the attitude that a machine cycle wasted is a personal
<< affront.  I think we will find that the resulting languages will be easier
<< to use and less artificial than the current ones.
> 
> I think that just the opposite is true.  Designing a language around
> optimizing the use of a specific machine is likely to leave a language so
> riddled with special case restrictions as to be hopelessly complex.
> 
I am not advocating a language to optimize a specific machine.  I believe that
a language should be sufficiently powerful that the intelligent programmer can
optimize on whatever machine is being used at the time.  If the instruction is
not there, it cannot be used.  It may be worth replacing, or it may be
desirable to completely revise the computational procedure.  After a program is
written, especially a library program, it may be found that quirks in the 
machine cause the program to be too inefficient.  In that case, it is necessary
to think the whole program design over.  A good programmer will soon learn
what is good on a particular machine and what is not.  I can give competitive
algorithms for generating normal random variables on the CYBER205 which I know,
without programming them, will not be competitive on any of the other machines
I have listed above.  These algorithms cannot be programmed in any HLL and be
worthwhile.  (Any machine architecture can be simulated on any other machine
with any sufficiently complex language if there is enough memory, but the
resulting program is not worth attempting.)  There are algorithms which are
clearly computationally very cheap, using only a few simple bit operations,
for which it is obvious that no HLL can give a worthwhile implementation, and
for which it is questionable as to which machines have architectures which
make those algorithms not cost an arm and a leg.

<< Implementing an arbitrary set of types is no more difficult for the user than
<< the 5 or 6 that the guru thinks of.
> 
> Who implements the arbitrary set of types?  The user?  The language designer?
> If the language provides mechanisms to allow the user to implement types, then
> the task of the language implementer is only difficult.  There are many issues
> in value instantiation, etc. which must be dealt with in a language that allows
> definition of arbitrary types.  If the implementer must implement the arbitrary
> set of types, then the task is impossible.
> 
Of course the user must decide what is needed.

<< Allowing the user to put in his operations and specifying their syntax is not
<< much more difficult for the compiler than the present situation.
> 
> User modifiable syntax is very difficult to define consistently and very
> difficult to parse.  The consensus so far appears to be that it is not worth
> the cost.
> 
If the syntax is somewhat limited, it will still be very powerful and not so
difficult to parse.  The reason that the typical assembler language is so
difficult to use is the parsing difficulty of 35 years ago.  Except for
not using types, Cray's assembler constructs on the CDC6x00 and on the CRAY's
go far in the right direction.

<< For example, I consider it unreasonable to have any language which does not
<< allow fixed point arithmetic.  It may be said that this would slow down the
<< compiler.  However, every compiler I have had access to is sufficiently slow
<< and produces sufficiently bad code that it would be hard to do worse.
> 
> If the target audience of the language does not need fixed point arithmetic,
> the introduction of a fixed point type is counter productive.  How many
> compilers have you looked at?  Some produce very good code.  Others are
> atrocious.  It is far easier to criticize code generators than to provide a
> generator that produces better code.
> 
Who is the target audience?  There is no adequate language for the production
of library subroutines.  If you say that the audience does not exist, then you
are certainly wrong.  If you say that the audience is small, then one could
equally criticize the existence of a Ph.D. program in any field.  I question
the need for a language which will keep the user ignorant of the powers of the
computer.  I also question whether such a language, unless very carefully 
presented as incomplete, with sufficiently many indications of its deficiencies,
will facilitate the eventual enlightenment of the learner; in fact, I believe
that this exemplifies one of the major reasons for the brain-damaged nature of
our youth.  How can a compiler produce good code if it cannot use the
instructions necessarily involved in that code?  If fixed point arithmetic is needed,
the tens of instructions needed to achieve that in a language such as C do not
constitute good code if the hardware is available.  Unfortunately, some 
machines such as the CRAY do not provide decent fixed point multiplication;
on such machines it is necessary to work around it, and one may find it
advisable to totally revise the algorithm.
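
As a rough illustration of the "tens of instructions" point, here is a sketch
of a fixed-point multiply written out in C (assuming 32-bit longs, arithmetic
right shifts on negative values, and a Q16.16 format; overflow of large
operands is ignored).  A machine with a widening multiply or a multiply-high
instruction does the same job in one or two instructions:

    typedef long fix;                 /* Q16.16: 16 integer bits, 16 fraction bits */

    fix fix_mul(fix a, fix b)
    {
        /* Build the wide product from 32-bit pieces, then keep the middle
           32 bits, which is where the binary point lands. */
        long ah = a >> 16, al = a & 0xFFFF;
        long bh = b >> 16, bl = b & 0xFFFF;

        return ((ah * bh) << 16)                          /* high parts   */
             + (ah * bl) + (al * bh)                      /* cross terms  */
             + (long)(((unsigned long)al * bl) >> 16);    /* low parts    */
    }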

<< I suggest that there be a major effort to produce an incomplete language
<< which is 
<<         1.  Not burdened by the obfuscated assembler terminology.
> 
> We already have this.
> 
<<         2.  Easily extended by the user.
> 
> What do you mean by "extended"?  Different notions have different costs and
> benefits.
> 
The user should be able to introduce type definitions (structures in C), new
operators (such as the very important &~, which may or may not compile
correctly), and overload old operators.  This should be very flexible.
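
For readers who have not met it, "&~" is the and-not (bit clear) operation,
which a number of machines supply as one instruction but which C spells as
two operators; a trivial sketch (the function name is only illustrative):

    /* Clear in "word" every bit that is set in "mask";
       bit_clear(0xFF, 0x0F) yields 0xF0. */
    unsigned bit_clear(unsigned word, unsigned mask)
    {
        return word & ~mask;
    }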

<<         3.  Designed with the idea that anything the user wants to do should
<< be efficiently representable in the extended language.
> 
> What do you mean?  Is the resulting representation efficient with respect to
> execution time?  Is the resulting representation efficient with respect to
> run-time storage requirements?  Is the representation of what the user wants
> concisely represented in the source?  Choosing one of these options might lead
> one to design C, Forth or Prolog, respectively.
> 
First execution time, second storage.  I do not advocate obfuscated code to 
attain this, and I do not believe it is necessary.  None of the languages you
have cited meet any of my points.

<<         4.  Restrictions on the use of machine resources should be as
<< non-existent as possible, and should be overridable by the user if at all
<< possible.  The restrictions on register usage in the C compilers I have seen I
<< consider a major felony.
> 
> Many such restrictions are present to allow the resulting programs to be
> time efficient.  This requirement is in conflict with what I suspect you want
> for point 3 above.
> 
The VAX has twelve general-purpose registers available.  If I write a program
which uses eleven registers, I object to the compiler, which does not need any
registers for other purposes, only giving me six.

<<         5.  Remember that the user knows what is wanted.  I hereby execrate
<< any language designer who states "you don't want to do that" as either a
<< religious fanatic or sub-human :-).  Those who say that something should not
<< be allowed because "it might get you into trouble" I consider even worse.
> 
An obvious example of this is structured programming and the denial of such
simple elegant ideas as goto.  The bit-efficient algorithm above starts out
with a case structure; I immediately observed that by keeping the case as a
location only and, 99.999% of the time, using goto's, the code would be
considerably speeded up without any loss of clarity.
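
A hedged sketch of the structure being described: the case statement is kept
only to pick an entry point, after which control moves between labelled
sections with goto rather than returning through the dispatch.  The states
below are placeholders, not the actual bit-efficient algorithm:

    void generate(int start)
    {
        switch (start) {          /* the case structure, used only on entry */
        case 0:  goto state0;
        case 1:  goto state1;
        default: return;
        }

    state0:
        /* ... work for state 0 ... */
        goto state1;              /* direct transfer; no re-dispatch */

    state1:
        /* ... work for state 1 ... */
        return;
    }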

> The user may know what is wanted, but translating that into code is not always
> a simple task.  Consider assigning a boolean value to an integer.  Is this
> something that "the user knows what he's doing" and the language should accept,
> or is it something that "the user doesn't want to do" and may "get him in
> trouble"?  Almost always it is not what the user wants.  If it is what the
> user wants, the result is usually a non-portable, difficult-to-understand
> program.  (The management usually does not want the latter even if the
> programmer does.)
> 
I agree that the resulting program will probably be non-portable.  I do not
agree that it will necessarily be difficult to understand.  There are cases
where the program will be difficult to understand.  The algorithm I mentioned
above which is bit-efficient would be incomprehensible to someone as
knowledgeable as myself about the mathematics involved without having
a description of what is being done at each stage.  This would be true
regardless of any programming tools available.  The algorithm is not very
difficult with the explanation.

<<         6.  Realize that efficient code does not necessarily take longer to
<< produce than inefficient, ...
> 
> True, but the one in a thousand cases where this is true don't help much.
> Efficient code almost always takes longer to produce than inefficient code.
> You must invest development time to get efficiency.
> 
This is because of the badness of the languages.  I do not mean completely
efficient code is easy to produce, but I think that 80% to 90% will not be
that hard to get.

<< ... and that there are procedures which are not now being used because the
<< programmer can see that the resources available to him will make the program
<< sufficiently slow that there is no point in doing the programming.
> 
> If that is the case, the procedure was either not worth much to begin with,
> or not within the bounds of feasible computations given today's hardware.
> 
I think the fact that, by using machine instructions, I can produce reasonably
efficient procedures without trying very hard is a sufficient rebuttal.

<< I think that if a reasonable language were produced, we will see that there
<< will be a new era in algorithm development, and that the hackers will be
<< competing in producing efficient software.
> 
> Algorithm development is independent of any specific language, so a new
> language will probably have little effect on algorithms.  Hackers are already
> competing in producing efficient software, so a new language will have little
> effect here also.
> 
It is true that the language does not directly affect the algorithm.  However,
someone who considers whether or not there is any point in implementing the
resulting algorithm will necessarily consider the available tools.  If an
algorithm involves many square roots, I would be hesitant in using it rather
than a less efficient one which does not unless square root is a hardware
instruction, which it should be but is not on most machines.  The number of
reasonable algorithms is infinite, not merely very large.

<< Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
> -- 
>   Lawrence Crowl		716-275-9499	University of Rochester
> 		      crowl@cs.rochester.edu	Computer Science Department
> ...!{allegra,decvax,rutgers}!rochester!crowl	Rochester, New York,  14627


-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (ARPA or UUCP) or hrubin@purccvm.bitnet

bpendlet@esunix.UUCP (10/27/87)

in article <36KEN@ORION>, KEN@ORION.BITNET (Kenneth Ng) says:
> hardware today, try adding LRU caching, address lookaside buffers,
> I/O processors, massive error detection and correction logic, ethernet
> and other communication technology, paging, the entire virtual memory
> schemes on a lot of machines, etc., etc, etc.

> Kenneth Ng: ken@orion.bitnet

Why compare hardware and software at all?  It's like comparing roads and cars.
Cars even run on roads, much like software runs on hardware.  (Forgive the
awful analogy.  It is deliberately absurd.)  Even though they affect each
other's development, the technologies are very different.

Look at LRU caching and paging. This technique depends on a statistical 
property of programs in general, but to get maximum performance from a
given machine a programmer needs detailed knowledge of its implementation
on that specific machine.
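
A concrete C illustration of that statistical property: the two routines
below do identical arithmetic, but with C's row-major layout (the array size
is arbitrary) the first walks memory sequentially while the second takes one
large stride per reference, and on a paged or cached machine may run far
slower:

    #define N 1024
    static double a[N][N];

    double sum_row_order(void)            /* consecutive references */
    {
        int i, j;
        double s = 0.0;
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    double sum_column_order(void)         /* one full row per stride */
    {
        int i, j;
        double s = 0.0;
        for (j = 0; j < N; j++)
            for (i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }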

Two very different technologies. Two very different points of view.
Two different cultural heritages.

		Bob P.
-- 
Bob Pendleton @ Evans & Sutherland
UUCP Address:  {decvax,ucbvax,ihnp4,allegra}!decwrl!esunix!bpendlet
Alternate:     {ihnp4,seismo}!utah-cs!utah-gr!uplherc!esunix!bpendlet
        I am solely responsible for what I say.

mac3n@babbage.UUCP (10/27/87)

>I think that if a reasonable language were produced, we will see that there
>will be a new era in algorithm development, and that the hackers will be
>competing in producing efficient software.

Is "hacker" being used here in the derogatory sense ("one who makes furniture
with an axe")?

shebs%defun.uucp@utah-cs.UUCP (Stanley T. Shebs) (10/28/87)

In article <596@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>In article <3603@sol.ARPA>, crowl@cs.rochester.edu (Lawrence Crowl) writes:
>> In article <594@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:

><< I suggest that the software developers first consider the power of existing
><< hardware, next the natural power of hardware (what unrealized operations can
><< be easily done in hardware but are not now there), and finally the "current
><< use."
>> 
>> No, software developers should first consider the task to be solved.  The 
>> efficient use of the hardware is moot if the software does not meet the need.
>
>Which of the tasks to be solved should the software developer consider?  On one
>machine there may be hundreds or even tens of thousands of users.  Some of
>these users may have dozens of different types of problems.  

Then the software developer has dozens of tens of thousands of tasks to solve!
There aren't enough software people in the world to solve each problem in an
individual and idiosyncratic fashion, so instead several tasks are decreed to
be "similar", which really means that some compromises have to be made.
This situation is not unique to software.  For example, bridge designers don't
usually get new and unique rivets for each bridge - instead, they have to order
from a catalog.

Everybody wants every one of their programs to be maximally efficient on all
imaginable hardware, while at the same time spitting on programmers because
it doesn't happen at the press of a couple keys.  It's unfortunate that the
popular culture encourages the belief that hackers can conjure up complicated
programs in a few seconds.  I suspect it even influences people who should
know better.  To quote Fred Brooks, "there is no silver bullet".

>And should we use the same algorithm on different machines?
>I, for one, would question the intelligence
>of those who would attempt to enforce such restrictions.

I find it interesting that the people who have been programming the longest -
numerical analysts - are exactly those who use dozens of different algorithms
to solve the same problem, each with slightly different characteristics.
Could mean either that multiplicity of algorithms is coming to the rest of
computing soon, or that numerical analysts haven't yet found the correct forms
for their algorithms...

>> User modifiable syntax is very difficult to define consistently and very
>> difficult to parse.  The consensus so far appears to be that it is not worth
>> the cost.
>> 
>If the syntax is somewhat limited, it will still be very powerful and not so
>difficult to parse.  The reason that the typical assembler language is so 
>difficult to use is due to the parsing difficulty 35 years ago.  Except for
>not using types, Cray's assembler constructs on the CDC6x00 and on the CRAY's
>go far in the right direction.

I think you would have a hard time proving that the minor syntactic 
improvements of the Cray assembly language (with which I am familiar)
have any effect at all on productivity, reliability, etc.  After all, Lisp
programmers can be immensely productive even though the assignment statement
has a verbose prefix (!) syntax.

>Who is the target audience?  There is no adequate language for the production
>of library subroutines.

Aha!  I wondered if that was the motivation for the original posting...
You have touched upon my thesis topic; but you will not have to die for it :-).
My thesis is most directly concerned with the implementation of Lisp runtime
systems, which have many similarities to Fortran libraries (really).
The basic angle is to supply formal definitions of the desired functionality
and the hardware, then to use a rule-based system to invent and analyze
designs.  Not very intelligent as of yet, but I have hopes...

Of course, this approach involves a couple assumptions:

1. As you say, there is no "adequate" language for the production of library
subroutines.  After poking through the literature, it is pretty clear to me
that no one has *ever* designed a high-level language that also allows direct
access to different kinds of hardware.  There are reasons to believe that such
a beast is logically impossible.  The output of my system is machine-specific
assembly language.

2. I also assume that human coding expertise can be embodied in machines.
What compilers can do today would have amazed assembly language programmers
of 30 years ago.  There is no reason to believe that progress in this area
will encounter some fundamental limitation.  I've seen some recent papers (on
the implementation of abstract data types using rewrite rules) that still seem
like magic to me, so certainly more advances are coming along.  Closer to
reality are some compiler projects that attempt to figure out optimal code
generation using a description of the target machine.  This is extremely
hard, since machine "quirks" are usually more like machine "brain damage".

>The user should be able to introduce type definitions (structures in C), new
>operators (such as the very important &~, which may or may not compile
>correctly), and overload old operators.  This should be very flexible.

Take a look at HAL/S, which is a "high-level assembly language" used in the
space shuttle computers.  Allows very tight control over how the machine
code will come out.  In fact, there are probably a few people reading this
group who could offer a few choice comments on it...

><<         3.  Designed with the idea that anything the user wants to do should
><< be efficiently representable in the extended language.
>> 
>> What do you mean?  Is the resulting representation efficient with respect to
>> execution time?  Is the resulting representation efficient with respect to
>> run-time storage requirements?  Is the representation of what the user wants
>> concisely represented in the source?  Choosing one of these options might lead
>> one to design C, Forth or Prolog, respectively.
>> 
>First execution time, second storage.  I do not advocate obfuscated code to 
>attain this, and I do not believe it is necessary.  None of the languages you
>have cited meet any of my points.

What's wrong with Forth?  It can be extended in any way you like, it can be
adapted to specific machine architectures, the syntax is better than raw
assembly language, and its compilers don't do awful things behind the user's
back.  You'll have to be more specific on how it fails.  I agree with you that
C and Prolog cannot always be adapted to machine peculiarities.

>The VAX has twelve general-purpose registers available.  If I write a program
>which uses eleven registers, I object to the compiler, which does not need any
>registers for other purposes, only giving me six.

How do you know it "does not need any registers for other purposes"?  Are you
familiar with the compiler's code generators and optimizers?  Writers of code
generators/optimizers tend to be much more knowledgeable about machine
characteristics than any user, if for no other reason than that they get the
accurate hardware performance data that the customers never see...

>An obvious example of this is [...] the denial of such
>simple elegant ideas as goto.

"Goto" is neither simple nor elegant on parallel machines.

>The bit-efficient algorithm above starts out
>with a case structure; I immediately observed that, by 99.999% of the time
>keeping the case as a location only and using goto's, the code would be
>considerably speeded up without any loss of clarity.

OK, this is the time to use assembly language.  What's the problem with that?
It's the accepted technique; no one will fault you for it.

>> Efficient code almost always takes longer to produce than inefficient code.
>> You must invest development time to get efficiency.
>> 
>This is because of the badness of the languages.  I do not mean completely
>efficient code is easy to produce, but I think that 80% to 90% will not be
>that hard to get.

(I assume 80-90% of optimal is meant)
You had better come up with some hard evidence for that.  Various computer
scientists have been trying to prove for years that the right language, or 
the right method, or the right environment, or the right mathematics will
make efficient programming "easy".  So far no one has succeeded, but the
journals are full of polemic and anecdotes masquerading as scholarly work.
It's really depressing sometimes.

No silver bullet...

><< Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
>>   Lawrence Crowl		716-275-9499	University of Rochester
>Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907

							stan shebs
							shebs@cs.utah.edu

ben@hpldola.HP.COM (Benjamin Ellsworth) (10/28/87)

> Are we to be limited to those features which are portable to all machines?  If
> we do this, the only arithmetic operations we can allow are fairly short
> integer arithmetic; of the machines I have looked into in the past few years, no
> two have the same floating-point arithmetic.  This includes the CDC6x00, VAX,
> CYBER205, CRAY, PYRAMID, IBM360 and its relatives.  And should we use the same
> algorithm on different machines?

This problem is best answered inside of the code generator of the 
compiler.  The HLL doesn't need to know about the machine's abilities,
but the compiler writer MUST.  The language shouldn't have to be 
extended in order to take advantage of the machine.  The compiler should
produce efficient code for the machine.

> I am not advocating a language to optimize a specific machine.  I believe that
> a language should be sufficiently powerful that the intelligent programmer can
> optimize on whatever machine is being used at the time.

Once again, this optimization should be the responsibility of the
compiler, not the application programmer.

> After a program is
> written, especially a library program, it may be found that quirks in the 
> machine cause the program to be too inefficient.

These problems are not quirks in the machine!  They are the result of
poor code generation by the compiler.

> A good programmer will soon learn
> what is good on a particular machine and what is not.  

He(She) shouldn't have to!  When given a statement, the compiler should
exploit the machine to the fullest advantage of the user.

> However, every compiler I have had access to is sufficiently slow
> and produces sufficiently bad code that it would be hard to do worse.

I can hardly disagree.  However, your solution is to have the programmer
become the code generator.  I find this a step backwards.  If you 
sincerely feel that compiler technology is hopeless, then I suggest that
you refine your assembly writing skills and work on enlarging your 
personal macro library.

> The user should be able to introduce type definitions (structures in C), new
> operators (such as the very important &~, which may or may not compile
> correctly), and overload old operators.  This should be very flexible.

This is already available, if I understand you correctly, in C++ and ADA.
However, efficiency may not be up to your standards yet.

> First execution time, second storage.

Ahh... The narrow-minded, short-sighted perspective of a specialized
professional.  How refreshing!  It restores my faith in the nature of
mankind. :-) :-)

I have a degree in Computer Science and Statistics, so I will presume
to have more than a passing familiarity with the class of problems 
which you spend most of your time solving.  In your environment, at 
the present, nothing runs nearly fast enough.  Perhaps you have spent
so much emotional energy in frustration at 24hr. machine runs that 
you are no longer able to see any first priority other than speed.  However,
I think that you just haven't tried.

A case to consider is a small battery powered system.  Let's say a 
security application.  In this case 10 or 20 sec. slower is not of
much concern; however, a fast processor with fast memories is going
to kill your batteries in short order.  Cutting response time to 
1usec just isn't worth the cost to your power supply.

I can think of many other examples.  I think that the one above is
sufficiently general and simple to illustrate the trade-offs that
exist outside of AOV.  Trade-offs that you have chosen to ignore.

> The VAX has twelve general-purpose registers available.  If I write a program
> which uses eleven registers, I object to the compiler, which does not need any
                                               ^^^^^^^^
> registers for other purposes, only giving me six.

I object also.  However, once again, it is the compiler which is at
fault, not the language.

> An obvious example of this is structured programming and the denial of such
> simple elegant ideas as goto.

This statement identifies you as one who has worked exclusively in
research, the purer the better.  I would bet that you have NEVER
worked on a large software product (PRODUCT, not project).  

These products must of necessity have a long market life in order
to recoup the large investment required to produce them.  A long
market life requires easily grasped algorithms and, more importantly, an
easily modifiable coding style.  In order to have a long market life
the product must be enhanced, often in ways unforeseen at product 
introduction.  Unconditional branching in the name of speed is not
cost justifiable.

I suspect that for you however, programs are just tools.  Tools which
are very problem specific, and which can be discarded as you go. 
This is entirely defensible for academia.  Just don't blindly attribute
your constraints to the rest of the world.

> The bit-efficient algorithm above starts out
> with a case structure; I immediately observed that, by 99.999% of the time
> keeping the case as a location only and using goto's, the code would be
> considerably speeded up without any loss of clarity.

Once again a problem which could be solved by better compiler
technology.  Once again, you have chosen (perhaps under duress)
to be your own code generator.

> I agree that the resulting program will probably be non-portable.

And unsuitable for those of us who rely on sales instead of tuition
to eat and pay our rent.

> This is because of the badness of the languages.  

Nope, the badness of the compilers.

Your original statement (i.e. efficient code takes no longer to 
produce than inefficient code) is Horse-Pucky in any case.  A 
program's first constraints are input and desired output.  You wish
to add to these first constraints that the program be as fast and 
small as possible.  You then state that it is no harder (takes no 
longer) to solve the problem in this further constrained state.  
This (your assertion) is only true when the algorithm is already
known and understood because the majority of the time is spent 
translating the algorithm, not solving the problem.

Sorry prof. can't pass you this semester (I will consider giving 
you an incomplete :-).

> I do not mean completely
> efficient code is easy to produce, but I think that 80% to 90% will not be
> that hard to get.

Once again your background is showing through.  Most of the problems
to which you apply a computer (again I presume) have algorithms which
rest upon well known and well understood numerical analysis 
techniques.  Because you are not generating very much of the algorithm
when you code, you have lots of free "brain cycles" to devote to
efficiency.  For you in your environment it really isn't that hard
to get the efficiency, given assembly language hooks.

Many of us operate in a totally different environment.  My problems 
have to do with the presentation, organization and storage of data.  
I have to consider human factors, and worse yet the marketplace.  
Adding demanding efficiency constraints greatly slows me down.  
(I work under them anyway but I would be MUCH faster without them.)

> I think that if a reasonable language were produced, we will see that there
> will be a new era in algorithm development, and that the hackers will be
> competing in producing efficient software.

I think that you have an interesting definition of the computer idiom
"hacker."  I also think that what is necessary is better compilers
rather than different languages.  For you these compilers should come
with complete well written math libraries.

> It is true that the language does not directly affect the algorithm.  

I disagree.  This is like saying that the form in which the data is
available does not directly affect the actual analyses done (Most
people will work with it in its given form before going to the 
effort of translation).  Perhaps it oughtn't have any effects, but 
it usually does.

> I would be hesitant in using it rather
> than a less efficient one which does not unless square root is a hardware
> instruction, which it should be but is not on most machines.

And a closing note emphasizing mathematical speed as the measure of
compiler/machine performance.

> -- 
> Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
> Phone: (317)494-6054
> hrubin@l.cc.purdue.edu (ARPA or UUCP) or hrubin@purccvm.bitnet

-----

Benjamin Ellsworth
Hewlett Packard Company
P.O. Box 617
Colorado Springs, CO 80901
(303) 590-5849
...hplabs!hpldola!ben

sher@sunybcs.uucp (David Sher) (10/28/87)

(Followups to comp.lang.misc)
Time for a stupid analogy:
Someone gives me a 2 watt power line and asks me to build a toaster.
So using my electronics ability and ingenuity I build a toaster that
toasts a piece of bread that was very carefully placed in it in about
15 minutes.

However things advance and he now hands me a 2 Megawatt power line
and says, OK, build me a toaster, and I throw together a toaster that
toasts bread in about 1 minute.  (The most work going into making sure
the user does not electrocute himself.)  He then comes back and complains:
I gave you a million times as much power and you only give me a factor
of 20 speedup!  Power engineering sure advances faster than electrical!

Disclaimer: As should be obvious by now, I know nothing about constructing
appliances.   This story is not true, and the names have been changed to
protect the innocent and all that stuff.  
-David Sher
ARPA: sher@cs.buffalo.edu	BITNET: sher@sunybcs
UUCP: {rutgers,ames,boulder,decvax}!sunybcs!sher

eugene@pioneer.arpa (Eugene Miya N.) (10/28/87)

In article <5084@utah-cs.UUCP> shebs%defun.UUCP@utah-cs.UUCP (Stanley T. Shebs) writes:
>Take a look at HAL/S, >space shuttle computers.
>In fact, there are probably a few people reading this group who could
>offer a few choice comments on it...

You asked:
	Grrrrrr.

From the Rock of Ages Home for Retired Hackers:

--eugene miya, NASA Ames Research Center, eugene@ames-aurora.ARPA
  "You trust the `reply' command with all those different mailers out there?"
  "Send mail, avoid follow-ups.  If enough, I'll summarize."
  {hplabs,hao,ihnp4,decwrl,allegra,tektronix}!ames!aurora!eugene

drw@culdev1.UUCP (10/28/87)

crowl@cs.rochester.edu (Lawrence Crowl) writes:
| In article <594@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
| >I suggest that a language intended for library development be approached by
| >the developers with the attitude that a machine cycle wasted is a personal
| >affront.  I think we will find that the resulting languages will be easier
| >to use and less artificial than the current ones.

What has become quite clear is the opposite:  Human time is *much*
more expensive than computer time.  If wasting cycles were a personal
affront, then we would still be editing programs by playing with decks
of punch cards.

There are certain library routines, OS schedulers, and the like that
will be executed enough to make spending human time for serious
optimization reasonable, but they probably are less than 1/10,000 of
all program lines written.

| >Implementing an arbitrary set of types is no more difficult for the user than
| >the 5 or 6 that the guru thinks of.

Really?  How about the coercions (automatic type conversions) that are
allowed?  How does, say, "add" decide to jiggle the types of the
arguments so that the arguments are addable?  If you think that this
is simple, read the beginning of the IBM PL/1 reference manual where
it discusses the datatypes and how they interact.  There is DECIMAL
FIXED and BINARY FIXED and DECIMAL FLOAT and BINARY FLOAT, and STRING
and BIT STRING, and they have lengths (in bits or digits) and scales
(for FIXED types) and there is a whole host of conversions.  In the
extreme, you must note that

	IF PI THEN ...

is perfectly meaningful in PL/1, and the conversion rules tell you
whether PI is true or false...  (I'm not saying that this is good,
just that there are a lot of strange implications of
reasonable-sounding typing systems.)

Implementing a type is simple, provided that no operation is affected
by its context and that you don't have to make previously-existing
operators work with the new datatype.  Otherwise there are a lot of
subtle problems that have to be thought through.
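
Even C, with only a handful of arithmetic types, has the same flavor of
surprise once its implicit conversions get involved.  A throwaway example
(mine, not taken from any compiler discussed here):

	#include <stdio.h>

	int main(void)
	{
	    int      i = -1;
	    unsigned u =  1;

	    /* The "usual arithmetic conversions" turn i into a huge unsigned
	     * value before the comparison, so this prints "no". */
	    if (i < u)
	        printf("yes\n");
	    else
	        printf("no\n");
	    return 0;
	}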

| >Allowing the user to put in his operations and specifying their syntax is not
| >much more difficult for the compiler than the present situation.
| 
| User modifiable syntax is very difficult to define consistently and very
| difficult to parse.  The consensus so far appears to be that it is not worth
| the cost.

I agree with crowl.  Allowing the user to change the syntax of a
language is a monstrous problem.  Anyone who doesn't think so hasn't
put together a parser for a serious language.

Dale
-- 
Dale Worley    Cullinet Software      ARPA: culdev1!drw@eddie.mit.edu
UUCP: ...!seismo!harvard!mit-eddie!culdev1!drw
If you get fed twice a day, how bad can life be?

crowl@rochester.UUCP (10/28/87)

In article <596@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>[regarding access to machine specifics from high level languages]

See Stanley Shebs's article <5084@utah-cs.UUCP>.  I will not address those
topics which he addresses.

>Are we to be limited to those features which are portable to all machines?
>If we do this, the only arithmetic operations we can allow are fairly short
>integer arithmetic; ...

You are still oriented on supporting the hardware instead of describing a
solution.  A real high level language is independent of the word size.  If a
machine does not have a long integer, the implementation of the language must
build it from short integers.
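
To make "build it from short integers" concrete, here is roughly what one
small piece of such an implementation could look like in C.  This is only a
sketch; the representation and the names are invented for illustration.

	/* Double-width addition built from 16-bit pieces, for a machine whose
	 * widest integer is 16 bits.  Each 32-bit value is kept as a high and
	 * a low half, each in the range 0..0xFFFF. */
	struct long32 {
	    unsigned int hi;
	    unsigned int lo;
	};

	struct long32 add32(struct long32 a, struct long32 b)
	{
	    struct long32 r;

	    r.lo = (a.lo + b.lo) & 0xFFFF;
	    /* carry out of the low half iff the masked sum wrapped around */
	    r.hi = (a.hi + b.hi + (r.lo < a.lo)) & 0xFFFF;
	    return r;
	}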

>... of the machines I have looked into in the past few years, no two have the
>same floating-point arithmetic.

Given that highly accurate floating point routines are heavily dependent on
specific floating point formats, this is a problem.  No high level language
will be portable from machine to machine when the formats are different.  The
IEEE floating point standard should help.

>The languages should not be designed around the quirks; they should be
>powerful enough to enable the programmer to make use of the quirks. ...  I am
>not advocating a language to optimize a specific machine.  I believe that a
>language should be sufficiently powerful that the intelligent programmer can
>optimize on whatever machine is being used at the time.

Any language which allows the programmer to make use of the quirks must make
them visible.  This makes the language complex, difficult to understand,
difficult to reason about and makes the resulting programs non-portable.

>I can give competitive algorithms for generating normal random variables on
>the CYBER205 which I know, without programming them, will not be competitive
>on any of the other machines I have listed above.

If you are trying to get 100% performance, you will have to use assembler.
No language that has any meaning beyond a single machine can hope to meet your
needs.  It seems that your real complaint is that assembler languages are too
verbose, not that high level languages do not allow sufficient access to the
machine.  It should not be too hard to whip together an expression-based
"pretty" assembler.

>I question the need for a language which will keep the user ignorant of the
>powers of the computer.  I also question whether such a language, unless very
>carefully presented as incomplete, with sufficiently many indications of its
>deficiencies, will facilitate the eventual enlightenment of the learner; ...

Very few languages are incomplete in the sense of being unable to compute an
arbitrary function.  However, assembly languages are examples of incomplete
languages.  All languages are incomplete with respect to their support of
the programmer for some application area.  It is true that there are no
languages tuned to the machine specific implementation of highly optimized
numerical libraries.  I suspect it would look very much like the "pretty"
assembler.

>Unfortunately, some machines such as the CRAY do not provide decent fixed
>point multiplication; on such machines it is necessary to work around it, and
>one may find it advisable to totally revise the algorithm.

You apparently program in a very narrow world where performance is everything.
The vast majority of programming is not done in such a world.

>The user should be able to introduce type definitions (structures in C), new
>operators (such as the very important &~, which may or may not compile
>correctly), and overload old operators.  This should be very flexible.

C++ allows programmers to introduce new types and overload existing operators.
It does not allow the definition of new operators, but does allow definition of
functions which achieve the same effect at the cost of a slightly more verbose
syntax.
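
The same trade-off is visible even in plain C: the &~ operation can be
packaged as an ordinary function at the cost of a more verbose call site.
A trivial sketch (the function name is invented here):

	/* "And-not": clear in a the bits that are set in b. */
	unsigned long andnot(unsigned long a, unsigned long b)
	{
	    return a & ~b;
	}

	/* call site:   mask = andnot(mask, dirty);
	 * rather than: mask = mask &~ dirty;       */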

>The VAX has twelve general-purpose registers available.  If I write a program
>which uses eleven registers, I object to the compiler, which does not need any
>registers for other purposes, only giving me six.

So complain to the compiler writer.  This is not the fault of the language or
of its designer.

>[Efficient code takes longer to produce than inefficient code] because of the
>badness of the languages.

Efficient code takes longer because the algorithms are generally more complex,
not because the languages are bad.

>Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
-- 
  Lawrence Crowl		716-275-9499	University of Rochester
		      crowl@cs.rochester.edu	Computer Science Department
...!{allegra,decvax,rutgers}!rochester!crowl	Rochester, New York,  14627

crowl@rochester.UUCP (10/28/87)

In article <4943@ncoast.UUCP> crds@ncoast.UUCP (Glenn A. Emelko) writes:
>In those rough times [of 16K memory], people were very concerned and concious
>about storage, both on disk, as well as memory usage.  This forced software
>engineers to think creatively, to tighten code, and to produce very powerful
>LITTLE algorithms that could do many things and could be called from many
>other parts of the code.

It also forced them to spend a lot of money achieving these goals.

>Code was many times designed to be recursive, if not reentrant, and it was
>expected that the software engineer knew a lot about the machine and
>environment (in terms of the hardware) that was hosting the program, which
>allowed for tricks and little magical wonders to happen (laughing, in other
>words, the designer took advantage of what was available).

It also makes anyone else maintaining or porting the program spend weeks (read
thousands of dollars) just trying to understand a small program in order to
accomplish the task.

>In contrast, today, many designers know nothing about the destination machine
>(it's a UNIX box?), and most are not very concerned about how much memory,
>disk space, or anything else that they use, and really could care less if
>there are four subroutines in their code which could be combined into one
>general purpose all encompasing sub.

They also produce a functionally equivalent program in far less time which is
available on far more machines.  (Boss, I can spend eight more weeks and double
the speed of the program.  Of course, then it will only run on the 40 or so
machines which are exactly like our development machine.)

>In fact, I have noted many specific cases of generally sloppy, slow, and
>out-and-out crude coding techniques passing as "state of the art," simply
>because it is running on a machine that is so blindingly fast that it does
>make that code seem reasonable.

Not fast, cheap.

>Yes, I am talking about the "most popular" spreadsheet of 1984, and a lowly
>Z80 machine ripping it to shreads [speed-wise]!  Is this the current state of
>software technology?  Yes, I believe so.

Which goes to show that raw performance is not highly valued by customers.

>We have software today which can probably be out-run by the software of 1978,
>even when we use it on that powerful processor.  Sure, it's USER FRIENDLY,
>ERGONOMIC, etc., but does it keep pace with the advances in the industry?

Raw speed is only one aspect of advances in the industry.  Hardware
concentrates on speed while software concentrates on functionality.  (Mr.
Customer, I am giving you a fast sort, but I do not have time to implement
a backup utility.)

-- 
  Lawrence Crowl		716-275-9499	University of Rochester
		      crowl@cs.rochester.edu	Computer Science Department
...!{allegra,decvax,rutgers}!rochester!crowl	Rochester, New York,  14627

franka@mmintl.UUCP (Frank Adams) (10/29/87)

In article <590@ihopa.ATT.COM> jdu@ihopa.ATT.COM (John Unruh) writes:
>    The second area is in the
>    theoretical basis for computing in general.  In my mind, this area
>    covers many basic problems such as the halting problem and proof of
>    correctness.  ...  I don't know
>    of any theoretical basis that supports software like circuit analysis
>    supports circuits (both digitial and analog) or like boolean algebra
>    supports digital circuits.

In fact, the theory of things like the halting problem tells us that such a
theoretical support is not possible.  One might come up with a good
approximation, but it will sometimes fail.

Another point worth making: if hardware design is advancing so much faster
than software design, why does the latest trend in hardware design (RISC)
involve putting more of the burden on software?
-- 

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate          52 Oakland Ave North         E. Hartford, CT 06108

ken@rochester.UUCP (10/29/87)

Let us not forget that the computer is a tool and that raw computing
speed is but one measure of the effectiveness of the hardware.  If that
computing power has to "go to waste" in a spreadsheet program, I don't
care, as long as I get *my* job done more effectively.  All those cycles going
into painting a bitmap window, think it's a waste? Fine, want to design
METAFONT characters with punch cards, or even a glass TTY?

I used to wonder what we would do with all those MIPS of computing
power hardware would give us. Now I realize that there is at least one
application that can soak up any amount of CPU power you can get -
advanced interfaces. Have a look at the October SciAm issue. Take the
Wired Glove.  Imagine a biochemist being able to experiment with
molecules by "handling" them. Would you begrudge all those cycles
required to support this mode of interaction? I certainly wouldn't.

	Ken

cik@l.cc.purdue.edu (Herman Rubin) (10/29/87)

In article <5084@utah-cs.UUCP>, shebs%defun.uucp@utah-cs.UUCP (Stanley T. Shebs) writes:
[Which of the tasks to be solved should the software developer consider?  On one
[machine there may be hundreds or even tens of thousands of users.  Some of
[these users may have dozens of different types of problems.  
> 
> Then the software developer has dozens of tens of thousands of tasks to solve!

Definitely not.  The job of the software developer is to produce the flexible
tools so that the thinking brain can do the job.  I think that this can be done
by producing typed, but not strongly typed, assemblers with a reasonable syntax
(for example, instead of using
	OPMNEMONIC_TYPE_3	z,y,x
on the VAX, if x, y, and z are of type TYPE I want to be able to write
	x = y OPSYM z
where OPSYM is the operation symbol (+, -, *, etc.)).

The user should also be able to produce macros in the same format.  The recent
discussion of 16 bit * 16 bit = 32 bit is an example of this.
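
For concreteness, the 16 bit * 16 bit = 32 bit case comes out something like
this when forced through C (spelled with short and long purely for
illustration; whether the compiler emits the machine's single widening
multiply instruction is exactly what is at issue):

	/* Widening multiply: 16-bit operands, 32-bit product.  A good code
	 * generator can emit one multiply instruction here; many compilers
	 * instead widen both operands and perform a full 32 x 32 multiply. */
	long mul16x16(short a, short b)
	{
	    return (long)a * (long)b;
	}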

[And should we use the same algorithm on different machines?
[I, for one, would question the intelligence
[of those who would attempt to enforce such restrictions.
> 
> I find it interesting that the people who have been programming the longest -
> numerical analysts - are exactly those who use dozens of different algorithms
> to solve the same problem, each with slightly different characteristics.
> Could mean either that multiplicity of algorithms is coming to the rest of
> computing soon, or that numerical analysts haven't yet found the correct forms
> for their algorithms...
> 
I suggest that this means that numerical analysts realize that the algorithm to
use depends on the circumstance.  A reasonable driver uses dozens of different
algorithms in a single trip.
> 
[The VAX has twelve general-purpose registers available.  If I write a program
[which uses eleven registers, I object to the compiler, which does not need any
[registers for other purposes, only giving me six.
> 
> How do you know it "does not need any registers for other purposes"?  Are you
> familiar with the compiler's code generators and optimizers?  Writers of code
> generators/optimizers tend to be much more knowledgeable about machine
> characteristics than any user, if for no other reason than that they get the
> accurate hardware performance data that the customers never see...
> 
I have looked at the code produced.  In it, the registers which it will not
allow me to use are not used.  Therefore, I can see no reason why I should not
be allowed to use them.

[An obvious example of this is [...] the denial of such
[simple elegant ideas as goto.
> 
> "Goto" is neither simple nor elegant on parallel machines.

Neither is if ... then...else.  Both should be avoided on those machines, and
replaced by different procedures.  Again, non-portability.  There is
considerable effort going into designing algorithms which can take advantage of
parallelism and vectorization.  For many purposes, I would not even consider
using the same algorithm on the VAX and the CYBER205.  There are good vector
algorithms on the 205 which would be ridiculous on the CRAY, which is also a
vector machine.
> 
[The bit-efficient algorithm above starts out
[with a case structure; I immediately observed that, by 99.999% of the time
[keeping the case as a location only and using goto's, the code would be
[considerably speeded up without any loss of clarity.
> 
> OK, this is the time to use assembly language.  What's the problem with that?
> It's the accepted technique; no one will fault you for it.
> 
This is the time to use goto's.  Any language should allow it.
> 
> No silver bullet...
I agree.
> 
[<< Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
[>   Lawrence Crowl		716-275-9499	University of Rochester
[Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
> 
> 							stan shebs
> 							shebs@cs.utah.edu


-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (ARPA or UUCP) or hrubin@purccvm.bitnet

cik@l.cc.purdue.edu (Herman Rubin) (10/29/87)

In article <119@babbage.acc.virginia.edu>, mac3n@babbage.acc.virginia.edu (Alex Colvin) writes:
> 
> Is "hacker" being used here in the derogatory sense ("one who makes furniture
> with an axe")?

No.  I am using "hacker" to mean "someone who will use whatever methods are
appropriate, including those which are generally regarded as inappropriate."

-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (ARPA or UUCP) or hrubin@purccvm.bitnet

mct@praxis.co.uk (Martyn Thomas) (10/29/87)

I don't understand the comparisons which are being made between hardware and
software technology - they seem to be comparing the wrong things.

The advances in hardware technology are advances in *fabrication* technology
mostly.  The feature sizes get smaller, the substrates can be grown with
fewer impurities, the masks can be drawn more accurately.  There are few
useful analogues of this in software.

Software is design, in a pure form.  The *design* technology of hardware, at
the logical, behavioral level rather than the physical place-and-route, is a
long way behind software - mostly because the complexity of most hardware
designs is relatively low.  (This is changing rapidly, as 100K+
gate-equivalent ASICs become more common).  Surely a reasonable comparison
would be:

HARDWARE		SOFTWARE

Schematic capture	Flowcharting
Gate-level design	Assembly-language programming
Most HDLs		FORTRAN
ELLA/VHDL		Modula 2 / ADA		(Don't take this too
						seriously - these
						are not really close enough
						to compare).

LCF/LSM			VDM or Z or CSP		ie formal methods.


The main weakness of this sort of comparison is that the design problems are
different, and that there is no agreed taxonomy of software design
techniques.  The table above is *not* a linear progression from
less-advanced to more-advanced, it is rather an unstructured, pairwise
comparison of different aspects of hardware design technology with rough
software equivalents.

In general, hardware designers are using only limited "structured methods"
(to use a software phrase) and almost no (mathematically) "formal methods".
There are notable exceptions: in the UK, Inmos are using CSP and OCCAM to
prove the correctness of transputer components;  RSRE (UK MoD) are using
LCF/LSM to prove the correctness of VIPER, their formally-verified 32-bit
microprocessor; and increasingly other companies are learning the benefits
of a (more) formal approach to design.  Such methods are in much more
widespread use for software development, and the corresponding research
papers were published about ten years earlier for software.

The hardware designers are catching up fast, though, just in time to merge
their skills with the software engineers' to make a new breed of systems
engineer.

Martyn Thomas, Praxis plc, 20 Manvers Street, Bath BA1 1PX UK.
+44-225-444700.   ...!mcvax!ukc!praxis!mct OR mct%praxis.uucp@ukc.ac.uk

hal@pur-phy (Hal Chambers) (10/29/87)

In article <4943@ncoast.UUCP> crds@ncoast.UUCP (Glenn A. Emelko) writes:

>... I can recall designing a
>sorting algorithm for a data management system which was very popular in the
>early '80s for a specific Z80 based computer, and then one day pulling out
>a copy of the "state of the art" data management/spread sheet program of '84,
>which ran on a 8088 based system, and benchmarking them against each other
>for about 20 different test cases
>and watching the Z80 based computer beat the 8088 based system 10 to 1 at
>WORST!...

>Glenn A. Emelko

This reminds me of a time about 10-12 years ago when I was using a program
which looked for new energy levels in an atom given a list of known levels
and a list of currently unclassified spectrum lines for that element.
This program generated huge lists of numbers which then had to be sorted
and then examined for matches.  The program ran on the CDC6600 here at
Purdue and took large amounts of time, which meant: submit job, wait till
next day for results.  I examined the sort routine, which was written in
COMPASS (the CDC assembly language) for "speed".

What a revelation when I realized the algorithm used was a Bubble sort!!

I wrote a modified Shell-Metzner sort in FORTRAN and got a 30-fold
improvement.  Then I hand-compiled that subroutine into COMPASS and
got another factor of 3 improvement.  Those jobs now run in less than
8 sec. of processor time and turnaround is essentially immediate.
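
For anyone who has not seen it, the idea that bought the factor of 30 is
small enough to sketch in C (a generic version, not the COMPASS routine
described above):

	/* Shell sort: insertion sort over a shrinking sequence of gaps.
	 * Far better behaved than a bubble sort on lists of this size. */
	void shellsort(long v[], int n)
	{
	    int  gap, i, j;
	    long tmp;

	    for (gap = n / 2; gap > 0; gap /= 2)
	        for (i = gap; i < n; i++)
	            for (j = i - gap; j >= 0 && v[j] > v[j + gap]; j -= gap) {
	                tmp = v[j];
	                v[j] = v[j + gap];
	                v[j + gap] = tmp;
	            }
	}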

Hal Chambers

drw@culdev1.UUCP (Dale Worley) (10/29/87)

crowl@cs.rochester.edu (Lawrence Crowl) writes:
| For instance, the arithmetic right shift
| provided by many machines provides division by two except when the number is
| negative and odd.  Should languages be designed around this quirk?  I do not
| think so.

This is not so simple...  What do you mean by "division by 2"?  There
are actually at least two ways of doing integer division:  one way is
to always round the mathematical quotient towards 0.  This is the
common, or "Fortran", way to do it, but to a certain extent it is
historical accident that most programming languages do it this way.
Of course, this is *not* what arithmetic right shift does.

But this method is not necessarily the most natural way to do
division.  In many cases, this is more natural:  round the
mathematical quotient *down*, that is, more negative.  This is
equivalent to the common way for positive quotients, but is different
for negative quotients:  (-1)/2 = -1.

(If you think this method is silly, consider the problem:  I give you
a time expressed in minutes before-or-after midnight, 1 Jan 1980.
Find the hour it occurs in, expressed in hours before-or-after
midnight, 1 Jan 1980.  The advantage of the second division here is
that the quotient is computed so the remainder is always positive.)

I have had to write functions in several programming languages to
perform this second form of division.
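
For what it's worth, the hand-written C version of the second kind of
division usually comes out something like this (the name is mine, and a
positive divisor is assumed):

	/* Integer division that rounds the quotient toward minus infinity,
	 * so the remainder is never negative when b > 0.  Written defensively,
	 * since what "/" does with a negative operand varies by compiler. */
	long floor_div(long a, long b)          /* assumes b > 0 */
	{
	    long q = a / b;
	    long r = a - q * b;

	    if (r < 0)          /* "/" rounded toward zero; push q down one */
	        q -= 1;
	    return q;           /* floor_div(-1, 2) == -1, not 0 */
	}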

Dale
-- 
Dale Worley    Cullinet Software      ARPA: culdev1!drw@eddie.mit.edu
UUCP: ...!seismo!harvard!mit-eddie!culdev1!drw
If you get fed twice a day, how bad can life be?

reggie@pdnbah.UUCP (George Leach) (10/29/87)

In article <4943@ncoast.UUCP> crds@ncoast.UUCP (Glenn A. Emelko) writes:

>Lawrence (and all),

     [stuff deleted....]
>.............................................. and it was expected
>that the software engineer knew a lot about the machine and environment (in
>terms of the hardware) that was hosting the program, which allowed for tricks
>and little magical wonders to happen (laughing, in other words, the designer
>took advantage of what was available).  In contrast, today, many designers
>know nothing about the destination machine (it's a UNIX box?), and most are
>not very concerned about how much memory, disk space, or anything else that
>they use...........

     Man, have you missed the boat!  That kind of thinking just does
not cut it anymore!  Sure, it is fine to tweak and tighten up stuff
to run optimally on a specific hardware platform, if you plan on
never moving that software to another machine.  However, the tasks
that software must serve continue to become more complex as we see
greater advances in hardware.  At the same time, software must remain
*PORTABLE*, so as to minimize the effort involved in moving existing
software to new hardware platforms to gain the advantage they offer.
Furthermore, we are seeing emphasis being put upon reusing code.

     Why is it that we still find many COBOL programmers out
there hacking away on IBM mainframes?  The answer is that there is
too much money tied up in software: the benefit of switching to something else
must outweigh the effort to retrain the programming staff, divert
manpower to port or rewrite the *EXISTING* code, and propagate bug fixes
from the existing code to the new or ported code during the effort,
all without adding any more functionality.  Years ago when I first 
started I did all I could to optimize my Fortran programs running under 
VM/CMS on the IBM 370.  The same machine that we used for development 
was the production machine as well.  We only had to worry about *ONE* 
machine!

     Contrast that to being charged with delivering a system to a 
community of users who want maximum flexibility in choosing the 
hardware to run the applications upon.  So instead of tweaking my
C programs to the VAX 11/785 or something, I must program for maximum
*PORTABILITY*, so that my customers can buy from any vendor that 
supports the strain of UNIX that I develop upon.  If you don't think
that this is important or desirable, then why are there so many 
hardware vendors out there offering some strain of UNIX and an 
unprecedented number of agreements between vendors (eg. AT&T and Sun,
AT&T, Intel and Microsoft,..) to bring those versions closer together?

>
>Further more, it seems, now that we have 3-MIPS and 5-MIPS and higher-MIPS
>machines being sold over the counter at reasonable prices, very little time
>is spent by the software engineer to really PUSH that machine to it's gills.

       Jon Bentley, Chris Van Wyk and P.J. Weinberger published a
Technical Report a few years back on optimizing C code for the VAX 11/780.
Of how much use is that knowledge to us today?  I haven't kept up with
the advances in the newer VAX machines, but if we could migrate these
optimizations upward, fine!  However, the minute I am asked to port a
C program from a DEC machine running under ULTRIX to a Pyramid machine
running OSx or a Sun running SunOS or a CCI or........., I'm outa luck!

>
>At the same time that I have shared these "beefs" about software development,
>I have mentioned advances in hardware which allows the software guru to get
>lazy and still be considered current.  We have processors today which can
>out-run the processors of 1978 (10 years ago) by 20 to 1.  We have software
>today which can probably be out-run by the software of 1978, even when we use
>it on that powerful processor.  Sure, it's USER FRIENDLY, ERGONOMIC, etc., but
>does it keep pace with the advances in the industry?  Why don't I see ANY
>software advances that are even an order of magnitude above 1978?  

      I don't see much software today that is asked to perform at the 
same functional level as software written in 1978!  In 1978, having
an Alphanumeric terminal at one's disposal was state of the art!  Today
it is bitmapped graphical based workstations.  How can you compare any
piece of software written with 1978 requirements to one of today?  Why
don't we compare the performance of a 1960's muscle car against the
cars of today that must be fuel efficient, air conditioned, have an
AM/FM/Cassette radio, computerized dashboard, conform to federal
emissions standards, and still be a fast car!  The demands placed
upon the product are different.  And so it is with software.

      As fast as hardware evolves, so do the demands placed upon 
software to take advantage of the hardware.  Software must deal
with graphics, databases, networks, etc....  This was not the case
in 1978!

George W. Leach					Paradyne Corporation
{gatech,codas,ucf-cs}!usfvax2!pdn!reggie	Mail stop LF-207
Phone: (813) 530-2376				P.O. Box 2826
						Largo, FL  34649-2826

daveb@geac.UUCP (10/29/87)

In article <594@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
|| I suggest that the software developers first consider the power of existing
|| hardware, next the natural power of hardware (what unrealized operations can
|| be easily done in hardware but are not now there), and finally the "current
|| use."

In article <3603@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
| No, software developers should first consider the task to be solved.  The 
| efficient use of the hardware is moot if the software does not meet the need.

  Well, that's motherhood and apple pie, but it isn't **useful**.
Perceived need has already affected the architecture of the
machines, to the extent that need can be known a priori.  In fact,
both software and hardware developers have tried to separately track
users' needs.  What they haven't done is track each other, about which
Herman justifiably complains.

|| There are instructions present on most machines which someone knowing the
|| machine instructions will want to use as a major part of a program, but which
|| the HLL developers seem to ignore.  
| 
| The task of language developers is not (or should not be) to directly support
| the hardware, but to provide an environment in which a programmer can
| effectively express the solution to a problem.  In those cases where efficiency
| matters, the language model is generally chosen to be efficiently realized on
| conventional machines.  Catering to specific instructions on specific machines
| is generally a loss because the resulting programs are useful only on that
| machine.  Supporting common instructions directly in the language often means
| presenting an inconsistent model.  For instance, the arithmetic right shift
| provided by many machines provides division by two except when the number is
| negative and odd.  Should languages be designed around this quirk?  I do not
| think so.

  I think you're setting up a straw man....  An HLL instruction (/)
may well be mapped into an ASR for those cases (constant-expressions)
where the result is isomorphic with divide.  And is, in a Pascal
compiler I used recently.
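
A two-line experiment shows exactly where the shift and the divide part
company (typical two's-complement behavior assumed; the draft C standard
leaves some of this implementation-defined):

	#include <stdio.h>

	int main(void)
	{
	    /* "/" on most compilers truncates toward zero, while an arithmetic
	     * right shift rounds toward minus infinity, so they disagree
	     * exactly when the dividend is negative and odd. */
	    printf("-3 / 2  = %d\n", -3 / 2);    /* typically -1               */
	    printf("-3 >> 1 = %d\n", -3 >> 1);   /* typically -2, if >> is ASR */
	    return 0;
	}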

|| I suggest that a language intended for library development be approached by
|| the developers with the attitude that a machine cycle wasted is a personal
|| affront.  I think we will find that the resulting languages will be easier
|| to use and less artificial than the current ones.
| 
| I think that just the opposite is true.  Designing a language around
| optimizing the use of a specific machine is likely to leave a language so
| riddled with special case restrictions as to be hopelessly complex.

  Well, if the library **designer's** language is machine
independent, I'll vote for it.  I would like the **implementer's**
language to be machine-specific, even if it has to be assembler....

|| Implementing an arbitrary set of types is no more difficult for the user than
|| the 5 or 6 that the guru thinks of.
| 
| Who implements the arbitrary set of types?  The user?  The language designer?
| If the language provides mechanisms to allow the user to implement types, then
| the task of the language implementer is only difficult.  There are many issues
| in value instantiation, etc. which must be dealt with in a language that allows
| definition of arbitrary types.  If the implementer must implement the arbitrary
| set of types, then the task is impossible.
| 
|| Allowing the user to put in his operations and specifying their syntax is not
|| much more difficult for the compiler than the present situation.
| 
| User modifiable syntax is very difficult to define consistently and very
| difficult to parse.  The consensus so far appears to be that it is not worth
| the cost.

  The consensus is not necessarily the truth: see ML (in the
previous posting) as an example of a successful (if kludgily
implemented) extensible language.


|| For example, I consider it unreasonable to have any language which does not
|| allow fixed point arithmetic.  It may be said that this would slow down the
|| compiler.  However, every compiler I have had access to is sufficiently slow
|| and produces sufficiently bad code that it would be hard to do worse.
| 
| If the target audience of the language does not need fixed point arithmetic,
| the introduction of a fixed point type is counter productive.  How many
| compilers have you looked at?  Some produce very good code.  Others are
| atrocious.  It is far easier to criticize code generators than to provide a
| generator that produces better code.

  The state of practice, as usual, is falling badly behind the state
of the art...

||         5.  Remember that the user knows what is wanted.  I hereby execrate
|| any language designer who states "you don`t want to do that" as either a
|| religious fanatic or sub-human :-).  Those who say that something should not
|| be allowed because "it might get you into trouble" I consider even worse.

  Bravo!  One of the running jokes at a former employer's place of
business was the manager or the accountant sticking his head in the
programmer's office area and saying "but the programmer said", to
which all the programmers would chime in "Nobody would evvvvvvvver
want to do that".  


| The user may know what is wanted, but translating that into code is not always 
| a simple task.

  Once upon a time the pdp-11 C compiler assumed that the programmer
was the boss.  Its error messages basically meant "I don't know how
to generate code for that". Casts were a method of giving hints to
the compiler about how to interpret (generate code for) expressions.
This was successful, but has been compromised as the compilers
became less pdp-11 specific.

||         6.  Realize that efficient code does not necessarily take longer to
|| produce than inefficient, ...
| 
| True, but the one in a thousand cases where this is true don't help much.
| Efficient code almost always takes longer to produce than inefficient code.
| You must invest development time to get efficiency.

 You can precompute a surprising number of near-optimal sequences,
in at least one case by running a prolong (oops, prolog) program
across a very complete machine description, then binding the results
into the compiler.

| 
|| ... and that there are procedures which are not now being used because the
|| programmer can see that the resources available to him will make the program
|| sufficiently slow that there is no point in doing the programming.
| 
| If that is the case, the procedure was either not worth much to begin with,
| or not within the bounds of feasible computations given today's hardware.

 Since he was TALKING about hardware, you can assume that it is
within the realm of possibility.
-- 
 David Collier-Brown.                 {mnetor|yetti|utgpu}!geac!daveb
 Geac Computers International Inc.,   |  Computer Science loses its
 350 Steelcase Road,Markham, Ontario, |  memory (if not its mind)
 CANADA, L3R 1B3 (416) 475-0525 x3279 |  every 6 months.

joelynn@ihlpf.ATT.COM (Lynn) (10/29/87)

I agree with Glenn's opinion of today's "software".

I would like to add my .02.

Having programmed on the old Z80 based TRS80 with 4k and then 16k to work on,
I have just gotten tired of trying to keep up with the hardware.
It seems the more I learned about the machine's capabilities, the faster
the machine changed and the sooner my programs became obsolete.  I just
got tired of relearning a new operating system to find it out of date in
a couple short months.  For this reason, I welcome UNIX.
BUT...
I have been handicapped by UNIX simply because it cannot use the total
environment available to it.  This, I think, is due to the inability
of the current generation of programmers to look at the whole picture
of what is to be done, and what may be requested of the routine.

conybear@moncsbruce.UUCP (10/30/87)

In article <3603@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>In article <594@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>>        6.  Realize that efficient code does not necessarily take longer to
>>produce than inefficient, ...
>True, but the one in a thousand cases where this is true don't help much.
>Efficient code almost always takes longer to produce than inefficient code.
>You must invest development time to get efficiency.
>

	I feel frustrated when I can imagine simple, efficient machine-level
code to solve a problem, but I cannot get my HLL of choice to produce it.

	For example,  suppose I want to allocate space on the stack,  but I don't
know how much space will be required until runtime (this is very important
if I want to implement function calls efficiently in an interpreter,  or
write a B-tree where the user provides the size of the elements to be stored
in each B-tree).  
	How do I proceed?  If the language does not provide this facility and
gives me no way around its restrictions, then I must find another
solution within the language, and this may be a high price to pay (in a lisp
interpreter, the price might be allocating N cons-cells from the heap on
every function call; in a B-tree, allocating 1 B-tree node from the heap for
each activation of the add() procedure).
	On the other hand,  if the language does allow me to turn off restrictions
(e.g. in-line assembler, type casting), then I can have a non-portable
but efficient solution. 
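
	As a concrete example of the escape hatch I mean: many C compilers supply
a non-portable alloca() which grows the current stack frame at run time.  A
sketch, with all of the B-tree details invented:

	#include <string.h>
	#include <alloca.h>     /* non-standard header; some compilers build it in */

	struct btree;                                 /* opaque, defined elsewhere */
	extern void btree_add(struct btree *, char *, unsigned);  /* hypothetical */

	/* Stack-allocate a key buffer whose size is known only at run time.
	 * The space disappears when the function returns: no per-call heap
	 * traffic, but no portability either -- exactly the trade-off above. */
	void insert_key(struct btree *t, const char *key, unsigned keylen)
	{
	    char *buf = alloca(keylen + 1);

	    memcpy(buf, key, keylen);
	    buf[keylen] = '\0';
	    btree_add(t, buf, keylen);
	}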

	I believe that it is *my* business, not the language designer's, to 
choose the tradeoff between efficiency and portability when the language
does not provide a solution to my problem.

	I feel that many modern computer languages suffer from *hubris* on the part
of the language designer(s).  Like everybody, language designers make mistakes,
and I have yet to see a 'perfect' language.  I do not expect the 
designer to anticipate all the features I might want in their language.  I
do mind when the designer says "take my language and submit to its rules,
or write in assembler".  I am most productive when I can submit to a
language's restrictions where those restrictions help me avoid errors;
and disable such restrictions, with care, when they obstruct "the best"
solution to a problem.

Now for my $0.02...
In article <3603@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>In article <594@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>>I suggest that there be a major effort to produce an incomplete language
>>which is ...

>>        2.  Easily extended by the user.
>
>What do you mean by "extended".  Different notions have different costs and
>benefits.
>
>>        3.  Designed with the idea that anything the user wants to do should
>>be efficiently representable in the extended language.

	It is generally agreed nowadays that languages should be extensible.  I
propose that we should strive to produce languages that are extensible in
as many different directions as possible, since every language feature which
is fixed places a restriction on the programmer and the power of the language.
	For example, LISP traditionally can be extended by adding functions;
however, just a few types (ints, strings, cons cells) are built into
the language and this set is not extensible.
	In Pascal or C, I can define types and functions, but not operators.
I cannot generally define constants in the types I define.

	I also propose that our compilers should be extensible.  My ideal compiler
would look like one of today's compilers, but decomposed into a set of modules,
with explicit interfaces.  Some modules would describe how to compile builtin
types.  If I did not like the compiler's code generation for floating point
numbers,  I could reimplement the appropriate module.  If I felt that complex
numbers were essential, I could implement them.  While much of this
can be done using user-defined types and functions, the important thing is
that if I have control over my types' compilation then I also control their
implementation.
	Furthermore,  I can develop an application quickly using whatever
compilation methods are available,  and optimise it later.  For example, I
might write a module to provide variable automatic allocation at run-time
using the heap, and later alter it to use the stack.
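
	In C terms, the "explicit interface" for a per-type code-generation
module need not be more than a record of routines the compiler calls through.
Every name below is invented, purely to show the shape:

	/* A hypothetical interface a compiler could expose for per-type code
	 * generation.  Someone who dislikes the stock floating-point module
	 * links in a replacement with the same shape. */
	struct codegen_module {
	    const char *type_name;                        /* e.g. "float"        */
	    unsigned    size, alignment;                  /* storage layout      */
	    void (*gen_load)(int reg, int addr);          /* emit a load         */
	    void (*gen_store)(int reg, int addr);         /* emit a store        */
	    void (*gen_binop)(int op, int dst, int src);  /* emit +, -, *, / ... */
	};

	/* The compiler keeps a table of these and dispatches through it. */
	extern void register_codegen(struct codegen_module *m);     /* hypothetical */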

Roland Conybeare
(conybear@moncsbruce.oz)

pd@sics.UUCP (10/30/87)

In article <5079@utah-cs.UUCP> shebs%defun.UUCP@utah-cs.UUCP (Stanley T. Shebs) writes:
>By your rationale, every language should include PEEK and POKE, and the
>hardware shouldn't generate any of those silly "segmentation violation"
>traps.  Lisp systems should allow the programmer to acquire the address
>of an object, even if it will be worthless one millisecond later when the
>garbage collector kicks in.  I hardly think a responsible language designer
>would include a capability that has proven from experience to be a source
>of catastrophic and mysterious bugs, especially if the capability itself
>is not particularly important.

Lisp machines allow precisely that.  It is a quite necessary part of
the system. Of course the user will have to know exactly what he is
doing when using the facility, and most users seldom have any need for
it, but it has to be there.
-- 
Per Danielsson          UUCP: {mcvax,decvax,seismo}!enea!sics!pd
Swedish Institute of Computer Science
PO Box 1263, S-163 13 SPANGA, SWEDEN
"No wife, no horse, no moustache."

uh2@psuvm.bitnet.UUCP (10/31/87)

In article <5534@weitek.UUCP>, rober@weitek.UUCP (Don Rober) says:
>
>When was the last time you wrote a Data Management System; or a spreadsheet
>program; or a transcendental package; or a compiler, linker, etc?
>
>When was the last time you wrote a mail package from scratch, a grep program
>and all of the other UNIX utilities?
>
>WHile there is a long way to go, I think if you look at it, we've done okay.
     
Twenty years ago, writing a Pascal compiler, a Basic interpreter,
a screen editor, a mail package, a text formatter, etc etc etc, was
considered a MAJOR project.
     
Today, any of these would be considered a stiff challenge for a talented
college senior.  Why?  Because techniques and strategies in Software
engineering have changed, changed a lot, and changed for the better.
     
I agree.  We've done OK.
     

shebs%defun.uucp@utah-cs.UUCP (Stanley T. Shebs) (10/31/87)

In article <326@moncsbruce.oz> conybear@moncsbruce.oz (Roland Conybeare) writes:

>	I feel that many modern computer languages suffer from *hubris* on the part
>of the language designer(s).  Like everybody, language designers make mistakes,
>and I have yet to see a 'perfect' language.  I do not expect the 
>designer to anticipate all the features I might want in their language.  I
>do mind when the designer says "take my language and submit to its rules,
>or write in assembler".

Of course, everyone is free to design their own languages.  It's also the
case that programmers are free to use whatever language is available, unless
they are working in some kind of preexisting and restrictive context (DoD,
maintenance of old code, etc).  Making the language implementation available
is just a matter of programming, using well-known techniques.  Therefore,
any moaning and complaining about languages must issue from people who are
unwilling to do the work themselves, and who think that there is a whole crowd
of language specialists who have nothing better to do, and must be coerced
into thinking the "right" way.

There's nothing to stop anyone from introducing a new and wildly popular
language, and I say to them: go for it!

>	I also propose that our compilers should be extensible.  My ideal compiler
>would look like one of today's compilers, but decomposed into a set of modules,
>with explicit interfaces.  Some modules would describe how to compile builtin
>types.  If I did not like the compiler's code generation for floating point
>numbers,  I could reimplement the appropriate module.  [...]

This is a very good idea, and can be found in the latest Lisp compilers.
Unfortunately, it's tricky to use, and not documented very well.  There is
still a lot of theory needed to get a certain level of generality;  without
that basis, users would moan and complain about all the restrictions placed
on extensions to the compiler.  For instance, getting register allocation to
work correctly in the presence of user-supplied assembly code is tricky...
Look for some amazing compilers in about 10-20 years, maybe less if the work
ever gets funded adequately.  (Compilers are not a fashionable topic at the
moment, sigh.)

>Roland Conybeare

							stan shebs
							shebs@cs.utah.edu

bart@reed.UUCP (Bart Massey) (10/31/87)

Probably I shouldn't add to this discussion, since it seems to be basically
going nowhere.  But I couldn't resist putting in my 2-cents-worth...

The arguments being presented for primitive software technology, 
as I have interpreted them, are these:
   
   Hardware designs have gotten more general-purpose and reusable in
   the past 20 years.   Software designs haven't.
   
   Hardware has gotten more much more efficient wrt speed and size in the
   past 20 years.  Software has gotten less efficient in that time.

Seems to me that the most reasonable lines of counter-argument runs
like this:

    Hardware costs money to manufacture.  If a sufficient number of copies
    of a design are sold (as with ICs) the manufacturing costs swamp the
    design and distribution costs.  This isn't true of software.  An infinite
    number of copies may be manufactured for free, and design costs are
    generally the constraining factor.  Thus, much more design effort is put
    into a typical hardware project.  On the other hand, many more software
    designs are actually implemented.  This is why hardware is used for
    general-purpose tasks, and software for specific problems.  The
    major task of the hardware designer has been to do general-purpose tasks
    faster and smaller, at which s/he has succeeded admirably.  The major
    task of the software designer has been to do a wide variety of specific
    tasks, at which s/he has also succeeded admirably.

    It just isn't true that our knowledge about how to do small, fast
    software has *decreased*.  Anyone who believes this should look at the
    original 64K Apple Macintosh ROMs.  Certainly, the coding there was in
    the same range of size/speed efficiency as the 4K Tiny Basic I used 10
    years ago.  And yet there was 16 times as much of it.  What has happened
    is that run-time efficiency has been traded for design-time efficiency.
    At Reed College, there is an abundance of machines, and a major
    shortage of programmers.  Thus, we try to use the programmers
    efficiently, not the machines.  Contrary to some people's claims, we
    believe that there's no way to optimize both simultaneously.  Always
    choosing the people optimization makes good economic sense for us.
    And for most people.

In summary, 20 years ago, you couldn't do a lot of things, period.  The
things you did do, you did almost solely in hardware.  Today, a mixture
of general-purpose hardware and specific-purpose software is being used
to do a lot of new things more cheaply and easily.  

My personal opinion, based on the above evaluation, is that calling software
technology "primitive" or "20 years out of date" is a little silly.
Suggesting specific ways that either software or hardware technology could be
improved is not.  The recent discussions in this group about the extent to
which compiler syntax can reasonably be modified at run time, about the ways
in which languages can be optimized to make code-sharing easier without
significantly increasing design expense, about the costs and benefits of OO
programming, seem much more relevant and useful.  Let's take up those
topics and drop this one.

						Bart

crowl@cs.rochester.edu (Lawrence Crowl) (11/01/87)

In article <1737@geac.UUCP> daveb@geac.UUCP (Dave Collier-Brown) writes:
>In fact, both software and hardware developers have tried to separately track
>user's needs.  What they haven't done is track each other, ...

I submit that hardware should track the software, which should track the user.

>Well, if the library **designer's** language is machine independent, I'll vote
>for it.  I would like the **implementer's** language to be machine-specific,
>even if it has to be assembler....

I do not understand what you are advocating.  I can go to the ACM Collected
Algorithms for many of the machine-independent library routines I want.  I
understand the need for a machine-specific language.  I call it assembler, and
not a high-level language.  I'm all for more structured assemblers.  Do not
expect them to be portable.

>Once upon a time the pdp-11 C compiler assumed that the programmer was the
>boss.  Its error messages basically meant "I don't know how to generate code
>for that". Casts were a method of giving hints to the compiler about how to
>interpret (generate code for) expressions.  This was successful, but has been
>compromised as the compilers became less pdp-11 specific.

The normal mode of the compiler should not be "do it if you can think of some
code to generate for it".  Strong typing goes a long way towards ensuring
correct code at very little run-time cost.  I do not advocate a language in
which it is impossible to avoid strong typing, just one in which avoidance is
a clearly indicated, special operation.  The routine use of casts in C makes
for an unsafe programming environment.

The reason for covers on electrical panels is so that you do not accidentally
fry yourself.  You can still open the cover, but you must do so explicitly.
The C approach is to tack bare wires onto the wall and announce that the
competent homeowner will handle them correctly.
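
As a concrete (and deliberately unremarkable) illustration of the kind of
routine cast in question, consider the fragment below.  It is my own
example, not drawn from any particular program; the point is only that the
compiler accepts it without a murmur, even though nothing guarantees the
bytes behind the pointer hold a valid record.

    /* A small illustration of routine casting.  The cast compiles
       silently; whether the bytes really form a record is entirely the
       programmer's problem. */

    #include <stdio.h>
    #include <stdlib.h>

    struct record { int id; float value; };

    int main(void)
    {
        char *buffer = calloc(1, sizeof(struct record));
        if (buffer == NULL)
            return 1;

        /* the "bare wire": an unchecked reinterpretation of raw bytes */
        struct record *r = (struct record *) buffer;

        printf("id = %d, value = %f\n", r->id, r->value);
        free(buffer);
        return 0;
    }

An explicit escape mechanism would force that reinterpretation to be marked
as a special operation instead of letting it pass as ordinary code.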

-- 
  Lawrence Crowl		716-275-9499	University of Rochester
		      crowl@cs.rochester.edu	Computer Science Department
...!{allegra,decvax,rutgers}!rochester!crowl	Rochester, New York,  14627

ssh@esl.UUCP (Sam) (11/01/87)

Let's look at this cost issue.  Hardware gets faster and cheaper by
orders of magnitude.  Software productivity cannot keep up; any
improvements are measured in percentages, not in orders of magnitude.
Therefore we hear the argument "let hardware do the job... it's not
worth it to burn the money to improve the software for code size or
performance...".

I consider this socially irresponsible.  Rather than company XYZ
paying the extra $YYY to properly structure the code, they rely on the
ZZZ thousands of consumers to pay extra for disk storage, power, time,
etc. to store, feed, and wait for this software.  This is an order of
magnitude not taken into account in rebuttals to the original
observation.

Do customers understand why processors with supposedly 10-30 times
the power of their old machines deliver no more than incremental
improvements in functionality and performance, yet take roughly 10
times the disk storage?  (Yes, I and others *can* cite examples of
this).

Let's reserve the productivity / portability argument for those
few-of-a-kind cases such as custom-designed software (e.g. military /
government contracts), but let's not get carried away by excusing the
laziness of the commercial software market.

Granted, there's a middle ground; it's just not so slanted in favor of
software sloth.
					-- Sam Hahn

franka@mmintl.UUCP (Frank Adams) (11/02/87)

One point is worth making here, I think: there are really three areas
of technology involved, not two:

1) Basic hardware - how small can you make your components, how close
together can you pack them, how fast do they operate?

2) Hardware design - how effectively do you put together your components?

3) Software - how well do you use the resulting system?

It is the first of these, and only the first, which has seen orders of
magnitude improvements.  I think both hardware design and software design
have made really quite impressive advances.  In any other context, they
would be seen as areas of outstanding development.

But the basic hardware has been doubling its efficiency every few years!

This is unprecedented in any area of technology.  There is no reason we
should expect hardware and software design to advance at the same pace, just
because they happen to be dealing with that technology.  Do you expect your
car to be twice as good at half the price as five years ago, just because
computers are?
-- 

Frank Adams                           ihnp4!philabs!pwa-b!mmintl!franka
Ashton-Tate          52 Oakland Ave North         E. Hartford, CT 06108

alan@pdn.UUCP (Alan Lovejoy) (11/03/87)

I detect a basic philosophical disagreement over the purpose of an HLL
in this discussion.  Is an HLL meant as an abstraction that provides a
standard, portable paradigm of computation, or should it be a set of
abstraction mechanisms, or should it be both?

An HLL which is just a set of abstraction mechanisms would be based upon
the machine language and programming model of some cpu, and would rely
upon the programmer to define his own abstractions, including
information hiding, type checking, operator/procedure overloading,
inheritance/subclassing, and parameterized data and process
abstractions.  Macro assemblers are a rather primitive example of this
type of language; Forth is another (slightly more advanced, but not a
pure example).  The idea has a certain stunning simplicity, but no one
has ever actually implemented anything like this that really matches
the ideal.

Most HLL's present the programmer with an invariant, take-it-or-leave-it
computational paradigm.  The problem with this is that the paradigm may
not interact well with the underlying hardware architecture (or with 
the application--but that's a different issue).  No one has demonstrated
that there is some 'optimum' computational paradigm, and so each
hardware and language designer happily does his own thing.  One of the
major arguments for RISC is that the level of abstraction of the computational 
paradigms of most of the popular languages is too near--yet too
different from--the level of abstraction of most hardware computational
paradigms.  By 'reducing' the abstraction level ('complexity') of the
hardware paradigm, the job of producing an efficient translation from
the software paradigm into the hardware paradigm becomes easier, because
the hardware instructions make fewer assumptions (do less) that conflict
with the semantics of the software instructions.  The term 'reduced' in
RISC should be understood as 'more generalized'.  The more microcode
executed for each machine instruction, the more likely it is that the
user didn't *need* all those microcode instructions to accomplish his
task.  Imagine being forced to program in assembly language with only
push, pop and JSR instructions.  Why call a subroutine when all you
wanted to do was increment an integer by one?
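
A toy sketch of that mismatch (my own example, not taken from any real
instruction set): below, a general-purpose routine stands in for an
elaborate microcoded operation, doing work that a single increment would
have done.

    /* Toy illustration of abstraction mismatch.  general_add() stands in
       for a heavyweight, do-everything operation; the second line is the
       one-instruction path the program actually needed. */

    #include <stdio.h>

    static long general_add(long a, long b)
    {
        return a + b;
    }

    int main(void)
    {
        long i = 0;

        i = general_add(i, 1);   /* call, pass arguments, return */
        i = i + 1;               /* a single increment           */

        printf("%ld\n", i);
        return 0;
    }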

Of course, the problem can also be tackled by raising the abstraction
level of HLL computational paradigms.  This leads to the problem
of the oversized screw-driver:  it can be indispensable for some jobs
but fail miserably for others.  What's needed is a tool chest which
contains screw-drivers of every size (a language or languages that
can operate on a wide range of abstraction levels, from bare hardware
to referential transparency, polymorphism, data hiding and lambda
functions).

--alan@pdn


reggie@pdnbah.UUCP (George Leach) (11/03/87)

In article <1706@pdn.UUCP> alan@pdn.UUCP (0000-Alan Lovejoy) writes:
>I detect a basic philosophical disagreement over the purpose of an HLL
>in this discussion.........

       [stuff deleted.....]

>Of course, the problem can also be tackled by raising the abstraction
>level of HLL computational paradigms.  This leads to the problem
>of the oversized screw-driver:  it can be indispensable for some jobs
>but fail miserably for others.  What's needed is a tool chest which
>contains screw-drivers of every size (a language or languages that
>can operate on a wide range of abstraction levels, from bare hardware
>to referential transparency, polymorphism, data hiding and lambda
>functions).

      Unfortunately, more often than not a single tool is chosen with
which all work is performed!  Ever try to install a tape deck in your
car with a hammer?  The appropriate tool should be applied to the
appropriate job.  Different HLLs can be used for different pieces of 
a software system (provided they mesh well at the interfaces) in order
to take advantage of features to make life easier.  But it may even be
taken a step further.  If you find an organization that attempts to
prototype a system before the actual implementation work is undertaken,
many times the implementation language and the prototype language are
one and the same!  There are VHLLs, such as SETL, which are intended to
take the level of abstraction to a point where something may be quickly
prototyped, tested, refined, and so on.  There are also VHLLs that are
used as specification languages in some circles.

Furthermore, we can learn from tool design in the ergonomics world.
Biomechanical analysis is used to influence hand tool design, so that 
the tools are easier to use and reduce fatigue, making the worker more 
productive.  You can find some examples of this in software tools (EMACS,
for example).  However, we still have a long way to go.  Perhaps AI will
help out here.  


>--alan@pdn


George W. Leach					Paradyne Corporation
{gatech,codas,ucf-cs}!usfvax2!pdn!reggie	Mail stop LF-207
Phone: (813) 530-2376				P.O. Box 2826
						Largo, FL  34649-2826

henry@utzoo.UUCP (Henry Spencer) (11/03/87)

[Expletive deleted]!  It sure would be nice if people debating this issue
would refrain from re-publishing the entire previous debate in every article!
The mark of an intelligent followup is that it reproduces a *bare minimum*
of previous material.

If you want to be heard, remember that the size of your readership is more
or less inversely proportional to the length of your article!  *I* certainly
am not reading these 300-line screeds.
-- 
PS/2: Yesterday's hardware today.    |  Henry Spencer @ U of Toronto Zoology
OS/2: Yesterday's software tomorrow. | {allegra,ihnp4,decvax,utai}!utzoo!henry

pase@ogcvax.UUCP (Douglas M. Pase) (11/03/87)

In article <sol.3784> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>
>I do not advocate a language in which it is impossible to avoid strong
>typing, just one in which avoidance is a clearly indicated, special
>operation.  The routine use of casts in C makes for an unsafe programming
>environment.
>
>The reason for covers on electrical panels is so that you do not accidently
>fry yourself.  You can still open the cover, but you must do so explicitly.
>The C approach is to tack bare wires onto the wall and announce that the
>competent homeowner will handle them correctly.
>

This is Baloney!  The use of casts is very much like the removal of the
panel cover.  Routine use of casts is like the routine removal of the cover!
If you don't know what you're doing, why are you using casts?  If you do,
why are you complaining?  C is a strongly typed language with one major hole -
the interface between function calls and declarations.  That is a hole which
lint fills.  (OK, so there are holes in lint a program could drive a truck
through.)  Casts are functions which take objects in one type space to objects
in another type space, subject to some "natural" mapping.  C is not nearly as
restrictive as, say, Pascal, but "restrictive" and "strongly typed" are not
equivalent terms.

Strongly typed means no errors occur because of type mismatches.  The single
most important place where such errors can occur is in function interfaces.
Lint catches most of those (assuming it is used).  In some compilers there may
be additional examples, such as pointers and integers being interchangeable.
The escape hatches are definitely built into `C'; that is part of what makes
it such a useful and popular language.  Some examples of hatches are the `asm'
directive and the unsupervised use of `union'.  Both of those can cause
incorrect operations to occur (e.g. an integer add of floating point values).
The existence of the hatches doesn't force one to use them any more than
leaving the keys in your ignition forces someone to steal your car.
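
For what it's worth, here is a small sketch (my own, not from any
production code) of the unsupervised `union' hatch mentioned above:

    /* Store a floating point value, then do integer arithmetic on the
       same bytes.  The compiler does not object; the programmer opened
       the hatch deliberately.  Assumes sizeof(int) == sizeof(float),
       which holds on most 32-bit machines. */

    #include <stdio.h>

    union escape {
        float f;
        int   i;
    };

    int main(void)
    {
        union escape u;

        u.f = 1.5f;        /* a floating point value             */
        u.i = u.i + 1;     /* an integer add applied to its bits */

        printf("as float: %g, as int: %d\n", u.f, u.i);
        return 0;
    }

Nothing there happens by accident; as with the car keys, the hatch matters
only when someone chooses to use it.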

I suppose someone is going to argue that C *isn't* strongly typed because
there are some places where it falls down, but I'm not interested in that
kind of argument.  I leave the taxonomy to those who are interested in it.
C enjoys most of the advantages of a strongly typed language, if not all.
--
Doug Pase   --   ...ucbvax!tektronix!ogcvax!pase  or  pase@cse.ogc.edu.csnet

daveb@geac.UUCP (11/06/87)

In article <3784@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>In article <1737@geac.UUCP> daveb@geac.UUCP (Dave Collier-Brown) writes:
>>In fact, both software and hardware developers have tried to separately track
>>user's needs.  What they haven't done is track each other, ...
>
>I submit that hardware should track the software, which should track the user.

  That's true, but not sufficient.  In practice, the changes in
hardware and software change user expectations, which typically
affect hardware (performance, of course), and the loop starts up
again.

  Or, the user is not labouring in a vacuum...

--dave
ps: I love your characterization of C as bare wires on a wall, and I
can already think of a Pascalophobe to use it on.
-- 
 David Collier-Brown.                 {mnetor|yetti|utgpu}!geac!daveb
 Geac Computers International Inc.,   |  Computer Science loses its
 350 Steelcase Road,Markham, Ontario, |  memory (if not its mind)
 CANADA, L3R 1B3 (416) 475-0525 x3279 |  every 6 months.

root@uwspan.UUCP (John Plocher) (11/06/87)

(Note that followups are going to comp.software-eng)

I would like to recommend an article in the November 1987 issue of Unix Review
for everyone's reading.  The article:

			No Silver Bullets
			      by
		      Frederick P. Brooks, Jr
		      Kenan Professor of CS at
			 UNC-Chapel Hill

"The frustrations of software development are often nightmarish.  But search
as we might, we'll never come upon a single development - be it in technology
or in management - that of itself will provide an order-of-magnitude
productivity improvement"

-- 
Email to unix-at-request@uwspan with questions about the newsgroup unix-at,
otherwise mail to unix-at@uwspan with a Subject containing one of:
	    386 286 Bug Source Merge or "Send Buglist"
(Bangpath: rutgers!uwvax!uwspan!unix-at & rutgers!uwvax!uwspan!unix-at-request)