[comp.lang.misc] Many people's opinions on computer languages

smryan@garth.UUCP (Steven Ryan) (09/09/88)

>>When I see a language, the corresponding question is
>>whether the designers have provided a way for me to use the hardware instruc-
>>tions, and how easy and convenient it is for me to do that.
>
>In other words:  when you look at a language, you look to see how low a
>level it is and how easy and convenient it is for you to hand optimise
>in it.  This doesn't sound like you want a *high* level language to me.

What is the purpose of a language? To specify good code on a PDP-11 or good
code on the machine in use? Postdecrement and postincrement operators are
horrendous on a RISC. The lack of a boolean mode and the requirement that
&& and || short-circuit are horrendous on a RISC. On the other hand, the
machine I use has a (32 int)*(32 int)=(64 int) instruction which C does not
provide for.
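
For what it's worth, the workaround can at least be written down, assuming a
hypothetical C compiler that provides a 64-bit `long long' type (K&R C
guarantees no such thing); even then, getting the single instruction is
entirely up to the compiler:

long long mul32x32(long a, long b)   /* (32 int)*(32 int)=(64 int) */
{
    /* The casts force a 64-bit multiply; a compiler that notices both
       operands fit in 32 bits could emit the one hardware instruction. */
    return (long long)a * (long long)b;
}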

I feel the language should provide access to every instruction the hardware
provides. Programmers can then abstract out machine independent operators
if they wish. A good extensible language would permit infix operators to be
defined as the programmer sees fit.
 
>Don't you mean NON-portable software?  To me, semi-portable software
>means that the machine-dependent parts are isolated and that's all that
>has to be changed when porting to a different machine.  With your
>ideas, the machine-dependent parts would be tightly integrated with the
>rest of the software; this is what makes porting a nightmare.

So what? It's not the job of the language designer (except, of course, Wirth,
from whom all blessings flow) to decide what is good and bad programming--just
make good tools. Give programmers direct access to the machine so they can be
honest about writing machine-dependent code. Give them good abstraction
facilities so they can encapsulate it if they desire.

>>the C gurus came up with the idea of saying, "Nobody uses this, so we will use
>>it for exclusive or."
>
>What would you have chosen for xor, considering a very limited
>keyboard, and that most of the other symbols were taken?  C was created

How about `xor' `|-' `+|' .... ?

>>This is similar to the use of \ for an escape character.
>
>I pose the same question as above.  And remember, the escape character
>should be one character, be printable, and the ESC key did not exist on
>most keyboards at the time.

Other languages have other solutions which do not use any escape. In fact,
nearly every other language does.

>>This is a major bone of contention and interpretation.  I consider that
>>a binary operator function normally uses an infix symbol.
>
>Who invented the f(x,y) syntax anyway?  It sure wasn't the computer
>gurus.  Prefix notation is extendable (arbitrary number of arguments),
>infix is not (well, not easily).  And quite frequently I find myself
>having to add another parameter to a function when I try to generalize
>it.  This would be very difficult to do with infix notation.

Ever see Smalltalk? Postfix and infix notation, and that's it. No prefixes,
no functional notations. From what I've seen of Smalltalk, it's easier to
understand than C. How about
     calculus integrate:f from:l to:u
instead of
     (*calculus.integrate)(f,l,u)

Have you ever seen calls with dozens of arguments? How do you know what each
argument stands for? Algol 60 had a neat feature (did it inspire Smalltalk?):
      integrate(f) from:(l) to:(u)

>>What do you think the reaction of HLL users
>>would be if you required them to write addint(x,y) for x+y if one wanted
>>to add the integers x and y?
>
>Many already do.  It occurs in a very old language; a language about as
>old as FORTRAN.  It's called LISP.

LISP is the primary language of numerical analysis, after all.

>>in addition to the usual operations.  I maintain that any acceptable language
>>should allow such additions, including the temporary addition of infix
>>functions, if necessary.
>
>Then WRITE a language, and show us that this is indeed better.

And if you don't like the operating system, write a new one. Same with device
drivers. Same with cpu. Same with disc drives. God forbid someone who is a tad
more expert might help out.

>>A simple example is division, with quotient and
>>remainder.  Another example is that one frequently wants both the sine and
>>cosine; it is only about 30% more expensive to produce both of them simul-
>>taneously than to produce one.
>
>A LOT of overhead if you only need one.

Even more overhead if you want both and someone decided for you that you need
to make two calls.
 
>And why should a programmer programming in a HLL care whether these can
>be performed simultaneously.  He specifies what he needs in the HLL,

If the HLL permits it to be specified.

>and the compiler is supposed to generate the best possible machine code
>for doing that.  If the compiler isn't doing that you should be
>complaining about the implementation, not the language.  Neither of

Oh, c'mon. If the programmer is forced to write incredibly complicated code
because someone decided an operation is unnecessary, do you really expect
any compiler to decode what is happening and produce efficient results?

That's like putting up a fence on a path, making people walk way around, and
then assuming it will be as if the fence did not exist.
 
>@I just did some find/sed/grep hacking to find out just how prevalent
>@cross-posting is.
>
>@Here are the people who cross-posted to five groups:
>@, Herman Rubin,
>
>Congratulations!

Well, that's certainly addressing the issue. If you can't dispute the
message, sacrifice the messenger.

sommar@enea.se (Erland Sommarskog) (09/11/88)

Herman Rubin writes:
>There is also the problem of optimizing.  I have used the CDC6500 a lot (for
>some time it was THE production machine at Purdue) and the 6600 a little. 
>These machines have the same instructions, and, barring pathologies, any program
>on one will run on the other and give the same answer.  Nevertheless, the
>programming should be different because of timing problems.

And Steven Ryan (smryan@garth.UUCP) writes:
>I feel the language should provide access to every instruction the hardware
>provides. Programmers can then abstract out machine independent operators
>if they wish. A good extensible language would permit infix operators to be
>defined as the programmer sees fit.

It surprises me that these ideas still hang on. In these days when 
man power is expensive and machine power is cheap, the computer should
do as much of the job as possible. This includes translating abstract descriptions
of what to do and making these translations as efficient as possible.
This lets me concentrate on the things the machine can't handle:
describing the problem to be solved, deciding what the program should
do in different situations. If the program runs too slowly you can always
buy a faster machine. But if you don't get the code on time, what do you
do? Buy a faster programmer?

Now, we should not forget that there are different applications. 
Advanced number crunching and things you run on a Cray still require 
much more handicraft in the coding. However, I imagine that these 
things are fairly straightforward in terms of requirements, so you 
have more time for optimizing. But most of the programs
written today solve problems that are much more complicated than
mathematical equations.
 
My opinion is that the language should help the programmer to be
as portable as possible. This means that he should be able to
encapsulate OS calls and the like. (Hardware-dependent parts should 
really be avoided or cut down to a minimum.)

Even more important is that the language should help me to express 
my ideas as clearly as possible, not for the machine, but for anyone
else who is reading my code. It should help me to make distinctions
between entities and then hit me on the fingers if I violate them.
For example, I should be able to define:
     Apple : Apple_type;
     Pear  : Pear_type;
In many old-fashioned languages I can't do this. (Fortran, Cobol, C?).
Other languages let me do this, but when I by mistake write:
    Apple := Pear;   or
    Eat_an_apple(Pear); -- Parameter is of Apple_type.
the compiler will not always warn me. (Pascal, Modula-2?, Oberon?)
  Some languages that enforce this strict type checking are Ada, 
Eiffel, Simula and C++(?). Hardly just a coincidence that all of 
them but Ada are object-oriented. 
  You may wonder why this is important. Let me just answer with a
common, but good, cliché: Get the errors out early.
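
As for the "C?": with plain typedefs C checks nothing, but wrapping each
entity in its own struct buys a version of this checking. A sketch, reusing
the names above:

typedef struct { int n; } Apple_type;  /* distinct struct types are      */
typedef struct { int n; } Pear_type;   /* incompatible, even when alike  */

void Eat_an_apple(Apple_type a) { (void)a; }

void Demo(Apple_type Apple, Pear_type Pear)
{
    Eat_an_apple(Apple);    /* fine */
    /* Apple = Pear;           -- rejected: incompatible types  */
    /* Eat_an_apple(Pear);     -- rejected: wrong argument type */
}
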
-- 
Erland Sommarskog            ! "Hon ligger med min bäste vän, 
ENEA Data, Stockholm         !  jag vågar inte sova längre", Orup
sommar@enea.UUCP             ! ("She's making love with my best friend,
                             !   I dare not sleep anymore")

cik@l.cc.purdue.edu (Herman Rubin) (09/11/88)

In article <3938@enea.se>, sommar@enea.se (Erland Sommarskog) writes:
> Herman Rubin writes:
>>There is also the problem of optimizing.  I have used the CDC6500 a lot (for
>>some time it was THE production machine at Purdue) and the 6600 a little. 
>>These machines have the same instructions, and, barring pathologies, any program
>>on one will run on the other and give the same answer.  Nevertheless, the
>>programming should be different because of timing problems.
> 
> And Steven Ryan (smryan@garth.UUCP) writes:
>>I feel the language should provide access to every instruction the hardware
>>provides. Programmers can then abstract out machine independent operators
>>if they wish. A good extensible language would permit infix operators to be
>>defined as the programmer sees fit.
> 
> It surprises me that these ideas still hang on. In these days when 
> man power is expensive and machine power is cheap, the computer should
> do as much of the job as possible. This includes translating abstract descriptions
> of what to do and making these translations as efficient as possible.
> This lets me concentrate on the things the machine can't handle:
> describing the problem to be solved, deciding what the program should
> do in different situations. If the program runs too slowly you can always
> buy a faster machine. But if you don't get the code on time, what do you
> do? Buy a faster programmer?

I use many operations which the languages do not know about.  Now I can
translate them into hardware instructions.  I can even post to comp.arch
that additional instructions would be useful.  But if I have to write
twenty lines of code to even get them clumsily expressed in a HLL, the 
human time which Erland Sommarskog decries is equally wasted.  Especially
if I have many such situations.  In addition, the clumsy encoding in C
or FORTRAN might not be understandable to others.

Programming is therefore not necessarily faster in HLLs.  Frequently,
I see how to use a machine instruction or instructions to accomplish something
which I consider simple and obvious.  Why should I even have to try to find a
way to do it in some HLL?  The HLL gurus have left out too much.  Now I have
been arguing for a language in which at least much of both worlds can be
achieved.  Would an extensible HLL be that difficult?  

Many have argued that the compiler would be slowed down, or that a many pass
compiler would be needed.  Now you are arguing that machine time is not that
important.  If execution time is not that important, why should compiler time
be any more important?

Furthermore, in many cases the time difference is so obvious that I, the 
algorithm producer, know that a given algorithm is not worth producing on
a particular machine.  I cannot see any way of vectorizing, on the CRAY 1,
procedures for generating exponential and normal random variables which are
quite efficient on many other machines;  even on those machines on which they
vectorize, slightly different algorithms may have running times differing by
a factor of 10 or more.  Certainly procedures designed to be used many times
deserve a considerable measure of time improvement.

You seem to feel that the language gurus have done a reasonable job of handling
your constructs.  I can demonstrate that they have not done that for mine.
I do not expect them to be able to anticipate all that I am likely to want.
I do not believe that they can all be identified at this time.  Thus,
extensibility is needed.

> Now, we should not forget that there are different applications. 
> Advanced number crunching and things you run on a Cray still require 
> much more handicraft in the coding. However, I imagine that these 
> things are fairly straightforward in terms of requirements, so you 
> have more time for optimizing. But most of the programs
> written today solve problems that are much more complicated than
> mathematical equations.
>  
> My opinion is that the language should help the programmer to be
> as portable as possible. This means that he should be able to
> encapsulate OS calls and the like. (Hardware-dependent parts should 
> really be avoided or cut down to a minimum.)
> 
> Even more important is that the language should help me to express 
> my ideas as clearly as possible, not for the machine, but for anyone
> else who is reading my code. It should help me to make distinctions
> between entities and then hit me on the fingers if I violate them.
> For example, I should be able to define:
>      Apple : Apple_type;
>      Pear  : Pear_type;
> In many old-fashioned languages I can't do this. (Fortran, Cobol, C?).
> Other languages let me do this, but when I by mistake write:
>     Apple := Pear;   or
>     Eat_an_apple(Pear); -- Parameter is of Apple_type.
> the compiler will not always warn me. (Pascal, Modula-2?, Oberon?)
>   Some languages that enforce this strict type checking are Ada, 
> Eiffel, Simula and C++(?). Hardly just a coincidence that all of 
> them but Ada are object-oriented. 
>   You may wonder why this is important. Let me just answer with a
> common, but good, cliché: Get the errors out early.

We disagree about what is important.  I see nothing wrong with that.
I say that for numerical computations, our languages are inadequate.
For anyone using a CRAY or a CYBER, that is what you want.  In fact,
although these machines are used for compilation, I would entertain
the idea that this is a misuse of the resources.  You want strong type
checking.  I want it not to be there.  I see no reason why we cannot
both be satisfied; let the compiler make it an overridable error.

I repeat, what is needed is FLEXIBILITY.  I believe it is possible.
What is impossible is full portability.  Semi-portability is possible.
I liken the languages to a control system which will get your automobile
to a given address in a reasonable manner, but will not let you back it
into the driveway, or let you avoid rush hour traffic.
-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet, UUCP)

c60a-1cu@e260-1f.berkeley.edu (09/12/88)

In article <923@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>Many have argued that the compiler would be slowed down, or that a many pass
>compiler would be needed.  Now you are arguing that machine time is not that
>important.  If execution time is not that important, why should compiler time
>be any more important?
>-- 
>Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
>Phone: (317)494-6054
>hrubin@l.cc.purdue.edu (Internet, bitnet, UUCP)
Because compile times are directly related to programmer productivity.  How
many times do you sit at your terminal (pc, workstation, whatever) just
waiting for that 10+ minute (or even 5 minute) compile to finish?
Nah...you don't.  You get up, get some coffee or whatever, go talk to
somebody, and end up being away from your desk for 15 minutes.  Then it
turns out that the compiler aborted 30 seconds into the job because of some
(often dumb, like forgetting a command line switch) error.  Now you've
just wasted 14+ minutes.  If that same compile takes 30 seconds total,
you just sit through it, and think a little.  Do this a couple of hundred
times, and the lost time really adds up....

Drew Dean
FROM Disclaimers IMPORT StandardDisclaimer;

jeff@lorrie.atmos.washington.edu (Jeff Bowden) (09/12/88)

In article <14147@agate.BERKELEY.EDU> c60a-1cu@e260-1f.berkeley.edu writes:

>In article <923@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>>If execution time is not that important, why should compiler time
>>be any more important?

>Because compile times are directly related to programmer productivity.

[stuff deleted]

>Do this a couple of hundred
>times, and the lost time really adds up....

But what about the time the *users* spend waiting for the program to 
execute?  Since the number of users in many (most?) cases is strictly
greater than the number of programmers, any gains in execution speed will
be multiplied accordingly.

In an earlier posting someone suggested that if you need better execution
speed, then you should buy a bigger machine.  What if you want to target
a small machine?  If you can make it run in a reasonable amount of time on a
smaller machine, you will increase your potential market (assuming you are
planning to *sell* your program).

ok@quintus.uucp (Richard A. O'Keefe) (09/12/88)

In article <923@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>I use many operations which the languages do not know about.  Now I can
>translate them into hardware instructions.  I can even post to comp.arch
>that additional instructions would be useful.  But if I have to write
>twenty lines of code to even get them clumsily expressed in a HLL, the 
>human time which Erland Sommarskog decries is equally wasted.  Especially
>if I have many such situations.  In addition, the clumsy encoding in C
>or FORTRAN might not be understandable to others.
>
>Programming is therefore not necessarily faster in HLLs.  Frequently,
>I see how to use a machine instruction or instructions to accomplish something
>which I consider simple and obvious.  Why should I even have to try to find a
>way to do it in some HLL?  The HLL gurus have left out too much.

Please tell us about some of these instructions.

In an earlier posting, I reported that I had studied the MC680x0
instruction set and found very few instructions which a PL/I compiler
could not make good use of.  [Whether there is a PL/I compiler which
does so is another matter.]  Admittedly, the MC680x0 architecture does
not include floating point operations.  They are handled by the
MC688[12].  I have not done a similar study for that, but I consider it
plausible that a PL/I compiler could exploit most of them, even to
providing access to 80-bit "extended" floats.  PL/I even has the
preprocessor facilities which are required to exploit hardware features
in a quasi-portable manner.

[I do not mention PL/I because I think it is a particularly good language,
but because Rubin singled it out for special attack.]

This is not the case in C.  However, some C compilers (such as the SunOS
ones) have an "inline" facility which can be used by an ordinary programmer
to write function calls in his source code and have them expanded into the
assembly code of his choice (before the final peephole optimisation pass).
Some months ago I posted an example in comp.lang.c showing how 
	find Q=X div D, R=X mod D where X = A*B+C
could be done this way, X being 64 bits and the other variables 32 bits.
Recent System V C compilers have "asm procedures" which can be used the
same way.  Thus there is no instruction on an MC680x0 or Intel 80386
which cannot be generated by a UNIX C compiler, with *no* function call
overhead.

ADA provides a standard form (package MACHINE_CODE, in LRM 13.8) for
"machine code inserts" by means of which any instruction can be requested
by the programmer (though this package is optional).  Perhaps ADA would
suit Rubin's needs?

smryan@garth.UUCP (Steven Ryan) (09/13/88)

>It surprises me that these ideas still hang on. In these days when 
>man power is expensive and machine power is cheap, the computer should
>do as much of the job as possible. This includes translating abstract descriptions
>of what to do and making these translations as efficient as possible.

The problem is: who creates these abstract descriptions?

In C, Kernighan and Ritchie made up a set of operators that work well
on some machines and miserably on others. Instructions available on
some machines are unavailable in C because of assumptions about what
should and should not be done.

>My opinion is that the language should help the programmer to be
>as portable as possible. This means that he should be able to
>encapsulate OS calls and the like. (Hardware-dependent parts should 
>really be avoided or cut down to a minimum.)

Procedure/operator/class definitions all help make code portable. I don't
recall anybody saying get rid of these.

To me, the question is: Do the powers that be define exactly those
operators which are sufficient and cast all others (and the cpus) into
the Outer Darkness? Or do we provide access to the hardware and an
appropriate encapsulation technique?

smryan@garth.UUCP (Steven Ryan) (09/13/88)

>>Many have argued that the compiler would be slowed down, or that a many pass
>>compiler would be needed.  Now you are arguing that machine time is not that
>>important.  If execution time is not that important, why should compiler time
>>be any more important?

>Because compile times are directly related to programmer productivity.  How
>many times do you sit at your terminal (pc, workstation, whatever) just
>waiting for that 10+ minute (or even 5 minute) compile to finish?

Well, before I jump in, I would like to see proof that extensible languages
are significantly harder to compile.

pmontgom@sm.unisys.com (Peter Montgomery) (09/13/88)

In article <382@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>In article <923@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>>
>>Programming is therefore not necessarily faster in HLLs.  Frequently,
>>I see how to use a machine instruction or instructions to accomplish something
>>which I consider simple and obvious.  Why should I even have to try to find a
>>way to do it in some HLL?  The HLL gurus have left out too much.
>
>Please tell us about some of these instructions.

	Herman Rubin previously mentioned my most important need:  support
of long long multiplication and division (multiply two 32-bit numbers to
get a 64-bit product; divide a 64-bit number by a 32-bit number to get
quotient and remainder).
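
In C the pair can at least be expressed, assuming a hypothetical compiler
with a 64-bit "long long" type; fusing the two statements into the machine's
single divide instruction is then the compiler's problem:

void divrem(long long x, long d, long *q, long *r)
{
    *q = (long)(x / d);    /* 64/32 quotient  */
    *r = (long)(x % d);    /* 64/32 remainder */
}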

        Another need is the truncated base 2 logarithm of an unsigned
integer n.  For example, LOG2(13) = 3.  Conventionally LOG2(0) = -1.
I want to be able to write code resembling the following (in practice,
the "switch" would probably be several #ifdefs):

inline int LOG2(unsigned long n)   /* Return truncated base 2 log of n */
{
    switch(target machine) {

    default:    Use a loop which searches n one bit at a time

    CDC CYBER:  if ((signed long)n < 0)
		    return 59;
		else if ((n >> 48) == 0)
		    return 47 - (B register output when normalizing n);
		else
		    return 95 - (B register output when normalizing n >> 48);

    CRAY:       return 63 - (leading zero count of n)

    SUN 3:      return 31 - (output of BFFFO (find first) applied to n);

    VAX:        {   const double d = (double)((signed long)n);   (CVTLD)
		    if (d == 0.0) return -1;
		    if (d < 0.0) return 31;
		    extract lower 5 bits of exponent of d, return 1 less;
		}
    }
}
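
The "default" branch is the only portable one; written out, it is just a
shift loop (a sketch):

int LOG2_default(unsigned long n)
{
    int lg = -1;           /* LOG2(0) == -1 by the convention above */
    while (n != 0) {       /* searches n one bit at a time */
        n >>= 1;
        lg++;
    }
    return lg;
}
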
        Although all these architectures allow the function to be computed
directly (without a loop), the CRAY CFT compiler is the only C or FORTRAN
compiler I know which lets the programmer express this algorithm and get
inline code.  [The SUN 3 "inline" utility may work, but its manual page
states "Only opdefs and addressing modes generated by Sun compilers are
guaranteed to work"; I want to access an instruction NOT generated by
these compilers.]  I use LOG2 when initializing the left-to-right binary
method of exponentiation and when normalizing a divisor for multiple
precision division.  The exponent of the highest power of 2 dividing a
positive number p (needed by the binary method for greatest common divisors)
is LOG2(p & ~(p-1)).  In practice I write assembly code for each target
machine, but these procedure calls inhibit normal optimization (the compiler
doesn't know what other variables my procedure may reference or modify) and
prevent peephole optimizations which a good compiler would make, such as
replacing 30 - LOG2(n) by (output of BFFFO) - 1 on a SUN 3.

ok@quintus.uucp (Richard A. O'Keefe) (09/14/88)

In article <smL~rDM@check.sm.unisys.com> pmontgom@sm.unisys.com (Peter Montgomery) writes:
>In article <382@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:
>>In article <923@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>>>
>>>Programming is therefore not necessarily faster in HLLs.  Frequently,
>>>I see how to use a machine instruction or instructions to accomplish something
>>>which I consider simple and obvious.  Why should I even have to try to find a
>>>way to do it in some HLL?  The HLL gurus have left out too much.
>>
>>Please tell us about some of these instructions.
>
Summary of Montgomery's message:  64-bit integer operations, and find first
bit.  Summary of my reply:  PL/I can express them.

>	Herman Rubin previously mentioned my most important need:  support
>of long long multiplication and division (multiply two 32-bit numbers to
>get a 64-bit product; divide a 64-bit number by a 32-bit number to get
>quotient and remainder).

It is important to distinguish between *languages* and *implementations*.
PL/I, Pascal, ADA, and Algol 68 can all express these calculations.  With
PL/I, you could get exactly what you want with the built-in functions
MULTIPLY and DIVIDE, if the implementation allowed you.

Any programming language which lets the user specify the range of integer
variables could provide access to these instructions if the implementor
chose to let you declare variables with sufficient range.  Instead of
saying "gimme access to arbitrary machine instructions", why not say
"FINISH the implementation of the language which you offered me, and
let me use a BINARY FIXED (63,0) variable if I really want one."


>        Another need is the truncated base 2 logarithm of an unsigned
>integer n.
>        Although all these architectures allow the function to be computed
>directly (without a loop), the CRAY CFT compiler is the only C or FORTRAN
>compiler I know which lets the programmer express this algorithm and get
>inline code.  [The SUN 3 "inline" utility may work, but its manual page
>states "Only opdefs and addressing modes generated by Sun compilers are
>guaranteed to work"; I want to access an instruction NOT generated by
>these compilers.]

The manual doesn't say that only >instructions< generated by the compilers
work, but that only >addressing modes< generated by the compilers are
guaranteed to work.  BFFFO is an instruction, not an addressing mode.
I have successfully used
cat <<'EOF' >haulong.il
	.inline	_haulong,4
	moveq	#32,d0
	movl	sp@+,d1
	bfffo	d1{#0:d0},d1
	bnes	1f
	moveq	#33,d1
    1:	subl	d1,d0
	.end
EOF
with SunOS 3.2 and SunOS 3.5.  {The ffs() library function searches from
the least significant bit towards the most significant bit.}
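
From C the programmer then simply declares and calls it; a sketch (the
function name is the one the .il body above assumes, and the expansion is
substituted at each call site):

extern int haulong(int);   /* expanded inline from haulong.il */

int significant_bits(void)
{
    return haulong(13);    /* 4: binary 1101 has four significant bits */
}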

In PL/I, you could use INDEX(BIT(source,32), '1'B) to look for the
first bit; it would be easy enough for a compiler to recognise this
particular pattern (the bits would in effect be numbered MSB=1..LSB=32).

And of course ADA can handle these things with the aid of machine-code
inserts, and UNIX V.3 C compilers have asm functions.

I really don't want to be mistaken for a friend of PL/I.
My point is just that there *are* medium-level programming languages
(Ada, some Cs, PL/I) which can handle the examples suggested so far,
even ones (Ada, some Cs) which can be made to generate any instruction.

Could we have some discussion of why
    - PL/I BIT(N) and FIXED(P[,Q]) data types and the operations it provides
      are not adequate
    - ADA machine code inserts are not adequate
    - "inline" or V.3 asm functions are not adequate
    - what could be done to make them adequate

smryan@garth.UUCP (Steven Ryan) (09/15/88)

>Summary of Montgomery's message:  64-bit integer operations, and find first
>bit.  Summary of my reply:  PL/I can express them.

My complaint about PL/I is that it does TOO much--everything's there, but you
aren't sure what you're getting.

F'r instance, decimal fixpoint arithmetic is nice if the cpu has decimal
instructions, but not so nice if it's a binary-only machine.
 
>Could we have some discussion of why
>    - PL/I BIT(N) and FIXED(P[,Q]) data types and the operations it provides
>      are not adequate
>    - ADA machine code inserts are not adequate
>    - "inline" or V.3 asm functions are not adequate
>    - what could be done to make them adequate

Probably because this started out as gripes about C.

noise@eneevax.UUCP (Johnson Noise) (09/15/88)

In article <3938@enea.se> sommar@enea.se (Erland Sommarskog) writes:

>do in different situations. If the program runs too slowly you can always
>buy a faster machine. But if you don't get the code on time, what do you
>do? Buy a faster programmer?

	Hell yes.
	How many teamsters does it take to screw in a light bulb?
	15!! YOU GOT A PROBLEM WITH THAT?

	Nuclear power plant construction suffers cost and schedule
overruns; I guess we should conduct more research on fast-setting
epoxies.

>     Apple : Apple_type;
>     Pear  : Pear_type;
>In many old-fashioned languages I can't do this. (Fortran, Cobol, C?).

	Can you read, write, speak the english language?
	Good thing nobody uses dinosaurs like C anymore!  Just think
what could have happened... compilers, editors, operating systems, and
embedded systems may have been horrendously implemented.

>Other languages let me do this, but when I by mistake write:
>    Apple := Pear;   or
>    Eat_an_apple(Pear); -- Parameter is of Apple_type.
>the compiler will not always warn me. (Pascal, Modula-2?, Oberon?)

	C'mon. Everyone knows that nikky's tighter than a TIG weld when it
comes to type checking, not that I'm against it. Well, almost

>  Some languages that enforce this strict type checking are Ada, 
>Eiffel, Simula and C++(?). Hardly just a coincidence that all of 
>them but Ada are object-oriented.

	Oh, so you can read, write, speak the english language.

>  You may wonder why this is important. Let me just answer with a
>common, but good, cliché: Get the errors out early.

	What a concept. Are you planning to publish?
	By the way, what happened to that oxy-acetalene C flame you
had goin' for a while in your .sig?  And why did you stop posting?

ok@quintus.uucp (Richard A. O'Keefe) (09/15/88)

In article <1413@garth.UUCP> smryan@garth.UUCP (Steven Ryan) writes:
>>Summary of Montgomery's message:  64-bit integer operations, and find first
>>bit.  Summary of my reply:  PL/I can express them.

>My complaint about PL/I is that it does TOO much--everything's there, but you
>aren't sure what you're getting.

>F'r instance, decimal fixpoint arithmetic is nice if the cpu has decimal
>instructions, but not so nice if it's a binary-only machine.

I really don't want to be mistaken for a friend of PL/I.  I think its
implicit conversions are a menace to shipping.  As for DECIMAL FIXED
arithmetic, since the topic was "HLL designers are ratbags who won't
give me access to all the instructions", it should be pointed out in
PL/I's favour that if the machine _has_ got some decimal support (as
a 680x0, 80*86, VAX, PR1ME 500, /370, and so on, and several floating-
point chips have) Rubin et al might want to get at it, and PL/I will
let them.

>>Could we have some discussion of why
>>    - PL/I BIT(N) and FIXED(P[,Q]) data types and the operations it provides
>>      are not adequate
>>    - ADA machine code inserts are not adequate
>>    - "inline" or V.3 asm functions are not adequate
>>    - what could be done to make them adequate
>
>Probably because this started out as gripes about C.

Well, it _really_ started out as postings in comp.lang.c and comp.arch
which attacked HLLs in general.  It's not reasonable to require C to be
everything to everyone.  Now there's a lot in PL/I which I imagine Rubin
has no particular need for (COMPLEX PICTUREd data, perhaps?).  The
question is whether he only needs the kind of operations that are good
for multiprecision integer arithmetic (in which case we may look to PL/I
for ideas about a language that would let him get on with it) or whether
he needs access to whatever instructions there are (in which case we may
look to ADA) or whether some entirely new ideas are needed.

Just to get in first:  an amalgam of PL/I and ADA would be _awful_.

hjm@cernvax.UUCP (Hubert Matthews) (09/15/88)

One of the main ideas behind high-level languages is to provide a virtual
machine that is mostly independent of the low-level hardware.  So, trying
to make a language that allows full access to the low-level destroys this
abstraction which language designers have fought so hard to achieve.  If
Herman wants machine features, then why does he insist on using a high-level
language which, by its very nature, is designed to rid him of having to
be concerned about such things?  Destroying this illusion makes such code
much more difficult to understand as the machine environment must be included
as well as the semantics of the language.  Why do you think that assembler
is so much less portable than an HLL?  That is the major issue here; the
level of abstraction.

On a slightly less ethereal level, if Herman wants a 32 x 32 -> 64 multiply
instruction, then I suggest that the real problem is not that the language
doesn't let him specify this particular instruction, but that the language
doesn't let him specify his data to a sufficiently high degree.  PL/I allows
a declaration including the number of bits of storage.  Occam allows BYTE,
INT, INT16, INT32 and INT64 (so the 32/32 -> 64 will work) for integer types.
What should be done instead is to force all integer variables to
be declared using ranges, so that the abstraction is retained by implying
that you don't particularly care how something is done, as long as it is
done.  Herman could then declare i,j : -2**31..2**31-1 and k : -2**63..2**63-1
and expect a decent compiler to pick the right machine instructions to do

	k := i * j

as a 32*32 -> 64.

If you want to use other, more perverse, instructions, then why not use small
functions to do it?  On VMS, there are several functions such as LIB$INSQHI
to access some of the special instructions *that are peculiar to the VAX*.
(There is even a LIB$EMUL for doing 32*32 -> 64 multiplication :-) ).
Put such things in a library - don't junk up a language to do it.  No other
machine that I can think of has interlocked queue instructions, so what use
is carrying around VAX stuff in a language that is to be used on 68k/*86
and any other type of stuff.  I don't want to pay (in terms of compiler cost
and complexity, bugs and size) for your little, one-off, instruction hacks.
That's one of the good things about K&R C.  Everything's in libraries, so
I don't have to use your stuff if I don't want it.  Don't destroy my
abstraction for your "do it all in one instruction" bodge.  I want a lean,
mean, hungry and beautiful language and not a kludge-up like C or PL/I or
ADA or FORTRAN or .....

	Hubert Matthews

jeff@aiva.ed.ac.uk (Jeff Dalton) (09/16/88)

In article <1386@garth.UUCP> smryan@garth.UUCP (Steven Ryan) writes:
>The lack of a boolean mode and the requirement that
>&& and || short-circuit are horrendous on a RISC.

How is this different from saying "if is horrendous on a RISC"?
That is what I'd have to write if not for && and ||.


karl@haddock.ima.isc.com (Karl Heuer) (09/17/88)

About 10 years ago I used an extensible language called IMP.  (I believe Ned
Irons may have been responsible for it.)  Does anyone have information about
its current availability, or where Ned Irons is these days?

Karl W. Z. Heuer (ima!haddock!karl or karl@haddock.isc.com), The Walking Lint

cik@l.cc.purdue.edu (Herman Rubin) (09/17/88)

In article <822@cernvax.UUCP>, hjm@cernvax.UUCP (Hubert Matthews) writes:
> 
> One of the main ideas behind high-level languages is to provide a virtual
> machine that is mostly independent of the low-level hardware.

Whose virtual machine?  Just as I find that a student who has had a statistical
methods course has a much harder time understanding statistical concepts than
one who does not, I suspect that someone who learns to program in a highly
restricted language will never understand what hardware is capable of and 
can be capable of.  My virtual machine would have all operations that can
be thought of, not just the ones I can imagine.

>                                                                 So, trying
> to make a language that allows full access to the low-level destroys this
> abstraction which language designers have fought so hard to achieve.  If
> Herman wants machine features, then why does he insist on using a high-level
> language which, by its very nature, is designed to rid him of having to
> be concerned about such things?  Destroying this illusion makes such code
> much more difficult to understand as the machine environment must be included
> as well as the semantics of the language.  Why do you think that assembler
> is so much less portable than an HLL?  That is the major issue here; the
> level of abstraction.

I agree completely that machine language is less portable.  I find that the
assemblers use unnecessarily messy code, with an unnatural format which is
designed to be easy for the machine.  A weakly typed overloaded operator
infix notation language would do well; C is partly in that direction, C++
a little more so (and would be much better if additional operators could be
introduced), but I know of no reasonable implementations.

> On a slightly less ethereal level, if Herman wants a 32 x 32 -> 64 multiply
> instruction, then I suggest that the real problem is not that the language
> doesn't let him specify this particular instruction, but that the language
> doesn't let him specify his data to a sufficiently high degree.  PL/I allows
> a declaration including the number of bits of storage.  Occam allows BYTE,
> INT, INT16, INT32 and INT64 (so the 32/32 -> 64 will work) for integer types.
> What should be done instead is to force all integer variables to
> be declared using ranges, so that the abstraction is retained by implying
> that you don't particularly care how something is done, as long as it is
> done.  Herman could then declare i,j : -2**31..2**31-1 and k : -2**63..2**63-1
> and expect a decent compiler to pick the right machine instructions to do
> 
> 	k := i * j
> 
> as a 32*32 -> 64.

I have seen no decent compilers.  Actually, your idea of typing integers quite
agrees with mine, except that I am not sure that ranges are needed so much, and
you do not go far enough.  You also seem to go along with overloaded operators;
if you read K&R, you will find that no consideration is given to a product being
done in such a way that the type of the result differs from the type of the
factors actually multiplied.  My idea of an overloaded operator assembler would do this
if the types are as stated and the operation is either hardware on the
appropriate storage classes or defined in a macro.  This is what is not 
available in languages and compilers, and I know of no overloaded assemblers. 
There is "standard" software which gives conditional definitions given machine
parameters; the definition of multiplication should be such that the user can
supersede the compiler if the compiler does not do the right thing.

> If you want to use other, more perverse, instructions, then why not use small
> functions to do it?  On VMS, there are several functions such as LIB$INSQHI
> to access some of the special instructions *that are peculiar to the VAX*.
> (There is even a LIB$EMUL for doing 32*32 -> 64 multiplication :-) ).
> Put such things in a library - don't junk up a language to do it.

The cost of a subroutine call is so high that it usually pays to use a much
worse procedure.  In every implementation of C that I have seen, abs is a
subroutine call, and it is necessary to have a separate fabs for floating.
At least on every machine I know, the inline absolute value would not only
be faster but even shorter.  LIB$EMUL uses at least 4 instructions in the 
program doing the call (I would have to check whether it would take 6) to
accomplish the results of one, not counting the 4 instructions used by the
subroutine.  Thus, even if the subroutine call were very fast (which it is
not), 8 instructions are run instead of one.
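
The macro workaround is trivial, which makes the omission the more annoying;
a sketch, with the well-known flaw noted:

#define ABS(x) ((x) < 0 ? -(x) : (x))  /* inline; works for int or double */
/* Flaw: ABS(f()) evaluates f() twice; the abs() subroutine does not.     */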

>                                                              No other
> machine that I can think of has interlocked queue instructions, so what use
> is carrying around VAX stuff in a language that is to be used on 68k/*86
> and any other type of stuff?  I don't want to pay (in terms of compiler cost
> and complexity, bugs and size) for your little, one-off, instruction hacks.
> That's one of the good things about K&R C.  Everything's in libraries, so
> I don't have to use your stuff if I don't want it.  Don't destroy my
> abstraction for your "do it all in one instruction" bodge.  I want a lean,
> mean, hungry and beautiful language and not a kludge-up like C or PL/I or
> ADA or FORTRAN or .....

I am willing to give up a lot to get a powerful, mean, hungry and beautiful
language.  Your lean language is so undernourished as to be feeble.  I
am willing to give up the denial of registers, the use of operator precedence,
the inability to control storage, etc.  BTW, I have never used the queue
instructions, or even the decimal instructions, so I would not use that part
of the language.  But I see no reason why the arithmetic operations should
not be overloaded to include the decimal operations.

I do not want a computerized automobile which will take me to a street address
by an automatic route.  I want to be able to reroute it around rush hour
traffic if I think I know a better way.  And I want to be able to back the
car out of the garage into the driveway to clean out the garage.
> 
> 	Hubert Matthews


-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet, UUCP)

sommar@enea.se (Erland Sommarskog) (09/17/88)

As anyone who has been on the net for a while knows, there are different
people hanging around here. Some of them get into the debate
to state their view. You may or may not agree with them, but you
appreciate them as long as they are well-informed and stick to the
case. Others are not so well-informed, but must raise their voice.
Other people keep saying the same thing over and over again, no matter
how much their ideas are refuted. Still others feel it is their duty
to stick to personal attacks and make people upset.
  Johnson Noise (noise@eneevax.umd.edu.UUCP) seems to be of the 
latter kind. He comments on my article:
>	Can you read, write, speak the english language?
>	Good thing nobody uses dinosaurs like C anymore!  Just think
>what could have happened... compilers, editors, operating systems, and
>embedded systems may have been horrendously implemented.

Yes, I can read, write, speak the English language. (And spell it with
a capital "E"!) But even reading between the lines I fail to see what you are
trying to say in the actual discussion. Are you trying to say something? 
I think not.

>>Other languages let me do this, but when I by mistake write:
>>    Apple := Pear;   or
>>    Eat_an_apple(Pear); -- Parameter is of Apple_type.
>>the compiler will not always warn me. (Pascal, Modula-2?, Oberon?)
>
>	C'mon. Everyone knows that nikky's tighter than a TIG weld when it
>comes to type checking, not that I'm against it. Well, almost

I must immediately restate what I said above about my knowledge of  
English. What is "nikky"? And what is a "TIG weld"? And what has 
it to do with type checking? Am I right if I assume it's sheer nonsense?

>	What a concept. Are you planning to publish?

No. Why should I? There are already good books on this issue.

>	By the way, what happened to that oxy-acetylene C flame you
>had goin' for a while in your .sig?  And why did you stop posting?

I like to change my signature from time to time. Good jokes can 
get worn out too. And I didn't stop posting. If I had done that,
you couldn't have written your follow-up, could you? A better
question would be: why did you ever start posting?
-- 
Erland Sommarskog            ! "Hon ligger med min bäste vän, 
ENEA Data, Stockholm         !  jag vågar inte sova längre", Orup
sommar@enea.UUCP             ! ("She's making love with my best friend,
                             !   I dare not sleep anymore")

ok@quintus.uucp (Richard A. O'Keefe) (09/17/88)

In article <822@cernvax.UUCP> hjm@cernvax.UUCP (Hubert Matthews) writes:
>If you want to use other, more perverse, instructions, then why not use small
>functions to do it?  On VMS, there are several functions such as LIB$INSQHI
>to access some of the special instructions *that are peculiar to the VAX*.

>Put such things in a library - don't junk up a language to do it.  No other
>machine that I can think of has interlocked queue instructions, so what use
>is carrying around VAX stuff in a language that is to be used on 68k/*86
>and any other type of stuff?  I don't want to pay (in terms of compiler cost
>and complexity, bugs and size) for your little, one-off, instruction hacks.

I'm not sure whether this is amusing or distressing, but BSD UNIX for the
VAX added a number of functions such as
	insque(elem, pred)		% man 3 insque
	remque(elem)			% man 3 insque
	ffs(i)				% man 3 bcopy
to the UNIX library.  SunOS is not, as far as I know, available for VAXen
(pity), but it has these library functions even though the MC68010 has no
instructions to which they might correspond.  What was that about
"carrying around VAX stuff in a language ... on 68k"?

I suggest that anyone providing access to machine instructions through
libraries should do so by providing a special library which has to be
explicitly loaded, *not* in libc.a!

schwartz@shire (Scott Schwartz) (09/18/88)

In article <929@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>In article <822@cernvax.UUCP>, hjm@cernvax.UUCP (Hubert Matthews) writes:
>> Herman could then declare i,j : -2**31..2**31-1 and k : -2**63..2**63-1
>> and expect a decent compiler to pick the right machine instructions to do
>> 
>> 	k := i * j
>> 
>> as a 32*32 -> 64.
>
>I have seen no decent compilers.  

In that case, I think our problems are over.  All we have to do is come
up with the decent compiler that Hubert proposes.  I think ADA has all
the facilites needed to do a good job, namely operator overloading and
the ability to specify range and precision of variables.

>In every implementation of C that I have seen, abs is a
>subroutine call

Ever used a Sun?  Sun's C compiler generates three inline instructions
for abs().  (see /usr/lib/libm.il)


-- Scott Schwartz                         "... a regular food chain."
   <schwartz@gondor.cs.psu.edu>	                -- Rayan Zachariassen

smryan@garth.UUCP (Steven Ryan) (09/19/88)

>>The lack of a boolean mode and the requirement that
>>&& and || short-circuit are horrendous on a RISC.
>
>How is this different from saying "if is horrendous on a RISC"?
>That is what I'd have to write if not for && and ||.

If short-circuit && and || were not defined, then p && q and p || q
could be equivalent to p & q and p | q, with the programmer responsible for
dangerous side-effects. In Ada this would perturb the code,
but it is not a problem in C or Algol because of the existence of conditional
expressions.

In short, if you want p and q, use p&&q. If you want "if p then q, else don't
bother with q, just say false", use p?q:false.

ok@quintus.uucp (Richard A. O'Keefe) (09/19/88)

In article <1433@garth.UUCP> smryan@garth.UUCP (Steven Ryan) writes:
>If short-circuit && and || were not defined, then p && q and p || q
>could be equivalent to p & q and p | q, with the programmer responsible for
>dangerous side-effects. In Ada this would perturb the code,
>but it is not a problem in C or Algol because of the existence of conditional
>expressions.

I am puzzled by the reference to ADA, which has
	and {roughly "&"}		and then {same as "&&"}
	or  {roughly "|"}		or else  {same as "||"}
and is thus the same as C (apart from having a separate boolean type).
Why I say "roughly" is that if p and q are completely free of side effects,
it is still not the case that "p&&q" and "p&q" have the same value.
Consider, for example, p==4, q==2.  Then p&&q is defined to be 1, but
p&q is 0.  For the same numbers, p||q is also 1, but p|q is 6.
If you want an expression with the same value as p&&q, you have to write
	!!(p) & !!(q)

There are three separate things:
   (a)	bitwise masking operations
   (b)  "pure" logical operations
   (c)  short-circuit operations
Fortran and Pascal conflate (b) and (c), leaving it up to the compiler to
decide which to use.  C omits (b), providing only (a) and (c).  Algol 60
provides (b) and if-expressions.  Algol 68 provides (a), (b), and if-
expressions.  I think most people will agree that C is an improvement over
BCPL, where the same symbol was used for both & and &&, but if you used it
in a "value" context you got "&", while in a "control" context you got "&&".
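
In C terms, with (b) simulated since C omits it (a sketch):

#include <stdio.h>

int main(void)
{
    int p = 4, q = 2;
    printf("%d\n", p & q);      /* (a) bitwise masking:  0 */
    printf("%d\n", !!p & !!q);  /* (b) pure logical:     1 (simulated) */
    printf("%d\n", p && q);     /* (c) short-circuit:    1 */
    return 0;
}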

smryan@garth.UUCP (Steven Ryan) (09/20/88)

>>If short-circuit && and || were not defined, then p && q and p || q
>>could be equivalent to p & q and p | q, with the programmer responsible for
>>dangerous side-effects. In Ada this would perturb the code,
>>but it is not a problem in C or Algol because of the existence of conditional
>>expressions.
>
>I am puzzled by the reference to ADA, which has
>	and {roughly "&"}		and then {same as "&&"}

Ada has and-then/or-else but doesn't have conditional expressions.
Pascal has neither.

nevin1@ihlpb.ATT.COM (Liber) (09/23/88)

In article <822@cernvax.UUCP> hjm@cernvax.UUCP (Hubert Matthews) writes:

>On a slightly less ethereal level, if Herman wants a 32 x 32 -> 64 multiply
>instruction, then I suggest that the real problem is not that the language
>doesn't let him specify this particular instruction, but that the language
>doesn't let him specify his data to a sufficiently high degree.

That brings up the question:  What is a sufficiently high degree of
precision?  No matter what limit someone specifies for a given
implementation X, there is always someone who wants greater precision.
-- 
 _ __		NEVIN J. LIBER  ..!att!ihlpb!nevin1  (312) 979-4751  IH 4F-410
' )  )			 Anyone can survive being frozen in liquid nitrogen;
 /  / _ , __o  ____	  it's surviving the *thawing* that counts :-).
/  (_</_\/ <__/ / <_	These are NOT AT&T's opinions; let them make their own.

nevin1@ihlpb.ATT.COM (Liber) (09/23/88)

In article <929@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>In article <822@cernvax.UUCP>, hjm@cernvax.UUCP (Hubert Matthews) writes:

>> One of the main ideas behind high-level languages is to provide a virtual
>> machine that is mostly independent of the low-level hardware.

>Just as I find that a student who has had a statistical
>methods course has a much harder time understanding statistical concepts than
>one who does not,

Nice statement on our educational system.

>I suspect that someone who learns to program in a highly
>restricted language will never understand what hardware is capable of and 
>can be capable of.

Not (necessarily) true.  Since they can abstract more than if they had
to program in a low-level language (assuming you include languages like
C in your definition of highly restricted languages), they can probably
get the hardware to do more things than a programmer who programs only
in low-level languages.

>My virtual machine would have all operations that can
>be thought of, not just the ones I can imagine.

Hmmm...  You want a virtual machine that has all the operations that
can be thought of.  Sounds very close to a Universal Turing Machine,
which cannot exist!  The size of your virtual machine must not only be
finite, but it must also have an upper bound.
-- 
 _ __		NEVIN J. LIBER  ..!att!ihlpb!nevin1  (312) 979-4751  IH 4F-410
' )  )			 Anyone can survive being frozen in liquid nitrogen;
 /  / _ , __o  ____	  it's surviving the *thawing* that counts :-).
/  (_</_\/ <__/ / <_	These are NOT AT&T's opinions; let them make their own.

dik@cwi.nl (Dik T. Winter) (09/24/88)

In article <8780@ihlpb.ATT.COM> nevin1@ihlpb.UUCP (55528-Liber,N.J.) writes:
 > In article <822@cernvax.UUCP> hjm@cernvax.UUCP (Hubert Matthews) writes:
 > 
 > >On a slightly less ethereal level, if Herman wants a 32 x 32 -> 64 multiply
 > >instruction, then I suggest that the real problem is not that the language
 > >doesn't let him specify this particular instruction, but that the language
 > >doesn't let him specify his data to a sufficiently high degree.
 > 
 > That brings up the question:  What is a sufficiently high degree of
 > precision?  No matter what limit someone specifies for a given
 > implementation X, there is always someone who wants greater precision.

There is a trade-off of course.  I do not think there are many applications
that require more than 100 digits of precision, but in the future?
On the other hand, it is not exactly a question of not being able to
specify your data to a sufficiently high degree.  There is more involved.
Just like a number of people would like to have a (function/operator) that
returns both quotient and remainder of an integer division, it would be
nice to have a (function/operator) that returns the high order and low order
parts of a multiplication (which is available on many machines, the WE32000
being an exception).  However, this is still not all you need for
multiple precision arithmetic; how to write in a high level language the
equivalent of:
	add	*r0++,*r1++,*r2++	add first elements
	addc	*r0++,*r1++,*r2++	add successive elements with carry
	count1b				count 1 and branch back
so you also need add and subtract that return the carry bit, etc.
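
The best portable C can do is recompute the carry by comparison on every
word; a sketch of the add:

/* Multiple precision add, least significant word first; the two
   comparisons per word redo what the addc instruction gets for free. */
void mp_add(unsigned *r, const unsigned *a, const unsigned *b, int n)
{
    unsigned carry = 0;
    int i;
    for (i = 0; i < n; i++) {
        unsigned s = a[i] + b[i];
        unsigned c = (s < a[i]);     /* carry out of a[i]+b[i]        */
        r[i] = s + carry;
        carry = c | (r[i] < s);      /* carry out of adding the carry */
    }
}
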
In my opinion, when Herman wants to do efficient multiple precision
arithmetic in a high level language, he is going the wrong way.
At least with the current state of the art in high level languages.
(And you open a whole new can of worms if you are going to look at
vector processors.)
-- 
dik t. winter, cwi, amsterdam, nederland
INTERNET   : dik@cwi.nl
BITNET/EARN: dik@mcvax