[comp.lang.misc] Fortran vs. C for numerical work - expression notation

jlg@lanl.gov (Jim Giles) (12/09/90)

From article <16798@csli.Stanford.EDU>, by poser@csli.Stanford.EDU (Bill Poser):
> [... I said notation should resemble standard math closely ...]
>
> Jim, if you'll read the past 20 messages or so you'll see that this
> is precisely one of the things we've been discussing.  [...]

It is interesting that you should choose to complain about my posting
and not about Peter Grandi's.  He said (or, at least, implied) that
programming notation should not closely resemble standard math because
such a thing was misleading to novice programmers.  I hope you agree
that such a position is indefensible.

Instead, you chose to come religiously to the defense of C when I made
a final comment that Fortran was a noticeable improvement on C with
respect to mathematical notation.  Ok, I'll follow your argument to
see if there's a grain of validity to it.

> [...]                                                 I have
> already asserted that neither C nor Fortran provides much direct
> math support, that to the extent that they do the notation is very
> close to mathematical notation, and that the only real difference
> that I can see is that in Fortran power is an infix operator
> as is typical in mathematics whereas in C it is a function and
> therefore prefixed. This is an awfully small difference, isn't it?

Well, there are many different meanings to the phrase "awfully small
difference".  To a computing science professional, the difference is
indeed trivial.  In some branches of abstract mathematics, functional
notation is actually more natural.  But to most scientists, engineers,
and applied mathematicians, the difference is rather more significant.
Most of these people can read standard mathematical notation as fast
as you or I read English prose (and probably with more comprehension).
Any difference from such standard notation is an impediment - period.

Now, you correctly point out that neither Fortran nor C has a
complete set of usual mathematical notation.  To some extent, this
is a constraint of the character set, and to some extent it is a
limitation of computing.  After all, it is not really meaningful to
make the integral sign an operator in your language unless
integration is a built-in feature of the language - when, in fact,
approximating an integral is often the purpose of the program
itself.  We can only hope that as character sets get larger (my
workstation has dozens of fonts and hundreds of characters) and as
higher-level languages become available, the computational notation
available to scientists will converge with their own standardized
professional notations.

You also correctly point out that some of Fortran's (more or less)
standard operators (exponentiation, normal arithmetic on Complex,
etc.) require prefix (function call) notation in C.  This is an
impediment.  You may choose to call it trivial, but it is still an
impediment.
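
To make the impediment concrete, here is a minimal made-up sketch (the
Complex type and the cadd/cmul helpers are illustrative, not from any
particular library):

    struct Complex { double re, im; };

    struct Complex cadd(struct Complex a, struct Complex b)
    {
        struct Complex r;
        r.re = a.re + b.re;
        r.im = a.im + b.im;
        return r;
    }

    struct Complex cmul(struct Complex a, struct Complex b)
    {
        struct Complex r;
        r.re = a.re * b.re - a.im * b.im;
        r.im = a.re * b.im + a.im * b.re;
        return r;
    }

    /* Usage:  struct Complex d = cmul(cadd(a, b), c);
       -- versus the Fortran COMPLEX expression:  D = (A + B) * C   */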

You fail to point out that both languages (but C more than Fortran)
use standard mathematical symbols for non-standard purposes.  The
equal sign in both languages is used for assignment, for example.
Again, this is an impediment.  Again, C suffers from it worse than
Fortran.

You fail to mention that both languages use non-standard symbols for
standard mathematical concepts.  Fortran uses '.and.' for conjunction,
while C uses '&', for example (the standard symbol for conjunction is,
of course, '/\').  I think that C is still worse than Fortran here.
To be sure, C uses '<' and '>' for comparison operators, but then
commits the error of the last paragraph by using these same symbols as
part of the shift operators.
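
A tiny made-up illustration of that reuse:

    #include <stdio.h>

    int main(void)
    {
        int i = 3, j = 5;
        printf("%d %d\n", i < j, i << j);   /* prints "1 96": comparison vs. shift */
        return 0;
    }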

These last two points (and the fact that C uses the whole ASCII
character set in peculiar ways) tend to make many C programs look
like communications line noise - at least to most non-users of C.
This can't be held in C's favor in a discussion about closeness
to standard forms of notation.

Now, the final point that most of the people in this discussion (on
_both_ sides) seem to miss is this: _any_ change or impediment is
unjustifiable unless it confers some _advantage_.  Further, the
advantage conferred must be of greater value than the cost of the
change.  No matter how trivial you consider the changes from Fortran
to C notation, there is no intrinsic advantage to such a change.
Using prefix instead of infix doesn't make you a better mathematician.
It doesn't make your code run faster (indeed the reverse may be true
if the infix operator was built-in and "inlined" and the prefix
notation actually does a function call).

J. Giles

poser@csli.Stanford.EDU (Bill Poser) (12/09/90)

Jim, I hardly came to the "religious defence of C". Rather, I asked that
you give evidence of your claim that Fortran's mathematical
notation is substantially closer to that used in mathematics than C's,
since this had been explicitly discussed. As it turns out, you don't really
offer examples different from those already mentioned, but apparently you
think that they are more significant.

On this, two comments. First, I don't really accept the claim that
C is further from math than Fortran because it uses, e.g. & for
logical and. If, as you suggest, we were to use /\, we'd still have a
different symbol, one that looks sort of like the mathematical one, but
not quite, and that doesn't type as a single symbol. 

Second, my experience has been that people who study mathematics
at all seriously, both pure and applied, are comfortable with a wide
variety of notations. Look at physics, for example, and the bewildering
variety of subscript and superscript notation you find, and tensor
notation, etc. It is very difficult for me to believe that the difference
between infix and prefix power functions and the like can make much
of a difference to anyone doing serious mathematical computation.
Is there any evidence that it actually matters?

Regarding Piercarlo Grandi's argument that programming notation
should differ as much as possible from mathematical notation,
I am not terribly sympathetic. I suspect that this will just
make people spend their time learning the funny new notation, not
make them think harder about how actual digital computation differs
from symbolic or ideal continuous numerical computation. He could be
right, but I wouldn't bet on it. So, no, I'm not advocating gratuitous
differences, just suggesting: (a) that Fortran and C are not very
different in this respect; (b) that these differences probably don't
matter very much.

							Bill
 

jlg@lanl.gov (Jim Giles) (12/10/90)

From article <16799@csli.Stanford.EDU>, by poser@csli.Stanford.EDU (Bill Poser):
> [...]
> Second, my experience has been that people who study mathematics
> at all seriously, both pure and applied, are comfortable with a wide
> variety of notations.  [...]

None of which resemble C - even remotely.  And, most of those different
notations confer some advantage in a specific corner of the problem
domain (with _clarity_ the highest criterion).  C notation confers
no such advantages, and clarity seems to have been at the bottom of the
list there.

> [...]          It is very difficult for me to believe that the difference
> between infix and prefix power functions and the like can make much
> of a difference to anyone doing serious mathematical computation.

You again make the incorrect assertion that this is the only difference.
I regard this as an important difference, but not as important as most
of the others.

> [...]
> Is there any evidence that it actually matters?

Yes.  There are 40+ operators in C and 15 precedence levels.  Almost
none of this matches any notational or precedence convention in any
known problem domain.  It also doesn't match or resemble conventions
used by any other programming language.  This is widely regarded
as one of the reasons that C is hard to learn to use _effectively_.
Further, there are several specific experimental results regarding
side-effects - specifically that assignment should be a statement
level operator and _not_ an expression level operator.
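
As a made-up illustration of the side-effect hazard behind that
recommendation (this is not one of the cited studies): because assignment
is an expression in C, a mistyped comparison still compiles.

    #include <stdio.h>

    int main(void)
    {
        int x = 0, y = 5;
        if (x = y)                      /* assigns y to x, then tests the result; */
            printf("x is now %d\n", x); /* (x == y) was almost surely intended    */
        return 0;
    }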

> [...]
> Regarding Piercarlo Grandi's argument that programming notation
> should differ as much as possible from mathematical notation,
> I am not terribly sympathetic. I suspect that this will just
> make people spend their time learning the funny new notation, not
> make them think harder about how actual digital computation differs
> from symbolic or ideal continuous numerical computation.  [...]

Interesting.  That is a very succinct and accurate restatement of the
argument I made in the last article about learning C syntax.  So,
you see the problem - I just don't understand how you fail to see
that it applies to C.

 J. Giles

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (12/10/90)

In article <8339@lanl.gov> jlg@lanl.gov (Jim Giles) writes:
  [ Jim paraphrases Peter's position: ]
> programming notation should not closely resemble standard math because
> such a thing was misleading to novice programmers.  I hope you agree
> that such a position is indefensible.

The arguments that programming notation should look like math notation
are suspiciously like the arguments for visual (i.e., WYSIWYG) document
design. The counterarguments are very much like the arguments for
logical document design. Didn't the Bulletin recently carry a rather
convincing series on the superiority of logical document design?

  [ C has prefix pow(a,b), Fortran has infix a**b ]
  [ Bill calls this an ``awfully small difference'' ]
  [ Jim responds: ]
> Well, there's a lot of different meanings to the phrase "awfully small
> difference".
  [ blah blah blah ]
> Any difference from such standard notation is an impediment - period.

Jim, how is a**b more ``standard mathematical notation'' than pow(a,b)?
How is a.gt.b more ``standard mathematical notation'' than a > b?

Most professional Fortran programmers groan when they hear that
Fortran 8X (oops, 9X) has a > b. Maybe ``standard mathematical
notation'' isn't as important as familiar notation...

> We can only hope that as character sets get larger (my
> workstation has dozens of fonts and hundreds of characters) and as
> higher-level languages become available, the computational notation
> available to scientists will converge with their own standardized
> professional notations.

I sure hope not. The computer treats complex types quite differently
from integer types, and I want to be reminded of that difference when
I'm programming. Until a computer can *do* mathematics I don't want to
pretend that it can *talk* mathematics.

I agree entirely that Macsyma and friends should imitate standard
mathematical notation when they're doing standard mathematics.

> You fail to point out that both languages (but C more than Fortran)
> use standard mathematical symbols for non-standard purposes.  The
> equal sign in both languages is used for assignment, for example.
> Again, this is an impediment.  Again, C suffers from it worse than
> Fortran.

How is = a ``nonstandard'' symbol for assignment? If you think of your
computer as a sequence of states, then x = y + z is a quite natural way
to say ``x in the next state equals y + z in this state, with everything
else the same.'' So is x <- y + z: ``Store y + z in x.'' Or x -> y + z:
``x changes to y + z.'' If any of these is ``standard'', it's the first
one, but the concept is expressed so differently in mathematics that I
can't believe you consider any language notation better than any other.

> You fail to mention that both languages use non-standard symbols for
> standard mathematical concepts.  Fortran uses '.and.' for conjunction,
> while C uses '&', for example (the standard symbol for conjunction is,
> of course, '/\').

Spoken like a true computer scientist. The traditional symbol for
conjunction was ``&''. The most popular way to express conjunction (for
English speakers) has always been the word ``and''. That pseudo-Lambda
and similar abominations are relatively recent inventions still not very
popular outside mathematical logic.

> Now, the final point that most of the people in this discussion (on
> _both_ sides) seem to miss is this: _any_ change or impediment is
> unjustifiable unless it confers some _advantage_.

Beautifully convoluted rhetoric. What is your point?

Neither Fortran nor C uses particularly ``standard'' mathematical
notation, because programming is not particularly ``standard''
mathematics. There is no reason this situation should change, and it is
not a mark against either language that certain concepts are expressed
somewhat differently.

---Dan

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (12/10/90)

In article <8366@lanl.gov> jlg@lanl.gov (Jim Giles) writes:
> There are 40+ operators in C and 15 precedence levels. 

So what? There are a zillion operators in mathematics, and nobody even
agrees on the precedence levels. You solve this in math papers the same
way as in programs: you put in extra parentheses wherever you think
anyone might get confused. This is perfectly natural and doesn't waste
time.

> Almost
> none of this matches any notational or precedence convention in any
> known problem domain.

Actually, the function notation in C matches mathematical notation quite
well. Think about how the Laplace operator interacts with evaluation at
a point, for example. MDAS and its extensions work. Past that I just use
parentheses.

> Further, there are several specific experimental results regarding
> side-effects - specifically that assignment should be a statement
> level operator and _not_ an expression level operator.

What studies are you referring to?

---Dan

john@ghostwheel.unm.edu (John Prentice) (12/10/90)

In article <914:Dec923:50:2990@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>
>Jim, how is a**b more ``standard mathematical notation'' than pow(a,b)?
>How is a.gt.b more ``standard mathematical notation'' than a > b?
>
It is sort of obvious isn't it?  To use LaTeX format, one writes 
exponentiation as x^{y} in normal mathematical notation
(at least they did where I went to school).  Thus I would agree that
a**b is "more natural" than pow(a,b).  Once you are used to it, either
notation is fine I suppose, but you have to admit that the a**b form
is the "more natural".  I could always say something like "Fortran a
computer language is" instead of "Fortran is a computer language".
Either one works, but which would YOU prefer?  I agree with Jim on this one.

>Most professional Fortran programmers groan when they hear that
>Fortran 8X (oops, 9X) has a > b. Maybe ``standard mathematical
>notation'' isn't as important as familiar notation...

We must know different professional Fortran programmers.  I have never 
heard anyone make this comment.

John

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (12/10/90)

In article <1990Dec10.023332.15164@ariel.unm.edu> john@ghostwheel.unm.edu (John Prentice) writes:
> In article <914:Dec923:50:2990@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
> > Jim, how is a**b more ``standard mathematical notation'' than pow(a,b)?
> > How is a.gt.b more ``standard mathematical notation'' than a > b?
> It is sort of obvious isn't it? 

No, it is not.

> To use LaTeX format, one writes 
> exponentiation as x^{y} in normal mathematical notation

I see no relations between a**b, pow(a,b), a^{b}, and superscript
notation other than that Fortran and LaTeX use (different) infix, C uses
prefix, and math uses superscript.

So your argument reduces to ``If LaTeX uses an infix notation, then any
infix notation is more standard than any prefix notation.'' This is
ridiculous.

Shall we consider fractions? Math uses a slashed form in two different
positions. Fortran and C use the ``standard'' horizontal form. But for
the vertical form plain TeX uses an infix {a\over b}, and AMS-TeX uses
prefix \frac! Which is the ``standard'' notation here?

> but you have to admit that the a**b form
> is the "more natural".

Why? Let's try to use logic, not handwaving. Also note that Jim and I
are talking about ``standard mathematical notation.''

> I could always say something like "Fortran a
> computer language is" instead of "Fortran is a computer language".

The latter is standard English, while the former isn't.

Neither C, nor Fortran, nor TeX has a standard notation for powers. At
least C's notation matches the usual functional notation; you can't say
this for Fortran or plain TeX.

> >Most professional Fortran programmers groan when they hear that
> >Fortran 8X (oops, 9X) has a > b. Maybe ``standard mathematical
> >notation'' isn't as important as familiar notation...
> We must know different professional Fortran programmers.  I have never 
> heard anyone make this comment.

This was the uniform reaction of a large roomful of Fortran programmers
at Brookhaven National Laboratory in 1987 at a presentation of Fortran
8X, and I've heard the same opinion from many other people. Who's your
source?

---Dan

john@ghostwheel.unm.edu (John Prentice) (12/10/90)

In article <4390:Dec1003:50:4790@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>> To use LaTeX format, one writes 
>> exponentiation as x^{y} in normal mathematical notation
>
>I see no relations between a**b, pow(a,b), a^{b}, and superscript
>notation other than that Fortran and LaTeX use (different) infix, C uses
>prefix, and math uses superscript.
>
>So your argument reduces to ``If LaTeX uses an infix notation, then any
>infix notation is more standard than any prefix notation.'' This is
>ridiculous.
>

Whoa!  You misinterpreted what I said.  I was using LaTeX notation just
as a device for expressing 
                                y
                               x

on the screen.  I was not suggesting that because LaTeX does something
that it must be right.  Come on now.  All I was saying was that infix
notation is what mathematicians use for exponentiation, so it strikes
me that if you are going to worry over whether infix or prefix is
more "natural", infix wins in this case.  Honestly, I think
the issue is overblown.

>Why? Let's try to use logic, not handwaving. Also note that Jim and I
>are talking about ``standard mathematical notation.''

I hope my previous paragraph clears this up.

>> >Most professional Fortran programmers groan when they hear that
>> >Fortran 8X (oops, 9X) has a > b. Maybe ``standard mathematical
>> >notation'' isn't as important as familiar notation...
>> We must know different professional Fortran programmers.  I have never 
>> heard anyone make this comment.
>
>This was the uniform reaction of a large roomful of Fortran programmers
>at Brookhaven National Laboratory in 1987 at a presentation of Fortran
>8X, and I've heard the same opinion from many other people. Who's your
>source?

I have heard a lot of groaning about Fortran Extended being too big, too
much like Ada, etc...  I have NEVER heard a room full of Fortran
programmers complain about it, however, because it uses < instead of
.lt. (which is what you said).

John

mccalpin@perelandra.cms.udel.edu (John D. McCalpin) (12/10/90)

>>>>> On 10 Dec 90 07:14:57 GMT, john@ghostwheel.unm.edu (John Prentice) said:

John> In article <4390:Dec1003:50:4790@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:

>> >Most professional Fortran programmers groan when they hear that
>> >Fortran 8X (oops, 9X) has a > b. Maybe ``standard mathematical
>> >notation'' isn't as important as familiar notation...
>
>This was the uniform reaction of a large roomful of Fortran programmers
>at Brookhaven National Laboratory in 1987 at a presentation of Fortran 8X

John> I have heard a lot of groaning about Fortran Extended being too big, too
John> much like Ada, etc...  I have NEVER heard a room full of Fortran
John> programmers complain about it, however, because it uses < instead of
John> .lt. (which is what you said).

Come on, people --- Fortran Extended does not "use" '<' instead of '.LT.'.
Fortran Extended *allows* the use of '<' as an *alternative* to '.LT.'.
If you don't like it, you don't have to use it.

If other people use it and it hurts your eyes to read their code, then
I would be happy to sell you a very clever gadget that will convert
their blasphemous code back to the way God intended it.
--
John D. McCalpin			mccalpin@perelandra.cms.udel.edu
Assistant Professor			mccalpin@brahms.udel.edu
College of Marine Studies, U. Del.	J.MCCALPIN/OMNET

pcg@cs.aber.ac.uk (Piercarlo Grandi) (12/11/90)

On 9 Dec 90 02:56:19 GMT, poser@csli.Stanford.EDU (Bill Poser) said:

In article <16799@csli.Stanford.EDU> poser@csli.Stanford.EDU (Bill
Poser) writes:

poser> Regarding Piercarlo Grandi's argument that programming notation
poser> should differ as much as possible from mathematical notation,
poser> I am not terribly sympathetic.

I am afraid that you, like Jim Giles, did not understand my argument -- it
was not "since maths and programming are so different the notations
should be different as well", but "since maths and programming are so
different, similarity of notation is irrelevant and possibly even a trap
for the unwary".

I am not advocating pursuing difference of notation as a benefit; I am
saying that similarity should not be seen as an advantage of Fortran, and
if it is seen as such, this may indicate little awareness of the immense
difference between the two domains.  Hey, Mathematica (as somebody
remarked) has an even more maths-like notation than Fortran, but this
does not mean that the semantics of those operations are the same (at
least in the numerical domain -- as to the symbolic one, Mathematica used
to have several bugs :->).

poser> I suspect that this will just make people spend their time
poser> learning the funny new notation, not make them think harder about
poser> how actual digital computation differs from symbolic or ideal
poser> continuous numerical computation.

Oh yes, I can accept that.  After all, one of my alternatives to Fortran
was C++, because of its abstraction capabilities; one can use them to
create an even more faithful reproduction of maths-like notation, even if,
again, I think it is pointless.


poser> So, no, I'm not advocating gratuitious differences, just
poser> suggesting: (a) that Fortran and C are not very different in this
poser> respect; (b) that these differences probably don't matter very
poser> much.

Notation is usually not that important, as long as it helps the work
instead of hindering it; understanding the issues is.  Maybe a notation
that does not resemble traditional notation helps one understand better
that the semantics do not resemble traditional semantics, maybe not.

--
Piercarlo Grandi                   | ARPA: pcg%uk.ac.aber.cs@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcsun!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

jlg@lanl.gov (Jim Giles) (12/11/90)

A room full of Fortran programmers may indeed groan when someone
mentions that A > B will be the new form of  relational expression.
They are _not_ groaning because they think it's a bad idea; they
are groaning because it is a good idea that has been discussed to
death in the Fortran community for nearly two decades.  They groan
simply because it is an old debate that they are tired of
hearing about.

Now, if you tell that same roomful that you plan to remove x**y and
replace it with pow(x,y), they'll throw vegetables at you.  This is
because such a change is _not_ a good idea.  One of the advantages to
x**y or x^y over pow(x,y) is conciseness (which isn't really a word -
the correct word is 'concision' :-).  Another advantage is that, while
different from standard mathematical notation, they are _MUCH_ closer
to it.  If anyone disagrees, fine - Fortran doesn't FORCE you to use
the '**' operator, you can write a pow() function and call it - you
can even make it a statement function so that it gets inlined.  C
can't offer the same privilege to converting Fortran users - there is
no way in C (within the standard anyway) of defining an exponentiation
operator.  People who find this form most natural (most of us) cannot
use it in C.
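
For the record, a minimal sketch of the C side of this (the function and
its name are just illustrative):

    #include <math.h>

    double hyp(double a, double b)
    {
        return pow(a * a + b * b, 0.5);   /* Fortran spelling:  HYP = (A*A + B*B)**0.5 */
    }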

Yes, C++ allows user-defined operators (or rather, the real rule is
that it allows users to overload existing operators, but not to define
new ones).  This is a good idea and _allows_ most of Fortran users'
complaints about C expression notation to be fixed (I say 'allows';
there is still a problem of compatibility if two sets of users pick
different operators to overload, different functions to implement the
overloaded functionality, etc.).  But, if such overloaded operators
are actually implemented as _external_ function calls, this answer
is not satisfactory.  (By the way, Fortran Extended has overloadable
and user-definable operators too.)
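
To make the overloading point concrete, here is a minimal C++ sketch (the
Complex type is made up for illustration, not a standard library class):

    struct Complex {
        double re, im;
    };

    inline Complex operator*(Complex a, Complex b)
    {
        Complex r;
        r.re = a.re * b.re - a.im * b.im;
        r.im = a.re * b.im + a.im * b.re;
        return r;
    }

    /* Usage:  Complex z = x * y;   -- infix again, and being 'inline' it
       need not end up as an external function call.                      */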

J. Giles

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (12/11/90)

In article <1990Dec10.071457.21537@ariel.unm.edu> john@ghostwheel.unm.edu (John Prentice) writes:
>                                 y
>                                x
  [ ... ]
> All I was saying was that infix
> notation is what mathematicians use for exponentiation

No. Superscript notation is not infix notation. It's not even close.
Jim implied that Fortran is closer to ``standard mathematical notation''
than C is. I don't believe him. x**y and pow(x,y) are both quite far
from superscript notation. And x.gt.y is certainly not standard.

> I have NEVER heard a room full of Fortran 
> programmers complain about it however because it uses < instead of
> .lt. (which is what you said).

Well, I have.

Wasn't there a study showing that command names could be ridiculously
illogical and people would learn them just as quickly? I really do think
familiar notations are more important than ``standard mathematical
notation.''

---Dan

sommar@enea.se (Erland Sommarskog) (12/11/90)

Also sprach Dan Bernstein (brnstnd@kramden.acf.nyu.edu):
>I sure hope not. The computer treats complex types quite differently
>from integer types, and I want to be reminded of that difference when
>I'm programming. Until a computer can *do* mathematics I don't want to
>pretend that it can *talk* mathematics.

The computer treats integers and reals differently too; don't
you want to be reminded of that each time you write a program?

And the day you discover you have to go from reals to complex,
do you just love having to rewrite the code, just because the
language - either through built-ins or overloading - does not allow
you to use infix notation for complex numbers? Or maybe software
maintenance is something you never have to bother about?
-- 
Erland Sommarskog - ENEA Data, Stockholm - sommar@enea.se

iad@chaos.cs.brandeis.edu (Ivan Derzhanski) (12/11/90)

In article <8339@lanl.gov> jlg@lanl.gov (Jim Giles) writes:

>[...] some of Fortran's (more or less)
>standard operators (exponentiation, normal arithmetic on Complex,
>etc.) require prefix (function call) notation in C.  This is an
>impediment.  You may choose to call it trivial, but it is still an
>impediment.

Yes, it is indeed.  It is sort of strange that a language with 40
operators should make exponentiation a function.  I always think of
this as one of the bad things in C.  It could have had 41 operators
just as easily.

... or 42, or 43, for that matter...  Look, it is fine to accommodate
the preferences of the math-minded people by making operators operators
and functions functions, but it is also hopeless.  There simply are
too many operators in mathematics.  Let's suppose exponentiation must
not be prefix; we'll write it "**".  Now what about factorial? It is
just as common an operation.  And it must be postfix.  You don't want to
create an impediment, do you?  (Don't forget what double factorials mean
in maths. 4!! is not (4!)! = 24!. Do you expect that users will be willing
to unlearn this?)  Next, how about...  And so on and so forth...

If this were a serious issue, languages which allow you to define your
own operators (such as Algol-68) would have more success among number-
processing users.

>[...] both languages (but C more than Fortran)
>use standard mathematical symbols for non-standard purposes.  The
>equal sign in both languages is used for assignment, for example.
>Again, this is an impediment.  Again, C suffers from it worse than
>Fortran.

Could you please say why?  I'm completely confused by this statement.
How can one language suffer more than the other by a deviation from
standard mathematics that they both share?

>[...] both languages use non-standard symbols for
>standard mathematical concepts.  Fortran uses '.and.' for conjunction,
>while C uses '&', for example (the standard symbol for conjunction is,
>of course, '/\').  I think that C is still worse than Fortran here.

Again, why?  I have seen "&" in quite a few logic books.  (It is another
story, of course, that for some reason "&" is bit-wise conjunction in C,
while truth values are conjoined with "&&", which looks awful.)
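
A small made-up example of the distinction:

    #include <stdio.h>

    int main(void)
    {
        int a = 1, b = 2;
        printf("%d %d\n", a & b, a && b);   /* prints "0 1": bit-wise vs. logical */
        return 0;
    }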

>To be sure, C uses '<' and '>' for comparison operators, but then
>commits the error of the last paragraph by using these same symbols as
>part of the shift operators.

By the very same logic, Fortran commits an error by using "*" for
multiplication, because it is already part of the exponentiation operator.
You commit an error if you call a variable "I", because that's part of
the name of the sine function.

>These last two points (and the fact that C uses the whole ASCII
>character set in peculiar ways)

1. Is it really the whole ASCII charset?  Remind me of the use of the dollar
sign.  And of the backquote.
2. What is wrong with using the whole ASCII charset?  What is it there for
anyway?
3. What are the unpeculiar ways of using it?  How would you use (put here
the name of your favourite peculiarly-used character) in an intuitive way?

> tend to make many C programs look
>like communications line noise - at least to most non-users of C.

It doesn't take too much time for much of this noise to start to make
sense.  It depends on the good will of the programmer.  No language
has yet been designed that would prevent you from writing unreadable
code should you choose to.

>This can't be held in C's favor in a discussion about closeness
>to standard forms of notation.

No, since both languages are hopelessly far from standard notation.
You know how introductory books go: "Thou shalt not write `(-1)**K'
as thou dost in maths" etc. etc.

Again, speaking of standard notation (and given that the ASCII charset
is a natural limitation), how about admitting that the C "?:" operator
makes C look much more like maths?  Conditional
expressions (written with a big "{") are an integral part of the usual
mathematical notation.  They are very naturally converted to "?:".
Fortran has nothing similar, which is a serious impediment.
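
A small made-up example of that correspondence:

    double absval(double x)
    {
        return (x >= 0.0) ? x : -x;    /* |x| = x if x >= 0, -x otherwise */
    }

    /* The nearest Fortran 77 equivalent needs a full IF block:
          IF (X .GE. 0.0) THEN
             ABSVAL = X
          ELSE
             ABSVAL = -X
          ENDIF                                                           */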





-- 
Ivan A. Derzhanski   iad@chaos.brandeis.edu      Any clod can have the facts,
MB 1766 / Brandeis University                but having an opinion is an art.
P.O.Box 9110 / Waltham, MA 02254-9110 / USA                    Charles McCabe

john@ghostwheel.unm.edu (John Prentice) (12/12/90)

In article <1990Dec11.051448.10742@chaos.cs.brandeis.edu> iad@chaos.cs.brandeis.edu (Ivan Derzhanski) writes:

<< reply to Jim Giles concerning the fidelity of mathematical notation in
   Fortran and C largely deleted ..
>
>No, since both languages are hopelessly far from standard notation.
>You know how introductory books go: "Thou shalt not write `(-1)**K'
>as thou dost in maths" etc. etc.
>
>

I think it is fairly obvious that neither Fortran nor C maintains any
great fidelity to mathematical notation beyond a very simple level.  I also
don't know that the people who wrote these languages ever suggested
more than that.

John Prentice

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (12/12/90)

In article <2309@enea.se> sommar@enea.se (Erland Sommarskog) writes:
> Also sprach Dan Bernstein (brnstnd@kramden.acf.nyu.edu):
> >I sure hope not. The computer treats complex types quite differently
> >from integer types, and I want to be reminded of that difference when
> >I'm programming. Until a computer can *do* mathematics I don't want to
> >pretend that it can *talk* mathematics.
> The computer treats integers and reals different too, don't
> you want to be reminded by that each time you write a program?

Yes. This is one of the advantages of Forth.

Integer addition is just like normal addition except for overflow. I'm
willing to accept a normal plus sign for it.

Floating-point addition is vastly different. The only reasonably
standard notation I've seen for it is Knuth's plus-in-a-circle, but this
conflicts with other uses of circles. I'd be happier with fadd(x,y) than
with x + y.
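
A concrete made-up example of how different it is: floating-point '+' is
not even associative.

    #include <stdio.h>

    int main(void)
    {
        double a = 1.0e16, b = -1.0e16, c = 1.0;
        printf("%g\n", (a + b) + c);    /* prints 1 */
        printf("%g\n", a + (b + c));    /* prints 0 */
        return 0;
    }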

> And the day you discover you have to go from reals to complex,
> do you just love having to write the code, just because the
> language - either by built-ins or overloading - does not allow
> you use infix notation for complex numbers?

Huh? It's not my problem if you have an incompetent editor.

I use my editor to reduce my typing time. I use my programming language
to express instructions to a computer. I use my pretty-printer to make
my programs more readable on paper. Each stage has a different view of
floating-point addition, and that's how it should be.

> Or maybe, software
> maintenance is something you never have to bother about?

How does it hurt maintenance to use realistic notations?

---Dan

sommar@enea.se (Erland Sommarskog) (12/13/90)

Also sprach Dan Bernstein (brnstnd@kramden.acf.nyu.edu):
>I use my editor to reduce my typing time.

And if you don't have to re-edit because of a design change, or a
change in the assumptions, you save even more time.
-- 
Erland Sommarskog - ENEA Data, Stockholm - sommar@enea.se