[comp.lang.misc] He's not the only one at it again!

cik@l.cc.purdue.edu (Herman Rubin) (07/26/90)

In article <25630@cs.yale.edu>, zenith-steven@cs.yale.edu (Steven Ericsson Zenith) writes:
> In article <2400@l.cc.purdue.edu>, cik@l.cc.purdue.edu (Herman Rubin) writes:

			.........................

> From Juliussen and Juliussen's Computer Industry Almanac:
> 
> 1944 Grace Murray Hopper starts a distinguished career in the computer industry
>      by being the first programmer for the Mark 1.
> 1953 IBM ships its first stored program computer, the 701. [...]
> 1954 FORTRAN is created by John Backus at IBM following his 1953 SPEEDCO
> program.
>      Harlan Herrick runs first successful FORTRAN program.
> 1954 Gene Amdahl develops the first operating system, used on IBM 704.
> 1957 FORTRAN is introduced.
> 1958 ALGOL, first called IAL (International Algebraic Language), is presented
>      in Zurich.
> 1959 COBOL is defined by the Conference on Data Systems Languages (Codasyl)
>      based on Grace Hopper's Flow-Matic.

> [Sorry no earlier mention of Flow-Matic]

Well, I did get my dates somewhat wrong.  But there was considerable similarity
among the 701, 704, and 709, and it is still the case that Fortran was
written specifically for that machine design.

> 
> |>  The extremely poor attempt to produce a machine-independent 
> |>computational language ALGOL was the followup.
> 
> This is an unjustifiable comment, since ALGOL has had a far more profound
> influence on the design of programming languages than either FORTRAN or
> COBOL.

You are, unfortunately, absolutely correct here.  Fortran was not intended to
be complete.  ALGOL was, and it failed miserably.  ALGOL was intended to be a
programming language adequate for all numerical computations on all machines.
But it did not handle all the hardware options a good programmer would use on
the machines existing at that time.  Hardware produced a simultaneous quotient
and remainder; ALGOL made no provision for it.  Even then, mathematicians were
using multiple-precision computations; likewise, no provision for that,
although most hardware supported it.  A machine without overflow detection
was unusual; again, no provision in the language.
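The simultaneous quotient-and-remainder operation Rubin describes is easy to
illustrate.  As a sketch (in Python, my choice, not a language from this
thread), it is the pairing that some modern languages expose as a single
primitive, `divmod`:

```python
# One divide producing both results at once -- the hardware operation
# Rubin says ALGOL made no provision for.
q, r = divmod(23, 7)
assert 23 == q * 7 + r              # the defining identity

# Without such a primitive, a language forces two separate divisions
# for what the hardware computed in one instruction:
assert (q, r) == (23 // 7, 23 % 7)
```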

I claim I have made a strong case against ALGOL being even a good programming
language for mathematics.  The weaknesses of ALGOL and Fortran are to a
considerable extent responsible for these instructions disappearing from
the hardware.
-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)	{purdue,pur-ee}!l.cc!cik(UUCP)

gsh7w@astsun.astro.Virginia.EDU (Greg S. Hennessy) (07/26/90)

In article <2404@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
#I claim I have made a strong case against ALGOL being even a good programming
#language for mathematics. 

So because certain computers had hardware instructions that ALGOL could
not use, ALGOL is not a good programming language for mathematics?  I
don't understand the logic: for ANY language, there will be machines
that have instructions that cannot be used.

--
-Greg Hennessy, University of Virginia
 USPS Mail:     Astronomy Department, Charlottesville, VA 22903-2475 USA
 Internet:      gsh7w@virginia.edu  
 UUCP:		...!uunet!virginia!gsh7w

cik@l.cc.purdue.edu (Herman Rubin) (07/26/90)

In article <1990Jul26.020229.2205@murdoch.acc.Virginia.EDU>, gsh7w@astsun.astro.Virginia.EDU (Greg S. Hennessy) writes:
> In article <2404@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
> #I claim I have made a strong case against ALGOL being even a good programming
> #language for mathematics. 
> 
> So because certain computers had hardware instructions that ALGOL could
> not use, ALGOL is not a good programming language for mathematics?  I
> don't understand the logic: for ANY language, there will be machines
> that have instructions that cannot be used.

The early computer instruction sets were largely designed by mathematicians
to do mathematics.  These instructions were present on most of the computers,
for the good reason that mathematicians saw the need for them. 

For calculations whose accuracy is greater than that explicitly provided for
in the architecture, the job must be done in integer arithmetic, using
multi-word procedures.  I would be surprised if a mathematician even in the
18th century would not have said this.  There are methods now which could be
used to modify this, and their advantage is still being disputed, but even
these benefit from a basic word x word -> doubleword integer multiplication,
and a doubleword / word -> quotient, remainder division.
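As a sketch of what Rubin means (Python, with 32-bit "words" simulated via
masks; the word size and all names here are my assumptions, not anything from
the thread), here are the two primitives and a multiword multiply built on
top of them:

```python
# Simulated hardware primitives for multiprecision arithmetic.
WORD_BITS = 32
MASK = (1 << WORD_BITS) - 1

def mul_word(a, b):
    """word x word -> doubleword, returned as (high, low) words."""
    p = a * b
    return p >> WORD_BITS, p & MASK

def div_word(hi, lo, d):
    """doubleword / word -> (quotient, remainder).
    Assumes hi < d so the quotient fits in a single word."""
    n = (hi << WORD_BITS) | lo
    return n // d, n % d

def from_int(n):
    """Split a nonnegative integer into little-endian words."""
    words = []
    while True:
        words.append(n & MASK)
        n >>= WORD_BITS
        if n == 0:
            return words

def to_int(words):
    return sum(w << (WORD_BITS * i) for i, w in enumerate(words))

def multiword_mul(words, m):
    """Multiply a multiword integer by one word, propagating carries
    via the doubleword product -- the benefit Rubin points at."""
    out, carry = [], 0
    for w in words:
        hi, lo = mul_word(w, m)
        s = lo + carry
        out.append(s & MASK)
        carry = hi + (s >> WORD_BITS)
    if carry:
        out.append(carry)
    return out

x = 2**100 + 12345
assert to_int(multiword_mul(from_int(x), 99999)) == x * 99999
```

Without the doubleword product, the carry out of each word multiply must be
reconstructed by other means, which is exactly the cost being complained about.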

So here we have these constructs, built into the hardware in the first place
because mathematicians saw the need for them, which a language intended to
be THE programming language for all of numerical mathematics left out!  
-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)	{purdue,pur-ee}!l.cc!cik(UUCP)

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (07/29/90)

In article <1990Jul26.020229.2205@murdoch.acc.Virginia.EDU>, gsh7w@astsun.astro.Virginia.EDU (Greg S. Hennessy) writes:
: In article <2404@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
: #I claim I have made a strong case against ALGOL being even a good programming
: #language for mathematics. 
 
: Because certain computers had hardware instructions that ALGOL could
: not use, then ALGOL is not a good programming language for
: mathematics. I don't understand the logic, because for ANY language,
: there will be machines that have instructions that cannot be used.

If you want a *portable* programming language, that's so.
But there is a class of programming languages, ``MOHOLs'' (machine-oriented
high-order languages), which provide access to all the instructions of the
machines they are intended for.
On the B6700: ESPOL contains Algol as a sublanguage, but there is no
B6700 instruction which it cannot generate.
On the IBM/360:  PL/360 can generate any instruction; if there is an
instruction the compiler normally doesn't generate you can declare it
and then use it as if it were a subroutine or function.
On the DECsystem-10:  BLISS-10 and BLISS-36 can generate any instruction;
if there is an instruction the compiler normally doesn't generate (or,
for that matter, which the machine hasn't got, such as a UUO) you can
declare it and use it as if it were a subroutine or function.
(It helps that the DEC-10 instruction set is very regular.)

As it happens, Algol 60 in practice had a way of getting any instruction
whatsoever into your programs, namely procedures with "machine code" bodies.
I've seen it used.

Herman Rubin was talking in this instance about instructions which are useful
for multiprecision integer arithmetic.  Actually, that's the wrong way of
thinking about it.  The _right_ way to think about it is to realise that
integers being confined to nasty little boxes is an artefact of computers,
_not_ a feature of our programs.  If I declare
	var x: 12223495872309457 .. 2345230549872039458720983475;
then any compiler worth its salt ought to be able to figure out how many
of its nasty little boxes to use, and how to exploit whatever instructions
the architect may have provided for "multiprecision arithmetic".  The
language to slam is not Algol, which merely imitated Fortran in this
respect, but Pascal, which (a) had the example of COBOL before it,
and (b) had the notational resources (subranges) to let programmers give
the compiler enough information to do the job without imposing its nasty
little boxes on *us*, but (c) went ahead and rammed nasty little boxes
down our throats anyway.  (I prefer languages where integers are as big
as they need to be without me having to bother; Yay Lisp! Yay Scheme!, 
but Pascal's subrange notation *could* have been exploited to ensure that
integer arithmetic on ranged variables was completely portable, without
requiring dynamic allocation of numbers.)
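The "any compiler worth its salt" job O'Keefe describes can be sketched
directly (in Python; the 32-bit word size and two's-complement layout are
assumptions of this sketch, not of his post):

```python
# How many machine words ("nasty little boxes") a variable declared
# with subrange lo..hi needs, under an assumed 32-bit word.
WORD_BITS = 32

def words_needed(lo, hi):
    assert lo <= hi
    if lo >= 0:
        # An unsigned layout suffices: enough bits for hi.
        bits = max(hi.bit_length(), 1)
    else:
        # Two's complement: need -2**(b-1) <= lo and hi <= 2**(b-1) - 1.
        bits = max(hi.bit_length(), (-lo - 1).bit_length()) + 1
    return -(-bits // WORD_BITS)      # ceiling division

# O'Keefe's example range fits in three 32-bit words:
print(words_needed(12223495872309457, 2345230549872039458720983475))  # 3
```

Once the compiler knows the box count, choosing single-word, doubleword, or
multiword instruction sequences is a code-generation decision, not a language
design problem.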

-- 
Science is all about asking the right questions.  | ok@goanna.cs.rmit.oz.au
I'm afraid you just asked one of the wrong ones.  | (quote from Playfair)

roy@phri.nyu.edu (Roy Smith) (07/30/90)

zenith-steven@cs.yale.edu (Steven Ericsson Zenith) writes:
> The use of := distinguishes assignment from equality [...] and IMHO is a
> much nicer solution to the C hack == used to overcome the same problem.

	I agree that differentiating assignment from equality testing is
good, but why is using {:=, =} (no, that's not some kind of overgrown
smiley face!) any better or worse than using {=, ==}?  One might argue that
one is easier to type, or less likely to cause typos, or something like
that, but to call C's version a hack seems like you're overreacting a bit.
If you are going to invent a multi-ascii-character token for assignment,
why not "<-"?
--
Roy Smith, Public Health Research Institute
455 First Avenue, New York, NY 10016
roy@alanine.phri.nyu.edu -OR- {att,cmcl2,rutgers,hombre}!phri!roy
"Arcane?  Did you say arcane?  It wouldn't be Unix if it wasn't arcane!"

art@cs.bu.edu (Al Thompson) (07/30/90)

In article <25681@cs.yale.edu> zenith-steven@cs.yale.edu (Steven Ericsson Zenith) writes:
|In article <3478@goanna.cs.rmit.oz.au> ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) writes:
|>In article <58091@lanl.gov>, jlg@lanl.gov (Jim Giles) writes:
|>
|>> 2) Features which are different, but completely irrelevant (like using
|>>    ":=" instead of "=" for assignment).
|>
|>It isn't clear what := is supposed to be irrelevant to.
|
|a := b	means "assign the value of b to a".
|a = b	means "a is equal to b".
|
|The use of := distinguishes assignment from equality, thus prevents
|overloading a single operator and IMHO is a much nicer solution to the
|C hack == used to overcome the same problem.

Exactly.  Originally the assignment operator was a back arrow.  Then some
genius decided to do away with the back-arrow key and := became the
operator.  I always liked (and still do) the back arrow better because it
is suggestive of what you are doing: moving a copy of a value to the
assigned variable, e.g. a <- b.

zenith-steven@cs.yale.edu (Steven Ericsson Zenith) (07/31/90)

In article <1990Jul30.143530.24295@phri.nyu.edu>, roy@phri.nyu.edu (Roy
Smith) writes:
|>zenith-steven@cs.yale.edu (Steven Ericsson Zenith) writes:
|>> The use of := distinguishes assignment from equality [...] and IMHO is a
|>> much nicer solution to the C hack == used to overcome the same problem.
|>
|>	I agree that differentiating assignment from equality testing is
|>good, but why is using {:=, =} (no, that's not some kind of overgrown
|>smiley face!) any better or worse than using {=, ==}?  One might argue that
|>one is easier to type, or less likely to cause typos, or something like
|>that, but to call C's version a hack seems like you're overreacting a bit.
|>If you are going to invent a multi-ascii-character token for assignment,
|>why not "<-"?

                
Keyboard ease is a good pragmatic point.  The principal objection to the use
of = as a symbol meaning assignment is that the symbol most commonly means
equality outside of Computer Science.  Things are complicated in C since, in
that language, assignment is an expression.  The main argument for := as an
assignment operator is familiarity, since it is now widely used with this
meaning.  This symbol is used in Occam for that reason, and also in Ease,
although Ease extends its use to include allocation (the declaration and
possible initialisation of variables).

My objection to <- would be typographic.  Fixed-width fonts (widely used
for program listings) make the visual distance between the < and the dash
exaggerated.  Functional languages often use -> (see ML, Haskell), and
indeed Ease uses -> for type constraint.  But I don't really like the
visual distance between the dash and >.  The problem is born of a desire
to maintain compatibility with the ASCII character set.


--
Steven Ericsson Zenith              *            email: zenith@cs.yale.edu
Fax: (203) 466 2768                 |            voice: (203) 432 1278
"The tower should warn the people not to believe in it." - P.D.Ouspensky
Yale University Dept of Computer Science 51 Prospect St New Haven CT 06520 USA

dgil@pa.reuter.COM (Dave Gillett) (08/01/90)

In <25684@cs.yale.edu> zenith-steven@cs.yale.edu (Steven Ericsson Zenith) writes:

>In article <1990Jul30.143530.24295@phri.nyu.edu>, roy@phri.nyu.edu (Roy
>Smith) writes:
>|>zenith-steven@cs.yale.edu (Steven Ericsson Zenith) writes:
>|>> The use of := distinguishes assignment from equality [...] and IMHO is a
>|>> much nicer solution to the C hack == used to overcome the same problem.
>|>
>|>why not "<-"?

>My objection to <- would be typographic. Fixed width font versions (widely used
>for program listing) make the visual distance between the < and dash
>exaggerated.  Functional languages often use -> (see ML, Haskell),
>and indeed Ease uses -> for type constraint. But I don't really
>like the visual distance between dash and >. The problem is born from a 
>desire to maintain compatibility with the ASCII character set.

It's worth mentioning here that APL (a functional language) uses a symbol for
assignment that looks remarkably like "<-", except that it appears as a single
character.  (APL does not adhere to ASCII....)  This leaves "=" for its usual
role as a comparison predicate.  I have personally observed people learning the
language who found this completely straightforward; the same people had
previously had difficulty with ":=", "==", and (worst of all!) Microsoft
BASIC's "=", which means one or the other depending on context (ouch).

I don't know what J, an ASCII-compatible descendant of APL, uses...
                                   Dave

unhd (Anthony Lapadula) (08/02/90)

In article <25681@cs.yale.edu> zenith-steven@cs.yale.edu (Steven Ericsson Zenith) writes:
>
>The use of := distinguishes assignment from equality, thus prevents
>overloading a single operator and IMHO is a much nicer solution to the
>C hack == used to ovecome the same problem.

Actually, ``=='', being two characters long, dovetails nicely with
``!='', ``<='', and ``>=''.

Besides, why do you consider ``=='' to be a hack, but not ``:=''?

-- Anthony (uunet!unhd!al, al@unh.edu) Lapadula

// Wanted: catchy .sig.

smryan@garth.UUCP (sous-realiste) (08/11/90)

>_not_ a feature of our programs.  If I declare
>	var x: 12223495872309457 .. 2345230549872039458720983475;
>then any compiler worth its salt ought to be able to figure out how many
>of its nasty little boxes to use, and how to exploit whatever instructions
.	.	.

Do you have any idea what this does to the context-sensitive grammar of
the language?  Well, most people don't use CS grammars, so probably not,
but it leads to an incredible snarl of constants, constant expressions,
and constant values you would not believe.  Or you would, if you tried to
write a compiler based on something like the Ada LRM.
-- 
Her somber eyes consider all        ||/+\==/+\||                     Steven Ryan
that loom and tower, large and tall.||\=/++\=/||       ...!uunet!ingr!apd!smryan
Her everyday is always new          ||/=\++/=\||...!{apple|pyramid}!garth!smryan
and fills her eyes of frail blue.   ||\+/==\+/||   2400 Geng Road, Palo Alto, CA

carroll@udel.edu (Mark Carroll <MC>) (08/11/90)

In article <675@garth.UUCP> smryan@garth.UUCP (sous-realiste) writes:
]]_not_ a feature of our programs.  If I declare
]]	var x: 12223495872309457 .. 2345230549872039458720983475;
]]then any compiler worth its salt ought to be able to figure out how many
]]of its nasty little boxes to use, and how to exploit whatever instructions
].	.	.
]
]Do you have any idea what this does to the context-sensitive grammar of
]the language?  Well, most people don't use CS grammars, so probably not,
]but it leads to an incredible snarl of constants, constant expressions,
]and constant values you would not believe.  Or you would, if you tried to
]write a compiler based on something like the Ada LRM.

Why?

This seems to be nothing more than Pascal-like subranges combined with
Lisp-like bignums.  Bignums are more complicated to implement, but not
horribly so.  Subranges are completely trivial.  And I completely fail to see
what deep and profound effects they're going to have on the grammar.  Lisp
has dealt with bignums in its syntax for years and years; I see nothing
in the addition of bignums to subranges that complicates it.

	<MC>


--
|Mark Craig Carroll: <MC>  |"We the people want it straight for a change;
|Soon-to-be Grad Student at| cos we the people are getting tired of your games;
|University of Delaware    | If you insult us with cheap propaganda; 
|carroll@dewey.udel.edu    | We'll elect a precedent to a state of mind" -Fish

lgm@cbnewsc.att.com (lawrence.g.mayka) (08/12/90)

In article <27159@nigel.ee.udel.edu> carroll@udel.edu (Mark Carroll <MC>) writes:
>In article <675@garth.UUCP> smryan@garth.UUCP (sous-realiste) writes:
>]]_not_ a feature of our programs.  If I declare
>]]	var x: 12223495872309457 .. 2345230549872039458720983475;
>]]then any compiler worth its salt ought to be able to figure out how many
>]]of its nasty little boxes to use, and how to exploit whatever instructions
>]
>]Do you have any idea what this does to the context-sensitive grammar of
>]the language?  Well, most people don't use CS grammars, so probably not,
>]but it leads to an incredible snarl of constants, constant expressions,
>]and constant values you would not believe.  Or you would, if you tried to
>]write a compiler based on something like the Ada LRM.
>
>This seems to be nothing more than Pascal-like subranges combined with
>Lisp-like bignums.  Bignums are more complicated to implement, but not
>horribly so.  Subranges are completely trivial.  And I completely fail to see
>what deep and profound effects they're going to have on the grammar.  Lisp
>has dealt with bignums in its syntax for years and years; I see nothing
>in the addition of bignums to subranges that complicates it.

And Common Lisp indeed has subrange types.  If I set X to
2345230549872039458720983474, and ask

(TYPEP X '(INTEGER 12223495872309457 2345230549872039458720983475))

the result is T.  I can even declare X to be of this subrange if I wish:

(DECLARE (TYPE (INTEGER 12223495872309457 2345230549872039458720983475)
		X))


	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@iexist.att.com

Standard disclaimer.

gudeman@cs.arizona.edu (David Gudeman) (08/12/90)

In article  <27159@nigel.ee.udel.edu> carroll@udel.edu (Mark Carroll <MC>) writes:
=In article <675@garth.UUCP> smryan@garth.UUCP (sous-realiste) writes:
=]]_not_ a feature of our programs.  If I declare
=]]	var x: 12223495872309457 .. 2345230549872039458720983475;
=]]then any compiler worth its salt ought to be able to figure out how many
=]]of its nasty little boxes to use, and how to exploit whatever instructions
=].	.	.
=]
=]Do you have any idea what this does to the context sensitive grammar of
=]the language?
=
=This seems to be nothing more than Pascal like subranges combined with 
=Lisp-like bignums.

Lisp does not include the type system in the grammar.  All values in
Lisp have to carry their type around with them, and all operations
have to check the type at run time to make sure the operation is
legal.  This slows things down.  Pascal, C and similar languages fix
the types of variables and other expressions at compile time.

I don't agree with the original author that this is a problem, though.
Given different lengths of integers, operations can be defined to
produce an int of length equal to the length of the longest operand.
Given subranges of the sort described, the result of an operation
should be an integer of the same length as the longest subrange, with
no range restrictions.  The range restrictions are checked when
assigning to a variable that has a restricted range.
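Gudeman's rule (intermediate results carry no range restriction; the range is
checked only on assignment back into a ranged variable) can be sketched as
follows.  The class and all its names are illustrative assumptions of mine,
not constructs from any language discussed here:

```python
class Ranged:
    """A variable with a declared subrange, checked only on assignment."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self._v = lo                       # start at the lower bound

    def assign(self, value):
        # The range restriction is enforced here, and only here.
        if not (self.lo <= value <= self.hi):
            raise OverflowError(f"{value} outside {self.lo}..{self.hi}")
        self._v = value

    @property
    def value(self):
        return self._v

x = Ranged(0, 100)
y = Ranged(0, 100)
x.assign(90)
y.assign(20)
temp = x.value + y.value   # intermediate result: a plain int, no range check
x.assign(temp - 50)        # checked here; 60 lies within 0..100
```

Note that `temp` is allowed to hold 110, outside both operands' ranges; only
the final `assign` is checked, matching the semantics described above.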
-- 
					David Gudeman
Department of Computer Science
The University of Arizona        gudeman@cs.arizona.edu
Tucson, AZ 85721                 noao!arizona!gudeman

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (08/13/90)

I wrote
> >If I declare
> >	var x: 12223495872309457 .. 2345230549872039458720983475;
> >then any compiler worth its salt ought to be able to figure out how many
> >of its nasty little boxes [i.e. storage locations] to use

In article <675@garth.UUCP>, smryan@garth.UUCP (sous-realiste) writes:
> Do you have any idea what this does to the context sensitive grammar of
> the language?

Why should it do anything worse to the CS grammar of the language than
the *existing* notation
	var x: 122..234;
which is already legal Pascal?  What has the size of the constants to do
with the grammar?

-- 
The taxonomy of Pleistocene equids is in a state of confusion.