[comp.lang.misc] Algol68

drw@cullvax.UUCP (Dale Worley) (04/22/87)

Are there any Algol 68 compilers out there, particularly for Un*x
environments?

Please reply by e-mail, as I don't read this newsgroup.  I will send
or post a summary if people are interested.

Dale
-- 
Dale Worley		Cullinet Software
UUCP: ...!seismo!harvard!mit-eddie!cullvax!drw
ARPA: cullvax!drw@eddie.mit.edu
Un*x (a generic name for a class of OS's) != Unix (AT&T's brand of such)

ok@quintus.uucp (Richard A. O'Keefe) (11/19/88)

In article <591@tuck.nott-cs.UUCP> anw@maths.nott.ac.uk (Dr A. N. Walker) writes:

>"Very few implementations finished" -- well, not exactly.  The current (and
>	last [:-(]) Algol Bulletin lists 7 *current* implementations "which
>	you can actually obtain and use", running on at least 20 different

Would there be one for Suns (especially Sun-386i)?  I used to use Algol 68
on a DEC-10, and would dearly love to be able to use a nice clean language
again.

ian@ux.cs.man.ac.uk (Ian Cottam) (11/30/88)

In article <594@tuck.nott-cs.UUCP> anw@maths.nott.ac.uk (Dr A. N. Walker) writes:
>[Clay Phipps (>>) and Steven Ryan (>) are surprised that:
>	>>	  begin int I := 0, K := 1;
>	>>	  ref int Ptr := I;
>	>>	  Ptr := K;
>	>>	  Print (I);
>	>>	  end
>prints 0 rather than 1.]

 Charles Lindsey replies...
 Various people have been trying to explain this one. Personally, I would write
 it as follows.

	  begin
 	  loc int I := 0, K := 1;
	  loc ref int Ptr := I;
	  Ptr := K;
	  Print (I);
	  end
Me:
Actually, Charles would not write it like that, as it (really the original
version) is not syntactically correct.

-----------------------------------------------------------------
Ian Cottam, Room IT101, Department of Computer Science,
University of Manchester, Oxford Road, Manchester, M13 9PL, U.K.
Tel: (+44) 61-275 6157         FAX: (+44) 61-275-6280
ARPA: ian%ux.cs.man.ac.uk@nss.cs.ucl.ac.uk   
JANET: ian@uk.ac.man.cs.ux    UUCP: ..!mcvax!ukc!mur7!ian
-----------------------------------------------------------------

abray@acorn.co.uk (Andy Bray) (12/10/88)

If this all goes horribly wrong - please excuse me - this is my first posting.

Further to the recent discussion of Algol68, I would like to stick a small oar
in (more a paddle really :-)).

Algol68 compiler development is still going on here in the UK, a small company
(I believe called Algol Applications) are developing an Algol68 compiler for
80x86 based computers, partly under contract from the RSRE (Royal Signals
and Radar Establishment) and partly for commercial sale.

The developers are acquaintances of mine who are studiously trying to wean me
off C. They may yet convince me - but there are only so many hours...

I must say that the code generated on a humble PC is impressive. This I believe
is mainly due to the programmer putting a lot more information in the hands of
the compiler than languages such as C allow, so a greater degree of
optimisation is possible.

I don't really know that much about this compiler, but if there is any interest
I will endeavour to find out.

Andrew Bray.

mph@lion.inmos.co.uk (Mike Harrison) (02/07/90)

In article <PCG.90Feb5183407@rupert.cs.aber.ac.uk> pcg@rupert.cs.aber.ac.uk (Piercarlo Grandi) writes:

>Algol68 is difficult to *parse*, but it has been carefully
>designed to make efficient code possible, and indeed even easy.
>Array slicing, for example, is there for this reason, as are
>other things. Too bad that something like 'fast' is missing (but
>'ronly' is there).

I have just looked at my copy of the 'revised report' and can't find any 
reference to 'ronly', [I didn't think I had come across it], so what is it?
Some kind of pragmat (and in what implementation) ?

While talking of pragmats, in S3, ICL's Algol68 based systems language, there
was a 'pr' RARELY, which informed the compiler that the relevant code would be
obeyed only on 'rare' occasions.
This could be used for the code of error condition branches of 'if' clauses.
The effect was that the code was generated in a different code area, and could
be loaded into a different segment, thus improving code locality.

>On this let me disagree too. The hints should not get thrown out;
>they are valuable, *quality* information. The compiler writer
>should not believe that it can (reliably!) second guess the
>author as to the semantic/pragmatic properties of the program.
>
>The author should describe them as clearly as possible, and the
>compiler should treasure such information. Of course the author
>should be spared (not always! if it is *really* important,
>machine dependent considerations should enter into the design of
>a program) the work of machine dependent optimization (which is
>actually just competent code generation), because there the
>compiler is in charge.
>
Here I agree: a sensible set of pragmas can do much to improve the
quality of generated code, by hinting to the compiler that some complex
check is worth performing.
For example, in Ada there is an optimisation for passive tasks, which
eliminates much of the full generality of tasks [Nassi & Habermann, I
think]. Searching for such cases is hard and time consuming, but if a
user could suggest such an optimisation in respect of a particular task
eg. by:

  pragma MONITOR_TASK(mytask);

it is fairly easy to check the validity.

Mike,


Michael P. Harrison - Software Group - Inmos Ltd. UK.
-----------------------------------------------------------
UK : mph@inmos.co.uk             with STANDARD_DISCLAIMERS;
US : mph@inmos.com               use  STANDARD_DISCLAIMERS;

jejones@mcrware.UUCP (James Jones) (02/08/90)

In article <3968@ganymede.inmos.co.uk> mph@inmos.co.uk (Mike Harrison) writes:
>I have just looked at my copy of the 'revised report' and can't find any 
>reference to 'ronly', [I didn't think I had come across it], so what is it?
>Some kind of pragmat (and in what implementation) ?

The LHS of an assignation has to have mode REF AMODE.  An object with some
other mode can't be assigned to--and since, unlike C, Algol 68 doesn't confuse
pointers and arrays, one can have a formal parameter with mode [] AMODE and
it will enjoy the same protection.  Sounds like the moral equivalent of
"readonly" to me.

(Apologies to fellow Algol 68 fans if I've misused the vocabulary--if I have,
you have permission to slap me with a wet REFSETY. :-)

	James Jones

pcg@aber-cs.UUCP (Piercarlo Grandi) (02/10/90)

In article <3968@ganymede.inmos.co.uk> mph@inmos.co.uk (Mike Harrison) writes:
    
    I have just looked at my copy of the 'revised report' and can't find any 
    reference to 'ronly', [I didn't think I had come across it], so what is it?
    Some kind of pragmat (and in what implementation) ?

Well, my love of Algol 68 is such that I exaggerated a bit
here. What actually happens is that in Algol 68 you have to
explicitly declare entities to be variables. In a sense (but
not completely) REF is a 'notronly' keyword.
    
    While talking of pragmats, in S3, ICL's Algol68 based
    systems language, there was a 'pr' RARELY, which informed
    the compiler that the relevant code would be obeyed only on
    'rare' occasions.    

	[ ... ]

    For example, in Ada there is an optimisation for passive tasks, which
    eliminates much of the full generality of tasks [Nassi & Habermann, I
    think].

Thank you for two good examples. Quite interesting ones. In
particular the one about RARELY. Optimizing programs for
locality seems to be a lost art. Bleeech.
-- 
Piercarlo "Peter" Grandi           | ARPA: pcg%cs.aber.ac.uk@nsfnet-relay.ac.uk
Dept of CS, UCW Aberystwyth        | UUCP: ...!mcvax!ukc!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: pcg@cs.aber.ac.uk

mo@messy.bellcore.com (Michael O'Dell) (03/11/91)

Learning Algol68 as a student did wonders for understanding other
languages, in particular, declarations in C.

And we have suffered much from not having better insight into some
of the systems done in Europe.

	-Mike

yfcw14@castle.ed.ac.uk (K P Donnelly) (03/14/91)

mo@messy.bellcore.com (Michael O'Dell) writes:

>Learning Algol68 as a student did wonders for understanding other
>languages, in particular, declarations in C.

Agreed!!  Why are we not using Algol68 now when it is so much better
than other languages?  Is it, as I suspect, that people did not realise
how good it was?  Is it because it was bypassed by Pascal, which was
designed as a teaching language but ended up being used for lots of
things it was never intended for?  Is Algol68 being used anywhere now?

   Kevin Donnelly

dww@math.fu-berlin.de (Debora Weber-Wulff) (03/19/91)

Why is Algol 68 not being used? Well, it seems to have every feature
ever needed, and is thus rather difficult to read and learn. Tony
Hoare wrote about "The Emperor's Old Clothes" a while back, in which
he politely explains why less is better. 

Sure, it has some nice stuff (like "own" variables), but look at
the definition - what is it, 5 cm thick? 

All the same, I'd like to hear about "real" software (as opposed
to programming exercises) that have been programmed in it.

-- Debbie

-- 
Debora Weber-Wulff
snail: FU Berlin, ZI Fachdidaktiken, Habelschwerdter Allee 45, W-1000 Berlin 33
email: weberwu@inf.fu-berlin.de, dww@math.fu-berlin.de

dhoyt@vx.acs.umn.edu (DAVID HOYT) (03/19/91)

In article <YV6O4AC@math.fu-berlin.de>, dww@math.fu-berlin.de (Debora Weber-Wulff) writes...
>Why is Algol 68 not being used? Well, it seems to have every feature
>ever needed, and is thus rather difficult to read and learn. Tony
>Hoare wrote about "The Emperor's Old Clothes" a while back, in which
>he politely explains why less is better. 

  I suspect it has more to do with IBM, which pretty actively (in the USA
at least) discouraged people from using it.  They invented PL/I as the
language to use (in place of Algol).  So people started using PL/I.  When
everyone noticed that PL/I only ran on expensive IBM hardware, and that
Fortran was the only language that ran on everything; they bagged PL/I and used
Fortran.  Algol survived for a while in Europe (Siemens, ICL and others
outsold IBM for a while there), but after a while Algol programmers noticed
that you couldn't write programs for nice IBM, DEC, etc. mainframes and
mini's, so they too started writing programs in Fortran.  In the USA Cobol
was an option for some, and more recently C is the language to end all
languages.  Ada has replaced PL/I and Algol as 'the language with a
feature for everybody.'  It even runs on more than one vendor's machine as
the trillion dollar DoD specifies it in most contracts these days.

  david | dhoyt@vx.acs.umn.edu | dhoyt@umnacvx.bitnet

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (03/19/91)

In article <YV6O4AC@math.fu-berlin.de>, dww@math.fu-berlin.de (Debora Weber-Wulff) writes:
> Why is Algol 68 not being used? Well, it seems to have every feature
> ever needed, and is thus rather difficult to read and learn.

The "minority report" slammed Algol 68 because it LACKED features.
It hasn't got [Ada=>generics, C++=>templates] or inheritance or
modules/packages/classes, it hasn't got bignums or rationals, ...

> Sure, it has some nice stuff (like "own" variables), but look at
> the definition - what is it, 5 cm thick? 

Gee, what a *well*-informed critique.  Algol 68 has no "own" variables.
My copy of the definition is more like 0.5cm thick.  It would take 5 or
6 copies of the Algol 68 report to make up the weight of the Ada LRM,
and the "Interpretations" of the Ada LRM come to as much again.
The Algol 68 report is roughly 1/4 of the size of the "Annotated C++
Reference Manual", which is by comparison hopelessly vague.  Heck,
the Algol 68 report is physically less than half the size of the
Fortran 77 standard.

Basically, the problem with the Algol 68 definition is that the
committee tried to give a *rigorous* definition of what is basically
a rather simple language (if you exclude formatted transput).
People *think* Pascal is simple because the Pascal report didn't
tell the whole truth.  (Of the INRIA formal specification of Ada
it could well be said "Look upon my works, ye mighty, and despair".)

> All the same, I'd like to hear about "real" software (as opposed
> to programming exercises) that have been programmed in it.

Why do you think the Royal Radar Establishment went to the trouble
of writing their own Algol 68 compiler?
-- 
Seen from an MVS perspective, UNIX and MS-DOS are hard to tell apart.

efeustel@prime.com (Ed Feustel) (03/20/91)

For one thing, Algol68 was used to write the CAP operating system at the University
of Cambridge.  As I understand it, only a very few lines of assembler were required
to do the whole thing and it was extremely maintainable.

Although the language specification in the form of the Van W... grammar was
about 3/4 inch thick, the language was as regular as any you might wish to
define.  While it took quite a while to build quality compilers, there were
quite a few on a variety of machines as attested to by the Algol68 newsletter.

Marketing more than goodness propelled C to the forefront.  It is hard to
turn down something like C if it is for free and C was free in the academic
community circa 1975.  Algol68 and Ada (in the 80s) were not free.  They
were not generally available either.  Hence Algol68 and Ada are not mainstream
teaching languages today; nor, for that matter, is Smalltalk.

rst@cs.hull.ac.uk (Rob Turner) (03/20/91)

I always remember being told about various advanced programming
features at university, and, invariably, Algol 68 would have each of
those features.

I think the language was way ahead of its time. People were not ready
for it.

What you don't understand, you don't use.

Rob

nmm@cl.cam.ac.uk (Nick Maclaren) (03/20/91)

In article <3628@ux.acs.umn.edu> dhoyt@vx.acs.umn.edu writes:
>In article <YV6O4AC@math.fu-berlin.de>, dww@math.fu-berlin.de (Debora Weber-Wulff) writes...
>>Why is Algol 68 not being used? Well, it seems to have every feature
>>ever needed, and is thus rather difficult to read and learn. Tony
>>Hoare wrote about "The Emperor's Old Clothes" a while back, in which
>>he politely explains why less is better. 
>
>  I suspect it has more to do with IBM, which pretty actively (in the USA
>at least) discouraged people from using it.  They invented PL/I as the
>language to use (in place of Algol).  So people started using PL/I.  ....

I am afraid that I must disagree with both sides here!

    A) Algol 68 is not actually a huge language, and compilers for it are
usually a fraction of the size of those for most 'competitive' languages.
The Algol 68 Revised Report is difficult to read because of the technically
advanced/complex nature of the way that it is described, not because there
is so much of it.

    B) I think that people have a tendency to overestimate the influence
of IBM over which language people used.  I agree that its influence was
much greater then than now, but not THAT great.

I would like to use Algol 68, because it is the most productive language
(3 GL, that is) that I have ever used or even read about.  Unfortunately,
it did have some very serious failings, and they were what killed it.
In my view, the main ones are:

    1) It was ahead of its time, and needed compiler technology that was
not then standard practice.  This applies particularly to the mode coercion
code and the library mechanisms; the latter was not designed in detail until
about 1978.

    2) It was written in an extremely precise but mathematical language,
which intimidated most people (both implementors and programmers).  Some
good reference manuals have been written since, but the harm was done
initially.

    3) A few areas turned out to be mistakes in the light of experience,
such as the plethora of stropping rules and some aspects of the coercions.
These were used by its enemies as weapons against it, ignoring the fact
that it was breaking new ground.

    4) The I/O was little short of a disaster, for more reasons than I
am prepared to go into here.  On the other hand, the I/O of most languages
is pretty dreadful, so it should not be damned too much.  A proposal that
would have sorted out the worst of the problems came too late to save it.

Overall, it was the fact that people REGARDED it as unbelievably complex
that killed it, because the few compilers that were produced were rarely
brought up to production quality.  They could have been, and the world
of programming languages would now be totally different ....

Nick Maclaren
University of Cambridge Computer Laboratory
nmm@cl.cam.ac.uk

avg@hq.demos.su (Vadim Antonov) (03/21/91)

In <YV6O4AC@math.fu-berlin.de> dww@math.fu-berlin.de (Debora Weber-Wulff) writes:

>Why is Algol 68 not being used? Well, it seems to have every feature
>ever needed, and is thus rather difficult to read and learn.

Difficult??? You can say so after PL/I, Ada and C++?!  The main advantage
of Algol-68 is simplicity and a small number of basic orthogonal elements.
It's even simpler than Pascal (in a number of senses).

>Sure, it has some nice stuff (like "own" variables), but look at
>the definition - what is it, 5 cm thick? 

"own" variables are a feature of Algol-60. Variables in Algol-68 are
syntactic sugar, not a basic element.

The 5-cm-thick document is the FORMAL definition of the language, and it should
be read only by compiler designers, not by end users. There are also some
informal books on Algol-68, some of them really good (cannot remember the
authors now, sigh). The latter books are not thick.

After all Algol-68 is THE ONLY well-described language.

>All the same, I'd like to hear about "real" software (as opposed
>to programming exercises) that have been programmed in it.

Some sexy military programs I've heard about. Plus I have known a
woman who wrote a whole medical information system for Moscow University
in Algol 68. The UK Royal Radar Establishment used it as well.

I think the only reason why Algol-68 was forgotten is the well-known
American ignorance of European computer civilization. Pascal seems to
be the first worldwide accepted European language. Algol-68 was too
early. Still I think it deserves to be remembered.

Vadim Antonov
DEMOS, Moscow, USSR

anw@maths.nott.ac.uk (Dr A. N. Walker) (03/21/91)

In article <YV6O4AC@math.fu-berlin.de> dww@math.fu-berlin.de
(Debora Weber-Wulff) writes:
>Why is Algol 68 not being used? Well, it seems to have every feature
>ever needed,

	Yes, by orthogonal extension of a small number of concepts.

>	      and is thus rather difficult to read and learn.

	Significantly easier than Pascal or C (speaking from many
years of teaching students these languages), and I expect easier than
Modula-N, Ada, name-your-favourite-more-modern-language-but-I-haven't-
yet-tried-to-teach-it-to-our-first-year-students-so-can't-prove-it.

>							      Tony
>Hoare

	... who is a distinguished computer scientist with a bee in
his bonnet about Algol ...

>      wrote about "The Emperor`s New Clothes" a while back, in which
>he politely explains why less is better. 

	Just so.  The syntax charts for Algol fit neatly onto one side
of A4, if you except formats;  how much smaller do you want it?

>Sure, it has some nice stuff (like "own" variables),

	*Not* like own variables.

>						      but look at
>the definition - what is it, 5 cm thick? 

	No, about 2.5, of which about half is (quite large-scale)
programming examples, and about half the rest is explanatory comments.
How big do you expect a *very* *careful* definition of a language to
be?  What did you want the authors to leave out?  You are not expected
to learn Algol from the Report (though many of us did), any more than
you are expected to learn C from the Standard.  You read K&R or an
elementary primer for C;  modesty forbids a recommendation for Algol.

>All the same, I'd like to hear about "real" software (as opposed
>to programming exercises) that have been programmed in it.

	Well, virtually every program written in this Department from
1972 for about a decade was written in it.  There were many thousands
of programming exercises, of course, but also timetable programs,
exam software, text-processing software, compilers, neural-net simulators,
graph packages, numerical analysis packages, music programs, you name
it, we did it.  A series of conference proceedings of the period show
many other major applications.  How many do you want?

	We still maintain and run some of these packages, though we
probably won't after this year.  I still use Algol for new programs
from time to time, which I hope answers the other question as to
whether there are still any users.

-- 
Andy Walker, Maths Dept., Nott'm Univ., UK.
anw@maths.nott.ac.uk

keithc@cs.qmw.ac.uk (Keith Clarke;W208a) (03/21/91)

In <1991Mar20.154216.26337@cl.cam.ac.uk> nmm@cl.cam.ac.uk (Nick Maclaren) 
writes:
>Overall, it was the fact that people REGARDED it as unbelievably complex
>that killed it, because the few compilers that were produced were rarely
>brought up to production quality.  They could have been, and the world
>of programming languages would now be totally different ....

Totally? I'm surprised at people at Edinburgh & Cambridge being so, well, 
downbeat about this. My feeling is that ML is everything one could want in a 
child of Algol68 - the things that aren't there are the horrible stropping 
rules, the coercions, the i/o.. (Nick's list, I think).
I can add one more Algol68 misfeature that ML fixes, & that's the 
stack-implementation of closures. I once got a 5 cm hex dump from an ICL 2900 
for having the temerity to try out whatever the Algol68 is/was for

	    fun twice f = fn x=>f(f(x)); (twice sin 0.5)

I can't think of anything Algol68 has that I miss in ML. Oh, perhaps 
multi-dimensional arrays & slicing, if I still did that kind of programming.
--
Keith Clarke
QMW, University of London.

gudeman@cs.arizona.edu (David Gudeman) (03/22/91)

In article  <3151.9103201408@olympus.cs.hull.ac.uk> Rob Turner writes:
]I always remember being told about various advanced programming
]features at university, and, invariably, Algol 68 would have each of
]those features.

That is likely because the advanced features that were discussed were
those that happen to be in Algol68.  There are lots of features known
today that weren't even thought of in 68.

]I think the language was way ahead of its time. People were not ready
]for it.

I think it was a good language that should have gotten more attention
than it did, but I don't think it was because it was ahead of its
time.  There are a lot of personal, political, business, and
economical reasons why some languages become popular and some don't.
How good a language is, is one of the least important considerations.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman

rcd@ico.isc.com (Dick Dunn) (03/22/91)

yfcw14@castle.ed.ac.uk (K P Donnelly) writes:
> mo@messy.bellcore.com (Michael O'Dell) writes:
> >Learning Algol68 as a student did wonders for understanding other
> >languages, in particular, declarations in C.

I felt learning about Algol68 (can't say I learned it; we didn't have a
working compiler) did a lot for conceptual understanding.  It was one of
the cleanest languages in terms of [amount of power] relative to [number of
fundamental concepts] I've ever seen.  SO...

> Agreed!!  Why are we not using Algol68 now when it is so much better
> than other languages.  Is it, as I suspect, that people did not realise
> how good it was?...

That might be true, in a very real sense and for a very good reason:  It
was too hard to figure it out!  There was no K&R or Jensen&Wirth for it.
There was the _Informal_Introduction_..., which was a wonderful book (in
spite of the table of contents, which was a clumsy nuisance), but it didn't
really answer serious questions or lay down the law.  Then there was the
_Report_, which was nearly inscrutable, and the _Revised_Report_, which was
worse.  It took far too much initial study to be able to understand what
the report was saying, and it took far too much general familiarity to use
it to answer a question.  (Particularly in the revised report, chasing down
all the myriad productions sent you all over the language...and the worst
of it was that the simple concept "this is not allowed" was expressed as
giving you something to search for, which didn't exist!  If it came down
to MURF MORFETY FOOBLE REFFOOBLETY, and you couldn't find a definition for
that, it wasn't legal.)

>...Is it because it was bypassed by Pascal, which was
> designed as a teaching language but ended up being used for lots of
> things it was never intended for?...

Yes, another dose of reality:  Pascal was simple and it was implemented.
The implementation was widely available.  Therefore people used it.  It had
the necessary initial kick to start the positive-feedback loop.
-- 
Dick Dunn     rcd@ico.isc.com -or- ico!rcd       Boulder, CO   (303)449-2870
   ...Relax...don't worry...have a homebrew.

mcdonald@aries.scs.uiuc.edu (Doug McDonald) (03/22/91)

In article <1991Mar22.013748.4944@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:
>
>I felt learning about Algol68 (can't say I learned it; we didn't have a
>working compiler) did a lot for conceptual understanding.  It was one of
>the cleanest languages in terms of [amount of power] relative to [number of
>fundamental concepts] I've ever seen.  SO...
>
>> Agreed!!  Why are we not using Algol68 now when it is so much better
>> than other languages.  Is it, as I suspect, that people did not realise
>> how good it was?...
>

For a VERY simple reason: Real Programmers (TM) are not masochists
and seldom are Real Typists (TM) and simply HATE to type for the
most common construct in any program the loathsome, redundant,
hard to type, piece of shit:



   :=




Doug McDonald

jones@pyrite.cs.uiowa.edu (Douglas W. Jones,201H MLH,3193350740,3193382879) (03/23/91)

From article <1991Mar22.153944.1096@ux1.cso.uiuc.edu>,
by mcdonald@aries.scs.uiuc.edu (Doug McDonald):
> 
> For a VERY simple reason: Real Programmers (TM) are not masochists
> and seldom are Real Typists (TM) and simply HATE to type for the
> most common construct in any program the loathsome, redundant,
> hard to type, piece of shit:
> 
>    :=

This kind of flame is unworthy of response, but for one serious point.
Aversion to typing := doesn't explain why Pascal was so much more
successful than Algol 68, both of which have :=.

Besides, Real Programmers (TM) also hate == for the very same reason,
so you can't win.
					Doug Jones
					jones@cs.uiowa.edu

peter@ficc.ferranti.com (Peter da Silva) (03/23/91)

In article <1991Mar22.153944.1096@ux1.cso.uiuc.edu> mcdonald@aries.scs.uiuc.edu (Doug McDonald) writes:
> For a VERY simple reason: Real Programmers (TM) are not masochists
> and seldom are Real Typists (TM) and simply HATE to type for the
> most common construct in any program the loathsome, redundant,
> hard to type, piece of shit:

>    :=

Funny, that's one of the two syntactical changes I'd love to make to C.

The other is to change pointer indirection to a postfix operator.

These two features are responsible for the vast majority of coding errors
in C. Wouldn't you prefer:

	void signal()^();
To:
	void (*signal())();

?????
-- 
Peter da Silva.  `-_-'  peter@ferranti.com
+1 713 274 5180.  'U`  "Have you hugged your wolf today?"

john@mingus.mitre.org (John D. Burger) (03/23/91)

mcdonald@aries.scs.uiuc.edu (Doug McDonald) writes:

  Real Programmers (TM) are not masochists and seldom are Real Typists
  (TM) and simply HATE to type for the most common construct in any
  program the loathsome, redundant, hard to type, piece of shit:

    :=

Gee, you'd really hate Common Lisp:

  (SETF X 4)

instead of

  x=4;

or even

  x:=4;

BTW, this is certainly not "the most common construct in any program"
that I write.  I rarely do explicit assignment at all.
--
John Burger                                               john@mitre.org

"You ever think about .signature files? I mean, do we really need them?"
  - alt.andy.rooney

hasan@ut-emx.uucp (David A. Hasan) (03/23/91)

In article <1991Mar22.013748.4944@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:

[ in response to a discussion of the demise of Algol68... ]

>That might be true, in a very real sense and for a very good reason:  It
>was too hard to figure it out!  There was no K&R or Jensen&Wirth for it.
>There was the _Informal_Introduction_..., which was a wonderful book (in
>spite of the table of contents, which was a clumsy nuisance), but it didn't
>really answer serious questions or lay down the law.  Then there was the
>_Report_, which was nearly inscrutable, and the _Revised_Report_, which was
>worse.  It took far too much initial study to be able to understand what
>the report was saying, and it took far too much general familiarity to use
>it to answer a question.

hmmm...sounds *just* like my frustrations in getting up to
speed in Ada.  The Ada Language Reference Manual is not
exactly an "easy" reference when you're trying to figure out
why something doesn't compile.  And Ada is *full* of
gotchas... at least Algol68 was orthogonal.

-- 
 |   David A. Hasan
 |   hasan@emx.utexas.edu 

lnds@obed.uucp (Mark Israel) (03/24/91)

In article <46036@ut-emx.uucp>, hasan@ut-emx.uucp (David A. Hasan) writes:

> And Ada is *full* of gotchas... at least Algol68 was orthogonal.

   Algol68 is full of gotchas.  Try explaining to a new user why
you can't say "a[i] +:= 1", where a is a flex array.  Try explaining
why operators scope differently from procedures.  Try explaining
why "exit" is defined so that in the context where you'd intuitively
most often want to use it (IF something THEN EXIT FI), it does
nothing at all.

   APL is full of gotchas.  Since "," catenates and "+/A" adds
the elements of A, might you not reasonably expect ",/A" to
catenate the elements of A?  No, sir.

   My observation is, "Languages that are supposed to be highly
consistent... aren't."

				Mark Israel
I have heard the Wobble!	userisra@mts.ucs.ualberta.ca

bs@alice.att.com (Bjarne Stroustrup) (03/24/91)

Algol68 was/is one of the most elegant and useful languages I ever worked
with. It was the first language where in my personal experience non-trivial
programs occasionally ran correctly the first time they compiled after a major
change - this is an effect that I (successfully) tried to achieve for C++.

So why didn't Algol68 succeed on a LARGE scale?  Naturally, there are many
reasons, but here are a few I think were important:

	- Algol68 was a decade in the making before implementations were
	beginning to appear - and for many (most?) machines implementations
	have yet to become widely available. By the time they appeared the
	initial excitement about the Algol68 concepts had died and early
	fans had become disillusioned and moved on to other languages.

	- Algol68 developed a terminology and a style of language definition
	that was clean and superior to any alternatives I have seen. This
	made it unapproachable to the casual user. The very elegance of the
	terminology and style of presentation was a major barrier. I was
	lucky enough to get my hands on a concise (about 150 page) and very
	readable description - and a good implementation - otherwise I would
	have had to dismiss Algol68 as a practical tool and have left it as
	a ``probably interesting intellectual challenge'' that I would never
	have had time to look into.

	- By the time Algol68 was complete and beginning to become available
	the issues of encapsulation and modularity had come to dominate
	many people's concerns. Algol68 did not emphasize that - and the
	textbooks I saw and (with notable exceptions) the Algol68 programmers
	I talked to completely ignored that and concentrated on explaining
	how Algol68 was a better Algol or a better Fortran. Thus, by the
	time one could use Algol68, it was seen as an old-fashioned language -
	the final close-to-perfect development of a dead branch of evolution.

	- Algol68 was seen as European and Academic. Either could in itself -
	at the time - kill a language in the parts of US industry that set the
	tone for what tools programmers actually use.

	- Algol68 had a main-frame bias at a time when first mini-computers
	and then micros emerged as the major growth areas for programmers.

I think Algol68 was and is a most attractive language. Had there been a K&R
for Algol68 and a cheap, easily portable implementation of Algol68 available
for a PDP11 in 1975 or so, it would have swept the world, covering the areas
where Pascal and C thrived. However, there wasn't such a book and there wasn't
such an implementation - and it is doubtful if there could have been. The
culture that produced Algol68 (in my impression) simply didn't think that way.

chl@cs.man.ac.uk (Charles Lindsey) (03/25/91)

In <1991Mar22.013748.4944@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:

>it to answer a question.  (Particularly in the revised report, chasing down
>all the myriad productions sent you all over the language...and the worst
>of it was that the simple concept "this is not allowed" was expressed as
>giving you something to search for, which didn't exist!  If it came down
>to MURF MORFETY FOOBLE REFFOOBLETY, and you couldn't find a definition for
>that, it wasn't legal.)

Not at all. We carefully arranged for the production rules to be cross-
referenced (both forwards and backwards). So if you wanted to know where or
whether some rule was defined, the cross reference pointed you to a finite
number of places where it could be, and even warned you if there was a blind
alley.

I know of no subsequent language definition which has cross referenced its
grammar so thoroughly.

rreid@ecst.csuchico.edu (Ralph Reid III) (03/25/91)

In article <46036@ut-emx.uucp> hasan@ut-emx.uucp (David A. Hasan) writes:
(Documentation problems deleted)
>hmmm...sounds *just* like my frustrations in getting up to
>speed in Ada.  The Ada Language Reference Manual is not
>exactly an "easy" reference when you're trying to figure out
>why something doesn't compile.  And Ada is *full* of
>gotchas... at least Algol68 was orthogonal.

Maybe one of the following books can help:
Software Components with ADA
by Grady Booch
Benjamin/Cummings Publishing
(contains sample source code)


Ada As A Second Language
by Norm Cohen
McGraw Hill publishing

-- 
Ralph.  SAAC member.
ARS: N6BNO
Compuserve: 72250.3521@compuserve.com
email: rreid@cscihp.ecst.csuchico.edu

jejones@mcrware.UUCP (James Jones) (03/25/91)

In article <1991Mar22.013748.4944@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:
>That might be true, in a very real sense and for a very good reason:  It
>was too hard to figure it out!  There was no K&R or Jensen&Wirth for it.
>There was the _Informal_Introduction_..., which was a wonderful book (in
>spite of the table of contents, which was a clumsy nuisance), but it didn't
>really answer serious questions or lay down the law.

Well...I wish I remembered who it was that pointed out in SIGPLAN Notices
some time back that the easy-to-read standards (Pascal given as case in
point, but nowadays the same can be said of C) were also full of holes,
and that attempts to make them rigorous also made them much longer and
less legible.   (Said person also made the same point being made in the
original posting, that a straightforward standard, even if leaky, encourages
implementors.)

	James Jones

hasan@ut-emx.uucp (David A. Hasan) (03/26/91)

[ in response to an observation I made about the similarities
  between learning Ada and a previous poster's
  observations about Algol 68 ]

In article <1991Mar25.132527.5570@ecst.csuchico.edu> rreid@ecst.csuchico.edu (Ralph Reid III) writes:

>Maybe one of the following books can help:

>Software Components with ADA
>by Grady Booch
>Benjamin/Cummings Publishing
>(contains sample source code)

>Ada As A Second Language
>by Norm Cohen
>McGraw Hill publishing

Agreed.  In fact that is precisely how I get along.  However,
Booch's book is organized by components, and it is very
difficult to get specific Ada information out of it without
working through the whole thing.  That is *not* to diminish
its value (IMHO it's an excellent text), but it makes using
it as a _reference_ hard.  Cohen's book is voluminous and in
fact the one I find most useful most of the time, especially
since it is easy to use as a topical reference.

Nevertheless, there are certain situations where the two
compilers I use (DEC and Meridian-for-the-Mac) behave
differently, and I then use the LRM to figure out where the
*real* truth lies.  (If I can find it!)


-- 
 |   David A. Hasan
 |   hasan@emx.utexas.edu 

cs450a03@uc780.umd.edu (03/26/91)

Mark Israel writes:
>   APL is full of gotchas.  Since "," catenates and "+/A" adds
>the elements of A, might you not reasonably expect ",/A" to
>catenate the elements of A?  No, sir.

But aren't all the elements of A already catenated?  8-)

(Incidentally, you're using a fairly old version of APL.  Or else a
sub-set.)

Raul

firth@sei.cmu.edu (Robert Firth) (03/26/91)

In article <1991Mar22.153944.1096@ux1.cso.uiuc.edu> mcdonald@aries.scs.uiuc.edu (Doug McDonald) writes:

>For a VERY simple reason: Real Programmers (TM) are not masochists
>and seldom are Real Typists (TM) and simply HATE to type for the
>most common construct in any program the loathsome, redundant,...

>   :=

Well said!  However, that is only the second most common loathsome,
redundant &c.  The most common, which should truly be hated since
any decent language design can get rid of it, is the vile, obnoxious

	;

chl@cs.man.ac.uk (Charles Lindsey) (03/27/91)

In <1991Mar24.121900.7141@cs.UAlberta.CA> lnds@obed.uucp (Mark Israel) writes:

>   Algol68 is full of gotchas.  Try explaining to a new user why
>you can't say "a[i] +:= 1", where a is a flex array.  Try explaining
>why operators scope differently from procedures.  Try explaining
>why "exit" is defined so that in the context where you'd intuitively
>most often want to use it (IF something THEN EXIT FI), it does
>nothing at all.

The places where ALGOL 68 got in a mess were the few places where it gave up
being orthogonal. The whole FLEX business, for example, sticks out like one
big sore thumb. It was only put in, as Van Wijngaarden explained to me, in
order to provide STRINGs (which are indeed nice, but could have been a
primitive).

Likewise, EXIT is a leftover from pre-structured-programming days. I do not
think I have ever used it in anger.

bobd@zaphod.UUCP (Bob Dalgleish) (03/27/91)

In article <9168@castle.ed.ac.uk> yfcw14@castle.ed.ac.uk (K P Donnelly) writes:
>mo@messy.bellcore.com (Michael O'Dell) writes:

>>Learning Algol68 as a student did wonders for understanding other
>>languages, in particular, declarations in C.
Agreed!  The formalisms introduced with A68 are fundamental to
understanding any procedural (von Neumann) language.

>Agreed!!  Why are we not using Algol68 now when it is so much better
>than other languages.  
It was also larger than other languages.  The only full compilers that I
knew of were larger than the IBM PL/I level G compilers, even though
they were a lot faster.
>                       Is it, as I suspect, that people did not realise
>how good it was?  
Partly - the references for the language were so abstruse that it was
very hard to learn the basics.
>                  Is it because it was bypassed by Pascal, which was
>designed as a teaching language but ended up being used for lots of
>things it was never intended for?  
Pascal was designed by Wirth after he got disgusted with the rapidly
growing size and complexity of the Algol68 effort.  I gave up in disgust
when the I/O subsystem became overspecified and bloated.  The other
thing that gave me heebie-jeebies was the use of _skip_ and _nil_ in
initialization positions -- the values actually assigned could be random
garbage that would not cause UNINITIALIZED VARIABLE errors, but would
make things not work.  Also the horror stories that my friends who were
implementing a full compiler told me made my hair stand on end.  In
fact, the stories are very similar to the ones I heard about Ada.  The
overloaded operators, partial parametrization, and array slices explored
the very horizons of computer science capability.
>                                   Is Algol68 being used anywhere now?
I will be snide and say that the spirit of Algol68 now lives on
in the Ada programming language.  Many of the features are present in
Ada, some of them emasculated (procedure variables are gone), some of
them horribly present (overloaded operators, tasking); some of the nice
features are there as well.
>
>   Kevin Donnelly


-- 
-- * * * Remember: I before E except after DALGL * * *--
Bob Dalgleish		bobd@zaphod.UUCP

anw@maths.nott.ac.uk (Dr A. N. Walker) (03/28/91)

In article <20109@alice.att.com> bs@alice.att.com (Bjarne Stroustrup) writes:
	[Much that is true about the reasons for Algol's relative demise.
	 However, ...]

>	- Algol68 was a decade in the making before implementations were
>	beginning to appear

    The Malvern compiler appeared in 1972, give or take a year [depends a
little on what "appear" means].  There were other implementations by
around 1975-6.

>			    - and for many (most?) machines implementations
>	has yet to become widely available.

    You just have to know who to ask!  There are several re-targetable
implementations which could easily be put onto any [reasonable] specified
machine.  As there is a version for the Amsterdam Compiler Kit, for example,
there is, by inference, an existing implementation for any machine which
has C [and can therefore run the ACK emulator].  "Widely available" is,
of course, the killer.

>				Algol68 did not emphasize [encapsulation
>	and modularity].

    Somewhat to the contrary, the Algol report was the first place where
I remember seeing *in a language definition* the idea that a user-program
ran in an environment which was defined by the system- and user-libraries.
The mechanisms of "keep lists" and of "holes" and "import/export" loom
large in the user manuals for actual implementations.  Admittedly, it
all looks somewhat primitive today.

>	- Algol68 had a main-frame bias at a time when first mini-computers
>	and then micros emerged as the major growth areas for programmers.

   The 68S implementation was designed specifically for minis, at a time
when, really, there were very few high-level languages available for them.

>							Had there been a K&R
>for Algol68 and a cheap and easily portable implementation of Algol68 available
>for a PDP11 in 1975 or so it would have swept the world covering the areas
>where Pascal and C thrived. However, there wasn't such a book and there wasn't
>such an implementation - and it is doubtful if there could have been.

    We were running 68S on our PDP-11 in the late 70's.  It *had* been
ported to Unix from something-or-other, I *think* in 1976.  It failed to
sweep the world.  There *were* good books on Algol, there *were* cheap
(at least to universities!) portable implementations.  They didn't come
with source code, and they were too difficult for undergraduates to play
with.  Compare Pascal.  Shame.  For that matter, compare Ada, and wonder.

    You omit another prime reason.  There was too much bickering.  People
take C or Pascal, warts and all;  they may wish something was different,
but no-one denied the initial rights of DMR&KT and NW to define C and
Pascal as they wanted.  The Algol community was riven by schism after
schism, and it really didn't help to have distinguished computer scientist
after distinguished computer scientist quibbling, too often from ignorance,
about features of Algol.

-- 
Andy Walker, Maths Dept., Nott'm Univ., UK.
anw@maths.nott.ac.uk

rcd@ico.isc.com (Dick Dunn) (03/28/91)

jejones@mcrware.UUCP (James Jones) writes:
> rcd@ico.isc.com (me) writes:
> > [stuff about Algol 68 having problems because no readable definition]
> Well...I wish I remembered who it was that pointed out in SIGPLAN Notices
> some time back that the easy-to-read standards (Pascal given as case in
> point, but nowadays the same can be said of C) were also full of holes,
> and that attempts to make them rigorous also made them much longer and
> less legible...

While it is true that there are holes in both J&W and K&R, the holes are
nowhere near as numerous or as big as people like to make them out to be.
One reason you need the big, clunky standards is to avoid the tort-lawyer
syndrome--where people intentionally go looking for holes in the language
definition, instead of reading it intelligently.  (I suppose one could also
call this the Weirdnix syndrome.:-)  I've spent a fair amount of time down
inside Pascal compilers, and a little bit of time inside C compilers, plus
a lot of time using both, and I've found very few questions that couldn't
be answered by a careful, honest reading of J&W or K&R, respectively.
When you place the reader in an adversarial role to the author, you make it
very hard to produce anything readable!  Writing is communication; the
typical standard is written with the assumption that the reader will
attempt to misunderstand (i.e., reject communication) whenever possible.

Also, although the standards process supposedly attempts to produce
rigorous definitions, it does so by an incredibly expensive, slow, stupid,
and roundabout method.  You don't get good designs out of committees;
there's certainly no reason to expect good standards out of them!  Stan-
dards committees usually get a few good people (perhaps 20% of a typical
committee, ultimately producing 80% of the work).  Then they get filled up
with people who have axes to grind, peter-principled mid-level-managers
who get to go to meetings as a job perk 'cause they like to travel,
deadwood who get sent to meetings so that the folks back home can get some
work done while they're gone, and so on.  In the name of "fairness", every
possible conflict within the industry is represented from both sides,
maximizing the number of issues which require extensive debate and mini-
mizing the likelihood that the end result will be cohesive.  In the end,
some important issues are decided based upon which committee members are
the stubbornest.

I'd bet that a usable C standard could have been produced by gathering a
small committee to pick apart K&R, looking for problem areas and loopholes,
submitting a short problem list to K and/or R, contracting with them or the
BLabs for their time to produce a revision.  It would have cost 1/10 as
much and produced a better result in a fraction of the time.  The standards
process carries an incredible lot of baggage.  (Yes, I know the process I'm
suggesting doesn't meet the rules of ANSI or ISO or even IEEE...and I don't
care.  I'm only trying to hypothesize something that could work better than
the sorry mess we've got today.)

So...stripped of my diatribe, the point is that YES, cleaning up a language
definition is going to make it slightly less accessible--but it need not
become so much less accessible as the standards process usually achieves.

>...(Said person also made the same point being made in the
> original posting, that a straightforward standard, even if leaky, encourages
> implementors.)

Right!  Not only do you get more implementors; you get a lot more users!
Remember, an implementor has much more incentive to dig through details and
formalisms.

Coming back to something related to my point about Algol 68, I'll say that
you have the best chance of getting a bunch of good implementations and a
real user community if you can get the implementations done and out into
the world before a standard is inflicted on the language!
-- 
Dick Dunn     rcd@ico.isc.com -or- ico!rcd       Boulder, CO   (303)449-2870
   The Official Colorado State Vegetable is now the "state legislator".

paul@taniwha.UUCP (Paul Campbell) (03/29/91)

In article <4202@zaphod.UUCP> bobd@zaphod.UUCP (Bob Dalgleish) writes:
>Pascal was designed by Wirth after he got disgusted with the rapidly
>growing size and complexity of the Algol68 effort.  I gave up in disgust

I did an Algol68 implementation as my MSc project, almost 10 years ago
now (I wish I'd had C to implement it in - it might still be usable :-).
I always had the idea that what Wirth did was implement all the 'easy'
stuff in Pascal.

>implementing a full compiler told me made my hair stand on end.  In
>fact, the stories are very similar to the ones I heard about Ada.  The
>overloaded operators, partial parametization, and array slices explored
>the very horizons of computer science capability.

Actually most of this stuff wasn't that hard to implement; what I found
really interesting was reading the papers being written by the people
implementing Ada - they were solving many of the same problems (and making
many of the same mistakes) that the Algol68 implementors had solved a
decade before. I think that a lot was learned from Algol68 in the field
of language design and a lot was missed in actual implementation.

In my opinion the major thing 'wrong' with Algol68 is that it is not easy to
approach with modern compiler tools - modern language designers tend to
choose grammars that are lr0 (or slr/whatever) because they want them to
go through yacc. Algol68 really has 2 grammars embedded inside each other
(expressions and parentheses); you can't parse one until you've parsed
the other.

The report(s) are hard to read; you keep getting that same sort of 'aha!'
reaction you get in v6 unix when you finally figure out what's meant by
"you're not expected to understand this".

By the way - for the record, if you ever have to implement operators
with changing priorities (a la A68/Ada), it's easy: create a production
for expressions:

	E := E OP E

This of course gives a shift/reduce conflict on OP - just have the
parser generator note it as such, and at parse time compare the priorities
of the stacked OP and the one about to be shifted, using this information
to decide whether to shift or reduce (i.e. resolve the conflict at run time).
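[Editor's sketch: the scheme above can be illustrated outside a parser
generator as an operand/operator stack loop. This is a minimal Python
version; the token list and the priority table are invented for the
example, standing in for Algol 68 "prio" declarations.]

```python
# Resolve the ambiguous production E := E OP E at parse time by
# comparing the priority of the stacked operator with the priority
# of the operator about to be shifted (left-associative on ties).

def parse(tokens, prio):
    """tokens: [operand, op, operand, op, operand, ...]
    prio: dict mapping operator symbol -> priority (higher binds tighter).
    Returns a nested ('op', left, right) tuple tree."""
    operands = [tokens[0]]
    ops = []
    for i in range(1, len(tokens), 2):
        op, rhs = tokens[i], tokens[i + 1]
        # Reduce while the stacked operator binds at least as tightly.
        while ops and prio[ops[-1]] >= prio[op]:
            b, a = operands.pop(), operands.pop()
            operands.append((ops.pop(), a, b))
        ops.append(op)            # shift
        operands.append(rhs)
    while ops:                    # drain the stacks
        b, a = operands.pop(), operands.pop()
        operands.append((ops.pop(), a, b))
    return operands[0]

# With conventional priorities, 1 + 2 * 3 groups as 1 + (2 * 3):
print(parse(['1', '+', '2', '*', '3'], {'+': 6, '*': 7}))
# Redeclaring '+' with a higher priority regroups the SAME token
# stream as (1 + 2) * 3 - which is why the decision must be deferred
# until the declared priorities are known:
print(parse(['1', '+', '2', '*', '3'], {'+': 9, '*': 7}))
```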

	Paul
-- 
Paul Campbell    UUCP: ..!mtxinu!taniwha!paul     AppleLink: CAMPBELL.P

"But don't we all deserve.
 More than a kinder and gentler fuck" - Two Nice Girls, "For the Inauguration"

nmm@cl.cam.ac.uk (Nick Maclaren) (03/30/91)

Dick Dunn writes:
< While it is true that there are holes in both J&W and K&R, the holes are
< nowhere near as numerous or as big as people like to make them out to be.
< ...

I agree that they are no bigger, but they are MUCH more numerous!

As one of the half-dozen or so people in the world who has designed and
implemented a C run-time system for a totally un-UNIX operating system
(IBM MVS), I know something about this area.  I tried looking at K&R to
find a description of what the UNIX libraries do, in order to resolve
some of the ambiguities in the ANSI standard.  Yeah, well ....

The ONLY reliable definition of the C language is the compiler, and
there is NO reliable definition of the library.  The available UNIX
implementations are subtly different, and the non-UNIX ones are often
very different.  I ended up deciding that it didn't matter too much
what I did in the problem areas, because the UNIX libraries were either
inconsistent or just plain buggy.

You may say that the library is not the language, but practical programmers
might disagree.  In any case, I have found such discrepancies in the
language proper, as well.  [I am also very much into writing portable code,
where portable means to any system/compiler you care to name.]

Come back the Algol 68 Report, all is forgiven!  I speak as an implementor.

Nick Maclaren
University of Cambridge Computer Laboratory
nmm@cl.cam.ac.uk

Someone else's quote:  "From an MVS viewpoint, it is difficult to
distinguish UNIX and MS-DOS".

lindsay@gandalf.cs.cmu.edu (Donald Lindsay) (03/31/91)

In article <801@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:
>... modern language designers tend to
>choose grammars that are lr0 (or slr/whatever) because they want them to
>go through yacc.

Some of us would rephrase that. We use lr0 (etc.) because otherwise
the users get confused. Cf. the failed "user extendable grammar"
languages.

>By the way - for the record if you ever have to implement operators
>with changing priorities (ala A68/Ada) it's easy, ...
>... and at run time compare the priorities
>of the stacked OP and the one about to be shifted, use this information
>to decide whether to shift or reduce (ie resolve the conflict at run-time).

This does not sound like a language feature to be proud of. 

-- 
Don		D.C.Lindsay .. temporarily at Carnegie Mellon Robotics

richard@aiai.ed.ac.uk (Richard Tobin) (04/01/91)

In article <1991Mar29.222133.2819@cl.cam.ac.uk> nmm@cl.cam.ac.uk (Nick Maclaren) writes:
>As one of the half-dozen or so people in the world who has designed and
>implemented a C run-time system for a totally un-UNIX operating system
>(IBM MVS), I know something about this area.  I tried looking at K&R to
>find a description of what the UNIX libraries do, in order to resolve
>some of the ambiguities in the ANSI standard.  Yeah, well ....

Did you report your problems to ANSI or ISO?  Did you get any response?

It would be very useful if the problems you found were made public,
so that the rest of us don't have to re-find them ourselves.

>Someone else's quote:  "From an MVS viewpoint, it is difficult to
>distinguish UNIX and MS-DOS".

That's Richard O'Keefe's.  To me, it suggests serious problems with
an MVS viewpoint.

-- Richard

-- 
Richard Tobin,                       JANET: R.Tobin@uk.ac.ed             
AI Applications Institute,           ARPA:  R.Tobin%uk.ac.ed@nsfnet-relay.ac.uk
Edinburgh University.                UUCP:  ...!ukc!ed.ac.uk!R.Tobin

jmaynard@thesis1.med.uth.tmc.edu (Jay Maynard) (04/02/91)

In article <4394@skye.ed.ac.uk> richard@aiai.UUCP (Richard Tobin) writes:
>In article <1991Mar29.222133.2819@cl.cam.ac.uk> nmm@cl.cam.ac.uk (Nick Maclaren) writes:
>>Someone else's quote:  "From an MVS viewpoint, it is difficult to
>>distinguish UNIX and MS-DOS".
>That's Richard O'Keefe's.  To me, it suggests serious problems with
>an MVS viewpoint.

No...all it says is that MVS is *different* from the interactively-oriented
philosophy inherent in both Unix and MS-DOS. The differences between MVS
and either are far, far greater than the differences between the two. There's
a simple reason: just as a 3090-600J is designed to do a different kind of
work from a Sequent, so MVS is designed to do a different kind of work from
Unix.

...Jay (a senior MVS systems programmer in real life)
-- 
Jay Maynard, EMT-P, K5ZC, PP-ASEL | Never ascribe to malice that which can
jmaynard@thesis1.med.uth.tmc.edu  | adequately be explained by stupidity.
  "You can even run GNUemacs under X-windows without paging if you allow
          about 32MB per user." -- Bill Davidsen  "Oink!" -- me

bobd@zaphod.UUCP (Bob Dalgleish) (04/06/91)

In article <801@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:
>In article <4202@zaphod.UUCP> bobd@zaphod.UUCP (Bob Dalgleish) writes:
>>Pascal was designed by Wirth after he got disgusted with the rapidly
>>growing size and complexity of the Algol68 effort.  I gave up in disgust
>I did an Algol68 implementation as my MSc project, almost 10 years ago
I'm impressed.
>now (I wish I'd had C to implement it in - it might still be useable :-)
>I always had the idea that what Wirth did was implement all the 'easy'
>stuff in Pascal.
How do you mean?

I certainly know that the use of VW grammars went to
various people's heads, and that they decided it might be easier to
implement some of the runtime in VW grammars.  (I understand that VW
himself completed a specification for the run-time in his two-level
grammar; it was at least as hard to read as the syntax).

>>overloaded operators, partial parametrization, and array slices explored

>Actually most of this stuff wasn't that hard to implement, what I found
>really interesting was reading the papers being written by the people
>implementing Ada - they were solving many of the same problems (and making
>many of the same mistakes) that the Algol68 implementors had solved a
>decade before - I think that a lot was learned from Algol68 in the field
>of language design and a lot was missed in actual implementation.
Agreed -- there is nothing hard about implementing an NP-hard algorithm;
it just takes a lot of nerve to explain to a programmer that his
three-hundred-line program will take until next Tuesday to compile
because he used a seemingly innocuous form of overloading.  The
pathological cases cannot be readily recognized by the compiler until
you run out of CPU time.

>In my opinion the major thing 'wrong' with Algol68 is that it is not easy to
>approach with modern compiler tools - modern language designers tend to
>choose grammars that are lr0 (or slr/whatever) because they want them to
>go through yacc. Algol68 really has 2 grammars embedded inside each other
>(expressions and parentheses) you can't parse one until you've parsed 
>the other.
Hank Boehm at the University of Alberta in Edmonton put together a neat
method of recognition.  He constructed two slr(0) grammars, one to be
applied left to right, and the other to be applied by reading the source
backwards.  When the I/O system was added afterwards, his parser failed.
I also saw some work on attribute grammars, which were well over my
head.

>The report(s) are hard to read, you keep getting that same sort of 'aha!'
>reaction you get in v6 unix when you finally figure out what's ment by
>"you're not expected to understand this".
Much like a religious book!  However, I never really got comfortable
with the semidecidability of language construction: illegal constructs
would often turn into productions which had no valid reduction (excuse
the hashed metaphor).
-- 
-- * * * Remember: I before E except after DALGL * * *--
Bob Dalgleish		bobd@zaphod.UUCP

paul@taniwha.UUCP (Paul Campbell) (04/07/91)

In article <4217@zaphod.UUCP> bobd@zaphod.UUCP (Bob Dalgleish) writes:
>In article <801@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:

>>I always had the idea that what Wirth did was implement all the 'easy'
>>stuff in Pascal.

>How do you mean?

Well, he implemented the basic type structures (but not full pointers or
unions), most of the flow control (but only as voids), etc.

I didn't mean this in a bad way - I think that it was probably a good choice
at the time - and the results are obvious - Pascal caught on because it
is (relatively) easy to implement and a portable compiler is available ...

>Hank Boehm at the University of Alberta in Edmonton put together a neat
>method of recognition.  He constructed two slr(0) grammars, one to be
>applied left to right, and the other to be applied by reading the source
>backwards.  When the I/O system was added afterwards, his parser failed.
>I also saw some work on attribute grammars, which were well over my
>head.

I did something similar, except I did two passes: the first parsed the
parentheses (control flow and mode/operator/prio declarations) using recursive
descent - the result was a tree annotated with the unparsed expressions
in it. The second pass traversed the tree and used an lr0 parser that
was recursively run on the unparsed nodes - the result being a tree that was
walked to generate code.
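[Editor's sketch: a toy Python version of that first pass. The input
string and the nested-list representation are invented for illustration;
a real compiler would build proper tree nodes and also recognize the
bold-word clause brackets, not just parentheses.]

```python
# Pass 1: recover only the bracket structure by recursive descent.
# Nested '(...)' clauses become sublists; the text between structural
# tokens is carried along as *unparsed* expression fragments, to be
# handed to an expression parser in pass 2 (once all operator/prio
# declarations in scope are known).

def parse_clauses(src):
    pos = 0

    def clause():
        nonlocal pos
        node, buf = [], ''
        while pos < len(src):
            ch = src[pos]
            pos += 1
            if ch == '(':
                if buf.strip():
                    node.append(buf.strip())   # flush unparsed fragment
                    buf = ''
                node.append(clause())          # recurse on nested clause
            elif ch == ')':
                break                          # end of this clause
            else:
                buf += ch
        if buf.strip():
            node.append(buf.strip())
        return node

    return clause()

# The clause structure is recovered; "a + b" etc. stay unparsed:
print(parse_clauses('x := (a + b) * (c - 1)'))
```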

	Paul Campbell

PS: for those who don't know, even the revised report contains ambiguity -
the most common example is:

	union([]int , struct (int a, b, c)) fred = (1,2,3);

-- 
Paul Campbell    UUCP: ..!mtxinu!taniwha!paul     AppleLink: CAMPBELL.P

"But don't we all deserve.
 More than a kinder and gentler fuck" - Two Nice Girls, "For the Inauguration"

dik@cwi.nl (Dik T. Winter) (04/08/91)

In article <809@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:
 > PS: for those who don't know even the revised report contains ambiguity,
 > the most common example is:
 > 
 > 	union([]int , struct (int a, b, c)) fred = (1,2,3);
 > 
I know there are even outright errors in the revised report (if I remember
right, somewhere in the transput section there is a heap declaration that
is not allowed), but as far as I know the above is disallowed.  I have to
check because I have no RR here at home, but the CDC Algol 68 compiler
(pretty good and extremely complete) has the following to say:
         1          (
         2          'UNION'([]'INT','STRUCT'('INT'A,B,C)) FRED = (1,2,3);
         3          PRINT(FRED)
         4          )
 ERRORS DETECTED DURING THE MODE CHECKING
 ***     2 F     MODE-ERROR IN IDENTITY DECLARATION FOR FRED
which means that there is no coercion that will coerce the right-hand side
to the proper mode.
--
dik t. winter, cwi, amsterdam, nederland
dik@cwi.nl

chl@cs.man.ac.uk (Charles Lindsey) (04/08/91)

In <809@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:

>PS: for those who don't know even the revised report contains ambiguity,
>the most common example is:

>	union([]int , struct (int a, b, c)) fred = (1,2,3);

Not so. For this to be allowed, "(1,2,3)" would have to be a
	strong-UNITED-unit		[4.4.1.c,d,5.2.1.c]
where 'UNITED' is the particular union in question.
This, in turn, would have to be a
	strong-UNITED-ENCLOSED-clause	[3.2.1.d,5.1.A,B,C,D]
or, more specifically, a
	strong-UNITED-collateral-clause	[1.2.2.A]
BUT, there is no such animal		[3.3.1.d,e]

The point is that ENCLOSED-clauses cannot be coerced (you can only coerce
the things inside them).

bd@ZYX.SE (Bjorn Danielsson) (04/09/91)

In article <809@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:
> [text deleted]
>
>PS: for those who don't know even the revised report contains ambiguity,
>the most common example is:
>
>	union([]int , struct (int a, b, c)) fred = (1,2,3);
>

That example isn't a legal Algol-68 declaration: the source (right-hand part)
must be the coercee of a "united to" coercion in order to be acceptable as a
union value for the identity-declaration. But the syntactic "sort" of a
"united to" coercend is "meek" according to section 6.4.1.a, and a collateral
clause like "(1,2,3)" is only allowed in a "strong" position according to
section 3.3.1.

I'm sure there are bugs in the revised report, but that wasn't one of them.

(My source is the Springer-Verlag edition from 1976, ISBN 3-540-07592-5)
-- 
Bjorn Danielsson, ZYX Sweden AB, +46 (8) 665 32 05, bd@zyx.SE

paul@taniwha.UUCP (Paul Campbell) (04/10/91)

In article <1991Apr08.235109.11628@ZYX.SE> bd@ZYX.SE (Bjorn Danielsson) writes:
>In article <809@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:
>> [text deleted]
>>
>>	union([]int , struct (int a, b, c)) fred = (1,2,3);
>That example isn't a legal Algol-68 declaration: the source (right-hand part)
>must be the coercee of a "united to" coercion in order to be acceptable as a
>union value for the identity-declaration. But the syntactic "sort" of a
>"united to" coercend is "meek" according to section 6.4.1.a, and a collateral
>clause like "(1,2,3)" is only allowed in a "strong" position according to
>section 3.3.1.

But STRONG is defined to be (among other things) FIRM (6.1.1A),
which is defined to be (among others) MEEK (6.1.1B).

An easier way to understand it is on page 93 where it states (in the
explanation): 'In a strong position all 6 coercions can occur'.

Basically, the idea is that the coercions have to be unambiguous; the strong,
meek, etc. positions describe places where a coercion might be ambiguous
and therefore the compiler wouldn't know what to do. The reason the above
problem occurs is that there are two forms of the collateral clause
(3.3.1d and 3.3.1e) which can in some circumstances be ambiguous.

Gee it's a long time since I delved into the Report (urgh - 10 years so
I'm probably a bit rusty)

In a strong position the compiler always knows which type it's coercing to;
in a firm position it has a list to choose from (e.g. the parameters from a
list of overloaded operators); meek ones are limited mainly because the
required type is known to be very simple because of its syntactic position
(i.e. an array subscript or the selection expression in an if statement);
soft positions are the places where ambiguity is most to be avoided, since
the resulting types are used to create a strong position for another
expression (i.e. a in a := b).

All in all I hate coercions, they are a real pain in the butt, give me
the C 'a = *b' any day, not only is it easier to compile, typecheck etc
but the programmer can see what's going on easily. Note that most languages
[except a few like Bliss] still have dereference coercions in assignments,
but just one (so the C 'a = b' is really the Bliss 'a = .b'). Other places
where coercions pop up are in float<->int changes and Pascal calls
of functions without parameters (C requires an explicit call - hooray!).

	Paul

(Followups to comp.lang.misc)
-- 
Paul Campbell    UUCP: ..!mtxinu!taniwha!paul     AppleLink: CAMPBELL.P

"But don't we all deserve.
 More than a kinder and gentler fuck" - Two Nice Girls, "For the Inauguration"

paul@taniwha.UUCP (Paul Campbell) (04/10/91)

In article <chl.671105965@m1> chl@cs.man.ac.uk (Charles Lindsey) writes:
>In <809@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:
>
>>	union([]int , struct (int a, b, c)) fred = (1,2,3);
>
>Not so. For this to be allowed, "(1,2,3)" would have to be a
>	strong-UNITED-unit		[4.4.1.c,d,5.2.1.c]
>where 'UNITED' is the particular union in question.
>This, in turn, would have to be a
>	strong-UNITED-ENCLOSED-clause	[3.2.1.d,5.1.A,B,C,D]
>or, more specifically, a
>	strong-UNITED-collateral-clause	[1.2.2.A]
>BUT, there is no such animal		[3.3.1.d,e]
>
>The point is that ENCLOSED-clauses cannot be coerced (you can only coerce
>the things inside them).

OK, I'm beat :-) I finally figured out where this happens - it's really 
because:

	PRIMARY:: ENCLOSED clause [5D]

rather than:

	PRIMARY:: ENCLOSED clause coercee

(all the other productions in this section end in coercee). 

Early on while I was working on my compiler I read someone describe this
as being a problem with the report; when I got to that part of the compiler
(coercing displays etc.) it all seemed to make sense.

So how do you initialise a union with a row display?? Do you have to put
it in a temporary variable first?? ugh


	Paul

-- 
Paul Campbell    UUCP: ..!mtxinu!taniwha!paul     AppleLink: CAMPBELL.P

"But don't we all deserve.
 More than a kinder and gentler fuck" - Two Nice Girls, "For the Inauguration"

aap@cged.co.uk (Andy Pryor) (04/12/91)

paul@taniwha.UUCP (Paul Campbell) writes:

>In article <chl.671105965@m1> chl@cs.man.ac.uk (Charles Lindsey) writes:
>>In <809@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:
>>
>>>	union([]int , struct (int a, b, c)) fred = (1,2,3);
>>
>(coercing displays etc) it all seemed to make sense.

>So how do you initialise a union with a row display?? Do you have to put
>it in a temporary variable first?? ugh

Surely, all you need is a cast?

     union([]int , struct (int a, b, c)) fred = [] int (1,2,3);


-- 
Andy Pryor
Computer General Electronic Design Ltd,            Email: aap@cged.co.uk
The New Church, Henry St, Bath, BA1 1JR, UK        Phone: +44 225 482744

chl@cs.man.ac.uk (Charles Lindsey) (04/12/91)

In <812@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:

>So how do you initialise a union with a row display?? Do you have to put
>it in a temporary variable first?? ugh

Use a cast.

	union([]int , struct (int a, b, c)) fred = []int (1,2,3);

yfcw14@castle.ed.ac.uk (K P Donnelly) (04/12/91)

I knew I had seen this question some where before:

>>>	union([]int , struct (int a, b, c)) fred = (1,2,3);

In the book "Algol68: a first and second course" by A.D.McGettrick, 1978,
exercise 10 (page 240) in chapter 7 is as follows:

     Does

        union ([]int, struct(int p,q,r))
  
     specify an acceptable united mode?  If so, is

        union ([]int, struct(int p,q,r)) z := (1,2,3)

     legal?  Give reasons for your answer.

This is the book I have most enjoyed reading of any book on computing I
have ever read.  At the time I read it, ten years ago, I had a good 
Algol68 compiler available to me and I naively thought that in a couple 
of years we would all be using Algol68 because it was so much better than
Fortran or anything else.  strcat("But ten ", strcat("years later", " ..."))

  Kevin Donnelly

bd@ZYX.SE (Bjorn Danielsson) (04/12/91)

In article <812@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:
>In article <chl.671105965@m1> chl@cs.man.ac.uk (Charles Lindsey) writes:
>> [text deleted]
>>The point is that ENCLOSED-clauses cannot be coerced (you can only coerce
>>the things inside them).
>
>OK, I'm beat :-) I finally figured out where this happens - it's really 
>because:
>
>	PRIMARY:: ENCLOSED clause [5D]
>
>rather than:
>
>	PRIMARY:: ENCLOSED clause coercee
>
>(all the other productions in this section end in coercee). 

Yes, Charles Lindsey is of course right, and my explanation was incorrect.
(I knew that collateral clauses couldn't be coerced into unions, but I was
a bit hasty when looking it up in the report -- sorry!)

> [more text deleted]
>So how do you initialise a union with a row display?? Do you have to put
>it in a temporary variable first?? ugh

No, you use a cast, e.g.

        union([]int, struct(int a, b, c)) fred = []int(1,2,3);

A strong (or firm) cast of the appropriate mode may be coerced into a union.

As I remember from playing around with Algol-68 programming, the coercion
rules were much easier to use than to understand...

>Paul Campbell    UUCP: ..!mtxinu!taniwha!paul     AppleLink: CAMPBELL.P
-- 
Bjorn Danielsson, ZYX Sweden AB, +46 (8) 665 32 05, bd@zyx.SE

sfk@otter.hpl.hp.com (Steve Knight) (04/17/91)

I vaguely recall the original posting on Algol68 wondering why the language
didn't receive more acclaim and interest.  I can't help but think that this
string is as good an explanation as I've ever seen.

In short, the language is full of complex rules for the convenience
of the programmer (such as the coercion rules) whose implications are 
non-obvious.  It would have performed a better service if the designers had
focussed on keeping it small, clean, and easy to learn.

My two favourite examples from Algol68 are (1) the dual syntax for control
constructs and (2) the limitations on procedures being returned as results.
Before I get zapped by solid flame -- let me add that there's nothing 
intrinsically wrong with either of these decisions.  But they, amongst a 
cast of thousands, contributed to a general perception of complexity
and a lack of orthogonality.

If these two decisions are compared with those of Pascal, a much more
"successful" language by my reading, we notice that the lack of a dual
syntax is not lamented (although other aspects are) and that the issue
of procedure-results was solved by simply forbidding them!  Both examples
are much, much easier to learn -- even though one might think they are
decisions of an inferior quality.

Anyway, I was really amused by the general feel of the string & the
original question.

Steve

anw@maths.nott.ac.uk (Dr A. N. Walker) (04/19/91)

[I *know* I shouldn't rise to these silly articles, but I still cannot
resist.  Sorry.]

In article <2400042@otter.hpl.hp.com> sfk@otter.hpl.hp.com (Steve Knight)
writes:
>I vaguely recall the original posting on Algol68 wondering why the language
>didn't receive more acclaim and interest.

	It *did*. Unfortunately, it also received a lot of flak, mostly
ill-informed, and it wasn't helped by poor PR [at least in comparison
with Pascal].

>In short, the language is full of complex rules for the convenience
>of the programmer (such as the coercion rules) whose implications are 
>non-obvious.

	The *formal* rules are complex.  Have you looked at the similar
rules in C, Ada, PL/1, ML, ..., and even Pascal?  22 years on, *I* don't
understand them, but it hasn't ever bothered me or the programs I write,
any more than not memorising the C precedence rules stops me writing C.
In Algol, things work the way you expect or else the compiler spits out
an error and you put some more brackets in.

>	     It would have performed a better service if the designers had
>focussed on keeping it small, clean, and easy to learn.

	They *did* focus on clean.  As for small, the standard Watt-Peck-
Sintzoff syntax charts (comparable to the Pascal "railway tracks") fit
onto one side of A4, and the EBNF definition of Woodward&Bond just spills
into a second page of smaller than A4;  both smaller than the equivalent
Pascal.  As for easy to learn:  from years of teaching both, I confidently
assert that if you found Pascal easier, your Algol was badly taught.
Algol is a *programmer's* language, just as Unix is a programmer's OS.

>My two favourite examples from Algol68 are (1) the dual syntax for control
>constructs

	If it were not dual, which would you prefer?  Lots of parentheses,
as in Lisp?  Or monstrosities like "2 * begin x+y end"?  You are free to
use only one system;  the compiler will be happy.  Most programmers prefer
to use "begin ... end" or "( ... )" as best suits the context.

>	    and (2) the limitations on procedures being returned as results.

	What limitations?  You are perhaps thinking instead of the fact
that locally-constructed procedures tend to have narrower scope than one
might like, which makes returning them less useful than one might hope;
but that applies to *all* modes, and happens in C and Pascal as well.

>						      But they, amongst a 
>cast of thousands, contributed to a general perception of complexity
>and a lack of orthogonality.

	*False* perception, gratuitously spread by ill-informed comments.

>If these two decisions are compared with those of Pascal, a much more
>"successful" languages by my reading,

	Prediction:  in 5 years time, Pascal will be deader than Algol,
and much less lamented.  How do you measure success, anyway?  C, for
example, contains much more Algol than Pascal.  There may be fewer books
on Algol than on Pascal, fewer introductory courses, fewer packages for
your Mac/PC, but there is certainly more influence on later languages,
and there are still lessons for language designers and compiler writers
to learn from Algol.

>Anyway, I was really amused by the general feel of the string & the
>original question.

	Your privilege.

-- 
Andy Walker, Maths Dept., Nott'm Univ., UK.
anw@maths.nott.ac.uk

sfk@otter.hpl.hp.com (Steve Knight) (04/23/91)

In reply to some of Andy Walker's comments (which in turn were a reply to
my flippant remarks concerning the needless complexities of Algol68):

> The *formal* rules are complex.  Have you looked at the similar
> rules in C, Ada, PL/1, ML, ..., and even Pascal?

This remark assumes that C or Ada are proper comparisons to make.  I cordially
dislike all the languages mentioned in this comparison & would cheerfully lob
brickbats at them, too.  But it doesn't make Algol68 a better language or less
complex.  (My benchmark would be ML.)

> They *did* focus on clean.

I guess this is just a matter of opinion.  Every Algol programmer I know
cheerfully concedes they don't understand the rules in detail or all their
probable interactions.  I don't call this clean.

I remarked on the dual control construct syntax.  I intended the remark to
be applied for the strange IF/CASE syntax rather than the begin/end 
abbreviation.  However, Andy comments:

> If it were not dual, which would you prefer?  Lots of parentheses,
> as in Lisp?  Or monstrosities like "2 * begin x+y end"? 

The answer is neither.  The necessity for "begin" and "end" is an artifact
of poorly chosen syntax.  The comparison I would use here is Pop11, which has
an Algol-like syntax without the dual control constructs or the necessity for
begin/end brackets.

I also commented on the well-known limitation of Algol68 of being unable to
return general procedures as results.  Andy asks:

> What limitations?  You are perhaps thinking instead of the fact
> that locally-constructed procedures tend to have narrower scope than one
> might like, which makes returning them less useful than one might hope;
> but that applies to *all* modes, and happens in C and Pascal as well.

I think we are talking about the same limitation here.  I was alluding to 
the restriction in Algol68 of being unable to return procedures whose scope
includes non-global variables declared in a wider scope.  In short, the 
failure to implement lexical binding according to (say) the lambda calculus.

Of course, C and Pascal are poor choices.  Why not choose an earlier language
such as Simula67, where no such restriction exists?  Other obvious choices
might be Pop11, Common Lisp, or Standard ML.  All of these languages, and
many others, implement lexical binding in its full glory without any
restriction.  Standard ML is an especially good comparison because it is
strongly typed.  Modula-3 is going to be another candidate.

Andy then attacks my remarks about apparent complexity as:
> *False* perception, gratuitously spread by ill-informed comments.

I don't think so -- although I admit my remarks were less than 100% serious.
My criticisms come from working with languages that adhere to the same overall
design principles as Algol68 without the seeming complexity.  Given this
experience one can't help but ask "why is Algol68 so different & so apparently
complex?"  There doesn't appear to be any answer beyond the historical
details of who was on the committee & their personal aims.

> Prediction:  in 5 years time, Pascal will be deader than Algol,
> and much less lamented.

We can dream :-)

Steve

chl@cs.man.ac.uk (Charles Lindsey) (04/26/91)

In <2400045@otter.hpl.hp.com> sfk@otter.hpl.hp.com (Steve Knight) writes:

>I think we are talking about the same limitation here.  I was alluding to 
>the restriction in Algol68 of being unable to return procedures whose scope
>includes non-global variables declared in a wider scope.  In short, the 
>failure to implement lexical binding according to (say) the lambda calculus.

>Of course, C and Pascal are poor choices.  Why not choose an earlier language
>such as Simula67, where no such restriction exists?  Other obvious choices
>might be Pop11, Common Lisp, or Standard ML.  All of these languages, and
>many others, implement lexical binding in its full glory without any
>restriction.  Standard ML is an especially good comparison because it is
>strongly typed.  Modula-3 is going to be another candidate.

Yes, but all these languages pay a high run-time overhead for the facility.
This issue was discussed exhaustively at the time of the ALGOL 68 Revision,
and the decision was that ALGOL 68 had been designed as a language to be
implemented using a conventional stack, and it would be better to leave such
facilities to newer languages designed round a different philosophy (such as
SML). We wanted to avoid the FORTRAN approach of trying to catch up with our
competitors, but with a 10-year time delay :-). A lot of time was spent
investigating implementation problems, but we wanted to adhere to the Bauer
Criterion (that people who did not use a facility should not pay for it) and
we kept coming across obscure problems which prevented this from being met.

What we did recommend as the way to graft this sort of thing onto an
existing language was partial parametrization. This was described as an
extension after the revision was completed, but no one has implemented it so
far as I know :-(.

anw@maths.nott.ac.uk (Dr A. N. Walker) (04/30/91)

In article <2400045@otter.hpl.hp.com> sfk@otter.hpl.hp.com (Steve Knight)
writes:
[Lots of things.  He previously compared Algol unfavourably to Pascal;
 he then grumbles that my reply only compared Algol with languages he
 "cordially dislikes" (including ML, which is his "benchmark").  He
 flippantly (his claim) tosses more brickbats at the Algol committee,
 and five times refers to Algol's "apparent" complexity.]

The complexity of coercions is inherent.  As a mathematician, I want to
"add" all sorts of things:  integer, real and complex numbers, fractions,
vectors, matrices, sets, strings, sequences, etc....  As a computer
programmer, I want to use the addition operator for constants of all
these different types, in many combinations, for variables, for functions returning
pointers to such variables, etc., and I want to be able to define such
an operator [either as a variant of "+", or at least as a function
called "add_vect" or similar] in order to "add" other things that may
occur in my code.

	The *situation* is complicated.  If the complication is put into
the language and the compiler, then the user can be given a nice simple
interface, so that as well as

	2 + 2

I can write (for instance)

	j + k		[where "j" and "k" are integer variables],
	A + B		[where "A" and "B" are conformable matrices],
	x + i y + 1	[where "x" and "y" are real variables] and
	"hello" + world [where "world" is a function returning a pointer
		to a function returning a string]

As you remove the ability to declare operators, and to coerce objects,
the rules become simpler, and the programming gets correspondingly more
tedious, until you have to write the penultimate example above as

	complex_add (complex (deref (x), deref (y)), complex (real (1), 0.0))

Algol is easy to use only because its formalisation is complicated.

>I remarked on the dual control construct syntax.  I intended the remark to
>be applied for the strange IF/CASE syntax [...]

	What's strange about it [except that "end_if" is spelled "fi"]?

>			 The necessity for "begin" and "end" is an artifact
>or poorly chosen syntax.

	Historical necessity of the days when most programs were prepared
on card punches which had only upper case letters, digits, +, - and a few
punctuation symbols, and only one style of parenthesis.  These days, when
almost every terminal has parentheses, brackets and braces, several quote
symbols, and much punctuation, it is much easier to design languages which
use few words.  Whether one *should* do so is a matter of taste.

>I also commented on the well-known limitation of Algol68 of being unable to
>return general procedures as results.  [...] I was alluding to
>the restriction in Algol68 of being unable to return procedures whose scope
>includes non-global variables declared in a wider scope.

	There is no such restriction (as I stated in my previous posting).
The following program [in which "q" returns a procedure which involves the
non-global "x" declared in a wider scope;  this procedure is assigned to
"p", which is invoked after "q" has gone out of scope] compiles and prints
3.2 on this computer [Sun 3/260]:

	(REAL dummy;
	   (REAL x;
	      (PROC (REAL) REAL p;
		 (PROC q = PROC (REAL) REAL: (REAL a) REAL: x + a; p := q);
	       x := 1.2; print (p (2)))))

It is true that Algol is cautious about scope-safety of assignments, and
about scopes in general.  As a very large fraction of the non-trivial bugs
that I see as a teacher would be caught by this caution if it applied to
other languages, I have no objection to it.  Charles Lindsey, in another
reply to SFK, has pointed out that the utility could be extended with no
loss of safety by partial parametrisation.

>My criticisms come from working with languages that adhere to the same overall
>design principles as Algol68 without the seeming complexity.  Given this
>experience one can't help but ask "why is Algol68 so different & so apparently
>complex?"  There doesn't appear to be any answer beyond the historical
>details of who was on the committee & their personal aims.

	Almost all languages start out as one- or two-person projects,
"defined by DMR's compiler", so to speak.  If the language becomes popular,
as did C, then DMR's compiler is no longer enough;  the language is ported
to a wide variety of computers, and used for a wide variety of purposes,
many of them unintended by DMR.  If the language remains popular, then
discrepancies between implementations start to loom large, and an effort
is made to standardise the language.  At this point, what may have seemed
to be a nice, simple language suddenly becomes more apparently complex;
C is again a good example.  If you compare *any* language which has been
properly standardised with Algol, and especially *any* language whose
standard is expressed formally, you might start to appreciate the [now
23 years old] achievement of Algol.  A few other languages have started
out as committee efforts;  Ada is the prime example.  Compare and discuss.

	Of course, if you persist in comparing the formal definition of
Algol with a glossy brochure describing the best side of your favourite
language, you get a misleading impression.  Try reading some Algol
glossy brochures.

-- 
Andy Walker, Maths Dept., Nott'm Univ., UK.
anw@maths.nott.ac.uk

sfk@otter.hpl.hp.com (Steve Knight) (05/01/91)

I wrote in an earlier message (amongst other topics) about the omission of
full lexical binding from Algol 68, comparing the Algol68 policy with that of
languages such as Lisp and Standard ML.

Charles comments:
> Yes, but all these languages pay a high run-time overhead for the facility.

I'd like to understand this claim.  

From my perspective, as a programmer in
1991, the implementation of lexical binding only involves a cost in two 
circumstances.  Firstly, when a procedure is returned as a result that 
captures a variable declared in a wider lexical scope than the procedure.
Since this is the situation that Algol68 outlaws, the Bauer Criterion (see
below) would allow any extra costs.  So this first case is not relevant.

The second situation is relevant.  In the case where a procedure is not
returned as a result but the compiler is unable to deduce this (without
inter-procedural reasoning) there is an associated cost.  An example would
typically be uses of "maplist" which maps a procedure of type A -> B across a
list of type list(A) to produce a result of type list(B).  When "maplist"
is used, the compiler may not know enough about maplist to 'realise' that the
lexical variables captured by the procedure can be implemented cheaply.

Although it is true that this second situation fails the Bauer Criterion, it
is not a disastrous failure.  It is not an especially common situation in
typical Algol programming (although it is in purely functional programming)
and there exist several obvious and effective heuristics for eliminating
the overhead in common cases.

This does not imply "high runtime overhead" to me.  But perhaps this is because
my perspective is different from that in 1968 (or whenever).  I would 
genuinely appreciate clarification.

> A lot of time was spent
> investigating implementation problems, but we wanted to adhere to the Bauer
> Criterion (that people who did not use a facility should not pay for it) and
> we kept coming across obscure problems which prevented this from being met.

Generally speaking the Bauer Criterion (as described above) is a good
touchstone for the addition of features to a language.  I've frequently applied it
in the Pop11 standardisation work.  However, it does need to be taken as part
of a set of general principles, such as orthogonality and ease of learning,
and occasionally compromised.  (I appreciate this comment does not contradict
the original posting.)  Lexical scoping is one issue which demonstrates 
such a compromise.  We don't know how to resolve the efficiency criteria with
those of orthogonal language design.  Since we only have to relax the 
efficiency criteria very slightly, this is probably the best solution.

I am also struck by the different solutions adopted by Pascal and Algol68 on
this issue.  (Doubtless my colleagues will award me a second fruit-basket-on-
the-head for what follows.)  Pascal solved this conflict by forbidding 
procedural return values -- a restriction that is easy to understand.  The
Algol68 restriction, I believe, although more relaxed is harder to teach.
This is an example, in my view, of how Pascal is a more satisfactory 
language in terms of teaching and popularisation.

Steve

kers@hplb.hpl.hp.com (Chris Dollin) (05/02/91)

Steve Knight wrote:

   .....
   this issue.  (Doubtless my colleagues will award me a second fruit-basket-
   on-the-head for what follows.)  Pascal solved this conflict by forbidding 
   .....

Biscuit-basket, actually.

[I agree with Steve's remarks about full lexical scoping, which I have omitted
for the sake of the basket.]
--

Regards, Kers.      | "You're better off  not dreaming of  the things to come;
Caravan:            | Dreams  are always ending  far too soon."

sfk@otter.hpl.hp.com (Steve Knight) (05/03/91)

Well, I'll make this my last posting on Algol68, methinks, to comment on a
few small points made by Andy Walker.

> He flippantly (his claim) tosses more brickbats at the Algol committee,

This certainly wasn't my intention -- the language, yes, the committee, no.
Apologies to all concerned if my message read that way.

> The complexity of coercions is inherent. [Argument deleted]

The obvious comparison is with Standard ML.  It implements variables as
refs, exactly as Algol68, without the coercions.  It is a good, workable
system.  In the case of arithmetic, interestingly enough, SML does implement
a form of type coercion through overloading the arithmetic operators.  

So whilst the general point Andy makes is valid, SML is working proof that
you don't require the complicated coercion rules in strongly typed languages.
I don't think this has anything to do with SML's polymorphic type 
system -- which would make this an invalid comparison.

On the topic of procedural results, Andy states:
> There is no such restriction (as I stated in my previous posting).
He supplies this program:
	(REAL dummy;
	   (REAL x;
	      (PROC (REAL) REAL p;
		 (PROC q = PROC (REAL) REAL: (REAL a) REAL: x + a; p := q);
	       x := 1.2; print (p (2)))))
This indeed shows that I overstated the restriction (whoops!).  However, the
salient feature of this program is that the procedure is invoked within the
scope of "x".  The restriction I was trying to capture should be extended
to include an invocation of the procedure outside the scope of the variable it
"captures".  A simple example in ML would be
    fun K x = fn y => x;
    val one = K 1;
Any invocation of "one" would involve referring to an out-of-scope local.
As I understand it, this is the serious restriction in Algol68 -- although 
I didn't accurately state it.

> If you compare *any* language which has been
> properly standardised with Algol, and especially *any* language whose
> standard is expressed formally, you might start to appreciate the [now
> 23 years old] achievement of Algol.

True.  The comparison that people should make is with Standard ML, which
is formally specified, also.  Draw your own conclusions.

Incidentally, I sincerely hope that no other readers of my
letters thought they belittled the work of the Algol68 committee!  I also hope
it is fair to criticise a language like Algol68 for being "apparently 
complex" without being thought to take pot-shots.

> Of course, if you persist in comparing the formal definition of
> Algol with a glossy brochure describing the best side of your favourite
> language, you get a misleading impression. 

A cheap shot.  The point I was making is one of "apparent" complexity rather
than genuine complexity.  Berating me with my own point makes me feel
rather tired.  Sure, in real use the complexity of Algol68 rarely bites.
Just like in Lisp, the parentheses quickly fade from view.  But Lisp gets
knocked for its visual ugliness & Algol for its complex rules.  Whether you
think this is "fair" depends on whether you are trying to promote the 
language or understand its relative lack of success.

Steve

markf@zurich.ai.mit.edu (Mark Friedman) (05/03/91)

In article <2400046@otter.hpl.hp.com> sfk@otter.hpl.hp.com (Steve Knight) writes:

   I wrote in an earlier message (amongst other topics) about the
   omission of full lexical binding from Algol 68, comparing the
   Algol68 policy with that of languages such as Lisp and Standard ML.

   Charles comments: Yes, but all these languages pay a high run-time
   > overhead for the facility.

   I'd like to understand this claim.  

   From my perspective, as a programmer in 1991, the implementation of
   lexical binding only involves a cost in two circumstances.  ...

   The second situation is relevant.  In the case where a procedure is
   not returned as a result but the compiler is unable to deduce this
   (without inter-procedural reasoning) there is an associated cost.

What's the associated cost? 

   An example would typically be uses of "maplist" which maps a
   procedure of type A -> B across a list of type list(A) to produce a
   result of type list(B).  When "maplist" is used, the compiler may
   not know enough about maplist to 'realise' that the lexical
   variables captured by the procedure can be implemented cheaply.

How would it matter if the compiler could figure out that "maplist"
doesn't return a procedure? I don't see anything about "maplist" that
would allow lexical variables captured by its procedure argument to be
implemented cheaply. 

-Mark
--

Mark Friedman
MIT Artificial Intelligence Lab
545 Technology Sq.
Cambridge, Ma. 02139

markf@zurich.ai.mit.edu

anw@maths.nott.ac.uk (Dr A. N. Walker) (05/10/91)

In article <2400047@otter.hpl.hp.com> sfk@otter.hpl.hp.com (Steve Knight)
writes:
	[A much more balanced view of Algol!  But he raises a point that
I don't follow:]

>On the topic of procedural results, Andy states:
>> There is no such restriction (as I stated in my previous posting).
>He supplies this program:
	[Algol program deleted]
>This indeed shows that I overstated the restriction (whoops!).  However, the
>salient feature of this program is that the procedure is invoked within the
>scope of "x".  The restriction I was trying to capture should be extended
>to include an invocation of the procedure outside the scope of the variable it
>"captures".  [ML example deleted]

	The procedure that is being delivered is "(REAL a) REAL: x + a",
in which "x" is a real variable declared somewhere else.  In a scoped
language, how can this procedure even be contemplated except while "x"
is still in scope?  The pseudo-instruction "throw away x, it's now gone
out of scope" is incompatible with a later pseudo-instruction "grab the
value stored in x and add it to a".  Thus, the scope of the procedure is
*inevitably* tied up with that of "x".  If you want your procedure to
work, you mustn't tell the computer to throw away "x" first!  But this
seems to be what Steve wants.  Or have I misunderstood something?

	You could always declare "HEAP REAL x" to give "x" global scope,
and then your procedure would work anywhere, even in parts of the program
that couldn't refer directly to "x".  This is, of course, the *default*
situation in languages that do not have scoped variables, but that is a
different argument.

	This "restriction" is not, as Steve initially suggested, something
unorthogonal and peculiar to procedures;  it applies equally to, for example,
pointers, and any other objects that might want to use "x".  I still suspect
that Steve is *really* thinking of something different, namely the effect
of procedure *parameters*, which make functional composition less useful
in Algol than one might hope;  I won't bore readers further with either
the problem or its proposed solutions, which have been well rehearsed in
this newsgroup in the past.

-- 
Andy Walker, Maths Dept., Nott'm Univ., UK.
anw@maths.nott.ac.uk

kers@hplb.hpl.hp.com (Chris Dollin) (05/13/91)

Dr A. N. Walker responds to Steve Knight:

   In article <2400047@otter.hpl.hp.com> sfk@otter.hpl.hp.com (Steve Knight)
   writes:
[deleted]

	   The procedure that is being delivered is "(REAL a) REAL: x + a",
   in which "x" is a real variable declared somewhere else.  In a scoped
   language, how can this procedure even be contemplated except while "x"
   is still in scope?  The pseudo-instruction "throw away x, it's now gone
   out of scope" is incompatible with a later pseudo-instruction "grab the
   value stored in x and add it to a".  Thus, the scope of the procedure is
   *inevitably* tied up with that of "x".  If you want your procedure to
   work, you mustn't tell the computer to throw away "x" first!  But this
   seems to be what Steve wants.  Or have I misunderstood something?

This is what Steve wants; he *hasn't* told the computer to throw anything away.
Just because x would go out of (lexical) scope need not mean that its store has
to be reclaimed.

	   You could always declare "HEAP REAL x" to give "x" global scope,
   and then your procedure would work anywhere, even in parts of the program
   that couldn't refer directly to "x".  

I had this vague recollection that this still wouldn't work in Algol68; you can
give x global scope, but there's some intrinsic local required to refer to it,
and the defining document doesn't let you export *that* out of x's scope. But I
can't give details.

    This is, of course, the *default*
   situation in languages that do not have scoped variables, but that is a
   different argument.

Which languages were you thinking of that ``do not have scoped variables''?
Perhaps (from the context of the discussion) you were intending to denote Lisp,
ML, or Scheme (or even Pop11!), but they all have what *I* call scoped
variables. [In, for contrast, Pascal or C, the question hardly arises; either
there are no nested procedures, or the restrictions on their export are 
draconian.]

   This "restriction" is not, as Steve initially suggested, something
   unorthogonal and peculiar to procedures;  it applies equally to, for
   example, pointers, and any other objects that might want to use "x".

That's true (but it's still a restriction, not a "restriction").

   I still suspect
   that Steve is *really* thinking of something different, namely the effect
   of procedure *parameters*, which make functional composition less useful
   in Algol than one might hope;  I won't bore readers further with either
   the problem or its proposed solutions, which have been well rehearsed in
   this newsgroup in the past.

Isn't the latter (weakness of A68 functional composition) just a consequence of
the former (can't export a procedure outside the scope of one of its free
variables)?
--

Regards, Kers.      | "You're better off  not dreaming of  the things to come;
Caravan:            | Dreams  are always ending  far too soon."

anw@maths.nott.ac.uk (Dr A. N. Walker) (05/16/91)

In article <KERS.91May13165106@cdollin.hpl.hp.com> kers@hplb.hpl.hp.com
(Chris Dollin) writes:
>Dr A. N. Walker responds to Steve Knight:
	[much deleted]
>					    If you want your procedure to
>   work, you mustn't tell the computer to throw away "x" first!  But this
>   seems to be what Steve wants.  Or have I misunderstood something?
>
>This is what Steve wants; he *hasn't* told the computer to throw anything away.

	Yes he has!

>Just because x would go out of (lexical) scope need not mean that its store has
>to be reclaimed.

	From time immemorial, whatever the equivalent of

		BEGIN REAL x; ... END

is in your favourite Algol-based language has meant something like

		start a new stack frame, allocate some *local* space for a
		real variable identified by "x", ..., deallocate the space
		and close the frame.

You don't *have* to throw away the space [and a common optimisation, in
suitable cases, is to move the space to an outer range and economise on
frames], but you can't rely on the space still being usable.  Whether
local variables *should* mean this is a proper subject for debate, but
not the subject that was being debated.  Note that Algol, Pascal, C etc
also have ways of allocating space that is *not* local.  You can debate
whether local or heap should be the default, but not that local space
shouldn't be "thrown away".

>I had this vague recollection that [globals] wouldn't work in Algol68; you can
>give x global scope, but there's some intrinsic local required to refer to it,

	Right, as I acknowledged in my article yesterday.

>Which languages were you thinking of that ``do not have scoped variables''?

	None in particular [sorry!].

>That's true (but it's still a restriction, not a "restriction").

	The restriction was not the "restriction" that Steve thought,
hence the quotes.

>Isn't the latter (weakness of A68 functional composition) just a consequence of
>the former (can't export a procedure outside the scope of one of its free
>variables)?

	Arguably, but I don't think so.  Composition is a reasonable
thing to want, but using a variable out of scope isn't.  (Note: "scope"
in Algol is a technical term, and differs from "range", which seems to
be what some contributors mean by "lexical scope".)  Reasonable wants
shouldn't be satisfied in unreasonable ways.

	It is true that if you try to write composition in the obvious
way, the result has a uselessly narrow scope.  The partial parametrisation
proposals [see Algol Bulletin #37, p24] get around this without
putting unreasonable loads on the implementation, but within the
spirit of the scope rules.  This possibility shows that composition
weaknesses aren't a necessary consequence of tight scope rules.

-- 
Andy Walker, Maths Dept., Nott'm Univ., UK.
anw@maths.nott.ac.uk

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (05/17/91)

In article <1991May15.181833.19502@maths.nott.ac.uk>, anw@maths.nott.ac.uk (Dr A. N. Walker) writes:
> 	From time immemorial, whatever the equivalent of
> 
> 		BEGIN REAL x; ... END
> 
> is in your favourite Algol-based language has meant something like
> 
> 		start a new stack frame, allocate some *local* space for a
> 		real variable identified by "x", ..., deallocate the space
> 		and close the frame.
> 
Let me rephrase that.  The outcome you describe results from the
interaction of two rules:

(1) BEGIN REAL x; ... END
    opens a new environment extending its lexically enclosing environment
    allocates some space 
    adds a binding (x->the new space) to the new environment
    ...
    closes the environment
(2) Space is reclaimed when the last reference to it goes away.
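
[The interaction of the two rules can be made concrete with a toy
environment model; this Python sketch is illustrative only, and the
class and names are invented:

```python
class Env:
    """A lexical environment: a set of bindings plus a link to the
    enclosing environment (rule 1)."""
    def __init__(self, parent=None):
        self.bindings, self.parent = {}, parent

    def define(self, name, value):
        self.bindings[name] = value

    def lookup(self, name):
        env = self
        while env is not None:           # search outwards through the
            if name in env.bindings:     # enclosing environments
                return env.bindings[name]
            env = env.parent
        raise NameError(name)

outer = Env()
outer.define("y", 1.0)
inner = Env(parent=outer)   # "BEGIN REAL x;" opens this environment
inner.define("x", 0.0)      # binding: x -> freshly allocated space
# "END" merely drops the running program's reference to `inner`; any
# value still holding `inner` (a closure, say) keeps it alive, and its
# space is reclaimed only when the last reference goes away (rule 2).
```
]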

Algol 60, Pascal, and (I think) Algol W had no way of creating a
reference to a variable that would outlast the environment.

However, "since time immemorial", the lambda calculus has permitted the
capture of outer variables in results:
	(\x. \y. + y x)(3)
returns a function equivalent to \y. + y 3.
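
[The expression transliterates directly into any language with
first-class functions; in Python, for instance:

```python
# (\x. \y. + y x)(3): applying the outer abstraction captures x = 3
f = (lambda x: (lambda y: y + x))(3)
# f now behaves as \y. + y 3
print(f(4))  # 7
```
]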

Remember that the Algol 68 statement
	( REAL x = 3.2 ; ... )
need not (and in actual implementations may well not) allocate ANY
storage for x.  What goes in the environment is a _binding_ not a
storage location per se.

The thing is that in Common Lisp, Scheme, ML, Haskell, and so on,
*both* the rules above are fully obeyed, *but* they have means for
creating values that can outlast the environment.

It is possible to sort the bindings in a frame so that the first
binding to become "dead" is at the far end, the next binding to
become "dead" is just below it, ... so that space can be reclaimed
from a stack frame _before_ the procedure/block/what-have-you is
exited.  This is standard practice in Prolog compilers, for example,
in order to try to minimize stack size.  What is criterial is not
"is this the end of the scope", but "have we passed the last use
of the variable".
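
[A sketch of that layout discipline, with invented variable names and
instruction positions -- an illustration of the idea, not of any
particular Prolog compiler:

```python
def frame_layout(last_use):
    """Put the longest-lived binding deepest in the frame, so the
    binding that dies first sits at the open end."""
    return sorted(last_use, key=lambda v: last_use[v], reverse=True)

def trim(frame, last_use, pc):
    """Reclaim every top-of-frame binding whose last use has already
    been executed -- before the block itself is exited."""
    while frame and last_use[frame[-1]] < pc:
        frame.pop()
    return frame

uses = {"x": 2, "tmp": 5, "acc": 9}    # instruction index of final use
frame = frame_layout(uses)             # ["acc", "tmp", "x"]
trim(frame, uses, 6)                   # past the last uses of x and tmp
print(frame)                           # ["acc"]
```
]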

> 	Arguably, but I don't think so.  Composition is a reasonable
> thing to want, but using a variable out of scope isn't.

Example:
    BEGIN
	REAL x;
	PROCEDURE store x(y); x := y; 
	REAL PROCEDURE fetch x; fetch x := x;
	...
	BEGIN
	    INTEGER x;
	    ...
	    store x(2.0);
	    ...
	    store x(fetch x + 1.0);
	END;
	...
    END
If the dots are filled in correctly, this can become a legal Algol 60
program (no, you did _not_ have to declare arguments in Algol 60, and
yes, spaces _were_ legal inside identifiers).  Here the inner block
is using the variable x at a point where its name isn't visible.  All
that going out of scope means is that the name becomes invisible and
remains invisible.  Why is it ok to continue to use a variable in one
case but not the other; why is not having a visible name venial in one
case but deadly in the other?

-- 
There is no such thing as a balanced ecology; ecosystems are chaotic.

anw@maths.nott.ac.uk (Dr A. N. Walker) (05/22/91)

In article <5809@goanna.cs.rmit.oz.au> ok@goanna.cs.rmit.oz.au
(Richard A. O'Keefe) writes:

>(1) BEGIN REAL x; ... END
>    opens a new environment extending its lexically enclosing environment
>    allocates some space 
>    adds a binding (x->the new space) to the new environment
>    ...
>    closes the environment
>(2) Space is reclaimed when the last reference to it goes away.

	Well, I've just checked several of my old texts on Algol 60, and
I haven't found anything that even remotely resembles that.  ("Binding"?
"Environment"?  "Reference"?)  I haven't found a text that distinguishes
in any way, let alone adequately, between variables, the storage that
they occupy, and the identifiers that identify them.  I checked my books
on Pascal and C (and others) with similar lack of success.  Plenty of
stuff about how "x" can only be used between the "BEGIN ... END", about
name hiding, sometimes about stacks.  Even some of the books on Algol 68
are distinctly hazy about all this.  [I'm pleased to say that Lindsey &
van der Meulen and Brailsford & Walker both have explanations that still
make sense to me, though I wouldn't have written either of them that way.]

	Using a modern perspective in an attempt to describe what 20-30
year-old languages were thinking about is at best anachronistic, and
almost certainly misleading.

>However, "since time immemorial", the lambda calculus has permitted the
>capture of outer variables in results:
>	(\x. \y. + y x)(3)
>returns a function equivalent to \y. + y 3.

	Irrelevant to the semantics of Algol-like languages.  Maths has
always been cavalier about scope and range of notation.

>Remember that the Algol 68 statement
>	( REAL x = 3.2 ; ... )
>need not (and in actual implementations may well not) allocate ANY
>storage for x.

	Yes, but not the example given.  *My* example has an equivalent
in every Algol-based language I know of, and is generally understood
(misunderstood, if you insist) to mean what I said.

[Algol 60 example deleted]
>						Here the inner block
>is using the variable x at a point where its name isn't visible.  All
>that going out of scope means is that the name becomes invisible and
>remains invisible.  Why is it ok to continue to use a variable in one
>case but not the other; why is not having a visible name venial in one
>case but deadly in the other?

	Because there is a difference between the identifier going out
of range [which you call "scope", and some contributors have called
"lexical scope"] and the variable it identifies going out of scope
["dynamical scope"].  Your example has "x" out of reach, but within
range and within scope, which is fine.  More interesting examples have
variables within scope after an associated identifier is out of range.

In a related posting, <KERS.91May17090215@cdollin.hpl.hp.com>,
kers@hplb.hpl.hp.com (Chris Dollin) writes:

>				       *I'd* say that the (equivalent of) the
>above code means ``once you hit END, I no longer wish to be able to use the
>name "x" to refer to ... er ... whatever it did refer to (ie, the variable, or
>the location, or the value, etc)''.

	Yes, I might say that too, but when [historically, this is not an
attempt to discover your age!] would you *first* have said that?  Before
Algol 68, there was little distinction made between "x" the identifier
and "x" the variable inside the computer [at least in the books I read].

	Even a decade later, I read in "The C Programming Language --
Reference Manual":  "... two attributes of the identifier:  its storage
class and its type. ... the type determines the meaning of the values
found in the identifier's storage.  ... Automatic variables are local
to each invocation of a block, and are discarded upon exit from the
block" [p3], "If the [storage class]-specifier is missing from a function,
it is taken to be auto inside a function" [p11], "If any of the identifiers
in the declaration-list were previously declared, the outer declaration
is pushed down [! ANW] for the duration of the block" [p16].  Pretty much
what I said!

	Again judging from books on my shelves, few other authors of books
on C, Pascal or other languages are as concise or as accurate as Dennis
Ritchie.  My conclusion is that the vast majority of authors, and hence
of lecturers and students, and hence of practitioners, espouse a very
simplistic view of scopes, lifetimes, extents, ranges, call them what
you will.  Contributors to c.l.m are self-selected to be (relatively)
knowledgeable about such things.

>Yes, it occurred to me that there was a terminological problem, and that Algol68
>(for some arcane reason) used ``scope'' to mean what was usually called
>``extent'' if it was called anything, and then had to introduce a new word to
>mean what ``scope'' meant.

	I asked a few friends in the local CS Dept;  none of them had any
idea what I might mean by "extent".  Their definitions of "scope" were as
garbled as you might expect from the above.  They fared better when I asked
about "lexical/dynamical scope".  Having been bitten once or twice in the
early 70's, I have since tried [not always successfully] to use "scope",
"range", "reach", "environ" and other technical words in the Algol sense,
to explain [eg "range" == "lexical scope"] the notation when first used,
and to be prepared to interpret other usages.  That the notation is not
more widely understood seems to me to be just another example of an Algol
feature that was ahead of its time.

>							 But I think the other
>contributors mean ``lexical scope'', aka ``range'', by scope. Certainly that's
>what Steve means [...]

	That's what I assumed Steve meant first time, and what he denied.

-- 
Andy Walker, Maths Dept., Nott'm Univ., UK.
anw@maths.nott.ac.uk