[comp.arch] Algol68

mo@messy.bellcore.com (Michael O'Dell) (03/11/91)

Learning Algol68 as a student did wonders for understanding other
languages, in particular, declarations in C.
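
A small made-up example of the sort of thing meant here: the C declaration
below has to be unwound from the identifier outwards, whereas an Algol 68
mode is written in the same left-to-right order you would say it in English.

    /* table is an array of 3 pointers to functions taking no arguments
       and returning a pointer to char -- in C you work that out from
       the inside; in Algol 68 the mode is spelled in that same order. */
    char *(*table[3])(void);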

And we have suffered much from not having better insight into some
of the systems done in Europe.

	-Mike

yfcw14@castle.ed.ac.uk (K P Donnelly) (03/14/91)

mo@messy.bellcore.com (Michael O'Dell) writes:

>Learning Algol68 as a student did wonders for understanding other
>languages, in particular, declarations in C.

Agreed!!  Why are we not using Algol68 now, when it is so much better
than other languages?  Is it, as I suspect, that people did not realise
how good it was?  Is it because it was bypassed by Pascal, which was
designed as a teaching language but ended up being used for lots of
things it was never intended for?  Is Algol68 being used anywhere now?

   Kevin Donnelly

rcd@ico.isc.com (Dick Dunn) (03/22/91)

yfcw14@castle.ed.ac.uk (K P Donnelly) writes:
> mo@messy.bellcore.com (Michael O'Dell) writes:
> >Learning Algol68 as a student did wonders for understanding other
> >languages, in particular, declarations in C.

I felt learning about Algol68 (can't say I learned it; we didn't have a
working compiler) did a lot for conceptual understanding.  It was one of
the cleanest languages in terms of [amount of power] relative to [number of
fundamental concepts] I've ever seen.  SO...

> Agreed!!  Why are we not using Algol68 now when it is so much better
> than other languages.  Is it, as I suspect, that people did not realise
> how good it was?...

That might be true, in a very real sense and for a very good reason:  It
was too hard to figure it out!  There was no K&R or Jensen&Wirth for it.
There was the _Informal_Introduction_..., which was a wonderful book (in
spite of the table of contents, which was a clumsy nuisance), but it didn't
really answer serious questions or lay down the law.  Then there was the
_Report_, which was nearly inscrutable, and the _Revised_Report_, which was
worse.  It took far too much initial study to be able to understand what
the report was saying, and it took far too much general familiarity to use
it to answer a question.  (Particularly in the revised report, chasing down
all the myriad productions sent you all over the language...and the worst
of it was that the simple concept "this is not allowed" was expressed as
giving you something to search for, which didn't exist!  If it came down
to MURF MORFETY FOOBLE REFFOOBLETY, and you couldn't find a definition for
that, it wasn't legal.)

>...Is it because it was bypassed by Pascal, which was
> designed as a teaching language but ended up being used for lots of
> things it was never intended for?...

Yes, another dose of reality:  Pascal was simple and it was implemented.
The implementation was widely available.  Therefore people used it.  It had
the necessary initial kick to start the positive-feedback loop.
-- 
Dick Dunn     rcd@ico.isc.com -or- ico!rcd       Boulder, CO   (303)449-2870
   ...Relax...don't worry...have a homebrew.

mcdonald@aries.scs.uiuc.edu (Doug McDonald) (03/22/91)

In article <1991Mar22.013748.4944@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:
>
>I felt learning about Algol68 (can't say I learned it; we didn't have a
>working compiler) did a lot for conceptual understanding.  It was one of
>the cleanest languages in terms of [amount of power] relative to [number of
>fundamental concepts] I've ever seen.  SO...
>
>> Agreed!!  Why are we not using Algol68 now when it is so much better
>> than other languages.  Is it, as I suspect, that people did not realise
>> how good it was?...
>

For a VERY simple reason: Real Programmers (TM) are not masochists
and seldom are Real Typists (TM) and simply HATE to type for the
most common construct in any program the loathsome, redundant,
hard to type, piece of shit:



   :=




Doug McDonald

jones@pyrite.cs.uiowa.edu (Douglas W. Jones,201H MLH,3193350740,3193382879) (03/23/91)

From article <1991Mar22.153944.1096@ux1.cso.uiuc.edu>,
by mcdonald@aries.scs.uiuc.edu (Doug McDonald):
> 
> For a VERY simple reason: Real Programmers (TM) are not masochists
> and seldom are Real Typists (TM) and simply HATE to type for the
> most common construct in any program the loathsome, redundant,
> hard to type, piece of shit:
> 
>    :=

This kind of flame is unworthy of response, but for one serious point.
Aversion to typing := doesn't explain why Pascal was so much more
successful than Algol 68, both of which have :=.

Besides, Real Programmers (TM) also hate == for the very same reason,
so you can't win.
					Doug Jones
					jones@cs.uiowa.edu

peter@ficc.ferranti.com (Peter da Silva) (03/23/91)

In article <1991Mar22.153944.1096@ux1.cso.uiuc.edu> mcdonald@aries.scs.uiuc.edu (Doug McDonald) writes:
> For a VERY simple reason: Real Programmers (TM) are not masochists
> and seldom are Real Typists (TM) and simply HATE to type for the
> most common construct in any program the loathsome, redundant,
> hard to type, piece of shit:

>    :=

Funny, that's one of the two syntactical changes I'd love to make to C.

The other is to change pointer indirection to a postfix operator.

These two features are responsible for the vast majority of coding errors
in C. Wouldn't you prefer:

	void signal()^();
To:
	void (*signal())();

?????
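
For the record, here is the full ANSI prototype of signal(), plus the
typedef most people use to keep it readable (just a sketch; the name
"handler_t" is invented for illustration):

    /* signal() takes an int and a pointer to a handler function, and
       returns the previous handler -- written out long-hand:          */
    void (*signal(int sig, void (*handler)(int)))(int);

    /* The usual escape hatch is a typedef:                            */
    typedef void (*handler_t)(int);
    handler_t signal(int sig, handler_t handler);
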
-- 
Peter da Silva.  `-_-'  peter@ferranti.com
+1 713 274 5180.  'U`  "Have you hugged your wolf today?"

hasan@ut-emx.uucp (David A. Hasan) (03/23/91)

In article <1991Mar22.013748.4944@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:

[ in response to a discussion of the demise of Algol68... ]

>That might be true, in a very real sense and for a very good reason:  It
>was too hard to figure it out!  There was no K&R or Jensen&Wirth for it.
>There was the _Informal_Introduction_..., which was a wonderful book (in
>spite of the table of contents, which was a clumsy nuisance), but it didn't
>really answer serious questions or lay down the law.  Then there was the
>_Report_, which was nearly inscrutable, and the _Revised_Report_, which was
>worse.  It took far too much initial study to be able to understand what
>the report was saying, and it took far too much general familiarity to use
>it to answer a question.

hmmm...sounds *just* like my frustrations in getting up to
speed in Ada.  The Ada Language Reference Manual is not
exactly an "easy" reference when you're trying to figure out
why something doesn't compile.  And Ada is *full* of
gotchas... at least Algol68 was orthogonal.

-- 
 |   David A. Hasan
 |   hasan@emx.utexas.edu 

chl@cs.man.ac.uk (Charles Lindsey) (03/25/91)

In <1991Mar22.013748.4944@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:

>it to answer a question.  (Particularly in the revised report, chasing down
>all the myriad productions sent you all over the language...and the worst
>of it was that the simple concept "this is not allowed" was expressed as
>giving you something to search for, which didn't exist!  If it came down
>to MURF MORFETY FOOBLE REFFOOBLETY, and you couldn't find a definition for
>that, it wasn't legal.)

Not at all. We carefully arranged for the production rules to be cross
referenced (both forwards and backwards). So if you wanted to know where or
whether some rule was defined, the cross reference pointed you to a finite
number of places where it could be, and even warned you if there was a blind
alley.

I know of no subsequent language definition which has cross referenced its
grammar so thoroughly.

jejones@mcrware.UUCP (James Jones) (03/25/91)

In article <1991Mar22.013748.4944@ico.isc.com> rcd@ico.isc.com (Dick Dunn) writes:
>That might be true, in a very real sense and for a very good reason:  It
>was too hard to figure it out!  There was no K&R or Jensen&Wirth for it.
>There was the _Informal_Introduction_..., which was a wonderful book (in
>spite of the table of contents, which was a clumsy nuisance), but it didn't
>really answer serious questions or lay down the law.

Well...I wish I remembered who it was that pointed out in SIGPLAN Notices
some time back that the easy-to-read standards (Pascal given as case in
point, but nowadays the same can be said of C) were also full of holes,
and that attempts to make them rigorous also made them much longer and
less legible.   (Said person also made the same point being made in the
original posting, that a straightforward standard, even if leaky, encourages
implementors.)

	James Jones

firth@sei.cmu.edu (Robert Firth) (03/26/91)

In article <1991Mar22.153944.1096@ux1.cso.uiuc.edu> mcdonald@aries.scs.uiuc.edu (Doug McDonald) writes:

>For a VERY simple reason: Real Programmers (TM) are not masochists
>and seldom are Real Typists (TM) and simply HATE to type for the
>most common construct in any program the loathsome, redundant,...

>   :=

Well said!  However, that is only the second most common loathsome,
redundant &c.  The most common, which should truly be hated since
any decent language design can get rid of it, is the vile, obnoxious

	;

bobd@zaphod.UUCP (Bob Dalgleish) (03/27/91)

In article <9168@castle.ed.ac.uk> yfcw14@castle.ed.ac.uk (K P Donnelly) writes:
>mo@messy.bellcore.com (Michael O'Dell) writes:

>>Learning Algol68 as a student did wonders for understanding other
>>languages, in particular, declarations in C.
Agreed!  The formalisms introduced with A68 are fundamental to
understanding any procedural (von Neumann) language.

>Agreed!!  Why are we not using Algol68 now when it is so much better
>than other languages.  
It was also larger than other languages.  The only full compilers that I
knew of were larger than the IBM PL/I level G compilers, even though
they were a lot faster.
>                       Is it, as I suspect, that people did not realise
>how good it was?  
Partly - the references for the language were so abstruse that it was
very hard to learn the basics.
>                  Is it because it was bypassed by Pascal, which was
>designed as a teaching language but ended up being used for lots of
>things it was never intended for?  
Pascal was designed by Wirth after he got disgusted with the rapidly
growing size and complexity of the Algol68 effort.  I gave up in disgust
when the I/O subsystem became overspecified and bloated.  The other
thing that gave me the heebie-jeebies was the use of _skip_ and _nil_ in
initialization positions -- the values actually assigned could be random
garbage that would not cause UNINITIALIZED VARIABLE errors, but would
make things not work.  Also the horror stories that my friends who were
implementing a full compiler told me made my hair stand on end.  In
fact, the stories are very similar to the ones I heard about Ada.  The
overloaded operators, partial parametrization, and array slices explored
the very horizons of computer science capability.
>                                   Is Algol68 being used anywhere now?
I will be snide and say that the spirit of Algol68 now lives on
in the Ada programming language.  Many of the features are present in
Ada, some of them emasculated (procedure variables are gone), some of
them horribly present (overloaded operators, tasking); some of the nice
features are there as well.
>
>   Kevin Donnelly


-- 
-- * * * Remember: I before E except after DALGL * * *--
Bob Dalgleish		bobd@zaphod.UUCP

rcd@ico.isc.com (Dick Dunn) (03/28/91)

jejones@mcrware.UUCP (James Jones) writes:
> rcd@ico.isc.com (me) writes:
> > [stuff about Algol 68 having problems because no readable definition]
> Well...I wish I remembered who it was that pointed out in SIGPLAN Notices
> some time back that the easy-to-read standards (Pascal given as case in
> point, but nowadays the same can be said of C) were also full of holes,
> and that attempts to make them rigorous also made them much longer and
> less legible...

While it is true that there are holes in both J&W and K&R, the holes are
nowhere near as numerous or as big as people like to make them out to be.
One reason you need the big, clunky standards is to avoid the tort-lawyer
syndrome--where people intentionally go looking for holes in the language
definition, instead of reading it intelligently.  (I suppose one could also
call this the Weirdnix syndrome.:-)  I've spent a fair amount of time down
inside Pascal compilers, and a little bit of time inside C compilers, plus
a lot of time using both, and I've found very few questions that couldn't
be answered by a careful, honest reading of J&W or K&R, respectively.
When you place the reader in an adversarial role to the author, you make it
very hard to produce anything readable!  Writing is communication; the
typical standard is written with the assumption that the reader will
attempt to misunderstand (i.e., reject communication) whenever possible.

Also, although the standards process supposedly attempts to produce
rigorous definitions, it does so by an incredibly expensive, slow, stupid,
and roundabout method.  You don't get good designs out of committees;
there's certainly no reason to expect good standards out of them!
Standards committees usually get a few good people (perhaps 20% of a typical
committee, ultimately producing 80% of the work).  Then they get filled up
with people who have axes to grind, peter-principled mid-level-managers
who get to go to meetings as a job perk 'cause they like to travel,
deadwood who get sent to meetings so that the folks back home can get some
work done while they're gone, and so on.  In the name of "fairness", every
possible conflict within the industry is represented from both sides,
maximizing the number of issues which require extensive debate and
minimizing the likelihood that the end result will be cohesive.  In the end,
some important issues are decided based upon which committee members are
the stubbornest.

I'd bet that a usable C standard could have been produced by gathering a
small committee to pick apart K&R, looking for problem areas and loopholes,
submitting a short problem list to K and/or R, contracting with them or the
Bell Labs for their time to produce a revision.  It would have cost 1/10 as
much and produced a better result in a fraction of the time.  The standards
process carries an incredible lot of baggage.  (Yes, I know the process I'm
suggesting doesn't meet the rules of ANSI or ISO or even IEEE...and I don't
care.  I'm only trying to hypothesize something that could work better than
the sorry mess we've got today.)

So...stripped of my diatribe, the point is that YES, cleaning up a language
definition is going to make it slightly less accessible--but it need not
become so much less accessible as the standards process usually achieves.

>...(Said person also made the same point being made in the
> original posting, that a straightforward standard, even if leaky, encourages
> implementors.)

Right!  Not only do you get more implementors; you get a lot more users!
Remember, an implementor has much more incentive to dig through details and
formalisms.

Coming back to something related to my point about Algol 68, I'll say that
you have the best chance of getting a bunch of good implementations and a
real user community if you can get the implementations done and out into
the world before a standard is inflicted on the language!
-- 
Dick Dunn     rcd@ico.isc.com -or- ico!rcd       Boulder, CO   (303)449-2870
   The Official Colorado State Vegetable is now the "state legislator".

paul@taniwha.UUCP (Paul Campbell) (03/29/91)

In article <4202@zaphod.UUCP> bobd@zaphod.UUCP (Bob Dalgleish) writes:
>Pascal was designed by Wirth after he got disgusted with the rapidly
>growing size and complexity of the Algol68 effort.  I gave up in disgust

I did an Algol68 implementation as my MSc project, almost 10 years ago
now (I wish I'd had C to implement it in - it might still be useable :-).
I always had the idea that what Wirth did was implement all the 'easy'
stuff in Pascal.

>implementing a full compiler told me made my hair stand on end.  In
>fact, the stories are very similar to the ones I heard about Ada.  The
>overloaded operators, partial parametrization, and array slices explored
>the very horizons of computer science capability.

Actually most of this stuff wasn't that hard to implement.  What I found
really interesting was reading the papers being written by the people
implementing Ada - they were solving many of the same problems (and making
many of the same mistakes) that the Algol68 implementors had solved a
decade before.  I think that a lot was learned from Algol68 in the field
of language design and a lot was missed in actual implementation.

In my opinion the major thing 'wrong' with Algol68 is that it is not easy to
approach with modern compiler tools - modern language designers tend to
choose grammars that are LR(0) (or SLR/whatever) because they want them to
go through yacc.  Algol68 really has 2 grammars embedded inside each other
(expressions and parentheses); you can't parse one until you've parsed
the other.

The report(s) are hard to read; you keep getting that same sort of 'aha!'
reaction you get in v6 unix when you finally figure out what's meant by
"you're not expected to understand this".

By the way - for the record, if you ever have to implement operators
with changing priorities (a la A68/Ada) it's easy: create a production
for expressions:

	E := E OP E

This of course gives a shift/reduce conflict on OP - just have the
parser generator note it as such, and at run time compare the priorities
of the stacked OP and the one about to be shifted; use this information
to decide whether to shift or reduce (i.e. resolve the conflict at run-time).
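
Spelled out, that looks something like the following (a made-up C sketch,
hand-rolled rather than yacc-driven, and not anyone's actual compiler;
prio_of() stands in for the table an Algol 68 front end would fill in
from PRIO declarations):

    /* Run-time priority resolution: keep a stack of pending operators
       and, before shifting a new one, reduce while the stacked
       operator's priority is at least as high.                        */
    #include <stdio.h>

    static int prio_of(char op)                /* run-time lookup      */
    {
        switch (op) {
        case '+': case '-': return 6;          /* A68 default prios    */
        case '*': case '/': return 7;
        default:            return 0;
        }
    }

    static double apply(char op, double l, double r)
    {
        switch (op) {
        case '+': return l + r;
        case '-': return l - r;
        case '*': return l * r;
        default:  return l / r;
        }
    }

    /* Parse "d op d op d ..." where each d is a single digit. */
    static double parse(const char *s)
    {
        double vals[64]; char ops[64];
        int nv = 0, no = 0;

        vals[nv++] = *s++ - '0';
        while (*s) {
            char op = *s++;
            /* reduce while the stacked OP binds at least as tightly */
            while (no > 0 && prio_of(ops[no - 1]) >= prio_of(op)) {
                double r = vals[--nv];
                double l = vals[--nv];
                vals[nv++] = apply(ops[--no], l, r);
            }
            ops[no++] = op;                    /* otherwise shift it   */
            vals[nv++] = *s++ - '0';
        }
        while (no > 0) {                       /* final reductions     */
            double r = vals[--nv];
            double l = vals[--nv];
            vals[nv++] = apply(ops[--no], l, r);
        }
        return vals[0];
    }

    int main(void)
    {
        printf("%g\n", parse("1+2*3-4"));      /* prints 3 */
        return 0;
    }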

	Paul
-- 
Paul Campbell    UUCP: ..!mtxinu!taniwha!paul     AppleLink: CAMPBELL.P

"But don't we all deserve.
 More than a kinder and gentler fuck" - Two Nice Girls, "For the Inauguration"

nmm@cl.cam.ac.uk (Nick Maclaren) (03/30/91)

Dick Dunn writes:
< While it is true that there are holes in both J&W and K&R, the holes are
< nowhere near as numerous or as big as people like to make them out to be.
< ...

I agree that they are no bigger, but they are MUCH more numerous!

As one of the half-dozen or so people in the world who has designed and
implemented a C run-time system for a totally un-UNIX operating system
(IBM MVS), I know something about this area.  I tried looking at K&R to
find a description of what the UNIX libraries do, in order to resolve
some of the ambiguities in the ANSI standard.  Yeah, well ....

The ONLY reliable definition of the C language is the compiler, and
there is NO reliable definition of the library.  The available UNIX
implementations are subtly different, and the non-UNIX ones are often
very different.  I ended up deciding that it didn't matter too much
what I did in the problem areas, because the UNIX libraries were either
inconsistent or just plain buggy.

You may say that the library is not the language, but practical programmers
might disagree.  In any case, I have found such discrepancies in the
language proper, as well.  [I am also very much into writing portable code,
where portable means to any system/compiler you care to name.]

Come back the Algol 68 Report, all is forgiven!  I speak as an implementor.

Nick Maclaren
University of Cambridge Computer Laboratory
nmm@cl.cam.ac.uk

Someone else's quote:  "From an MVS viewpoint, it is difficult to
distinguish UNIX and MS-DOS".

richard@aiai.ed.ac.uk (Richard Tobin) (04/01/91)

In article <1991Mar29.222133.2819@cl.cam.ac.uk> nmm@cl.cam.ac.uk (Nick Maclaren) writes:
>As one of the half-dozen or so people in the world who has designed and
>implemented a C run-time system for a totally un-UNIX operating system
>(IBM MVS), I know something about this area.  I tried looking at K&R to
>find a description of what the UNIX libraries do, in order to resolve
>some of the ambiguities in the ANSI standard.  Yeah, well ....

Did you report your problems to ANSI or ISO?  Did you get any response?

It would be very useful if the problems you found were made public,
so that the rest of us don't have to re-find them ourselves.

>Someone else's quote:  "From an MVS viewpoint, it is difficult to
>distinguish UNIX and MS-DOS".

That's Richard O'Keefe's.  To me, it suggests serious problems with
an MVS viewpoint.

-- Richard

-- 
Richard Tobin,                       JANET: R.Tobin@uk.ac.ed             
AI Applications Institute,           ARPA:  R.Tobin%uk.ac.ed@nsfnet-relay.ac.uk
Edinburgh University.                UUCP:  ...!ukc!ed.ac.uk!R.Tobin

jmaynard@thesis1.med.uth.tmc.edu (Jay Maynard) (04/02/91)

In article <4394@skye.ed.ac.uk> richard@aiai.UUCP (Richard Tobin) writes:
>In article <1991Mar29.222133.2819@cl.cam.ac.uk> nmm@cl.cam.ac.uk (Nick Maclaren) writes:
>>Someone else's quote:  "From an MVS viewpoint, it is difficult to
>>distinguish UNIX and MS-DOS".
>That's Richard O'Keefe's.  To me, it suggests serious problems with
>an MVS viewpoint.

No...all it says is that MVS is *different* from the interactively-oriented
philosophy inherent in both Unix and MS-DOS. The differences between MVS
and either are far, far greater than the differences between the two. There's
a simple reason: just as a 3090-600J is designed to do a different kind of
work from a Sequent, so MVS is designed to do a different kind of work from
Unix.

...Jay (a senior MVS systems programmer in real life)
-- 
Jay Maynard, EMT-P, K5ZC, PP-ASEL | Never ascribe to malice that which can
jmaynard@thesis1.med.uth.tmc.edu  | adequately be explained by stupidity.
  "You can even run GNUemacs under X-windows without paging if you allow
          about 32MB per user." -- Bill Davidsen  "Oink!" -- me

bobd@zaphod.UUCP (Bob Dalgleish) (04/06/91)

In article <801@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:
>In article <4202@zaphod.UUCP> bobd@zaphod.UUCP (Bob Dalgleish) writes:
>>Pascal was designed by Wirth after he got disgusted with the rapidly
>>growing size and complexity of the Algol68 effort.  I gave up in disgust
>I did an Algol68 implementation as my MSc project, almost 10 years ago
I'm impressed.
>now (I wish I'd had C to implement it in - it might still be useable :-)
>I always had the idea that what Wirth did was implement all the 'easy'
>stuff in Pascal.
How do you mean?

I certainly know that the use of VW (van Wijngaarden) grammars went to
various people's heads, and that they decided it might be easier to
implement some of the runtime in VW grammars.  (I understand that VW
himself completed a specification for the run-time in his two-level
grammar; it was at least as hard to read as the syntax).

>>overloaded operators, partial parametrization, and array slices explored

>Actually most of this stuff wasn't that hard to implement, what I found
>really interesting was reading the papers being written by the people
>implementing Ada - they were solving many of the same problems (and making
>many of the same mistakes) that the Algol68 implementors had solved a
>decade before - I think that a lot was learned from Algol68 in the field
>of language design and a lot was missed in actual implementation.
Agreed -- there is nothing hard about implementing an NP-hard algorithm;
it just takes a lot of nerve to explain to a programmer that his
three-hundred line program will take until next Tuesday to compile
because he used a seemingly innocuous form of overloading.  The
pathological cases cannot be readily recognized by the compiler until
you run out of CPU time.

>In my opinion the major thing 'wrong' with Algol68 is that it is not easy to
>approach with modern compiler tools - modern language designers tend to
>choose grammars that are LR(0) (or SLR/whatever) because they want them to
>go through yacc.  Algol68 really has 2 grammars embedded inside each other
>(expressions and parentheses); you can't parse one until you've parsed
>the other.
Hank Boehm at the University of Alberta in Edmonton put together a neat
method of recognition.  He constructed two SLR(0) grammars, one to be
applied left to right, and the other to be applied by reading the source
backwards.  When the I/O system was added afterwards, his parser failed.
I also saw some work on attribute grammars, which were well over my
head.

>The report(s) are hard to read, you keep getting that same sort of 'aha!'
>reaction you get in v6 unix when you finally figure out what's meant by
>"you're not expected to understand this".
Much like a religious book!  However, I never really got comfortable
with the semidecidability of language construction: illegal constructs
would often turn into productions which had no valid reduction (excuse
the hashed metaphor).
-- 
-- * * * Remember: I before E except after DALGL * * *--
Bob Dalgleish		bobd@zaphod.UUCP

paul@taniwha.UUCP (Paul Campbell) (04/07/91)

In article <4217@zaphod.UUCP> bobd@zaphod.UUCP (Bob Dalgleish) writes:
>In article <801@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:

>>I always had the idea that what Wirth did was implement all the 'easy'
>>stuff in Pascal.

>How do you mean?

Well, he implemented the basic type structures (but not full pointers or
unions), most of the flow control (but only as voids) etc etc

I didn't mean this in a bad way - I think that it was probably a good choice
at the time - and the results are obvious - Pascal caught on because it
is (relatively) easy to implement and a portable compiler is available ...

>Hank Boehm at the University of Alberta in Edmonton put together a neat
>method of recognition.  He constructed two slr(0) grammars, one to be
>applied left to right, and the other to be applied by reading the source
>backwards.  When the I/O system was added afterwards, his parser failed.
>I also saw some work on attribute grammars, which were well over my
>head.

I did something similar, except I did two passes: the first parsed the
parentheses (control flow and mode/operator/prio decs) using recursive
descent - the result was a tree annotated with the unparsed expressions
in it.  The second pass traversed the tree and used an LR(0) parser that
was recursively run on the unparsed nodes - the result being a tree that was
walked to generate code.
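
In outline, the intermediate form looked something like this (an invented
C sketch of the idea, not the original code; the names are made up):

    /* Pass 1 (recursive descent) builds nodes for the bracketed
       structure and leaves each expression as a flat run of tokens;
       pass 2 runs the expression parser on every UNPARSED leaf and
       replaces it with a proper subtree.                              */
    struct token;                       /* whatever the lexer yields   */

    enum kind { SERIAL, CONDITIONAL, LOOP, UNPARSED, EXPR };

    struct node {
        enum kind     kind;
        struct node  *child[3];         /* filled in by pass 1         */
        struct token *first, *last;     /* token run while UNPARSED    */
    };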

	Paul Campbell

PS: for those who don't know, even the revised report contains ambiguity;
the most common example is:

	union([]int , struct (int a, b, c)) fred = (1,2,3);

-- 
Paul Campbell    UUCP: ..!mtxinu!taniwha!paul     AppleLink: CAMPBELL.P

"But don't we all deserve.
 More than a kinder and gentler fuck" - Two Nice Girls, "For the Inauguration"

dik@cwi.nl (Dik T. Winter) (04/08/91)

In article <809@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:
 > PS: for those who don't know even the revised report contains ambiguity,
 > the most common example is:
 > 
 > 	union([]int , struct (int a, b, c)) fred = (1,2,3);
 > 
I know there are even outright errors in the revised report (if I remember
right, somewhere in the transput section there is a heap declaration that
is not allowed), but as far as I know the above is disallowed.  I would have
to check, because I have no RR here at home, but the CDC Algol 68 compiler
(pretty good and extremely complete) has the following to say:
         1          (
         2          'UNION'([]'INT','STRUCT'('INT'A,B,C)) FRED = (1,2,3);
         3          PRINT(FRED)
         4          )
 ERRORS DETECTED DURING THE MODE CHECKING
 ***     2 F     MODE-ERROR IN IDENTITY DECLARATION FOR FRED
which means that there is no coercion that will coerce the right-hand side
to the proper mode.
--
dik t. winter, cwi, amsterdam, nederland
dik@cwi.nl

chl@cs.man.ac.uk (Charles Lindsey) (04/08/91)

In <809@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:

>PS: for those who don't know even the revised report contains ambiguity,
>the most common example is:

>	union([]int , struct (int a, b, c)) fred = (1,2,3);

Not so. For this to be allowed, "(1,2,3)" would have to be a
	strong-UNITED-unit		[4.4.1.c,d,5.2.1.c]
where 'UNITED' is the particular union in question.
This, in turn, would have to be a
	strong-UNITED-ENCLOSED-clause	[3.2.1.d,5.1.A,B,C,D]
or, more specifically, a
	strong-UNITED-collateral-clause	[1.2.2.A]
BUT, there is no such animal		[3.3.1.d,e]

The point is that ENCLOSED-clauses cannot be coerced (you can only coerce
the things inside them).

bd@ZYX.SE (Bjorn Danielsson) (04/09/91)

In article <809@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:
> [text deleted]
>
>PS: for those who don't know even the revised report contains ambiguity,
>the most common example is:
>
>	union([]int , struct (int a, b, c)) fred = (1,2,3);
>

That example isn't a legal Algol-68 declaration: the source (right-hand part)
must be the coercee of a "united to" coercion in order to be acceptable as a
union value for the identity-declaration. But the syntactic "sort" of a
"united to" coercend is "meek" according to section 6.4.1.a, and a collateral
clause like "(1,2,3)" is only allowed in a "strong" position according to
section 3.3.1.

I'm sure there are bugs in the revised report, but that wasn't one of them.

(My source is the Springer-Verlag edition from 1976, ISBN 3-540-07592-5)
-- 
Bjorn Danielsson, ZYX Sweden AB, +46 (8) 665 32 05, bd@zyx.SE

paul@taniwha.UUCP (Paul Campbell) (04/10/91)

In article <1991Apr08.235109.11628@ZYX.SE> bd@ZYX.SE (Bjorn Danielsson) writes:
>In article <809@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:
>> [text deleted]
>>
>>	union([]int , struct (int a, b, c)) fred = (1,2,3);
>That example isn't a legal Algol-68 declaration: the source (right-hand part)
>must be the coercee of a "united to" coercion in order to be acceptable as a
>union value for the identity-declaration. But the syntactic "sort" of a
>"united to" coercend is "meek" according to section 6.4.1.a, and a collateral
>clause like "(1,2,3)" is only allowed in a "strong" position according to
>section 3.3.1.

But STRONG is defined to be (among other things) FIRM (6.1.1A),
which is defined to be (among others) MEEK (6.1.1B).

An easier way to understand it is on page 93 where it states (in the
explanation): 'In a strong position all 6 coercions can occur'.

Basically the idea is that the coercions have to be unambiguous; the strong,
meek, etc. positions describe places where a coercion might be ambiguous
and therefore the compiler wouldn't know what to do.  The reason the above
problem occurs is that there are two forms of the collateral clause
(3.3.1d and 3.3.1e) which can in some circumstances be ambiguous.

Gee, it's a long time since I delved into the Report (urgh - 10 years, so
I'm probably a bit rusty).

In a strong position the compiler always knows which type it's coercing to.
In a firm position it has a list to choose from (e.g. the parameters from a
list of overloaded operators).  Meek ones are limited mainly because the
required type is known to be very simple because of its syntactic position
(i.e. an array subscript or the selection expression in an if statement).
Soft positions are the place where ambiguity is most to be avoided, since
the resulting types are used to create a strong position for another
expression (i.e. a in a := b).

All in all I hate coercions; they are a real pain in the butt.  Give me
the C 'a = *b' any day: not only is it easier to compile, typecheck, etc.,
but the programmer can see what's going on easily.  Note that most languages
[except a few like Bliss] still have one dereference coercion in assignments
(so the C 'a = b' is really the Bliss 'a = .b').  Other places
where coercions pop up are in float<->int changes and Pascal calls
of functions without parameters (C requires an explicit call - hooray!).
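
To make that concrete, a small made-up C fragment showing which coercions
C keeps and which it makes you write out:

    #include <stdio.h>

    static int counter(void) { static int n; return ++n; }

    int main(void)
    {
        int     i = 3;
        int    *p = &i;
        double  d;

        d = i;           /* implicit coercion: int widened to double  */
        i = *p;          /* dereference is explicit, unlike Algol 68  */
        i = counter();   /* the call needs (), unlike Pascal          */
        printf("%d %g\n", i, d);
        return 0;
    }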

	Paul

(Followups to comp.lang.misc)
-- 
Paul Campbell    UUCP: ..!mtxinu!taniwha!paul     AppleLink: CAMPBELL.P

"But don't we all deserve.
 More than a kinder and gentler fuck" - Two Nice Girls, "For the Inauguration"