[comp.lang.misc] The Universal Language

mikeb@coho.ee.ubc.ca (Mike Bolotski) (08/06/90)

In article <23893@megaron.cs.arizona.edu>, gudeman@cs.arizona.edu (David Gudeman) writes:
> Only ridiculous if you have a limited imagination.  It is perfectly
> plausible that it is possible to design a ``universal'' language of
> some sort, that makes it unnecessary to use other languages.  Such a

This is an interesting claim.  Of course, it is difficult to dispute
statements that begin with "it is plausible", but I will attempt to do so.

For one, I haven't seen any supporting arguments for the possibility
existence of the universal language (handwaving excluded).

Here is a counterargument.  Workers in different fields solve problems
using entirely different languages.  Mathematics is one such language,
and it includes many "sublanguages" -- continuous variable notation,
matrix notation, logic, etc.   Circuit diagrams are another language.
English, with terms specialized to each area,  is another language, used 
almost exclusively in non-technical fields.

A "universal" programming language is in a sense equivalent to the claim
that a single language is appropriate for all areas of study.  

> language will probably not be small, and it may be divided into many
> sublanguages, each appropriate for different problems, but there could
> be a unifying framework and user interface.  In fact a lot of current

This is a cop-out.  If the language is defined to be English, then all
currently existing languages are only sublanguages in a common framework.
A language with a sufficient amount of sufficiently different "sublanguages"
cannot be said to be a language.  

> research is going into the search for such a language, although the
> researchers don't generally think of their work in this way.

I should hope that the researchers don't think of their work that way.
They would probably change research areas.  By the way, whose work
do you consider falling into this category?

> for your problem.  To suggest that the issue has been decided is a
> little premature to say the least.

These days, the suggestion that any issue has been decided is premature.
One can point to the various speed levels  that were "proven" to be
non-exceedable by humans, for example

On the other hand, the excuse of "this hasn't been decided yet" gets
vacuous rather rapidly without any supported counterarguments.
 
--
Mike Bolotski           VLSI Laboratory, Department of Electrical Engineering
mikeb@salmon.ee.ubc.ca  University of British Columbia, Vancouver, Canada 


Mike Bolotski           VLSI Laboratory, Department of Electrical Engineering
mikeb@salmon.ee.ubc.ca  University of British Columbia, Vancouver, Canada 

gudeman@cs.arizona.edu (David Gudeman) (08/06/90)

In article  <1356@fs1.ee.ubc.ca> mikeb@coho.ee.ubc.ca (Mike Bolotski) writes:
]In article <23893@megaron.cs.arizona.edu>, gudeman@cs.arizona.edu (David Gudeman) writes:
]> Only ridiculous if you have a limited imagination.  It is perfectly
]> plausible that it is possible to design a ``universal'' language of
]> some sort, that makes it unnecessary to use other languages.  Such a
]
]This is an interesting claim.  Of course, it is difficult to dispute
]statements that begin with "it is plausible", but I will attempt to do so.
]
]For one, I haven't seen any supporting arguments for the possibility
]existence of the universal language (handwaving excluded).

Since you left out the relevent portion of the reply, I include it here:

In article  <1352@fs1.ee.ubc.ca> mikeb@ee.ubc.ca (Mike Bolotski) writes:
]
]In article <2428@l.cc.purdue.edu>, cik@l.cc.purdue.edu (Herman Rubin) writes:
]> [ ...                 ].  One should not have to use a different programming
]> language for different types of problems.  [ ... ] 
]
]The most ridiculous statement I've seen on this newsgroup.  It rather 
]causes the reader to question the qualifications of the author on
]any aspect of language design or implementation.

Here you make a gratuitous ad homenim attack based on an unsupported
assumption.  My article was not intended to support the position that
a universal language is possible -- by handwaving or any other means.
It was meant only to point out the unfairness of _your_ remark.  The
fact that you think the idea is ridiculous implies that you think the
issue is decided, contrary to your comment:

]These days, the suggestion that any issue has been decided is premature.

I have an opinion on the question, but I do not intend to discuss it
in this setting of ad homenim attacks and insulting condescension.
-- 
					David Gudeman
Department of Computer Science
The University of Arizona        gudeman@cs.arizona.edu
Tucson, AZ 85721                 noao!arizona!gudeman

mikeb@coho.ee.ubc.ca (Mike Bolotski) (08/07/90)

In article <23902@megaron.cs.arizona.edu>, gudeman@cs.arizona.edu (David Gudeman) writes:
> 
> Here you make a gratuitous ad homenim attack based on an unsupported
> assumption.  My article was not intended to support the position that

I apologize for the personal nature of the attack on Herman and David.
In view of the high quality of discussion on this newsgroup, it was
completely uncalled for.

> It was meant only to point out the unfairness of _your_ remark.  The
> fact that you think the idea is ridiculous implies that you think the
> issue is decided, contrary to your comment:

True. The word ridiculous should be changed to "highly improbable"
to reduce the offensive tone of my earlier post.

> I have an opinion on the question, but I do not intend to discuss it
> in this setting of ad homenim attacks and insulting condescension.

I would be interested in the opinion, as I still believe that a
universal language isn't a good idea. 

--
Mike Bolotski           VLSI Laboratory, Department of Electrical Engineering
mikeb@salmon.ee.ubc.ca  University of British Columbia, Vancouver, Canada 


Mike Bolotski           VLSI Laboratory, Department of Electrical Engineering
mikeb@salmon.ee.ubc.ca  University of British Columbia, Vancouver, Canada 

mwm@raven.pa.dec.com (Mike (Real Amigas have keyboard garages) Meyer) (08/07/90)

In article <23950@megaron.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:

   But there is another approach to universality.  The existence of
   English plus current programming languages plus all the specialized
   notations of mathematics, logic, linguistics, chemistry, and many
   other areas show that a fairly universal language is possible.  I
   don't think there is anything inherently wrong with saying that all
   this is one language.  Of course it is not a very cohesive or elegant
   language.

I've been for quite a while working with a language that allows
"plug-in" modules for various purposes. The most interesting of the
modules are full-blown applications, and I normally think of this as
an "applications integration language". However, some of the modules
are extensions to give the language notations from realms other than
those it was implemented for.

   Given this view of what a universal language is, the question becomes
   ``to what extent can this hodgepodge be unified and streamlined?''.
   The question becomes a quantitative rather than a qualitative one.

Here's where either I lose what you're talking about, or you lose
touch with what I perceive the problem to be.

The problem isn't streamlining the various specialized notations
together, it's providing all the facilities for manipulating those
objects that are going to be needed. For example, even though I have
the appropriate notations to work with the field in question, if I
need to manipulate lists of objects and the underlying language
doesn't provide primitives for list manipulation, then it may be
easier to re-implement the notations in a language that supports lists
than vice-versa.

   This unification is not a process of throwing out the specialized
   features of various languages, rather it involves finding
   relationships among the various specialized features, subsuming them
   with more general features.

Yes, but if that "more general" feature doesn't map simply to the
specific features needed, it's not clear you've made an improvement
over including them both. Of course, the LISP solution is to include
all three cases. But I think that that way lies madness, at least on
the scale of a universal language.

   Unix has shown pretty conclusively the benefit of combining different
   languages. [...]
   Even so, I don't think anyone would call this collection a single
   language.  But maybe with enough ingenuity something similar can be
   designed that is so cohesive that it _could_ be called a single
   language.

It's called PERL :-). But there are still domains that it doesn't work
well in.

   Maybe a universal language
   would involve a lot of little sublanguages, but all fully integrated
   into a single entity, so that you are not committed to a single
   sublanguage just because you chose to start in that language.

Such a language is doomed to failure. The problem can be expressed as:
Do you have hooks for dealing with sigournism, a field of study whose
subject matter will be discovered in 2012, and spring into existence
thereafter? Likewise, the xplee data structure (patent applied for in
1996) is a critical facility - how do you intend to incorporate it?
And after you've added them, you've just created a "more universal"
language than your previous "universal language."

English solves this problem by being "infinitely extensible"; it's
allowable to create new terminology that combines smoothly (or
roughly, if that's the way you want it) with the rest of the language,
and can be used for discussing sigournism or xplees. Of course, there
are a number of "infinitely extensible" programming languages.  None
of them seem to be taking over the world, and Herman isn't interested
in them when they're pointed out to him as solutions to his problem.

	<mike
--
Love and affection,					Mike Meyer
Of the corporate kind.					mwm@relay.pa.dec.com
It's just belly to belly,				decwrl!mwm
Never eye to eye.

gudeman@cs.arizona.edu (David Gudeman) (08/07/90)

In article{  <1360@fs1.ee.ubc.ca> mikeb@coho.ee.ubc.ca (Mike Bolotski) writes:
>
>I apologize for the personal nature of the attack on Herman and David.
>In view of the high quality of discussion on this newsgroup, it was
>completely uncalled for.

Thank you, Mike.  I think many of us have trouble remembering that we
are writing to real people out there in net land, and tend to get
carried away.  It has happened to me many times, and I wasn't
completely innocent in this exchange.  (I did make a crack about lack
of imagination...)

>I would be interested in the opinion, as I still believe that a
>universal language isn't a good idea. 

Well there are two different questions here: whether a universal
language is possible, and whether it is a good idea.  I don't think
you can answer the second until you know what a universal language is.
First, I doubt that there will ever be any formally specified language
the size of, say Ada or smaller that can make a true claim to
universality (although we can't really know for sure).

But there is another approach to universality.  The existence of
English plus current programming languages plus all the specialized
notations of mathematics, logic, linguistics, chemistry, and many
other areas show that a fairly universal language is possible.  I
don't think there is anything inherently wrong with saying that all
this is one language.  Of course it is not a very cohesive or elegant
language.

Given this view of what a universal language is, the question becomes
``to what extent can this hodgepodge be unified and streamlined?''.
The question becomes a quantitative rather than a qualitative one.
There is a lot of work being done today attempting to unify various
programming language paradigms.  And other work studying the
underlying nature of language and what different languages have in
common (both natural and artificial language).  If these researchers
can come up with a small set of basic underlying principles then it
may be possible to use these principles to bring together many of the
divergent languages being used.

This unification is not a process of throwing out the specialized
features of various languages, rather it involves finding
relationships among the various specialized features, subsuming them
with more general features.  This has already been done in different
contexts.  For example, functions and relations were once separate
concepts, but now both may be thought of as sets.  (Actually, I'd like
to discredit that particular model, but that's another story ;-)
Similarly, plane geometry and algebra were once considered completely
unrelated, but now geometry can be done in algebra in a cartesian
plane.  In these and many other cases, the unification of different
languages gave great insights into the problem domains that the
languages were meant for.

Unix has shown pretty conclusively the benefit of combining different
languages.  C alone would not have been adequate for all the work that
has been done, but add lex, yacc, shell, make, awk, and other
languages and you have an extremely powerful system.  The power comes
not just from the existence of the separate languages, but from the
combination.  There would be little point to lex and yacc if you could
not combine them conveniently with C.  If the shell could not call C
programs or make could not invoke shell commands, a lot of power would
be lost.  And there would even be considerable power lost if C could
not run shell commands.

Even so, I don't think anyone would call this collection a single
language.  But maybe with enough ingenuity something similar can be
designed that is so cohesive that it _could_ be called a single
language.  I have often come up with a problem working in one of the
above languages that could have been easily solved if I'd had more
direct access to some other language.  Maybe a universal language
would involve a lot of little sublanguages, but all fully integrated
into a single entity, so that you are not committed to a single
sublanguage just because you chose to start in that language.
-- 
					David Gudeman
Department of Computer Science
The University of Arizona        gudeman@cs.arizona.edu
Tucson, AZ 85721                 noao!arizona!gudeman

preston@titan.rice.edu (Preston Briggs) (08/07/90)

In article <MWM.90Aug6180352@raven.pa.dec.com> mwm@raven.pa.dec.com (Mike (Real Amigas have keyboard garages) Meyer) writes:

>Such a language is doomed to failure. The problem can be expressed as:
>Do you have hooks for dealing with sigournism, a field of study whose
>subject matter will be discovered in 2012, ...

Lisp, of course, has such capabilities; has had for years.
You just wrap parentheses around it and claim that it's
been there all along! ( :-) )

I'm not entirely joking.  New data types (simple and structured) are
easy to add.  New functions are easy.  New control structures are easy.
The framework stretches a long way.

I guess I'm not sure what the question is here.
You guys are talking of a "universal" language...
Do you have a problem we can't program around in Lisp (or C or whatever)?

(Of course, some problems can't be solved, regardless of the language.
Other problems are perhaps more conveniently expressed in one language
or another (Lisp of course making all programs equally inconvenient).)

-- 
Preston Briggs				looking for the great leap forward
preston@titan.rice.edu

lgm@cbnewsc.att.com (lawrence.g.mayka) (08/07/90)

In article <1356@fs1.ee.ubc.ca> mikeb@coho.ee.ubc.ca (Mike Bolotski) writes:
>In article <23893@megaron.cs.arizona.edu>, gudeman@cs.arizona.edu (David Gudeman) writes:
>> language will probably not be small, and it may be divided into many
>> sublanguages, each appropriate for different problems, but there could
>> be a unifying framework and user interface.  In fact a lot of current
>
>This is a cop-out.  If the language is defined to be English, then all
>currently existing languages are only sublanguages in a common framework.
>A language with a sufficient amount of sufficiently different "sublanguages"
>cannot be said to be a language.  

The previous two sentences are self-contradictory.  English indeed
includes myriad sublanguages for any and every specialty, yet is most
certainly "said to be a language."  (Consider, especially, *spoken*
English, which by necessity includes even highly technical notations
of all kinds.)

Your objection about the "sufficient differences" between sublanguages
is a matter of degree, not of kind.  Would you be more inclined to
consider English a unified language if its grammatical structure and
vocabulary extension were more regular?  Perhaps Esperanto would meet
your requirements. 1/2 :-)

But perhaps the programming-language equivalent of a universal syntax
is

(OPERATOR OPERAND OPERAND ...)


	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@iexist.att.com

Standard disclaimer.

mikeb@ee.ubc.ca (Mike Bolotski) (08/07/90)

In article <1990Aug7.020234.13032@cbnewsc.att.com>, lgm@cbnewsc.att.com
(lawrence.g.mayka) writes:
> In article <1356@fs1.ee.ubc.ca> mikeb@coho.ee.ubc.ca (Mike Bolotski)
writes:
> >
> >This is a cop-out.  If the language is defined to be English, then
all
> >currently existing languages are only sublanguages in a common
framework.
> >A language with a sufficient amount of sufficiently different
"sublanguages"
> >cannot be said to be a language.  
> 
> The previous two sentences are self-contradictory.  English indeed
> includes myriad sublanguages for any and every specialty, yet is most
> certainly "said to be a language."  (Consider, especially, *spoken*
> English, which by necessity includes even highly technical notations
> of all kinds.)
> 
> Your objection about the "sufficient differences" between
sublanguages
> is a matter of degree, not of kind.  Would you be more inclined to

There are several points to be addressed in the above.  

1.  Human language vs. Programming language.

    I should've said "all existing programming languages".   English
    is most certainly a human language; it is not a programming
language
    until the natural language understanding folks solve quite a few
    problems in their field.

2.  The sufficiency of English for "any and every specialty".

    English (or any human language) is certainly close to optimal
    as a programming language for most tasks.  I still would not say
    for _every_ task.   I mentioned two counter examples earlier:
    circuit diagrams and mathematics.  When two engineers discuss 
    a circuit design they typically both look at a diagram.  If 
    you've ever attempted to discuss a circuit over a phone, you'll
    understand the frustration at the inadequacy of English.

    A similar argument goes for mathematics:  droning a list of
    symbols is an extraordinarily inefficient way to discuss an
equation,
    and most people require a visual representation.


> But perhaps the programming-language equivalent of a universal syntax
> is
> 
> (OPERATOR OPERAND OPERAND ...)

I've noted your bias toward LISP before :-).  And I'm aware that
LISP is the programming language of choice when both representing 
circuit diagrams and performing symbolic mathematics.  Still,
this is the internal representation and not the preferred method
of programming, if you define programming as giving instructions
to a computer.


LISP ASIDE FOLLOWS:
-------------------
As far as LISP goes, my bias is that it is wonderful for tasks that
require the functional viewpoint, but horrible at tasks that 
require object-oriented thinking.

I've recently implemented a technology mapper in CLOS.  The pattern
matching, tree-traversing, and branch/bound search were an absolute
joy to write.  But the second stage that dealt with representing
gates  as object was extraordinarily clumsy in CLOS. Maybe this is
the fault of CLOS instead of LISP, since from what I've seen of
Flavors it is much more natural.

RANDOM ASIDE FOLLOWS:
---------------------

On the topic of choosing the right programming language for a subtask
and interfacing them.  A friend of mine works for a local telecom
company.  His group just finished a package that uses Prolog for
database management and queries,  C++-based Interviews for
user-interface,
lex/yacc for various parsing tasks, and Eiffel for something that I'd
rather not know about.   The coordination in this case was via
standalone programs and sockets, but it sure would be nice to mix
and match languages at some intermediate level between object code
and processes. 


--
Mike Bolotski           VLSI Laboratory, Department of Electrical
Engineering
mikeb@salmon.ee.ubc.ca  University of British Columbia, Vancouver,
Canada 

gaynor@paul.rutgers.edu (Silver) (08/08/90)

> [All the heat concerning mikeb@coho.ee.ubc.ca...]
And while we're at it, you always put in two signatures!  :^D

gudeman@cs.arizona.edu (David Gudeman) writes:
> First, I doubt that there will ever be any formally specified language the
> size of, say Ada or smaller that can make a true claim to universality
> (although we can't really know for sure).

I hold just the opposite opinion.  Any `universal language' must be very small
but very versatile.  The components of the language itself should be objects in
the language and easy to modify.  F'rinstance, the scanner should be very
customizable so to be able to arbitrarily add notations.  Of course, all of the
nice basic language features should be available, like arbitrary symbol and
environment manipulation, garbage collection, executable objects are data, etc.
The rest should be provided by library routines.

Regards, [Ag]

gudeman@cs.arizona.edu (David Gudeman) (08/08/90)

In article  <MWM.90Aug6180352@raven.pa.dec.com> mwm@raven.pa.dec.com (Mike (Real Amigas have keyboard garages) Meyer) writes:
>
>   Given this view of what a universal language is, the question becomes
>   ``to what extent can this hodgepodge be unified and streamlined?''.
>   The question becomes a quantitative rather than a qualitative one.
>...
>The problem isn't streamlining the various specialized notations
>together, it's providing all the facilities for manipulating those
>objects that are going to be needed.

The point to unifying and streamlining (both terms chosen deliberatly
for vagueness) is to reduce the cost of learning the language.
Presumably a universal language is going to include a lot of stuff.
It would be a good idea to minimize the size of the language by
(for example) avoiding redundancy in notations.

>   ...finding
>   relationships among the various specialized features, subsuming them
>   with more general features.
>
>Yes, but if that "more general" feature doesn't map simply to the
>specific features needed, it's not clear you've made an improvement
>over including them both.

If the more general feature doesn't map simply to the specific
features needed, then it is not more general.

>Such a language is doomed to failure. The problem can be expressed as:
>Do you have hooks for dealing with sigournism, a field of study whose
>subject matter will be discovered in 2012, and spring into existence
>thereafter?...
>English solves this problem by being "infinitely extensible";

Sorry.  I thought it was obvious that any language that makes a claim
to being universal will have to have some ability to evolve.  Not just
to add new features, but also to loose old, unused features.

In fact, we may well have to loose the preconception that a language
must have a single, unambiguous definition.  Such definitions do not
exist for natural languages and it may finally be shown not hold in
the more formal realms either.  Since Greek times there has been a
prevalent idea among Westerners that given the knowledged and tools,
the universe can be described with perfect accuracy and precision.
This attitude came into its modern form with Newton's laws.  However,
there are areas where it is frankly not believable that this will ever
be possible.  Such areas include the behavior of individual molecules
in a gas, and the behavior of individuals in a society.  Modern
physicists have even come to the conclusion that even in principle it
is not possible to describe _anything_ with perfect precision and
accuracy (both at the same time).

Many linguists have given up the idea of rigid rules to describe
natural language, and instead say that the meaning of an utterance is
infered from general rules and the surrounding context.  It might be
that one of the discoveries that would lead to a "universal language"
is a way to make a clear and useful trade-off between precision and
accuracy.  Don't ask me how this might apply to programming languages
because I don't know...
-- 
					David Gudeman
Department of Computer Science
The University of Arizona        gudeman@cs.arizona.edu
Tucson, AZ 85721                 noao!arizona!gudeman

gudeman@cs.arizona.edu (David Gudeman) (08/08/90)

In article  <Aug.7.14.07.35.1990.16114@paul.rutgers.edu> gaynor@paul.rutgers.edu (Silver) writes:
>gudeman@cs.arizona.edu (David Gudeman) writes:
>> First, I doubt that there will ever be any formally specified language the
>> size of, say Ada or smaller that can make a true claim to universality
>> (although we can't really know for sure).
>
>I hold just the opposite opinion.  Any `universal language' must be very small
>but very versatile.  The components of the language itself should be objects in
>the language and easy to modify.

This is an approach that I favor, but frankly it has been tried and
has not born out its promise.  By all means keep trying, but I'm a
little discouraged...

As to the size: there is a certain elegance in the notion that a
universal language can be described as a very simple underlying
system, and that all applications can be built on that framework.  But
as a practical matter I think you really want to standardize as much
as possible, and include in the definition all of the useful sorts of
notations.  Otherwise this ``universal language'' is going to have
huge numbers of dialects, all incomprehensible to the larger
community.  You don't really have a universal language, you have a
language for describing other languages.  Not that this is without
value, but it is not a general purpose tool; it is highly specialized
because it is only good for describing languages.
-- 
					David Gudeman
Department of Computer Science
The University of Arizona        gudeman@cs.arizona.edu
Tucson, AZ 85721                 noao!arizona!gudeman

gaynor@paul.rutgers.edu (Silver) (08/08/90)

Me writes:
> Any `universal language' must be very small but very versatile.  The
> components of the language itself should be objects in the language and easy
> to modify.

gudeman@cs.arizona.edu (David Gudeman) responds:
> This is an approach that I favor, but frankly it has been tried and has not
> born out its promise.  By all means keep trying, but I'm a little
> discouraged...
Agreed.

The simple kernel/language embellished (even supplanted) by a rich set of
libraries/packages is the model of choice.  Keep the base language simple, and
expand it as necessary with libraries and packages.  For instance, there's been
mention of natural languages in this discussion.  They should be implemented in
the universal language in such a manner as to embellish the language, but they
should not be part of the language's definition -- they're just too darned
expensive.

I could continue with a wish list of the features that the Ultimate Language
should have in mine eyes, but such discussions often turn into jihads and tend
to be nonproductive.  (Wanna have one anyway?, hee hee!)  We could continue
kicking around the same old arguments and discussions.  They'd benefit muchly
those who haven't participated in them before, and perhaps some new and germane
issues would be discussed.  But is it possible that we could end up with an
acceptable Universal Language?  Not without a *lot* of concerted effort.  It
sure would be a pretty sight, though!

Regards, [Ag]

mwm@raven.pa.dec.com (Mike (Real Amigas have keyboard garages) Meyer) (08/08/90)

In article <24011@megaron.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
   In fact, we may well have to loose the preconception that a language
   must have a single, unambiguous definition.

I have problems with this. Fortunately, Mr. Gudeman expresses them
quite adequately for me in article <24013@megaron.cs.arizona.edu>:
   But as a practical matter I think you really want to standardize as
   much as possible, and include in the definition all of the useful sorts
   of notations.  Otherwise this ``universal language'' is going to have
   huge numbers of dialects, all incomprehensible to the larger
   community.

In both cases, Mr. Gudeman could be describing LISP. It's grown and
changed over time. While such a system could well provide a universal
language, the current state of the world suggests that such a system
isn't an improvement over a multitude of special purpose languages.

	<mike
--
And then I saw her...					Mike Meyer
She was a bright red '64 GTO				mwm@relay.pa.dec.com
With fins and gills like some giant piranha fish,	decwrl!mwm
Some obscene phallic symbol on wheels.

carroll@udel.edu (Mark Carroll <MC>) (08/09/90)

Before I get started with included text, I want to include some
preliminary stuff.

I am personally of the opinion that a "Universal Language" is an
impossible goal. As I've said before on this newsgroup, a language
is a description of a machine, and the form of the language describes
the form of the machine. When you choose a language, you're choice
is based on which language define the machine that most clearly and
easily models the solution that you have in mind. 

Some models are irreconcilable. For example, try to unify the Smalltalk
model, where object identity is central to the language, with ML, where
immutable data is central. You can't unify them without destroying
the central concept of one of them. (To those who are going to mention
the Object-oriented functional languages: I know that they exist - but
object-orientation in a functional language is a different model of
object-orientation than Smalltalk. Functional object-orientation could
not be unifying with Smalltalk either.)

Now, on to the text.

In article <Aug.7.14.07.35.1990.16114@paul.rutgers.edu>
 gaynor@paul.rutgers.edu (Silver) writes:
>> [All the heat concerning mikeb@coho.ee.ubc.ca...]
>And while we're at it, you always put in two signatures!  :^D
>

And frankly Andy, you could could use a shave. 8^)


>gudeman@cs.arizona.edu (David Gudeman) writes:
>> First, I doubt that there will ever be any formally specified language the
>> size of, say Ada or smaller that can make a true claim to universality
>> (although we can't really know for sure).
>
>I hold just the opposite opinion.  Any `universal language' must be very small
>but very versatile.  The components of the language itself should be objects 
>in the language and easy to modify.  F'rinstance, the scanner should be 
>very customizable so to be able to arbitrarily add notations.  Of course, 
>all of the nice basic language features should be available, like arbitrary
>symbol and environment manipulation, garbage collection, executable objects
>are data, etc.

And I've got to disagree with both of you!

To Andy: if you read your proposal, you'll see the error: "The components
of the language should be objects in the language {\em and easy to modify}."
What about language models where objects cannot be modified? You've just
elminated Prolog, ML, Haskell, and a host of other languages and models.

To David, I say that no language, no matter how large, can make 
fundamentally incompatible models compatible. 

Now, on to what I believe the answer is:

One language can never possibly fit all models together. That won't
work.  But a system which allows multiple languages to interact is
possible. One language, no. One Environment, maybe. Systems such as
Poplog are already starting to investigate this possibility. Make it
simple for the programmer to combine routines written in different
languages in one program, and then let them use whatever combination
of language is appropriate.

>The rest should be provided by library routines.  
> 

Or perhaps, library LANGUAGES.

>Regards, [Ag]

	<MC>

--
|Mark Craig Carroll: <MC>  |"We the people want it straight for a change;
|Soon-to-be Grad Student at| cos we the people are getting tired of your games;
|University of Delaware    | If you insult us with cheap propaganda; 
|carroll@dewey.udel.edu    | We'll elect a precedent to a state of mind" -Fish

gudeman@cs.arizona.edu (David Gudeman) (08/09/90)

In article  <26920@nigel.ee.udel.edu> carroll@udel.edu (Mark Carroll <MC>) writes:
>
>... As I've said before on this newsgroup, a language
>is a description of a machine, and the form of the language describes
>the form of the machine...
>Some models are irreconcilable.  For example, try to unify the Smalltalk
>model, where object identity is central to the language, with ML, where
>immutable data is central. You can't unify them without destroying
>the central concept of one of them.

It depends on what you consider a successful unification.  Let's
assume we are unifying the functional and X paradigms.  If you are
going to insist that the unified language allow no forms of expression
that are not allowed in the functional paradigm, then it is obvious
that the unification is possible only if X is a subparadigm of the
functional one, and the unification will only produce the functional
paradigm.  But the whole purpose of combining paradigms is to give
increased expressiveness, so you should expect that a functional/X
langauge can express things that a pure functional language can't
(such as mutability of objects).

You _can_ combine the functional and object paradigms in the sense
that the combined language allows the expression of any functional
program and any object-oriented program, and some combinations that
are not allowed in either pure language.  Formally, for any functional
language L1, there exists a true object-oriented language L2 and a
language L such that every program in L1 (resp. L2) is a program in L,
and that the meaning of the program in L1 (L2) is subsumed by the
meaning of the program in L.  (If anyone wants to see a discussion of
this idea including a formal definition of "subsumed by", ask me for a
copy of my draft paper.  Warning: this paper contains category theory,
denotational semantics and other dangerous materials)

I am always disturbed by the idea that for a language to have some
good property, it must forbid the programmer from doing X.  Why do you
want to insist that all programs in the language have the property?
What's wrong with saying that any program in the language that does
not do X has the property?  This greatly increases the usefulness of
the language because now it is useful for writing programs where you
really want to do X (at some penalty because these programs won't have
the property).  And for programs where you don't need to do X, the
language is even better.
-- 
					David Gudeman
Department of Computer Science
The University of Arizona        gudeman@cs.arizona.edu
Tucson, AZ 85721                 noao!arizona!gudeman

gudeman@cs.arizona.edu (David Gudeman) (08/09/90)

In article  <MWM.90Aug8111257@raven.pa.dec.com> mwm@raven.pa.dec.com (Mike (Real Amigas have keyboard garages) Meyer) writes:
>[about the Universal Language]
>In both cases, Mr. Gudeman could be describing LISP. It's grown and
>changed over time. While such a system could well provide a universal
>language, the current state of the world suggests that such a system
>isn't an improvement over a multitude of special purpose languages.

I'm willing to admit that Lisp does not provide a solution the problem
of making a universal language.  Part of the problem is the limited
scope of paradigms that Lisp allows.  Another problem is that the
syntax is not convenient for many purposes.  However Lisp systems,
like Unix, have proven extremely powerful and versatile, and may give
an idea for the direction we want to look in.

Also, in my usage, just because everyone doesn't use a language that
doesn't prove that it isn't universal.  English is fairly universal as
a means of expression but it's not universal in the sense that
everyone speaks it.  When I say "universal" I'm refering to the first
sense, that it is adequate for most purposes of expression, not that
it is actually used by everyone.
-- 
					David Gudeman
Department of Computer Science
The University of Arizona        gudeman@cs.arizona.edu
Tucson, AZ 85721                 noao!arizona!gudeman

carroll@udel.edu (Mark Carroll <MC>) (08/09/90)

In article <24043@megaron.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>In article  <26920@nigel.ee.udel.edu> carroll@udel.edu (Mark Carroll <MC>) writes:
>>
>>... As I've said before on this newsgroup, a language
>>is a description of a machine, and the form of the language describes
>>the form of the machine...
>>Some models are irreconcilable.  For example, try to unify the Smalltalk
>>model, where object identity is central to the language, with ML, where
>>immutable data is central. You can't unify them without destroying
>>the central concept of one of them.
>

...

>
>I am always disturbed by the idea that for a language to have some
>good property, it must forbid the programmer from doing X.  Why do you
>want to insist that all programs in the language have the property?
>What's wrong with saying that any program in the language that does
>not do X has the property?  This greatly increases the usefulness of
>the language because now it is useful for writing programs where you
>really want to do X (at some penalty because these programs won't have
>the property).  And for programs where you don't need to do X, the
>language is even better.

The problem is that sometimes, the central feature of a language
is represented by some inability. The beauty of a language like Haskell
is, in part, caused by its inability to mutate an object. The conceptual
purity of functional programming is central to the language. If you
allow arbitrary mutation of objects by non-functional features, you've
just eliminated the central feature of the language. 

What I believe is a better choice is to provide an environment that
allows  you to link together modules written in different languages.

The idea that I have in mind is to some sort of environment that allows
a language to specify the way in which it behaves, and based on that,
allow different languages to interact. 

For example, to combine Smalltalk and Haskell, Haskell would specify
that it cannot allow side effects, and when it made a call to a smalltalk
program, Smalltalk would duplicate the receiver, send the message to the
duplicated, and return a pair containing the new object, and the return
value of the message pass. 

The language descriptions would describe the ways in which a given
language behaves, and the environment would develop systems like the
above to allow them to be connected. In this way, we could provide some
method of allowing the programmer to use the language and notation best
suited to each portion of the job at hand, without comprising the
purity of the languages and paradigms that he decides to use.

>					David Gudeman

	<MC>


--
|Mark Craig Carroll: <MC>  |"We the people want it straight for a change;
|Soon-to-be Grad Student at| cos we the people are getting tired of your games;
|University of Delaware    | If you insult us with cheap propaganda; 
|carroll@dewey.udel.edu    | We'll elect a precedent to a state of mind" -Fish

mwm@raven.pa.dec.com (Mike (Real Amigas have keyboard garages) Meyer) (08/09/90)

In article <24044@megaron.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
   Also, in my usage, just because everyone doesn't use a language that
   doesn't prove that it isn't universal.  English is fairly universal as
   a means of expression but it's not universal in the sense that
   everyone speaks it.  When I say "universal" I'm refering to the first
   sense, that it is adequate for most purposes of expression, not that
   it is actually used by everyone.

Ah, the wonders of the Universal Language English. We can make it mean
pretty much what we want it to mean. That's not acceptable for
programming languages, of course. There's a higher authority than
people - the various language translators. If those don't largely
agree on what a specific text means, then the language is pretty much
worthless as a programming language.

I claim there are already a number of languages that are "universal"
in that sense, as there are people who use them for most (or all)
purposes of expression. Of course, if you allow "adequate" to be even
more weakly defined than "universal", then tapes for turing machines
are the first such example.

No, to be universal, a language must provide access to all forms of
expression of sufficient quality that a large percentage of
programmers will be willing to use it instead of whatever they're
using now. That's a non-negiotable portion of the definition of
"adequate."

	<mike

--
Kiss me with your mouth.				Mike Meyer
Your love is better than wine.				mwm@relay.pa.dec.com
But wine is all I have.					decwrl!mwm
Will your love ever be mine?

toma@tekgvs.LABS.TEK.COM (Tom Almy) (08/10/90)

I always thought that The Universal Language was PL/1. After all it did
combine the needs of the commercial programmer (Cobol features) with the
needs of the scientific programmer (algebraic expressions, matrices), thus
meeting everyones needs.

Or maybe the Universal Language is Ada, since it also is an Osterizer language.

I got it now. It must be C, or rather C++, because you can write *everything* 
in these, and they are *popular*.

:-) on the above. There could no more be a Universal Language than a Universal
Religion or a Universal Screwdriver. As long as people and projects differ
there will never be a Universal Language.

Tom Almy
toma@tekgvs.labs.tek.com
Standard Disclaimers Apply

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (08/10/90)

I seem to be missing something in all this discussion.
I thought we _had_ computation-universal languages already.
To make an glaringly obvious point, if machine code isn't
"universal enough", what are you going to translate your
universal language _into_?

A group in the States did some really serious work on natural language
processing some years ago, in Fortran, on a PDP-11.  (As I recall it,
they had lots of little Fortran programs exchanging data.)  I've heard
of a project (controlling a CAT scanner, if I recall correctly) that
write a large object-oriented program, in Fortran.

To give you a natural-language parallel:

	in the tongue that we speak, all thoughts we can form can be
	said in words that have but one sound in them.  One may have
	to use far more words than one would like, but each time that
	I have tried to say what I meant in words that have but one
	sound in them I have found that I could do it.  It may be
	hard to think of good words for such a form of speech, but if
	you know this tongue well, you can do it.  Is this not true
(*)-->	of the scripts we use to make tools that count do what we want
	as well?  What do the folk who ask for a way to write all
	scripts want, but that it be less hard than now?  Why must a
	way to write scripts that is less hard to use be a way that
	is so big that it is hard to learn?  Is not Scheme a good
	way to write scripts for tools that count?	

(The starred line means "computer programs".)
-- 
Taphonomy begins at death.

gudeman@cs.arizona.edu (David Gudeman) (08/11/90)

In article  <7945@tekgvs.LABS.TEK.COM> toma@tekgvs.LABS.TEK.COM (Tom Almy) writes:
>
>I always thought that The Universal Language was PL/1.
>...[more satire]...

Just because previous attempts to solve a problem were failures, that
does not mean the problem is insoluble.  And I explicitely stated
several times that we don't currently know how to go about creating a
universal language.

>:-) on the above. There could no more be a Universal Language than a Universal
>Religion or a Universal Screwdriver. As long as people and projects differ
>there will never be a Universal Language.

Sorry, Tom, but just because we can't see today how to do something
doesn't mean that it is forever impossible.  Actually, I think a
universal screwdriver _is_ possible.  I've even toyed with the idea.
-- 
					David Gudeman
Department of Computer Science
The University of Arizona        gudeman@cs.arizona.edu
Tucson, AZ 85721                 noao!arizona!gudeman

jlg@lanl.gov (Jim Giles) (08/11/90)

From article <1990Aug10.131143.8898@canterbury.ac.nz>, by phys169@canterbury.ac.nz:
> [...]
> This isn't to mean one shouldn't be available, and doesn't mean one can't be
> produced. A language I and some other folks down here in NZ are working on,
> called NGL (part of eNGLish, and Nth Generation Language), attempts to be a
> universal language, by being dynamically redefinable. Further explanation:

Being dynamically redefinable is not necessarily a bad idea.  However, it
doesn't bring you closer to the concept of a universal language.  In fact,
it takes you further from it.  If you can dynamically redefine a language,
what you _really_ get is a _whole_lot_ of incompatible dialects at various
sites (or even - from one person to the next).  Ada programmers learned
early that being able to define new data types and operators was _not_ the
same as having those defined by the language directly.  For example, there
are a large number of different (and incompatible) versions of complex
numbers for Ada - users of the different versions have difficulty sharing
code or understanding each other's work.

Dynamic redefinition is only 'universal' in the same sense that a phonetic
alphabet is universal for natural languages.  Sure, you can express all
verbal communications in a phonetic alphabet, but that doesn't make the
different languages mutually comprehensible.  Similarly, you may be able
to express all programming styles in a dynamically redefinable language,
but that doesn't necessarily lead to mutual comprehension among programmers.

J. Giles

gaynor@paul.rutgers.edu (Silver) (08/13/90)

carroll@udel.edu (<MC>) writes:
> I am personally of the opinion that a "Universal Language" is an impossible
> goal.
Dang pessimists...

> ..., a language is a description of a machine, and the form of the language
> describes the form of the machine.  When you choose a language, you're choice
> is based on which language define the machine that most clearly and easily
> models the solution that you have in mind.
Machine, work upon thyself!

> Some models are irreconcilable.  For example, try to unify the Smalltalk
> model, where object identity is central to the language, with ML, where
> immutable data is central.  You can't unify them without destroying the
> central concept of one of them.
Ok, they can't be unified, the programmer must choose between the two.  The
former model is more common (that of references to data), so would probably be
default.  Then write a package in the language that modifies it to the latter.
When I wrote that such a language must be simple and very versatile, I meant to
the extent that it is able to reach into itself and modify its `guts'.
Machine, work upon thyself!

>>> [All the heat concerning someone@somewhere...]
>> And while we're at it, you always put in two signatures!  :^D
> And frankly Andy, you could could use a shave.  8^)
So I did.  Fixed.

> If you read your proposal, you'll see the error: "The components of the
> language should be objects in the language {\em and easy to modify}."  What
> about language models where objects cannot be modified?
Addressed above.

> One language can never possibly fit all models together.
Not simultaneously, as you've shown!

> But a system which allows multiple languages to interact is possible.
Sure, this is the current situation, to some extent.  What I envision as a UL
would be a small kernel language where all these high-level features are
provided as packages in the language which embellish or modify it.

>> The rest should be provided by library routines.  
> Or perhaps, library LANGUAGES.
Library features, where a feature is a linguistic feature.  (My reference to
`routines' is unfortunate, it is not what I meant.)

Regards, [Ag]  gaynor@paul.rutgers.edu

austern@ux5.lbl.gov (Matt Austern) (08/14/90)

In article <24043@megaron.cs.arizona.edu>, gudeman@cs (David Gudeman) writes:
>I am always disturbed by the idea that for a language to have some
>good property, it must forbid the programmer from doing X.

I'm sure that this is obvious to everybody, but I think it bears
repeating: if the "good property" is efficiency, this is often best
achieved by forbidding the programmer from using certain
constructions.

The more assumptions a compiler can make about a program, the better
it can do the optimization, and in many cases, the only way a compiler
can safely assume that a programmer doesn't do something is to make it
illegal.  (The C calling convention is more complicated than the
FORTRAN calling convention, for example, because C allows recursion.
Can you really imagine a C compiler/linker that can prove that a
program has no recursive calls, and use a simpler calling convention
for such programs?)

There really is a tension between flexibility and ease of
optimization, and the resolution isn't necessarily the same for all
purposes. 
-- 
Matthew Austern    austern@lbl.bitnet     It's not very subtle, but, uh, 
(415) 644-2618     austern@ux5.lbl.gov    neither is blowing up a whole school.

pmk@craycos.com (Peter Klausler) (08/14/90)

Matthew Austern:
> Can you really imagine a C compiler/linker that can prove that a
> program has no recursive calls, and use a simpler calling convention
> for such programs?)

Uh, yes, I can. I use one daily. Why is this so unimaginable?

eric@cwphysbb.PHYS.CWRU.Edu (Eric M. Rains) (08/14/90)

In article <1990Aug13.213113.3743@craycos.com> pmk@craycos.com (Peter Klausler)
says:

>Matthew Austern:
>> Can you really imagine a C compiler/linker that can prove that a
>> program has no recursive calls, and use a simpler calling convention
>> for such programs?)
>
>Uh, yes, I can. I use one daily. Why is this so unimaginable?

     Because it's completely impossible, that's why.  Imagine the following
piece of C code:

unsigned long possibly_recurse(void fun(),unsigned l)
{
if (l<=0)
  return(1);
else
  return(l*fun(fun,l-1));
}

void main(void)
{
if (some_other_function()) {
  printf("%lu!=%lu\n",5,possibly_recurse(possibly_recurse,5);}
else {
  printf("I am not undergoing recursion.\n");}
}

Now, consider the possibilities.  If "some_other_function()" is really a macro
that returns 0, then the program will not do any recursion.  However, if it
returns something else, the program will perform recursion.  How is your C
compiler going to be able to tell what to do in this case? 

     Suppose it decides that there is recursion going on here.  What if another
module contains only functions that do not recurse?  Suppose I want to call the
function possibly_recurse with arguments (non_recursive_func,3), where the
definition of non_recursive_func is

unsigned long non_recursive_func(void *junk,unsigned l)
{
return(l);
}

How can I link to this other module?  The calling conventions will be different
(assuming the module with non_recursive_func has no recursion)...

                                        Eric

P.S.  Besides; in a language like C, the problem of determining whether a
program performs recursion is equivalent to the halting problem...
-----
"...and the moral of that is--'Be what you would seem to be'--or, if you'd like
it put more simply--'Never imagine yourself not to be otherwise than what it
might appear to others that what you were or might have been was not otherwise
than what you had been would have appeared to them to be otherwise."
                          --Lewis Carroll, Alice's Adventures in Wonderland

mwm@raven.pa.dec.com (Mike (Real Amigas have keyboard garages) Meyer) (08/14/90)

In article <1990Aug13.224738.5469@usenet.ins.cwru.edu> eric@cwphysbb.PHYS.CWRU.Edu (Eric M. Rains) writes:
   In article <1990Aug13.213113.3743@craycos.com> pmk@craycos.com (Peter Klausler)
   says:

   >Matthew Austern:
   >> Can you really imagine a C compiler/linker that can prove that a
   >> program has no recursive calls, and use a simpler calling convention
   >> for such programs?)
   >
   >Uh, yes, I can. I use one daily. Why is this so unimaginable?

	Because it's completely impossible, that's why.  Imagine the following
   piece of C code:

	[code deleted]

Remember, theory is closer to practice in theory than in practice.

Yes, you can't solve the halting problem. From which one can conclude
that it's impossible to prove that all programs that have no recursive
calls have no recursive calls. But you don't need to do that to be
able to take advantage of quicker calling conventions for
non-recursive functions. All you need to be able to do is determine
that some specific function is non-recursive. That's easy for many
functions, and mistakenly tagging a non-recursive function as
recursive just costs a little extra run time. No problem, so long as
you're consistent about it.

	[more code deleted]

   How can I link to this other module?  The calling conventions will be
   different (assuming the module with non_recursive_func has no recursion)...

Now you're dealing with implementation details.

First, a minor nit - you're assuming the worst case - that caller and
callee have to change. Depending on the architechture, this may not be
true. You may only have to change the callee, which makes things
trivial. It's having to change the call that creates problems.

Second, you can safely mix calling conventions in a single program.
All you have to do is insure that everyone agrees on how any given
function is called.  Therefore, you can apply this optimization on a
per-function basis.  How well you can apply it is another problem.
Some brainstorming, based on C:

Two entry points: Generate your non-recursive routine with the "fast"
calling convention. Callers from the same compilation unit use that
entry. You also generate a wrapper that uses the "recursive" calling
convention, and then calls the "fast" function, just for "recursive"
external calls and calls through function pointers. Your smart linker
discards the wrapper if it's never used.

Link-time call generation: Tag all non-recursive functions as such in
the object file. When the linker fixes the addresses being called, it
adds code to use the appropriate call convention.

Actually, extending that another step, it may be possible to upgrade
possibly recursive functions to non-recursive post-compile. There are
already compilers that do post-compile, pre-link optimization of this
nature, so it's not impossible.

	<mike
--
I know the world is flat.				Mike Meyer
Don't try tell me that it's round.			mwm@relay.pa.dec.com
I know the world stands still.				decwrl!mwm
Don't try to make it turn around.

gaynor@paul.rutgers.edu (Silver) (08/14/90)

jlg@lanl.gov (Jim Giles) writes:
> Being dynamically redefinable is not necessarily a bad idea.  However, it
> doesn't bring you closer to the concept of a universal language.  In fact, it
> takes you further from it.  If you can dynamically redefine a language, what
> you _really_ get is a _whole_lot_ of incompatible dialects at various sites
> (or even - from one person to the next).
That's a problem in distribution, not definition...  The programmer community
can collectively cut their throats by witholding packages, but that's a
different issue altogether.

I'm suggesting a language which is a common base, extendable with a library of
packages which provide various interesting linguistic features.  The intended
result is an environment in which everything is generally the same unless
requested otherwise.  Symbols and comments don't change shape unless you do
something unusual.  A program which requires immutable data looks just like one
that doesn't.  Infix, prefix, and postfix notations are available; only one is
default, however, it doesn't matter which.

Somebody cut me down to size, I just reread the paragraph above and it doesn't
even seem totally off the wall!  (Um, Mark, I shaved yesterday, so you'll have
to try another tack. :^)

Regards, [Ag]  gaynor@paul.rutgers.edu

pmk@craycos.com (Peter Klausler) (08/14/90)

In <1990Aug13.224738.5469@usenet.ins.cwru.edu> eric@cwphysbb.PHYS.CWRU.Edu (Eric M. Rains)
quite correctly points out that whether a function is recursive is *in general*
indecidable:

>>me:
>>Matthew Austern:
>>> Can you really imagine a C compiler/linker that can prove that a
>>> program has no recursive calls, and use a simpler calling convention
>>> for such programs?)
>>
>>Uh, yes, I can. I use one daily. Why is this so unimaginable?
>
>     Because it's completely impossible, that's why.  Imagine the following
>piece of C code:
>	[deleted]
>P.S.  Besides; in a language like C, the problem of determining whether a
>program performs recursion is equivalent to the halting problem...

However, on real programs, a compiler and/or linker *can* do an effective job
detecting the impossibility of recursion. Briefly, the technique is:

	- Determine the basic call graph; for each function f, compute the set
	  CALLS (f) of functions it invokes
	- Compute the transitive closure of CALLS(f) for each function f
	- Assume recursion for a function f if f is in the transitive closure
	  of CALLS(f)

It's not quite this simple in practise, for one must deal with signal handlers
and functions called indirectly. But this, too, is not "completely impossible";
good, approximate, conservative solutions can be computed and used to speed
up your program.

The transitive closure of the call graph is also useful for problems of
temporary storage allocation -- a set of functions mutually excluded from all
their CALLS sets can all be assigned to the same set of temporary registers.
Finding these sets of functions is nontrivial, but again, approximate methods
are effective.

Real-world compiler work often involves approximate methods; if they err on
the conservative side and are effective on meaningful programs (read: customer
benchmarks), they win.

carroll@udel.edu (Mark Carroll <MC>) (08/14/90)

In article <Aug.14.07.41.57.1990.21197@paul.rutgers.edu>,
gaynor@paul.rutgers.edu (Silver) writes:
>jlg@lanl.gov (Jim Giles) writes:
>> Being dynamically redefinable is not necessarily a bad idea.  However, it
>> doesn't bring you closer to the concept of a universal language.  In
>> fact, it takes you further from it.  If you can dynamically redefine a 
>> language, what you _really_ get is a _whole_lot_ of incompatible dialects 
>> at various sites (or even - from one person to the next).
>That's a problem in distribution, not definition...  The programmer community
>can collectively cut their throats by witholding packages, but that's a
>different issue altogether.
>

Actually, it's a problem in definition. If the language claims to be
"universal", but actually requires you to generate non-standard local
packages to do any useful programming, then it isn't really universal.

That's part of the reason that I prefer the environment idea. Within
an environment, you provide a full, standard language for a given model
of computation. You still need to have non-standard linkage information
in the environment, but within a single module, everything makes sense.

Routines/modules/packages from other languages would be called with the
use of constructs proper to the language in use. So a call to routine A
written in Pascal will look like a call to function A in ML. Given a
sufficiently descriptive name for funtion A, an ML programmer will be
able to make sense of an ML module, wherever it were written.

In the model that you seem to be suggesting, a progreammer would
create his own local functional language, by modifying the Universal
Language System. And no one else would be able to look at his code
without first hacking his way through the modifications he made to the
ULS. EVERY programmer would end up with a local dialect in order to get
his work done, and no one would understand anyone elses code. 

>I'm suggesting a language which is a common base, extendable with a library of
>packages which provide various interesting linguistic features.  The intended
>result is an environment in which everything is generally the same unless
>requested otherwise.  Symbols and comments don't change shape unless you do
>something unusual.  A program which requires immutable data looks just 
>like one that doesn't.  Infix, prefix, and postfix notations are available;
>only one is default, however, it doesn't matter which.
>

Frankly, I'm not sure that what you're saying really makes sense. I think
that the problem is that we're talking at too high a level. It's easy to
say, "Hey, let's go design a language which lets you do everything". And
on a sufficiently abstract level, it sounds like a good idea. But when
it comes down to the actual concrete implementation of something like this,
I think that a lot of the good ideas are going to get shot down. 

How can we realisticly unify the models of Pascal, Smalltalk, and ML?
What you seem to be suggesting doesn't make sense to me. 

If, as a part of a program, I have something equivalent to an ML functor
exporting a binary tree package, those binary trees are immutable. To
switch views as you seem to be suggesting will suddenly take my well debugged
ML module, and introduce bugs - because the ML program cannot possibly
copy with non-functional programs altering its structures. 

Or, suppose I write a log routine in a language like Prolog. How, within
the structure of a single language, can I make imperative parts of the
program handle backtracking?

I'm not saying that any of this is impossible. In fact, I'm sure that it
could all be done. But, in my opinion, the contortions that would be
necessary to cleanly fit all of this into one language would result in
a bad language. I cannot see how you can possibly fit all of the different
models of computation into a small, clean language. The language would have
to grow into a convoluted mess that would make Ada look good :-)

>Somebody cut me down to size, I just reread the paragraph above and it doesn't
>even seem totally off the wall!  (Um, Mark, I shaved yesterday, so you'll have
>to try another tack. :^)
>

Sorry Andy, I can't cut you down to size. I was never trained for
liposuction.

>Regards, [Ag]  gaynor@paul.rutgers.edu


	<MC>
--
|Mark Craig Carroll: <MC>  |"We the people want it straight for a change;
|Soon-to-be Grad Student at| cos we the people are getting tired of your games;
|University of Delaware    | If you insult us with cheap propaganda; 
|carroll@dewey.udel.edu    | We'll elect a precedent to a state of mind" -Fish

staff@cadlab.sublink.ORG (Alex Martelli) (08/16/90)

austern@ux5.lbl.gov (Matt Austern) writes:

>In article <24043@megaron.cs.arizona.edu>, gudeman@cs (David Gudeman) writes:
>>I am always disturbed by the idea that for a language to have some
>>good property, it must forbid the programmer from doing X.

>I'm sure that this is obvious to everybody, but I think it bears
>repeating: if the "good property" is efficiency, this is often best
>achieved by forbidding the programmer from using certain
>constructions.

>The more assumptions a compiler can make about a program, the better
>it can do the optimization, and in many cases, the only way a compiler
>can safely assume that a programmer doesn't do something is to make it
>illegal.  (The C calling convention is more complicated than the
>FORTRAN calling convention, for example, because C allows recursion.
...it's actually more the ALIASING that kills you, recursion's no
problem on many machines, it's the darned *aliases*... no matter,
technical quibble, your point is well-taken.

Another property which is well served by language restrictions is
'compile-time checkability'.  It SHOULD be obvious - if you want a
ROBUST code, you make it REDUNDANT, i.e. you 'forbid' many patterns
from the stream so you can detect (and possibly correct) errors...!

-- 
Alex Martelli - CAD.LAB s.p.a., v. Stalingrado 45, Bologna, Italia
Email: (work:) staff@cadlab.sublink.org, (home:) alex@am.sublink.org
Phone: (work:) ++39 (51) 371099, (home:) ++39 (51) 250434; 
Fax: ++39 (51) 366964 (work only; any time of day or night).

gudeman@cs.arizona.edu (David Gudeman) (08/17/90)

In article  <27388@nigel.ee.udel.edu> carroll@udel.edu (Mark Carroll <MC>) writes:
>In article <Aug.14.07.41.57.1990.21197@paul.rutgers.edu>,
>>That's a problem in distribution, not definition...
>
>Actually, it's a problem in definition. If the language claims to be
>"universal", but actually requires you to generate non-standard local
>packages to do any useful programming, then it isn't really universal.

I believe the idea is to have lots of standard packages available, and
to have some sort of standards and distribution system that
continually updates packages.  This is really just a formalization of
the process of natural language evolution.  I expect that a universal
programming language will have to be more formal than a natural
language, at least in a few details...

>Routines/modules/packages from other languages would be called with the
>use of constructs proper to the language in use. So a call to routine A
>written in Pascal will look like a call to function A in ML.

This is more-or-less the way things are heading now.  The problem is
that it requires programmers to know dozens of different languages
with different syntaxes, different special-cases, and many different
obscure rules.  "Lets's see, does Ada use FOR or LOOP for that
construction?", "do I seperate the fields with comma or semicolon?",
"Can I alias the arguments to that procedure?".  What a nightmare.
And the majority of these rule differences are arbitrary, making them
even harder to remember.

>How can we realisticly unify the models of Pascal, Smalltalk, and ML?

It's been pretty much done already, since Smalltalk subsumes both
Pascal and ML (modulo built-in functions that are present in ML but
not in Smalltalk).

>If, as a part of a program, I have something equivalent to an ML functor
>exporting a binary tree package, those binary trees are immutable. To
>switch views as you seem to be suggesting will suddenly take my well debugged
>ML module, and introduce bugs - because the ML program cannot possibly
>copy with non-functional programs altering its structures. 

Why not?  It is trivial to implement applicative data structures in
Smalltalk (or almost any other non-applicative language for that
matter).  The fact that non-applicative data is available does not
effect the nature of an applicative ADT.

>Or, suppose I write a log routine in a language like Prolog. How, within
>the structure of a single language, can I make imperative parts of the
>program handle backtracking?

Icon is an imperative language with backtracking.  It is a beautiful
example of how dissimilar paradigms can be combined.
-- 
					David Gudeman
Department of Computer Science
The University of Arizona        gudeman@cs.arizona.edu
Tucson, AZ 85721                 noao!arizona!gudeman

news@usc.edu (08/19/90)

In article <6465@helios.ee.lbl.gov> austern@ux5.lbl.gov (Matt Austern) writes:

   In article <24043@megaron.cs.arizona.edu>, gudeman@cs (David Gudeman) writes:
   >I am always disturbed by the idea that for a language to have some
   >good property, it must forbid the programmer from doing X.

not to be confused with 'If some other language allows Y, I must
implement it in the same fashion.'

   I'm sure that this is obvious to everybody, but I think it bears
   repeating: if the "good property" is efficiency, this is often best
   achieved by forbidding the programmer from using certain
   constructions.

by forbidding?  hmm.. true if you note the qualifier "often best"

   The more assumptions a compiler can make about a program, the better
   it can do the optimization, and in many cases, the only way a compiler
   can safely assume that a programmer doesn't do something is to make it
   illegal.  (The C calling convention is more complicated than the
   FORTRAN calling convention, for example, because C allows recursion.
   Can you really imagine a C compiler/linker that can prove that a
   program has no recursive calls, and use a simpler calling convention
   for such programs?)

It is not a requirement that a language require ___ to be illegal.
'Illegal' generally means that a good implementation has not been
thought of, or at least has not been implemented.  For example, with
recursion, if you were implementing C on a machine where recursion was
expensive, would you not at least check the simple cases to see if a
simpler calling protocol could be used?

C has a problem where you can take a address of an object, pass it
around globally, and every function you call might have access to it.
There are, however, things you can do to trace the propagation of such
addresses.  In the general case this is too hard to do, but in many
simple cases the compiler could be made to recognize a no-aliasing
condition, and take advantage of it.  (e.g. address always localized,
and only passed to pure functions, or easier, address of object never
used except within the context of a single line: a[i] or such.)
Admittedly, this does require 'global' analysis, and in the most
general case (globals in large programs) this might become quite
impractical.

I'm trying to say that clever implementation can get around many
considerations which would otherwise impair a language.

   There really is a tension between flexibility and ease of
   optimization, and the resolution isn't necessarily the same for all
   purposes. 

true :)

wulkwa

carroll@udel.edu (Mark Carroll <MC>) (08/20/90)

In article <24281@megaron.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>In article  <27388@nigel.ee.udel.edu> carroll@udel.edu (Mark Carroll <MC>) writes:
>>How can we realisticly unify the models of Pascal, Smalltalk, and ML?
>
>It's been pretty much done already, since Smalltalk subsumes both
>Pascal and ML (modulo built-in functions that are present in ML but
>not in Smalltalk).
>

I think that this is a completely ludicrous statement. Smalltalk
does NOT subsume ML; ML is a strongly statically-typed language where most
values are immutable. Smalltalk is a dynamically typed language which
is completely dependant on local state. An ML program models lambda
calculus expansion; "objects" are represented by immutable values
which "stream" to handle time, creating a system where objects values
never change, but identity does; Smalltalk programs model OO automata
with object identity; an objects IDENTITY never changes, but its
value does. They may have equivalent expressiveness, but neither
one subsubes the other.

>
>>Or, suppose I write a log routine in a language like Prolog. How, within
>>the structure of a single language, can I make imperative parts of the
>>program handle backtracking?
>
>Icon is an imperative language with backtracking.  It is a beautiful
>example of how dissimilar paradigms can be combined.

True. I deserved to be shot down on that one; if I had thought about
it, I would have remembered Icon.


>-- 
>					David Gudeman
--
|Mark Craig Carroll: <MC>  |"We the people want it straight for a change;
|Soon-to-be Grad Student at| cos we the people are getting tired of your games;
|University of Delaware    | If you insult us with cheap propaganda; 
|carroll@udel.edu          | We'll elect a precedent to a state of mind" -Fish

gudeman@cs.arizona.edu (David Gudeman) (08/20/90)

In article  <27882@nigel.ee.udel.edu> carroll@udel.edu (Mark Carroll <MC>) writes:
>In article <24281@megaron.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>>
>>It's been pretty much done already, since Smalltalk subsumes both
>>Pascal and ML (modulo built-in functions that are present in ML but
>>not in Smalltalk).
>>
>
>I think that this is a completely ludicrous statement. Smalltalk
>does NOT subsume ML; ML is a strongly statically-typed language where most
>values are immutable. Smalltalk is a dynamically typed language which
>is completely dependant on local state. [... more buzz words ...]

All of this is very nice, but it doesn't tell me that there is any
substantial difference between Smalltalk and ML.  It only tells me
that the in the traditional ways of viewing Smalltalk and ML, there
are some features that are different.  However, any ML program can be
translated into Smalltalk with a simple syntactic transformation.
Furthermore, it is possible to select a subset of Smalltalk and impute
an ML-like semantics to that subset, such that the ML-like semantics
and the Smalltalk semantics agree.

To the programmer, this means it is possible for a programmer to use
Smalltalk for programming, but imagine that he is using a functional
language with functional semantics.  The reverse is not possible.
-- 
					David Gudeman
Department of Computer Science
The University of Arizona        gudeman@cs.arizona.edu
Tucson, AZ 85721                 noao!arizona!gudeman

morrison@thucydides.cs.uiuc.edu (Vance Morrison) (08/20/90)

>>In article <24281@megaron.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>All of this is very nice, but it doesn't tell me that there is any
>substantial difference between Smalltalk and ML.  It only tells me
>that the in the traditional ways of viewing Smalltalk and ML, there
>are some features that are different.  However, any ML program can be
>translated into Smalltalk with a simple syntactic transformation.
>Furthermore, it is possible to select a subset of Smalltalk and impute
>an ML-like semantics to that subset, such that the ML-like semantics
>and the Smalltalk semantics agree.
>
>To the programmer, this means it is possible for a programmer to use
>Smalltalk for programming, but imagine that he is using a functional
>language with functional semantics.  The reverse is not possible.
				      ^^^^^^^^^^^^^^^^^^^^^^^^^^^

I have to differ with you there.  Whenever you write semantics of a
language what you are doing is specifying a SYNTATIC transformation
from the source langauge to the semantic language.  Thus if I write
the sematics of smalltalk in ML (which certanly is possible), I have
just written a description that can be used to convert Smalltalk into
ML.  Thus the programmer 'thinks' he is writing smalltalk but is 
'really' writting ML.  

The bottom line is that EVERYTHING a computer does is syntatic, so
if a computer can convert from one langage to another, then one
language can be defined in terms of the other.  Unfortuately the
reverse is almost always true, so you can't say that one language
is 'better' or 'more expressive' based on that arguement.

I think an important point that is often overlooked is that what a
language prohibits is at least as important as what a language 
provides.  The more constrained a language is, the easier it is
to reason about.  For example, in ML I can be sure my program is
type safe, and that I will never have aliasing problems (without
EVER seeing the program).  I can't do this in Smalltalk.  Now
granted, you could restrict yourself to a restricted form of Smalltalk
so that you could guarentee these properties in your smalltalk program,
but what you have just done is convert Smalltalk to ML.  Either you
always use the subset in which case you might as well be using ML,
or you don't in which case you can no longer reason simply about 
the program.  

Thus there is a continuous tradeoff between expressiveness and the
abilty to reason easily about your code.  In general for a particular
application you want to use the most RESTRICTIVE language that allows
you to express what you want to describe.   This way you get the most
'free' information about the behavior of your code.  Obviously
one language can't do this, but you can imagine a family of 'nested'
languages that might provide a good approximation.

Vance Morrison
Univ. of Illinois, Urbana-Champaign

gudeman@cs.arizona.edu (David Gudeman) (08/21/90)

In article  <1990Aug20.143429.7659@ux1.cso.uiuc.edu> morrison@thucydides.cs.uiuc.edu (Vance Morrison) writes:
>>>In article <24281@megaron.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>>..any ML program can be
>>translated into Smalltalk with a simple syntactic transformation.
                                   ^^^^^^
>>Furthermore, it is possible to select a subset of Smalltalk and
>>impute an ML-like semantics to that subset
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>...  The reverse is not possible.
>      ^^^^^^^^^^^^^^^^^^^^^^^^^^^

>I have to differ with you there.  Whenever you write semantics of a
>language what you are doing is specifying a SYNTATIC transformation
>from the source langauge to the semantic language.  Thus if I write
>the sematics of smalltalk in ML (which certanly is possible), I have
>just written a description that can be used to convert Smalltalk into
>ML.

The semantic description you are talking about is far from simple, and
has to specify every production in the grammer.  The syntactic change
from ML to Smalltalk, however can be specified as a general
tranformation.  Also, there is no subset of ML that corresponds to
Smalltalk.

All of my comments have been informal (they _can_ be formalized), but
the intuition can be given fairly clearly without talking about
homomorphisms and functors:  To an ML expression

  fun(arg1, arglist)

there corresponds a Smalltalk expression

  arg1 fun: arglist

where there are direct correspondences between the various parts of
the expression, and the Smalltalk expressions have no side-effects.
However, any side-effecting operation in Smalltalk has no
corresponding operation in ML.
-- 
					David Gudeman
Department of Computer Science
The University of Arizona        gudeman@cs.arizona.edu
Tucson, AZ 85721                 noao!arizona!gudeman

morrison@thucydides.cs.uiuc.edu (Vance Morrison) (08/21/90)

In article <24384@megaron.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>where there are direct correspondences between the various parts of
>the expression, and the Smalltalk expressions have no side-effects.
>However, any side-effecting operation in Smalltalk has no
>corresponding operation in ML.

I agree with you that it is far easier to write a ML to Smalltalk converter
the the reverse, and I agree that intuitively this means that Smalltalk is
a superset (more expressive) then ML.   I only wanted to point out that this
is not a rigorous statement since you can also write an Smalltalk to ML
converter.  Thus to make a rigourous statement we would have to precisely
define 'easier', which looks messy at best.

Also I wanted to point out that I believe the idea of finding 'bigger' and
'bigger' languages that encompass other languages in this way is not 
productive.  The reason (as I stated earlier), is that restricting a language 
is often a very useful thing.  It allows you to reason more simply about
your program.  

Sure there is a subset of Smalltalk that is isomorphic to ML.   But this does
not allow me to reason about my programs (type check etc) in the simple way
I can with ML unless I confine myself to that subset exclusively (in which
case I am really writing my program in ML!).   Introducing more general 
smalltalk code that causes side effects can destroy the simple properties
of the whole program INCLUDING the code written in the subset.   Thus having
a very expressive language alone is not the solution.

Thus I believe the problem to solve is learning how to nest languages
gracefully.   In this case of the imperative-functional issue, this defining
semantics in such a way that imperative parts of the program can not 
destroy the nice properties of the function parts.  

Vance Morrison
Univ of Illinois

peter@ficc.ferranti.com (Peter da Silva) (08/21/90)

In article <1990Aug20.220332.21135@ux1.cso.uiuc.edu> morrison@thucydides.cs.uiuc.edu.UUCP (Vance Morrison) writes:
> Introducing more general smalltalk code that causes side effects can
> destroy the simple properties of the whole program INCLUDING the code
> written in the subset.

Couldn't you use the more restricted code to produce objects that can
be reasoned about in the way you're describing? This would let you build
reliable objects, which would eliminate a large potential source of bugs
in a larger program using the whole language?
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
peter@ferranti.com

morrison@thucydides.cs.uiuc.edu (Vance Morrison) (08/22/90)

In article <MYC5VU9@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:

>> Introducing more general smalltalk code that causes side effects can
>> destroy the simple properties of the whole program INCLUDING the code
>> written in the subset.
>
>Couldn't you use the more restricted code to produce objects that can
>be reasoned about in the way you're describing? This would let you build
>reliable objects, which would eliminate a large potential source of bugs
>in a larger program using the whole language?

This is the idea of nesting I was talking about.  Using a restricted
language to specify the things you can in that subset, but using a more
expressive superset when necessary.   

Unfortunately doing this is not as easy as you may think.  The problem
is that very often the restricted language enjoys the nice properties
is has because of a GLOBAL constraint.  Thus the minute you introduce
ANY code that is not in the restricted language the reasoning gets
more complicated.

A example of this is functional languages.  Any imperative feature can 
make otherwise completely functional code have side effects.  Still
it is not hopeless, by keeping track of what code is purely functional
and what code is not, (which can be done at compile time) you can
salvage alot (these are called effect systems, FX-80 is an example
of such a system).  Data structures pose a problem since we
often want to reason about pieces of them differently, but there
is hope.  

I guess my point is that the language support (type checking, effect
checking and other compile time reasoning systems), are probably
a lot more important than the language itself.   Saying something
like 'if you want to do applicative programming, simply use this subset'
is not enough, because without compiler support, I will never know
if what I wrote is really in the subset (perhaps I missed a crucial
semicolon (:-).  Even I am sure that what I wrote is in the subset
I want to be SURE of its behavior when I combine it with more
powerful constructs in the superset.  Hopefully it and can be 
reasoned about as a black box abstraction in the larger language.  
We want to design languages carefully so they have this nice nesting
feature.  

Vance

phys169@canterbury.ac.nz (08/10/12)

First, a comment on the "Efficient Fortran" thread that led to this discussion: 

It is true that you shouldn't *have* to resort to the inner workings of the
computer, such as the format of real numbers, but sometimes you do, and when
that happens, you shouldn't have to "fight the system" to do simple things like
play with bits in words, simply because the designers held the view that bit
twiddling is fundamentally evil and, besides, they'd put everything into the
language any reasonable programmer could ask for. There are some classes of
problems that need a spot of machine-level fiddling in an otherwise HLL; my
view is that the language should hope to reduce its frequency, but allow it
(albeit with warning bells). Now on with the new topic...

In article <23893@megaron.cs.arizona.edu>, gudeman@cs.arizona.edu (David Gudeman) writes:
> ... possible to design a ``universal'' language of some sort, that makes it 
> unnecessary to use other languages.  
 
In article <1356@fs1.ee.ubc.ca>, mikeb@coho.ee.ubc.ca (Mike Bolotski) writes:
> Here is a counterargument.  Workers in different fields solve problems
> using entirely different languages.  Mathematics is one such language,
> and it includes many "sublanguages" -- continuous variable notation,
> matrix notation, logic, etc.   Circuit diagrams are another language.
> English, with terms specialized to each area,  is another language, used 
> almost exclusively in non-technical fields.
> 
> A "universal" programming language is in a sense equivalent to the claim
> that a single language is appropriate for all areas of study.  
> 
There is a difference between proving a very general-purpose language that
people can use (if they like) for anything, and that one language is most
appropriate for everything (and therefore ought to be used for everything).
There are two good reasons for try to at least *approximate* a universal
language:

 (a) Organisations have difficulties managing projects that use what ever
language is "flavour of the month", e.g. being left over with old software
written in "PLAN", but not having any programmers skilled in the language in X
years time. Even if the whole organisation standardises on one language for all
time, they will have difficulty in finding new programmers, software updates,
etc sooner or later.

 (b) Programmers often have to program with a language not perfectly suited to
a particular job, for practical reasons (e.g. the organisation standardised on
COBOL ten years ago, and isn't about to change simply because one programmer
wants to write a word processor!). More often, it is simply habit - better the
devil you know. Since this sort of thing happens anyway, it would be nice if
the language was flexible - as a universal language would have to be.

But consider a user of something like a Unix command shell, that has set up
lots of aliases for him/herself to do the things that are most often required.
In effect, they are using their own language. Now, if they are sick for a week
and someone else has to use their machine (someone used to their own set of
aliases), then there are problems - either they abandon all the command alises
and use "pure" csh or whatever, or try to understand the previous person's
language. If the tasks to be performed are fairly complex, say special file
update and backup procedures, then they will probably need to understand what
was happenning before for the system to keep functioning. They may even adapt
them with components from their own "language" if they will be working at the
job long enough. 

Think of the manager of such a system; would you enforce everyone to use the
same set of aliases (or no aliases at all)?  Some people like to have 2-letter
abbreviations for the commands they use often. For a beginner this is terrible,
but after a while it is ergonomic. This is not the sort of issue that people
can come down with hard and fast decisions like "never use aliases" or "always
use the shortest abbreviations even if it means beginners suffer for a few
weeks". Of course, the same arguements apply to conventional programming
languages: people are going to want different languages in different
situations, and there are going to be hassels because of this. So don't expect
a "universal programming language" to be *universally applied*. 

This isn't to mean one shouldn't be available, and doesn't mean one can't be
produced. A language I and some other folks down here in NZ are working on,
called NGL (part of eNGLish, and Nth Generation Language), attempts to be a
universal language, by being dynamically redefinable. Further explanation:

(1) Consider again the person with a set of important aliases on a system, away 
 for a while. Suppose that a command, called "weekly_update" was defined in 
 terms of "pure" commands and a handful of other aliases (which, in turn, may
 be defined in terms of other aliases). Suppose you could see the weekly_update
 command file expanded in "pure" commands only. So you only need to know the
 standard language (csh or whatever, in this case), and your own "dialect" of
 it, never other people's. This is something that is easy to understand and
 implement. If you define a "macro" (or whatever name you want to give it) in
 terms of other macros you have defined, you could see the macro listed in
 either pure language components or your other macros (still easy to imagine,
 but a wee bit more tricky to program). Now suppose you can see anyone else's
 macros in terms of your macros. If they define another word for exactly the 
 same command or sequence of commands you have, you see your name for it.

(2) Think back to some versions of BASIC that convert the external source into
 some form of internal p-codes. Data General's Business BASIC is a very good
 example; it converts expressions into RPN, for example. When you list the
 program, you see what you typed in (reformatted nicer, perhaps). But you
 don't need to convert assignments into "LET ... = ..." you could just as
 easily list it in a form like "... := ...;" a la Pascal, or C, or COBOL or
 anything. The same with different "FOR" statement structures - you could see
 the p-codes in just about any language (if the language doesn't have an
 appropriate construct, you could list it as more fundamental operations).
 You could have a team of programmers working on a project using a variety of
 languages (really, all NGL in disguise) and each seeing the whole in their own
 favourite language. 

(3) Imagine a person talking to the computer via a voice recognition unit. In
 all probability, a much wider range of language will be used than when typing
 commands, and English being what it is, the best way to handle that is to let
 the command processor learn as it goes. What has that got to do with the NGL
 programming language? Not a lot at the moment, but that seems to be a logical
 (if distant) progression. 

So, this takes care of the language used in communicating needs to the
computer, but there is obviously more to discuss. Feel free to e-mail me.
What I hope to have explained is that something like a "universal" language is
feasible (sorry to use that word again), but we really need to talk about a 
flexible language, one where you can choose detail or general instructions to
teh system, and then within that, choose the wording and syntax that is most
convenient for you.

Mark Aitchison, University of Canterbury, NZ.