[comp.lang.misc] On whether C has ...

oz@yunexus.yorku.ca (Ozan Yigit) (01/09/91)

In article <24547:Jan822:05:4191@kramden.acf.nyu.edu>
brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes some more:

> On the contrary. Every reference I've found either (1) doesn't define
> first-class; (2) makes ``first-class'' so restrictive that nothing in C
> or Ada can possibly be first-class; or (3) defines ``first-class'' as
> ``can be passed as an argument and [when there are variables] assigned
> to variables.''

Maybe you are only looking for something that is supposed to verify *your*
concept of first-class-ness. Try a copy of Stoy [1]: there is a section in
it about first-class-ness and first-class functions. 

>You have this habit of posting long definitions without verifying that
>they have anything to do with your point.

You have this habit of never quite seeing any other point except your
very own. 

Read what Clinger has to say. 

You said earlier:

>>>Do you have a reference? As always, I'm doing my best to use standard
>>>terminology; I just haven't seen any references that demand a syntactic
>>>restriction like that.

You have been shown at least three references so far on this topic, two
by Ken and one by me. Perhaps your ``standard terminology'' isn't.

>Question 1: When you posted that definition, did you believe that it
>defined ``first-class''? 

Of course. I think it is a convincing definition, albeit somewhat more
elaborate than the street version. 

>Question 2: Do you realize that if the properties you posted are taken
>to define ``first-class,'' then nothing in Ada is ``first-class''?

Irrelevant.

>I conjecture that Scheme advertisers have tried to corrupt the meaning
>of ``first-class,'' by adding properties that make Scheme objects into
>``first-class'' objects while making objects in other languages into
>non-``first-class'' objects.

Please spare me your conjectures. If you want to know more about Scheme,
you know where to find it. 

>... but ``first-class'' wasn't invented with Scheme.

In other words, you would rather subscribe [for whatever reason] to some
other definition of the concept than accept [in my opinion] a more
refined, less restricted concept that is encapsulated in [amongst other
languages] Scheme.

... 
oz
---
[1] Stoy, Joseph E., Denotational Semantics: The Scott-Strachey
Approach to Programming Language Theory, MIT Press, Cambridge, Mass., 1977
---
The king: If there's no meaning	   	    Usenet:    oz@nexus.yorku.ca
in it, that saves a world of trouble        ......!uunet!utai!yunexus!oz
you know, as we needn't try to find any.    Bitnet: oz@[yulibra|yuyetti]
Lewis Carroll (Alice in Wonderland)         Phonet: +1 416 736-5257x3976

kinnersley@kuhub.cc.ukans.edu (Bill Kinnersley) (01/10/91)

In article <27942:Jan902:20:0791@kramden.acf.nyu.edu>, 
   brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
> 
> No. I would rather subscribe to the standard definition. One of the
> flaws of an education in mathematics is that you learn little tolerance
> for attempts to corrupt the standard definitions of words.
> 
Maybe that's one of the flaws in *your* education in mathematics.  Apparently
you were made to memorize and regurgitate a list of definitions at some
point, and thought that that was mathematics.  What you're supposed to have
learned is that definitions are solely for the user's benefit, and it's far
easier to just say what you mean and go on from there.

Never in my life have I seen two mathematicians spend more than five
seconds worrying about the "official" definition of anything.

-- 
--Bill Kinnersley

karl@ima.isc.com (Karl Heuer) (01/11/91)

In article <27770.278aef90@kuhub.cc.ukans.edu> kinnersley@kuhub.cc.ukans.edu (Bill Kinnersley) writes:
>Never in my life have I seen two mathematicians spend more than five
>seconds worrying about the "official" definition of anything.

Apparently you haven't been following the "what is 0^0" thread in sci.math.

Karl W. Z. Heuer (karl@ima.isc.com or uunet!ima!karl), The Walking Lint

brnstnd@kramden.acf.nyu.edu (Dan Bernstein) (01/11/91)

In article <27770.278aef90@kuhub.cc.ukans.edu> kinnersley@kuhub.cc.ukans.edu (Bill Kinnersley) writes:
> In article <27942:Jan902:20:0791@kramden.acf.nyu.edu>, 
>    brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
> > No. I would rather subscribe to the standard definition. One of the
> > flaws of an education in mathematics is that you learn little tolerance
> > for attempts to corrupt the standard definitions of words.
> Maybe that's one of the flaws in *your* education in mathematics.  Apparently
> you were made to memorize and regurgitate a list of definitions at some
> point, and thought that that was mathematics.

Non-mathematicians rarely understand that mathematics is never more than
a step ahead of its definitions. All the great advances in mathematics
were accompanied by advances in terminology and symbolism. Geometry went
nowhere until Euclid made the *definitions* that formalized it. When
Riemann *defined* manifolds he created differential geometry. Now the
Atiyah-Singer theorem is revolutionizing several fields---not because
it's an interesting statement about two different types of index, but
because it *defines* a newly visible structure.

It takes years to introduce a new concept into mathematics. In that
time, several people have probably decided on their own notations and
words for the concept. If the concept becomes popular, each definition
will survive for a while, and papers on the subject have to mention
which definitions they're using. Sometimes competing definitions last
for many decades.

What rarely happens, though, is a *single* term defined in two different
ways in one field. Sure, some people will say ``kernel'' while others
say ``null space''; but neither term is ambiguous. Once somebody had
defined ``null space,'' others using the same concept would either stick
to the *same* term with the *same* definition or introduce a *different*
word with the same definition. There are very few examples of the *same*
word being used with *different* definitions, because it is rare that
two authors will invent the same term independently. (``Graph'' is an
unfortunate counterexample.)

In computer science it's all different. I didn't realize a week ago that
``first-class'' might be meant to imply status compared to other types,
so that first-class objects get all the privileges that any other object
can get. I had seen the standard definition of ``first-class'' in terms
of argument passing and variables, and as a mathematician I assumed that
people wouldn't corrupt a standard definition to suit their purposes.
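
(For concreteness, here is a minimal C sketch of that
argument-passing-and-variables definition at work; the names are
invented for illustration. A function is stored in a variable and
handed to another function, which is all that definition asks for.)

   #include <stdio.h>

   /* a function to be treated as a value */
   static int square(int x) { return x * x; }

   /* a function that accepts another function as an argument */
   static int apply(int (*f)(int), int n) { return f(n); }

   int main(void)
   {
       int (*g)(int) = square;       /* assigned to a variable */
       printf("%d\n", apply(g, 5));  /* passed as an argument; prints 25 */
       return 0;
   }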

I was wrong. Did two computer scientists independently invent
``first-class''? Apparently not. So how did the term acquire more than
one meaning? Either somebody doesn't have any respect for standard
definitions, or else people have been using ``first-class'' without
defining it precisely. This isn't immoral; it's just surprising to a
mathematician.

> Never in my life have I seen two mathematicians spend more than five
> seconds worrying about the "official" definition of anything.

Because the *definitions* are so rigid that there's nothing to worry
about. Apparently this isn't true in at least some fields of computer
science.

---Dan

chased@rbbb.Eng.Sun.COM (David Chase) (01/11/91)

karl@ima.isc.com (Karl Heuer) writes:
>kinnersley@kuhub.cc.ukans.edu (Bill Kinnersley) writes:
>>Never in my life have I seen two mathematicians spend more than five
>>seconds worrying about the "official" definition of anything.

>Apparently you haven't been following the "what is 0^0" thread in sci.math.

Do you have any evidence that the people arguing there are
mathematicians?  This is the net, after all.  "I'm not really a
doctor, but I play one on USENET."

David Chase
Sun Microsystems

karl@ima.isc.com (Karl Heuer) (01/11/91)

chased@rbbb.Eng.Sun.COM (David Chase) writes:
>karl@ima.isc.com (Karl Heuer) writes:
>>kinnersley@kuhub.cc.ukans.edu (Bill Kinnersley) writes:
>>>Never in my life have I seen two mathematicians spend more than five
>>>seconds worrying about the "official" definition of anything.
>
>>Apparently you haven't been following the "what is 0^0" thread in sci.math.
>
>Do you have any evidence that the people arguing there are mathematicians?

Oh good, now we get to argue over the official definition of "mathematician".

Seriously, at least some of the people who've at least contributed to the
discussion are well-known mathematicians.  I joined the thread late, when
nearly everybody was already sick of it, so I haven't noticed whether any of
the known mathematicians spent any significant time debating it.

Karl W. Z. Heuer (karl@ima.isc.com or uunet!ima!karl), The Walking Lint

jeff@aiai.ed.ac.uk (Jeff Dalton) (02/01/91)

In article <22345:Jan1021:30:4591@kramden.acf.nyu.edu> brnstnd@kramden.acf.nyu.edu (Dan Bernstein) writes:
>In computer science it's all different. I didn't realize a week ago that
>``first-class'' might be meant to imply status compared to other types,
>so that first-class objects get all the privileges that any other object
>can get. I had seen the standard definition of ``first-class'' in terms
>of argument passing and variables, and as a mathematician I assumed that
>people wouldn't corrupt a standard definition to suit their purposes.

But it _isn't_ the standard definition.  That it is given in lots
of documents doesn't make it right.  The earliest discussions of
first-class that I have found (well before Scheme) are basically
making the point that there shouldn't be arbitrary restrictions
on some objects and not others.  The "bill of rights" idea has
been used since at least 1968.  And the presence of seemingly odd,
perhaps language-specific, rights goes back just as far.  In "The
Design Philosophy of Pop-2", for example, one of the rights is
that all objects can be compared for equality.

The right to be anonymous came from reflection on the differing
treatment of different kinds of objects.  In most languages, numbers,
strings, and structure instances (objects of some struct type) don't
have names.  If some other kinds of objects, such as functions or
arrays, do need names, then it makes sense to say they are in a
sense 2nd-class citizens.
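
A minimal C sketch of the point, with invented names (the Scheme form
in the comment is the usual lambda notation):

   /* A Scheme function needs no name:   (lambda (x) (* x x))          */
   /* A C function must be defined, and named, at file scope before    */
   /* a pointer to it can even be taken:                               */

   static int square(int x) { return x * x; }

   static int (*f)(int) = square;  /* the variable f is incidental;       */
                                   /* the function itself had to be named */

   static int n = 5 * 5;           /* a number, by contrast, never needs  */
                                   /* a name of its own                   */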

To pick an example other than C, we can try Basic.  In many versions
of Basic, arrays must be established by a DIM statement, which gives
them names.  The total number of arrays is therefore fixed by the
program source text, and new arrays cannot be created dynamically.
And yet in some of these dialects, arrays can be passed (by reference)
to procedures and returned from them.  Assignment to corresponding
elements is also possible as in:

   MAT A = B

I think it is fair to say that these arrays are not 1st-class,
especially compared to how some of these same Basics treat strings.
But suppose we add the ability to assign whole arrays to variables
(rather than element-by-element from one array to another).  Would
they then be 1st-class?  Well, they aren't treated as well as strings.
There's still no way to produce a completely new array (i.e., to
allocate one), and it is exactly this sort of differential treatment
that talk of 1st and 2nd class is meant to express.
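
C, as it happens, shows the same kind of differential treatment; here
is a minimal sketch, with invented names, contrasting whole-array
assignment with whole-struct assignment:

   /* arrays may not be assigned (or returned) whole, but a struct    */
   /* that wraps one may                                              */
   struct vec { int a[10]; };

   void demo(void)
   {
       int x[10], y[10];
       struct vec u = { {0} }, v;

       x[0] = y[0] = 0;  /* element-at-a-time is the best arrays get */
       /* x = y; */      /* illegal: an array is not assignable      */
       v = u;            /* legal: the struct wrapper confers the    */
                         /* right to be copied wholesale             */
   }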