[comp.lang.scheme] in defense of C

gjc@mitech.COM (03/02/90)

In defense of C?
(Or an apology for small extensible languages with minimal overhead
 and minimal required runtime libraries).

The key here is the phrase "equivalent program in Pascal" coupled
with the extremely important suggestion I made, which is that C
could be used like you use lisp.

You say C has no array bounds checking. (In a way, big deal,
certainly the Lispmachines had extremely good low-level checking
of that nature, but it didn't keep the software or user from
being able to "... let machines crash or behave strangely" as Steele puts it).

It is so easy to build up error-checking routines from
non-error-checking primitives. Is that not what we do in lisp? (Maybe
we only *used* to do it, a lost art?) Here are some declarations from
some code I use every day:

struct bitarray
{long width; long height; char *data;};

int bitaref(struct bitarray *,int,int);

void bitaset(struct bitarray *,int,int,int);

struct bitarray *cons_bitarray(long,long);

Now, with prototype-enforcement there is absolutely no way my program is
going to crash or behave badly if I use these three guys, cons_bitarray,
bitaref and bitaset. 

The prototype-enforcement makes sure I'm not calling these on data
that are not bit arrays and integers. My C-compiler can inline these
and remove redundant error checking in many cases.

I claim: A good engineer can generate a much richer and more useful set of
subroutines of this nature than is found in ANY LANGUAGE DESIGNED BY COMMITTEE.

... especially compared to those languages whose reference manuals
start overflowing into multiple volumes!

On the subject of I/O primitives. "Gets" is one of those line-at-a-time
(what I called mainframe-style) procedures. Not what I had in mind (also,
not really what you should be calling primitive). Better to think in terms
of getc and putc, or read and write, or XGetNextEvent.
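For instance, a bounded line reader -- the safe routine that gets famously is not -- can be built from the getc primitive along these lines. (read_line is a hypothetical name and this is a sketch, not any standard function.)

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Read one line from fp into buf, never writing more than size bytes.
   The newline is consumed but not stored.  Returns NULL at end of file
   when nothing was read. */
static char *read_line(FILE *fp, char *buf, size_t size)
{
    size_t n = 0;
    int c = EOF;
    if (size == 0) return NULL;
    while (n + 1 < size && (c = getc(fp)) != EOF && c != '\n')
        buf[n++] = (char)c;
    buf[n] = '\0';
    return (n == 0 && c == EOF) ? NULL : buf;
}
```

Unlike gets, the caller's buffer size bounds every write, yet the whole thing is a few lines over the character-at-a-time primitive.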

-gjc

p.s. "... let machines crash or behave strangely", personally *no* I
don't use the Macintosh and try to avoid writing Unix kernel code.

mccalpin@vax1.acs.udel.EDU (John D Mccalpin) (03/02/90)

In article <9003012059.AA11786@schizo.samsung.com> gjc@mitech.com writes:
>In defense of C?
>(Or an apology for small extensible languages with minimal overhead
> and minimal required runtime libraries).

>It is so easy to build up error-checking routines from
>non-error-checking primitives.
>Here are some declarations from some code I use every day:
	[...example deleted...]
>Now, with prototype-enforcement there is absolutely no way my program is
>going to crash or behave badly if I use these three guys, cons_bitarray,
>bitaref and bitaset. 

Maybe a step farther is the TYPES package by Saul Youssef at Florida
State which provides an object-oriented programming environment within
a FORTRAN implementation.  By making careful use of arrays pretending
to be structures, it is possible to write fairly robust code even though
the language semantics provide very little type-checking protection for
such constructs.  The TYPES package provides queues, ordered sets, lists,
etc., all accessible and operated on by FORTRAN subroutines.

Inquiries to youssef@scri1.scri.fsu.edu

>I claim: A good engineer can generate a much richer and more useful set of
>subroutines of this nature than found in ANY LANGUAGE DESIGNED BY COMMITTEE.

But should every engineer *have to* generate all of those subroutines?
Certainly it is possible to write your own error-checking code and
such, but many of us were hired to get work done, not to individually
re-create safe programming languages/environments.

>... especially compared to those languages whose reference manuals
>start overflowing into multiple volumes!          -gjc

But it is certainly possible to write a fairly simple language which
provides a good degree of reliability and which does not allow many
of the most common forms of errors that C and FORTRAN are full of.
A good example is Turing-Plus, which combines a fairly restrictive
and safe language with the ability to do all those nasty things that
C and FORTRAN programmers love so....  You just have to be very specific
about telling the machine that you are doing something tricky.
Ditto for Modula-3, as far as I can tell. (I have not yet gotten
it operational on my SparcStation.)

Not all strongly-typed and checked languages need to be Ada....
-- 
John D. McCalpin - mccalpin@vax1.acs.udel.edu
		   mccalpin@delocn.udel.edu
		   mccalpin@scri1.scri.fsu.edu

jeff@aiai.ed.ac.uk (Jeff Dalton) (03/02/90)

In article <9003012059.AA11786@schizo.samsung.com> gjc@mitech.com writes:
>The key here is the phrase "equivalent program in Pascal" coupled
>with the extremely important suggestion I made, which is that C
>could be used like you use lisp.

Not only that, who says C can't have array bounds checking?  As far
as I can tell, the lack of it is just an implementation tradition.
Pointers would have to be more complex, but if you want bounds
checking enough you'd presumably be willing to pay.
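The "more complex" pointer might look like this in C itself: a fat pointer that carries its bounds, with every access checked. (checked_arr and its accessors are hypothetical names; this is a sketch of the idea, not any real implementation.)

```c
#include <assert.h>
#include <stddef.h>

/* A "fat pointer": the base address travels with the element count,
   so accesses can be checked -- the bounds check C normally omits. */
struct checked_arr { int *base; size_t len; };

static int ca_get(struct checked_arr a, size_t i)
{
    assert(i < a.len);          /* bounds check on every read */
    return a.base[i];
}

static void ca_set(struct checked_arr a, size_t i, int v)
{
    assert(i < a.len);          /* bounds check on every write */
    a.base[i] = v;
}
```

The cost is exactly what Dalton suggests: pointers double in size and each access pays a compare, which is the price of the checking.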

Besides, since when does Lisp make such checks?  Sure, most Lisps
probably check array bounds, but most do not check whether CAR and
CDR are really being applied to conses (just for example).

As GJC notes, Lisp programmers have developed ways to deal with
this, as have C programmers.

>You say C has no array bounds checking. (In a way, big deal,
>certainly the Lispmachines had extremely good low-level checking
>of that nature, but it didn't keep the software or user from
>being able to "... let machines crash or behave strangely" as Steele puts it).

That too.

lyn@ZURICH.AI.MIT.EDU (Franklyn Turbak) (03/03/90)

In article <5799@udccvax1.acs.udel.EDU>, mccalpin@vax1.acs.udel.edu writes:

   >   >I claim: A good engineer can generate a much richer and more useful set of
   >   >subroutines of this nature than found in ANY LANGUAGE DESIGNED BY COMMITTEE.
   >
   > But should every engineer *have*to* generate all of those subroutines?
   > Certainly it is possible to write your own error-checking code and
   > such, but many of us were hired to get work done, not to individually
   > re-created safe programming languages/environments.

Indeed, the expressiveness of a language depends not only on what the
language allows you to say but also on what it doesn't force you to
say.  Given any language with reasonable enough abstraction facilities
(chiefly procedures and macros) it is *possible* to write clean and
safe code, but not necessarily *easy* to do so.  

The hallmark of a language is the extent to which it abstracts over
common programming patterns.  If Scheme didn't support SET! or
CALL-WITH-CURRENT-CONTINUATION it would be possible to simulate them
by passing an explicit store and continuation to every procedure (in
the denotational semantics "style"), but programs would become
cumbersome to write and impenetrable to read.  SET! and
CALL-WITH-CURRENT-CONTINUATION are useful language features exactly
because they embed into the language patterns which otherwise would
have to be created again and again by hand.

A similar argument can be made for a variety of other language
features: types, data abstraction facilities, error-handling
facilities, etc. Sure, it's always possible to simulate these in some
explicit fashion; but do we want to *have* to?

- Lyn -

lgm@cbnewsc.ATT.COM (lawrence.g.mayka) (03/05/90)

[I have added comp.lang.lisp to the newsgroup list.]

In article <1903@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>Besides, since when does Lisp make such checks.  Sure, most Lisps
>probably check array bounds, but most do not check whether CAR and
>CDR are really being applied to conses (just for example).

On Lisp-directed processor architectures, dynamic type checks are
(typically) uniformly applied, often in parallel with the
computation of the most-common-case result.  On conventional (C-
and Fortran-directed) architectures, Common Lisp vendors typically
offer a range of safety/efficiency tradeoffs.  One can then choose
a tradeoff according to the needs, or the stage of development, of
one's application.

One of the most important quality measures of a Common Lisp
compiler is the level of safety it supports without an undue
sacrifice in execution speed.  The better CL implementations score
quite highly on this scale, especially on newer processor
architectures such as SPARC.  For example, the better compilers
indeed ensure that CAR and CDR are only applied to lists, without
a loss of execution speed and even though a CL implementation must
permit one to apply CAR and CDR to the symbol NIL.

>As GJC notes, Lisp programmers have developed ways to deal with
>this, as have C programmers.

I venture to say that generally, Common Lisp programmers compile
unsafe code only when they have to (e.g., when the absolute
maximum execution speed is called for, or when forced by
circumstances to use an inferior CL compiler), not because they
like it.  Unsafe code is not an integral part of the culture or a
badge of pride as, one might say, it seems to be for C.


	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@ihlpf.att.com

Standard disclaimer.

jeff@aiai.ed.ac.uk (Jeff Dalton) (03/07/90)

In article <14112@cbnewsc.ATT.COM> lgm@cbnewsc.ATT.COM (lawrence.g.mayka,ihp,) writes:
 >One of the most important quality measures of a Common Lisp
 >compiler is the level of safety it supports without an undue
 >sacrifice in execution speed.  The better CL implementations score
 >quite highly on this scale, especially on newer processor
 >architectures such as SPARC.  For example, the better compilers
 >indeed ensure that CAR and CDR are only applied to lists, without
 >a loss of execution speed and even though a CL implementation must
 >permit one to apply CAR and CDR to the symbol NIL.

You say this as if it were typical of better compilers on machines
other than SPARCs, such as, maybe, 68020s.  Can they really have safe
CARs and CDRs, without loss of speed, on a 68020?  If so, I would
expect that safety to remain even if I set safety=0, but that doesn't
seem to be what happens.  Moreover, how many of the SPARC compilers
can do this?  I suspect it's only one or two of them.

-- Jeff

pepke@gw.scri.fsu.edu (Eric Pepke) (03/08/90)

In article <1942@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
> You say this as if it were typical of better compilers on machines
> other than SPARCs, such as, maybe, 68020s.  Can they really have safe
> CARs and CDRs, without loss of speed, on a 68020?

I don't know about the internals of any LISP system other than the ones I 
have written.  In the one I am now writing for the 680x0, one can have 
safe CARs and CDRs without loss of speed.  One has to test to see if it is 
(1) a valid list, or (2) NIL, anyway.  So, one just makes that a test for 
(1) a valid list, or (2) anything else.  In my system, that's testing a 
single bit.  In case 1, do the job.  In case 2, return NIL.
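Pepke's scheme might be sketched in C roughly as follows. The tag layout and names are assumptions, chosen only to show the one-bit test; note that a non-list argument quietly yields NIL rather than signalling an error.

```c
#include <assert.h>
#include <stdint.h>

/* Assumed tagging: values whose low bit is set are conses; everything
   else (including NIL, represented as 0) fails the one-bit test. */
#define TAG_CONS ((uintptr_t)1)
#define NIL      ((uintptr_t)0)

typedef uintptr_t lispval;

struct cell { lispval car, cdr; };

static lispval make_cons(struct cell *c, lispval car, lispval cdr)
{
    c->car = car;
    c->cdr = cdr;
    return (lispval)(uintptr_t)c | TAG_CONS;   /* cells are aligned */
}

static lispval safe_car(lispval v)
{
    if (v & TAG_CONS)   /* case 1: a valid cons -- do the job */
        return ((struct cell *)(v & ~TAG_CONS))->car;
    return NIL;         /* case 2: anything else -- return NIL */
}
```

The check is a single bit test, as described; the catch, raised later in the thread, is that CAR of a non-list silently returns NIL instead of an error.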

Eric Pepke                                    INTERNET: pepke@gw.scri.fsu.edu
Supercomputer Computations Research Institute MFENET:   pepke@fsu
Florida State University                      SPAN:     scri::pepke
Tallahassee, FL 32306-4052                    BITNET:   pepke@fsu

Disclaimer: My employers seldom even LISTEN to my opinions.
Meta-disclaimer: Any society that needs disclaimers has too many lawyers.

lgm@cbnewsc.ATT.COM (lawrence.g.mayka) (03/08/90)

In article <1942@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>In article <14112@cbnewsc.ATT.COM> lgm@cbnewsc.ATT.COM (lawrence.g.mayka,ihp,) writes:
> >One of the most important quality measures of a Common Lisp
> >compiler is the level of safety it supports without an undue
> >sacrifice in execution speed.  The better CL implementations score
> >quite highly on this scale, especially on newer processor
> >architectures such as SPARC.  For example, the better compilers
> >indeed ensure that CAR and CDR are only applied to lists, without
> >a loss of execution speed and even though a CL implementation must
> >permit one to apply CAR and CDR to the symbol NIL.
>
>You say this as if it were typical of better compilers on machines
>other than SPARCs, such as, maybe, 68020s.  Can they really have safe
>CARs and CDRs, without loss of speed, on a 68020?  If so, I would
>expect that safety to remain even if I set safety=0, but that doesn't
>seem to be what happens.  Moreover, how many of the SPARC compilers
>can do this?  I suspect it's only one or two of them.

You may be right.  The particular compilation "trick" I've seen
relies on the processor hardware to trap a fullword-size reference
to a machine address not aligned on a fullword boundary.  But if,
as I think is true, the 68020 silently satisfies unaligned pointer
references, this particular technique will indeed be ineffective
on that architecture.
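The trick might be sketched as follows. The tag values are assumptions, and the code only verifies the alignment arithmetic in software; whether a misaligned fullword load actually traps is exactly the hardware-dependent part Mayka describes.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Assumed low-tag scheme: cons pointers carry tag 1 in the low bits.
   CAR subtracts the tag before loading, so a correctly tagged cons
   yields an aligned address, while a non-cons value yields a
   misaligned one -- which trapping hardware would catch for free. */
enum { TAG_CONS = 1, TAG_BITS = 3 };

typedef struct cons { uintptr_t car, cdr; } cons_t;

static uintptr_t tag_cons(cons_t *c)
{
    return (uintptr_t)c | TAG_CONS;
}

static uintptr_t *car_addr(uintptr_t v)
{
    return (uintptr_t *)(v - TAG_CONS);   /* aligned iff v was a cons */
}
```

On a machine that faults on unaligned fullword loads, the type check costs nothing in the common case; on a 68020, which satisfies such loads silently, the bad load just returns garbage, which is Mayka's point.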

All the more reason to upgrade to a RISC-based computer...


	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@ihlpf.att.com

Standard disclaimer.

moore%cdr.utah.edu@cs.utah.edu (Tim Moore) (03/09/90)

In article <542@fsu.scri.fsu.edu> pepke@gw.scri.fsu.edu (Eric Pepke) writes:
>In article <1942@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>> You say this as if it were typical of better compilers on machines
>> other than SPARCs, such as, maybe, 68020s.  Can they really have safe
>> CARs and CDRs, without loss of speed, on a 68020?
>
>I don't know about the internals of any LISP system other than the ones I 
>have written.  In the one I am now writing for the 680x0, one can have 
>safe CARs and CDRs without loss of speed.  One has to test to see if it is 
>(1) a valid list, or (2) NIL, anyway.  So, one just makes that a test for 
>(1) a valid list, or (2) anything else.  In my system, that's testing a 
>single bit.  In case 1, do the job.  In case 2, return NIL.
>

That's the loss of speed that Jeff is talking about. If you assume
that the argument to CAR or CDR is a cons cell or NIL and you have a
low-tags type scheme, then those operations are 1 instruction long on
a 680x0 (or even less: a base-displacement addressing mode). The NIL
case can be handled with a bit of symbol table trickery. A "safe"
C{A,D}R that checks its argument is going to be slower.
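The symbol-table trickery Moore mentions might look like this in C: allocate NIL as a cell whose car and cdr slots both contain NIL itself, so the unchecked one-load CAR/CDR is also correct on NIL with no test at all. (A sketch; the representation is an assumption for illustration.)

```c
#include <assert.h>

typedef struct cell { struct cell *car, *cdr; } cell;

/* NIL is itself a cell whose car and cdr slots point back to NIL,
   so (car nil) == nil and (cdr nil) == nil fall out of the plain load. */
static cell nil_cell = { &nil_cell, &nil_cell };
#define NIL (&nil_cell)

static cell *unchecked_car(cell *c) { return c->car; }  /* one load, no test */
static cell *unchecked_cdr(cell *c) { return c->cdr; }
```

This handles the NIL case for free, but it is still unsafe in Moore's sense: applied to a non-cons it dereferences garbage, which is why a genuinely checking CAR must be slower.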

Tim Moore                     moore@cs.utah.edu {bellcore,hplabs}!utah-cs!moore
"Ah, youth. Ah, statute of limitations."
		-John Waters

gateley@m2.csc.ti.com (John Gateley) (03/09/90)

In article <542@fsu.scri.fsu.edu> pepke@gw.scri.fsu.edu (Eric Pepke) writes:
|In the one I am now writing for the 680x0, one can have 
|safe CARs and CDRs without loss of speed.
|So, one just makes that a test for 
|(1) a valid list, or (2) anything else.  In my system, that's testing a 
|single bit.  In case 1, do the job.  In case 2, return NIL.
|Eric Pepke                                    INTERNET: pepke@gw.scri.fsu.edu

But this is NOT safe: consider the following code:

(defun foo ()
  (if (car 3)
      (tear-down-the-berlin-wall)
      (bomb-the-soviets)))

Your implementation will bomb the soviets! I understand that your
implementation will not crash due to garbage pointers, but I
don't think "safe" is a good term to apply here.

John
gateley@m2.csc.ti.com

pepke@gw.scri.fsu.edu (Eric Pepke) (03/09/90)

Sorry; I misunderstood the controversy.  I feel like Emily Litella.  
"Never mind!"

Eric Pepke                                    INTERNET: pepke@gw.scri.fsu.edu
Supercomputer Computations Research Institute MFENET:   pepke@fsu
Florida State University                      SPAN:     scri::pepke
Tallahassee, FL 32306-4052                    BITNET:   pepke@fsu

Disclaimer: My employers seldom even LISTEN to my opinions.
Meta-disclaimer: Any society that needs disclaimers has too many lawyers.

cph@ZURICH.AI.MIT.EDU (Chris Hanson) (03/12/90)

   Date: 8 Mar 90 15:32:50 GMT
   From: Eric Pepke <zaphod.mps.ohio-state.edu!uakari.primate.wisc.edu!uflorida!mephisto!prism!fsu!gw.scri.fsu.edu!pepke@think.com>

   In article <1942@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
   > You say this as if it were typical of better compilers on machines
   > other than SPARCs, such as, maybe, 68020s.  Can they really have safe
   > CARs and CDRs, without loss of speed, on a 68020?

   I don't know about the internals of any LISP system other than the ones I 
   have written.  In the one I am now writing for the 680x0, one can have 
   safe CARs and CDRs without loss of speed.  One has to test to see if it is 
   (1) a valid list, or (2) NIL, anyway.  So, one just makes that a test for 
   (1) a valid list, or (2) anything else.  In my system, that's testing a 
   single bit.  In case 1, do the job.  In case 2, return NIL.

Many 680x0 Scheme compilers offer a mode where no type checking is
performed at all; for these compilers, in that mode, CAR is a single
machine instruction.  (Naturally this is dangerous, but there are ways
to recover from segmentation violations and the like.)  It sounds like
you mean "without loss of speed" relative to something that is already
slower than this.  I believe that type-safe compiled code will
necessarily be slower than non-type-checking compiled code on the
680x0.