[comp.compilers] obsessions with lexical and syntactic issues

rrh@skagit.cs.washington.edu (Robert R. Henry) (11/14/89)

All of this discussion on lexing and parsing makes me think that we're
still in the 1960's or taking classes taught by theoreticians.  It
seems foolish to be fixated on issues that account for less than 20% of
the compile time, and probably less than 5% of the compiler writer's
time and surely less than .001% of the undetected errors in a
compiler.

Who >really< cares about syntax anyway?

It would be more interesting to spend time on hard issues, such as
attribution, retargetability, run time organization, and so on.  Surely
it's harder and ultimately more important to implement powerful data
flow analysis routines correctly and efficiently than to focus on
converting strings to trees.

Robert R. Henry
[We certainly do resemble the drunk who was looking for his glasses under
the streetlight because it was better lit there, although I must insist that
the theory behind NFAs and DFAs for lexing and context-free grammars for
parsing is interesting and worthwhile for the well-educated computer
scientist to understand.  Could someone suggest sources for an up-to-date
introduction to attribute grammars and denotational semantics for those of
us still a little hazy on those topics?  -John]
-- 
Send compilers articles to compilers@esegue.segue.boston.ma.us
{spdcc | ima | lotus}!esegue.  Meta-mail to compilers-request@esegue.
Please send responses to the author of the message, not the poster.

albaugh@dms.UUCP (Mike Albaugh) (11/15/89)

From article <1989Nov14.154043.9424@esegue.segue.boston.ma.us>,
  by rrh@skagit.cs.washington.edu (Robert R. Henry):
> All of this discussion on lexing and parsing makes me think that we're
> still in the 1960's or taking classes taught by theoreticians.  It
> seems foolish to be fixated on issues that account for less than 20% of
> the compile time, and probably less than 5% of the compiler writer's
> time and surely less than .001% of the undetected errors in a
> compiler.
> 
> Who >really< cares about syntax anyway?

	I often feel this way, except for the times I'm trying to track
down a particularly obscure error message. If lexing/parsing is so
^&%&*^*& easy, then why have I never found a compiler that actually does
a decent job of it? It doesn't give me a whole lot of faith that they got
the _hard_ parts right :-)

> [We certainly do resemble the drunk who was looking for his glasses under
> the streetlight because it was better lit there, although I must insist that
> the theory behind NFAs and DFAs for lexing and context-free grammars for
> parsing is interesting and worthwhile for the well-educated computer
> scientist to understand. ... ]

	I agree in principle, but perhaps one of the major problems is
that these wonderful tools _may_ be a poor fit for the actual job. Another
case of "To a kid with a hammer, everything looks like a nail"? :-)

					Mike

| Mike Albaugh (albaugh@dms.UUCP || {...decwrl!pyramid!}weitek!dms!albaugh)
| Atari Games Corp (Arcade Games, no relation to the makers of the ST)
| 675 Sycamore Dr. Milpitas, CA 95035		voice: (408)434-1709

-- 
Send compilers articles to compilers@esegue.segue.boston.ma.us
{spdcc | ima | lotus}!esegue.  Meta-mail to compilers-request@esegue.
Please send responses to the author of the message, not the poster.