[comp.arch] It looks like he's at it again!

cdshaw@cs.UAlberta.CA (Chris Shaw) (07/10/90)

In article <2328@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>In article <3627@auspex.auspex.com>, guy@auspex.auspex.com (Guy Harris) writes:
>		[Example of weird instruction omitted.]
>
>But there are lots of reasonable hardware
>instructions which have either disappeared or were rarely implemented.

The problem is, Herman, you're one of the very few people who are interested
in these well-known "reasonable instructions". The other problem is that you
know diddly about building programs much longer than 1000 lines, I'll wager.
Anybody who gripes about the slowness of his machine, and who can afford to 
spend the substantial amount of time it takes to tweak code for the last drop
of CPU power is dealing with programs that are pretty small.
Usually.

The basic problem with assembly coding by hand vs assembly coding by compiler
is that IT DOESN'T SCALE. There are extremely limited application areas where
coding by hand is much faster, and you, Herman, live in one. Just because you
have purple lenses on your glasses doesn't mean the world is purple. Just
because you can code for speed better than your compiler can doesn't mean
that more than a tiny minority "out there" can do the same.
Plus assembler is a nightmare unless one of three things is true:
	1: The project is performed by 1 person.
	2: The project is small (less than 5000 lines)
	3: All modules adhere strictly to a call-return convention.
In other words, you can do it, but it's extremely hard work if you're building
something non-trivial. Anywhere up to 1000 lines of code is trivial.
Besides, you're still going to suffer a drop in productivity vs. HLLs.

You brought up the example of computer chess recently. The fact of the matter
is that unless you are an international chess master, the program Deep Thought
will beat you. Why did I mention this? Well, the major reason Deep Thought
beats excellent chess people is because of its specialized hardware. The
program operates by a well-known brute force algorithm. My main point is that
Deep Thought would be useless if the program relied on some set of greasy
assembler tweaks on some two-bit general purpose CPU. Maybe you operate in
just such a niche, where special purpose hardware is what you should be using.

In particular, the 34010 has some of the following things in its instruction
set. The stuff marked "Check" is in this graphics chip, Herman, all you have
to do is buy one & check it out. No floating point on the '010, though; only
the 34020 and 34082 combo has it.

>Examples of simple instructions in hardware, much more expensive in software,
>and for which I know of "reasonable" applications.
>
>Check	Multiplication of integers with both most and least significant parts
>	of the product available
>
>Check	Division with quotient and remainder simultaneously
>
>	Division of floating point numbers, with integer quotient and
>	floating point remainder
>
>Check	In the above two operations, allowing the choice of which quotient
>	and remainder, depending on the signs of the arguments.
>
>Check	Obtaining the spacing between the ones in a bit sequence.  In the
>	algorithms I would produce, this can be a major operation.
>
>Check	The use of overflow and carry tests.
>
>	Fixed point arithmetic.
>
>	Multiplication of a floating point number by a power of two, not
>	using the multiply unit
>
>	Better conversion between integer and floating point.

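For scale, here's the spacing-between-ones item done in software -- a quick
C sketch of mine, not Herman's code; without a find-first-one instruction,
every zero bit costs a loop iteration:

	/* Return the number of zero bits before the next one bit in *w,
	 * consuming bits from the low end; -1 if no one bits remain. */
	static int next_spacing(unsigned long *w)
	{
		int n = 0;

		if (*w == 0)
			return -1;
		while ((*w & 1UL) == 0) {	/* one iteration per zero bit */
			*w >>= 1;
			n++;
		}
		*w >>= 1;			/* consume the one bit itself */
		return n;
	}

Call it repeatedly on a word and you get the successive gaps.
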
And if you're going to rant on (as you always do) about misfeatures in
well-known programming languages, why don't you get off your duff and prove
all us fools wrong? Show conclusively that modern HLLs stink by designing
one that doesn't stink. Do the same for assemblers, too.
So quit your bitching already.

>Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907


--
Chris Shaw     University of Alberta
cdshaw@cs.UAlberta.ca           Now with new, minty Internet flavour!
CatchPhrase: Bogus as HELL !

karsh@trifolium.esd.sgi.com (Bruce Karsh) (07/10/90)

In article <1990Jul10.072443.4844@cs.UAlberta.CA> cdshaw@cs.UAlberta.CA (Chris Shaw) writes:

>The problem is, Herman, you're one of the very few people who are interested
>in these well-known "reasonable instructions".

Just because a subject isn't currently fashionable doesn't mean it isn't
important.  In computer science, it's often the most important areas that
are least fashionable.  (Why is that?)

>Anybody who gripes about the slowness of his machine, and who can afford to 
>spend the substantial amount of time it takes to tweak code for the last drop
>of CPU power is dealing with programs that are pretty small.
>Usually.

Ah, the Computer Science Religion again.  It's blasphemy to talk about
using assembly code to speed up programs.  The anger and the insults in
the responses one receives when one suggests using assembler are really
quite a phenomenon.  Could it be that the anger and the name-calling are
because the anti-assembly forces don't really have a good case - just a
set of beliefs?

Where's the evidence that "Anybody who gripes about the slowness of his machine
and who can afford to spend the substantial amount of time it takes to tweak
code is usually dealing with programs that are pretty small"?  And even if
they are small, so what?  Small programs are important too.

Another reason might be that it's economically important to make something
run faster.  If you can solve a computationally intense problem twice as
fast, you may need to purchase only half as many computers.  Often it's
reasonable to spend a day or two speeding up the critical parts of a program
so that you don't have to buy more computers.  A couple of days of programmer's
time is expensive, but not as expensive as buying twice as many computers.

Yet another reason might be that you want to sell the program to a very large
market.  Customers don't like to wait for answers.  If you plan on selling
a few hundred thousand copies of a program, it may be worth spending a couple
of days to make the slow parts fast.  You can use a few days of programmers'
time or you can waste hundreds of thousands of end-users' time.  Which makes
more sense?

>The basic problem with assembly coding by hand vs assembly coding by compiler
>is that IT DOESN'T SCALE. There are extremely limited application areas where
>coding by hand is much faster, and you, Herman, live in one.

Perhaps they are extremely limited, but they are sometimes economically
important.  Some examples:

	Graphics Rendering
	Seismic Deconvolution
	Audio Signal Processing
	Business Record Sorting

These are economically important application areas whose viability absolutely
depends on getting as much speed out of the system as possible.

>Just because you
>have purple lenses on your glasses doesn't mean the world is purple. Just
>because you can code for speed better than your compiler can doesn't mean
>that more than a tiny minority "out there" can do the same.

The number of programmers who can do it isn't what's important.  What's
important is the number of customers for the results of the work.  If
people prefer the faster systems enough to purchase them more than slower
systems, then the optimization is worthwhile.  (Provided that there's enough
demand for the product to justify the extra couple of days of optimization.)

>Plus assembler is a nightmare unless one of three things is true:
>	1: The project is performed by 1 person.
>	2: The project is small (less than 5000 lines)
>	3: All modules adhere strictly to a call-return convention.
>In other words, you can do it, but it's extremely hard work if you're building
>something non-trivial. Anywhere up to 1000 lines of code is trivial.

Sure it's hard work.  But we're professionals and we're supposed to work
hard if there's a benefit to doing so.

>Besides, you're still going to suffer a drop in productivity vs. HLLs.

But let's not confuse programmer productivity vs end-user productivity.  When
I purchase a software product, I'm not impressed by the fact that the
programmers didn't have to work hard to write it.  I'm impressed if it
enables me to accomplish my objectives efficiently.

Often I think many programmers don't care at all how long things take.
If you don't believe this, just watch a Unix workstation boot.  Ten
million instructions per second, and it still takes minutes to boot.

>You brought up the example of computer chess recently. The fact of the matter
>is that unless you are an international chess master, the program Deep Thought
>will beat you. Why did I mention this? Well, the major reason Deep Thought
>beats excellent chess people is because of its specialized hardware. The
>program operates by a well-known brute force algorithm. My main point is that
>Deep Thought would be useless if the program relied on some set of greasy
>assembler tweaks on some two-bit general purpose CPU.

Commercially viable chess programs rely on sets of greasy assembler tweaks
on some eight-bit general-purpose CPUs.

			Bruce Karsh
			karsh@sgi.com

cik@l.cc.purdue.edu (Herman Rubin) (07/10/90)

In article <1990Jul10.072443.4844@cs.UAlberta.CA>, cdshaw@cs.UAlberta.CA (Chris Shaw) writes:
> In article <2328@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
> >In article <3627@auspex.auspex.com>, guy@auspex.auspex.com (Guy Harris) writes:

		[ As much flaming as possible omitted. ]

> Plus assembler is a nightmare unless one of three things is true:
> 	1: The project is performed by 1 person.
> 	2: The project is small (less than 5000 lines)

I do not believe that even a 100 line program should be produced by one
person.  Anyone can miss too much.

> 	3: All modules adhere strictly to a call-return convention.

I have no difficulty with spaghetti code when I need it, and I often move
code blocks off for efficiency.  In fact, the call-return convention is
most annoying from the standpoint of efficiency, and I am quite aware of
this annoying fact.

Most of the present assemblers are horrors, but there are a few, like CAL
on the CRAYs, or COMPASS on the CDC 6x00 and related machines, which were
a step in the right direction.

> In other words, you can do it, but it's extremely hard work if you're building
> something non-trivial. Anywhere up to 1000 lines of code is trivial.
> Besides, you're still going to suffer a drop in productivity vs. HLLs.

			.....................

> >Examples of simple instructions in hardware, much more expensive in software,
> >and for which I know of "reasonable" applications.

Examples omitted.  If you believe that the chips mentioned do this, how do
I get a description of these to verify this?  It is possible to see what 
hardware can do by reading the description.

> And if you're going to rant on (as you always do) about misfeatures in
> well-known programming languages, why don't you get off your duff and prove
> all us fools wrong? Show conclusively that modern HLLs stink by designing
> one that doesn't stink. Do the same for assemblers, too.
> So quit your bitching already.

If you have read what I have written, you would know that I do not believe
a good HLL is possible.  So, even if I had the resources (and they are very
scarce for things like this in an academic environment), why should I try?

As for assemblers, adding weak typing and good macro capabilities (the
macro should have an arbitrary syntax) to something like CAL would do
a good job.

Not everyone has flamed.  There are those who have been sympathetic.
If someone came to me with a statistical problem involving calculations
which the standard packages cannot do (a common situation), if he had the
programming resources, I would help him do it instead of just telling him
to stop complaining.  With the present situation in which funding is almost
entirely from the federal government, getting the necessary $30,000 or so
to produce the versatile macro translator is next to impossible.  It is
almost impossible to get graduate students to assist on faculty statistics
research projects.
-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)	{purdue,pur-ee}!l.cc!cik(UUCP)

peter@ficc.ferranti.com (Peter da Silva) (07/10/90)

In article <63692@sgi.sgi.com> karsh@trifolium.sgi.com (Bruce Karsh) writes:
> Ah, the Computer Science Religion again.  It's blasphemy to talk about
> using assembly code to speed up programs.

No. In fact using assembly code for the top 5%, if it's doing something that
would benefit from assembly coding, is perfectly all right. Redesigning all
our HLLs so they can represent operations like that, or putting all possibly
useful opcodes in all processors, isn't.

If you need specialised opcodes, get a coprocessor.

> because the anti-assembly forces don't really have a good case - just a
> set of beliefs?

What "anti-assembly forces"?

Use the right tool for the job. Sometimes it's a hammer. Just make sure
it *is* the right tool. There's no point in writing your whole program in
assembly... hell, if space is that critical use Forth! Then find the top 5%
and speed *that* up... but keep the original code around.
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.
<peter@ficc.ferranti.com>

dave@fps.com (Dave Smith) (07/11/90)

In article <63692@sgi.sgi.com> karsh@trifolium.sgi.com (Bruce Karsh) writes:
 >In article <1990Jul10.072443.4844@cs.UAlberta.CA> cdshaw@cs.UAlberta.CA (Chris Shaw) writes:
 >>The basic problem with assembly coding by hand vs assembly coding by compiler
 >>is that IT DOESN'T SCALE. There are extremely limited application areas where
 >>coding by hand is much faster, and you, Herman, live in one.
 >
 >Perhaps they are extremely limited, but they are sometimes economically
 >important.  Some examples:
 >
 >	Graphics Rendering
 >	Seismic Deconvolution
 >	Audio Signal Processing
 >	Business Record Sorting

Hmmm...I don't know a whole lot about these fields, but I have worked with
several of our customers in the seismic processing industry.  As far as
I can tell they don't do any assembly work.  They may write some vector
code, but not much of that either.  They do love to take their dusty
deck FORTRAN jobs and run them through highly optimizing compilers, though.
I wonder why that is?

For 90% of programmers assembly is more trouble than it's worth.  (Though I 
do remember that on the Apple II I preferred assembly to Applesoft because 
it was easier to make the machine do what I wanted :-) ) Computer 
manufacturers market to the 90% and will build machines suited to the 90%. 

--
David L. Smith
FPS Computing, San Diego
ucsd!celerity!dave or dave@fps.com
***QUOTE CENSORED BY ORDER OF REV. MOM***

cik@l.cc.purdue.edu (Herman Rubin) (07/11/90)

In article <9896@celit.fps.com>, dave@fps.com (Dave Smith) writes:
> In article <63692@sgi.sgi.com> karsh@trifolium.sgi.com (Bruce Karsh) writes:
>  >In article <1990Jul10.072443.4844@cs.UAlberta.CA> cdshaw@cs.UAlberta.CA (Chris Shaw) writes:
>  >>The basic problem with assembly coding by hand vs assembly coding by compiler
>  >>is that IT DOESN'T SCALE. There are extremely limited application areas where
>  >>coding by hand is much faster, and you, Herman, live in one.

			.....................

> For 90% of programmers assembly is more trouble than it's worth.  (Though I 
> do remember that on the Apple II I preferred assembly to Applesoft because 
> it was easier to make the machine do what I wanted :-) ) Computer 
> manufacturers market to the 90% and will build machines suited to the 90%. 

I would certainly agree that for 90% of the programming assembly is more
trouble than it is worth, maybe a lot more.  I disagree for 90% of the
programmers.  And even these programmers use library tools, which should
not be in that fraction.  Those programmers, and users of their programs,
benefit from efficient software produced by the few others.  I would not
want my automobile designed by those who know as little automotive engineering
as I know.  I would hope that there are "unorthodox" people on the design team,
who can think of how to improve things by doing the "unheard of."

How many of the 90% do not even know of the existence of these possibilities?
Are we making it difficult for them to understand them by teaching the present
HLLs?

I have advocated that the HLLs remove their limitations in such a way as
to enable easier inclusion of machine features into the program.  Also, that
the assembler syntax be changed to make the use of these features much easier,
so that Dave Smith's observations about the Apple II can be readily used.  In
addition, that computer manufacturers include the hardware; computing chips
are a small part of the cost of computers, and even a VCISC coprocessor would
be relatively cheap.
-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)	{purdue,pur-ee}!l.cc!cik(UUCP)

dave@fps.com (Dave Smith) (07/12/90)

In article <2338@l.cc.purdue.edu> cik@l.cc.purdue.edu (Herman Rubin) writes:
>In article <9896@celit.fps.com>, dave@fps.com (Dave Smith) writes:
>> For 90% of programmers assembly is more trouble than it's worth.  (Though I 
>> do remember that on the Apple II I preferred assembly to Applesoft because 
>> it was easier to make the machine do what I wanted :-) ) Computer 
>> manufacturers market to the 90% and will build machines suited to the 90%. 
>
>I would certainly agree that for 90% of the programming assembly is more
>trouble than it is worth, maybe a lot more.  I disagree for 90% of the
>programmers.  And even these programmers use library tools, which should
>not be in that fraction.  Those programmers, and users of their programs,
>benefit from efficient software produced by the few others.  I would not
>want my automobile designed by those who know as little automotive engineering
>as I know.  I would hope that there are "unorthodox" people on the design team,
>who can think of how to improve things by doing the "unheard of."

Don't get me wrong.  Assembler is fine for some tasks, however, when it
comes down to a tradeoff between making it easier for the compiler to do
something or easier for the programmer to do something (in assembly) the
compiler wins because the compiler is what's going to be used most of the
time.  In order to be successful in the processor game you have to sell
lots of CPU's and this means running fast on compiled code.  The number
of users who are going to look at your assembler and go "Ick, that's really
grody" and have that be a major point with them are few.  The number who
want to see how fast your code goes out of the compiler with no hand-tweaking
is much larger.

I've been working at FPS (then Celerity) for almost three years doing
systems programming day in and day out.  I still haven't gotten around
to really learning the assembly code for the Accel processor.  About
twice a year I have to go find someone who does know the assembly code 
well to help me figure out why something is happening.  The rest of the year
I do my work in C and it all works fine.  Now, we're moving a lot of
our stuff over to the SPARC and Accel knowledge will not be very helpful.
The time I've spent honing my knowledge of C and Unix internals instead,
however, is still very valuable.  Would learning Accel have been a waste
of my time?  I think so, since the ratio of effort to value, for me, would
have been quite high.


>How many of the 90% do not even know of the existence of these possibilities?
>Are we making it difficult for them to understand them by teaching the present
>HLLs?

I've been of the opinion for a long time that assembler should be taught
first so that a student does have an idea of what the machine does when
it comes time to learn what a pointer is, or dynamic memory, or data
alignment.  Programming in an unstructured environment where you have to 
manage the stack and all your memory by hand also gives you a much greater 
appreciation of all the things an HLL does for you when you move on to one.

Besides tweaking for speed and messing with things the language is trying
to do for you (like stack management) I don't know of too many things
that assembler is really good for.  Any other applications?

>I have advocated that the HLLs remove their limitations in such a way as
>to enable easier inclusion of machine features into the program.  Also, that
>the assembler syntax be changed to make the use of these features much easier,
>so that Dave Smith's observations about the Apple II can be readily used.  In
>addition, that computer manufacturers include the hardware; computing chips
>are a small part of the cost of computers, and even a VCISC coprocessor would
>be relatively cheap.

Place an order for 5 machines and tell us you want a VCISC coprocessor and
you'll fund the development and we'll be happy (I think, I don't work in
management (pun not intended)) to build you one.  Until the marketplace
tells us that is what it wants, though, we're not going to do it.

Marrying your code too tightly to the hardware will probably come back to
bite you anyhow.  Are you using today's hardware to run your code on,
Herman, or is it running on an older machine because that's what it's all
been hand-tuned to run on?  If it will run faster (without tweaking) on
a newer machine then you've already lost.  

High level languages are what allows the kind of competition that's going
on today between computer manufacturers.  If customers were unable to
jump ship when someone else's box ran faster there would be no incentive
to provide a better box.  Assembler's good for when you want to get down
and dirty, but the percentage of your programming that should be done
in assembler is so small that there's no real point in making assembly
easier at the expense of speed or ease in the rest of the code.
--
David L. Smith
FPS Computing, San Diego        |        ucsd!celerity!dave or dave@fps.com
All opinions disowned by me and FPS unless financially lucrative (to me, not 
the litigator)

andrew@dtg.nsc.com (Lord Snooty @ The Giant Poisoned Electric Head ) (07/12/90)

	The Psychology of the HLL Aficionado
	------------------------------------

Assembler is bad. Assembler is good.
Let's get a metric on this; a handle.. let's argue the relative merits.

On second thoughts, let's not. Because you know, and I know, that what
will result is not a resolution, but a protracted discussion. Very protracted.

So I suggest an alternative attack; a sideways swipe at the Gordian Knot of
HLL vs. Assembler. How would it be to propose that an HLL aficionado is
so predisposed by virtue of his/her own psychology? Well, that's a way.

HLL-a's generally sit on the anal-retentive end of the spectrum (excuse
the pun). They like to believe that shielding a problem, wrapping it up
prettily, will enhance its positive aspects and downplay the negative.
The more tasteless tend to exhibit doily-wrapped toilet rolls and dolls
with frilly dresses in the back windows of their cars. They obsessively
tidy the wires behind their stereo systems. They straighten their pictures
and their stack frames on a regular basis. They never repair their own
automobiles.

HLL-a's have a mission - a dream of a higher conceptual order. And when
the known universe does not fit into this pretty structure, they will
not compromise. Instead of adding an extra boondoggle (which DOES NOT
BELONG!) in the structure, they are quite happy to accommodate unruly reality
with a larger, more elegant, slower structure - which is conformant and
utterly monstrous.

And so we see confronted the prissy fanatic obsessive and the down-and-dirty
let's-make-it-work pragmatist.

There's no victor and no vanquished; there's just your choice, l&g.

It's a question of style.
-- 
...........................................................................
Andrew Palfreyman	that asteroid has our names on it
andrew@dtg.nsc.com			" 'course, the 'addock's very nice "

wright@stardent.Stardent.COM (David Wright @stardent) (07/12/90)

In article <63692@sgi.sgi.com>, karsh@trifolium.esd.sgi.com (Bruce Karsh)
writes:

>In article <1990Jul10.072443.4844@cs.UAlberta.CA> cdshaw@cs.UAlberta.CA (Chris
>Shaw) writes:
>>Anybody who gripes about the slowness of his machine, and who can afford to 
>>spend the substantial amount of time it takes to tweak code for the last drop
>>of CPU power is dealing with programs that are pretty small.
>>Usually.

>Ah, the Computer Science Religion again.  It's blasphemy to talk about
>using assembly code to speed up programs.  The anger and the insults in
>the responses one receives when one suggests using assembler is really
>a phenomenon.  Could it be that the anger and the name-calling is
>because the anti-assembly forces don't really have a good case - just a
>set of beliefs?

Oh, stick it in your ear, Karsh.  What is it with you and computer science,
anyway?  Sure, there are plenty of dingbat undergrads who come out with a
CS degree and think they know it all, but that's true of any field.  None
of the really good computer scientists I've met were arguing that there
was no place for assembly language in writing computer systems.  I've got
a PhD in CS, and I use assembler when and where necessary.  So do the other
PhDs I know.

Herman Rubin shows up on the net with his usual tunnel vision about what
he needs to solve his problems, so Chris Shaw writes an equally intemperate
rebuttal, and here you are blazing away at the "Computer Science religion."
Take it to alt.flame, will you?

>Yet another reason might be that you want to sell the program to a very large
>market.  Customers don't like to wait for answers.  If you plan on selling
>a few hundred thousand copies of a program, it may be worth spending a couple
>of days to make the slow parts fast.  You can use a few days of programmers'
>time or you can waste hundreds of thousands of end-users' time.  Which makes
>more sense?

IF we're talking a few days, sure, it makes sense, and that's where you're
going to get the most bang for the buck anyway -- the hot spots are easy 
to find and you do them first.  But there are plenty of people who get so
hypnotized by the speed improvements that pretty soon they're ready to go
out and write the whole damn application in assembler and it's goodbye to
portability.

As usual, the sensible approach is somewhere between the extremes.

>Often I think many programmers don't care at all how long things take.
>If you don't believe this, just watch a Unix workstation boot.  Ten
>million instructions per second, and it still takes minutes to boot.

Amen to this, at least.  Even some O/S programmers, who spend a lot of
time in the lab waiting for computers to boot up, are guilty here.

"Gimme that old-time religion, gimme that old-time religion..." :-)

  -- David Wright, not officially representing Stardent Computer Inc
     wright@stardent.com  or  uunet!stardent!wright

This posting represents the official position of the U. S. Government

peter@ficc.ferranti.com (Peter da Silva) (07/12/90)

[HLL-fanatic is anal retentive]

Hmmm. Seems like it's the assembly language fanatic that's forever twiddling
and tweaking the program instead of just getting the job done in the quickest
language possible and worrying about efficiency later. You know the type...
still deciding what tie to wear when the boat leaves.

In practical terms... the problem is the fanatic: not what they're fanatic
about.

And in net.reality, this doesn't belong in comp.arch. Alt.religion.computers
anyone?
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.
<peter@ficc.ferranti.com>

chip@tct.uucp (Chip Salzenberg) (07/12/90)

According to cik@l.cc.purdue.edu (Herman Rubin):
>If you have read what I have written, you would know that I do not believe
>a good HLL is possible.  So, even if I had the resources (and they are very
>scarce for things like this in an academic environment), why should I try?

... and comp.arch reverberates to the "SLAM" of a mind closing.
-- 
Chip Salzenberg at ComDev/TCT     <chip@tct.uucp>, <uunet!ateng!tct!chip>

new@udel.EDU (Darren New) (07/12/90)

In article <9911@celit.fps.com> dave@fps.com (Dave Smith) writes:
>Besides tweaking for speed and messing with things the language is trying
>to do for you (like stack management) I don't know of too many things
>that assembler is really good for.  Any other applications?

Sure.  Any inner-interpreter type of code is easiest in assembler. Things
like Forth threading are harder to do in C than assembler because Forth
is really big on treating addresses as data and vice versa. In addition,
it is usually only 3-6 instructions to do an inner-interpreter step
and the more speed you can get, the better.          -- Darren

karsh@trifolium.esd.sgi.com (Bruce Karsh) (07/13/90)

>Oh, stick it in your ear, Karsh.

Wow, first I get flamed, then I get a whole bunch of comments which agree
with my posting.

>What is it with you and computer science, anyway?

Well, I wouldn't have mentioned it, but you asked...

I love the science part of computer science, but I am disappointed with
some of the religious-sounding beliefs that have attached themselves to
the field.  When I was an undergraduate in CS, I became very frustrated
with what we were taught.  We were bombarded with information about how
programs "should be" structured, but usually the end-result of all this
structuring was slow, user-unfriendly, huge programs.  Much of what we
did was completely unconcerned with what the program did for the user.
Instead it was concerned with a lot of cosmetic aspects of program
designs.  I left the field and switched to mathematics where
correctness was considered more important than cosmetics.

Portability, modularity, programming style, and extensibility are
religious issues.  Their general necessity has never been proven, but
it's been taken as an axiom or a dogma of the field.  In some
instances, they may indeed be necessary, but the massive faith in these
techniques as a panacea is widespread and harmful to the field and it
has driven otherwise promising people out of the field.

I still hope for a day when programming professionals will evaluate
programs by how well they perform their intended function, not by how the
source code is indented and commented, or how portably they were
written.  (This is, by the way, how people who purchase software
usually evaluate it).

>  Sure, there are plenty of dingbat undergrads who come out with a
>CS degree and think they know it all, but that's true of any field.

I agree that there are a lot of dingbat undergrads who come out with a
CS degree.  But...

In other technical fields, I have not found the stridency of belief in
unproven cosmetic practices as I have found in computer science.  If
electrical engineering were taught like computer science, all the
schematics would be drawn perfectly symmetrically, would all use the
very same circuits, would be governed by standards which would add a
fortune to their costs.  TV's would display maybe 5 frames per second,
and would have no audio.  You'd have to use a soldering iron to change
the channel.  They'd only have one channel, but they'd all be
networked.  You'd have to have a system administrator to install one.

There seems to be little interest inside the CS field in issues of how
well the end program works for its intended purpose.  Issues like
portability, programming style, extensibility, and modularity are given
way more emphasis than they deserve.  Is it correct to sacrifice
modularity to improve the refresh rate of a graphics display?  I think
it is, but there are too many programs out there where the religion of
modularity superseded the necessity of fast performance.

Just what is the common base of knowledge that is basic to computer
science?  Any graduate from a major US university in, for example,
physics, will know how to solve spring-mass systems and orbit problems,
apply Maxwell's equations (in a variety of ways) to electrical charge
distribution problems, set up and solve differential equations for
motion of bodies in fields, solve gas-law problems, ... etc.  In
contrast, the undergraduate computer science grad will assuredly know
how to indent code, how to decompose a problem into way too many
subroutines, how to criticize the way a program is commented, and how
to avoid learning about any particular computer's machine language and
peripherals.

So, yes, there are dingbat undergrads in all fields, but in CS there's
an attitude against learning new and difficult things.  In CS, new ideas
are always too machine-specific, or too unstructured, or too unportable,
or too mathematical, or too application-specific, or not adequately
object-oriented, or too hard to maintain.  Coupled with the
anti-intellectual attitude, there is an emphasis on style over
substance.  By this I mean that ideas are evaluated not on their
correctness, but on their conformance to a set of unsubstantiated
beliefs about proper program style.  I think that these bad attitudes
are diminishing in CS, but they have not gone away yet.  Let's help get
rid of them.

My favorite example of the anti-intellectual current in CS is a paper by
a very famous computer scientist in which he states that the teaching of
certain computer languages should be treated as a criminal act.  Learning
and teaching should not be denigrated, but should instead be encouraged.
Do we find that kind of anti-intellectual bias in other technological
fields?  What if mechanical engineers just decided that bending moments
in beams were a bad thing and should not be taught or allowed to be used?
Flatbed trucks would be too heavy to run on our highways.  If we arbitrarily
choose to consider correct techniques as "bad", then our computer systems
will become slow, bloated, and unreasonably expensive.

I think a lot of this is a result of the rapid growth of the field.
Other disciplines have had a much longer time to mature and to really
figure out what is basic and what isn't.  As CS matures, the situation
will certainly improve.  In the meantime, when I watch someone with
good but unpopular ideas about CS get blasted on the net, I think we
should look for what's right in these ideas and add them to our base of
knowledge about CS.

>None
>of the really good computer scientists I've met were arguing that there
>was no place for assembly language in writing computer systems.  I've got
>a PhD in CS, and I use assembler when and where necessary.  So do the other
>PhDs I know.

Exactly right.  Really good computer scientists don't often argue that
there's no place for assembly language.  But many typical ones do.  In
my opinion assembly language is an underused technique with real power
to make smaller, faster, cheaper systems.  It's one of many techniques
for doing this and the negative attitudes about it are completely
unwarranted.

>Herman Rubin shows up on the net with his usual tunnel vision about what
>he needs to solve his problems, so Chris Shaw writes an equally intemperate
>rebuttal, and here you are blazing away at the "Computer Science religion."
>Take it to alt.flame, will you?

I've done a fair amount of work with numerical methods over very large
amounts of data and I don't think Herman Rubin's problems are unique to
him.  What he writes about may not be completely mainstream, but it's
not fringe work and it's not unimportant.  Our attitudes about what is
important in software are a major factor in the architecture of modern
computer systems.  I think that it's proper to discuss these issues in
a reasoned manner, free from biases about "how things should be done".

If you want to stop the flames, then let's not refer to someone else's
ideas as "tunnel vision".

>IF we're talking a few days, sure, it makes sense, and that's where you're
>going to get the most bang for the buck anyway -- the hot spots are easy 
>to find and you do them first.  But there are plenty of people who get so
>hypnotized by the speed improvements that pretty soon they're ready to go
>out and write the whole damn application in assembler and it's goodbye to
>portability.

>As usual, the sensible approach is somewhere between the extremes.

Sometimes the sensible approach is to operate at the boundaries, not in
the middle.  Doesn't it really depend on what the program is supposed
to do when it's done?  If a major application of assembly programming
is required to meet the needs, then the sensible approach is to just do
it.  There's no point in being hypnotized by speed, but if speed is
what's important and portability isn't, then approaches which sacrifice
portability for speed are sensible.

>>Often I think many programmers don't care at all how long things take.
>>If you don't believe this, just watch a Unix workstation boot.  Ten
>>million instructions per second, and it still takes minutes to boot.

>Amen to this, at least.  Even some O/S programmers, who spend a lot of
>time in the lab waiting for computers to boot up, are guilty here.

Yeah, isn't this a ripe area for a significant improvement in usability?

			Bruce Karsh
			karsh@sgi.com

pjn@brillig.umd.edu (P. J. Narayanan) (07/13/90)

	I don't think your criticism of CS practices is completely
fair.  I agree with you that one doesn't have to make a religion out of
the style rules that some computer scientists preach. Surely we
shouldn't get bogged down by cosmetics and matters of style (and think
that the number of columns of indentation and the placement of the
comment lines are the most important aspects of CS).

	But, modularity and functional decomposition have their own
merits and should be emphasised even more, in my opinion (of course,
one has to be certain that the definition of modularity doesn't
include the number of columns of indentation as an integral part of
it). However high level a programming language may be, one is still
dealing with primitive concepts when dealing with variables (not to
mention assembly language). Extreme indulgence with such low level
structures is not likely to take you anywhere.  By analogy, have you
ever seen an electrical engineer starting a design off with
transistors and FETs when asked to design an interface for a graphic
device? In hardware design, a lot of modularity is enforced by the
necessity to use chips, which correspond to primitive subroutines to
achieve common tasks. Have you also seen any chip manufacturer
marketing a NAND gate chip with some weird pin assignments (and
survived too :-) ? The point is that modularity, whether explicit or
implicit, is essential to complex system design, as one would like to
build on simpler and reliable systems designed earlier. The VLSI analogue
of writing a missile-tracking program in assembly language would be to
start the next 80x86 or 680x0 chip by drawing individual lines of width
lambda, and so on.

	That is not to say that we should have any intellectual
blockade against the use of low level structures when efficiency is
important.  If the program market gets as highly competitive as the
microprocessor or TV market, we will start seeing machine- or
microcode-level optimizations to save milliseconds of running time
(like getting a 24 MHz chip as opposed to a 20 MHz one!). But, just as
a microprocessor designed in such a manner will be (have to be) tested
and well documented (and presented as a package that cannot be
tampered with inside), such efficient programs will have to be tested
and documented well enough to serve as a convenient building block to
other programs.

P J Narayanan

amull@Morgan.COM (Andrew P. Mullhaupt) (07/13/90)

In article <24358@estelle.udel.EDU>, new@udel.EDU (Darren New) writes:
> Sure.  Any inner-interpreter type of code is easiest in assembler. Things
> like Forth threading are harder to do in C than assembler because Forth
> is really big on treating addresses as data and vice versa. In addition,
> it is usually only 3-6 instructions to do an inner-interpreter step
> and the more speed you can get, the better.          -- Darren

Well, speed is a _big_ issue, for example in our* APL interpreter.
However, we seem to have come to the idea that only things which are
impossible to do in C are done in assembler. A good reason for this
is that development of the interpreter proceeds simultaneously on more
than one platform. We have generally been able to find C code which
compiles into nearly the fastest possible code on more than one machine.
Given that these are different architectures, this is surprising.
It would be unbelievable in assembler. (I.e. different solutions for
the different machines would almost certainly result.) I think the
only thing assembler is used for is to deal with integer overflow
and this results in a single routine on one of the platforms being
written in assembler. Those who have seen the description of the
performance of the interpreter (e.g. see APL90 proceedings,) know
that this approach has resulted in a _very_ fast interpreter.

Later,
Andrew Mullhaupt

*our = Morgan Stanley, but the principal author of the APL interpreter
is Arthur Whitney. 

peter@ficc.ferranti.com (Peter da Silva) (07/13/90)

In article <64044@sgi.sgi.com> karsh@trifolium.sgi.com (Bruce Karsh) writes:
> I still hope for a day when programming professionals will evaluate
> programs by how well they perform their intended function, not by how the
> source code is indented and commented, or how portably they were
> written.  (This is, by the way, how people who purchase software
> usually evaluate it).

You mean like the people who buy Word Perfect because it runs on just about
anything? The people who can't upgrade their software when they switch from
DOS to UNIX, Windows, or OS/2? Portability means repeat business.
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.
<peter@ficc.ferranti.com>

gillett@ceomax..dec.com (Christopher Gillett) (07/14/90)

In article <64044@sgi.sgi.com> karsh@trifolium.sgi.com (Bruce Karsh) writes:
>>Oh, stick it in your ear, Karsh.
>

How about that for creative, constructive dialog?  Nothing like the calm,
rational approach to discussion, eh? :-(

>When I was an undergraduate in CS, I became very frustrated
>with what we were taught.  We were bombarded with information about how
>programs "should be" structured, but usually the end-result of all this
>structuring was slow, user-unfriendly, huge programs.
...
>
>Portability, modularity, programming style, and extensibility are
>religious issues,  Their general necessity has never been proven, but
>it's been taken as an axiom or a dogma of the field.  In some
>instances, they may indeed be necessary, but the massive faith in these
>techniques as a panacea is widespread and harmful to the field and it
>has driven otherwise promissing people out of the field.
>

I, too, found my CS education to be a tour de force in frustration.  It
was painfully evident that most, if not all, the CS profs were complete
ivory tower types who'd never worked a day in their lives in the "real
world".  I felt as if most of my education was indoctrination into some
sort of weird cult of comment outlines, structured walk-throughs, and
correct punctuation and indentation.  Issues of architecture, hardware,
and system building were treated like Great Mysteries of Faith.  I 
finally had to go over to the EE folks to get my head on straight about
these issues.  All things considered, I graduated *knowing* that I 
really didn't know much.  Most of what I learned was by working with
fellow students on interesting work.  I can't think of any course that
provided reasonable information that was useful outside of the ivory
tower.  I could go on like this for hours, but I digress (sorry).

>I still hope for a day when programming professionals will evaluate
>programs by how well they perform their intended function, not by how the
>source code is indented and commented, or how portably they were
>written.  (This is, by the way, how people who purchase software
>usually evaluate it).

Bruce, you're close here, but I don't fully agree.  Yes, when someone
purchases a piece of software they are looking for speed, power, ease
of use, and error-free functionality.  And it is definitely appropriate
to evaluate a software professional by examining the output of her
work.  But, I think the better way to judge someone's ability is by
also looking at issues of design efficiency, portability, and other
issues usually associated with "proper software engineering technique".
Why?  Because the better the upfront engineering is, the easier it is
to produce correct programs with good portability, robust user interfaces,
etc.  If you accept that good engineering leads to good software, then
the best way to evaluate somebody is by how much $$$ they make for their
employer.  Certainly that is an excellent measure of professionalism
(of course, I'm excluding a whole host of ethical issues here that aren't
appropriate for this forum).  

>There seems to be little interest inside the CS field in issues of how
>well the end program works for its intended purpose.  Issues like
>portability, programming style, extensibility, and modularity are given
>way more emphasis than they deserve.  
...
>Is it correct to sacrifice
>modularity to improve the refresh rate of a graphics display?  I think
>it is, but there are too many programs out there where the religion of
>modularity superseded the necessity of fast performance.
>

Here I would beg to disagree again.  Of course, you don't want to
sacrifice performance for structure, but you also don't want (dare I
say you *must not*) sacrifice structure for performance.  This leads,
ultimately, to software that isn't maintainable, and will cost a lot
to keep running (assuming that it's your intention to keep developing,
extending, enhancing, etc. In my world, there's no such thing as a 
"one off" program).  Real-life case in point:  In a previous job, I
worked on a suite of compilers targeted toward various MC680x0 machines.
These compilers were written exclusively in assembly language.  No
high level fluff for us, dammit :-).  The compilers were capable, under
normal circumstances, of compiling around 40,000 lines/minute...more
if you tweaked the environment a tad.  But these guys sacrificed all
semblance of structure, modularity, and documentation in the quest
for incredible speeds.  Finally, it got to the point where we just
couldn't make cost-efficient changes.  Bugs took days, not hours,
to track down and fix.  In the end, we wound up (at considerable
expense) writing a whole new suite of compilers (in C) using better
engineering techniques.  Now they have a maintainable set of tools,
but it sure was expensive.

>Just what is the common base of knowledge that is basic to computer
>science?  

The problem here is that we insist of calling this field a science.
It's not.  It simply isn't.  You can't compare the "basic tenets of
CS" to the underpinnings of mathematics, physics, chemistry, and the
like.  The longer I work in this field (a little over 10 years now),
the more I'm convinced that this field is art, not science.  
You either have the intellectual and (*especially*) creative abilities 
to "do software" or you don't.  The folks that lock themselves in 
ivory towers and think about strange mathematical phenomena are not 
computer scientists, they're mathematicians who are harnessing computers 
in interesting new ways.  A fresh out of school EE is a scientist, 
a fresh out of school CS type is someone ready to learn the real 
truth about computers and software engineering.

>My favorite example of the anti-intellectual current in CS is a paper by
>a very famous computer scientist in which he states that the teaching of
>certain computer languages should be treated as a criminal act.  

Which paper is this?  I've seen a lot of (IMHO...) silly stuff floating
around (like "GOTOs considered Harmful"...geez, will these guys ever
learn?), but I've not encountered this gem.  Please post a citation.

>Exactly right.  Really good computer scientists don't often argue that
>there's no place for assembly language.  But many typical ones do.  

I once got "marked down" on a software project in school because I had
the unmitigated gall to implement a set of lookup routines in assembly
language.  First, the prof couldn't understand how the routines worked,
then he reamed me out for using "dangerous implementation techniques".
He never really could justify his remarks, but he's the prof...

>>>Often I think many programmers don't care at all how long things take.
>>>If you don't believe this, just watch a Unix workstation boot.  Ten
>>>million instructions per second, and it still takes minutes to boot.
>
>>Amen to this, at least.  Even some O/S programmers, who spend a lot of
>>time in the lab waiting for computers to boot up, are guilty here.
>

Geez, you guys must be using the wrong iron.  My fine DECstation 3100
boots up in no time at all.  Maybe you should rush to your nearest
DEC office and grab a few, eh?  :-)

Wow, I guess we're now fully off the subject of computer architecture.
Should we move this discussion, or do the system architects find it
amusing to watch the software guys beat on each other?

>			Bruce Karsh
>			karsh@sgi.com

FWIW,
/Chris

P.S.  Ken Olsen speaks for Digital Equipment Corporation, I speak for me!
      #include <standard_disclaimers>
---
Christopher Gillett               gillett@ceomax.dec.com
Digital Equipment Corporation     {decwrl,decpa}!ceomax.dec.com!gillett
Hudson, Taxachusetts              (508) 568-7172

tif@doorstop.austin.ibm.com (Paul Chamberlain) (07/14/90)

I really should stay out of this but ...

In article <64044@sgi.sgi.com> karsh@trifolium.sgi.com (Bruce Karsh) writes:
>I still hope for a day when programming professionals will evaluate
>programs by how well they perform their intended function, not by how the
>source code is indented and commented, or how portably they were
>written.  (This is, by the way, how people who purchase software
>usually evaluate it).

Maybe you aren't, but I'm pretty impressed with how many Mario Brothers
get sold.  That software runs on everything.  I have no idea what it
was written in, but I'd bet that it's very modular, structured, etc.

>Is it correct to sacrifice modularity to improve the refresh rate of
>a graphics display?  I think it is, but there are too many programs
>out there where the religion of modularity superseded the necessity
>of fast performance.

Yeah, I can just imagine switching out half of a chip so you can get
color from your display.  Modular software does give that option.

>My favorite example of the anti-intellectual current in CS is a paper by
>a very famous computer scientist in which he states that the teaching of
>certain computer languages should be treated as a criminal act.  Learning
>and teaching should not be denigrated, but should instead be encouraged.

Do they still teach EE's how to design with tubes?  Or teach mathematicians
to count on their fingers?  Both are unproductive and perhaps even retard
their future possibilities (as does programming in BASIC).

>I think a lot of this is a result of the rapid growth of the field.

I think this is a result of being bitten by costs from software that
isn't portable, extendable, modular.  It tends to be necessary when
the customer wants more (machines), more (features), more (compatibility).
(BTW, when was the last upgrade to ANY TV IN THE WORLD??  Actually, since
the TV makers are learning the values of modularity, extensibility, etc.
I'm quite sure that (very near) future televisions might well have the
option of upgrades.  Come to think of it, I have a stereo system that I
recently upgraded to include a CD player, thank God for modularity.)

I hope I never have to do anything with your source code.

>>>Ten million instructions per second, and it still takes minutes to boot.

Ironically, this is where a lot of the assembly code is, too.


Paul Chamberlain | I do NOT represent IBM	  tif@doorstop, sc30661@ausvm6
512/838-7008	 | ...!cs.utexas.edu!ibmaus!auschs!doorstop.austin.ibm.com!tif

aglew@oberon.crhc.uiuc.edu (Andy Glew) (07/14/90)

>>>>Often I think many programmers don't care at all how long things take.
>>>>If you don't believe this, just watch a Unix workstation boot.  Ten
>>>>million instructions per second, and it still takes minutes to boot.
>>
>>>Amen to this, at least.  Even some O/S programmers, who spend a lot of
>>>time in the lab waiting for computers to boot up, are guilty here.
>
>Geez, you guys must be using the wrong iron.  My fine DECstation 3100
>boots up in no time at all.  Maybe you should rush to your nearest
>DEC office and grab a few, eh?  :-)


I may regret this, but I couldn't resist.  I just powercycled the
DECstation 3100 that I am sitting at.  Powercycled, because (1) I
don't have root on this system, and (2) powercycling is effectively
what you have to do in a reliable systems test, where the system has
to recover after a major reconfiguration (yeah, I know about
UPSes...), and (3) as one of those OS guys who has spent a lot of time
in the lab waiting for computers to boot up, I know that I often have
had to do it from power cycle (especially after wedging a new device
in a way that a reset wouldn't clear).
    A few years ago a customer gave us a <30 second boot after power
cycle requirement, for a real-time OS. They wanted <10.

This DECstation 3100, with 16MB of memory, and an approximately 300Mb
local SCSI disk, took 8:19 (eight minutes and nineteen seconds) to
reboot after powercycle.  That included fsck'ing the disk. Time
measured from the time I flicked the switch to the time I could log
in.

That may be good by UNIX standards, but it's not great.  Do you want
to wait that long at an ATM when the banking system is having
"computer problems"?

---

Yes, I know that there are a lot of things to speed up boot, including
UPSes; plus, a friend of mine (hi, Eric!) wrote a memory-intensive
fast fsck that I heard was going into the next BSD. I expect to see them
soon.  But they're not all here yet.



--
Andy Glew, aglew@uiuc.edu

jgk@osc.COM (Joe Keane) (07/14/90)

In article <24358@estelle.udel.EDU> new@ee.udel.edu (Darren New) writes:
>Sure.  Any inner-interpreter type of code is easiest in assembler. Things
>like Forth threading are harder to do in C than assembler because Forth
>is really big on treating addresses as data and vice versa. In
>addition, it is usually only 3-6 instructions to do an inner-interpreter step
>and the more speed you can get, the better.  -- Darren

I don't see any problem with `treating addresses as data and vice versa'.
That's what casts are for.  Your C code may end up with expressions like
`*(long**)((char*)a+*(short*)b)' but it works and is even somewhat portable.

One real problem with using C for threaded interpreters is that there is no
way to make an arbitrary jump instruction.  If you could `goto' some address
expression there would be no problem.
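
Some C dialects do provide exactly that: GNU C has a labels-as-values
extension where `&&label' yields a label's address and `goto *expr' jumps
to it.  A toy sketch -- my opcodes, and strictly non-portable:

	/* token-threaded dispatch via GNU C's computed goto */
	int run(const unsigned char *prog)
	{
		static void *op[] = { &&op_halt, &&op_inc, &&op_dec };
		int acc = 0;
	#define NEXT goto *op[*prog++]
		NEXT;
	op_inc:	 acc++; NEXT;
	op_dec:	 acc--; NEXT;
	op_halt: return acc;
	}

Running it over the program {1, 1, 2, 0} returns 1: two increments, one
decrement, then halt.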

karsh@trifolium.esd.sgi.com (Bruce Karsh) (07/14/90)

In article <379@e2big.mko.dec.com> gillett@ceomax.dec.com (Christopher Gillett) writes:

>Geez, you guys must be using the wrong iron.  My fine DECstation 3100
>boots up in no time at all.  Maybe you should rush to your nearest
>DEC office and grab a few, eh?  :-)

I just timed it:  80 seconds from the time you type auto in the PROM monitor
until you get to the login window.  Additionally, it takes 33 seconds from
then until you are logged in.

>Wow, I guess we're now fully off the subject of computer architecture.
>Should we move this discussion, or do the system architects find it
>amusing to watch the software guys beat on each other?

Software is the most important part of computer architecture.  It's also the
least well-developed part.  The hardware is constantly pushing the limits of
its technology; software mostly hasn't even learned where the limits are yet.

If you design software, you are doing computer architecture.

			Bruce Karsh
			karsh@sgi.com

tve@sprite.berkeley.edu (Thorsten von Eicken) (07/15/90)

In article <3060@osc.COM> jgk@osc.COM (Joe Keane) writes:
>One real problem with using C for threaded interpreters is that there is no
>way to make an arbitrary jump instruction.  If you could `goto' some address
>expression there would be no problem.
Yep, typically the C program ends up being a huge switch statement:
	loop:	switch(next_thread) {
		case 0: .... goto loop;
		case 1: .... goto loop;
		}
and one hopes that a) a jump table is produced (and not a binary search
tree) and b) the compiler doesn't dump core with some silly error when
the number of cases reaches into the thousands...

Or can anyone suggest a better method?
----
Thorsten von Eicken (tve@sprite.berkeley.edu)

philip@pescadero.Stanford.EDU (Philip Machanick) (07/15/90)

In article <37569@ucbvax.BERKELEY.EDU>, tve@sprite.berkeley.edu
(Thorsten von Eicken) writes:
> In article <3060@osc.COM> jgk@osc.COM (Joe Keane) writes:
> >One real problem with using C for threaded interpreters is that there is no
> >way to make an arbitrary jump instruction.  If you could `goto' some address
> >expression there would be no problem.
> Yep, typically the C program ends up being a huge switch statement:
> 	loop:	switch(next_thread) {
> 		case 0: .... goto loop;
> 		case 1: .... goto loop;
> 		}
> and one hopes that a) a jump table is produced (and not a binary search
> tree) and b) the compiler doesn't dump core with some silly error when
> the number of cases reaches into the thousands...
> 
> Or can anyone suggest a better method?
If you can afford the overhead of a function call, you can use an array
of function pointers to implement a jump table.
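
A minimal sketch of that (names invented for illustration):

	/* Call-threaded dispatch through an array of function pointers.
	 * Each opcode is an ordinary C function; the cost is one
	 * call/return per opcode, which is the caveat above.
	 */
	#include <stdio.h>

	static long stack[16], *sp = stack;
	static int *ip;			/* instruction pointer into code[] */
	static int running = 1;

	static void op_push(void) { *++sp = *ip++; }
	static void op_add(void)  { sp[-1] += sp[0]; --sp; }
	static void op_halt(void) { running = 0; }

	static void (*optable[])(void) = { op_push, op_add, op_halt };

	int main(void)
	{
		int code[] = { 0, 2, 0, 3, 1, 2 };  /* push 2, push 3, add, halt */
		ip = code;
		while (running)
			optable[*ip++]();
		printf("%ld\n", *sp);		/* prints 5 */
		return 0;
	}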

Philip Machanick
philip@pescadero.stanford.edu

colin@array.UUCP (Colin Plumb) (07/15/90)

In article <24358@estelle.udel.EDU> new@ee.udel.edu (Darren New) writes:
> Sure.  Any inner-interpreter type of code is easiest in assembler. Things
> like Forth threading are harder to do in C than assembler because Forth
> assumes is really big on treating addresses as data and vica versa. In
> addition, it is usually only 3-6 instructions to do an inner-interpreter
> step and the more speed you can get, the better.          -- Darren

Well, the Forth inner interpreter is roughly analogous to a C compiler's
entry and exit code, so writing it in assembler is to be expected.
As you said, it's 3-6 instructions (I think I needed 3 when I did it on a VAX).
Not exactly a large chunk of code. 
-- 
	-Colin

cik@l.cc.purdue.edu (Herman Rubin) (07/15/90)

In article <37569@ucbvax.BERKELEY.EDU>, tve@sprite.berkeley.edu (Thorsten von Eicken) writes:
> In article <3060@osc.COM> jgk@osc.COM (Joe Keane) writes:
> >One real problem with using C for threaded interpreters is that there is no
> >way to make an arbitrary jump instruction.  If you could `goto' some address
> >expression there would be no problem.
> Yep, typically the C program ends up being a huge switch statement:
> 	loop:	switch(next_thread) {
> 		case 0: .... goto loop;
> 		case 1: .... goto loop;
> 		}
> and one hopes that a) a jump table is produced (and not a binary search
> tree) and b) the compiler doesn't dump core with some silly error when
> the number of cases reaches into the thousands...
> 
> Or can anyone suggest a better method?

For which problem?  The assumption that there is a single best way to handle
all switches is erroneous in the first place.  There are natural situations
in which the switch variable is an integer, with the probabilities of
successive integers decreasing sufficiently rapidly.  In that case, if
there is a bound after which some "default" procedure is followed, the
following may beat a jump table.

 	if(i > 1) goto i2;
	.................

i2:	if(i > 2) goto i3;
	.................

i3:	.................

I am not assuming that one necessarily returns to the loop.

Now for hardware-dependence considerations.  Many, if not most, machines
have a 3-way compare.  In that case, the number of compare operations can
be halved, at no cost.

	compare i with 2;
	if (>=) goto i2;
	.................

i2:	if (>) goto i3;		/* no new compare; the condition codes
				   from the compare above still apply */
	..............
-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)	{purdue,pur-ee}!l.cc!cik(UUCP)

jkrueger@dgis.dtic.dla.mil (Jon) (07/16/90)

In article <63692@sgi.sgi.com> karsh@trifolium.sgi.com (Bruce Karsh) writes:
>Some examples [where assembler is justified]:
>	Business Record Sorting

What's happening in the most demanding database environments does not
tend to support this.  The issue right now is getting away from HLL's
to something more abstract and safer.

-- Jon
-- 
Jonathan Krueger    jkrueger@dtic.dla.mil   uunet!dgis!jkrueger
Drop in next time you're in the tri-planet area!

willr@ntpdvp1.UUCP (Will Raymond) (07/17/90)

 Portability, modularity, programming style, and extensibility are
 religious issues.

	While I agree with some of what Karsh says about CS education, there
	is one good reason these issues are emphasized: maintainability.
	Too often, I've had to wade through spaghetti code written by a
	'supposed' hot-shot programmer who didn't feel the need for
	comments, modularity, portability, etc., in order to find/correct a
	problem or extend the use of the code.  I consider it common courtesy
	for those who follow to observe a minimal set of common-sense rules,
	chief of which is documentation (I don't mean reams of doc, just
	indicative doc at the function level, with the understanding that the
	module is readable and fits in no more than 2 pages { my personal
	rule of thumb: if I can't get it on two screens it's probably too
	long }).

    *******       Will Raymond - Northern Telecom NTP in RTP
|  | ~   ~ |  |
   . O   o .      I speak for myself.
|     .V.     |   
     ._ _.     	  "A day without sunshine is a day at work."	   
|      U      |

eliot@cs.qmw.ac.uk (Paul Davison (postmaster)) (07/17/90)

I use threaded code in my dynamic translation Smalltalk virtual machine,
but it's written in C.  I use a single asm statement to do the 'jump to next
threaded opcode' & I use simple sed scripts on the assembler code to turn C
procedures into threaded opcodes.

I've found this combination provides portability & speed.  It took half a day
to port the machine-independent parts of the vm (i.e. not the graphics)
from the SUN 3 (mc68k) to the SUN 4 (sparc).

In straight C we could define a threaded code interpreter thus:

	void	(**tcip)();	/* threaded code instruction pointer */

	void	inner_interpreter()
	{
		do
			(**tcip++)();
		while (1);
	}
where an opcode could be written
	void	do_something()
	{
		....
		return;
	}

The resulting system would probably spend most of its time doing return/call
pairs.
With a little extra effort (& help from a good C compiler) we can eliminate
the call/return pairs & build a conventional threaded code interpreter that
jumps from routine to routine but is still written in C (well 99.9% anyway).

Here's how it works:

I use GCC so I can declare some oft used global variables in registers:
On mc68k:
	register OOP	*stackPointer asm("a3");
	register TCODE	*tcip asm("a5");
On sparc:
	register OOP	*stackPointer asm("%g5");
	register TCODE	*tcip asm("%g7");

Threaded code is a sequence of 32-bit words organized as
	<pointer to threaded opcode (C procedure)>
	<operand>
	<pointer to threaded opcode (C procedure)>
	<operand>

First some convenience defines:
#define TBEGIN {
#define TEND JUMPNEXT; }
All threaded opcodes begin with TBEGIN instead of { & TEND instead of }.

Here's a simple threaded opcode that pushes an operand onto the vm's stack:

void	pushLit()
TBEGIN
	*++stackPointer = (OOP)*tcip++;
TEND


JUMPNEXT jumps to the next threaded opcode.  On the mc68k it's defined as
#define JUMPNEXT \
	do{asm("mov.l (%a5)+,%a0; jmp (%a0)");return;}while(0)

and on sparc as
#define JUMPNEXT \
	do{asm("ld [%g7],%o0; jmpl %o0,%g0; add %g7,4,%g7");return;}while(0)

JUMPNEXT is analogous to (*tcip++)(), but jumps instead of calls.
On a hypothetical pure C machine it could be:
#define JUMPNEXT return

So on the sparc pushLit is actually
void	pushLit()
{
	*++stackPointer = (OOP)*tcip++;
	do{
		asm("ld [%g7],%o0; jmpl %o0,%g0; add %g7,4,%g7");
		return;
	}while(0);
}

Which compiles to:
.global _pushLit
	.proc 1
_pushLit:
	!#PROLOGUE# 0
	save %sp,-80,%sp
	!#PROLOGUE# 1
	add %g5,4,%g5
	ld [%g7],%o0
	st %o0,[%g5]
	add %g7,4,%g7
	ld [%g7],%o0; jmpl %o0,%g0; add %g7,4,%g7
	ret
	restore

Since each threaded opcode jumps to the next, we don't want the prolog or
the epilog.  I apply the following sed script to the assembler output to
strip them:
/^_.*:$/{n
N
N
s/	!#PROLOGUE# 0\n	save %sp,[-0-9]*,%sp\n	!#PROLOGUE# 1//
}
/	ret/d
/	restore/d


Which produces
.global _pushLit
	.proc 1
_pushLit:
	add %g5,4,%g5
	ld [%g7],%o0
	st %o0,[%g5]
	add %g7,4,%g7
	ld [%g7],%o0; jmpl %o0,%g0; add %g7,4,%g7

On the mc68k the sed script is a little more involved, but is still only 22
lines.  (It's complicated because the compiler optimizes various register
save/restore sequences; e.g. pushing a single register is quicker than using
a move-multiple with a single bit set in the register move mask.)


All threaded opcodes run in the same stack frame.  The system is kicked off
from a C routine that calls alloca to allocate a large enough stack frame
for all threaded opcodes, e.g.:

void	Interpret()
{
	tcip = init_tcode();
	(void)alloca(1024);
	(**tcip++)();
}



The resulting system is as efficient a threaded code interpreter as one
written entirely in assembler BUT
On the sparc
	The system is about 20,000 lines (including comments)
	All but 13 lines are ordinary C code.
	12 lines are gcc-style global register variable declarations and
	1 line defines JUMPNEXT (as above) with an asm statement.
	42 lines of sed-script in 3 files
		9 lines strip prolog/epilog from threaded opcodes
		5 lines do a peephole optimization
		28 lines restore a global register stomped on by .div & .rem

-- 
Eliot Miranda			email:	eliot@cs.qmw.ac.uk
Dept of Computer Science	Tel:	071 975 5220 (+44 71 975 5220)
Queen Mary Westfield College	ARPA:	eliot%cs.qmw.ac.uk@nsfnet-relay.ac.uk	
Mile End Road			UUCP:	eliot@qmw-cs.uucp
LONDON E1 4NS

new@udel.EDU (Darren New) (07/18/90)

In article <2518@sequent.cs.qmw.ac.uk> eliot@cs.qmw.ac.uk (Eliot Miranda) writes:

[about how to define ASM macros and a sed script to mung the asm output from
 the GCC compiler full of inline assembler directives to make a threaded interpreter
 in C instead of ASM]

>The resulting system is as efficient a threaded code interpreter as one
>written entirely in assembler BUT
>On the sparc
>	The system is about 20,000 lines (including comments)
>	All but 13 lines are ordinary C code.

I would like to point out that the threaded interpreter of most forth-like
languages is on the order of 5-9 instructions on even simple CPUs.

>	12 lines are gcc-style global register variable declarations and

This is where C falls down.  If you use ANSI C instead of GCC, it gets
really inefficient.  Also, these 12 lines have to be rewritten for
every CPU, so it's not as if it's portable anyway.

>	1 line defines JUMPNEXT (as above) with an asm statement.

Which also must be rewritten.

>	42 lines of sed-script in 3 files
>		9 lines strip prolog/epilog from threaded opcodes

Which are not there in assembler.

>		5 lines do a peephole optimization

Which undoubtedly change from CPU to CPU, and which would not be needed
in assembler.

>		28 lines restore a global register stomped on by .div & .rem
And this is being done in SED?

Anyway, thanks for making my point so graphically.  Clearly you are not
writing in C, but rather mostly C with assembler for the hard-in-C
parts.  I never said that threaded interpreters were difficult in C,
only that the inner-interpreter part is.  Sure, the 20,000 lines of
routines are in C and are probably even efficient.  But doing something
like a reentrant threaded language on a 6502, under a DOS that fixes
neither the upper nor the lower bound of memory, is not something I
would care to do with GCC, sed, and so on.

			 -- Darren

heron@mars.jpl.nasa.gov (Vance Heron) (07/19/90)

A Forth interpreter has been written entirely in C.  The version I used
was called Mirella, from James Gunn at Princeton, but the original was
credited to Mitch Bradley.  For more info, post to comp.lang.forth.

wright@stardent.Stardent.COM (David Wright @stardent) (07/21/90)

In article <64044@sgi.sgi.com>, karsh@trifolium.esd.sgi.com (Bruce Karsh)
writes:

>I love the science part of computer science, but I am disappointed with
>some of the religious-sounding beliefs that have attached themselves to
>the field.  When I was an undergraduate in CS, I became very frustrated
>with what we were taught.  We were bombarded with information about how
>programs "should be" structured, but usually the end-result of all this
>structuring was slow, user-unfriendly, huge programs.  Much of what we
>did was completely unconcerned with what the program did for the user.
>Instead it was concerned with a lot of cosmetic aspects of program
>designs.  I left the field and switched to mathematics where
>correctness was considered more important than cosmetics.

And you can write incomprehensible proofs all day to your heart's
content?  ( 0.5 :-)  I agree that there are plenty of second-rate
CS departments where form doesn't follow function.  Instead, form
becomes everything.  The really good people in the field recognize
this (Tony Hoare has made a few pithy comments), but in all too many
cases, it really does become like a religion where the ceremonies
are everything and the inner meaning has been lost.

But I still object to your tarring all CS people with this brush.  It
doesn't apply to me or to a lot of other people I know.  So either qualify
your remarks or stop making them.

>Portability, modularity, programming style, and extensibility are
>religious issues.  Their general necessity has never been proven, 

Oh, come now.  While it's easy to pick out specific examples of
programs where any given one of these properties need not apply, there
are plenty of others where it's obvious that they do apply, and if
they aren't used, there'll be hell to pay later.  It's the notion
that these goals can only be achieved by working in ONE SPECIFIC WAY
that needs to be shot down.

They aren't a panacea, because, for example, extensibility is a hard
thing to get right, and if an extension is needed in a way that the
original coder did not foresee, then there's no gain in that case.
But what would you have us do instead?  Write every program as one
huge main procedure?  No?  Then maybe these techniques do have some
value after all?

>I still hope for a day when programming professionals will evaluate
>programs by how well they perform their intended function, not by how the
>source code is indented and commented, or how portably they were
>written.  (This is, by the way, how people who purchase software
>usually evaluate it).

Yes it is.  And that's because those people don't have to maintain the
code.  But if you think they don't care about portability, you're wrong.
What if they want to buy new hardware?  And in effect, they do care
how it's written, because they'll want improvements, and bug fixes, and
quality.  Hard to get from spaghetti code.

>If electrical engineering were taught like computer science, all the
>schematics would be drawn perfectly symmetrically, would all use the
>very same circuits, would be governed by standards which would add a
>fortune to their costs.  TV's would display maybe 5 frames per second,
>and would have no audio.  You'd have to use a soldering iron to change
>the channel.  They'd only have one channel, but they'd all be
>networked.  You'd have to have a system administrator to install one.

Pfui.  The notion that a well-structured program is inherently slow
is bullshit.  As for utility, well, you're right that it hasn't had
the attention it deserves.  But that's independent of the existence
or non-existence of CS as a discipline, and has everything to do with
what users have been willing to bend over and take so far.

>Just what is the common base of knowledge that is basic to computer
>science?
...
>the undergraduate computer science grad will assuredly know
>how to indent code, how to decompose a problem into way too many
>subroutines, how to criticize the way a program is commented, and how
>to avoid learning about any particular computer's machine language and
>peripherals.

I wouldn't give a degree to anyone like this.  Can you cite an example
of a CS program that holds this is the right way to go?  You make a lot of
claims about "CS does this", but your claims don't match my experience.

>My favorite example of the anti-intellectual current in CS is a paper by
>a very famous computer scientist in which he states that the teaching of
>certain computer languages should be treated as a criminal act.  

Dijkstra, wasn't it, on either Fortran or BASIC?  The point being made
was that learning a language with many restrictions tends to crimp your
ability to analyze a problem.  And it's true.  For example, people
brought up on Fortran IV don't tend to think of working through an
array from the high index to the low, because DO-loops only index up.
It wasn't denigrating teaching and learning.  It was denigrating forcing
people to think in excessively restrictive ways.

>Really good computer scientists don't often argue that
>there's no place for assembly language.  But many typical ones do.  In
>my opinion assembly language is an underused technique with real power
>to make smaller, faster, cheaper systems.  It's one of many techniques
>for doing this and the negative attitudes about it are completely
>unwarranted.

No.  It's too labor-intensive, and if you're working on a program that
really does need to run on multiple platforms, it should be a technique
of last resort.  I use it if it's needed, but only if it's really needed.

In effect, we aren't really that far apart, but you generalize too much,
and as we know, all generalizations are bad.

  -- David Wright, not officially representing Stardent Computer Inc
     wright@stardent.com  or  uunet!stardent!wright

Join the war against violence

gillett@ceomax..dec.com (Christopher Gillett) (07/23/90)

In article <1990Jul21.004616.649@Stardent.COM> wright@stardent.Stardent.COM (David Wright @stardent) writes:
>In article <64044@sgi.sgi.com>, karsh@trifolium.esd.sgi.com (Bruce Karsh)
>writes:
>
>>I love the science part of computer science, but I am disappointed with
>>some of the religious-sounding beliefs that have attached themselves to
>>the field.  
[...]
>>I left the field and switched to mathematics where
>>correctness was considered more important than cosmetics.
>
>And you can write incomprehensible proofs all day to your heart's
>content?  ( 0.5 :-)  I agree that there are plenty of second-rate
>CS departments where form doesn't follow function.  
[...]
>But I still object to your tarring all CS people with this brush.  It
>doesn't apply to me or to a lot of other people I know.  So either qualify
>your remarks or stop making them. 
>
Whoa!  Slow down...pop a couple valium. I didn't see Karsh's comments 
as indicative of *all* computer science types.  His remarks were
about the typical computer scientist, not all of them.  My background
and degree are in computer science, and I found his remarks more 
accurate than offensive.  Calm down.

>>I still hope for a day when programming professionals will evaluate
>>programs by how well they perform their intended function, not by how the
>>source code is indented and commented, or how portably they were
>>written.  (This is, by the way, how people who purchase software
>>usually evaluate it).
>
>Yes it is.  And that's because those people don't have to maintain the
>code.  But if you think they don't care about portability, you're wrong.
>What if they want to buy new hardware?  And in effect, they do care
>how it's written, because they'll want improvements, and bug fixes. and
>quality.  Hard to get from spaghetti code.
>
Methinks you've missed Karsh's point.  Too many CS programs put the
emphasis on the aesthetic beauty of a computer program.  CS students
tend to be evaluated on how their program source code looks, how it
maps to flowcharts, how well it is commented and indented, etc. (OK,
everyone who ever had a CS prof say something like "A subroutine is 
never more than x [pages, lines] long", raise your hands).  While it
is necessary to place some emphasis on these things, the Real World
wants to know about performance and bugginess.  What got everyone
excited when Philippe Kahn and Turbo Pascal hit the market?  Nobody
gave a rat's ass about how well it was engineered, or what the source
code looked like.  What everyone was cranked up about was how small
and how fast the damned thing was.  Borland International makes the vast
majority of their money trading on the performance of their tools.
No, you don't want spaghetti code.  Yes, you want code that you
can maintain.  Yes, for most apps you'd like some platform 
independence.  But these are not what drives your market.  Accuracy,
speed, and efficiency separate the successes from the failures.

>>If electrical engineering were taught like computer science, all the
>>schematics would be drawn perfectly symmetrically, would all use the
>> [...deleted for brevity]

>Pfui.  The notion that a well-structured program is inherently slow
>is bullshit.  As for utility, well, you're right that it hasn't had
>the attention it deserves.  But that's independent of the existence
>or non-existence of CS as a discipline, and has everything to do with
>what users have been willing to bend over and take so far.
>
I agree with the assertion that well-structured programs can perform
as well as a hacked mess.  Unfortunately, the notion of "well structured"
as taught by the vast majority of universities leads to the kinds of
programs that Karsh is discussing.  Yes, the Big Ten of Computer Science
may teach the principles correctly, but the 10 kazillion other CS
schools don't do it right.  This means that most of the computer science
types coming out of school don't have the right ideas about program
structure, data abstraction, information hiding, etc.

>>Just what is the common base of knowledge that is basic to computer
>>science?
>...
>>the undergraduate computer science grad will assuredly know
>>how to indent code, how to decompose a problem into way too many
>>subroutines, how to criticize the way a program is commented, and how
>>to avoid learning about any particular computer's machine language and
>>peripherals.
>
>I wouldn't give a degree to anyone like this.  Can you cite an example
>of a CS program that holds this is the right way to go?  You make a lot of
>claims about "CS does this", but your claims don't match my experience.
>
Most of the CS types I've encountered seem to have the point of view
described by Karsh, at least upon leaving school.  There is virtually
no emphasis placed on understanding hardware, understanding how machines
work, or even on learning assembly language in most computer science 
programs.  The CS program I took (circa 1984) was ACM accredited and had
absolutely no courses in which hardware was discussed.  The only course
that came close was a logic circuit design course, and that was primarily
about the wonders of Boolean algebra.  Most of the courses were typical
hammering away at good form, instead of striving for good function.  After
working for several different places in several different states, I've met
and talked with a lot of CS types.  The vast majority of these people had
experiences similar to mine.

>>My favorite example of the anti-intellectual current in CS is a paper by
>>a very famous computer scientist in which he states that the teaching of
>>certain computer languages should be treated as a criminal act.  
>
>Dijkstra, wasn't it, on either Fortran or BASIC?  
>
Based on the mail I got from folks regarding this question, it was
Dijkstra, on COBOL.

>
>>Really good computer scientists don't often argue that
>>there's no place for assembly language.  But many typical ones do.  In
>>my opinion assembly language is an underused technique with real power
>>to make smaller, faster, cheaper systems.  It's one of many techniques
>>for doing this and the negative attitudes about it are completely
>>unwarranted.
>
>No.  It's too labor-intensive, and if you're working on a program that
>really does need to run on multiple platforms, it should be a technique
>of last resort.  I use it if it's needed, but only if it's really needed.
>
Aha!  Let's presume for a moment that you are truly a computer scientist,
and that you buy into all the stuff that computer "science" teaches.  So,
you've got really good modularity, excellent functional decomposition,
supporting design documents to explain the whole thing, etc.  Doesn't
it follow that you should be able to put in your assembly language routines
such that you don't interfere too much with portability?  Whenever
I use assembly language routines or modules, they wind up getting abstracted
down into the compatibility layer of my applications.  That means that the
entire application above the layer will port without modification to any
conforming environment (in this case, an ANSI Standard C environment), and
the compatibility layer varies on a platform-by-platform basis.  I've written
some pretty big apps this way, and ported them to fairly divergent platforms,
and never suffered from budget overruns or labor-intensive problems.

Don't get me wrong here.  I personally buy into all the aforementioned
goodies: modularity, functional decomposition, design documents, etc.  I
even like to write code that is properly documented and in a form that
is easy to comprehend.  I just think you're missing the boat about assembly
language.

Face it, if you're architecting right (and you've already established that 
computer scientists know how to do this), then you can use whatever
combination of languages you want and not die in the effort.


>  -- David Wright, not officially representing Stardent Computer Inc
>     wright@stardent.com  or  uunet!stardent!wright


/Chris

P.S.  Ken Olsen speaks for Digital...I speak for me.  
      #include <standard-disclaimers>
---
Christopher Gillett               gillett@ceomax.dec.com
Digital Equipment Corporation     {decwrl,decpa}!ceomax.dec.com!gillett
Hudson, Taxachusetts              (508) 568-7172

peter@ficc.ferranti.com (Peter da Silva) (07/23/90)

In article <388@e2big.mko.dec.com> gillett@ceomax.dec.com (Christopher Gillett) writes:
> Whoa!  Slow down...pop a couple valium. I didn't see Karsh's comments 
> as indicative of *all* computer science types.  His remarks were
> about the typical computer scientist, not all of them.

That's no less offensive. His remarks were about a certain stereotype of the
computer scientist. About the stereotypical ivory tower type, not the typical
average case.

> Borland International makes the vast
> majority of their money trading on the performance of their tools.

And people trying to maintain portable programs curse them every day.  It's
one thing to write a hot Pascal compiler.  It's another thing to define a
superset of Pascal.  But writing a hot compiler that breaks the rules for
well-behaved programs (try running any Borland program under double-dos:
there's NO reason a compiler should ever write straight to the screen
memory!), and that defines a language that doesn't even include standard
Pascal as a subset...

Yeh, lots of people bought Borland tools. Just like lots of people bought the
IBM-PC. In both cases it burned lots of people interested in portable programs
for no good reason... it's one thing to write a non-portable program, but
making it impossible to write portable ones is a whole other ball game.

> Aha!  Let's presume for a moment that you are truly a computer scientist,
> and that you buy into all the stuff that computer "science" teaches.

Which computer "science"? The real one, or the straw man Bruce Karsh and
you keep bringing up.
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
<peter@ficc.ferranti.com>

gillett@ceomax..dec.com (Christopher Gillett) (07/23/90)

In article <RQU4JU7@ficc.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>
>> Aha!  Let's presume for a moment that you are truly a computer scientist,
>> and that you buy into all the stuff that computer "science" teaches.
>
>Which computer "science"? The real one, or the straw man Bruce Karsh and
>you keep bringing up.

That's exactly my point!   IMHO, there's really no such thing as
"Computer Science".  Physics, chemistry, biology, mathematics, are all
real sciences.  There is a fundamental underpinning for everything,
and everything within these fields proceeds from a well-understood,
provable set of facts.  Some elements of computer science certainly
exhibit these traits, but for the most part it all seems to spring
forth from a mostly subjective, arguable basis.  I think that we
should stop holding out our discipline as a science and call it what
it really is...engineering.

Borland is but one example of a company whose success is based upon
their ability to deliver "performance products".  For the environment
and audience they've defined as market targets, their products are
excellent.  Griping about Turbo Whatever not running in some foreign
environment (like Double DOS) is akin to griping about how hard it is
to get your date to ride in your cool new garbage truck.  You need the
right tool for the right job.

>Peter da Silva.   `-_-'

/Chris
#include <standard_disclaimers>
---
Christopher Gillett               gillett@ceomax.dec.com
Digital Equipment Corporation     {decwrl,decpa}!ceomax.dec.com!gillett
Hudson, Taxachusetts              (508) 568-7172

ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) (07/23/90)

In article <391@e2big.mko.dec.com>, gillett@ceomax..dec.com (Christopher Gillett) writes:
> Physics, chemistry, biology, mathematics, are all
  *******             *******
> real sciences.  There is a fundamental underpinning for everything,
> and everything within these fields proceeds from a well-understood,
> provable set of facts.

I have an MSc in physics.  That really doesn't sound like the physics I
know.  Physics has its own equivalent of exponential algorithms:  equations
which theoretically tell you what you need to know, but which you can't
solve.  In the kind of ocean physics I did, the whole art of "doing physics"
was deciding which bits of the equations to throw away, figuring out which
crude approximations were crude enough to compute with, but not so crude
that the phenomenon you're interested in disappears.  Then you go out to the
real world and start looking for excuses why it doesn't actually work like
that.  As for biology, if anyone has a theoretical derivation of the
scaling law brain_weight ~ body_weight**(0.64..0.70) I would be VERY
interested in hearing it.  (For that matter, I'd be very interested in
hearing about a workable definition of body weight; should non-metabolising
tissues like hair, scales, claws, and so on be included?)

_Real_ sciences are _messy_.  Even in mathematics: do you _believe_ in the
Axiom of Choice?  If it is now established that it's a provable fact or
follows from a well understood provable set of facts, then some of the
biggest names in mathematics who thought they had shown it was independent
of the other axioms of set theory must be spinning in their graves.

Let's keep things straight:
    invention and analysis of algorithms: computer science
    design and implementation of reliable programs: software engineering
    analysis of most existing computer architectures: hardware pathology
(:-)
-- 
Science is all about asking the right questions.  | ok@goanna.cs.rmit.oz.au
I'm afraid you just asked one of the wrong ones.  | (quote from Playfair)

gillett@ceomax..dec.com (Christopher Gillett) (07/23/90)

In article <3455@goanna.cs.rmit.oz.au> ok@goanna.cs.rmit.oz.au (Richard A. O'Keefe) writes:
>Let's keep things straight:
>    invention and analysis of algorithms: computer science
>    design and implementation of reliable programs: software engineering
>    analysis of most existing computer architectures: hardware pathology
>(:-)

This makes sense to me.  But is the "invention and analysis of algorithms"
more in the domain of mathematics than anything else?  It seems to me that
a lot of the stuff, like order-of-complexity analysis, formal languages, 
Boolean algebra, sorting and searching techniques, etc., tends to fall
into the realm of mathematics.

Who was it, Tony Hoare, who said something like "Computer Science is a
division of Mathematics.  Mathematics is a registered trademark of
Cambridge University"? :-)

I'm amazed by the amount of hate mail I've gotten overnight since I 
first followed up on this! :-(

/Chris

P.S.  #include <standard_disclaimers>
      ^he's too lazy to rebuild his news reader to include more
       than 4 lines in the .signature file.

---
Christopher Gillett               gillett@ceomax.dec.com
Digital Equipment Corporation     {decwrl,decpa}!ceomax.dec.com!gillett
Hudson, Taxachusetts              (508) 568-7172

cik@l.cc.purdue.edu (Herman Rubin) (07/23/90)

In article <RQU4JU7@ficc.ferranti.com>, peter@ficc.ferranti.com (Peter da Silva) writes:
> In article <388@e2big.mko.dec.com> gillett@ceomax.dec.com (Christopher Gillett) writes:
< > Whoa!  Slow down...pop a couple valium. I didn't see Karsh's comments 
< > as indicative of *all* computer science types.  His remarks were
< > about the typical computer scientist, not all of them.
> 
> That's no less offensive. His remarks were about a certain stereotype of the
> computer scientist. About the stereotypical ivory tower type, not the typical
> average case.

This stereotype fits the great majority of so-called applied scientists, and
I am including applied numerical analysts and applied statisticians in the lot.

A good applied scientist would not exclude a tool on the grounds that it 
could be misused.  A good applied scientist does not try to use "standard"
methods when there are better ways.  A good applied scientist allows the
introduction of new methods, and even invents them if he can think of them
and they are appropriate.

Also, the good engineer asks the good applied scientist for advice and help.

< > Borland International makes the vast
< > majority of their money trading on the performance of their tools.
> 
> And people trying to maintain portable programs curse them everyday. It's 
> one thing to write a hot pascal compiler. It's one thing to define a superset
> of pascal. But writing a hot compiler that breaks the rules for well behaved
> programs (try running any Borland program under double-dos: there's NO reason
> a compiler should ever write straight to the screen memory!), and defines a
> language that's doesn't even include standard Pascal as a subset...
> 
> Yeh, lots of people bought Borland tools. Just like lots of people bought the
> IBM-PC. In both cases it burned lots of people interested in portable programs
> for no good reason... it's one thing to write a non-portable program, but
> making it impossible to write portable ones is a whole other ball game.
> 
< > Aha!  Let's presume for a moment that you are truly a computer scientist,
< > and that you buy into all the stuff that computer "science" teaches.
> 
> Which computer "science"? The real one, or the straw man Bruce Karsh and
> you keep bringing up.

Too many of them are the type Bruce Karsh brings up.  Those who say that 
one must not use gotos, or that one should contort the problem to fit in
the context of Fortran or Pascal or C, those who want RISC so that the
compiler can optimize the code, those who would replace insertions with
subroutine calls, etc.  Probably most of those so-called scientists would
have a difficult time optimizing the code, so they want the machine to do
it for a restricted set of code.  These people are legion.  The others are
scarce.
-- 
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907
Phone: (317)494-6054
hrubin@l.cc.purdue.edu (Internet, bitnet)	{purdue,pur-ee}!l.cc!cik(UUCP)

peter@ficc.ferranti.com (Peter da Silva) (07/23/90)

In article <391@e2big.mko.dec.com> gillett@ceomax.dec.com (Christopher Gillett) writes:
> Griping about Turbo Whatever not running in some foreign
> environment (like Double DOS) is akin to griping about how hard it is
> to get your date to ride in your cool new garbage truck.

No, it's more like griping that the guy you hired to mow your lawn
insists on using a combine harvester for the job. There is no reason for
a compiler to do direct screen writes on an IBM-PC. And there is no reason
for the *standard* run time library to do so either.
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
<peter@ficc.ferranti.com>

sysmgr@KING.ENG.UMD.EDU (Doug Mohney) (07/24/90)

In article <388@e2big.mko.dec.com>, gillett@ceomax..dec.com (Christopher Gillett) writes:
>>>My favorite example of the anti-intellectual current in CS is a paper by
>>>a very famous computer scientist in which he states that the teaching of
>>>certain computer languages should be treated as a criminal act.  
>>
>>Dijkstra, wasn't it, on either Fortran or BASIC?  
>>
>Based on the mail I got from folks regarding this question, it was
>Dijkstra, on COBOL.

What a good idea. I'd even support a Constitutional amendment on it ;-)

pjg@acsu.buffalo.edu (Paul Graham) (07/24/90)

gillett@ceomax..dec.com (Christopher Gillett) writes:

|I'm amazed by the amount of hate mail I've gotten overnight since I 
|first followed up on this! :-(

i think the problem is that this issue has been presented (initially by
karsh) in a rather inflammatory fashion.  another aggravation (at least
to me) is that this is comp.arch and not (despite karsh)

comp.edu                Computer science education.

or

comp.software-eng       Software Engineering and related topics.

while we can all skip articles we find inappropriate it is annoying when
folks continue to act in perhaps a rude fashion.  i fear we have drifted
rather far from comp.arch.

jesup@cbmvax.commodore.com (Randell Jesup) (07/24/90)

In article <Y=M4HP5@xds13.ferranti.com> peter@ficc.ferranti.com (Peter da Silva) writes:
>In article <64044@sgi.sgi.com> karsh@trifolium.sgi.com (Bruce Karsh) writes:
>> I still hope for a day when programming professionals will evaluate
>> programs by how well they perform their intended function, not by how the
>> souce code is indented and commented, or how portably they were
>> written.  (This is, by the way, how people who purchase software
>> usually evaluate it).
>
>You mean like the people who buy Word Perfect because it runs on just about
>anything? The people who can't upgrade their software when they switch from
>DOS to UNIX, Windows, or OS/2? Portability means repeat business.

	That would be a great point, Peter, except for one minor problem:
WordPerfect is 100% written in assembler for speed.

	However, I agree with you in principle.  Portability is very useful.
However, market pressures can force certain applications to be downcoded or
initially coded in assembler.  When you have a large number of customers,
the cost of coding, debugging, and maintenance is amortized over a larger base.
When it's a one-off, or for use by a small number of customers, or is to be
written and maintained in a university environment, or where speed is not
an important concern, then assembler should be avoided if possible.

	Remember the environment that CS teachers and grad students work in.
There's little or no benefit to be gained by making something particularly
fast, or small (or maintainable in some cases).  It's far more important to
get something implemented in a minimum of programmer time than to have it
run faster.  The trade-offs as seen by a CS prof or grad student are
different than those seen in industry.  Not totally different, but often
a difference of degree and philosophy.  It's fairly easy for theoretical CS
people to be "purists" (and for industry people to be the converse).  There
was a good editorial about this in an IEEE publication a few months back.

	Most often programs written by undergrads are throw-aways
(fairly often grads too), and they're taught little of the software
engineering, maintenance, etc. that they'll need to know in industry.
Regardless of the
rhetoric about modularity, etc, many CS undergrads come out coding like
"fortran" programmers (insert your favorite spaghetti-code language for
fortran if you wish).  I've seen it happen, with students with 3.x averages
from a _good_ technical school.  If they're good they'll pick it up, but
what the hell were they being taught?

	When I was in CS, the only Software Engineering course was a grad-
level course.  I think it should be taught in 2nd or 3rd semester undergrad.

	I also think there's some confusion as to what computer science is.
Most CS grads go into industry, and most work as engineers.  Few are doing
"science" in industry with a BS in CS.  Most companies want a CS degree
for someone who is to do Software Engineering duties.  Most of the science
part of CS is only really taught to grad students.  Either undergrad CS
majors should be trained more in Software Engineering, or they should be
split, and CS should be more theoretical, and SE be more oriented to the
different techniques and knowledge needed in industry.

	Well, more than enough flaming well off the topic of the newsgroup.
Sorry,
I guess something touched a nerve.  (In case you can't guess, I work in
industry, am not a "purist", and I'm sure my perceptions are affected by
this.)

-- 
Randell Jesup, Keeper of AmigaDos, Commodore Engineering.
{uunet|rutgers}!cbmvax!jesup, jesup@cbmvax.cbm.commodore.com  BIX: rjesup  
Common phrase heard at Amiga Devcon '89: "It's in there!"

peter@ficc.ferranti.com (peter da silva) (07/24/90)

In article <13392@cbmvax.commodore.com>, jesup@cbmvax.commodore.com (Randell Jesup) writes:
> WordPerfect is 100% written in assembler for speed.

What does it need that speed for? Like all editors, it's almost 100% I/O
bound on the user interface.
-- 
Peter da Silva.   `-_-'
+1 713 274 5180.   'U`
<peter@ficc.ferranti.com>

ian@sibyl.eleceng.ua.OZ (Ian Dall) (07/25/90)

In article <_YV4Y9A@ggpc2.ferranti.com> peter@ficc.ferranti.com (peter da silva) writes:
}In article <13392@cbmvax.commodore.com>, jesup@cbmvax.commodore.com (Randell Jesup) writes:
}> WordPerfect is 100% written in assembler for speed.
}
}What does it need that speed for? Like all editors, it's almost 100% I/O
}bound on the user interface.

I don't want to comment on word perfect particularly, but there is a
difficulty in benchmarking any interactive software. On the *average*
it spends almost all its time waiting for IO, but when you hit a
key, any delay is *very* noticeable.



-- 
Ian Dall     life (n). A sexually transmitted disease which afflicts
                       some people more severely than others.       

limonce@pilot.njin.net (Tom Limoncelli) (07/25/90)

In article <_YV4Y9A@ggpc2.ferranti.com> peter@ficc.ferranti.com (peter da silva) writes:

> In article <13392@cbmvax.commodore.com>, jesup@cbmvax.commodore.com (Randell Jesup) writes:
> > WordPerfect is 100% written in assembler for speed.
> 
> What does it need that speed for? Like all editors, it's almost 100% I/O
> bound on the user interface.

That's because most (most!) of today's machines have video systems that
introduce a HUGE number of wait-states for each write to video memory.

Such is done because of various design decisions made by the system
engineers when they are planning the COMPuter ARCHitecture... which
is something that isn't being discussed much around here.

Semi-seriously,
Tom
P.S.  Before you ask me via email: No, I don't claim to be the
net.police and yes, I do know how to operate a kill file.  Let me ask
YOU a question: Did you read the "Summary:" line?
-- 
tlimonce@drew.edu      Tom Limoncelli     +1 201 408 5389
tlimonce@drew.uucp
tlimonce@drew.Bitnet    My new philosophy on life:
limonce@pilot.njin.net                  "Vogue 'til you puke"

pegram@uvm-gen.UUCP (Robert B. Pegram) (07/25/90)

 Randell Jesup writes:
>> WordPerfect is 100% written in assembler for speed.
 Peter da Silva then asks:

> What does it need that speed for? Like all editors, it's almost 100% I/O
> bound on the user interface.
> -- 
> Peter da Silva.   `-_-'
> +1 713 274 5180.   'U`
> <peter@ficc.ferranti.com>

Probably for scrolling and reformatting on the fly.
(Amazing, I can be terse!)

Bob Pegram  Internet: pegram@griffin.uvm.edu
	    UUCP: uunet!uvm-gen!pegram

tif@doorstop.austin.ibm.com (Paul Chamberlain) (07/26/90)

In article <_YV4Y9A@ggpc2.ferranti.com> peter@ficc.ferranti.com (peter da silva) writes:
>In article <13392@cbmvax.commodore.com>, jesup@cbmvax.commodore.com (Randell Jesup) writes:
>> WordPerfect is 100% written in assembler for speed.
>
>What does it need that speed for? Like all editors, it's almost 100% I/O
>bound on the user interface.

Maybe for a 4MHz 8088?

Paul Chamberlain | I do NOT represent IBM         tif@doorstop, sc30661@ausvm6
512/838-7008     | ...!cs.utexas.edu!ibmaus!auschs!doorstop.austin.ibm.com!tif