[net.micro.cpm] Turbo Pascal--first impressions

POURNE%mit-mc@sri-unix.UUCP (02/16/84)

From:  Jerry E. Pournelle <POURNE@mit-mc>

Turbo, most tell me, is pretty good, BUT:

do this:

foo := 1.23 * 100

now take frac of foo.  It won't be zero.

They say they'll fix that real soon now.
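
To make the experiment concrete, here is a minimal sketch in Pascal (Frac is
Turbo's fractional-part function; whether the product rounds to exactly 123
depends on the floating-point format, and Turbo's 6-byte software reals
evidently do not round it there):

    program FracDemo;
    var
      foo: real;
    begin
      foo := 1.23 * 100;   { 1.23 has no exact binary representation }
      writeln('foo       = ', foo:24:18);
      writeln('frac(foo) = ', frac(foo):24:18);   { nonzero when it misses 123 }
    end.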

young%uci-750a@sri-unix.UUCP (02/17/84)

From:  Michal Young <young@uci-750a>

Turbo Pascal arrived yesterday.  I'll share some first impressions now
and give a better review when I've used it for a while.

First-- it is very near standard.  Get and Put are not implemented; the
I/O primitives are Read and Write instead.  The heap is really a stack
and storage is returned by using mark and release instead of dispose.
Goto may not leave a block (this may be a problem for error recovery).
Functions and procedures may not be passed as parameters. 'Packed' is 
allowed but meaningless, and pack and unpack are not provided.  
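
For readers who have not seen mark/release before, here is a sketch of the
style the manual describes (Mark remembers the current top of the heap in a
pointer variable, Release cuts the heap back to it; the exact parameter
declarations below are my guess from that description):

    program MarkReleaseDemo;
    type
      ItemPtr = ^Item;
      Item    = record
                  value: integer;
                  next : ItemPtr;
                end;
    var
      heapTop: ^integer;      { any pointer variable will do for Mark }
      list, p: ItemPtr;
      i: integer;
    begin
      Mark(heapTop);          { remember the current top of the heap }
      list := nil;
      for i := 1 to 10 do
      begin
        New(p);               { ordinary allocation }
        p^.value := i;
        p^.next  := list;
        list := p;
      end;
      { ... use the list ... }
      Release(heapTop);       { frees everything allocated since Mark }
    end.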

There are numerous extensions, but they are mostly well thought out and 
do not screw up the syntax or semantics of the standard portion of the
language.  For instance, initializers are provided by an extension to
the const declaration.  Structures and arrays can be initialized this
way.  Strings up to 255 characters are allowed.  
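
A sketch of both extensions as I understand the manual (identifiers are mine;
the const-with-type syntax is Turbo's way of writing an initializer):

    program ExtensionDemo;
    const
      Greeting: string[20] = 'Hello from Turbo';        { initialized string }
      Powers  : array[1..4] of integer = (1, 2, 4, 8);  { initialized array  }
    var
      i: integer;
    begin
      writeln(Greeting);
      for i := 1 to 4 do
        write(Powers[i], ' ');
      writeln;
    end.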

A real attempt has been made to provide a programming environment rather
than just a compiler.  Provided the program and pascal system both fit
in memory, you can edit a program, compile and run it, and edit again
to fix an error without leaving the environment.  And no annoying waits
for overlays to load from disk, either-- compiler, editor, and program
somehow fit in memory all at once.  When either a syntax error or a run-time
error is detected, you wind up back in the editor with the cursor at 
the error.  If you have to run your program from outside the pascal 
system (because it is too big to fit with everything else in memory),
you can still find the source line in error.  You reenter the pascal
system and tell it the program counter address, and it re-compiles
until it comes to that address.  Pretty slick.  There are a few
rough edges, but I haven't ever seen a compiler (not interpreter)
this nice to work with.

The documentation is good.  250+ pages in a paperback book, reasonably
well written but not outstanding.  This same manual covers CP/M-80,
CP/M-86, and MS-DOS versions.  Except for BIOS-level diddling (which
Turbo will allow), it looks to be portable.  

Michal Young
young@uci

Kenny%his-phoenix-multics.arpa@BRL.ARPA (02/18/84)

From:   Kevin Kenny <Kenny%his-phoenix-multics.arpa@BRL.ARPA>

The problem that you're describing (where frac (1.23 * 100) isn't zero)
is the usual truncation error in binary arithmetic.  If they say that
they'll fix it Real Soon Now, they are either lying or mean that they
intend to foul things up further; to someone who's doing numerical
analysis, the result is CORRECT (if it's very close to 0 or 1; you
didn't say what the result is, just what it isn't).

[flame on] I am getting awfully tired of people who say that decimal
arithmetic is "inherently more accurate" than binary.  This claim is
absolute rubbish. [blowtorch valve off again].

The problem, of course, is that there is no exact binary representation
for 1.23; the expansion is a repeating string beginning
1.0011101011100001010001 with the last twenty digits repeating.  The
fact that 1.23 can be represented as a finite-length string in decimal
leads people to claim that "decimal is more accurate." But, try
representing 1/3 in either system.  It doesn't go, does it?  Does this
say that we should all switch to the ancient Babylonian (base sixty)
system, where 1/3 can be represented exactly as <00>.<20>? I don't think
so.
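
The expansion is easy to verify without trusting any floating-point hardware
at all: do the long division 23/100 in base two with integers.  A sketch
(the space printed whenever the remainder returns to 92 marks the start of
each twenty-digit period):

    program BinaryPointTwoThree;
    var
      rem, bit, i: integer;
    begin
      rem := 23;                      { 0.23 = 23/100, held exactly }
      write('0.');
      for i := 1 to 44 do
      begin
        rem := rem * 2;
        bit := rem div 100;           { next binary digit }
        rem := rem mod 100;
        write(bit);
        if rem = 92 then write(' ');  { remainder 92 recurs every 20 digits }
      end;
      writeln;
    end.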

The point is that any number can be represented to any level of
precision (short of exact) in any radix.  No radix can represent all
numbers exactly; Georg Cantor proved that a long time ago.

I concede that there is a problem in dealing with bankers and other
people who expect dollars and cents to come out even.  But a dollar
amount isn't a floating point number at all: it's an integer number of
cents!  In COBOL and PL/1, there are facilities to deal with the idea
that an integer might have a "decimal point" in its displayed
representation.  In most other languages, you just have to remember that
a variable contains an amount in cents and convert it before
displaying. It's not that tough.  Really it isn't.
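
A sketch of that approach in Pascal (names are mine, and I use longint for
headroom even though the CP/M Turbo under discussion has only 16-bit
integers, so take it as the idea rather than a drop-in):

    program CentsDemo;
    var
      price, tax, total: longint;          { whole cents, never fractions }
    begin
      price := 1999;                       { $19.99 }
      tax   := (price * 6 + 50) div 100;   { 6% tax, rounded to nearest cent }
      total := price + tax;
      writeln('Total: $', total div 100, '.',
              (total mod 100) div 10, total mod 10);
    end.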

The floating point implementations that "don't have this problem" use
"fuzzy comparisons".  What this means is that if the difference between
two numbers is less than some arbitrary constant times the smaller one,
they are considered equal.  This keeps the bankers happy, but drives the
engineers up a wall; there's an implicit loss of (real) precision to
gain the (perceived) accuracy.
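
For the record, a sketch of the sort of fuzzy comparison being described
(Epsilon is exactly the "arbitrary constant" in question):

    program FuzzyDemo;

    function NearlyEqual(a, b: real): boolean;
    const
      Epsilon = 1.0e-6;                { the arbitrary constant }
    var
      diff, smaller: real;
    begin
      diff := abs(a - b);
      smaller := abs(a);
      if abs(b) < smaller then smaller := abs(b);
      NearlyEqual := diff <= Epsilon * smaller;
    end;

    begin
      writeln(NearlyEqual(1.23 * 100, 123.0));   { TRUE: the banker is happy }
      writeln(1.23 * 100 = 123.0);               { exact compare: hangs on the last bit }
    end.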

Enough said.  Just a one sentence summary:

COMPARING TWO FLOATING POINT NUMBERS FOR EXACT EQUALITY IS NEARLY ALWAYS
A MISTAKE, WHATEVER BASE THE MACHINE USES.

/k**2

POURNE%mit-mc@sri-unix.UUCP (02/18/84)

From:  Jerry E. Pournelle <POURNE@mit-mc>

1. My mistake: I repeated something I was told.
2. Turbo is going to DOCUMENT the fact that frac of a floating
point number is close to ONE, not close to ZERO; apparently this
representation scheme is in use on other machines, and they're
staying compatible.
3. Computer science is wonderful, but I'm glad you don't do my
taxes or take care of my bank account.
4. I don't much care about the subjects of your flame, but I do
care about ease of use and just getting the job done; and I
don't think I want to train MBA candidates to think about
numerical representations, merely to be able to use the
machines.

Kenny%his-phoenix-multics.arpa@BRL.ARPA (02/19/84)

From:   Kevin Kenny <Kenny%his-phoenix-multics.arpa@BRL.ARPA>

My apologies for flaming; my last message was written fairly late on a
very bad day.  I, too, am primarily interested in getting the job done,
which involves selecting the right tool to do it.

For doing taxes or balancing bank accounts I'd use scaled fixed-point
arithmetic, which gets the pennies right, and not worry about whether
it's decimal or binary internally.  Funny, do you suppose that's why
COBOL was designed that way?

For engineering work, I want floating point sometimes (although the more
you use it, the less you trust it), and (by preference, not necessity)
binary arithmetic.  On nearly all machines, binary is faster; the
analysis of computation errors is easier, too.

For systems programming, I don't give a damn, since it's non-numeric
anyway.

A language trying to serve every application's needs can't do them all
right without falling into the trap of gigantism (_vide_ PL/1).  I think
Turbo has made the right decision, though I recognize that I'm
personally biased toward engineering and away from finance.