[comp.arch] VAX bashing and language specificity of processors

zs01+@andrew.cmu.edu (Zalman Stern) (08/28/89)

Jerry Leichter (leichter@CS.YALE.EDU) says approximately the following:

    Why does everybody bash on the VAX?

and:

    RISC chips seem to be designed for C only.

On the first point, the VAX gets bashed because it is comparatively slow
and hard to understand. As far as I can tell, the main reason the VAX is
"slow" is that it can't tell where the next instruction begins without
parsing the entire current instruction. There are ways around this, but
it isn't clear that DEC is really pushing hard to build extremely high
performance VAXen. (This probably makes sense from an economic
perspective. Proprietary general purpose processors are being obsoleted
by mass-production microprocessors.) One reason the VAX is hard to
understand is that it has a lot of features intended to handle specific
cases which are irrelevant (and even a hindrance) to the software I work
with. Handling the specific case in hardware also makes the hardware big
and complex...
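
To make the decode problem concrete, here is a rough C sketch of that
serial dependency. The opcode table and operand-specifier lengths below
are made-up stand-ins, not the real VAX encoding; the point is only that
you cannot know where instruction N+1 starts until you have walked every
operand specifier of instruction N, whereas a fixed 32-bit encoding lets
you compute the next PC unconditionally.

#include <stdint.h>
#include <stdio.h>

/* Illustration only: real VAX opcodes and specifiers are far richer.
   An instruction is an opcode byte followed by a variable number of
   operand specifiers, each of which may itself be several bytes long. */

/* stand-in for a real opcode table: how many operand specifiers follow */
static int operand_count(uint8_t opcode) {
    return opcode % 4;
}

/* stand-in decode of one operand specifier; returns its length in bytes */
static int specifier_length(uint8_t spec) {
    switch (spec >> 4) {          /* pretend the top nibble is the mode */
    case 0x5: return 1;           /* register */
    case 0xA: return 2;           /* byte displacement */
    case 0xC: return 3;           /* word displacement */
    case 0xE: return 5;           /* longword displacement */
    default:  return 1;
    }
}

/* The next PC depends on fully parsing the current instruction, so
   fetch/decode is inherently serial.  With a fixed 32-bit RISC encoding
   this function would simply be "return pc + 4;". */
static size_t next_pc(const uint8_t *mem, size_t pc) {
    int n = operand_count(mem[pc]);
    size_t p = pc + 1;
    for (int i = 0; i < n; i++)
        p += specifier_length(mem[p]);
    return p;
}

int main(void) {
    uint8_t mem[16] = { 0xD1, 0x51, 0xA3, 0x10, 0xD0, 0x52, 0x53 };
    size_t pc = 0;
    while (pc < 7) {
        pc = next_pc(mem, pc);
        printf("next instruction boundary at byte %zu\n", pc);
    }
    return 0;
}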

I have less respect for complexity arguments after having worked with
some RISC systems in detail. Sun-style register windows are a bitch to
deal with. (Think about asynchronous modification of the stack on
context switches. AMD's scheme might be better.) MIPS' global-pointer
addressing mode introduces problems reminiscent of large-model/small-model
libraries on the Intel 8086. (The Motorola 88k has a similar linkage
convention.) It's all relative, but the bottom line is that once you get
your software running on one of these machines, it will run fast.
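
To illustrate the global-pointer issue, here is a rough sketch of the
idea (the addresses are invented, and the single-register, 16-bit-offset
model is a simplification of how MIPS small-data addressing actually
works): anything within a signed 16-bit offset of the global pointer can
be reached with one load, anything else needs extra address-building
instructions, and objects built with different assumptions about what
lives in the "small" region don't mix cleanly, which is the 8086
large-model/small-model flavor of the problem.

#include <stdint.h>
#include <stdio.h>

/* One register is reserved as the global pointer (gp); data the linker
   places near it can be loaded in a single instruction, because the
   load's offset field is a signed 16-bit immediate. */
static uint32_t gp = 0x10008000u;             /* invented address */

static int fits_gp_relative(uint32_t addr) {
    int32_t off = (int32_t)(addr - gp);
    return off >= -32768 && off <= 32767;
}

int main(void) {
    uint32_t small_var = 0x10008100u;  /* placed in the small-data area */
    uint32_t big_array = 0x10080000u;  /* too far: needs extra instructions */
    printf("small_var reachable via gp: %d\n", fits_gp_relative(small_var));
    printf("big_array reachable via gp: %d\n", fits_gp_relative(big_array));
    return 0;
}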

Finally, on the point that RISCs were designed solely to run C: this is
just wrong. I know that both HP and MIPS consider support of other
languages to be very important. The literature on the HP Precision
architecture details consideration of COBOL as a force in instruction
set design. (The HP 3000/800 series is a business minicomputer based on
the same chip as their Series 9000/800 UNIX workstations. The 3000 runs
a different operating system (MPE???).) The MIPS R2000/R3000 and the AMD
29000 both provide arithmetic instructions that trap on overflow. The
SPARC has tagged arithmetic instructions to support dynamically typed
languages (e.g. LISP). Neither of these things would have been put in if
these RISC chips had been designed for C only.
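
To make those last two points concrete, here is a small C model of what
such instructions buy you. The function names are mine and the traps are
simulated with abort(); on the real machines the hardware raises an
exception instead.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Overflow-trapping add, in the spirit of an add instruction that traps
   on signed overflow (as opposed to one that silently wraps). */
static int32_t trapping_add(int32_t a, int32_t b) {
    int64_t wide = (int64_t)a + (int64_t)b;
    if (wide > INT32_MAX || wide < INT32_MIN) {
        fprintf(stderr, "integer overflow trap\n");
        abort();
    }
    return (int32_t)wide;
}

/* Tagged add, in the spirit of SPARC's tagged arithmetic: a dynamically
   typed language keeps small integers with their two low bits zero; if
   either operand carries a nonzero tag it is not a fixnum, and the add
   must trap out to a slower, type-dispatching path. */
static int32_t tagged_add(int32_t a, int32_t b) {
    if ((a | b) & 0x3) {
        fprintf(stderr, "tag trap: operand is not a fixnum\n");
        abort();
    }
    return trapping_add(a, b);
}

int main(void) {
    printf("%d\n", trapping_add(40, 2));                 /* 42 */
    printf("%d\n", tagged_add(40 << 2, 2 << 2) >> 2);    /* 42, as fixnums */
    return 0;
}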

Sincerely,
Zalman Stern
Internet: zs01+@andrew.cmu.edu     Usenet: I'm soooo confused...
Information Technology Center, Carnegie Mellon, Pittsburgh, PA 15213-3890

shankar@hpclscu.HP.COM (Shankar Unni) (08/29/89)

Henry Spencer writes:

> the fine points is still desirable.)  Compiler designers today understand
> that it may be better to convert COBOL numbers to binary for arithmetic
> than to mess up the hardware with decimal instructions, for example.
> COBOL programs are seldom arithmetic-bound anyway.

A minor point (I agree with everything else you said on this topic):

It is *not* generally OK for COBOL numbers to be converted to binary for
arithmetic. COBOL programmers work in exact fixed-point fractions.
Converting to floating point, while taking care of the magnitude, leaves
much to be desired in terms of precision. Integer arithmetic has the
opposite problem: precision is fine, but magnitude is not.
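
A short, standard-C demonstration of both halves of that argument:

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* Precision problem with binary floating point: 0.10 has no exact
       binary representation, so a running total of money amounts drifts
       away from the exact decimal answer. */
    double total = 0.0;
    for (int i = 0; i < 1000; i++)
        total += 0.10;                    /* exact decimal answer: 100.00 */
    printf("%.17f\n", total);             /* prints a value that is not 100 */

    /* Magnitude problem with plain integers: COBOL allows up to 18
       decimal digits, far more than a 32-bit word can hold. */
    printf("32-bit max: %d\n", INT_MAX);  /* only about 9 decimal digits */
    printf("COBOL max:  999999999999999999\n");
    return 0;
}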

Usually, the programmer determines which numbers are stored in binary
form and which in BCD (the "USAGE COMP(n)" phrase determines the
representation used). What we (HP) do for COBOL on HP Precision
Architecture is use finely tuned BCD libraries for the BCD operations.
There are a couple of instructions (like DCOR - Decimal CORrect) for
low-level decimal operations on packed BCD, so these libraries are
generally quite fast.
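
For illustration only, here is a bare-bones packed-BCD add in C. This is
not HP's library and it doesn't show DCOR itself; it just shows the
digit-at-a-time carry and correction work that decimal arithmetic
involves, which a tuned library does a word at a time with hardware help.

#include <stdint.h>
#include <stdio.h>

/* Packed BCD: two decimal digits per byte, most significant digit first. */
#define NBYTES 8                     /* 16 decimal digits */

static void bcd_add(const uint8_t *a, const uint8_t *b, uint8_t *sum) {
    int carry = 0;
    for (int i = NBYTES - 1; i >= 0; i--) {
        int lo = (a[i] & 0x0F) + (b[i] & 0x0F) + carry;
        carry = lo > 9;
        if (carry) lo -= 10;
        int hi = (a[i] >> 4) + (b[i] >> 4) + carry;
        carry = hi > 9;
        if (carry) hi -= 10;
        sum[i] = (uint8_t)((hi << 4) | lo);
    }
    /* a carry out of the top digit would be a COBOL "size error" */
}

static void bcd_print(const uint8_t *x) {
    for (int i = 0; i < NBYTES; i++)
        printf("%d%d", x[i] >> 4, x[i] & 0x0F);
    putchar('\n');
}

int main(void) {
    uint8_t a[NBYTES] = {0, 0, 0, 0, 0, 0x01, 0x23, 0x45};   /* 12345 */
    uint8_t b[NBYTES] = {0, 0, 0, 0, 0, 0x06, 0x78, 0x90};   /* 67890 */
    uint8_t s[NBYTES];
    bcd_add(a, b, s);
    bcd_print(s);                     /* 0000000000080235 */
    return 0;
}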

Besides, as you pointed out, COBOL programs are rarely arithmetic-bound.
-----
Shankar Unni                                   E-Mail: 
Hewlett-Packard California Language Lab.     Internet: shankar@hpda.hp.com
Phone : (408) 447-5797                           UUCP: ...!hplabs!hpda!shankar

khb@road.Sun.COM (road) (08/30/89)

In article <650012@hpclscu.HP.COM> shankar@hpclscu.HP.COM (Shankar Unni) writes:
>Henry Spencer writes:
>
>> the fine points is still desirable.)  Compiler designers today understand
>> that it may be better to convert COBOL numbers to binary for arithmetic
>> than to mess up the hardware with decimal instructions, for example.
>> COBOL programs are seldom arithmetic-bound anyway.
>
>A minor point (I agree with everything else you said on this topic):
>
>It is *not* generally OK for COBOL numbers to be converted to binary for
>arithmetic. COBOL programmers work in exact fixed point fractions.
>Converting to floating point, while taking care of the magnitude, leaves
>much to be desired in terms of precision. Integer arithmetic has the
>opposite problem: precision is fine, magnitude is not.

I believe Henry was suggesting operations on integers (or sets of
integers) rather than on BCD. No one that I know of is suggesting that
BCD be converted to floating point.
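
A sketch of what "operations on integers" presumably means here (the
type and helper names are my own, not anything any vendor ships): hold
the COBOL fixed-point value as a scaled binary integer, so ordinary
binary adds stay exact and only display has to re-insert the decimal
point.

#include <stdint.h>
#include <stdio.h>

/* A PIC S9(13)V99-style value held as a count of hundredths (cents),
   not as a float.  Addition and subtraction are exact. */
typedef int64_t money_t;                       /* value scaled by 100 */

static money_t from_parts(long long whole, int frac) {
    return (money_t)whole * 100 + frac;
}

static void print_money(money_t m) {
    printf("%lld.%02lld\n", (long long)(m / 100), (long long)(m % 100));
}

int main(void) {
    money_t a = from_parts(19, 95);            /* 19.95 */
    money_t dime = from_parts(0, 10);          /*  0.10 */
    money_t total = 0;
    for (int i = 0; i < 1000; i++)
        total += dime;                         /* exactly 100.00 */
    print_money(a + total);                    /* 119.95, exact */
    return 0;
}
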
Keith H. Bierman    |*My thoughts are my own. !! kbierman@sun.com
It's Not My Fault   |	MTS --Only my work belongs to Sun* 
I Voted for Bill &  | Advanced Languages/Floating Point Group            
Opus                | "When the going gets Weird .. the Weird turn PRO"