[comp.arch] Was the 360 badly-designed?

crowl@cs.rochester.edu (Lawrence Crowl) (08/21/87)

crowl@cs.rochester.EDU (Lawrence Crowl) writes:
>..., are you implying the 360 architecture was badly designed?  This claim
>will need VERY good arguments to over-ride 25 years (almost) of success.

mjr@osiris.UUCP (Marcus J. Ranum) writes:
)Stone hammers, along with flint knives, showed more success (in years) than
)EBCDIC architecture, but nobody uses them anymore.  Only the trailing edge of
)technology still supports 360 architecture...  Arguing that your flint axe has
)had '2000 years of success' is not going to change the fact that the times
)have changed.  Do you also favor laser-optical card reader technology ?

Yes, stone hammers and flint knives were used for a very long time.  Their
performance has not improved.  Implementations of the 360 architecture have
improved immensely.

On the contrary, the leading edge of technology supports the 360 architecture.
Some of the fastest scalar machines available are based on the 360.

Yes, times have changed, but "well-designed" is relative to the time at which
the design was done.  Roman roads were well-designed.  No one builds them any
more, but they were still well-designed.

dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
]The misconception here is that a broad user base implies high quality or
]elegance of design.  Instead of offering VERY good arguments, I will simply
]offer three counterexamples without further comment.
]1.   The 8086 family of CPUs versus the 680x0 family of CPUs
]2.   The National Enquirer versus the Wall Street Journal
]3.   Family Feud versus the MacNeil/Lehrer Report

I had no misconception, and these are not counter-examples.  I did not state
that something had to be well-designed to be popular.  Nor are popular things
necessarily poorly-designed.  Popular and well-designed are loosely related.

I am not necessarily stating that the 360 architecture was well-designed, but I
am saying the architecture has shown flexibility and adaptability for many
years.  If you wish to say the 360 architecture is bad, you must show why its
adaptability is illusory.  The 360 architecture has been implemented on
machines spanning roughly two orders of magnitude in performance.  It has gone
from physical memory to virtual memory.  It supported a virtual machine long
before many other architectures did.  

I repeat my statement: one needs VERY good arguments to claim that the 360
architecture was badly-designed.  Anyone care to provide them or refute them?
I have added comp.arch since they are likely to provide interesting input.
-- 
  Lawrence Crowl		716-275-8479	University of Rochester
		     crowl@cs.rochester.arpa	Computer Science Department
 ...!{allegra,decvax,seismo}!rochester!crowl	Rochester, New York,  14627

lyang%scherzo@Sun.COM (Larry Yang) (08/22/87)

In article <1580@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>crowl@cs.rochester.EDU (Lawrence Crowl) writes:
>>..., are you implying the 360 architecture was badly designed?  This claim
>>will need VERY good arguments to over-ride 25 years (almost) of success.
>[....]
>I repeat my statement: one needs VERY good arguments to claim that the 360
>architecture was badly-designed.  Anyone care to provide them or refute them?
>I have added comp.arch since they are likely to provide interesting input.

From what I have heard, one reason for the IBM 360's success was foresight
on the part of the designers.  They decided that the machine should have
24 address bits.  At the time, 16 MBytes seemed like a heck of a lot of
memory.  At the time, 16 MBytes *was* a heck of a lot of memory.  But the
architects recognized the trend that memory was getting denser; and
as time wore on, those 16-bit-address machines fell by the wayside,
whereas the 360 could just keep having its memory expanded.  They designed
in something that didn't become obsolete in just a few years.

This example was given to me by a professor; I never really checked
the accuracy of the claim.  I'm sure someone else out there can confirm
or deny it.

********************************************************************************


--Larry Yang [lyang@sun.com,{backbone}!sun!lyang]|   A REAL _|> /\ |
  Sun Microsystems, Inc., Mountain View, CA      | signature |   | | /-\ |-\ /-\
    Hobbes: "Why do we play war and not peace?"  |          <|_/ \_| \_/\| |_\_|
    Calvin: "Too few role models."               |                _/          _/

guy%gorodish@Sun.COM (Guy Harris) (08/22/87)

> From what I have heard, one reason for the IBM 360's success was foresight
> on the part of the designers.  They decided that the machine should have
> 24 address bits.  At the time, 16 MBytes seemed like a heck of a lot of
> memory.  At the time, 16 MBytes *was* a heck of a lot of memory.

Unfortunately, one reason for what I presume was a big effort on the part of
IBM (can you say XA?) was *lack* of foresight on the part of the designers;
they decided that the machine should have 24 address bits.  Unfortunately, this
not only applied to things such as memory buses, it applied to effective
address formation.  As a result, everybody stuffed things into the upper 8 bits
of pointers, since they weren't used; when they ran out of the 16MB virtual
address space (by that time, it had virtual memory), they had to introduce a
mode bit to permit old 24-bit-addressing applications and new 31-bit-addressing
applications to run together.
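
To make that concrete: a minimal C sketch (names and flag layout
hypothetical) of the pointer-tagging habit that 31-bit addressing broke:

    #include <stdint.h>

    #define ADDR_MASK 0x00FFFFFFu    /* only 24 bits took part in
                                        effective-address formation */

    uint32_t tag(uint32_t addr, uint8_t flags)
    {
        /* stash flags in the top byte the hardware ignored */
        return ((uint32_t)flags << 24) | (addr & ADDR_MASK);
    }

    uint32_t untag(uint32_t tagged)
    {
        /* fine in 24-bit mode; clobbers real address bits in 31-bit mode */
        return tagged & ADDR_MASK;
    }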

From reading some of the XA documentation, it seems they planned for further
expansion, by using - wait for it - segmentation; it appears to have some
similarities to that provided by a machine nearer the bottom of their product
line.  :-)
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com

dhesi@bsu-cs.UUCP (Rahul Dhesi) (08/22/87)

In article <1580@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>. . .one needs VERY good arguments to claim that the 360
>architecture was badly-designed.

No stack, small segments, nonstandard character set with holes.
-- 
Rahul Dhesi         UUCP:  {ihnp4,seismo}!{iuvax,pur-ee}!bsu-cs!dhesi

rjh@ihlpa.ATT.COM (Herber) (08/23/87)

In article <1035@bsu-cs.UUCP>, dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
> In article <1580@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
> >. . .one needs VERY good arguments to claim that the 360
> >architecture was badly-designed.
> 
> No stack, small segments, nonstandard character set with holes.
> -- 
> Rahul Dhesi         UUCP:  {ihnp4,seismo}!{iuvax,pur-ee}!bsu-cs!dhesi

1. No stack: make one with existing instructions.
2. Small segments: 1 megabyte is not big enough?
                   And the architecture transparently handles the transition
                   from one segment to another. An instruction or datum can
                   start in one segment and end in another; to the programmer
                   it looks like one piece of memory.
3. Nonstandard character set: Check, I believe that EBCDIC is a standard;
                   and, ASCII has its problems too -- numerics sort before
                   letters. The 360/370 architecture is not tied to EBCDIC.
                   The 360 architecture, in particular, had a bit in its
                   PSW (program status word) to tell the hardware whether to
                   generate EBCDIC or ASCII zones when converting from
                   packed decimal to zoned decimal.
4. BTW {:-)}, this message came from an Amdahl 5890-300 (a 360/370 architecture
                   processor) running UTS (tm-Amdahl) which is Unix (reg.tm-AT&T)
                   System V Release 2 compatible (see also: SVID and SVVS).
                   The character set is ASCII.

	Randolph J. Herber, Amdahl Sr Sys Eng, ..!ihnp4!ihlpa!rjh,
	(312) 979-6553, IH 6X213, AT&T Bell Labs, Naperville, IL 60566

guy%gorodish@Sun.COM (Guy Harris) (08/23/87)

> >. . .one needs VERY good arguments to claim that the 360
> >architecture was badly-designed.
> 
> No stack, small segments, nonstandard character set with holes.

He said "VERY good arguments"; these aren't.

"No stack": what do you mean by "no stack"?  There are no "push" or "pop"
instructions, and the procedure call instruction saves the return address in a
register, but so what?  Nothing *prevents* you from implementing a stack.
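
A minimal C sketch of the point (names hypothetical): a stack is nothing
more than a pointer plus the ordinary loads and stores the 360 already has.

    /* grow downward, the way most call stacks do */
    #define PUSH(sp, v)  (*--(sp) = (v))
    #define POP(sp)      (*(sp)++)

    static int stack[256];
    static int *sp = stack + 256;    /* empty stack: pointer at the top */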

"Small segments": what do you mean by "segments"?  The original 360 didn't have
any sort of memory mapping.  If you *really* mean "12-bit offsets", yes, that
may be a nuisance, but it's not an insuperable problem, and it may have made
sense given the design constraints of the day.

"Nonstandard character set": considering ASCII was relatively new at the time
(I'm not even sure to what degree ASCII *existed* in 1963!), this is simply
bogus.

"with holes": well, ASCII has holes, too; why aren't "0-9" and "a-f" or "A-F"
contiguous?
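
Concretely, converting a hex digit in ASCII takes branches (or a table)
precisely because of those holes; a sketch:

    int hexval(int c)
    {
        if (c >= '0' && c <= '9') return c - '0';
        if (c >= 'a' && c <= 'f') return c - 'a' + 10;
        if (c >= 'A' && c <= 'F') return c - 'A' + 10;
        return -1;                /* not a hex digit */
    }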
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com

bcase@apple.UUCP (Brian Case) (08/23/87)

In article <1580@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>>will need VERY good arguments to over-ride 25 years (almost) of success.
> Implementations of the 360 architecture have improved immensly.

BUT NOT THE ARCHITECTURE.

>
>On the contrary, the leading edge of technology supports the 360 architecture.

BUT NOT THE LEADING EDGE OF THE ART.

>Some of the fastest scalar machines available are based on the 360.
>
>I am not necessarily stating that the 360 architecture was well-designed, but I
>am saying the architecture has shown flexibility and adaptability for many
>years.  If you wish to say the 360 architecture is bad, you must show why its
>adaptability is illusory.  The 360 architecture has been implemented on

The adaptability is not illusory.  It is, however, bought at an extremely
high price.

>machines spanning roughly two orders of magnitude in performance.  It has gone
>from physical memory to virtual memory.  It supported a virtual machine long
>before many other architectures did.  

On the contrary, the 360 (370) is (has been) more than an "almost" success.

You are correct in stating that some of the fastest scalar machines are
based on the 360 (370) architecture.  But that does NOT mean anything.
Take, for example, the EDGE 68010 implementation in six, huge, 256-pin PGA
gate arrays.  It is definitely a fast processor.  The Amdahl 5860 and
siblings are fast processors.  However, those machines are, relative
to recent offerings from a few sources, VERY expensive.  They are compatible,
yes, but painfully expensive.  Within reason, it is possible to have fast
implementations, virtual machine implementations, <your adjective>
implementations; the trick is to have SMALL, CHEAP fast implementations,
virtual machine implementations, <your adjective> implementations.  The
MIPS Co. processor, SUN 4 processor, the Acorn RISC machine processor,
the Am29000 processor, etc. have, at least, for some problems, performance
equal to or greater than multimillion dollar machines, at prices orders
of magnitude lower.

The 360 (370) architecture was, for its time, perhaps not badly designed.
However, its flaws, relative to the current state of the art, are readily
apparent.  If it were to be introduced today, most (at least most of the
people *I* know who are concerned about such things) would call it a badly
designed architecture.

    bcase

chuck@amdahl.amdahl.com (Charles Simmons) (08/23/87)

In article <1035@bsu-cs.UUCP> dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
>In article <1580@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>>. . .one needs VERY good arguments to claim that the 360
>>architecture was badly-designed.
>
>No stack, small segments, nonstandard character set with holes.
>-- 
>Rahul Dhesi         UUCP:  {ihnp4,seismo}!{iuvax,pur-ee}!bsu-cs!dhesi

What do you mean by "no stack"?  Our C compiler uses a stack on our
370 architecture.  Are you complaining that auto-increment/auto-decrement
addressing modes weren't implemented?  Do current RISC chips use these
addressing modes?

What does the character set that tends to be used with an architecture
have to do with the architecture?  I don't think we have any problems
using ASCII with our architecture...

-- Chuck
amdahl!chuck

mjr@osiris.UUCP (Marcus J. Ranum) (08/23/87)

	Would you guys with the religious debate take it somewhere else or
at least stop following it up to comp.lang.c ? 

Ut!
--mjr();

-- 
If they think you're crude, go technical; if they think you're technical,
go crude. I'm a very technical boy. So I get as crude as possible. These
days, though, you have to be pretty technical before you can even aspire
to crudeness...			         -Johnny Mnemonic

dhesi@bsu-cs.UUCP (08/23/87)

"Make every word count!" they say.  So I do, and now I have to clarify.
I wrote this about the IBM 360:

        No stack, small segments, nonstandard character set with holes.

guy%gorodish@Sun.COM (Guy Harris) writes:

>"No stack": what do you mean by "no stack"?  There are no "push" or "pop"
>instructions, and the procedure call instruction saves the return address in a
>register, but so what?  Nothing *prevents* you from implementing a stack.

Exactly.

>"Small segments": what do you mean by "segments"?  The original 360 didn't have
>any sort of memory mapping.  If you *really* mean "12-bit offsets", yes, that
>may be a nuisance, but it's not an insuperable problem, and it may have made
>sense given the design constraints of the day.

Yes, I did mean 12-bit offsets, and they can be a nuisance and not an
insuperable problem, and may (or may not) have made sense given the
design constraints of the day.  The same is often said of the 64-K
blocks of code and data imposed by the 8086 architecture, and I don't
think that that CPU deserves any awards for design excellence either.

>"Nonstandard character set": considering ASCII was relatively new at the time
>(I'm not even sure to what degree ASCII *existed* in 1963!), this is simply
>bogus.
>
>"with holes": well, ASCII has holes, too; why aren't "0-9" and "a-f" or "A-F"
>contiguous?

ASCII did exist then.

ASCII was designed quite logically.  It's convenient to have the digits
begin with a value that has four low-order zero bits, so one can simply
mask with binary 00001111 and get the numeric value of the digit
character.  It's convenient to have the alphabetic characters begin
with a value that has 0001 as the low-order bits, for one can mask with
binary 00011111 and get a number representing the position of the
character in the alphabet.  (E.g., 'Z' & 0x1f gives 26.)  Case
conversion is equally simple:  A single bit needs to be flipped.  One
can derive a neat subset alphabet by using only the low-order 6 bits.
One gets all the arithmetic operators, all the essential mathematical
symbols including (), <, =, and >, and all the digits--perfect for use
in a calculator.  The set of control characters neatly maps to the
alphabetic characters by flipping a single bit, allowing the logical
"control A", ^B, etc. notation to be used.  (There aren't enough
alphabetic characters to go around so you also have ^[ etc. but one
can't blame ASCII for that.)
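
For the record, the same tricks spelled out in C (ASCII assumed):

    #include <stdio.h>

    int main(void)
    {
        printf("%d\n", '7' & 0x0F);    /* 7:  numeric value of a digit */
        printf("%d\n", 'Z' & 0x1F);    /* 26: position in the alphabet */
        printf("%c\n", 'a' ^ 0x20);    /* A:  case is a single bit     */
        printf("%d\n", 'A' & 0x1F);    /* 1:  the code for control-A   */
        return 0;
    }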

There were a few other design thoughts that went into ASCII but I don't
remember them; I seem to faintly remember there was another way of
deriving a subset alphabet.  Certainly it wasn't put together
haphazardly like EBCDIC was.  It's hard to defend a character set in
which (x >= 'A' && x <= 'Z') doesn't assure us that x is alphabetic:
this is plainly a very undesirable property of EBCDIC.  On the other
hand, perhaps we should count our blessings and be thankful that IBM
chose to make the digits contiguous, so we don't have to use a lookup
table to evaluate a string of digits.
-- 
Rahul Dhesi         UUCP:  {ihnp4,seismo}!{iuvax,pur-ee}!bsu-cs!dhesi

bcase@apple.UUCP (Brian Case) (08/24/87)

In article <1035@bsu-cs.UUCP> dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
>In article <1580@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
>>. . .one needs VERY good arguments to claim that the 360
>>architecture was badly-designed.
>
>No stack, small segments, nonstandard character set with holes.

Wait, is the character set part of the architecture!?!?  I didn't think so,
but....  Also, some of the best architectures, in my opinion, don't "have"
any stacks either (what does it mean to "have" a stack?).  The 4K-byte
addressability problem is real.  The real problems with the architecture
are related to system-software interface issues and implementation
ramifications of the instruction set definition: things like too few
registers, two-address operations, hard-to-pipeline addressing modes, etc.

    bcase

bcase@apple.UUCP (Brian Case) (08/24/87)

In article <1589@apple.UUCP>, bcase@apple.UUCP (Brian Case) writes:
> >No stack, small segments, nonstandard character set with holes.
> 
> Wait, is the character set part of the architecture!?!?  I didn't think so,

But, I was wrong:  I forgot about the character instructions (edit, etc.).
These make assumptions about the character set, don't they?

    bcase

ken@argus.UUCP (Kenneth Ng) (08/24/87)

In article <1035@bsu-cs.UUCP>, dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
> In article <1580@sol.ARPA> crowl@cs.rochester.edu (Lawrence Crowl) writes:
> >. . .one needs VERY good arguments to claim that the 360
> >architecture was badly-designed.
> No stack, small segments, nonstandard character set with holes.
> -- 
> Rahul Dhesi         UUCP:  {ihnp4,seismo}!{iuvax,pur-ee}!bsu-cs!dhesi


Talking about holes, what are those characters between 5B hex and 60 hex
in ASCII?  Surely they aren't part of the alphabet.  But the character
set has no bearing on the architecture of a machine.  I've seen an Amdahl
(which is an IBM mainframe work-alike) running UTS with an ASCII
character set.  I'm pretty sure that if you try you can get EBCDIC on
a DEC machine.  As for a stack, the 360 assembler is sophisticated
enough to write macros that emulate stacks quite well; I use them
all the time.  As for small segments, there I must agree.  Granted,
I don't have too many data structures larger than 4K, but it is a bit
of an irritant.

What I don't like about the 360 architecture is the lack of a one-instruction
indirect load and/or store.  I've written macros to do the job, but it's
still a bit of an irritant knowing that the instruction is not available.
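
In C terms, the missing instruction costs a second, dependent load; a
sketch (comments show roughly the 370 code a compiler would emit):

    int load_indirect(int **pp)
    {
        int *p = *pp;    /* L  R1,PP      -- first load fetches the pointer */
        return *p;       /* L  R0,0(,R1)  -- second load goes through it    */
    }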


Kenneth Ng: Post office: NJIT - CCCC, Newark New Jersey  07102
uucp !ihnp4!allegra!bellcore!argus!ken *** NOT ken@bellcore.uucp ***
bitnet(preferred) ken@orion.bitnet

mwm@eris.BERKELEY.EDU (Mike (My watch has windows) Meyer) (08/24/87)

In article <26312@sun.uucp> guy%gorodish@Sun.COM (Guy Harris) writes:
<> >. . .one needs VERY good arguments to claim that the 360
<> >architecture was badly-designed.
<> 
<> No stack, small segments, nonstandard character set with holes.
<
<He said "VERY good arguments"; these aren't.

Quite right. Worse yet, those arguments ignore the environment the 360
was designed in. By today's criteria, the PDP-11 is badly designed: 16-bit
addresses, not nearly enough real memory, and poor support for
floating point numbers and 32-bit integers.

<"No stack": what do you mean by "no stack"?  There are no "push" or "pop"
<instructions, and the procedure call instruction saves the return address in a
<register, but so what?  Nothing *prevents* you from implementing a stack.

And at the time the 360 was designed, you generally wrote in languages
that didn't support recursion, so having a stack was far less important.

<"Small segments": what do you mean by "segments"?  The original 360 didn't have
<any sort of memory mapping.  If you *really* mean "12-bit offsets", yes, that
<may be a nuisance, but it's not an insuperable problem, and it may have made
<sense given the design constraints of the day.

Actually, the 4K offsets are really only a problem for functions and
data objects that want to grow past that boundary.  Since everything resides
in a flat 16MB address space, you can usually ignore them.

<"Nonstandard character set": considering ASCII was relatively new at the time
<(I'm not even sure to what degree ASCII *existed* in 1963!), this is simply
<bogus.

IBM was involved in ASCII. They pushed for an eight-bit character set.
When that didn't happen, rather than go with a standard they
considered inadequate (guess what - they were right!), they extended
the character set they had been using before (BCD). In retrospect, an
extended ASCII would have been better for the world, but their
decision caused *their customers* less pain.

	<mike
--
When logic and proportion have fallen soggy dead,	Mike Meyer
And the white knight is talking backwards,		mwm@berkeley.edu
And the red queen's on her head,			ucbvax!mwm
Remember what the dormouse said.			mwm@ucbjade.BITNET

jay@splut.UUCP (Jay Maynard) (08/24/87)

In article <1580@sol.ARPA>, crowl@cs.rochester.edu (Lawrence Crowl) writes:
> crowl@cs.rochester.EDU (Lawrence Crowl) writes:
> [...] the leading edge of technology supports the 360 architecture.
> Some of the fastest scalar machines available are based on the 360.

Yup. Just look at a 3090-600E. Blindingly fast, and will still solve the
real-world problems that business faces daily.

> Yes, times have changed, but "well-designed" is relative to the time at which
> the design was done.  Roman roads were well-designed.  No one builds them any
> more, but they were still well-designed.

Actually, there will be a new runway installed at Houston's Intercontinental
Airport (I think...been a while since I heard the news report) using Roman
road-building technology. Seems that they think that the runway will last
longer and be easier to maintain.

> dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
> ]The misconception here is that a broad user base implies high quality or
> ]elegance of design.  Instead of offering VERY good arguments, I will simply
> ]offer three counterexamples without further comment.
> ]1.   The 8086 family of CPUs versus the 680x0 family of CPUs

While people are bashing the 360 and 80x86 architectures, millions of
businesses and people are getting real, useful work done on them.

> ]2.   The National Enquirer versus the Wall Street Journal
> ]3.   Family Feud versus the MacNeil/Lehrer Report
> 
> I had no misconception, and these are not counter-examples.  I did not state
> that something had to be well-designed to be popular.  Nor are popular things
> necessarily poorly-designed.  Popular and well-designed are loosely related.

Yeah. Just look at Volvos and 680x0s. (BTW, have you noticed that people
who drive Volvos, just as people who use 680x0s and Unix, are convinced that
the rest of us are screwing up horribly if we don't follow their lead?)

> I am not necessarily stating that the 360 architecture was well-designed, but I
> am saying the architecture has shown flexibility and adaptability for many
> years.  If you wish to say the 360 architecture is bad, you must show why its
> adaptability is illusory.  The 360 architecture has been implemented on
> machines spanning roughly two orders of magnitude in performance.  It has gone
> from physical memory to virtual memory.  It supported a virtual machine long
> before many other architectures did.  
> 
> I repeat my statement: one needs VERY good arguments to claim that the 360
> architecture was badly-designed.  Anyone care to provide them or refute them?
> I have added comp.arch since they are likely to provide interesting input.

And those arguments will STILL fly in the face of practical, real-world
problem solving. Business isn't interested in conceptual purity; they want
their problems solved, now, and don't really care how they get that way -
except that they won't throw away many years and millions of dollars of
investment without a very good reason. Unix and VAXen haven't been good
enough reasons.

-- 
Jay Maynard, K5ZC...>splut!< | uucp: hoptoad!academ!uhnix1!nuchat!splut!jay
"Don't ask ME about Unix...  | (or sun!housun!nuchat)       CI$: 71036,1603
I speak SNA!"                | internet: beats me         GEnie: JAYMAYNARD
The opinions herein are shared by neither of my cats, much less anyone else.

gwyn@brl-smoke.ARPA (Doug Gwyn ) (08/24/87)

In article <26312@sun.uucp> guy%gorodish@Sun.COM (Guy Harris) writes:
>"Nonstandard character set": considering ASCII was relatively new at the time
>(I'm not even sure to what degree ASCII *existed* in 1963!), this is simply
>bogus.

The really annoying thing is that IBM was on the committee that came
up with the ASCII code, but they then proceeded to introduce EBCDIC
and totally ignore ASCII for many years.  (Sounds like the Algol story,
doesn't it?)

guy%gorodish@Sun.COM (Guy Harris) (08/24/87)

> >"No stack": what do you mean by "no stack"?  There are no "push" or "pop"
> >instructions, and the procedure call instruction saves the return address
> >in a register, but so what?  Nothing *prevents* you from implementing a
> >stack.
> 
> Exactly.

I.e., your only complaint is that they didn't wrap a pretty ribbon around a
nicely packaged stack construct?  Sorry, that alone does not a bad design make.
There are many cases where a machine that doesn't provide something as a neatly
packaged construct can nevertheless implement that something more efficiently
(by execution time, at least, and maybe even by code size) than a machine that
does.

> Yes, I did mean 12-bit offsets, and they can be a nuisance and not an
> insuperable problem, and may (or may not) have made sense given the
> design contraints of the day.  The same is often said of the 64-K
> blocks of code and data imposed by the 8086 architecture, and I don't
> think that that CPU deserves any awards for design excellence either.

Well, there is one difference; *pointers* are 24 bits long on the 360 and
successors (except in XA mode, where pointers are 31 bits long).  You can use a
pointer simply by loading it into a general register and indexing off it; if
you want to use a long pointer on an 8086 (other than the 386) you have to
monkey with segment registers as well.

If you don't like machines with limited offset sizes, you'll really hate MIPS
chips or SPARC chips.  The MIPS architecture's loads and stores have 16-bit
offsets; you have to glue two instructions with 16-bit offsets together to load
up a 32-bit address directly.  The SPARC architecture's loads and stores have
13-bit offsets; you have to stick a SETHI instruction in front to load the
upper 19 bits into a register (SETHI actually loads 22 bits, because that's how
many fit into the instruction format).
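
A C sketch of how the gluing works on MIPS-style machines (helper names
hypothetical; the +0x8000 compensates for sign extension of the low half):

    #include <stdint.h>

    /* split a 32-bit address into the two 16-bit immediates */
    uint32_t hi_half(uint32_t addr) { return (addr + 0x8000) >> 16; }
    int16_t  lo_half(uint32_t addr) { return (int16_t)(addr & 0xFFFF); }

    /* what LUI followed by a 16-bit load offset computes */
    uint32_t rejoin(uint32_t hi, int16_t lo)
    {
        return (hi << 16) + (int32_t)lo;    /* low half is sign-extended */
    }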

In fact, if you don't like machines that don't give you nice gift-wrapped
primitives, you're probably not going to like RISC machines, period.  I'm
willing to give up a fair bit of gift-wrapping in exchange for performance, and
I'm sure a lot of other people are as well.

> ASCII was designed quite logically.  It's convenient to have the digits
> begin with a value that has four low-order zero bits, so one can simply
> mask with binary 00001111 and get the numeric value of the digit
> character.

Yup.  EBCDIC certainly is convenient, since '0' is hex F0 and '9' is hex F9, at
least according to the EBCDIC-to-ASCII table in the back of the TOPS-20 Monitor
Calls manual.

> It's convenient to have the alphabetic characters begin with a value that
> has 0001 as the low-order bits, for one can mask with binary 00011111 and
> get a number representing the position of the character in the alphabet.
> (E.g., 'Z' & 0x1f gives 26.)

Well, that doesn't work with EBCDIC, even though alphabetic characters *do*
begin with a value that has 0001 as the low-order bits; then again, 256 bytes
of translation table can't be too bad.

> Case conversion is equally simple:  A single bit needs to be flipped.

Yup.  'a' = hex 81, 'A' = hex C1.  The same pattern holds true for all other
alphabetics.  Boy, that EBCDIC sure is convenient!

> One can derive a neat subset alphabet by using only the low-order 6 bits.
> One gets all the arithmetic operators, all the essential mathematical
> symbols including (), <, =, and >, and all the digits--perfect for use
> in a calculator.

I presume this is intended as a point in favor of SIXBIT, not of ASCII, since
EBCDIC has them all as well.  Why is the ability to pack characters into 6 bits
so wonderful for this application - especially on an 8-bit-byte machine like a
360?

> It's hard to defend a character set in which (x >= 'A' && x <= 'Z') doesn't
> assure us that x is alphabetic: this is plainly a very undesirable property
> of EBCDIC.

And ASCII as well; the code for 'a' is not between 'A' and 'Z', but 'a' is
definitely alphabetic.  This is also a property of ISO Latin #1; given that
it's based on ASCII, it has to be, since there's no place to put the accented
characters that's contiguous with the regular alphabetics.

Give a look at the UNIX <ctype.h> sometime; all the "is this alphabetic"-type
predicates, except for "isascii", are implemented by table lookup - including
"isalpha".  What's so terrible about table lookup?
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com

stuart@bms-at.UUCP (Stuart D. Gathman) (08/24/87)

The original reference to 360 architecture referred to *software*, not
hardware, since the use of EBCDIC is primarily a software (and firmware, in
the case of printers and terminals) issue.  The 360 is obviously a very good
hardware design, since so many people manage to do useful things with such
terrible software.

I think VM370 is elegant, but then IBM never did like it very much.  The
single-user OS's that run on top of VM are, in general, still awful.

NOTE - no hard facts here (except for the first sentence).  Just opinions.
I don't have time at the moment for hard facts, but my opinions are based
on 9 years experience.
-- 
Stuart D. Gathman	<stuart@bms-at.uucp>
			<..!{vrdxhq|dgis}!bms-at!stuartis

henry@utzoo.UUCP (Henry Spencer) (08/24/87)

> I repeat my statement: one needs VERY good arguments to claim that the 360
> architecture was badly-designed.  Anyone care to provide them or refute them?

Ask any 360 compiler implementer about base registers.  Wear your asbestos
suit.
-- 
Apollo was the doorway to the stars. |  Henry Spencer @ U of Toronto Zoology
Next time, we should open it.        | {allegra,ihnp4,decvax,utai}!utzoo!henry

henry@utzoo.UUCP (Henry Spencer) (08/24/87)

> 2. Small segments: 1 megabyte is not big enough?
>                    And the architecture transparently handles the transition
>                    from one segment to another...

I believe what was being referred to was not the way the MMU does segments,
but the 12-bit addressing offset, which effectively gives you 4096-byte
segments.  The management of the base registers needed to address things
within said segments is *not* transparent by a damn long sight.  Pointer
arithmetic, at least, uses a uniform address space, but ordinary addressing
doesn't.

> 4. BTW {:-)}, this message came from an Amdahl 5890-300 ...

Amdahl builds fine implementations of a truly scummy architecture.
-- 
Apollo was the doorway to the stars. |  Henry Spencer @ U of Toronto Zoology
Next time, we should open it.        | {allegra,ihnp4,decvax,utai}!utzoo!henry

drw@cullvax.UUCP (Dale Worley) (08/24/87)

The 360 *architecture* is really clean and elegant (although some of
the 370 and later extensions aren't as nice).  It was the first
machine language I learned.  When I later learned pdp-11 machine
language, I realized that the two shared a certain elegance...  mostly
revolving around general registers and symmetry of instruction
structure.  (Though the 11, with its concept of 'addressing modes',
was much better in that regard.)  Compare this with, say, the 8086,
which has about 15 flavors of 'move' instruction.

Now, the *software* that IBM put on the 360, on the other hand, takes
absolutely *no* awards for design.  The best proof of this is the
success of VM/370, which (in its original incarnation) essentially
places a raw 370 in the hands of the user.  VM/370 is a better program
development environment than MVS (nee OS/360), showing that a raw 370
is a better development environment than a 370 with MVS running on it.

Dale
-- 
Dale Worley	Cullinet Software		ARPA: cullvax!drw@eddie.mit.edu
UUCP: ...!seismo!harvard!mit-eddie!cullvax!drw
OS/2: Yesterday's software tomorrow	    Nuclear war?  There goes my career!

chuck@amdahl.amdahl.com (Charles Simmons) (08/25/87)

In article <1588@apple.UUCP> bcase@apple.UUCP (Brian Case) writes:
>                                                                   The
>MIPS Co. processor, SUN 4 processor, the Acorn RISC machine processor,
>the Am29000 processor, etc. have, at least, for some problems, performance
>equal to or greater than multimillion dollar machines, at prices orders
>of magnitude lower.
>
>    bcase

Anyone have an example of an application that runs faster on a MIPS,
Sun, Acorn, or AMD machine than it does on either a 5890 or a Cray 2?

Thanks, Chuck

madsen@vijit.UUCP (Dave Madsen) (08/25/87)

----- Sorry about the length, see last paragraph ------

In article <1590@apple.UUCP>, bcase@apple.UUCP (Brian Case) writes:
> In article <1589@apple.UUCP>, bcase@apple.UUCP (Brian Case) writes:
> > >No stack, small segments, nonstandard character set with holes.
> > Wait, is the character set part of the architecture!?!?  I didn't think so,
> But, I was wrong:  I forgot about the character instructions (edit, etc.).
> These make assumptions about the character set, don't they?
>     bcase

1)  The holey character set issue is dead; see many earlier messages 
    in this continuing saga.

2)  The ED and EDMK instructions don't make assumptions about the character
    set, they specify edit mask characters and editing actions.  
    Since these characters are replaced during the editing process, they 
    obviously can't be in the final string, and so shouldn't be normally
    used "printable" chars.  Ie, if "A" WOULD have been defined as an editing
    character, you would not have any "A"s in the final edited string, as 
    they would have been replaced by digits.  SO the designers made the 
    editing characters low in the collating sequence, where there aren't any
    "printables".  

3)  The packed instructions are optimized for EBCDIC, and for dealing with
    overpunched signs in numeric fields.  However, the sign zone rules are
    lax to the extent that if it ain't a 0xD, it's positive.  (I seem to 
    remember that there's a non-preferred negative zone, but I can't
    remember... 0xB?).  For the machine I work on (whose peripherals
    "know" ASCII), that's a pain.  But my machine mfr (Wang) has 
    taken the UNPK insn and made it make '3' zones instead of 'F' zones.

4)  About segments:  NO NO NO NO NO.  You have the wrong idea.  I program 
    daily in assembler on a Wang Labs VS 100 (which has taken 370 architecture
    and insn set and added stack, indirect call, and instruction-counter
    relative instructions), and it has NEVER EVER occurred to me to think of
    4K OFFSETS as defining 'segments'.  The 370 architecture defines a LINEAR
    address space.  The target address (for one common instruction format) is
    computed by adding 2 registers and an offset.  ANY register (except 0) 
    may be used in this computation, not just some 'segment' register.
    Registers are general-purpose.  No special registers, even for 
    'address' and 'data', let alone 'segments'.
    The 4k is not much of a limitation, as the coding style for the machine
    does not depend on a relatively fixed value in a register.  To be more
    concrete, suppose I have a large array of structures (sketched in C
    after this posting).  A register would typically be used to resolve to
    the array element, and any offset would address into the structure.
    Not many structures have over 4k worth of data.  If you have data
    longer than that, you can always use another
    register, so that the 2nd register points to 4k past the first.  Then you
    have 8k.  The *whole idea* is that address manipulation in registers 
    is easy, convenient, and natural (Please no flames on what natural is. 
    Some would say that pre/post increment is natural, and for some machine
    architectures, they're right.  Same idea here).
    I VERY SELDOM run into a program that has to use more than one register at a
    time to get more than 4k addressability for either code or data.  
    Reenterability is easy.  Subroutine calls (via BAL or BALR)
    usually result in the subroutine using a new 'base' register and saving 
    the old one in a stack or linked list.  The code is such that you usually
    don't see routines over 4k in length.  Even for data addressability you 
    usually don't find over 4k.  Directly addressed items are put in the 
    first 4k and other items (like data management control blocks, for 
    example) are put later.  The structure of the calls to the OS sometimes
    make it natural to put a pointer to that control block in the first 4k
    and use that.  Suffice it to say, I know that as a conscientious
    programmer, I feel guilty when I have to use more than one base register:
    it's simply poor technique, and it's NOT confining to use only one.
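
Here is the C sketch promised above (record layout hypothetical): one
register resolves the array element, and the 12-bit displacement reaches
any field, since few structures come anywhere near 4k.

    struct part {            /* well under 4k per element */
        char name[32];
        long qty;
        long price;
    };

    long total(struct part *p, int n)
    {
        long sum = 0;
        int  i;
        for (i = 0; i < n; i++)            /* base register -> &p[i] */
            sum += p[i].qty * p[i].price;  /* displacement  -> field */
        return sum;
    }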

Please no flames from those who work on special-purpose machines; this 
architecture was designed for general-purpose work, primarily business.

I would be more than happy to converse AT LENGTH with any who would call me
(312) 954 6512 or e-mail about this.  Summaries (as if this hasn't been 
beaten to death already) could be posted if there's an idle newsgroup day.
(Maybe in talk.bizarre)    :-)

Finally, I wish to apologize 1) for this message length, and 2) for having 
it in this newsgroup.  I simply find that having been familiar with this 
architecture for 19 years, I have a lot to say to those who make postings 
who are less informed about or experienced with the architecture.

Dave Madsen   ---dcm

ihnp4!vijit!madsen    or    vijit!madsen@gargoyle.uchicago.edu

I sure can't help what my employer says; they never ask me first!

guy%gorodish@Sun.COM (Guy Harris) (08/25/87)

> VM/370 is a better program development environment than MVS (nee OS/360),
> showing that a raw 370 is a better development environment than a 370 with
> MVS running on it.

Do you mean "VM/370" or "VM/CMS"?  If the latter, it really shows that a 370
with CMS running on it is a better development environment than a 370 with MVS
running on it (oops, typoed that as "a 370 with VMS" twice; one thing UNIX has
going for it is that its name has neither a V, nor an M, nor an S in it).  I
doubt that a raw 370 is much of a development environment at all; toggling
(turning?) programs in through the console switches (or the VM equivalent of
same) can't be much fun.
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com

hamish@root.co.uk (Hamish Reid) (08/25/87)

In article <1044@bsu-cs.UUCP> dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
>
>	[... much stuff about the 360 series, mostly
>	already debated/maligned/abused/etc...]
>
>ASCII was designed quite logically.
   [ Lots of stuff deleted about bitmasks, contiguous characters, etc ...]
>It's hard to defend a character set in
>which (x >= 'A' && x <= 'Z') doesn't assure us that x is alphabetic:
>this is plainly a very undesirable property of EBCDIC. [...]

[As an aside to Rahul:
	It's even harder to defend a character set that can't
	cope with very nearly all of the world's languages - what you say
	might well be true for English, but it's not even remotely true
	or relevant for languages without uppercase/lowercase
	distinction, or without English sorting sequences, with
	different alphabets, different characters, etc....
	This is plainly a very undesirable property of ASCII.
]

Look, I don't want to get into the "My character set is better
than yours, nyah nyah" argument, but surely the point is that the
character set should either not be an architectural issue at all
(i.e. leave the representation and manipulation
of human-readable characters to software), or the architecture
should be flexible enough to cope with all (or a large number of)
character sets (difficult!).

There's just too much at stake (sorting, etc) to hard-wire into
any architecture a character set so language-(English)-specific
as ASCII (or EBCDIC?) - see comp.std.internat if you don't
believe me. The 360 (along with several other systems) *does*
hardwire in a preference for a specific character set, and
this (*not the specific character set itself*) is something
you could criticise the 360 for. However, as someone else has
pointed out at length, there are more important issues that the
360 could be criticised for than the supposed lack of a stack, small
segments, or the use of EBCDIC - and in its day it was revolutionary
(in a way that the 8086, to which you compare the 360 on the segment
issues, wasn't).

	Hamish
----------------------------------------------------------------------------
Hamish Reid	Root Computers Ltd, Hayne St, London EC1A 9HH England
+44-1-606-7799	hamish@root.co.uk	mcvax!ukc!root44!hamish

gwl@rruxa.UUCP (George W. Leach) (08/25/87)

In article <1580@sol.ARPA>, crowl@rochester.UUCP writes:

> 
> I am not necessarily stating that the 360 architecture was well-designed, but I
> am saying the architecture has shown flexibility and adaptability for many
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> years.  If you wish to say the 360 architecture is bad, you must show why its
  ^^^^^
> adaptability is illusory.  The 360 architecture has been implemented on
> machines spanning roughly two orders of magnitude in performance.  It has gone
> from physical memory to virtual memory.  It supported a virtual machine long
> before many other architectures did.  
> 

     I will not argue the architecture design issues.  The 360 was the
top of the line when it was introduced.  I worked with one from 1980
through 1983, and from a software development environment point of view
(VM/CMS) it was terrible.  UNIX is such a far superior programming
environment to CMS that there is NO ARGUMENT here.

     What I would like to take issue with is the longevity of the 360/370
architecture.  Is it really the adaptability and flexibility of the
architecture or is it the fact that the huge customer base is tied into
that IBM environment?  There is a tremendous amount of $$$$ invested in
COBOL, FORTRAN, and PL/1 code on those beasts that CAN NOT be moved easily
to another architecture.  This is due to nice IBM-ONLY features
such as the EBCDIC character set.

     On the other hand, the $$$$ invested in code written in C under
UNIX is easily ported (if written with portability in mind) to other
architectures as they come along.  Thus one can take advantage of new
advances in computer architecture without the pain and cost of moving
unportable code.

> -- 
>   Lawrence Crowl		716-275-8479	University of Rochester
> 		     crowl@cs.rochester.arpa	Computer Science Department
>  ...!{allegra,decvax,seismo}!rochester!crowl	Rochester, New York,  14627

George W. Leach

Bell Communications Research      New Jersey Institute of Technology 
444 Hoes Lane       4A-1129       Computer & Information Sciences Dept.
Piscataway,  New Jersey   08854   Newark, New Jersey   07102
(201) 699-8639

UUCP:  ..!bellcore!indra!reggie
ARPA:  reggie%njit-eies.MAILNET@MIT-MULTICS.ARPA

From there to here, from here to there, funny things are everywhere
Dr. Seuss "One fish two fish red fish blue fish"

dmt@ptsfa.UUCP (Dave Turner) (08/25/87)

In article <26390@sun.uucp> guy%gorodish@Sun.COM (Guy Harris) writes:
>
>I presume this is intended as a point in favor of SIXBIT, not of ASCII, since
>EBCDIC has them all as well.  Why is the ability to pack characters into 6 bits
>so wonderful for this application - especially on an 8-bit-byte machine like a
>360?
>

I began my programming career on IBM 1401's, 7074's and 360's, then moved on
to PDP11's and UNIX.

We used punched cards for all input; the 1401's used BCD (six bits) internally.

IBM created E BCD IC to allow its customers to migrate to the 360 without
having to pay the cost of rewriting all existing software.

-- 
Dave Turner	415/542-1299	{ihnp4,lll-crg,qantel,pyramid}!ptsfa!dmt

larry@mips.UUCP (Larry Weber) (08/26/87)

Getting into this mess may be a mistake, but here goes...

In article <8471@utzoo.UUCP> henry@utzoo.UUCP (Henry Spencer) writes:
>> I repeat my statement: one needs VERY good arguments to claim that the 360
>> architecture was badly-designed.  Anyone care to provide them or refute them?
>
>Ask any 360 compiler implementer about base registers.  Wear your asbestos
>suit.

I never did a 360 compiler, but I had a hand in some 370 compilers, including
PLS (IBM's internal systems language) and Pascal/VS.  Yes, the small-span base
registers are a pain.  For data, you have to treat memory as if there were
lots of base registers, creating each one if and when needed, each pointing
to the next 4KB of data.  If you are lucky enough to have an
optimizer (even local common sub-expression elimination) you can common up
references to data items in the same 4K.  But honestly, data isn't a
problem; if you have more than 4KB it is likely to be in an array,
and then array-reference techniques are used.

Code references are a different story: it is easy to exceed 4KB of code, and
you need a way to handle this case.  A simple solution is to allocate 2
registers for this purpose and be sympathetic when people call to complain.
The real solution requires dividing the code into blocks and loading base
registers appropriately on entry into each block - a really messy problem.
Another way is to never use base/displacement branches unless you know the
span will be less than 4KB; when you don't know, you generate a branch that
first loads the target address into a register and then branches indirect.
Yet another: you could compile, then recompile using one of the previous
tricks wherever you ran out of branch span.

Now that we've shown that 360's have an ugly problem in addressing, can we
conclude that it was badly designed?  I don't think so!  Whether you agree
depends on how much weight you give to all those arguments already
advanced; it will forever be a matter of opinion.  But for me, and despite
all the ugly features that I DON'T LIKE: it has aged well and been extended
in many ways not anticipated by the original designers.  Lots of people
build fast versions of them, and I doubt it's just that DEC isn't interested
in building fast VAXen - it's harder to do.  EBCDIC has its problems, but it
is designed for international character sets (if only they could have
gotten [] right), and it was designed to be closely related to BCD.

The operating systems are NOT works of art.  Memory management is abysmal and
IO is messy (kinda like VMS).  They are quite usable, especially with SPF
for MVS; I always felt SPF was a step back for VM/CMS.

This whole topic sounds like another "the big guy is bad".  There is a lot
to be learned from the 360/370: some good ideas, like lots of registers that
are all the same, and some bad ideas, like small displacements.  There are
many new architectures lately; many seem pretty good, but time will tell.

Here's a toast to use at your next party:
  May your favorite architecture be as healthy as the 360/370 in 25 years!

Larry

-- 
UUCP: 	decwrl!mips!larry
DDD:  	408-720-1700
USPS: 	MIPS Computer Systems, 930 Arques Avenue,  Sunnyvale, CA 94086

hank@spook.UUCP (Hank Cohen) (08/26/87)

One thing that people seem to be missing in this discussion of the 370
architecture is that the 370 POO (the Principles of Operation manual)
specifies much more than the user instruction set.  The architecture
specification is for a complete
system of which the instruction set is only one component.  One of the 
best features of the 370 is its I/O architecture.  (I expect this 
might be somewhat controversial.)  It took SCSI to bring simplicity
and consistency to small computer I/O systems.  System 360 has had it
for 20 years.
	 Another provision of the system architecture that is overlooked
in all the micro systems I have seen is error logging and diagnosis.
In this age of super fast micro processors and very large scale
integration none of the new RISC chips have thought it prudent to
provide architectural support for fault detection and diagnosis.  At
least if such support has been provided it has not been made evident
from the discussion in this group.  I think that there are mistaken
priorities at work when speed of computation is pursued to the
detriment of correctness.
Again the 360 architecture was ahead of the pack by specifying 
a minimum error logging capability for all members of the family.
Most current implementations of the architecture provide much more
error logging and diagnostic capability than the minimum.
     In the UNIX community there is a general tendency to bash IBM for
 the shortcomings of the 370.  I will be the first to grant that
 MVS/TSO and all of the standard IBM software products give the term
 "User Hostile" new meaning.  But before condeming the system
 architecture for it's faults we should learn from the many things
 that the designers of the 360/370 did right.

 History is two steps forward and one step back.

 I suppose that I should add that these views are my own and not
 necessarily those of anyone else associated with MASSCOMP.

 Hank Cohen
 MASSCOMP 
 7315 Wisconsin Ave. Suite 1245W
 Bethesda Md. 20814 (301)657-9855

sbanner1@uvicctr.UUCP (S. John Banner) (08/26/87)

In article <294@rruxa.UUCP> gwl@rruxa.UUCP (George W. Leach) writes:
>In article <1580@sol.ARPA>, crowl@rochester.UUCP writes:
>
>> 
>> I am not necessarily stating that the 360 architecture was well-designed, but I
>> am saying the architecture has shown flexibility and adaptability for many
>                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> years.  If you wish to say the 360 architecture is bad, you must show why its
>  ^^^^^
>> adaptability is illusory.  The 360 architecture has been implemented on
>> machines spanning roughly two orders of magnitude in performance.  It has gone
>> from physical memory to virtual memory.  It supported a virtual machine long
>> before many other architectures did.  
>> 
>
>     I will not argue the architecture design issues.  The 360 was the
>top of the line when it was introduced.  I worked with one from 1980
>thru 1983 and from a software development environment point of view 
>(VM/CMS) it was terrible.  UNIX is such a far superior programming 
>environment to CMS that there is NO ARGUMENT here.
>

    I hate to get into this argument, but I just couldn't hold back
here.  You say "NO ARGUMENT", however I know of one person I work
with (he is not on the net; he does ALL his work on CMS and VS1) who
I am sure would disagree.  I know he has tried UNIX, and on several
occasions he has asked me why I like it (UNIX is my preferred environment,
but I quite like VM/CMS as well).  Just as a side note, he has also told
me that he prefers 327x full-screen programming to windows on his Amiga
at home, and does most of his programming in /370 assembler and REXX
(the system interpreter, for those unfamiliar with VM/CMS).
   I do hope I haven't stepped on any toes here, because I don't really
want to see this topic go on for another month or two.  It has been
an interesting and to some extent informative discussion; however, I
think it is beginning to degenerate (as do all of these discussions
eventually).

        Thanks for listening (assuming of course you did),

                      S. John Banner

...!uw-beaver!uvicctr!sol!sbanner1
...!ubc-vision!uvicctr!sol!sbanner1
ccsjb@uvvm
sbanner1@sol.UVIC.CDN

esf00@amdahl.amdahl.com (Elliott S. Frank) (08/26/87)

In article <294@rruxa.UUCP> gwl@rruxa.UUCP (George W. Leach) writes:
>
>     What I would like to take issue with is the longevity of the 360/370
>architecture.  Is it really the adaptability and flexibility of the
>architecture or is it the fact that the huge customer base is tied into
>that IBM environment?  There is a tremendous amount of $$$$ invested in
>COBOL, FORTRAN, and PL/1 code on those beasts that CAN NOT be moved easily
>to another architecture.  This is due to nice IBM-ONLY features
>such as the EBCDIC character set.
>
>     On the other hand, the $$$$ invested in code written in C under
>UNIX is easily ported (if written with portability in mind) to other
>architectures as they come along.  Thus one can take advantage of new
>advances in computer architecture without the pain and cost of moving
>unportable code.
>
Having spent most of the past twenty years working in the 360/370
environment, "there is truly nothing new under the sun."  It is as
possible to write machine-dependent code in C in a UNIX environment
(cf. the "how many bits are in an int, and which is the low-order one"
discussion recently concluded in this newsgroup) as it is to write portable
COBOL in the EBCDIC MVS environment.

If you stick to a single machine architecture and operating system
environment, machine-to-machine migration becomes a problem of power
cables and air conditioning.  This becomes a very powerful economic
argument for sticking with that single machine architecture.  VAX VMS
has ensured its survival for the same reason.  You can move an
application from an 11/750 (running VMS) to an 8650 (also running VMS)
with minimal porting effort.

Despite the CISC aggregations that have grown up on the original 360
instruction set (Niklaus Wirth did not include support for the BXH and
BXLE [decrement/increment index and test against limit] instructions
in his pioneering PL/360 structured assembler; can you say "Compare
and Form Codeword" or "Update Tree"?), the longevity of the 360/370
architecture has come from the simplicity of most of the instructions.
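
For those who haven't met them, a C sketch of what one BXLE buys per
iteration (assuming a positive step): the add, the compare against the
limit, and the branch are a single instruction.

    long sum_by(long limit, long step)    /* step > 0 assumed */
    {
        long sum = 0;
        long i;
        for (i = 0; i <= limit; i += step)   /* add, test, branch: one BXLE */
            sum += i;
        return sum;
    }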


-- 

Elliott S Frank    ...!{hplabs,ames,seismo,sun}!amdahl!esf00     (408) 746-6384
                or ....!{bnrmtv,drivax,hoptoad}!amdahl!esf00

[the above opinions are strictly mine, if anyone's.]
[the above signature may or may not be repeated, depending upon some
inscrutable property of the mailer-of-the-week.]

peter@sugar.UUCP (Peter da Silva) (08/27/87)

> >equal to or greater than multimillion dollar machines, at prices orders
> >of magnitude lower.
> Anyone have an example of an application that runs faster on a MIPS,
> Sun, Acorn, or AMD machine than it does on either a 5890 or a Cray 2?

I don't know about the 5890 or the Cray-2, but the Sun 4 sure as hell beats
out the Univac-I-mean-Sperry-I-mean-Unisys 1100/72 we're using here (and it's
a multimillion dollar machine) so long as the number of users is small.
-- 
-- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter
--                  U   <--- not a copyrighted cartoon :->

drw@cullvax.UUCP (Dale Worley) (08/27/87)

gwl@rruxa.UUCP (George W. Leach) writes:
> I will not argue the architecture design issues.  The 360 was the
> top of the line when it was introduced.  I worked with one from 1980
> thru 1983 and from a software development environment point of view 
> (VM/CMS) it was terrible.  UNIX is such a far superior programming 
> environment to CMS that there is NO ARGUMENT here.

Eh?  Here we're arguing about hardware architecture, and this guy
starts arguing OS architecture.  Do you *really* mean '360', or do
you really mean 'the software that's usually run on 360s'?  After all,
you can get Un*x for 360s now, and it looks just about like any other
Un*x.  (And if you went to the work, you could port OS/360 to the
Vax!)

yours for linguistic purity,

Dale
-- 
Dale Worley    Cullinet Software      ARPA: cullvax!drw@eddie.mit.edu
UUCP: ...!seismo!harvard!mit-eddie!cullvax!drw
Apollo was the doorway to the stars - next time we should open it.
Disclaimer: Don't sue me, sue my company - they have more money.

tim@amdcad.AMD.COM (Tim Olson) (08/27/87)

In article <114@spook.UUCP> hank@masscomp.UUCP (Hank Cohen) writes:
+-----
| 	 Another provision of the system architecture that is  overlooked 
| in all micro systems that I have seen is error logging and diagnosis.
| In this age of super fast micro processors and very large scale
| integration none of the new RISC chips have thought it prudent to
| provide architectural support for fault detection and diagnosis.  At
| least if such support has been provided it has not been made evident
| from the discussion in this group.  I think that there are mistaken
| priorities at work when speed of computation is pursued to the
| detriment of correctness.
+-----

The Am29000 has Master/Slave mode operation, where two processors
operate in lock-step.  The Slave processor checks all of the master's
outputs against the slave's internal values cycle by cycle, and signals
an error (MSERR) if there is a discrepancy in any value.
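
[For illustration only -- a toy C analogue of the lockstep check; the
real comparison happens in hardware, at the chips' output pins, every
cycle, and the step function below is an invented stand-in:]

	#include <stdio.h>

	/* Run the same state-update function on two "processors" and
	 * flag the first divergence, as the Am29000 slave does for the
	 * master's outputs. */
	static unsigned step(unsigned state) { return state * 2654435761u + 1; }

	int main(void) {
	    unsigned master = 0, slave = 0;
	    for (int cycle = 0; cycle < 1000; cycle++) {
	        master = step(master);
	        slave  = step(slave);
	        if (master != slave) {      /* hardware would assert MSERR */
	            printf("MSERR at cycle %d\n", cycle);
	            return 1;
	        }
	    }
	    return 0;
	}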

	-- Tim Olson
	Advanced Micro Devices
	(tim@amdcad.amd.com)

henry@utzoo.UUCP (Henry Spencer) (08/27/87)

>      What I would like to take issue with is the longevity of the 360/370
> architecture.  Is it really the adaptability and flexibility of the
> architecture or is it the fact that the huge customer base is tied into
> that IBM environment? ...

In other words, the 360's longevity is not the result of the adaptability
and flexibility of the architecture, but of the *un*adaptability and
*in*flexibility of most of the 360 software.
-- 
"There's a lot more to do in space   |  Henry Spencer @ U of Toronto Zoology
than sending people to Mars." --Bova | {allegra,ihnp4,decvax,utai}!utzoo!henry

paul@unisoft.UUCP (Paul Campbell) (08/27/87)

I've been waiting for someone to mention it, and unless I've missed it no
one has, but there was an excellent article/interview in CACM this year
(April I think) in which the 360/370 designers talk about their architecture
and its development. A very interesting article which I recommend even to
those biased against Big Blue (such as I).

	Paul Campbell

(C) Copyright Paul Campbell, you only may redistribute if your recipients can. 
	E-mail:		..!{ucbvax,hoptoad}!unisoft!paul  
Nothing here represents the opinions of UniSoft or its employees (except me)
	"Nuclear war doesn't prove who's right, just who's left"

ken@argus.UUCP (Kenneth Ng) (08/27/87)

In article <1044@bsu-cs.UUCP>, dhesi@bsu-cs.UUCP (Rahul Dhesi) writes:
: ASCII was designed quite logically.  It's convenient to have the digits
: begin with a value that has four low-order zero bits, so one can simply
: mask with binary 00001111 and get the numeric value of the digit
: character.

Guess what: this works for EBCDIC as well, so this argument is moot.

: It's convenient to have the alphabetic characters begin
: with a value that has 0001 as the low-order bits, for one can mask with
: binary 00011111 and get a number representing the position of the
: character in the alphabet.  (E.g., 'Z' & 0x1f gives 26.)  Case
: conversion is equally simple:  A single bit needs to be flipped.

All these 'neat' tricks fall apart when going to international alphabets,
something that IBM has been concerned about for quite some time.  The
360 has an instruction that makes all this stuff real easy: TR(anslate).
With it you can define a translation table laid out any way you want.
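
[For readers who haven't met it: TR replaces each byte of an operand
with the byte at that index in a 256-entry table, in one instruction.
A toy C sketch of the same idea; the upcasing table is an invented
example, and any other mapping is just a different table:]

	#include <stddef.h>

	/* What TR does in one instruction: buf[i] = table[buf[i]]. */
	static void translate(unsigned char *buf, size_t len,
	                      const unsigned char table[256]) {
	    for (size_t i = 0; i < len; i++)
	        buf[i] = table[buf[i]];
	}

	/* Example table: ASCII lower case to upper case (the loop bounds
	 * assume an ASCII execution character set). */
	static void build_upcase(unsigned char table[256]) {
	    for (int i = 0; i < 256; i++)
	        table[i] = (unsigned char)i;        /* identity by default */
	    for (int c = 'a'; c <= 'z'; c++)
	        table[c] = (unsigned char)(c - 'a' + 'A');
	}

	int main(void) {
	    unsigned char table[256], msg[] = "tr makes this trivial";
	    build_upcase(table);
	    translate(msg, sizeof msg - 1, table);
	    return 0;
	}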

Kenneth Ng: Post office: NJIT - CCCC, Newark New Jersey  07102
uucp !ihnp4!allegra!bellcore!argus!ken *** NOT ken@bellcore.uucp ***
bitnet(prefered) ken@orion.bitnet

ken@argus.UUCP (Kenneth Ng) (08/27/87)

In article <1590@apple.UUCP>, bcase@apple.UUCP (Brian Case) writes:
> In article <1589@apple.UUCP>, bcase@apple.UUCP (Brian Case) writes:
> > >No stack, small segments, nonstandard character set with holes.
> > Wait, is the character set part of the architecture!?!?  I didn't think so,
> But, I was wrong:  I forgot about the character instructions (edit, etc.).
> These make assumptions about the character set, don't they?
>     bcase


That's why there is a bit in the PSW (on the 360 at least) that indicates
whether the machine is using ASCII or EBCDIC.


Kenneth Ng: Post office: NJIT - CCCC, Newark New Jersey  07102
uucp !ihnp4!allegra!bellcore!argus!ken *** NOT ken@bellcore.uucp ***
bitnet(prefered) ken@orion.bitnet

mash@mips.UUCP (John Mashey) (08/27/87)

In article <114@spook.UUCP> hank@masscomp.UUCP (Hank Cohen) writes:
>One thing that people seem to be missing in this discussion of the 370
>architecture is that the 370 POO specifies much more than the user
>instruction set.
>	 Another provision of the system architecture that is  overlooked 
>in all micro systems that I have seen is error logging and diagnosis.
>In this age of super fast micro processors and very large scale
>integration none of the new RISC chips have thought it prudent to
>provide architectural support for fault detection and diagnosis.

I don't know what the other folks do.  MIPS put a fair amount of effort
into this, although it is important to note that the error logging /
diagnostics approaches inherently differ at least somewhat between
VLSI approaches and mainframe designs, i.e., you don't replace part
of a chip!  Here are things MIPS did:

a) The CPU/FPU are designed to be easily diagnosable by ordinary code,
i.e., hidden state was avoided, and you generally can exercise the
paths quite well.  This is a necessity for testing the chips in the first
place.  [Maybe one of our VLSI folks will comment in some detail.]

b) There aren't "don't care" bits that can surprise you.

c) The CPU-cache interface includes parity bits, 3 for Tag, Validity,
and Page Frame, and 4 for the data.  On a parity error,
the CPU treats it as a cache miss, then does a refill, thus getting you
over an occasional error.  This is very important, in that the speed
of the whole system depends on the CPU-cache interface speed, and one will
always be pushing that.  A bit is set that the OS can test whenever it
likes, to detect that an SRAM is failing.

d) The CPU contains status bits for isolating the caches, swapping the
caches, and testing the parity-checking circuits.

e) The write buffer gate arrays have a loop-back mode for testing them.

f) External memory systems can be built with either parity or ECC.
We use ECC, and the CPU was designed in such a way as to be able to do
ECC-checking in parallel with access, without losing performance.

g) There are a bunch of other minor things that are needed for
handling other error conditions in reasonable ways.

In general, the original point is well taken: higher-performance
systems NEED to be designed with diagnosability in mind, or there
will be serious problems sooner or later, especially if these things
are to become big multi-user machines and servers.
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	{decvax,ucbvax,ihnp4}!decwrl!mips!mash  OR  mash@mips.com
DDD:  	408-991-0253 or 408-720-1700, x253
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

neil@dsl.cis.upenn.edu (Neil Radisch) (08/27/87)

>Eh?  Here we're arguing about hardware architecture, and this guy
>starts arguing OS architecture.  Do you *really* mean '360', or do
>you really mean 'the software that's usually run on 360s'?  After all,
>you can get Un*x for 360s now, and it looks just about like any other
>Un*x.  (And if you went to the trouble, you could port OS/360 to the
>Vax!)
>
>yours for linguistic purity,
>
>Dale

Although technically true, in practice it just doesn't happen enough.
Vaxes mostly run Unix and VMS, 360's VM, Cybers NOS or KRONOS etc.
So from the point of view of a system user, the OS is representative
of the entire computer family architecture. Sure you could put
Unix on a Cyber but every time I sit down to one, the damn thing
is running NOS or KRONOS (can you say useless).


-neil-
(Actually I just wanted to do some Cyber bashing)

guy@gorodish.UUCP (08/28/87)

> > But, I was wrong:  I forgot about the character instructions (edit, etc.).
> > These make assumptions about the character set, don't they?
> 
> That's why there is a bit in the PSW (on the 360 at least) that indicates
> whether the machine is using ASCII or EBCDIC.

ASCII-8, anyway; I don't remember whether that was compatible with ASCII or
not.  That bit is gone in the 370 (it was used for something else, possibly the
"basic control"/"extended control" mode bit).
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com

ntm1569@dsacg3.UUCP (Jeff Roth) (08/28/87)

in article <1884@super.upenn.edu>, neil@dsl.cis.upenn.edu (Neil Radisch) says:
> Vaxes mostly run Unix and VMS, 360's VM, ....
360's (370's, 308X's, 309X's...) mostly run MVS out here in the world of
big iron. Otherwise I agree with your point re OS and architecture.
-- 
Jeff Roth             {seismo!gould,cbosgd!osu-eddie}!dsacg1!jroth 
Defense Logistics Agency Systems Automation Center | 614-238-9421
DSAC-TMP, P.O. Box 1605, Columbus, OH 43216        | Autovon 850-
All views expressed are mine, not necessarily my employer's.

rick@pcrat.UUCP (rick) (08/29/87)

In article <8493@utzoo.UUCP>, henry@utzoo.UUCP (Henry Spencer) writes:
> In other words, the 360's longevity is not the result of the adaptability
> and flexibility of the architecture, but of the *un*adaptability and
> *in*flexibility of most of the 360 software.

Naw, it's just a fast, affordable SOLUTION.  We almost went with two VAX 8700s
in our latest processor upgrade.  But it looks like '370 arch will win
the price/perf war on this end.  The software we run? UNIX.  Yes, you
can teach an old dog new tricks.  
-- 
	Rick Richardson, President, PC Research, Inc.
(201) 542-3734 (voice, nights)   OR   (201) 834-1378 (voice, days)
		seismo!uunet!pcrat!rick

biff@nuchat.UUCP (08/30/87)

In article <26312@sun.uucp>, guy%gorodish@Sun.COM (Guy Harris) writes:
> > >. . .one needs VERY good arguments to claim that the 360
> > >architecture was badly-designed.
> > 
> > No stack, small segments, nonstandard character set with holes.
> He said "VERY good arguments"; these aren't.
>
> [Questionable arguments concerning stack and the use of the word
>  "segment" deleted]
> 
> "Nonstandard character set": considering ASCII was relatively new at the time
> (I'm not even sure to what degree ASCII *existed* in 1963!), this is simply
> bogus.
> 
> "with holes": well, ASCII has holes, too; why aren't "0-9" and "a-f" or "A-F"
> contiguous?

In response to your statement that ASCII has "holes", I agree that
the separation of "0-9" from the alphabetic characters is in rare
cases a nuisance, but I think it is far easier to check for numbers
and the sets of upper and lower case letters in ASCII than to try
the same thing in EBCDIC.  Also, if you look at the binary encoding
of ASCII, these "holes" make sense.  The numerical difference between
the upper and lower case versions of a letter is 32.  This is because
lower case letters have bit 5 set, while upper case letters do not.
Given this convenient (although rarely used anymore) mechanism for
differentiating upper and lower case letters, it becomes impossible to
place 0-9 between the upper and lower case alphabets, because there are
10 digits, and only 6 unassigned values in the alphabetic "block".  Thus,
it makes a great deal of sense to fill those spaces up with various
punctuation symbols which have no natural ordering anyway.  To further
simplify things, the numbers are contiguously located at 48-57.  Thus,
they can be decoded by AND-ing their ASCII values with 15.

Granted, these considerations may not seem useful any more, but when
I was coding in assembly, I used to take advantage of these layout
features of ASCII.  Granted, I wasn't there when ASCII was developed,
but I can't believe that these useful aspects of the coding scheme were
coincidental.  Can you make similar claims for EBCDIC?
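
[Concretely, the tricks described above come out like this in C,
assuming an ASCII execution character set; the asserts are just a
demonstration:]

	#include <assert.h>

	int main(void) {
	    assert(('a' ^ 0x20) == 'A');  /* flipping bit 5 toggles case        */
	    assert(('Z' & 0x1f) == 26);   /* low 5 bits = position in alphabet  */
	    assert(('7' & 15) == 7);      /* digits at 48-57 decode with AND 15 */
	    return 0;
	}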

[BTW - The complaint about "no stack" was undoubtedly referring to
the lack of a hardware stack or a set of instructions designed for
efficiently implementing a stack.  You can implement a stack on almost
any machine.  That doesn't change the fact that it would be useful to
have one built in]

				- Brad

-- 
Brad Daniels				...!soma!eyeball!biff
Now that I have my own account,		biff@tethys.rice.edu
I don't	NEED a disclaimer.		...!uhnix1!nuchat!biff

pf@diab.UUCP (Per Fogelstrom) (08/31/87)

In article <18102@amdcad.AMD.COM> tim@amdcad.UUCP (Tim Olson) writes:
>The Am29000 has Master/Slave mode operation, where two processors
>operate in lock-step.  The Slave processor checks all of the master's
>outputs against the slave's internal values cycle by cycle, and signals
>an error (MSERR) if there is a discrepancy in any value.
>

Question:	What effect does this function have on performance?

ken@argus.UUCP (09/01/87)

In article <572@sugar.UUCP>, peter@sugar.UUCP (Peter da Silva) writes:
> I don't know about the 5890 or the Cray-2, but the Sun 4 sure as hell beats
> out the Univac-I-mean-Sperry-I-mean-Unisys 1100/72 we're using here (and it's
> a multimillion dollar machine) so long as the number of users is small.

I think RCA was also in there at some time.  Uh, how many people and
tasks is the 1100 running compared to the Sun?  One of the things that
has always bugged me about some "my computer is faster than yours"
claims is when people compare something like a VS90/80 with about 100
people on it to a VAX/750 with 2 people on it, and say that the VAX is a
better machine because its response time is faster.  Note: true, the 1100
costs a lot more than the Sun, but I believe as many factors as possible
should be accounted for.
> -- 
> -- Peter da Silva `-_-' ...!seismo!soma!uhnix1!sugar!peter
> --                  U   <--- not a copyrighted cartoon :->


As an aside, has anyone figured out whether the Unisys name changes
are leading anywhere besides confusion?


Kenneth Ng: Post office: NJIT - CCCC, Newark New Jersey  07102
uucp !ihnp4!allegra!bellcore!argus!ken *** NOT ken@bellcore.uucp ***
bitnet(prefered) ken@orion.bitnet

guy%gorodish@Sun.COM (Guy Harris) (09/01/87)

> In response to your statement that ASCII has "holes", I agree that
> the separation of "0-9" from the alphabetic characters is in rare
> cases a nuisance, but I think it is far easier to check for numbers
> and the sets of upper and lower case letters in ASCII than to try
> the same thing in EBCDIC.

You may think that, but you're wrong.  In *either* code set, you can test
whether a given code is a digit by seeing whether its numerical value is
between that for '0' and '9'.  The EBCDIC chart from the TOPS-20 manual
indicates no characters assigned in the gaps in the alphabetic range; if
those gaps really are empty, you can test whether something is an upper-case
or lower-case alphabetic in the same fashion in EBCDIC as you do in ASCII.

Then again, all this is somewhat irrelevant; indexing into a 256-byte table of
character type codes is equally easy no matter *what* character set you use.
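
[A sketch of such a table in C; the flag names and layout here are
invented, and the point is that only the initialization loops -- not
the lookup -- care which character set the machine uses:]

	#include <string.h>

	#define T_DIGIT 1
	#define T_UPPER 2
	#define T_LOWER 4

	static unsigned char ctype_tab[256];

	static void init_ctype(void) {
	    memset(ctype_tab, 0, sizeof ctype_tab);
	    for (int c = '0'; c <= '9'; c++) ctype_tab[c] |= T_DIGIT;
	    /* These loops are right for ASCII; for EBCDIC you would set
	     * the 26 letter codes of each case individually, skipping
	     * the gaps. */
	    for (int c = 'A'; c <= 'Z'; c++) ctype_tab[c] |= T_UPPER;
	    for (int c = 'a'; c <= 'z'; c++) ctype_tab[c] |= T_LOWER;
	}

	#define ISDIGIT(c) (ctype_tab[(unsigned char)(c)] & T_DIGIT)

	int main(void) {
	    init_ctype();
	    return ISDIGIT('7') ? 0 : 1;
	}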

> Also, if you look at the binary encoding of ASCII, these "holes" make sense.
> The numerical difference between the upper and lower case versions of a
> letter is 32.  This is because lower case letters have bit 5 set, while upper
> case letters do not.  Given this convenient (although rarely used anymore)
> mechanism for differentiating upper and lower case letters,

Which, as I pointed out, is also present in EBCDIC (I presume you mean for
*converting* between upper and lower case letters; the value of bit 5 is not
interesting unless you've already determined that the character is a letter,
and as long as you're determining *that* you might as well determine the
character's case at the same time).

> To further simplify things, the numbers are contiguously located at 48-58.
> Thus, they can be decoded by AND-ing their ASCII values with 15.

As opposed to EBCDIC, where the numbers are contiguously located at hex F0 to
hex F9 and can be decoded by AND-ing their EBCDIC values with 15.

What was the point you were trying to make with this?

> [BTW - The complaint about "no stack" was undoubtedly referring to
> the lack of a hardware stack or a set of instructions designed for
> efficiently implementing a stack.  You can implement a stack on almost
> any machine.  That doesn't change the fact that it would be useful to
> have one built in]

Umm, I don't know that I'd use the term "fact" here; that claim is very
questionable.  There are plenty of quite fast machines out there that don't
have whizzo instructions designed for "efficiently" implementing a stack.
Even die-hard assembler programmers can synthesize stack instructions with a
macro assembler.
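
[For what it's worth, the synthesized version is a few lines in any
language; a toy C sketch -- sizes and names invented -- of what such
macros amount to:]

	/* One index and a block of memory, growing downward: a stack
	 * with no hardware stack support at all. */
	#define STACK_WORDS 1024
	static long stack_mem[STACK_WORDS];
	static int  sp = STACK_WORDS;

	#define PUSH(x) (stack_mem[--sp] = (x))
	#define POP()   (stack_mem[sp++])

	int main(void) {
	    PUSH(42);
	    return POP() == 42 ? 0 : 1;
	}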
	Guy Harris
	{ihnp4, decvax, seismo, decwrl, ...}!sun!guy
	guy@sun.com