[comp.arch] IBM RISC

mpogue@dg.dg.com (Mike Pogue) (02/20/90)

In article <9327@portia.Stanford.EDU> underdog@portia.Stanford.EDU (Dwight Joe) writes:
>Are we going to see a replay of the Apple II vs. IBM pc situation in
>the early 80s?  

  Certainly the IBM announcement is interesting and challenging from a number of 
perspectives.  However, there are a few things different here:

	1) The UNIX workstation market is much more OPEN (sorry to use the buzzword)
		than the PC market of the early 80's.
	2) Software that is ported to the IBM machine can easily be ported to 
		other machines (MIPS, SPARC, 88K).

  I think these factors are worrying IBM folks, because as has been stated in the trade
mags, IBM customers may be buying the AS/400 stuff now, but their NEXT purchase may
very well be the 6000 series.

  Once this happens, IBM software vendors will have a lot of incentive to move their
IBM-locked programs to the 6000.  From there to making the software available on other
platforms (MIPS, SPARC, 88K) is a short step.

  Pretty soon, IBM customers will have a CHOICE of vendor HW platforms for running 
their software.  (Oh my God!)

  IBM can do a few things to try to stop this:

	1) Buy into software companies to stop this from happening.  IBM is
		already buying small but significant portions of software companies
		all over the place.  They would CERTAINLY have some say as to whether
		the software gets ported to non-IBM machines.
	2) Encourage software developers to use IBM-specific features.  IBM is also 
		already doing this, by encouraging people to use NextStep (instead of
		Motif), and by adding special features to AIX.  Unfortunately, software
		is comparatively easy to clone, and software vendors in the UNIX world 
		tend to stay away from proprietariness, even if it is IBM.

  So, it sounds to me like IBM is trying real hard for market share here, and they are hoping
that their mainstay in the commercial world won't be affected too much.  Time will tell how
successful they are!

Mike Pogue
Data General 

I speak for myself, not for my employer....

chip@tct.uucp (Chip Salzenberg) (02/21/90)

According to underdog@portia.Stanford.EDU (Dwight Joe):
>[Will a hypothetical new RISC architecture from Sun] incorporate all the 
>old instructions from the SPARC to maintain compatibility...?
>
>If the answer is "no" (we don't incorporate upward compatibility),
>then Sun LOSES.  The new Sun competitor against the IBM 6000
>won't have ANY software advantage, which would be crucial in
>the highly competitive marketplace.

Not so.  The advantage is source compatibility and the unity of OS
environments across architectures.  Sun already supports three
architectures with SunOS (Intel, Moto, Sparc); adding a fourth
wouldn't be much of a stretch.  Then again, SunOS may be more of a
liability than an asset.  (No smiley.)

>I still think that if the IBM 6000's specs are as good as
>they appear, then certain RISC workstation manufacturers
>will have some sleepless nights.

Right you are.  We've been thinking about using Suns around here, but
things are looking good for a move to the 6000 instead.  I still
remember Henry Spencer's comment: "It is well-known that Sun's quality
assurance department consists of a single clerk with an IQ of 37
equipped with a rubber stamp marked 'PASSED.'"  It can't be a
coincidence that 90% of all messages in the security mailing list
are about SunOS security holes...
-- 
Chip Salzenberg at ComDev/TCT   <chip%tct@ateng.com>, <uunet!ateng!tct!chip>
          "The Usenet, in a very real sense, does not exist."

gerry@zds-ux.UUCP (Gerry Gleason) (02/22/90)

In article <9376@portia.Stanford.EDU> underdog@portia.Stanford.EDU (Dwight Joe) writes:
|In article <186@zds-ux.UUCP> gerry@zds-ux.UUCP (Gerry Gleason) writes:
||(Dwight Joe) wrote:
|||Here we have SUN and MIPS, the companies that first came out with
|||good RISC machines (like Apple that first came out with a good
|||personal computer--much better than the TRS-80 8^) ).  hmmmm....

||You obviously don't know that much about the beginnings of the personal
||computer market.  Apple's first accomplishment was to be just about
||the only player in the "hobby" computer market to survive the transition
||to personal computers (pc, not IBM pc).

|Commodore and Tandy (i.e. TRS-XX) survived.  Atari survived--barely.
|But Ohio Scientific's line of pc's died.

None of these (except maybe Ohio Scientific) were really in the "hobby"
market.  These were latecomers whose machines were really game machines
until much later (Amiga, Atari 520st etc.).  There were (are?) other
companies that survived in niche markets, for example Cromemco or Altos
(maybe a bad example, but I don't know Altos' early history).  Apple is
really the only one who started more or less from scratch and survived.
Atari, Commodore and Tandy were all established companies before entering
the small computer markets.  Also, Tandy doesn't really count because they
have a large captive retail market, and they will (do?) continue to market
machines even if their products fail and they have to OEM machines (I
suppose they already do).

|I wonder about the NEXT......

||In this case, your analogy doesn't even work since, although Apple II's
||and PC clones overlap, the Apples could not handle many applications that
||the clones were spec'ed for.

|You've overlooked something.  The very first model of the 
|IBM pc with 8088 was not designed for all those specs either.  Later
|models had changes.

But the first models did have options that made them more suitable for
the office (the available 25x80 screen was important then), and quite
soon after you could put in a multi-function board with more memory, and
maybe a hard disk, and then it's just as useful as present 8088 based
clones.

|IMHO, the IBM pc was initially designed to compete against the Apple II.
|The very first model actually had a port for connecting to a 
|tape recorder so that you could operate the system WITHOUT a disk
|drive.  The very first model came with a max. of 64 kB.

I agree.

|This doesn't sound like a computer that was initially designed
|for those "specs" that you were mentioning.

Actually, this fits in with what I said before.  IBM would never
have knowingly introduced a line of machines that would eventually
cut into their other lines of more expensive machines; it was a
mistake.  The early standard configurations allowed it to come to
market as a home machine, but it was the way it could be configured
as a desktop that made it take off.  I'm sure the designers and
promoters understood this, but the "big boys" at corporate HQ didn't,
and away it went.  Took them most of ten years to kill it, and then
it was too late.

||The workstation market, on the other hand,
||is a continuum in which even Sun's change from 680x0 based systems to
||SPARC doesn't really open up new applications, but only new capacities
||for speed and the size of problems.

|Actually, what's REALLY different about this new RISC workstation
|market is that no one's thought about upward compatibility with
|the next generation of RISC.  

Wrong again.  The reason UNIX (or open systems if you prefer buzz
words) is taking off now is because of its portability to new
architectures.  Just as they shifted emphasis from the 680x0
machines to SPARC, they can move to a new architecture if necessary.
When (if?) someone develops an ANDF that effectively addresses
application portability, upward compatibility becomes a non-issue.

|The RISC argument is that you find the optimum instruction set
|to mate with the technology.  Fine.  But what happens when you've
|come up with a better architecture, like the IBM's 6000?
| [ ... ]

|If the answer is "no" (we don't incorporate upward compatibility),
|then Sun LOSES.  The new Sun competitor against the IBM 6000
|won't have ANY software advantage, which would be crucial in [ ... ]

The real question for Sun (and others) is whether the architectural
features of the 6000 really are that big of a win.  This is not a
question that can be answered by comparing specific machines (say
IBM 6000 vs the fastest SPARCserver) because there are too many
variables to say whether the difference is in the processor or the
system architecture.  Probably multiple functional units operating
in parallel are a big win, if complexity doesn't kill you (HW and SW).
If true, Sun will need to develop a SPARCII and MIPS a MIPSII instruction
set, but even so the SPARC and MIPS performance envelopes will continue
to be pushed.  The 6000 isn't even good enough to compete with the ECL
RISC's presently coming to market, so these architectures still have a
lot of room.

|I still think that if the IBM 6000's specs are as good as
|they appear, then certain RISC workstation manufacturers
|will have some sleepless nights.

Maybe, but the foundation of the workstation market is open systems,
something IBM has always been hostile towards.  Many people came to
these markets to get away from the locked-in proprietary solutions
sold by companies like IBM, so IBM doesn't have a very good image with
these people.  The bottom line in this market is price/performance:
if IBM scores well there, people will buy, but the IBM label by itself
doesn't mean that much.

Gerry Gleason

swarren@convex.com (Steve Warren) (02/22/90)

In article <E58.+d@cs.psu.edu> schwartz@barad-dur.endor.cs.psu.edu (Scott E. Schwartz) writes:
>In article <9376@portia.Stanford.EDU> Dwight Joe writes:
>>So, does Sun launch that new architecture, say ARX, to compete
>>against the IBM 6000?  ....
>>If the answer is "no" (we don't incorporate upward compatibility),
>>then Sun LOSES.  The new Sun competitor against the IBM 6000
>>won't have ANY software advantage, which would be crucial in
>>the highly competitive marketplace.  
>
>I don't understand your claim here.  In this day and age porting an
>application (other than a compiler of some sort) to a new architecture
>(at least in the Unix world) usually involves typing "cc *.c".  The
                              [...]
Well, some disadvantages to losing binary compatibility that come to
mind are,

1)  Rewrite the compiler (instead of re-optimizing the same compiler),
    which is added expense/time.

2)  Convince software vendors to port to the new architecture.  Admittedly
    this should not be too bad if they have been successful with the
    previous architecture.  But any vendors who are disappointed with
    sales may not want to port.  And they are unlikely to provide the
    source code to individuals.  Even an unpopular application might
    make or break a few sales to specific niche customers.

3)  Risk the disappointment of past customers who realise that new
    software will not be available for their (now) obsolete machines.
    The applications may slack off gradually, but the trend starts when
    the old architecture goes out of production.  (This is something
    that every customer has to face eventually, but the longer the
    manufacturer holds it off while remaining competitive, the happier
    his customer base is going to be.)  Naturally, upward compatibility
    does not imply downward compatibility.

4)  Customers with mixed machines will need duplicate partitions with
    appropriate copies of all their executables. 

Of course if the price/performance boost is high enough then it will
outweigh these disadvantages.

--
--Steve
-------------------------------------------------------------------------
	  {uunet,sun}!convex!swarren; swarren@convex.COM

henry@utzoo.uucp (Henry Spencer) (02/23/90)

In article <192@zds-ux.UUCP> gerry@zds-ux.UUCP (Gerry Gleason) writes:
>The real question for Sun (and others) is whether the architectural
>features of the 6000 really are that big of a win...
>... Probably multiple functional units operating
>in parallel are a big win, if complexity doesn't kill you (HW and SW).
>If true, Sun will need to develop a SPARCII and MIPS a MIPSII instruction
>set...

Um, I haven't followed the details of the new IBM stuff, but my impression
is that most of the "super-scalar" parallelism being touted is just
parallelism between integer and floating-point operations, which both
MIPS and SPARC have had from the beginning...
-- 
"The N in NFS stands for Not, |     Henry Spencer at U of Toronto Zoology
or Need, or perhaps Nightmare"| uunet!attcan!utzoo!henry henry@zoo.toronto.edu

alan@oz.nm.paradyne.com (Alan Lovejoy) (02/23/90)

In article <8064@pt.cs.cmu.edu> lindsay@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay) writes:
>
>The machine certainly has nice floating point performance for the
>price.  But I'm a little disturbed that the non-floating performance
>is so similar, per clock, to that of the conventional RISCs.  The
>chip set fills 13 square centimeters of silicon, and contains an
>order of magnitude more logic than (say) an R3000/3010 pair. It's
>superscalar, and the R3000 isn't.  So why isn't it faster per clock?
>There are two possibilities:
>
>- the tricks aren't a very good idea and didn't buy much. When
>  the competition uses all these tricks, too, it won't be
>  much of an improvement. (Naaah.)
>- when the competition uses all those tricks, too, then the 6000
>  will be revealed as deficient in some way. (But what way?)

You forgot at least one possibility, which an IBM spokesman claims just
happens to be the real reason:  the compilers have not yet been updated
to take any real advantage of the integer instruction parallelism provided
by the CPU.  Of course, some of the parallelism (5 instructions/cycle max)
is only available when FP instructions are to be executed, but the same is
true for most other "superscalars" and/or "near superscalars" which are
currently available.  Expect integer benchmark performance to much more
closely resemble the claimed "MIPS" ratings when the new compilers become
available.


____"Congress shall have the power to prohibit speech offensive to Congress"____
Alan Lovejoy; alan@pdn; 813-530-2211; AT&T Paradyne: 8550 Ulmerton, Largo, FL.
Disclaimer: I do not speak for AT&T Paradyne.  They do not speak for me. 
Mottos:  << Many are cold, but few are frozen. >>     << Frigido, ergo sum. >>

gsarff@sarek.UUCP (Gary Sarff) (02/24/90)

In article <E58.+d@cs.psu.edu> schwartz@barad-dur.endor.cs.psu.edu (Scott E. Schwartz) writes:
>In article <9376@portia.Stanford.EDU> Dwight Joe writes:
>>So, does Sun launch that new architecture, say ARX, to compete
>>against the IBM 6000?  ....
>>If the answer is "no" (we don't incorporate upward compatibility),
>>then Sun LOSES.  The new Sun competitor against the IBM 6000
>>won't have ANY software advantage, which would be crucial in
>>the highly competitive marketplace.  
>
>I don't understand your claim here.  In this day and age porting an
>application (other than a compiler of some sort) to a new architecture
>(at least in the Unix world) usually involves typing "cc *.c".  The
                              [...]

There seem to be two different camps here.  I have seen numerous postings in
this group saying "architecture/machine xxx (sparc,mips,...) will do better,
look at all the applications/software base we have for the xxx architecture."
The poster above, says porting is nothing more than recompiling.  We can't
have it both ways.

----------------------------------------------------------------------------

seanf@sco.COM (Sean Fagan) (02/25/90)

In article <1990Feb22.175120.12835@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:
>Um, I haven't followed the details of the new IBM stuff, but my impression
>is that most of the "super-scalar" parallelism being touted is just
>parallelism between integer and floating-point operations, which both
>MIPS and SPARC have had from the beginning...

Not from what I've understood; it seems as if they have multiple functional
units on their chip.  Nice idea, of course:  Seymour Cray had it in the CDC
Cybers, decades ago (ob. Seymour plug 8-)), and so do MIPS chips, I believe
(although I'm not sure how many the MIPS chips have).

And, of course, the 8088 and 8087 could operate in parallel.  Frightening,
eh?

-- 
Sean Eric Fagan  | "Time has little to do with infinity and jelly donuts."
seanf@sco.COM    |    -- Thomas Magnum (Tom Selleck), _Magnum, P.I._
(408) 458-1422   | Any opinions expressed are my own, not my employers'.

mash@mips.COM (John Mashey) (02/25/90)

In article <1990Feb22.175120.12835@utzoo.uucp> henry@utzoo.uucp (Henry Spencer) writes:

>Um, I haven't followed the details of the new IBM stuff, but my impression
>is that most of the "super-scalar" parallelism being touted is just
>parallelism between integer and floating-point operations, which both
>MIPS and SPARC have had from the beginning...

Actually, this is not right.  Even with multiple function units,
and even with whatever degree of parallelism there is, none of
the {MIPS, SPARC, 88K, HP PA} set issues more than one instruction/cycle.
The IBM certainly can issue more, as does the Intel i960, and they
are superscalars.  The HP Apollo DN10000 and Intel i860 are more
properly called short-VLIWs (in my opinion), since they are restricted
to instruction pairs of 1-integer+1 FP, in a definite order.
-- 
-john mashey	DISCLAIMER: <generic disclaimer, I speak for me only, etc>
UUCP: 	{ames,decwrl,prls,pyramid}!mips!mash  OR  mash@mips.com
DDD:  	408-991-0253 or 408-720-1700, x253
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

lindsay@MATHOM.GANDALF.CS.CMU.EDU (Donald Lindsay) (02/25/90)

In article <1505@zipeecs.umich.edu> billms@eecs.umich.edu 
	(Bill Mangione-Smith) writes:

>As many people have said in the past
>(seemingly everyone at ASPLOS III :}, didn't anyone study anything
>else that year?) there isn't much fine-grain parallelism available in
>'UNIX' code.  Thus, gcc doesn't have much parallelism available, and
>if it did, there isn't hardware to support it (because it isn't
>floating point code).

The conclusion I take from the literature, and from ASPLOS III, is
that compilers that _don't_ use trace scheduling, have on the
average found a parallelism of approximately 2-2.5 in integer
programs. 

The IBM can issue one ALU operation, and one comparison, and one
branch, per clock. That's not "ideal": it would be nice if it had,
say, two ALUs. I believe that Intel has promised this for the next
i960.

Which reminds me: the only benchmarks I've seen for the superscalar
i960 were on the order of Dhrystone 1.1. Has anyone got an integer
SPECmark by now?
-- 
Don		D.C.Lindsay 	Carnegie Mellon Computer Science

schwartz@shire.cs.psu.edu (Scott E. Schwartz) (02/26/90)

In article <00405@sarek.UUCP> gsarff@sarek.UUCP (Gary Sarff) writes:
>There seem to be two different camps here.  I have seen numerous postings in
>this group saying "architecture/machine xxx (sparc,mips,...) will do better,
>look at all the applications/software base we have for the xxx architecture."
>The poster above, says porting is nothing more than recompiling.  We can't
>have it both ways.

Don't take my word for it, look at recent history.  Three years ago a
Sun4 was just a wet dream.  Now SPARC is Sun's flagship architecture.
If porting software was that serious a problem we'd know it.  Ditto for
MIPS, ibm RT, Moto 88K, etc, of course.  Granted, the key to all this
is coding applications in a high level language and for a portable OS.

--
Scott Schwartz		schwartz@cs.psu.edu
"the same idea is applied today in the use of slide rules." -- Don Knuth 

lamaster@ames.arc.nasa.gov (Hugh LaMaster) (02/27/90)

In article <273@dg.dg.com> uunet!dg!mpogue (Mike Pogue) writes:
>In article <9327@portia.Stanford.EDU> underdog@portia.Stanford.EDU (Dwight Joe) writes:
>>Are we going to see a replay of the Apple II vs. IBM pc situation in
>>the early 80s?  
>Certainly the IBM announcement is interesting and challenging from a number of 
>perspectives.

>1) The UNIX workstation market is much more OPEN (sorry to use the buzzword)
>		than the PC market of the early 80's.

>	2) Software that is ported to the IBM machine can easily be ported to 
>		other machines (MIPS, SPARC, 88K).
:
>Pretty soon, IBM customers will have a CHOICE of vendor HW platforms for running 
>their software.  (Oh my God!)

I think the IBM announcement will turn out to be the best possible outcome
for current Unix/Workstation vendors.  The reason is this:

Fortune 500 companies will see this as "legitimizing" Unix, the same way that
the IBM PC "legitimized" the PC.  Never mind the stupidity or illogicality of
this attitude - that is how they think about IBM and computers.  Today.

Which means that they will permit their staffs to buy Unix machines *from IBM*.
But probably not the competition.  Today.

Software vendors are scrambling, and will continue to scramble, to convert
their packages to the IBM systems.  And to Sun and DEC, since those are now
blue-chip companies as well.  Technicians will often succeed in at least
supporting other standard ABI systems like m88k and MIPS, which gets more
players in the game.
By then, most of the non-portable garbage is out of the code, and the smaller
companies can assist porting and supporting as needed.  Now, notice that a
software package which was formerly IBM only now runs on other systems.

This announcement opens up the potential for a flood of commercial software
on Unix workstations.

At the same time, the captive I/O peripheral approach IBM is using will limit
IBM's penetration of the hardware market in existing Unix shops, the same
as it did for DEC initially.  So, the net effect will be to add to the size 
of the Unix market while limiting penetration into the existing marketplace.
And, at the same time, provide an impetus for porting a lot of commercial
software to the Unix marketplace.

*******************

What to watch out for: Captive Software.  I have already seen a
reactionary trend forming on the part of marketeers - big vendors are
trying to recapture software by signing agreements which will delay or
prevent the introduction of particular packages on rival systems.  Sometimes
the price of getting "preferred status" is that a package *not* be
introduced on other systems for some specified time, if ever. 

I wonder about the legality of this activity, but, in any case, 
the consumer's protection against this will be *to buy software 
that is available on the widest variety of platforms even if only 
one is needed today*.   That is the protection that the investment you make
in software will not be chaining you to a particular hardware vendor in
the future. 


  Hugh LaMaster, m/s 233-9,  UUCP ames!lamaster
  NASA Ames Research Center  ARPA lamaster@ames.arc.nasa.gov
  Moffett Field, CA 94035     
  Phone:  (415)604-6117       

jkenton@pinocchio.encore.com (Jeff Kenton) (02/27/90)

From article <43746@ames.arc.nasa.gov>, by lamaster@ames.arc.nasa.gov (Hugh LaMaster):
> I think the IBM announcement will turn out to be the best possible outcome
> for current Unix/Workstation vendors.  The reason is this:
> 
> Fortune 500 companies will see this as "legitimizing" Unix, the same way that
> the IBM PC "legitimized" the PC.  Never mind the stupidity or illogicality of
> this attitude - that is how they think about IBM and computers.  Today.
> 

But, IBM has "legitimized" Unix several times before.  In fact, each time it
was a different flavor of Unix, giving rise to wondrous conspiracy theories.

I think we may have to wait and see if the latest offering has any impact on
the Unix market.



- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      jeff kenton  ---	temporarily at jkenton@pinocchio.encore.com	 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

bzs@world.std.com (Barry Shein) (02/27/90)

>Don't take my word for it, look at recent history.  Three years ago a
>Sun4 was just a wet dream.  Now SPARC is Sun's flagship architecture.
>If porting software was that serious a problem we'd know it.  Ditto for
>MIPS, ibm RT, Moto 88K, etc, of course.  Granted, the key to all this
>is coding applications in a high level language and for a portable OS.

Yes, it's true that machines have had software ported to them in big
ways, particularly machines which run Unix.

But that doesn't mean it's easy and there's little resistance out
there.

I'm doing a major port right now of a major piece of software (mpofs)
just about everyone on this list has heard of. I've also ported some
major hunks of university-ware that were non-trivial, like GNU Emacs,
Franz Lisp, Macsyma, KCL, TeX, etc.

The QA tests alone (on the mpofs) have taken weeks and have turned up
numerous little problems which are being fixed. Sure, just recompile,
sure, it even *seems* to work, until someone bothers to notice that
the floating point unit is a little different or some such thing, hey,
it runs w/o a peep, even gives answers, who cares whether or not
they're the same as you get on the other machines in the room.

Customers are such nitpickers!

There's a term in this business (this business being "software")
called "code spreading". It is used to describe the result of putting
your code onto more than one machine and maintaining the differences.
It is a term which makes major software vendors quake, it's the dark
side of portability.

Sure, it's not all that much work to make things go on another
relatively compatible platform. And I'm sure your customer service
folks will pick up the little differences in system error messages,
directory hierarchies, behavior on bootstrap and recovery mechanisms
available, reliability quirks, floating point units, slight version
differences in libraries, to socket or not to socket, etc etc in no
time at all. And the customers can't expect to have these quirks of
installation and operation in the manuals you ship, and besides, the
printers will re-print all those manuals for free for you, it's just a
"few leetle changes". And it's fortunate that the telepathic networks
are in place so you merely have to finish the port and mentally think
it out loud and every customer on the new platform will know about
your product and begin ringing your phones! Won't cost a nickel.

Let's leave it at: It's quite possible to have your software running
on many different platforms, and it can even be very profitable. It's
nice that this possibility is a thousand times more open than it was a
decade ago.

But it's not just a matter of getting someone to type "make install"
on the new platform.  It does take a good lump of cash and a lot of
work to accomplish if your product is worth anything at all.
-- 
        -Barry Shein

Software Tool & Die    | {xylogics,uunet}!world!bzs | bzs@world.std.com
Purveyors to the Trade | Voice: 617-739-0202        | Login: 617-739-WRLD

ron@woan.austin.ibm.com (Ronald S. Woan/2100000) (02/27/90)

In article <7454@pdn.paradyne.com>, alan@oz.nm.paradyne.com (Alan
Lovejoy) writes:
|> You forgot at least one possibility, which an IBM spokesman claims
|> just happens to be the real reason: the compilers have not yet been
|> updated to take any real advantage of the integer instruction
|> parallelism provided by the CPU.  Of course, some of the
|> parallelism (5 instructions/cycle max) is only available when FP
|> instructions are to be executed, but the same is true for most
|> other "superscalars" and/or "near superscalars" which are currently
|> available.  Expect integer benchmark performance to much more
|> closely resemble the claimed "MIPS" ratings when the new compilers
|> become available.

I must have missed something here... What integer unit parallelism?
From what I understand, the integer unit is a single pipeline (5?
stages), so it can never do better than one instruction/cycle. To get
five/cycle, you need two branches, float add, float mult, and integer
operation. Only an amazing compiler can schedule an application to use
this frequently.

						Ron

+-----All Views Expressed Are My Own And Are Not Necessarily Shared By------+
+------------------------------My Employer----------------------------------+
+ Ronald S. Woan  (IBM VNET)WOAN AT AUSTIN, (AUSTIN)ron@woan.austin.ibm.com +
+ outside of IBM       @cs.utexas.edu:ibmchs!auschs!woan.austin.ibm.com!ron +
+ last resort                                        woan@peyote.cactus.org +

sysmgr@King.eng.umd.edu (Doug Mohney) (02/28/90)

>IBM has a couple of things going for it.
>1.  The 6000 has the IBM logo.  (This was one reason often presented for
>    IBM's success with the IBM PC.)
>2.  The 6000 is significantly better (i.e. faster) than the competition.
>3.  The 6000 has the backing of a large sales and support network.
>
You left out:
 4.  IBM went to the time and trouble to get a large base of third-party
     software running on their platforms
 5.  IBM got "real UNIX" compatibility, in the form of Berkeley/POSIX and NFS
     stuff. 
 6.  MicroChannel provides an upgun to co/multiprocessing, plus lots of
     boards developed for the PS/2 world.

>SUN and MIPS have a couple of things against them.
>1.  Their machines are slower than the 6000.

Neither DEC/MIPS nor Sun has put its cards on the table yet. They are
rumored to release 20 MIPS boxes by May, and bring something (current product
line?) in under 10K/box; IBM's entry-level machine is 13K. If DEC and/or Sun
come out with 10 MIPS/5K-7K boxes to complement their 20K/10-12K boxes, life
gets very interesting. 

>Are we going to see a replay of the Apple II vs. IBM pc situation in
>the early 80s?  
>...
>Here we have SUN and MIPS, the companies that first came out with
>good RISC machines (like Apple that first came out with a good
>personal computer--much better than the TRS-80 8^) ).  hmmmm....
>

IBM's machines will promote UNIX/workstation concepts, but it remains to be
seen if IBM can continue to perform technology upgunning year after year. The
PS/2 lines are overpriced and stagnant, while Compaq and various other PC-clone
makers have bigger market share than IBM. 

Finally, the workstation market is inherently competitive. Before IBM
jumped into the fray, DEC, Sun and HP/Apollo were (and are) locked in a
bloody battle of MIPS/bucks.

Look for IBM to take 7-10% of existing marketshare due to price/performance
over the next two years, and for them to grab up to 50% of whatever new
market they create with their announcement.

The speculative math is left as an exercise to the reader :-)

ron@woan.austin.ibm.com (Ronald S. Woan/2100000) (02/28/90)

In article <1653@awdprime.UUCP>, ron@woan.austin.ibm.com (Ronald S.
Woan/2100000) writes:
|> In article <7454@pdn.paradyne.com>, alan@oz.nm.paradyne.com (Alan
|> Lovejoy) writes:
|> |> You forgot at least one possibility, which an IBM spokesman claims
|> |> just happens to be the real reason: the compilers have not yet been
|> |> updated to take any real advantage of the integer instruction
|> |> parallelism provided by the CPU.  
|> 
|> I must have missed something here... What integer unit parallelism?
|> From what I understand, the integer unit is a single pipeline (5?
|> stages), so it can never do better than one instruction/cycle. To get
|> five/cycle, you need two branches, float add, float mult, and integer
|> operation. Only an amazing compiler can schedule an application to use
|> this frequently.

Some people within IBM have asked me to clarify my statements (I
didn't realize there was any confusion). I did not mean to say that
compiler technology did not greatly enhance non-floating point
application speed. It can clearly do so by proper scheduling of
branches with respect to integer operations; however, with a single
integer operation pipeline, you will never see more than one integer
operation/cycle. Also, it would take an amazing compiler to schedule
five concurrent operations regularly for any imaginable typical
application to maintain the peak execution figure that some people
have been quoting.

Also, someone wanted me to mention that when I said a floating point
add and multiply in parallel, I did not imply that these were separate
ops but rather a single fused multiply-add operation (instruction).

					Sorry folks,
					Ron

+-----All Views Expressed Are My Own And Are Not Necessarily Shared By------+
+------------------------------My Employer----------------------------------+
+ Ronald S. Woan  (IBM VNET)WOAN AT AUSTIN, (AUSTIN)ron@woan.austin.ibm.com +
+ outside of IBM       @cs.utexas.edu:ibmchs!auschs!woan.austin.ibm.com!ron +
+ last resort                                        woan@peyote.cactus.org +

frazier@oahu.cs.ucla.edu (Greg Frazier) (02/28/90)

In article <1666@awdprime.UUCP> @cs.utexas.edu:ibmchs!auschs!woan.austin.ibm.com!ron writes:
>
>In article <1653@awdprime.UUCP>, ron@woan.austin.ibm.com (Ronald S.
>Woan/2100000) writes:
+|> I must have missed something here... What integer unit parallelism?
+|> From what I understand, the integer unit is a single pipeline (5?
+|> stages), so it can never do better than one instruction/cycle. To get
+|> five/cycle, you need two branches, float add, float mult, and integer
+|> operation. Only an amazing compiler can schedule an application to use
+|> this frequently.
+
+Some people within IBM have asked me to clarify my statements (I
[stuff deleted]
+branches with respect to integer operations; however, with a single
+integer operation pipeline, you will never see more than one integer
+operation/cycle. Also, it would take an amazing compiler to schedule
+five concurrent operations regularly for any imaginable typical
+application to maintain the peak execution figure that some people
+have been quoting.

My understanding is that there are 3 ALUs, each with its own
pipeline, and that that is where the 5 ops/cycle peak comes from
(i.e. 3 int ops, 1 fp mult and 1 fp add).  Of course, you don't often
get more than 1 int op at a time, as the discrepancy between the fp
and int benchmarks reveals.

+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
Greg Frazier				"Big A, little a / What begins with A?
frazier@CS.UCLA.EDU			Aunt Annie's Alligator / A ... a ... A"
!{ucbvax,rutgers}!ucla-cs!frazier			_Dr._Seuss's_ABCs_

woan@peyote.cactus.org (Ronald S. Woan) (02/28/90)

In article <32344@shemp.CS.UCLA.EDU>, frazier@oahu.cs.ucla.edu (Greg Frazier) writes:
> My understanding is that there are 3 ALU's, each with it's own
> pipeline, and that is where the 5 ops/cycle peak is (i.e. 3 int
> ops, 1 fp mult and 1 fp add).  Of course, you don't often get
> more than 1 int opn at a time, as is revealed by the discrepancy
> between the fp and int benchmarks.

Where did this come from? From the simulator and from talking to the
designers, the three ALUs are the float-add, float-multiply, and
integer-op units. If we could do (and schedule) multiple integer
ops/cycle, you would be seeing some even more phenomenal benchmark
results. I repeat: the 5-op peak is achieved by two branch
instructions (are we counting these as integer ops these days?), a
float add-multiply instruction, and a single integer op.

						Ron
-- 
+-----All Views Expressed Are My Own And Are Not Necessarily Shared By------+
+------------------------------My Employer----------------------------------+
+ Ronald S. Woan         @cs.utexas.edu:romp!auschs!woan.austin.ibm.com!ron +
+ second choice:                                     woan@peyote.cactus.org +

touati@ucbarpa.Berkeley.EDU (Herve J. Touati) (03/01/90)

In article <438@peyote.cactus.org> woan@peyote.cactus.org (Ronald S. Woan) writes:
[...]
>Where did this come from? From the simulator and talking to the designers,
>the three ALUs are the float add, float mult, and integer op units. If we
>could do (and schedule) multiple integer ops/cycle, you would be seeing
>some even more phenomenal benchmark results. I repeat 5 op/peak achieved
>by two branch instructions (counting these as integer ops these days?),
>float add-multiply instruction, and a single integer op.

My understanding (after an IBM presentation here) is that the two
branch instructions that can be executed in one cycle have to be
understood as operations of the branch unit, and only one can actually
be a branch; the second is a condition-code operation. One of the
innovations of the R/6000 architecture is that the processor comes
with several (8) condition-code registers. Instructions that can set
the condition codes have bits in their opcodes specifying which
register the condition code should be put in. The processor provides
boolean operations on the condition-code registers as separate
instructions, and only those instructions can be executed in parallel
with a branch by the branch unit.
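To make that concrete, a hypothetical pseudo-assembly sketch (mnemonics are approximate, in the style of the later PowerPC descendants of this design, and not taken from any IBM document):

```
# Hypothetical sketch: two compares target different condition-code
# fields, a CR-CR boolean op (executed by the branch unit, per the
# description above) combines them, and one real branch tests the result.
        cmp     cr1, r3, r4          # set condition-code field 1
        cmp     cr2, r5, r6          # set condition-code field 2
        crand   cr7.eq, cr1.eq, cr2.eq   # boolean op on CC registers
        beq     cr7, both_equal      # the single actual branch this cycle
```

The "two branches per cycle" in the peak figure would thus be one `crand`-style condition-register operation plus one conditional branch.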

--- Herve' Touati
UC Berkeley

frazier@oahu.cs.ucla.edu (Greg Frazier) (03/01/90)

In article <438@peyote.cactus.org> woan@peyote.cactus.org (Ronald S. Woan) writes:
+In article <32344@shemp.CS.UCLA.EDU>, frazier@oahu.cs.ucla.edu (Greg Frazier) writes:
+> My understanding is that there are 3 ALU's, each with it's own
+> pipeline, and that is where the 5 ops/cycle peak is (i.e. 3 int
+> ops, 1 fp mult and 1 fp add).  Of course, you don't often get
+> more than 1 int opn at a time, as is revealed by the discrepancy
+> between the fp and int benchmarks.
+
+Where did this come from? From the simulator and talking to the designers,
+the three ALUs are the float add, float mult, and integer op units. If we
+could do (and schedule) multiple integer ops/cycle, you would be seeing
+some even more phenomenal benchmark results. I repeat 5 op/peak achieved
+by two branch instructions (counting these as integer ops these days?),
+float add-multiply instruction, and a single integer op.

I have since been informed that the two "other" int ALUs are
dedicated to condition-code generation and branch calculation,
respectively.  Oh, well, at least my source was correct about
the ALUs' existence, if not their operation.  Certainly, CC
generation and branch calculation should be counted as int ops:
they *do* have to be executed one way or another on all
machines, they require an ALU, and they are not fp.

+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
Greg Frazier				"Big A, little a / What begins with A?
frazier@CS.UCLA.EDU			Aunt Annie's Alligator / A ... a ... A"
!{ucbvax,rutgers}!ucla-cs!frazier			_Dr._Seuss's_ABCs_

oscar@oscar.uucp (Oscar R. Mitchell;3A-17 (045)) (03/01/90)

In article <32344@shemp.CS.UCLA.EDU> frazier@oahu.UUCP (Greg Frazier) writes:
>In article <1666@awdprime.UUCP> @cs.utexas.edu:ibmchs!auschs!woan.austin.ibm.com!ron writes:
>>
>>In article <1653@awdprime.UUCP>, ron@woan.austin.ibm.com (Ronald S.
>>Woan/2100000) writes:
>+|> I must have missed something here... What integer unit parallelism?
>+|> From what I understand, the integer unit is a single pipeline (5?
>+|> stages), so it can never do better than one instruction/cycle. To get
>+|> five/cycle, you need two branches, float add, float mult, and integer
>+|> operation. Only an amazing compiler can schedule an application to use
>+|> this frequently.
>+
>+Some people within IBM have asked me to clarify my statements (I
>[stuff deleted]
>+branches with respect to integer operations; however, with a single
>+integer operation pipeline, you will never see more than one integer
>+operation/cycle. Also, it would take an amazing compiler to schedule
>+five concurrent operations regularly for any imaginable typical
>+application to maintain the peak execution figure that some people
>+have been quoting.
>
>My understanding is that there are 3 ALU's, each with it's own
>pipeline, and that is where the 5 ops/cycle peak is (i.e. 3 int
>ops, 1 fp mult and 1 fp add).  Of course, you don't often get
>more than 1 int opn at a time, as is revealed by the discrepancy
>between the fp and int benchmarks.
>



I hope the following will clarify any confusion about the
comments made about the RISC System/6000's five (5) operations/cycle:

	The RS/6000 is composed of several "Sub-Units"
	some of these are:
	        Instruction Cache Unit   (ICU)
	        Fixed Point Unit         (FXU)
	        Floating Point Unit      (FPU)
	        Data Cache Unit          (DCU)

Therefore, in order for the RS/6000 to accomplish 5 operations/cycle
the:
       ICU would be executing a Branch/IFetch and a Condition
       Register operation.

       FXU would be executing an integer operation.

       FPU would be executing the Multiply-Add/Sub Instruction
	 (Note: This instruction is done with only a SINGLE rounding
	        operation - this is not a multiply {w/rounding}
	        followed by an addition/subtraction {w/rounding} )


Regards,
Oscar.
><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><
 IBM Advanced Workstations Division                  IBM Tieline:  678-6733
 RISC System/6000(tm) Floating Point Processor Design Group
   Mail Stop: ZIP 4359                                 USA Phone: (512)838-6733
   Austin, Texas  78758                   CI$ Net: 76356.1170@compuserve.com
   IBM VNet:  OSCAR at AUSVM6        IBM InterNet:  oscar@oscar.austin.ibm.com
   USA InterNet:  uunet!cs.utexas.edu!ibmaus!auschs!oscar.austin.ibm.com!oscar
#include <standard.disclaimer>  /* I DO NOT speak for IBM */  Oscar R. Mitchell

stripes@eng.umd.edu (Joshua Osborne) (03/02/90)

In article <00405@sarek.UUCP> gsarff@sarek.UUCP (Gary Sarff) writes:
>There seem to be two different camps here.  I have seen numerous postings in
>this group saying "architecture/machine xxx (sparc,mips,...) will do better,
>look at all the applications/software base we have for the xxx architecture."
>The poster above, says porting is nothing more than recompiling.  We can't
>have it both ways.
Porting is not much more than recompiling when the new system has the same
data order, sizes and padding requirements (or if you wrote correct code that
doesn't make assumptions about that stuff), and when the *exact* same (or
at least upwardly compatible) OS is running.  It is easy to move code from
a Sun3 to a Sun4 (allowing for padding differences) because they are both the
same mix of SysV and BSD, both have RPC, both have XDR, both use the same
extensions to make, both have the same lightweight-process lib, both have the
same shared libs, &c &c...
However, going from a SPARCstation 1 to a DECstation 3000 you have to deal
with the differences between SunOS & Ultrix.  If it was a SunView program you
have to re-write it to use X (even with the XView toolkit it's far harder than
a re-compile).
-- 
           stripes@eng.umd.edu          "Security for Unix is like
      Josh_Osborne@Real_World,The          Mutitasking for MS-DOS"
      "The dyslexic porgramer"                  - Kevin Lockwood
"Don't try to change C into some nice, safe, portable programming language
 with all sharp edges removed, pick another language."  - John Limpert

gsarff@sarek.UUCP (Gary Sarff) (03/03/90)

In article <En$_--@cs.psu.edu>, schwartz@shire.cs.psu.edu (Scott E. Schwartz) writes:
>In article <00405@sarek.UUCP> gsarff@sarek.UUCP (Gary Sarff) writes:
>>There seem to be two different camps here.  I have seen numerous postings in
>>this group saying "architecture/machine xxx (sparc,mips,...) will do better,
>>look at all the applications/software base we have for the xxx architecture."
>>The poster above, says porting is nothing more than recompiling.  We can't
>>have it both ways.
>
>Don't take my word for it, look at recent history.  Three years ago a
>Sun4 was just a wet dream.  Now SPARC is Sun's flagship architecture.
>If porting software was that serious a problem we'd know it.  Ditto for
>MIPS, ibm RT, Moto 88K, etc, of course.  Granted, the key to all this
>is coding applications in a high level language and for a portable OS.
>

How did _I_ get involved in this?  I was merely pointing out the different
opinions of two groups here, and now this. 8-)  (I have also received email
from a person who seems to have misunderstood my post.)  All I was
saying was that _IF_ one says that porting is easy, _THEN_ one may not
logically claim that any benefit accrues to a particular architecture because
of the software available on platforms using that architecture.  I have
known people to use the software-availability argument to show the
superiority of one machine over another.  I was not speaking to the
ease or difficulty of porting software.

----------------------------------------------------------------------------
                       I _DON'T_ live for the leap!

antony@lbl-csam.arpa (Antony A. Courtney) (03/04/90)

In article <438@peyote.cactus.org> woan@peyote.cactus.org (Ronald S. Woan) writes:
>In article <32344@shemp.CS.UCLA.EDU>, frazier@oahu.cs.ucla.edu (Greg Frazier) writes:
>> 
>> [ lots of stuff about IBM's ALU architecture]
>
> [lots of confusion from IBM-type]
>

Question: 

	Why doesn't IBM just make all the information about the machine's
architecture available to the world, so that engineers and scientists can
make their own decisions about how good it is or isn't and this group can
have a decent discussion based on published fact and not based on rumor?
It might certainly help get rid of all the postings which seem to have no
foundation whatsoever (such as this one)...

(and to the author of that article: The blame here is on IBM for not making
the information easily available, not on you for not knowing.)

Or:  Is such information already available?  If so, how can I obtain it?
Learning about a neat machine by saving hundreds of comp.arch articles doesn't
seem to me to be the best way to get a clear view of the design, which is
what I have been doing up to now...

					antony


--
*******************************************************************************
Antony A. Courtney				antony@lbl.gov
Advanced Development Group			ucbvax!lbl-csam.arpa!antony
Lawrence Berkeley Laboratory			AACourtney@lbl.gov

christy@argosy.UUCP (Peter Christy) (03/04/90)

It is pretty unfair to criticize IBM on information disclosure.

See, for example, the three papers on the IBM Second-Generation RISC
Machine in the COMPCON 90 proceedings, and the already-referenced IBM
compendium on technology, available from IBM sales people.

IBM may not be perfect in terms of information availability, but for
a product introduced a month ago, this is an amazing wealth of detailed
and clear information.


Peter Christy
MassPar Inc.
2840 San Tomas Expressway
Santa Clara, CA  95051

christy@argosy.UUCP (Peter Christy) (03/04/90)

The last posting had an incorrect .signature; the correct one follows. /pc


Peter Christy, MasPar Computer Corporation
  749 North Mary Ave, Sunnyvale, California  94086
  (408) 736-3300 -- christy@maspar.com

kolding@cs.washington.edu (Eric Koldinger) (03/04/90)

In article <5004@helios.ee.lbl.gov> antony@lbl-csam.arpa (Antony A. Courtney) writes:
>Question: 
>
>	Why doesn't IBM just make all the information about the machine's
>architecture available to the world, so that engineers and scientists can
>make their own decisions about how good it is or isn't and this group can
>have a decent discussion based on published fact and not based on rumor?
>It might certainly help get rid of all the postings which seem to have no
>foundation whatsoever (such as this one)...
>

The information is somewhat available.  There are two papers in ICCD '89 about
the processor's architecture.  I haven't read either of them yet, but I plan
to later this week.  They look like they might contain more information than
most IBM papers do (most papers from IBeam are rather pruned by the lawyers
before they get out, so they end up being rather light on content).

You really can't blame IBM for not giving out all the information on their
latest whiz-bang machines.  They're in the business of making money, not of
providing their competition with potentially damaging information.  I just
wish they'd release more in-depth information on research/non-product stuff
(the papers I've seen on the PL.8 compiler, a research compiler as far as I
know, are all rather skimpy).

-- 
        _   /|                          Eric Koldinger
        \`o_O'                          University of Washington
          ( )     "Gag Ack Barf"        Department of Computer Science
           U                            kolding@cs.washington.edu