martin@fritz.UUCP (06/02/87)
Gentlemen:
I have read with interest the discussion on benchmarking. I am
sure that a benchmarking bake-off would be a good thing
for prospective customers. But for the engineers in the
audience, I would like to point out some of the social problems
associated with touting your company's wares.
I, for one, was recently fired ("laid off") in a manner that
would suggest it was associated with benchmarking activities (and
that was internal to the company). So were my boss, people who worked for
me (some made it out before the formal festivities), and an entire
performance group. This, of course, is my personal opinion. The
formal reason was job elimination due to merger. At the time we
were working on performance evaluations & improvement proposals
for a particular mainframe architecture.
In article <4294@nsc.nsc.com>, grenley@nsc.nsc.com (George Grenley) writes:
> May the best CPU win!
In article <2128@hoptoad.uucp> gnu@hoptoad.uucp (John Gilmore) writes:
>
> Let's have the bake-off in the trade show at, say, next Winter
> Usenix. Probably the actual setup and running of the benchmarks
> can be done a day or two before the show, so the results can be
> printed for distribution, and to give the losers time to think
> up (and print up) good explanations before we descend on them :-).
>...
>--
>Copyright 1987 John Gilmore; you may redistribute only if your recipients may.
Any bake-off will have one winner and many losers. As an engineer
working for a particular vendor, you may have run tests that
convince you that you will be a winner. But be assured that in
some dimension you will be a loser. And if you proposed that
your company enter, you will be the one "responsible" for the
exposure of "weakness".
There has been some discussion of "holistic" vs "reductionist" approaches
to benchmarking. We went through this. We tried various tests
of particular components. Mostly industry standards. We did finally
get through with these, but were met with arguments that the particular
way the system was put together meant that the whole exceeded the sum
of its parts. The problem with this argument is that
it is extremely difficult and expensive ($10**6+) to prove or
disprove the conjecture in a manner that is "finally" convincing. And
then, of course, any vendor can point to happy customers (forget
dissonance) to show that rational people are buying the gear, so
it must be better than the competition's.
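To make the reductionist side concrete, here is the sort of component
test I mean -- a toy memory-copy bandwidth loop, written in C. This is a
hedged sketch for illustration only, not any benchmark we actually ran;
the buffer size and repetition count are invented.

/* Toy component benchmark: raw memory-copy bandwidth.
 * Illustrative only; buffer size and iteration count are arbitrary.
 */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define BUFSIZE (1 << 20)   /* 1 MB source and destination buffers */
#define REPS    100         /* enough repetitions to get a measurable time */

static char src[BUFSIZE];
static char dst[BUFSIZE];

int main(void)
{
    clock_t start, stop;
    double secs, mbytes;
    int i;

    start = clock();
    for (i = 0; i < REPS; i++)
        memcpy(dst, src, BUFSIZE);
    stop = clock();

    secs = (double)(stop - start) / CLOCKS_PER_SEC;
    mbytes = (double)REPS * BUFSIZE / (1024.0 * 1024.0);
    printf("copied %.0f MB in %.2f s = %.1f MB/s\n",
           mbytes, secs, secs > 0.0 ? mbytes / secs : 0.0);
    return 0;
}

A number like that is easy to produce and easy to compare; whether it
says anything about the machine as a whole is exactly the argument we
kept losing.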
We found that management a) did not understand the notion of
component (reductionist) tests, or the orthogonality (or lack
thereof) of particular components, and b) did not care to hear
discussion. I will predict that any open bake-off will lead
to problems for almost anyone who enters it. There are very
few "clean wins" to be had. There are lots of arguments to
be had -- was array bounds checking on? Was loop unrolling
allowed? Does their equipment detect null-pointers in
hardware? None of these arguments are understood by management
(read that: "your management"), who are generally just hacked
that the discussion is going on at all (particularly if it's
in the public domain).
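For what it is worth, here is the kind of detail those arguments turn
on. Below is a sketch, in C, of one and the same reduction written
straight and hand-unrolled by four; it is an invented example, not one
of the disputed benchmarks, and the sizes are arbitrary.

/* Illustration of the "was loop unrolling allowed?" argument:
 * the same reduction written plain and hand-unrolled by 4.
 * Sizes and repetition counts are arbitrary; this is a sketch.
 */
#include <stdio.h>
#include <time.h>

#define N    100000
#define REPS 1000

static double a[N];

static double sum_plain(void)
{
    double s = 0.0;
    int i;
    for (i = 0; i < N; i++)
        s += a[i];
    return s;
}

static double sum_unrolled(void)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    int i;
    for (i = 0; i + 3 < N; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < N; i++)          /* pick up any leftover elements */
        s0 += a[i];
    return s0 + s1 + s2 + s3;
}

static double time_it(double (*f)(void))
{
    volatile double s;          /* keep the compiler from discarding the work */
    clock_t start = clock();
    int r;
    for (r = 0; r < REPS; r++)
        s = f();
    (void)s;
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    int i;
    for (i = 0; i < N; i++)
        a[i] = 1.0;
    printf("plain    : %.2f s\n", time_it(sum_plain));
    printf("unrolled : %.2f s\n", time_it(sum_unrolled));
    return 0;
}

Run both with and without optimization on the same machine and you will
usually get four different numbers, any one of which can be quoted as
"the" result. That is precisely the discussion nobody above the
engineering level wants to have.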
There are some companies that are likely to benefit hugely from
such an event. They, of course, are the small companies,
eager to get market share and with little market share to
protect, who have been able to design an architecture to
exploit current technology without regard for compatibility :-).
For them, the likelihood of "winning" and the attendant
publicity can do nothing but good. Who could they be?
For myself, I'd love to see such an event, and since I'm
not foolish enough to get involved, it won't hurt me. So I say
go for it!
Disclaimer: The statements above are completely untrue. They are the
author's alleged opinion only, and even he is not sure he would
admit to them. They in no way reflect the opinion of the author's
current employer or previous employer (especially previous employer).
All claimed facts have already been discredited.
--
Martin S. McKendry
FileNet Corp
{hplabs,trwrb}!felix!martin

ram@nucsrl.UUCP (Renu Raman) (06/09/87)
>Gentlemen:

If you don't get flamed for addressing "Gentlemen" alone, we can safely
assume that this is an all male newsgroup (How sad).

>There are some companies that are likely to benefit hugely from
>such an event. They, of course, are the small companies,
>eager to get market share and with little market share to
>protect, who have been able to design an architecture to
>exploit current technology without regard for compatibility :-).

Martin hit the nail on the head here about designing with compatibility
in mind. Take the case of the IBM 360s/VAXen/Intel. The 360 was
definitely a workhorse of the 60s and 70s (with an excellent
architecture for its time) but got plagued by compatibility features.
At the other end of the spectrum, the likes of AMD and MIPS (certainly
not small), where compatibility is not the issue, have produced designs
that are bold and innovative. Would they also degenerate into mediocrity
as their markets build up?

The problem becomes more severe today with RISCy processors [as design
issues/RISC ideas are notoriously different from group to group
(group = individual/corp/univ/research ctr)], where maintaining
compatibility over a generation of processors will certainly be
difficult. At the same time one needs compatibility to stay in business.
What do you optimize here?

>Martin S. McKendry
>FileNet Corp
>{hplabs,trwrb}!felix!martin
ps@celerity.UUCP (Pat Shanahan) (06/11/87)
In article <3810039@nucsrl.UUCP> ram@nucsrl.UUCP (Renu Raman) writes:
>
>>Gentlemen:
>
> If you don't get flamed for addressing "Gentlemen" alone, we can safely
> assume that this is an all male newsgroup (How sad).

It is not an all male newsgroup, but it is also not a suitable newsgroup
for discussing sexism in forms of address.

Patricia Shanahan
--
ps (Pat Shanahan)
uucp : {decvax!ucbvax || ihnp4 || philabs}!sdcsvax!celerity!ps
arpa : sdcsvax!celerity!ps@nosc
ps@celerity.UUCP (Pat Shanahan) (06/11/87)
In article <3810039@nucsrl.UUCP> ram@nucsrl.UUCP (Renu Raman) writes:
>
> Martin hit the nail on the head here about designing with compatibility
> in mind. Take the case of the IBM 360s/VAXen/Intel. The 360 was
> definitely a workhorse of the 60s and 70s (with an excellent
> architecture for its time) but got plagued by compatibility features.
> At the other end of the spectrum, the likes of AMD and MIPS (certainly
> not small), where compatibility is not the issue, have produced designs
> that are bold and innovative. Would they also degenerate into mediocrity
> as their markets build up?
>...

I think these things go in fairly long cycles. There are relatively short
periods when major architectural ideas become real machines, separated by
a decade or so of gradual improvement. Those periods of consolidation do
not necessarily represent "mediocrity".

When the 360 was designed, much larger volumes of code were written in
assembly language, resulting in very tight lock-in. The cost of
incompatibility is reduced as more code is written in standard languages.
There is still a long way to go before source programs can automatically
be moved to new architectures without any trouble. (I wish they could,
and I wish we did not have to implement a load of FORTRAN language
extensions, just because DEC did in VMS-FORTRAN, and too many programs
depend on them.)

I think the trade-off between the value of a performance gain and the
cost of incompatibility is one that users have to make for themselves
when selecting a system. It is in any case useful to measure performance,
even though it will rarely be the only issue.
--
ps (Pat Shanahan)
uucp : {decvax!ucbvax || ihnp4 || philabs}!sdcsvax!celerity!ps
arpa : sdcsvax!celerity!ps@nosc
blatt@Shasta.UUCP (06/11/87)
In article <3810039@nucsrl.UUCP> ram@nucsrl.UUCP (Renu Raman) writes:
>
>>Gentlemen:
>
> If you don't get flamed for addressing "Gentlemen" alone, we can safely
> assume that this is an all male newsgroup (How sad).
>
>>Martin S. McKendry

FLAME: Contrary to your expectations, there do exist women (for instance
me) who read this group regularly, and do not take kindly to being
addressed as gentlemen. END FLAME.

Now for something technical. Does anyone out there know of computers that
use immersion cooling? I have data only on the Cray-2. I am interested in
the heat flux per square cm, what fluid, what temperature, whether it is
laminar flow, turbulent flow, boiling, or flow boiling. I've heard that
the ETA computer will use immersion cooling. Can someone tell me
something about that?

Miriam Blatt
Center for Integrated Systems
Stanford University
blatt@amadeus.stanford.edu, ...!decvax!decwrl!shasta!blatt