[net.micro.pc] Standard, What standard???

root@idmi-cc.UUCP (Admin) (01/30/85)

        This is sort of a flame, though it's also an honest attempt
to get some support one way or the other for an argument that has been
going around my office for over a year.
        My problem is that every time IBM announces a 'new' product for
one of its PCs there is a flurry of activity in the media that centers
around the question "Is IBM defining a new standard in ...(whatever)?".
Time was when setting a standard referred to either a) a standard of
excellence or b) a standard for 'state of the art' (i.e., a breakthrough
of some sort).  My understanding of the IBM line of PCs is: a) none of
them has ever used anything but already existing technology, b) none of
them has been any more reliable than all but the worst of their
competitors, c) some of them (the PCjr for instance) have been a good
deal worse than other similarly priced products, and d) rarely has any of
the software with IBM's name on it been 'state of the art' (let alone a
breakthrough on the 'leading edge').  The fact is I can't recall ever
having seen anything come from IBM that hadn't already been available in
equal or better form from somewhere else for less money.
        Don't get me wrong, I like IBM.  I think they have had a big hand
in the popularization of computers.  I just don't think they have set any
standards in the computer world.  I am tired of hearing the question, over
and over again, "is IBM setting a standard in ..." when what really is
being asked is "is IBM doing another thing in a mediocre way that everyone
else will be forced to accept, emulate, or improve on because of the
general public's low level of knowledge concerning computers?"
        It is my feeling that to say IBM sets standards in the computer
industry is like saying Betty Crocker sets the standard for fine French
pastry.

-----
"This message will self destruct in 5 seconds"

The views expressed herein are probably not worth much to anyone and
therefore should not be mistaken to represent I.D.M.I., its officers,
or any other people heretofore or hereafter associated with said company.

			Andrew R. Scholnick
			Information Design and Management Inc., Alexandria, Va.
		{rlgvax,prcrs}!idmi-cc!andrew

mf1@ukc.UUCP (M.Fischer) (02/02/85)

<>
I have to agree that IBM seems to have introduced a conservative effect
into the marketplace, even while increasing the popularity of the micro.
I went to a technologically isolated area in Asia for two years.  When I
returned I noted many changes; the traditional culture shock.  One thing
that hadn't changed in those two years was micros.  I came back expecting
great marvels, and the obsolescence of my own knowledge, and instead found
marketing expansion but few new ideas.  I have very mixed feelings about
this, and must admit some disappointment.

Michael Fischer vax135!ukc!mf1

jss@sjuvax.UUCP (J. Shapiro) (02/06/85)

[Aren't you hungry...?

> My understanding of the IBM line of PCs is: a) none of them
> has ever used anything but already existing technology.
> 
> 			Andrew R. Scholnick
> 			Information Design and Management Inc., Alexandria, Va.
> 		{rlgvax,prcrs}!idmi-cc!andrew

Not only that, but they didn't even try to use it innovatively, nor even
efficiently.  The same is true of the PC-AT.

Segmented Architecture... AAAAAAAARRRRRRRRRRRRRRGGGGGGGGGGHHHHHHHH!

Jonathan S. Shapiro
Haverford College
..!allegra!sjuvax!jss

david@daisy.UUCP (David Schachter) (02/21/85)

Mr. Shapiro writes that IBM doesn't use new technology innovatively or
efficiently.  He closes with "Segmented Architecture... AAAAARRRRRRRRRR-
GGGGGGGGHHHHHHHH!"  I beg to differ.

The circuitry coupling the PC-AT bus to the PC-XT bus (to remain compatible)
is neither simple nor brilliant.  But it does accomplish the presumed design
goal: get the job done cheaply.  In this case, efficiency means efficiency
with respect to cost.  The base concept, that of connecting the old and new
buses to remain compatible, is at least mildly innovative.  Most companies
would simply say "sorry folks, but this is a new generation.  Throw out your
old hardware."
IBM didn't.  (They did the same thing with the IBM 360/370/30xy mainframes.
DEC did the same with the VAX.  Intel did the same with the 80x86.)  Note
that I am referring to hardware compatibility: the hardware interface to the
outside world is retained even though the hardware guts are radically dif-
ferent.  Compared with the rest of the micro-world, IBM's approach is
innovative.

Finally, although I am not a fan of segmentation a la Intel, I am compelled
to point out that my company has done quite a lot within the Intel archi-
tecture.  Our experience in writing complex Computer-Aided-Engineering
programs is that if you need segments > 64kB, you probably don't know
what you are doing: there exists a better algorithm to do what you want to
do.  This is not always true but it is true often enough that the Intel
architecture doesn't cause us much pain.  In summary, while segmentation
looks bad, it really doesn't hurt too much.
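
By way of illustration only (a rough sketch in plain C, not code from our
products; the chunk size and the names big_table, CHUNK_BYTES, and item are
made up for the example): a table that would overflow one 64K segment can
usually be broken into a small directory of chunks, so that no single
allocation ever exceeds the segment limit.

    /* Sketch only: keep every allocation under the 64K segment limit by
     * splitting one big table into a directory of small chunks.  The
     * sizes and names are illustrative assumptions, nothing more.       */
    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK_BYTES 32768U                /* well under one 64K segment */
    #define CHUNK_ITEMS (CHUNK_BYTES / sizeof(long))

    struct big_table {
        unsigned   nchunks;
        long     **chunk;                     /* small directory of chunks  */
    };

    static long *item(struct big_table *t, unsigned long i)
    {
        return &t->chunk[i / CHUNK_ITEMS][i % CHUNK_ITEMS];
    }

    int main(void)
    {
        struct big_table t;
        unsigned long n = 100000UL;           /* 100,000 longs: far too     */
        unsigned long i;                      /* big for any one segment    */

        t.nchunks = (unsigned)((n + CHUNK_ITEMS - 1) / CHUNK_ITEMS);
        t.chunk   = malloc(t.nchunks * sizeof *t.chunk);
        for (i = 0; i < t.nchunks; i++)
            t.chunk[i] = malloc(CHUNK_BYTES); /* error checks omitted       */

        *item(&t, n - 1) = 42;
        printf("%ld\n", *item(&t, n - 1));
        return 0;
    }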

(I have no connection with Intel or its competitors.  Nobody likes me.
The opinions expressed herein are mine, not those of my company or its
employees.)  {Perfection is temporary.}

agn@cmu-cs-k.ARPA (Andreas Nowatzyk) (02/26/85)

Defending the 80x86's segmented architecture, Mr. Schachter writes:

> Finally, although I am not a fan of segmentation a la Intel, I am compelled
> to point out that my company has done quite a lot within the Intel archi-
> tecture.  Our experience in writing complex Computer-Aided-Engineering
> programs is that if you need segments >  64kB, you probably don't know
> what you are doing: there exists a better algorithm to do what you want to
> do.  This is not always true but it is true often enough that the Intel
> architecture doesn't cause us much pain.  In summary, while segmentation
> looks bad, it really doesn't hurt too much.

Using said software (on Daisy's 80286-based CAD workstations), I find
the opposite to be true: segmented addressing with a 16-bit offset limit
is a royal pain in the neck!  I ran into quite a few problems that were
directly related to the fact that some table in some data structure had
to fit into 64K bytes.  While the CAD software itself is reasonable, I
wished more than once that they had used a 68K or some other 16/32-bit
processor.
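
For those who haven't had to fight it: the per-table ceiling comes straight
from the 16-bit offset, unless the software goes to the trouble of spanning
segments.  A purely illustrative sketch in plain C (the function name is
mine, not anything from a compiler library) of how a real-mode 8086 forms an
address; on the 80286 the segment value selects a descriptor instead of
being shifted, but the offset is still only 16 bits, hence the same 64K
ceiling per object.

    #include <stdio.h>

    /* Illustrative only: real-mode 8086 address formation.  The offset
     * is 16 bits wide, so one segment can never cover more than 64K.   */
    static unsigned long physical_address(unsigned seg, unsigned off)
    {
        return ((unsigned long)seg << 4) + off;
    }

    int main(void)
    {
        /* the CGA text buffer at B800:0000 is physical address 0xB8000 */
        printf("%lX\n", physical_address(0xB800u, 0x0000u));
        return 0;
    }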

  Andreas.Nowatzyk              ARPA:   agn@cmu-cs-k.ARPA
                              USENET:   ...!seismo!cmu-cs-k!agn

david@daisy.UUCP (David Schachter) (02/28/85)

Mr. Nowatzyk of Carnegie Mellon states that 64K segmentation limits have
caused him problems in using 80286 software on our workstations.  If he is
running very large designs through our older software, this can happen.
This has been corrected in newer releases in those places where it has
caused problems.  (When we designed the software, we designed it with what
we thought were generous safety margins.  Our customers promptly used the
increased efficiency of computer aided engineering to do much larger designs
than before!  Parkinson's law strikes again.)

All of the newer software, particularly the physical layout tools and
the hardware accelerator work, has taken advantage of what we learned
in doing the older software.  (That's what I meant in my earlier posting
when I used the term "experience.")  We learned, in short, how to
design our code to run in the manner intended by the designers of the CPU.
If you want to get maximum performance on a CPU you didn't design, this is
always a requirement, be it an NS32000, an MC68000, an 80286, or a PDP-8.

In our experience writing CAE software, in the rare cases where 64K segmentation
is a problem, it usually means that we don't know what we are doing yet.  There
is almost always a better algorithm that we haven't discovered yet, one which
uses smaller data structures >faster<.

Large address spaces are convenient.  They are not essential.  Moreover, their
convenience can rob you of the incentive to get maximum performance.  The
Intel architecture is a dark cloud with a silver lining: the need to keep within
the small address space frequently causes us to find solutions that are smaller
and faster, helping us meet our performance goals.
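
A made-up example of the kind of shrinking I mean (a sketch only, with
invented names, not code from our products): in a linked node that would
otherwise carry far pointers, replacing the links with 16-bit indices into
a single table keeps everything in one segment, shrinks the node, and lets
the same 64K hold a good deal more of them while the inner loops touch
fewer bytes.

    #include <stdio.h>

    struct node_big {                 /* pointer links: 4 bytes each under  */
        struct node_big *next;        /* a 16-bit large-model compiler, and */
        struct node_big *fanout;      /* larger still on other machines     */
        int              weight;
    };

    struct node_small {               /* 16-bit indices into one table      */
        unsigned short next;          /* index of next node (0xFFFF = none) */
        unsigned short fanout;
        short          weight;
    };

    int main(void)
    {
        printf("big:   %u bytes/node\n", (unsigned)sizeof(struct node_big));
        printf("small: %u bytes/node\n", (unsigned)sizeof(struct node_small));
        printf("nodes per 64K segment: %u vs %u\n",
               (unsigned)(65536UL / sizeof(struct node_big)),
               (unsigned)(65536UL / sizeof(struct node_small)));
        return 0;
    }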

rsellens@watdcsu.UUCP (Rick Sellens - Mech. Eng.) (03/04/85)

In article <77@daisy.UUCP> david@daisy.UUCP (David Schachter) writes:
>
>In our experience writing CAE software, in the rare cases where 64K segmentation
>is a problem, it usually means that we don't know what we are doing yet.  There
>is almost always a better algorithm that we haven't discovered yet, one which
>uses smaller data structures >faster<.
>
>Large address spaces are convenient.  They are not essential.  Moreover, their
>convenience can rob you of the incentive to get maximum performance.  The
>Intel architecture is a dark cloud with a silver lining: the need to keep within
>the small address space frequently causes us to find solutions that are smaller
>and faster, helping us meet our performance goals.


I understand this to mean that it is desirable to have arbitrary restrictions
imposed on your software development by a hardware design. (By arbitrary I
mean that the restriction, in this case 64K addressable by 16 bits, has
nothing to do with the application, but is dictated by the hardware.)

I submit that:
    1. Small efficient algorithms can be implemented with equal ease in 
       any address space larger than the algorithm.
    2. Larger algorithms are often difficult to implement in small address
       spaces.
    3. Larger address spaces require larger addresses, which in turn may
       give larger overheads in the address arithmetic.

On this basis I feel that the only good thing about the 64K maximum segment
size is that it keeps address arithmetic within a segment down to the 16 bit
capabilities of the 8088/8086 processors. Offsetting this advantage is the
sometimes significant disadvantage that larger algorithms and data structures
are difficult to implement. With the coming of 32 bit systems for relatively
low prices, the advantage of a small maximum segment size will go away.
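
To make the address-arithmetic point (item 3 above) concrete, here is an
illustrative sketch in plain C.  The function names are invented, and this
is not any compiler's actual "huge" pointer code; it only shows the shape of
the arithmetic.  Indexing within one segment is pure 16-bit offset
arithmetic, while an array allowed to span segments needs a 32-bit
computation split back into a segment part and an offset part on every
access.

    #include <stdio.h>

    /* within one segment: a 16-bit offset is all the arithmetic needed */
    static unsigned near_offset(unsigned base_off, unsigned index,
                                unsigned elem_size)
    {
        return base_off + index * elem_size;
    }

    /* spanning segments ("huge" style): 32-bit math, then split into a
     * paragraph-aligned segment and a small offset                      */
    static void huge_address(unsigned base_seg, unsigned long index,
                             unsigned elem_size,
                             unsigned *seg, unsigned *off)
    {
        unsigned long linear = index * (unsigned long)elem_size;
        *seg = base_seg + (unsigned)(linear >> 4);
        *off = (unsigned)(linear & 0xFUL);
    }

    int main(void)
    {
        unsigned seg, off;
        printf("near: offset %u\n", near_offset(0x100u, 1000u, 8u));
        huge_address(0x2000u, 100000UL, 8u, &seg, &off);
        printf("huge: segment %X, offset %X\n", seg, off);
        return 0;
    }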

In any case, there are only two valid incentives for increasing the speed of
a piece of software.  The first is price/performance: faster software *may*
mean a significant reduction in hardware cost, and without that reduction
there is no incentive to achieve "maximum performance".  The second is the
need to accomplish a task in some fixed amount of real time: interactive
tasks need to move quickly enough to keep up with the user, and real-time
tasks like data acquisition need to keep up with the real world.  In either
case there is still some limit beyond which further improvement in speed
gives no improvement in the true performance of the task.

I hate to hear restrictive hardware designs defended as "good in themselves".
Hardware restrictions will always be with us, but they are never desirable.


Rick Sellens
UUCP:  watmath!watdcsu!rsellens
CSNET: rsellens%watdcsu@waterloo.csnet
ARPA:  rsellens%watdcsu%waterloo.csnet@csnet-relay.arpa

david@daisy.UUCP (David Schachter) (03/10/85)

In article <1068@watdcsu.UUCP>, Rick Sellens writes:
>In article <77@daisy.UUCP> david@daisy.UUCP (David Schachter) writes:
>>
>>In our experience writing CAE software, in the rare cases where 64K
>>segmentation is a problem, it usually means that we don't know what we
>>are doing yet.  There is almost always a better algorithm that we haven't
>>discovered yet, one which uses smaller data structures >faster<.
>>
>>Large address spaces are convenient.  They are not essential.  Moreover, their
>>convenience can rob you of the incentive to get maximum performance.  The
>>Intel architecture is a dark cloud with a silver lining: the need to keep 
>>within the small address space frequently causes us to find solutions that are
>>smaller and faster, helping us meet our performance goals.
>
>
>I understand this to mean that it is desirable to have arbitrary restrictions
>imposed on your software development by a hardware design. (By arbitrary I
>mean that the restriction, in this case 64K addressable by 16 bits, has
>nothing to do with the application, but is dictated by the hardware.)
... (omitted text) ...
>
>I hate to hear restrictive hardware designs defended as "good in themselves".
>Hardware restrictions will always be with us, but they are never desirable.

Bosh and twaddle, Mr. Sellens.  Normally, I would assume that my posting was
unclear.  In this case, I believe it was clear and you misinterpreted it.
Small address spaces are not good in and of themselves.  But they force you
to find smaller algorithms, which often run faster as well.  I don't know why
smaller algorithms >tend< to run faster; I'm not a philosopher.

In the applications I write, CAE programming, the small address space of the
miserable Intel architecture does not often cause pain.  When it does, it
is usually because the algorithm stinks.  The effort to find an algorithm
which uses less space often produces, as a nice side effect, a program that
runs faster.

Mr. Sellens claims that 'fast enough' is often sufficient, and he would be
correct if he were talking about a single-job CPU.  But in the real world,
systems frequently run multiple jobs.  Any spare cycles left over by a
program that runs 'too fast' are available for other programs.

The Intel architecture provides the ability to write very fast programs.  It
provides the ability to write very small programs.  If you want to provide the
best price-performance ratio for your customers, the Intel architecture can be
a good choice.  If your only goal is to get something out the door, other
architectures are better.

Mr. Sellens also states that with the coming availability of 32 bit micro-
processors, the speed advantage of a processor that uses 16 bits as the
native object size will disappear.  (The argument is that if you have a
16 bit bus, you don't want to deal with 32 bit quantities when 16 bits will
do.)  Mr. Sellens is right.  SOME DAY, 32 bit machines will be available
in production quantity.  But they are not available now.  Our customers
don't want to wait a year or two.  They want solutions now.

Architectural chauvinism helps no one.  I don't like the Intel architecture.
But it is not the swamp that others make it out to be.

[The opinions expressed above are my own and not necessarily those of Daisy
Systems Corporation, its employees, or subsidiaries.  If anyone else would
like these opinions, they are available for $40 each, $75 for two.]
{Eight foot four, mouth that roars, pitch storm troopers out the door, has
anybody seen my Wookie?}