[net.micro] Standard, What standard???

root@idmi-cc.UUCP (Admin) (01/30/85)

	This is sort of a flame, though it's also an honest attempt
to get some support, one way or the other, for an argument that has
been going around my office for over a year.
	My problem is that every time IBM announces a 'new' product for
one of its PCs, there is a flurry of activity in the media centering
on the question "Is IBM defining a new standard in ...(whatever)?".
Time was when setting a standard referred to either a) a standard of
excellence or b) a standard for 'state of the art' (i.e., a
breakthrough of some sort).  My understanding of the IBM line of PCs
is: a) none of them have ever used anything but already existing
technology, b) none of them have been any more reliable than all but
the worst of their competitors, c) some of them (the PCjr, for
instance) have been a good deal worse than other similarly priced
products, and d) rarely has any of the software with IBM's name on it
been 'state of the art' (let alone a breakthrough on the 'leading
edge').  The fact is, I can't recall ever having seen anything come
from IBM that hadn't already been available in equal or better form
from somewhere else for less money.
	Don't get me wrong, I like IBM.  I think they have had a big
hand in the popularization of computers.  I just don't think they have
set any standards in the computer world.  I am tired of hearing the
question, over and over again, "Is IBM setting a standard in ...?"
when what is really being asked is "Is IBM doing another thing in a
mediocre way that everyone else will be forced to accept, emulate, or
improve on because of the general public's low level of knowledge
concerning computers?"
	It is my feeling that to say IBM sets standards in the
computer industry is like saying Betty Crocker sets the standard for
fine French pastry.

-----
"This message will self destruct in 5 seconds"

The views expressed herein are probably not worth much to anyone and
therefore should not be mistaken to represent I.D.M.I., its officers,
or any other people heretofore or hereafter associated with said company.

			Andrew R. Scholnick
			Information Design and Management Inc., Alexandria, Va.
		{rlgvax,prcrs}!idmi-cc!andrew

Alastair Milne <milne@uci-icse> (02/01/85)

   You appear to be using the word "standard" in a somewhat ambiguous way.  I
   see these two meanings coming up:
     1 - the level of quality to which others aspire
     2 - a generally accepted convention (for a language, or a set of
         controls, or a format, or many other things).

   The first is something I think IBM *could* set, if they cared to.
   They've introduced a number of things that are now so general we
   take them for granted.  Floppy discs, for example, were originally
   devised by IBM for loading microcode (into 360/370 control store, I
   believe, though I don't remember clearly).  But their corporate
   policy seems to be always to keep their quality just a bit ahead of
   whatever competitor they're against (e.g., an 8088 with 64K [or
   less] against the Apple II's 6502 with 48K, when the PC first came
   out).  I guess they want to make sure they always have a
   comfortable margin in which to move ahead, which means that we
   seldom if ever benefit from their true ability.

   IBM certainly does not set the second in any official manner, though
   I'm sure they'd like to (some people feel that, in a practical
   manner, they do), just as much as any other company would.  The
   popularity of some of their products has caused great followings of
   them (e.g., PCs and PC lookalikes); but that is different from
   defining standards, which is the proper business of bodies like
   ANSI, ISO, and BSI.  In fact, IBM rather likes to buck those
   standards.  If they had had their way, we'd all be using EBCDIC now
   (not to mention PL/I).

   I'm afraid your argument is never likely to be resolved, though.  In my
   experience, lots of people have strong and opposing opinions about IBM.  A
   long debate over them may be interesting; it will certainly be fiery.

				A. Milne		UC Irvine

mf1@ukc.UUCP (M.Fischer) (02/02/85)

<>
I have to agree there is a conservative effect IBM seems to have
introduced into the marketplace, even while increasing the popularity
of the micro.  I went to a technologically isolated area in Asia for
two years.  When I returned I noted many changes; the traditional
culture shock.  One thing that hadn't changed in those two years was
micros.  I came back expecting great marvels, and the obsolescence of
my own knowledge, and instead found marketing expansion, but few new
ideas.  I have very mixed feelings about this, and must admit some
disappointment.

Michael Fischer vax135!ukc!mf1

moriarty@fluke.UUCP (Jeff Meyer) (02/04/85)

I suspect that the IBM standard the press is referring to is "the standard
line" (e.g., "State of the Art", "full documentation available", "The check's
in the mail")...

					"Nobody here but us folk heroes."

					Moriarty, aka Jeff Meyer
					John Fluke Mfg. Co., Inc.
UUCP:
 {cornell,decvax,ihnp4,sdcsvax,tektronix,utcsrgv}!uw-beaver \
    {allegra,gatech!sb1,hplabs!lbl-csam,decwrl!sun,ssc-vax} -- !fluke!moriarty
ARPA:
	fluke!moriarty@uw-beaver.ARPA

jss@sjuvax.UUCP (J. Shapiro) (02/06/85)

[Aren't you hungry...?

> My understanding of the IBM line of PCs is: a) none of them
> have ever used anything but already existing technology.
> 
> 			Andrew R. Scholnick
> 			Information Design and Management Inc., Alexandria, Va.
> 		{rlgvax,prcrs}!idmi-cc!andrew

Not only that, but they didn't even try to use it innovatively, or even
efficiently.  The same is true of the PC-AT.

Segmented Architecture... AAAAAAAARRRRRRRRRRRRRRGGGGGGGGGGHHHHHHHH!

Jonathan S. Shapiro
Haverford College
..!allegra!sjuvax!jss

david@daisy.UUCP (David Schachter) (02/21/85)

Mr. Shapiro writes that IBM doesn't use new technology innovatively or
efficiently.  He closes with "Segmented Architecture...
AAAAAAAARRRRRRRRRRRRRRGGGGGGGGGGHHHHHHHH!"  I beg to differ.

The circuitry coupling the PC-AT bus to the PC-XT bus (to remain
compatible) is neither simple nor brilliant.  But it does accomplish
the presumed design goal: getting the job done cheaply, i.e.,
efficiency with respect to cost.  The base concept, that of connecting
the old and new buses to remain compatible, is at least mildly
innovative.  Most companies would simply say, "Sorry, folks, but this
is a new generation.  Throw out your old hardware."  IBM didn't.
(They did the same thing with the IBM 360/370/30xy mainframes.  DEC
did the same with the VAX.  Intel did the same with the 80x86.)  Note
that I am referring to hardware compatibility: the hardware interface
to the outside world is retained even though the hardware guts are
radically different.  Compared with the rest of the micro world, IBM's
approach is innovative.

Finally, although I am not a fan of segmentation a la Intel, I am
compelled to point out that my company has done quite a lot within the
Intel architecture.  Our experience in writing complex
computer-aided-engineering programs is that if you need segments >
64KB, you probably don't know what you are doing: there exists a
better algorithm to do what you want to do.  This is not always true,
but it is true often enough that the Intel architecture doesn't cause
us much pain.  In summary, while segmentation looks bad, it really
doesn't hurt too much.
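
A table too big for one segment can simply be banked; here is a
minimal sketch of the idea in C (illustrative only, not Daisy code;
the bank size and names are invented):

---------------
#include <stdio.h>
#include <stdlib.h>

#define BANK_SHIFT 15                 /* 32K entries * 2 bytes = 64K bank */
#define BANK_SIZE  (1L << BANK_SHIFT)
#define BANK_MASK  (BANK_SIZE - 1)

static short *bank[8];                /* 8 banks: up to 256K entries */

static short *slot(long i)            /* map a large index to a banked cell */
{
    long b = i >> BANK_SHIFT;

    if (bank[b] == NULL &&
        (bank[b] = calloc(BANK_SIZE, sizeof(short))) == NULL) {
        fprintf(stderr, "out of memory\n");
        exit(1);
    }
    return &bank[b][i & BANK_MASK];
}

int main(void)
{
    *slot(100000L) = 42;              /* an index well past one 64K segment */
    printf("%d\n", (int)*slot(100000L));
    return 0;
}
---------------

Each bank stays within a single segment, and the double index costs
only a shift and a mask per access.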

(I have no connection with Intel or its competitors.  Nobody likes me.
The opinions expressed herein are mine, not those of my company or its
employees.)  {Perfection is temporary.}

agn@cmu-cs-k.ARPA (Andreas Nowatzyk) (02/26/85)

Defending the 80x86's segmented architecture, Mr. Schachter writes:

> Finally, although I am not a fan of segmentation a la Intel, I am
> compelled to point out that my company has done quite a lot within the
> Intel architecture.  Our experience in writing complex
> computer-aided-engineering programs is that if you need segments >
> 64KB, you probably don't know what you are doing: there exists a
> better algorithm to do what you want to do.  This is not always true,
> but it is true often enough that the Intel architecture doesn't cause
> us much pain.  In summary, while segmentation looks bad, it really
> doesn't hurt too much.

Using said software (on Daisy's 80286-based CAD workstations), I find
the opposite to be true: segmented addressing with a 16-bit limit
is a royal pain in the neck!  I ran into quite a few problems that were
directly related to the fact that some table in some data structure had
to fit into 64K bytes.  While the CAD software itself is reasonable, I
wished more than once that they had used a 68K or a 16/32 processor.

  Andreas.Nowatzyk              ARPA:   agn@cmu-cs-k.ARPA
                              USENET:   ...!seismo!cmu-cs-k!agn

david@daisy.UUCP (David Schachter) (02/28/85)

Mr. Nowatzyk of Carnegie Mellon states that 64K segmentation limits have
caused him problems in using 80286 software on our workstations.  If he is
running very large designs through our older software, this can happen.
This has been corrected in newer releases in those places where it has
caused problems.  (When we designed the software, we designed it with what
we thought were generous safety margins.  Our customers promptly used the
increased efficiency of computer aided engineering to do much larger designs
than before!  Parkinson's law strikes again.)

All of the newer software, particularly the physical layout tools and
the hardware accelerator work, has taken advantage of what we learned
in doing the older software.  (That's what I meant in my earlier posting
when I used the term "experience.")  We learned, in short, how to
design our code to run in the manner intended by the designers of the CPU.
If you want to get maximum performance on a CPU you didn't design, this is
always a requirement, be it an NS32000, an MC68000, an 80286, or a PDP-8.

In our experience writing CAE software, in the rare cases where 64K segmentation
is a problem, it usually means that we don't know what we are doing yet.  There
is almost always a better algorithm that we haven't discovered yet, one which
uses smaller data structures >faster<.

Large address spaces are convenient.  They are not essential.  Moreover, their
convenience can rob you of the incentive to get maximum performance.  The
Intel architecture is a dark cloud with a silver lining: the need to keep within
the small address space frequently causes us to find solutions that are smaller
and faster, helping us meet our performance goals.

rsellens@watdcsu.UUCP (Rick Sellens - Mech. Eng.) (03/04/85)

In article <77@daisy.UUCP> david@daisy.UUCP (David Schachter) writes:
>
>In our experience writing CAE software, in the rare cases where 64K segmentation
>is a problem, it usually means that we don't know what we are doing yet.  There
>is almost always a better algorithm that we haven't discovered yet, one which
>uses smaller data structures >faster<.
>
>Large address spaces are convenient.  They are not essential.  Moreover, their
>convenience can rob you of the incentive to get maximum performance.  The
>Intel architecture is a dark cloud with a silver lining: the need to keep within
>the small address space frequently causes us to find solutions that are smaller
>and faster, helping us meet our performance goals.


I understand this to mean that it is desirable to have arbitrary restrictions
imposed on your software development by a hardware design. (By arbitrary I
mean that the restriction, in this case 64K addressable by 16 bits, has
nothing to do with the application, but is dictated by the hardware.)

I submit that:
    1. Small efficient algorithms can be implemented with equal ease in 
       any address space larger than the algorithm.
    2. Larger algorithms are often difficult to implement in small address
       spaces.
    3. Larger address spaces require larger addresses, which in turn may
       give larger overheads in the address arithmetic.

On this basis I feel that the only good thing about the 64K maximum
segment size is that it keeps address arithmetic within a segment down
to the 16-bit capabilities of the 8088/8086 processors.  Offsetting
this advantage is the sometimes significant disadvantage that larger
algorithms and data structures are difficult to implement.  With the
coming of 32-bit systems at relatively low prices, the advantage of a
small maximum segment size will go away.
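
As an illustration of point 3: on the 8086, a 20-bit physical address
is formed as segment*16 + offset, so any pointer arithmetic that may
step past a 64K boundary has to renormalize the segment:offset pair.
A minimal sketch in C (the helper names are invented):

---------------
#include <stdio.h>

/* Real-mode 8086 address formation: physical = segment*16 + offset.
   The shift-and-add below is the extra address arithmetic that
   crossing segment boundaries buys into. */
unsigned long linear(unsigned seg, unsigned off)
{
    return ((unsigned long)seg << 4) + off;    /* 20-bit physical address */
}

void normalize(unsigned long lin, unsigned *seg, unsigned *off)
{
    *seg = (unsigned)(lin >> 4);   /* push all but 4 bits into the segment */
    *off = (unsigned)(lin & 0xF);  /* so the 16-bit offset cannot overflow */
}

int main(void)
{
    unsigned seg, off;

    /* step one byte past the end of segment 0x1000's 64K range */
    normalize(linear(0x1000, 0xFFFF) + 1, &seg, &off);
    printf("%04X:%04X\n", seg, off);           /* prints 2000:0000 */
    return 0;
}
---------------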

In any case, there are only two valid incentives for increasing the
speed of a piece of software.  The first is price/performance: faster
software *may* mean a significant reduction in hardware cost, and
without that reduction there is no incentive to achieve "maximum
performance".  The second is the need to accomplish a task in some
fixed amount of real time: interactive tasks need to move quickly
enough to keep up with the user, and real-time tasks like data
acquisition need to keep up with the real world.  In either case there
is still some limit beyond which further improvement in speed gives no
improvement in the true performance of the task.

I hate to hear restrictive hardware designs defended as "good in themselves".
Hardware restrictions will always be with us, but they are never desirable.


Rick Sellens
UUCP:  watmath!watdcsu!rsellens
CSNET: rsellens%watdcsu@waterloo.csnet
ARPA:  rsellens%watdcsu%waterloo.csnet@csnet-relay.arpa

david@daisy.UUCP (David Schachter) (03/10/85)

In article <1068@watdcsu.UUCP>, Rick Sellens writes:
>In article <77@daisy.UUCP> david@daisy.UUCP (David Schachter) writes:
>>
>>In our experience writing CAE software, in the rare cases where 64K
>>segmentation is a problem, it usually means that we don't know what we
>>are doing yet.  There is almost always a better algorithm that we
>>haven't discovered yet, one which uses smaller data structures >faster<.
>>
>>Large address spaces are convenient.  They are not essential.  Moreover, their
>>convenience can rob you of the incentive to get maximum performance.  The
>>Intel architecture is a dark cloud with a silver lining: the need to keep 
>>within the small address space frequently causes us to find solutions that are
>>smaller and faster, helping us meet our performance goals.
>
>
>I understand this to mean that it is desirable to have arbitrary restrictions
>imposed on your software development by a hardware design. (By arbitrary I
>mean that the restriction, in this case 64K addressable by 16 bits, has
>nothing to do with the application, but is dictated by the hardware.)
... (omitted text) ...
>
>I hate to hear restrictive hardware designs defended as "good in themselves".
>Hardware restrictions will always be with us, but they are never desirable.

Bosh and twaddle, Mr. Sellens.  Normally, I would assume that my posting
was unclear.  In this case, I believe it was clear and you misinterpreted
it.  Small address spaces are not good in and of themselves.  But they
force you to find smaller algorithms, which often run faster as well.  I
don't know why smaller algorithms >tend< to run faster; I'm not a
philosopher.

In the applications I write (CAE programming), the small address space
of the miserable Intel architecture does not often cause pain.  When it
does, it is usually because the algorithm stinks.  The effort to find an
algorithm which uses less space often produces, as a nice side effect,
a program that runs faster.

Mr. Sellens claims that 'fast enough' is often sufficient, and he would
be correct if he were talking about a single-job CPU.  But in the real
world, systems frequently run multiple jobs.  Any spare cycles left
over by a program that runs 'too fast' are available for other programs.

The Intel architecture provides the ability to write very fast programs.  It
provides the ability to write very small programs.  If you want to provide the
best price-performance ratio for your customers, the Intel architecture can be
a good choice.  If your only goal is to get something out the door, other
architectures are better.

Mr. Sellens also states that with the coming availability of 32-bit
microprocessors, the speed advantage of a processor that uses 16 bits
as the native object size will disappear.  (The argument is that if you
have a 16-bit bus, you don't want to deal with 32-bit quantities when
16 bits will do.)  Mr. Sellens is right.  SOME DAY, 32-bit machines
will be available in production quantity.  But they are not available
now.  Our customers don't want to wait a year or two.  They want
solutions now.

Architectural chauvinism helps no one.  I don't like the Intel
architecture.  But it is not the swamp that others make it out to be.

[The opinions expressed above are my own and not necessarily those of Daisy
Systems Corporation, its employees, or subsidiaries.  If anyone else would
like these opinions, they are available for $40 each, $75 for two.]
{Eight foot four, mouth that roars, pitch storm troopers out the door, has
anybody seen my Wookie?}

BillW@SU-SCORE.ARPA (William Chops Westfield) (03/31/85)

Al Filipski <al%mot.uucp@BRL-TGR.ARPA> says...

    Funny, I've always thought that size and speed were INVERSELY related.
    Take sort algorithms, f'rinstance.  One of the smallest sorts you can
    write is one of the worst -- the bubble sort....

From an abstract point of view, he is correct.  However, we are not
really talking about small algorithms vs. large algorithms here.
Rather, we are talking about the implementation of a given algorithm.
In this latter case, it is true that a small implementation of an
algorithm is faster than a large implementation.  You could call an
implementation a sort of sub-algorithm.  Consider the following two
pieces of code (part of a TCP checksum routine for an 8088 processor;
assume AX starts at zero and that we want a 1's complement 16-bit
checksum of CX words pointed to by BX):

---------------
Example 1:

lpchk:	add	ax,(bx)		| add value and carry	2 bytes, 18 cycles
	adc	ax,*0		|			3 bytes,  8 c
	inc	bx		| bump pointer		1 byte,   2 c
	inc	bx		|			1 byte,   2 c
	loop	lpchk		| do it again		2 bytes, 17 c

Example 2:

	mov	si,bx		| pointer goes to SI for lodw
	xor	bx,bx		| zero the sum and clear the carry flag
lpchk:	lodw			| get next word		1 byte, 16 cycles
	adc	bx,ax		| overlap adding last CY	2 bytes, 3 c
	loop	lpchk		| next word		2 bytes, 17 c
	mov	ax,bx		| put sum where the result has to go
	adc	ax,*0		| add final carry bit.
---------------

Although both examples use essentially the same algorithm (there aren't a
whole lot of ways to calculate such a checksum!), the second example
is smaller and faster - it makes better use of the 8088 architecture.
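
For reference, here is the same end-around-carry sum in C (a sketch
only; the assembly above is the real point here):

---------------
#include <stdio.h>

/* One's-complement sum of n 16-bit words: accumulate in 32 bits, then
   fold the carries back into the low 16 bits.  TCP itself stores the
   complement of this value. */
unsigned short cksum(const unsigned short *p, unsigned n)
{
    unsigned long sum = 0;

    while (n-- > 0)
        sum += *p++;                  /* 32-bit accumulator holds carries */
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);
    return (unsigned short)sum;
}

int main(void)
{
    static const unsigned short words[] = { 0x4500, 0x0073, 0xFFFF, 0x0001 };

    printf("%04X\n", (unsigned)cksum(words, 4));   /* prints 4574 */
    return 0;
}
---------------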

Which brings up another point:  When trying to optimize code, I like
to divide optimization into three different levels: Algorithmic
optimization, architectural optimization, and system level optimization.

Algorithmic optimization is what is most frequently talked about (for
example, going from bubble sort to quicksort).  Unfortunately, timings
in this area are rather abstract.  Sorts tend to be rated by the number
of data comparisons that have to be done, which is usually a pretty
good measure, but what if a compare is very fast compared to, say, an
instruction fetch?
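
To make "rated by the number of comparisons" concrete, here is a small
counting harness in C (the harness is invented for illustration; only
the comparison counts matter):

---------------
#include <stdio.h>
#include <stdlib.h>

static long ncmp;                       /* comparisons counted so far */

static void bubble(int *a, int n)       /* classic bubble sort */
{
    int i, j, t;

    for (i = 0; i < n - 1; i++)
        for (j = 0; j < n - 1 - i; j++) {
            ncmp++;
            if (a[j] > a[j + 1]) { t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
        }
}

static int cmp(const void *x, const void *y)   /* counting comparator */
{
    ncmp++;
    return *(const int *)x - *(const int *)y;
}

int main(void)
{
    enum { N = 1000 };
    static int a[N], b[N];
    int i;

    for (i = 0; i < N; i++)
        a[i] = b[i] = rand();           /* same random input for both */

    ncmp = 0; bubble(a, N);
    printf("bubble: %ld compares\n", ncmp);
    ncmp = 0; qsort(b, N, sizeof(int), cmp);
    printf("qsort:  %ld compares\n", ncmp);
    return 0;
}
---------------

On 1000 random keys the bubble sort does roughly half a million
compares and qsort() on the order of ten thousand; that is the abstract
rating, and it says nothing yet about the cost of each compare.  Which
brings us to: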

Architectural optimization is where you take a look at your processor
and try to make the hot parts of your code use the fast instructions.
This is usually where we assembly language hackers gain speed over HLL
programmers.  In the examples given above, if the code were running on
an 8086 processor I might try to ensure that the words being
checksummed inside the loop were fetched from a word boundary, since
that would require half the bus cycles, for example...
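
In C terms, the alignment part might look like this tiny sketch (on an
8086 a word fetched from an odd address takes two bus cycles; from an
even address, one):

---------------
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    static char buffer[100];
    char *p = buffer;
    unsigned n = sizeof buffer;

    /* Start the 16-bit word loop on an even address.  A real checksum
       routine would fold the leading odd byte into the sum separately
       rather than skip it. */
    if ((uintptr_t)p & 1) {     /* odd start address? */
        p++;                    /* step to an even boundary... */
        n--;                    /* ...and shorten the word run */
    }
    printf("word loop: %u bytes starting at %p\n", n, (void *)p);
    return 0;
}
---------------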

Systems level optimization is when you take into account nasty things
like the operating system.  For example, using the key as an address
may sound like a very fast way to sort things, but under most operating
systems you are likely to spend most of the time you might have saved
down in the pager, moving pieces of your address space on and off disk,
meanwhile letting other users run as your console time climbs above
what a simple bubble sort might have taken.  (Run-time vs. console-time
optimization is a very interesting instance of SLO problems...)


Sorry to have been so long-winded...
Bill Westfield