[net.arch] Bad devices

mjs@sfsup.UUCP (M.J.Shannon) (02/23/86)

As a kernel hacker, I would maintain that a device that requires a
certain latency and neither rejects further commands nor signals an
interrupt until it's ready is a botch.  Why patch software when the
hardware CAN do it right?  Software is not the answer to hardware
designer ineptitude.  Even if it has to be done at the board level,
the proper choice is to add the hardware to disable access to the
device until its latency period is over.
-- 
	Marty Shannon
UUCP:	ihnp4!attunix!mjs
Phone:	+1 (201) 522 6063

Disclaimer: I speak for no one.

"If I never loved, I never would have cried." -- Simon & Garfunkel

bass@dmsd.UUCP (John Bass) (02/26/86)

> As a kernel hacker, I would maintain that a device that requires a
> certain latency and neither rejects further commands nor signals an
> interrupt until it's ready is a botch.  Why patch software when the
> hardware CAN do it right?  Software is not the answer to hardware
> designer ineptitude.  Even if it has to be done at the board level,
> the proper choice is to add the hardware to disable access to the
> device until its latency period is over.
> -- 
> 	Marty Shannon
> UUCP:	ihnp4!attunix!mjs

DO IT RIGHT ???? ^&%^%@%(*) Right depends on the goals and requirements.
From where many of us sit RIGHT is LOW COST, LOW POWER, SMALL SIZE, and
a dozen other reasons for using a KNOWN hardware/software tradeoff to
reduce component counts. From Marty's IVORY TOWER in AT&T land he has a very
obscure view of RIGHT -- a company that produces $300 power supplies that
deliver 40 watts (they do last 30 years with minor service though) and other
common place electronics (like telephones) that are now 1/3 the cost once the
repairability and service life requirements have been reduced by other vendors.
Not that I especially like some of the CHEAP phones -- but a good trend overall.

Timing loops are FAIR GAME for any low-cost design -- and can be VERY general
with the aid of a subroutine that takes as its argument the min number of
time units to spin out.
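A minimal sketch of such a subroutine in C -- note that LOOPS_PER_UNIT is an
invented placeholder constant, not a real calibration; it would have to be
tuned to each target processor's clock:

```c
/* Hypothetical spin-delay subroutine.  LOOPS_PER_UNIT is a made-up
 * calibration constant; a real driver would tune it to the target
 * processor, or measure it at boot time. */
#define LOOPS_PER_UNIT 100UL

void spin_delay(unsigned units)
{
    volatile unsigned long n = units * LOOPS_PER_UNIT;

    while (n-- > 0)
        ;   /* volatile keeps the compiler from deleting the loop */
}
```

The obvious weakness, as the rest of this thread hashes out, is that the
constant is wrong the moment the code runs on a faster processor.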

John Bass

dyer@atari.UUCP (Landon Dyer) (02/27/86)

In article <144@sfsup.UUCP>, mjs@sfsup.UUCP (M.J.Shannon) writes:
> As a kernel hacker, I would maintain that a device that requires a
> certain latency and neither rejects further commands nor signals an
> interrupt until it's ready is a botch.  Why patch software when the
> hardware CAN do it right?  Software is not the answer to hardware
> designer ineptitude.  Even if it has to be done at the board level,
> the proper choice is to add the hardware to disable access to the
> device until its latency period is over.

That is, of course, unless the cost of hardware is a concern.  Software
is usually a one-time cost in a device driver for a personal computer,
whereas the hardware continues to cost money, machine after machine.
Given a part with bugs that is half the cost of a similar part,
without bugs, I would take the first part any day, for a "mass" market
computer.

Does anyone remember the Atari VCS (2600)?  It was a 6507 with 128 bytes
of RAM, a *sleazy* video chip, and a PIA.  Something like 18 million
of them were sold.  By all accounts it was one of the *worst* machines
to program ever devised by man.  Lines of video were generated by
counting cycles on the scanline and twiddling bits in the hardware
at just the /right/ clock on the screen.

Obviously a VCS is not a $10,000 Unix(tm) engine, but "pretty" hardware
may still cost money.  It is up to the marketplace to determine whether
or not it is worth it.  It wasn't worth it in the VCS, and it may not
be worth it in your Unix(tm) box.

And ... c'mon!  Surely you can write a piece of assembly language
that is g'teed to take 3us of processor time.  There are already
worse processor dependencies in the kernel and device drivers.


-Landon

mc68020@gilbbs.UUCP (Tom Keller) (02/27/86)

In article <221@dmsd.UUCP>, bass@dmsd.UUCP (John Bass) writes:
> > As a kernel hacker, I would maintain that a device that requires a
> > certain latency and neither rejects further commands nor signals an
> > interrupt until it's ready is a botch.  Why patch software when the
> > hardware CAN do it right?  Software is not the answer to hardware
> > designer ineptitude.  Even if it has to be done at the board level,
> > the proper choice is to add the hardware to disable access to the
> > device until its latency period is over.
> DO IT RIGHT ???? ^&%^%@%(*) Right depends on the goals and requirements.
> From where many of us sit RIGHT is LOW COST, LOW POWER, SMALL SIZE, and
> a dozen other reasons for using a KNOWN hardware/software tradeoff to
> reduce component counts. From Marty's IVORY TOWER in AT&T land he has a very
> obscure view of RIGHT -- a company that produces $300 power supplies that
> deliver 40 watts (they do last 30 years with minor service though) and other
> common place electronics (like telephones) that are now 1/3 the cost once the
> repairability and service life requirements have been reduced by other vendors.
> Not that I especially like some of the CHEAP phones -- but a good trend overall.
> Timing loops are FAIR GAME for any low-cost design -- and can be VERY general
> with the aid of a subroutine that takes as its argument the min number of
> time units to spin out.


   Methinks that you are missing the point here, John.  What is being said is
that the design and implementation of the VLSI component itself is a botch.
If you are suggesting that designers should settle for bad designs (and let's
face it, any component at the chip level that can't be bothered to *TELL* me
that it isn't ready for further interaction is a ***BAD*** design!) simply 
because it is *POSSIBLE* to gloss over the problems in software, then I would
suggest to you that your concepts of engineering and quality are warped.

   *GIVEN* that this component was the only choice (for some inexplicable
reason), it *STILL* does not follow that the cheapest solution is necessarily
the best.  Knowing very little about the other requirements of the system being
designed by the original poster, I don't believe that you have any grounds for
your argument in this case.

   tom keller
   {ihnp4, dual}!ptsfa!gilbbs!mc68020

   (* we may not be big, but we're small! *)

olson@harvard.UUCP (Eric Olson) (03/02/86)

>common place electronics (like telephones) that are now 1/3 the cost once the
>repairability and service life requirements have been reduced by other vendors.
>Not that I especially like some of the CHEAP phones -- but a good trend overall.
>
>Timing loops are FAIR GAME for any lowcost design -- and can be VERY general
>with the aid of a subroutine that takes as its argument the min number of
>time units to spin out.
>
>John Bass

I hate using other than AT&T phones.  I can never believe how poor they are.
The manufacturers seem to totally disregard functionality.

I would have agreed before someone suggested that the timing constant be
determined when the program is run via looping while watching a clock (or
waiting for an interrupt, or anything else not processor speed dependent).
I really like that solution.  It is very clean.  And very little extra work.
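A rough C sketch of that run-time calibration, using the standard library
clock() as the speed-independent reference (a real driver would more likely
watch a hardware timer or wait for an interrupt, as suggested above):

```c
#include <time.h>

static unsigned long loops_per_tick = 1;   /* measured, not hard-coded */

/* Count how many empty loop iterations fit in one clock() tick, so
 * later delay loops scale with the processor's actual speed instead
 * of relying on a compiled-in constant. */
void calibrate(void)
{
    volatile unsigned long count = 0;
    clock_t start = clock();

    while (clock() == start)        /* align to a tick boundary */
        ;
    start = clock();
    while (clock() == start)        /* spin through one whole tick */
        count++;
    if (count > loops_per_tick)
        loops_per_tick = count;
}
```

Run once at startup, this makes a spin-delay routine portable across
processor speeds with essentially no extra hardware or run-time cost.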

-Eric

mark@mips.UUCP (Mark G. Johnson) (03/04/86)

In article <10040@amdcad.UUCP>   phil@amdcad.UUCP (Phil Ngai)  writes:

 > .....
 > As a matter of fact, most chips don't tell you when they are
 > ready.  When was the last time a RAM chip told you it was ready
 > for CAS after you sent RAS, or even when read data is valid?
                                  ^^^^ ^^^^ ^^^^ ^^^^ ^^ ^^^^^^

Actually, RAM chips with this feature were built ten years ago....
by AMD-Sunnyvale (Mr. Ngai's employer).  Self-timed access cycles
were accomplished by providing an "MS" output pin on the RAM,
which signaled cycle-is-done.  This allowed simple handshake
protocols, as outlined in the reference below:

ref: Jeffrey M. Schlageter, Nagab Jayakunar, Joseph H. Kroeger,
     and Vahe Sarkissian, Advanced Micro Devices, Sunnyvale, CA,
     "A 4K Static 5-V RAM", Paper THPM-12.5, International Solid-
     State Circuits Conference, Digest of Technical Papers,
     February 19-21, 1976, pp. 132-7.

-- 
-Mark Johnson
UUCP: 	{decvax,ucbvax,ihnp4}!decwrl!mips!mark
DDD:  	408-720-1700
USPS: 	MIPS Computer Systems, 930 E. Arques, Sunnyvale, CA 94086

phil@amdcad.UUCP (Phil Ngai) (03/06/86)

In article <374@mips.UUCP> mark@mips.UUCP (Mark G. Johnson) writes:
>Actually, RAM chips with this feature were built ten years ago....
>by AMD-Sunnyvale (Mr. Ngai's employer).  Self-timed access cycles
>were accomplished by providing an "MS" output pin on the RAM,
>which signaled cycle-is-done.  This allowed simple handshake
>protocols, as outlined in the reference below:

Yes, I'm aware of that device. I hate to admit it, but the idea
doesn't seem to have caught on, has it? Pins are a rather precious
resource.
-- 
 "We must welcome the future, remembering that soon it will become the
  present, and respect the past, knowing that once it was all that was
  humanly possible."

 Phil Ngai +1 408 749 5720
 UUCP: {ucbvax,decwrl,ihnp4,allegra}!amdcad!phil
 ARPA: amdcad!phil@decwrl.dec.com

mc68020@gilbbs.UUCP (Tom Keller) (03/07/86)

In article <374@mips.UUCP>, mark@mips.UUCP (Mark G. Johnson) writes:
> In article <10040@amdcad.UUCP>   phil@amdcad.UUCP (Phil Ngai)  writes:
> 
>  > .....
>  > As a matter of fact, most chips don't tell you when they are
>  > ready.  When was the last time a RAM chip told you it was ready
>  > for CAS after you sent RAS, or even when read data is valid?
>                                   ^^^^ ^^^^ ^^^^ ^^^^ ^^ ^^^^^^
> 
> Actually, RAM chips with this feature were built ten years ago....
> by AMD-Sunnyvale (Mr. Ngai's employer).  Self-timed access cycles
> were accomplished by providing an "MS" output pin on the RAM,
> which signaled cycle-is-done.  This allowed simple handshake
> protocols, as outlined in the reference below:
> 
> ref: Jeffrey M. Schlageter, Nagab Jayakunar, Joseph H. Kroeger,
>      and Vahe Sarkissian, Advanced Micro Devices, Sunnyvale, CA,
>      "A 4K Static 5-V RAM", Paper THPM-12.5, International Solid-
>      State Circuits Conference, Digest of Technical Papers,
>      February 19-21, 1976, pp. 132-7.

   Actually, in reading Mr. Ngai's articles both here and on ba.politics,
as well as elsewhere on the net, my impression is that Mr. Ngai expends 
a great deal of energy defending the status quo. 

   Now I realize that this is not a technical comment, and I further realize
that this will be viewed as a personal "attack".  It is, however, a legitimate
observation on the nature of Mr. Ngai's articles, and as such is a cogent 
contribution to the overall discussion.


   Along these same lines, I apparently managed to offend several people in my
comments to Ken Shoemaker@intel.  I apologize for any discomfort the nature of
my comments may have caused, and hopefully a difference in approach will be
noted in this entry.


   I do feel, however, that my comments were cogent.  Mr. Shoemaker was, in 
essence, using his position as a microprocessor designer at INTEL as a point
of authority to support his thesis.  While I'll grant that it is certainly a
matter of opinion, it is my opinion that working for Intel doesn't qualify
anyone as an authority on anything (judgement based on shipped products and
ethical standards in advertising and specification listings, or lack thereof).
Therefore, it was cogent to point out that his position of authority was
questionable.

   I also suggested that perhaps Mr. Shoemaker's concepts of engineering and
quality were warped.  This suggestion *WAS* preceded by a conditional, 
which I still adhere to:  "*IF* hardware designers are suggesting that system
designers should accept badly designed chips *SIMPLY* because it is possible
to work around the flaws in software, *THEN* I would suggest that their concepts
of engineering and quality are warped.".  Mr. Shoemaker (and several others)
chose to take this as a personal affront.  Perhaps due to the manner in which
I expressed myself.  I therefore also apologize for any personal distress my
ineptitude caused.  I stand by the essence of my statements, however.


   I do not believe that it is always possible to discuss even technical 
issues without occasionally making personal observations and/or comments.  If,
however, it is clearly the wish of the majority of readers of net.arch that
this be the rule, I will abide by it.  

   Thank you.

-- 

====================================

Disclaimer:  I hereby disclaim any and all responsibility for disclaimers.

tom keller
{ihnp4, dual}!ptsfa!gilbbs!mc68020

(* we may not be big, but we're small! *)

dougp@ism780 (03/07/86)

>That is, of course, unless the cost of hardware is a concern.  Software
>is usually a one-time cost in a device driver for a personal computer,
>whereas the hardware continues to cost money, machine after machine.

Gee,
I wish *any* piece of software was just a "one-time cost".  That'd sure
make maintenance, upgrades, new ports, etc. *much* easier!  :-)
Doug Pintar at InterActive Systems

davidsen@steinmetz.UUCP (Davidsen) (03/09/86)

In article <21@gilbbs.UUCP> mc68020@gilbbs.UUCP (Tom Keller) writes:
>In article <221@dmsd.UUCP>, bass@dmsd.UUCP (John Bass) writes:
................ long previous quote deleted here ................
>
>   Methinks that you are missing the point here, John.  What is being said is
>that the design and implementation of the VLSI component itself is a botch.
>If you are suggesting that designers should settle for bad designs (and let's
>face it, any component at the chip level that can't be bothered to *TELL* me
>that it isn't ready for further interaction is a ***BAD*** design!) simply 
>because it is *POSSIBLE* to gloss over the problems in software, then I would
>suggest to you that your concepts of engineering and quality are warped.

I hate to get into this, but there are classes of devices which change
state due to processor action (like USARTs, for instance). Given any such
device on the market, or even in the lab, it is possible to access the
device so quickly that the status won't track what's happening. This
sometimes happens when a processor sends a character to a USART or
parallel interface and then tests the busy bit before it has become
active. There is a finite time which any device takes to REALIZE it's not
ready for another command.

By this inference, any such device is always a botch, since at least one
gate delay is present between the write and the status update. Does this
mean using gallium arsenide for USARTs to avoid being a botch? Ridiculous!
What has happened is that the circuit designer is using poorly selected
parts (or the user has jumped the processor speed).

Timing loops can (usually) be avoided by checking the status to be sure it
becomes "not ready", before continuing, but then the code will fail if the
processor is slowed to the point that the device goes not ready and ready
before it's checked. The solution is to blame the person who put the chips
together, not to say that some chips are a "botch".
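The double-check described above can be sketched in C against a simulated
device -- the register model and poll counts below are entirely invented for
illustration; on real hardware the status byte would be a memory-mapped
device register:

```c
#define BUSY 0x01

static unsigned char status;   /* simulated status register, bit 0 = BUSY */
static int polls;              /* how many times we sampled it */

/* Simulated device latency: BUSY asserts on the 2nd poll after the
 * write and clears again on the 5th.  A real USART would do this in
 * hardware, some gate delays after the data write. */
static unsigned char read_status(void)
{
    polls++;
    if (polls >= 2 && polls < 5)
        status |= BUSY;
    else
        status &= (unsigned char)~BUSY;
    return status;
}

/* Safe sequence: after writing, wait for BUSY to *assert* before
 * waiting for it to clear.  This avoids a fixed timing loop, but it
 * can still fail if a slow processor lets the device go busy and
 * ready again between two polls. */
void send_and_wait(void)
{
    polls = 0;                             /* the "write" happens here */
    while (!(read_status() & BUSY))
        ;                                  /* wait for busy to come up */
    while (read_status() & BUSY)
        ;                                  /* wait for busy to go away */
}
```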
-- 
	-bill davidsen

	seismo!rochester!steinmetz!--\
       /                               \
ihnp4!              unirot ------------->---> crdos1!davidsen
       \                               /
        chinet! ---------------------/        (davidsen@ge-crd.ARPA)

"It seemed like a good idea at the time..."

clif@intelca.UUCP (Clif Purkiser) (03/10/86)

> In article <374@mips.UUCP>, mark@mips.UUCP (Mark G. Johnson) writes:
> 
>    I do not believe that it is always possible to discuss even technical 
> issues without occasionally making personal observations and/or comments.  If,
> however, it is clearly the wish of the majority of readers of net.arch that
> this be the rule, I will abide by it.  
> 
>    Thank you.
> 
> 
> tom keller
> {ihnp4, dual}!ptsfa!gilbbs!mc68020
> 
> (* we may not be big, but we're small! *)

	I was offended by your comments about Ken and to a lesser extent about
Phil, although this posting was much more reasonable.

	I think it is not only possible but very desirable to discuss computer
architectures without making personal observations (i.e flames) about 
individuals.  It would be nice if we could have this discussion without flames
about individual companies.  However, I guess that would be asking for the 
impossible.

	
	Now for something completely different

	A call for a discussion on new architectures.   I fairly recently 
read the sales literature about the Cray 2 and the Fairchild Clipper.  
They both seem interesting.  Would anyone like to post an evaluation of their
architectures?   I'm really tired of timing loops.


-- 
Clif Purkiser, Intel, Santa Clara, Ca.
HIGH PERFORMANCE MICROPROCESSORS
{pur-ee,hplabs,amd,scgvaxd,dual,idi,omsvax}!intelca!clif
	
{standard disclaimer about how these views are mine and may not reflect
the views of Intel, my boss, or USENET goes here. }

kds@intelca.UUCP (Ken Shoemaker) (03/11/86)

>    I do feel, however, that my comments were cogent.  Mr. Shoemaker was, in 
> essence, using his position as a microprocessor designer at INTEL as a point
> of authority to support his thesis.  While I'll grant that it is certainly a
> matter of opinion, it is my opinion that working for Intel doesn't qualify
> anyone as an authority on anything (judgement based on shipped products and
> ethical standards in advertising and specification listings, or lack thereof).
> Therefore, it was cogent to point out that his position of authority was
> questionable.
> 
>    I also suggested that perhaps Mr. Shoemaker's concepts of engineering and
> quality were warped.  This suggestion *WAS* preceded by a conditional, 
> which I still adhere to:  "*IF* hardware designers are suggesting that system
> designers should accept badly designed chips *SIMPLY* because it is possible
> to work around the flaws in software, *THEN* I would suggest that their concepts
> of engineering and quality are warped.".  Mr. Shoemaker (and several others)
> chose to take this as a personal affront.  Perhaps due to the manner in which
> I expressed myself.  I therefore also apologize for any personal distress my
> ineptitude caused.  I stand by the essence of my statements, however.
> 
>    I do not believe that it is always possible to discuss even technical 
> issues without occasionally making personal observations and/or comments.  If,
> however, it is clearly the wish of the majority of readers of net.arch that
> this be the rule, I will abide by it.  
> 
>    Thank you.

By saying that I am a microprocessor designer at Intel in the tail of my
articles, I am by no means attempting to give greater weight to the statements
that I put in those articles.  In fact, I specifically say that the article
contains *my own opinions* and should not be taken to be those of Intel,
or any other of its employees.  It seems that, at least for many people
on the net, merely admitting that I work for Intel is all I need do to have
my opinions denigrated!  By stating that I work for Intel, I am merely giving
people a base from which to interpret my opinions.  If I happen to be
overly familiar with Intel products, and not quite so with other manufacturers'
products, well, it just goes with the territory.  Also, they pay the bills
for this machine.  I would think that trying to hide or misrepresent that I
work for Intel would not be such a good idea, since if this were generally
applied, it could undermine the validity of many discussions on the net.

By my line of work, one could assume that I have
spent a bit of time working on microprocessors and microprocessor based
products and am therefore familiar with the tradeoffs that go into not only
microprocessor based systems, but into microprocessors themselves.  I do
not, by any stretch of the imagination, claim to have a monopoly on these
tradeoffs or skills.

As for the SCC, I happen to like the device a lot.  I have used it in systems
and have no problem with it.  If it needs a bit of a recovery time, well,
it isn't the first peripheral chip that I have worked with that has required
the same kind of thing.  I like the fact that it integrates 2 synchronous/
asynchronous channels into a 40-pin DIP, with their own baud rate generators.
I like that it has a 4-byte receive character FIFO.  Etc.  I personally would
rather have a second serial channel than to have 0ns command recovery time.
Or than having a ready output pin.  I don't even mind its having a data bus 
float time longer than most any microprocessor in the world can tolerate.  
These are things I can take care of outside the chip.  I am very surprised
that Mr. Keller seems so adamantly opposed to external logic, and yet
is still a big 68020 fan.  At least with the 386, you get the MMU and
page support hardware on the same chip as the CPU!  Just think how many
devices you save there!  The interface parameters of a chip
are well documented.  If you have a problem with them, you don't have
to use the chip.  It sounds almost as if Mr. Keller has overlooked this
parameter in his own code, and is looking for someone else to blame for
his shortcoming.

People should also be aware that semiconductor companies don't exist in
a vacuum.  They are always looking for people and organizations that are
working with their devices to suggest improvements.  But they can't
do miracles, and they can't be everything to everyone.  You should
realize that the decisions on what to do in a chip usually aren't made
arbitrarily, or, at least, that is my opinion from having watched the
process.  As one of the radio newsmen in the Bay Area is fond of saying,
"if you don't like the news, go out and make some of your own."  You
can also change an organization from within.  I mean, with the bozos that
we must routinely hire, anyone could get a job with Intel, right, Mr.
Keller?  Or start your own company.  If you are so full of good ideas,
then bring one of them to fruition, sell them, and become the greatest
commercial success since time immemorial.  Oh well, I think I've said
enough for now.  Perhaps we should get on with what I would think would
be more appropriate discussions on net.arch.  I won't suggest that
people mail me flames, because everyone has to get their last words
in.
-- 
If you don't like the answer, then ask another question!  Everything is the
answer to something...

Ken Shoemaker, Microprocessor Design, Intel Corp., Santa Clara, Ca.
{pur-ee,hplabs,amdcad,scgvaxd,oliveb,qantel}!intelca!kds
	
---the above views are personal.

phil@amdcad.UUCP (Phil Ngai) (03/12/86)

In article <38@gilbbs.UUCP> mc68020@gilbbs.UUCP (Tom Keller) writes:
>   Actually, in reading Mr. Ngai's articles both here and on ba.politics,
>as well as elsewhere on the net, my impression is that Mr. Ngai expends 
>a great deal of energy defending the status quo. 
>
>   Now I realize that this is not a technical comment, and I further realize
>that this will be viewed as a personal "attack".  It is, however, a legitimate
>observation on the nature of Mr. Ngai's articles, and as such is a cogent 
>contribution to the overall discussion.

I thought the idea was to see if what someone posted made sense. Would
it be helpful if I said "Tom Keller spends a lot of his time attacking
Intel?" I would prefer to use arguments of the form "Tom Keller
attacks Intel for putting the 386's MMU on the chip, but doing so is
good because it saves chips, etc..." I hope you can see the difference
between these two forms of discussion.

It is true that in the case of the SCC, people have been saying it is
a particularly bad chip because it requires external timing to be done
for it. My response was that all chips require external timing so it
isn't fair to say the SCC is a *particularly* bad chip. If you wish to
interpret this as defending the status quo, that's your choice.  I'm
afraid I don't think your comment is relevant.
-- 
 "We must welcome the future, remembering that soon it will become the
  present, and respect the past, knowing that once it was all that was
  humanly possible."

 Phil Ngai +1 408 749 5720
 UUCP: {ucbvax,decwrl,ihnp4,allegra}!amdcad!phil
 ARPA: amdcad!phil@decwrl.dec.com

phil@amdcad.UUCP (Phil Ngai) (03/12/86)

By the way, I think I've said this before but maybe some of the people
in this discussion didn't see it: I have designed several board level
products (Multibus) with the SCC and had no trouble using it. I was
able to hide the cycle recovery time from the programmer at no cost by
being clever in a state machine which was already needed for other
timing purposes. I do not consider the timing requirements of the SCC
to be unusually difficult to meet even without considering all the
*good* points of the SCC, something which seems to have been quite
overlooked in this discussion. How about two channels, two clock
generators, two programmable baud rate generators (up to 76.8 Kb
async), full modem control, vectored interrupts, byte synchronous, bit
synchronous, DMA, relatively large input fifo, and all in a 40 pin
package?
-- 
 "We must welcome the future, remembering that soon it will become the
  present, and respect the past, knowing that once it was all that was
  humanly possible."

 Phil Ngai +1 408 749 5720
 UUCP: {ucbvax,decwrl,ihnp4,allegra}!amdcad!phil
 ARPA: amdcad!phil@decwrl.dec.com