[net.micro.68k] Bad Devices

mjs@sfsup.UUCP (M.J.Shannon) (02/23/86)

As a kernel hacker, I would maintain that a device that requires a
certain latency and neither rejects further commands nor signals an
interrupt until it's ready is a botch.  Why patch software when the
hardware CAN do it right?  Software is not the answer to hardware
designer ineptitude.  Even if it has to be done at the board level,
the proper choice is to add the hardware to disable access to the
device until its latency period is over.
-- 
	Marty Shannon
UUCP:	ihnp4!attunix!mjs
Phone:	+1 (201) 522 6063

Disclaimer: I speak for no one.

"If I never loved, I never would have cried." -- Simon & Garfunkel

bass@dmsd.UUCP (John Bass) (02/26/86)

> As a kernel hacker, I would maintain that a device that requires a
> certain latency and neither rejects further commands nor signals an
> interrupt until it's ready is a botch.  Why patch software when the
> hardware CAN do it right?  Software is not the answer to hardware
> designer ineptitude.  Even if it has to be done at the board level,
> the proper choice is to add the hardware to disable access to the
> device until its latency period is over.
> -- 
> 	Marty Shannon
> UUCP:	ihnp4!attunix!mjs

DO IT RIGHT ???? ^&%^%@%(*) Right depends on the goals and requirements.
From where many of us sit, RIGHT is LOW COST, LOW POWER, SMALL SIZE, and
a dozen other reasons for using a KNOWN hardware/software tradeoff to
reduce component counts. From Marty's IVORY TOWER in AT&T land he has a very
obscure view of RIGHT -- a company that produces $300 power supplies that
deliver 40 watts (they do last 30 years with minor service though) and other
commonplace electronics (like telephones) that are now 1/3 the cost once the
repairability and service life requirements have been reduced by other vendors.
Not that I especially like some of the CHEAP phones -- but a good trend overall.

Timing loops are FAIR GAME for any low-cost design -- and can be VERY general
with the aid of a subroutine that takes as its argument the minimum number of
time units to spin out.
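
A minimal sketch of such a subroutine in C (everything here is illustrative --
the LOOPS_PER_UNIT constant is an assumption that would have to be tuned to
the target CPU and compiler):

    /* Calibrated busy-wait: spin for at least `units' time units.
     * LOOPS_PER_UNIT is a hypothetical constant measured for the
     * target machine; the volatile qualifier keeps the compiler
     * from optimizing the empty loop away.
     */
    #define LOOPS_PER_UNIT 50L          /* assumption: tune per machine */

    void spin_delay(unsigned int units)
    {
        volatile unsigned long n = units * LOOPS_PER_UNIT;

        while (n > 0)
            n--;                        /* burn cycles */
    }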

John Bass

dyer@atari.UUCP (Landon Dyer) (02/27/86)

In article <144@sfsup.UUCP>, mjs@sfsup.UUCP (M.J.Shannon) writes:
> As a kernel hacker, I would maintain that a device that requires a
> certain latency and neither rejects further commands nor signals an
> interrupt until it's ready is a botch.  Why patch software when the
> hardware CAN do it right?  Software is not the answer to hardware
> designer ineptitude.  Even if it has to be done at the board level,
> the proper choice is to add the hardware to disable access to the
> device until its latency period is over.

That is, of course, unless the cost of hardware is a concern.  Software
is usually a one-time cost in a device driver for a personal computer,
whereas the hardware continues to cost money, machine after machine.
Given a part with bugs that is half the cost of a similar part without
bugs, I would take the buggy part any day for a "mass" market
computer.

Does anyone remember the Atari VCS (2600)?  It was a 6507 with 128 bytes
of RAM, a *sleazy* video chip, and a PIA.  Something like 18 million
of them were sold.  By all accounts it was one of the *worst* machines
to program ever devised by man.  Lines of video were generated by
counting cycles across each scanline and twiddling bits in the hardware
at just the /right/ clock tick.

Obviously a VCS is not a $10,000 Unix(tm) engine, but "pretty" hardware
may still cost money.  It is up to the marketplace to determine whether
or not it is worth it.  It wasn't worth it in the VCS, and it may not
be worth it in your Unix(tm) box.

And ... c'mon!  Surely you can write a piece of assembly language
that is guaranteed to take 3us of processor time.  There are already
worse processor dependencies in the kernel and device drivers.


-Landon

mc68020@gilbbs.UUCP (Tom Keller) (02/27/86)

In article <221@dmsd.UUCP>, bass@dmsd.UUCP (John Bass) writes:
> > As a kernel hacker, I would maintain that a device that requires a
> > certain latency and neither rejects further commands nor signals an
> > interrupt until it's ready is a botch.  Why patch software when the
> > hardware CAN do it right?  Software is not the answer to hardware
> > designer ineptitude.  Even if it has to be done at the board level,
> > the proper choice is to add the hardware to disable access to the
> > device until its latency period is over.
> DO IT RIGHT ???? ^&%^%@%(*) Right depends on the goals and requirements.
> From where many of us sit, RIGHT is LOW COST, LOW POWER, SMALL SIZE, and
> a dozen other reasons for using a KNOWN hardware/software tradeoff to
> reduce component counts. From Marty's IVORY TOWER in AT&T land he has a very
> obscure view of RIGHT -- a company that produces $300 power supplies that
> deliver 40 watts (they do last 30 years with minor service though) and other
> commonplace electronics (like telephones) that are now 1/3 the cost once the
> repairability and service life requirements have been reduced by other vendors.
> Not that I especially like some of the CHEAP phones -- but a good trend overall.
> Timing loops are FAIR GAME for any low-cost design -- and can be VERY general
> with the aid of a subroutine that takes as its argument the minimum number of
> time units to spin out.


   Methinks that you are missing the point here, John.  What is being said is
that the design and implementation of the VLSI component itself is a botch.
If you are suggesting that designers should settle for bad designs (and let's
face it, any component at the chip level that can't be bothered to *TELL* me
that it isn't ready for further interaction is a ***BAD*** design!) simply 
because it is *POSSIBLE* to gloss over the problems in software, then I would
suggest to you that your concepts of engineering and quality are warped.

   *GIVEN* that this component was the only choice (for some inexplicable
reason), it *STILL* does not follow that the cheapest solution is necessarily
the best.  Knowing very little about the other requirements of the system being
designed by the original poster, I don't believe that you have any grounds for
your argument in this case.

   tom keller
   {ihnp4, dual}!ptsfa!gilbbs!mc68020

   (* we may not be big, but we're small! *)

olson@harvard.UUCP (Eric Olson) (03/02/86)

>commonplace electronics (like telephones) that are now 1/3 the cost once the
>repairability and service life requirements have been reduced by other vendors.
>Not that I especially like some of the CHEAP phones -- but a good trend overall.
>
>Timing loops are FAIR GAME for any low-cost design -- and can be VERY general
>with the aid of a subroutine that takes as its argument the minimum number of
>time units to spin out.
>
>John Bass

I hate using phones other than AT&T's.  I can never believe how poor they
are.  The manufacturers seem to totally disregard functionality.

I would have agreed before someone suggested that the timing constant be
determined at run time by looping while watching a clock (or waiting for
an interrupt, or anything else that is not processor-speed dependent).
I really like that solution: it is very clean, and very little extra work.
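
A hedged sketch of that idea in C, using the standard clock() routine (the
names are illustrative; a real driver would read whatever free-running timer
the hardware provides):

    #include <time.h>

    static unsigned long loops_per_tick;    /* measured once at startup */

    /* Count how many spin iterations fit in one clock tick, so later
     * delays scale with the actual processor speed instead of a
     * compiled-in constant.
     */
    void calibrate_delay(void)
    {
        volatile unsigned long n = 0;
        clock_t start = clock();

        while (clock() == start)    /* align to a tick edge */
            ;
        start = clock();
        while (clock() == start)    /* spin for one full tick */
            n++;
        loops_per_tick = n;
    }

A delay routine can then spin some multiple of loops_per_tick iterations
without caring which processor it happens to be running on.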

-Eric

davidsen@steinmetz.UUCP (Davidsen) (03/09/86)

In article <21@gilbbs.UUCP> mc68020@gilbbs.UUCP (Tom Keller) writes:
>In article <221@dmsd.UUCP>, bass@dmsd.UUCP (John Bass) writes:
................ long previous quote deleted here ................
>
>   Methinks that you are missing the point here, John.  What is being said is
>that the design and implementation of the VLSI component itself is a botch.
>If you are suggesting that designers should settle for bad designs (and let's
>face it, any component at the chip level that can't be bothered to *TELL* me
>that it isn't ready for further interaction is a ***BAD*** design!) simply 
>because it is *POSSIBLE* to gloss over the problems in software, then I would
>suggest to you that your concepts of engineering and quality are warped.

I hate to get into this, but there are classes of devices which change
state due to processor action (like USARTs, for instance). Given any such
device on the market, or even in the lab, it is possible to access the
device so quickly that the status won't track what's happening. This
sometimes happens when a processor sends a character to a USART or
parallel interface and then tests the busy bit before it has become
active.  There is a finite time that any device takes to REALIZE it's not
ready for another command.

By this reasoning, any such device is always a botch, since at least one
gate delay is present between the write and the status update.  Does this
mean using gallium arsenide for USARTs to avoid being a botch?  Ridiculous!
What has happened is that the circuit designer has used poorly selected
parts (or the user has jumped the processor speed).

Timing loops can (usually) be avoided by checking the status to be sure it
becomes "not ready" before continuing, but the code will then fail if the
processor is slow enough that the device goes not ready and ready again
before it's checked.  The solution is to blame the person who put the chips
together, not to say that some chips are a "botch".
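
As an illustration, here is how such a double check might look in C for a
memory-mapped UART-like part (the addresses and the BUSY bit are made up
for the example; the volatile qualifier forces a real bus access on every
poll):

    #define UART_DATA   (*(volatile unsigned char *) 0xff0000L)  /* assumed */
    #define UART_STATUS (*(volatile unsigned char *) 0xff0001L)  /* assumed */
    #define UART_BUSY   0x01                                     /* assumed */

    void uart_putc(unsigned char c)
    {
        UART_DATA = c;

        /* First wait for the device to admit it is busy ... */
        while (!(UART_STATUS & UART_BUSY))
            ;
        /* ... then wait for it to become ready again.  As noted above,
         * a slow enough poll can miss the busy window entirely, which
         * is why this only usually works.
         */
        while (UART_STATUS & UART_BUSY)
            ;
    }
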
-- 
	-bill davidsen

	seismo!rochester!steinmetz!--\
       /                               \
ihnp4!              unirot ------------->---> crdos1!davidsen
       \                               /
        chinet! ---------------------/        (davidsen@ge-crd.ARPA)

"It seemed like a good idea at the time..."