[comp.periphs.scsi] Always IN-2000 SCSI host adapter

iverson@xstor.com (Tim Iverson) (06/07/91)

Recently, it has come to my attention that the current BIOS in the Always
IN-2000 SCSI host adapter no longer disables interrupts during transfers.
And, despite other problems, it seems to be marginally useful as a host
adapter for a DOS-only system.

Since my now out-of-date information on disabling interrupts seems to have
caused Always some concern, I decided I owed them (and the net) an honest
review of their current board.  Consequently, below is the full story of my
involvement with the Always board (including a mini-review).

Please note that while the facts below are true, the opinions I express are
my own and do not represent Storage Dimensions' official position (not
unless there's something they're not telling me :-).

A little over a year ago, Always gave SDI a board to eval saying it would
make any disk run faster, just use CoreTest and see.  So, we tested it with
CoreTest and found that a drive capable of max. 200KB/s transfers was
turning in >1MB/s transfers.

I heard about it from one of the test engineers and dismissed it by saying,
"Well, it must be a caching adapter."  Wrong.  Always has no cache.  I was
more than a little curious now, so I used Soft-ICE to track things down in
the BIOS.  I found a section of code like this:

	cli
	call ????	; disk_xfer done here (verified with an Ancot trace)
	sti

Pretty ugly, eh?  It also seemed rather deliberate.  It is especially odd
when considering that their netware driver did not have to disable
interrupts and that they claimed their board would speed up drives.

So, point 1. - they knew their board caused drives to run faster on
benchmarks.  Point 2., the code is obviously deliberate.  Conclusion?  They
(both their engineers and marketeers) tried to pull a fast one.

My personal opinion?  I will never, ever, do business with them because of
their (apparent) complete lack of integrity on this one point.

Now the story reaches the present day.  Always contacted me re: the
interrupt issue and my postings on the net, saying that they don't disable
interrupts and that they never did and that I should post a retraction ("did
you know that fraud is actionable?").

When I told him that I personally tracked down the cli/xfer/sti hack and
witnessed it (fact not fraud), I was told, "Oh - we don't do that anymore,
that was only in the beta copy we sent to SDI.  We never actually released
it."  Hmmm.  Well, I said, "If you send me a card, I'll tell the net the
facts about it."  Surprisingly, the next day (yesterday, actually) an Always
card shows up on my doorstep.

Here are my observations about the board that was sent to me (silkscreen
2001-01-2D, board sticker 2001-02-2D, bios sticker REV.3.33, pal sticker
2.5, s/n sticker 1180040):

My initial impressions were good.  The layout looks clean (no reworks),
there's a switch block instead of jumpers, the internal connectors are
keyed to help prevent cabling errors, and they included drivers for various
flavors of Novell Netware and recent releases of SCO Unix and SCO Xenix,
although SCSI-only with the Always is impossible under Xenix and Unix
without a second non-SCSI Unix/Xenix system to perform the integration.

There are some problems: first, the board uses the WD33C93A SCSI chip.
According to the opinion of others here more experienced with protocol
chips, this is "not a good chip" (stated slightly more colorfully :-).
Could this chip be why the Always board didn't work with the 3280 I tried?
Perhaps, but I don't know for sure.

The second problem is much greater: the external connector is not an
accepted SCSI-1 or SCSI-2 connector.  It is a 25 pin Macintosh connector.
This isn't so bad until you realize that the 25 pin SCSI connectors usually
used for the PC have term power and ground swapped w.r.t. the Mac cable.
Connect the wrong cable by accident (perhaps by asking a different supplier
for a 25 pin SCSI cable for your *PC*), and you could short-out something
important, like your drive or your host adapter.

At first glance, the documentation appears sparse, but concise and to the
point.  There are a few discrepancies: differences between the stated
factory settings and the actual ones, and it said SCO stood for "Santa
Clara Operations" instead of "Santa Cruz Operations".

The docs also state that the board's FIFO uses dual-ported RAM for high
performance and no 1st party DMA to avoid incompatibilities.  This is 100%
correct, but very misleading.  Lack of "bus-mastering" ability does reduce
incompatibility, but it also means this board is suitable only for DOS; it
would introduce too much overhead on a multitasking system.

Now the fun begins.  I tried to attach a Maxtor 3280 (a very finicky drive)
to it.  The board would not recognize it and the BIOS format and inquiry
routines behaved erratically - sometimes they hung, sometimes they returned
immediately, they never printed anything and the drive light never lit.
BTW, all the other SCSI boards I have work fine with the 3280 (Adaptec,
BusTek, Future Domain, and SDC800).

So, I tried a different drive (Maxtor 8380).  This time the drive was
recognized, but the BIOS routines at c800:5 and c800:8 were still no-go.
Further, it insisted on trying to boot off the hard disk (yes, I did have a
boot floppy in drive A).  Undaunted, I used an Adaptec 1542 to boot up from
floppy and install DOS on the hard drive.  Upon reinstalling the Always
board and booting up (off the 8380 - I had no choice), I found that I could
not access the floppy at all.

Well, I had disabled the floppy controller (sw9 off) on the Always board so
I could use the motherboard floppy controller that was already connected.
Maybe the docs were wrong and sw9 needed to be on?  No.  With sw9 on, the
machine failed POST.  O.K. - no floppy, I could live with this long enough
to get a few benchmarks, but it was very annoying.

[ Later on I went back and checked it out on a different machine and the
  motherboard floppy did work - looks like an Always/Machine problem, since
  the floppy worked fine on the same machine with the Adaptec and BusTek
  boards. ]

What do the benchies say?  Well, mainly that Always is just a typical dumb
SCSI board - low overhead, good transfer rate, good for DOS, bad for
anything that multitasks.  Here's how Always compares to Adaptec and BusTek:

Overhead:	Always is 1.8ms faster than Adaptec
		Always is 0.1ms faster than BusTek

Max xfer rate:	Always is 2% faster than Adaptec
		Always is 1% slower than BusTek

N.B. The Always IN-2000 has no provisions for synchronous operation and the
     ISA bus transfer rate cannot be altered (they use programmed i/o), so
     the Adaptec 1542B and BusTek 540 were both set for asynchronous SCSI
     transfers with a 5MB/s ISA bus transfer rate.  Synchronous operation
     and higher ISA bus rates are possible with both Adaptec and BusTek
     with some computers and some drives.  Tests were performed under DOS.

     Also note that Always' overhead and transfer rate will vary greatly
     depending on processor and i/o bus speed (programmed i/o, remember),
     while the Adaptec and BusTek values will remain relatively stable.

Due to lack of bus-mastering ability, I cannot honestly recommend the Always
board for use outside of DOS.  Even then, the problems I experienced with
the floppy drive, the BIOS setup routines, and the 3280 suggest that
integration of this board will be more difficult than the rather small
performance increase is worth.

Combine their average performance with poor integration and the sneaky
interrupt disabling trick used in a previous BIOS, and I would call this the
"Never IN-2000" instead of the "Always IN-2000".


- Tim Iverson
  iverson@xstor.com -/- uunet!xstor!iverson

david@talgras.UUCP (David Hoopes) (06/07/91)

In article <1991Jun06.204457.28453@xstor.com> iverson@xstor.com writes:
>
>Recently, it has come to my attention that the current BIOS in the Always
>IN-2000 SCSI host adapter no longer disables interrupts during transfers.
>And, despite other problems, seems to be marginally useful as a host
>adapter for a DOS only system.
>
> [ stuff deleted ]
>
>
>There are some problems: first, the board uses the WD33C93A SCSI chip.
>According to the opinion of others here more experienced with protocol
>chips, this is "not a good chip" (stated slightly more colorfully :-).

What is wrong with this chip?  Does it not work right?  Is it hard to
write drivers for?  Is it the wrong color?  We use this chip on some of
our SCSI hardware and I have not heard of any trouble with them.

>
>The second problem is much greater: the external connector is not an
>accepted SCSI-1 or SCSI-2 connector.  It is a 25 pin Macintosh connector.

I am sure that someone will correct me if I am wrong, but I think that the
Macintosh style connector is an accepted SCSI-1 (I don't know about SCSI-2)
connector.  The way I understand it, there are two types of connectors that
are covered in the spec.

>This isn't so bad until you realize that the 25 pin SCSI connectors usually
>used for the PC have term power and ground swapped w.r.t. the Mac cable.
>Connect the wrong cable by accident (perhaps by asking a different supplier
>for a 25 pin SCSI cable for your *PC*), and you could short-out something
>important, like your drive or your host adapter.
>

This I think is wrong.  I have taken the same drive with cable attached
and moved it from a Mac to my PC.  In fact, that is why we chose to use
that connector on our SCSI host adapter, so that our tape drives could
be used on both PCs and Macs with the same cable.

Personally I like this connector better than the 50 pin connector.  The
little clips on the 50 pin connectors have a bad habit of wiggling loose.
That's a problem I have never had with the 25 pin connectors because they
have screws to hold them in place.


>
>Combine their average performance with poor integration and the sneaky
>interrupt disabling trick used in a previous BIOS, and I would call this the
>"Never IN-2000" instead of the "Always IN-2000".
>
>
>- Tim Iverson
>  iverson@xstor.com -/- uunet!xstor!iverson

I don't know anything about the Always IN-2000 and I am not defending it.  I
just question some of your arguments.


-- 
---------------------------------------------------------------------
David Hoopes                              Tallgrass Technologies Inc. 
uunet!talgras!david                       11100 W 82nd St.          
Voice: (913) 492-6002 x323                Lenexa, Ks  66214        

vail@tegra.COM (Johnathan Vail) (06/08/91)

In article <90@talgras.UUCP> david@talgras.UUCP (David Hoopes) writes:

   >There are some problems: first, the board uses the WD33C93A SCSI chip.
   >According to the opinion of others here more experienced with protocol
   >chips, this is "not a good chip" (stated slightly more colorfully :-).

   What is wrong with this chip.  Does it not work right?  Is it hard to
   write drivers for?  Is it the wrong color?  We use this chip on some of
   our scsi hardware and I have not heard of any trouble with them.

The color is usually OK.  Writing drivers for it is a real pain.  Ask
someone who has about the dreaded 4b status.  Not that it is a bad
chip; the real problem that I have with it is that you don't really
have a good feeling of what's going on inside.  You can't even look at
the SCSI signals.  When all is said and done it works OK.  And getting
2 interrupts (at least!) for every operation is not the greatest thing
for performance, esp. with Unix.

Give me a NCR 53C710 any day!

   >The second problem is much greater: the external connector is not an
   >accepted SCSI-1 or SCSI-2 connector.  It is a 25 pin Macintosh connector.

   I am sure that someone will correct me if I am wrong, but I think that the
   Macintosh style conector is an accepted SCSI-1 (I don't know about SCSI-2)
   connector.  The way I understand it there are two types of conectors that
   are covered in the spec.

OK.  The 25 pin connector on Macs may be a standard in the industry, but
it is not a real SCSI connector.  There are no 25 pin cables in the
SCSI-1 or SCSI-2 specifications.

There are several different connectors in the spec for both single-ended
and differential.  Basically these are: a 50 pin header, a 50 pin
"centronics", and a DB-50.  The D connector is not standard in SCSI-1.
These are for the "A" cable in SCSI-2, which is the equivalent of "the
cable" in SCSI-1.


jv


"Hackers, as a rule, do not handle obsolescence well" - Oliver Wendle Jones
 _____
|     | Johnathan Vail | n1dxg@tegra.com
|Tegra| (508) 663-7435 | N1DXG@448.625-(WorldNet)
 -----  jv@n1dxg.ampr.org {...sun!sunne ..uunet}!tegra!vail

mussar@bcars53.uucp (G. Mussar) (06/13/91)

In article <1991Jun06.204457.28453@xstor.com> iverson@xstor.com writes:
>
>Recently, it has come to my attention that the current BIOS in the Always
>IN-2000 SCSI host adapter no longer disables interrupts during transfers.
>And, despite other problems, seems to be marginally useful as a host
>adapter for a DOS only system.

I'm glad to hear that I am running an adapter that is MARGINALLY useful.
It runs as fast as a friend's ESDI system, plays ball with Hyperdsk and
Windows 3.0. 

>A little over a year ago, Always gave SDI a board to eval saying it would
>make any disk run faster, just use CoreTest and see.  So, we tested it with
>CoreTest and found that a drive capable of max. 200KB/s transfers was
>turning in >1MB/s transfers.

Coretest on systems that translate (fake out DOS) is inaccurate. I've gotten
values ranging from 200KB/s to 1.2MB/s on the same drive with the same
controller in the same system just by varying the xfer size (this is on
both SCSI and ESDI translating controllers). Coretest thinks that it is
xferring one track (no head movement) but due to the translating, the head
needs to move. I hope you don't trust those numbers.

>So, point 1. - they knew their board caused drives to run faster on
>benchmarks.  Point 2., the code is obviously deliberate.  Conclusion?  They
>(both their engineers and marketeers) tried to pull a fast one.
>
>My personal opinion?  I will never, ever, do business with them because of
>their (apparent) complete lack of integrity on this one point.

My personal opinion? I think YOU would have us believe that Always tried
to pull a fast one. Unless you have hard proof that Always deliberately
tried to perturbate the benchmark tests (say like a commented source code
listing) then I believe you are creating your own (potentially incorrect)
story. FWIW, I would rather not deal with someone who comes to such a
"scientific" conclusion from the data you had any more than I would like
dealing with a used-car salesman.

I have found that Always are fairly approachable and willing to help
track down problems. Did you ever call them about this before spouting
off to the net? BTW, have you ever tried to get a hold of a real person
at Adaptec? I was quite surprised to see Roy Neese (of Adaptec) so active
on the net after I received extremely shabby treatment when calling them
on the phone. I never had that kind of problem with Always. (BTW, thanks for
the manual Roy.)


>The doc's also state that the board's FIFO uses dual-ported RAM for high
>performance and no 1st party DMA to avoid incompatibilities.  This is 100%
>correct, but very misleading.  Lack of "bus-mastering" ability does reduce
>incompatibility, but it also means this board is suitable only for DOS; it
>would introduce too much overhead on a multitasking system.

Gee, there is that great "scientific" mind at work again. I wonder how
anyone ever got along without bus mastering in the early days. Are you
saying that the only way for Always to use a FIFO is to sit in a spin loop
waiting for it to fill? Is there no way to interrupt when filled and
retrieve the data from the FIFO? That is what we did in the old days, but,
I guess we couldn't multitask back then. Let's get some real (honest) numbers
about the overhead you talk about. Should I expect 10% of the throughput or
95%?

--
-------------------------------------------------------------------------------
Gary Mussar  |Internet:  mussar@bnr.ca                |  Phone: (613) 763-4937
BNR Ltd.     |                                        |  FAX:   (613) 763-2626

iverson@xstor.com (Tim Iverson) (06/22/91)

In article <1991Jun13.142032.16772@bigsur.uucp> mussar@bnr.ca (G. Mussar) writes:
>In article <1991Jun06.204457.28453@xstor.com> iverson@xstor.com writes:
>>And, despite other problems, seems to be marginally useful as a host
>>adapter for a DOS only system.
>
>I'm glad to hear that I am running an adapter that is MARGINALLY useful.

Obviously, your system lies *within* the margin of usefulness.  My tests
were on two systems (hardly a large sample, I know); the problems with the
floppy on one of the systems make it essentially unusable for all but the
most temporary work on that system.  So, yes, 1 out of 2 means MARGINALLY.

>>A little over a year ago, Always gave SDI a board to eval saying it would
>>make any disk run faster, just use CoreTest and see.  So, we tested it with
>>CoreTest and found that a drive capable of max. 200KB/s transfers was
>>turning in >1MB/s transfers.

>values ranging from 200KB/s to 1.2MB/s on the same drive with the same
>controller in the same system just by varying the xfer size (this is on

I don't care how you vary the buffer size, you simply cannot get CoreTest
to report a rate that exceeds that drive's maximum rate.  *Unless* someone
cheats somewhere.  You must have performed that test on a drive capable
of transferring at 1.2MB/s.

>both SCSI and ESDI translating controllers). Coretest thinks that it is
>xferring one track (no head movement) but due to the translating, the head
>needs to move. I hope you don't trust those numbers.

Only to a point.  If you thought for a moment, you would realize that speed
burps due to translated geometry always *reduce* the reported transfer
rate.  They never increase it.  CoreTest can still be used to provide a
ball-park figure; so, when it reports Pee-Wee Herman hitting a grand slam,
you've got to wonder a bit.

[ Recently, we needed much more precise numbers.  So, we developed our own
  much more accurate testing program that we now use internally.  This is
  what I used in my most recent investigation. ]

>My personal opinion? I think YOU would have us believe thank Always tried
>to pull a fast one. Unless you have hard proof that Always deliberately
>tried to perturbate the benchmark tests (say like a commented source code

Hmmm.  After looking over my original posting I see that I did leave out
an explanation of why turning off interrupts like that was so obviously
deliberate - only another programmer would understand from my posting.

Let me give a (very) cursory explanation: when interrupts are disabled, the
system essentially goes deaf to the world (i.e. it cannot "hear" requests
for service from any of its peripherals); the only process that runs is
the one that turned off the interrupts.

What this means: the clock ticks, but the ticks are never counted, so no
time passes; the serial ports receive data, but nothing is ever done about
it, so you get overrun errors and data is lost; etc., etc..

Only someone very ignorant about systems programming would ever disable
interrupts for any longer than is absolutely necessary (usually it's done
for only a few instructions).  A disk transfer takes forever in computer
time (Lorne Green would love it ... CPU years! :-).  Much too long to even
consider disabling interrupts.
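
To make that concrete, here is a rough sketch in C of the difference
(illustrative only - this is not Always' code, and disable_ints(),
enable_ints(), and do_scsi_transfer() are just made-up stand-ins for the
x86 cli/sti instructions and the transfer itself):

	/* Stubs so the sketch compiles; real code would be the actual
	 * instructions and the actual transfer loop. */
	static void disable_ints(void) { }	/* think: cli */
	static void enable_ints(void)  { }	/* think: sti */
	static void do_scsi_transfer(void) { }

	static volatile long ticks;

	/* Normal use of a critical section: interrupts are off only
	 * around the few instructions that touch shared data. */
	static void bump_tick_count(void)
	{
	    disable_ints();
	    ticks++;
	    enable_ints();
	}

	/* What the traced BIOS appeared to do: interrupts off around
	 * the *entire* disk transfer - milliseconds during which clock
	 * ticks and serial characters are simply lost. */
	static void bios_disk_xfer(void)
	{
	    disable_ints();
	    do_scsi_transfer();		/* the system is "deaf" in here */
	    enable_ints();
	}

	int main(void)
	{
	    bump_tick_count();
	    bios_disk_xfer();
	    return 0;
	}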

The telling blow is that their netware driver did not disable interrupts.
This means that they didn't have to disable interrupts during a BIOS disk
transfer - they must have had some other, non-engineering reason.

Finally, they admitted to me that this hack was present only in a beta
version they gave to SDI - no reason given for its presence or removal.

>listing) then I believe you creating your own (potentially incorrect) 
>story. FWIW, I would rather not deal with someone who comes to such a
>"scientific" conclusion from the data you had anymore than I would like
>dealing with a used-car salesman.

Frankly, I don't like being called a liar.  I reported facts and used them
to justify my conclusions.  If you can't refute my *reasoning* (which you
made no attempt to do - perhaps you can't), then please refrain from
attacking my integrity and withdraw your accusation.

I admit, my conclusion is based somewhat on circumstantial evidence - I
don't *know* what the programmer was thinking when he put the
disable/enable interrupt instructions around the disk transfer.  But, as
they say, actions speak louder than words - the *only* gain to Always as a
result of this action was to cause disk benchmarks to report wildly high
rates, ergo that was the result intended by the programmer.

[ N.B. I did indeed test the board by manually skipping the disable
 instruction; it worked fine with interrupts enabled - further proof that
 the hack was not done for any *engineering* reason. ]

>I have found that Always are fairly approachable and willing to help
>track down problems. Did you ever call them about this before spouting

No.  They called me and tried to *force* a posted retraction from me.  This
was unsuccessful, but if their board really was now as good as they said,
my own sense of honor demanded that I correct my earlier statements.  So, I
offered to correct my earlier statements *if* he would send a board to me
so I could verify things.  They sent, I looked, I posted.  Veni, vidi, vici?
Perhaps.  But I was aiming for more Joe Friday than Caesar.

>off to the net? BTW, have you ever tried to get a hold of a real person
>at Adaptec? I was quite surprised to see Roy Neese (of Adaptec) so active
>on the net after I received extremely shabby treatment when calling them

We have a very good relationship with Adaptec - Bruce Van Dyke sometimes
seems to almost live over here, and while I've only met Roy a couple of
times, I have the utmost respect for him.  BTW, SDI is a big customer of
Adaptec, so that may explain the treatment.

>>correct, but very misleading.  Lack of "bus-mastering" ability does reduce
>>incompatibility, but it also means this board is suitable only for DOS; it
>>would introduce too much overhead on a multitasking system.
>
>Gee, there is that great "scientific" mind at work again. I wonder how
>anyone ever got along without bus mastering in the early days.

If you are not capable of refuting my opinion via reasoning, please refrain
from displaying your ineptitude by resorting to name calling.

>Are you
>saying that the only way for Always to use a FIFO is to sit in a spin loop
>waiting for it to fill? Is there no way to interrupt when filled and
>retrieve the data from the FIFO? That is what we did in the old days, but,

Think for a moment: let's do a minor "back of the envelope" calculation ...

	guess 1: 128 byte FIFO
	guess 2: 512KB/s (transfer rate for some hard drive somewhere)
	divide guess 2 by guess 1 to get ...
	512KB/s / 128 bytes = 4096 full FIFOs (or interrupts) per second.

Your average 386 can't even handle two builtin serial ports running at 9600
baud.  Actually it has a hard time with one 9600 and one 4800, but let's be
magnanimous and say a 386 running Unix can handle 2K interrupts per second.
Using an interrupt driven driver, it would be impossible to service any
drive faster than 256KB/s without 100% overhead (worse yet, some interrupt
somewhere is bound to lose out and not get serviced in time).
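
If you want to play with the numbers yourself, the arithmetic is trivial
(a quick C fragment; the FIFO sizes are only guesses, so plug in your
own):

	#include <stdio.h>

	int main(void)
	{
	    long rate = 512L * 1024;		/* 512KB/s drive */
	    int fifo[] = { 128, 1024, 2048 };	/* guessed FIFO sizes */
	    int i;

	    /* one interrupt per full FIFO */
	    for (i = 0; i < 3; i++)
	        printf("%4d byte FIFO -> %ld interrupts/sec\n",
	               fifo[i], rate / fifo[i]);
	    return 0;
	}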

My guess (based more on intuition and experience than the above estimate)
is that they probably work it just like ESDI, with a "rep insw" from the
port and rely on the board to do the handshaking.  Simple and direct.
Overhead is still essentially 100% (i.e. only interrupt processing can
happen during a transfer), but this way they can probably sustain > 1MB/s.
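
In C terms, my guess amounts to a loop something like this (pure
speculation on my part - I have not traced their data path, and the port
address and the inw() stand-in below are invented):

	#include <stdio.h>

	#define DATA_PORT    0x220	/* made-up I/O address for the FIFO */
	#define SECTOR_WORDS 256	/* 512-byte sector = 256 words */

	/* Stand-in for reading one 16-bit word from an I/O port; on the
	 * real hardware a "rep insw" does the whole loop in one x86
	 * instruction, with the board pacing the handshake. */
	static unsigned short inw(unsigned short port)
	{
	    (void)port;
	    return 0;
	}

	/* Programmed i/o: the CPU carries every word by hand, so apart
	 * from interrupt handlers nothing else runs during the loop. */
	static void read_sector_pio(unsigned short *buf)
	{
	    int i;

	    for (i = 0; i < SECTOR_WORDS; i++)
	        buf[i] = inw(DATA_PORT);
	}

	int main(void)
	{
	    unsigned short sector[SECTOR_WORDS];

	    read_sector_pio(sector);
	    printf("read %d words\n", SECTOR_WORDS);
	    return 0;
	}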

>I guess we couldn't mulitask back then.

<sigh>, I never said bus-mastering was required, just that *I* would not
recommend using a non-bus-mastering board for any multi-tasking OS.
Bus-mastering is relatively cheap and provides such a large reduction in
overhead w.r.t. cost that it is foolish not to use it when you can.

>Lets get some real (honest) numbers
>about the overhead you talk about. Should I expect 10% of the throughput or
>95% ?

Arggh!  I feel like saying, "Okay pardner, 10 paces then draw!"  This is
getting silly.  You want numbers, go get them.  If you can summon enough
reasoning to convince me that my opinion is on shaky ground, I'll go get
them myself, but so far all I've seen is hot air (hot bits? - whatever).

>Gary Mussar  |Internet:  mussar@bnr.ca                |  Phone: (613) 763-4937


- Tim Iverson
  iverson@xstor.com -/- uunet!xstor!iverson

mussar@bcars53.uucp (G. Mussar) (06/23/91)

In article <1991Jun22.033501.17909@xstor.com> iverson@xstor.com writes:
>In article <1991Jun13.142032.16772@bigsur.uucp> mussar@bnr.ca (G. Mussar) writes:
>>In article <1991Jun06.204457.28453@xstor.com> iverson@xstor.com writes:
>>>And, despite other problems, seems to be marginally useful as a host
>>>adapter for a DOS only system.
>>
>>I'm glad to hear that I am running an adapter that is MARGINALLY useful.
>
>Obviously, your system lies *within* the margin of usefulness.  My tests
>were on two systems (hardly a large sample, I know); the problems with the
>floppy on one of the systems make it essentially unusable for all but the
>most temporary work on that system.  So, yes, 1 out of 2 means MARGINALLY.
>
I run a lowly 25MHz 386 so I guess I'm just not in the same class as you
because it really is useful on my system. FWIW, I too had problems with 
the floppy but I traced it to the fact that I was running my bus at 12.5MHz.
Dropping the speed to 8 MHz fixed it. I called up Always, explained what was
happening and they sent me a new chip (at their expense). I put it in
and the card now runs fine at 12.5MHz. But I do suppose it is good business
practice to hold the original problem against the company forever.

>Hmmm.  After looking over my original posting I see that I did leave out
>an explanation of why turning off interrupts like that was so obviously
>deliberate - only another programmer would understand from my posting.
>

Oh thank you. I have been programming low-level realtime I/O routines for
over 15 years. I'm sure I did understand the impact of what you were saying.
I merely disagree with the "obvious" conclusions that you drew. 

>Let me give a (very) cursory explanation: when interrupts are disabled, the
>system essentially goes deaf to the world (i.e. it cannot "hear" requests
>for service from any of it's peripherals); the only process that runs is
>the one that turned off the interrupts.
>
>What this means: the clock ticks, but the ticks are never counted, so no
>time passes; the serial ports receive data, but nothing is ever done about
>it, so you get overrun errors and data is lost; etc., etc..
>
>Only someone very ignorant about systems programming would ever disable
>interrupts for any longer that was absolutely necessary (usually it's done
>for only a few instructions).  A disk transfer takes forever in computer
>time (Lorne Green would love it ... CPU years! :-).  Much too long to even
>consider disabling interrupts.

Of course we are talking about DOS here, remember. I've spent a number
of hours tracing down crashes where "smart" systems programmers did the
Microsoft recommended thing and switched to a "private" stack inside their
code/drivers before re-enabling interrupts. Problem is that most of these
"smart" system programmers neglected to take into account that nested
interrupts (or any interrupts) might need stack space as well. I have
found a number of programs where their internal stack works "most" of the
time, but don't use a serial mouse (or at least don't move it) when they
are active. Given the choice of missed timing ticks or crashes from stack
overflow, I think I would choose the missed ticks. And if, just by chance, they
are in fact being truthful about this being beta software, such disabling
of interrupts just might be belts and suspender type programming rather
than an outright plot to deceive you. You say the network drivers (and in
fact the production software) don't disable interrupts. But the fact that
the network driver you received did it differently than the other driver
makes it most apparent that this is a plot. I believe that it just might
be possible that the net drivers are a little later vintage than the driver
you are complaining about. But there I go again being ignorant of the fact
that all software is of the same vintage, etc. when given to you. Sorry.
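
The failure mode is easy to see with a little arithmetic (the sizes below
are invented, purely to illustrate the "works until you move the mouse"
behaviour):

	#include <stdio.h>

	int main(void)
	{
	    int private_stack = 256;  /* bytes the driver set aside      */
	    int drivers_use   = 180;  /* what the driver itself pushes   */
	    int nested_frame  =  96;  /* one nested interrupt: saved regs
	                                 plus whatever its handler needs */
	    int headroom = private_stack - drivers_use;

	    printf("headroom left on the private stack: %d bytes\n",
	           headroom);
	    printf("one nested interrupt wants %d bytes -> %s\n",
	           nested_frame,
	           headroom >= nested_frame ? "fits" : "overflow");
	    return 0;
	}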

>>listing) then I believe you creating your own (potentially incorrect) 
>>story. FWIW, I would rather not deal with someone who comes to such a
>>"scientific" conclusion from the data you had anymore than I would like
>>dealing with a used-car salesman.
>
>Frankly, I don't like being called a liar.  I reported facts and used them
>to justify my conclusions.  If you can't refute my *reasoning* (which you
>made no attempt to do - perhaps you can't), then please refrain from
>attacking my integrity and withdraw your accusation.

Sir, I do not call you a liar, but rather I believe you may not be taking
all possibilities into account before YOU go and accuse a company of 
plotting to deceive you and the rest of the world. I made no attempt to
reason why beta software might have the int disables in, because there are
many possible reasons (both good and bad). But I suppose they would only
occur to those ignorant of system programming. Those in the "know" would
correctly assume a plot against the world.

I still do not want to deal with people who jump to such conclusions based on 
(in your own words) somewhat circumstantial evidence.

>>at Adaptec? I was quite surprised to see Roy Neese (of Adaptec) so active
>>on the net after I received extremely shabby treatment when calling them
>
>We have a very good relationship with Adaptec - Bruce Van Dyke sometimes
>seems to almost live over here, and while I've only met Roy a couple of
>times, I have the utmost respect for him.  BTW, SDI is a big customer of
>Adaptec, so that may explain the treatment.

Well, I called to obtain some information on some of their products. It
appears the local rep knew they were supposed to carry Adaptec products,
but they didn't know what any of the products did or how to get any
additional info. At Adaptec, I got a lady who, after finding out that I
wanted information, rudely cut me over to an automated system which
informed me that glossy brochures for some products could be obtained by
mailing down a self-addressed, stamped envelope along with $10.00 per
glossy for each product I wanted. I thought perhaps this might have been
an exception, but numerous other people on BIX have complained about
similar experiences.

I have heard reasonably good things about Adaptec's products (and a 
couple of bad things, but, those appear to be fixed). And I am impressed
by the help provided by Roy Neese on the net. But there is a real (or
perceived) problem with "little" folks getting info especially if they 
don't have access to Roy on the net.


>>Are you
>>saying that the only way for Always to use a FIFO is to sit in a spin loop
>>waiting for it to fill? Is there no way to interrupt when filled and
>>retrieve the data from the FIFO? That is what we did in the old days, but,
>
>Think for moment: let's do a minor "back of the envelope" calculation ...
>
>	guess 1: 128 byte FIFO

I believe the FIFO is 1-2K, not 128 bytes, but what's a factor of 8 or 16, eh?

>	guess 2: 512KB/s (transfer rate for some hard drive somewhere)
>	divide guess 2 by guess 1 to get ...
>	512KB/s / 128 bytes = 4096 full FIFOs (or interrupts) per second.

or 512 or 256 full FIFOs (or interrupts) per second if the real FIFO size 
is being used.

>
>Your average 386 can't even handle two builtin serial ports running at 9600
>baud.  Actually it has a hard time with one 9600 and one 4800, but let's be
>magnanimous and say a 386 running Unix can handle 2K interrupts per second.
>Using an interrupt driven driver, it would be impossible to service any
>drive faster than 256KB/s without 100% overhead (worse yet, some interrupt
>somewhere is bound to loose out and not get serviced in time).

I know I'm ignorant of system programming. I guess that's why I can get a
10MHz 286 running 32 ports of synchronous X.25, interrupt driven with
an average of 15,000 interrupts/sec (no, it's not DOS-compatible HW). OTOH,
my lowly 25MHz 386 can easily run a 56K line while running Windows 3. It
all depends on the OS and the programmer writing the SW.

>>about the overhead you talk about. Should I expect 10% of the throughput or
>>95% ?
>
>Arggh!  I feel like saying, "Okay pardner, 10 paces then draw!"  This is
>getting silly.  You want numbers, go get them.  If you can summon enough
>reasoning to convince me that my opinion is on shaky ground, I'll go get
>them myself, but so far all I've seen is hot air (hot bits? - whatever).

Sigh. Tim, I am using this SCSI controller in my own personal system. I don't
have the resources to purchase a number of controllers/ disks/ systems to get
these numbers and I don't have companies mailing me boards to try out. I
truly was interested in knowing what kind of difference a bus-master board
in a multi-tasking system would make. Perhaps someone with both the resources
and time has already done a comparison with either OS/2 or Unix.  I guess
I'll just wait for OS/2 V2.0 and see if my system is still marginally
useful to me. After all, not all programs I run spend 100% of their time doing 
disk I/O.

If you really want to go 10 paces, then draw, be my guest, but don't expect me
to continue in a flame fest with you. I still don't like the "obvious" 
conclusions you draw based on the evidence you presented.
>
>- Tim Iverson
>  iverson@xstor.com -/- uunet!xstor!iverson


--
-------------------------------------------------------------------------------
Gary Mussar  |Internet:  mussar@bnr.ca                |  Phone: (613) 763-4937
BNR Ltd.     |                                        |  FAX:   (613) 763-2626

iverson@bang.uucp (Tim Iverson) (06/23/91)

In article <90@talgras.UUCP> david@talgras.UUCP (David Hoopes) writes:
>In article <1991Jun06.204457.28453@xstor.com> iverson@xstor.com writes:
>>The second problem is much greater: the external connector is not an
>>accepted SCSI-1 or SCSI-2 connector.  It is a 25 pin Macintosh connector.
>
>I am sure that someone will correct me if I am wrong, but I think that the
>Macintosh style conector is an accepted SCSI-1 (I don't know about SCSI-2)

All SCSI-1 connectors are 50 pin.  SCSI-2 has 50 and 68 pin, but no 25 pin.

>>This isn't so bad until you realize that the 25 pin SCSI connectors usually
>>used for the PC have term power and ground swapped w.r.t. the Mac cable.

>This I think is wrong.  I have taken the same drive with cable attached
>and moved it from a Mac to my pc.  In fact that I why we choose to use
>that conector on our SCSI host adapter, so that are tape drives could
>be used on both PCs and Macs with the same cable.

That's exactly the problem I was talking about - *most* PC host adapters
that use 25 pin have term-power swapped w.r.t. Mac pin-outs.  Some don't.
That "some" includes Always, and may bite you in a tender spot (i.e. your
wallet, when you short-out something important).

There are two ways it could have worked anyway: 1. your PC's host adapter
is one of those that uses the Mac pin-outs; or 2. it uses PC-style
pin-outs, but you got lucky and none of your devices are supplying
term-power to the bus - could be they're set up to use their own internal
source.

If you use 25 pin without knowing the pin-outs or knowing exactly how all
your devices are setup w.r.t. term-power, you're relying on luck.  With 50
pin, you're safe - luck is not involved.

On a slightly humorous note - this problem was discovered accidentally a
couple of years ago by one of our brighter software guys.  About 3 weeks
after he was hired he accidentally used a PC cable for a Mac.  Result: one
shorted out Mac motherboard (it has no fuse).  He did this twice before
that little light bulb we all have lit up - maybe the smell of ozone and
burnt plastic helped.  Anyway, we now call this the "Englebert"-effect ...
name changed to protect the innocent (i.e. me - he knows where I live :->).

>Personally I like this conector better then the 50 pin conector.  The
>little clips on the 50 pin conectors have a bad habit of wigeling loose.

Yeah, I agree - the 50 pin is also pretty big, which sometimes makes it hard
to get the card in the slot without squashing the connector clips, but with
50, at least you don't have to test your cable before connecting it to make
sure the pin-outs are right for the card.

>I don't know anything about the Always IN-2000 and I am not defending it.  I
>just question some of your arguments.

That's okay - they were good honest questions.  BTW - yes, I dodged the chip
question on purpose: it seems to have been answered already (and answered
much better than I could have done).

>David Hoopes                              Tallgrass Technologies Inc. 
>uunet!talgras!david                       11100 W 82nd St.          

- Tim Iverson
  iverson@xstor.com -/- uunet!xstor!iverson

iverson@bang.uucp (Tim Iverson) (06/23/91)

In article <1991Jun23.032656.3227@bigsur.uucp> mussar@bnr.ca (G. Mussar) writes:
>In article <1991Jun22.033501.17909@xstor.com> iverson@xstor.com writes:
>>In article <1991Jun13.142032.16772@bigsur.uucp> mussar@bnr.ca (G. Mussar) writes:
>>>In article <1991Jun06.204457.28453@xstor.com> iverson@xstor.com writes:
>[...]
>and the card now runs fine at 12.5MHz. But I do suppose it is good business
>practise to hold the original problem against the company forever. 

Well, I don't hold their technical problems against them forever, just their
sneaky tricks.  If they fix their technical problems that's great, but the
only thing that would satisfy me on the other is an admission of guilt on
the interrupt issue (even if it was confidential).

BTW, both systems I tested it on were using the standard 8Mhz bus speed and
(supposedly) they sent me their latest and greatest firmware and BIOS.

>Oh thank you. I've have been programming low level realtime I/O routines for
>over 15 years. I'm sure I did understand the impact of what you saying.

Sorry for the false assumption - your posting didn't seem to indicate
understanding, but then that's the beauty of our wonderfully ambiguous
English language ... well, maybe someone else got something useful from the
digression.

>I merely disagree with the "obvious" conclusions that you drew. 

I really would like to hear your reasoning - is it all gut feel or do you
have something concrete?  I've rejected numerous other scenarios (this one
gets about a 75% feel, all the rest are at about 5 or 10%), but a different
plausible explanation would certainly cause me to reevaluate my position.

>overflow, I think I would choose the latter. And if, just by chance, they
>are in fact being truthful about this being beta software, such disabling
>of interrupts just might be belts and suspender type programming rather

Could be.  Their code was very clean though - very easy to understand from
the disassembly, unlike many other BIOSes (it took me about 2 minutes to
find the spot instead of 30).  The cli/sti really stood out as odd - why
not put the band-aid over the sore instead of wrapping the patient like a
mummy?  Judging from the rest of the code, the programmer should have been
competent enough to avoid doing it this way.

I'd give this a 10% likelihood, but only if it's for a software issue; the
hardware issue is not possible.

>fact the production software) don't disable interrupts. But the fact that
>the network driver you received did it differently than the other driver
>makes it most apparent that this is a plot. I believe that it just might
>be possible that the net drivers are a little later vintage the the driver
>you are complaining about. But there I go again being ignorant of the fact
>that all software is of the same vintage, etc. when given to you. Sorry.

Arggh.  I really don't mind when you attack the facts (just when you attack
my integrity) - no need to apologize.  Yes, the netware driver could have
been a later vintage, but the real point is that it ran fine on the *same*
card that had the cli/sti BIOS.  This kinda zeros the software-bandaid for
flaky beta hardware scenario.

>>>listing) then I believe you creating your own (potentially incorrect) 
>>>story. FWIW, I would rather not deal with someone who comes to such a
>>>"scientific" conclusion from the data you had anymore than I would like
>>>dealing with a used-car salesman.
>>
>>Frankly, I don't like being called a liar.  I reported facts and used them
>>to justify my conclusions.  If you can't refute my *reasoning* (which you
>>made no attempt to do - perhaps you can't), then please refrain from
>>attacking my integrity and withdraw your accusation.
>
>Sir, I do not call you a liar, but, rather I believe you may not be taking

Hmm.  Your own words: "creating your own ... story".  "Creating" and
"story" imply fiction.  Fiction is just sugar-coated lies, so pony up.

[ Actually, I guess your words could be construed as a form of apology -
  kind of a face-saving "I never called you a liar, you just thought I did"
  - so maybe I should consider it one ... rdly-rckn-snagl-fratzen ... ]

>all possibilities into account before YOU go and accuse a company of 
>plotting to deceive you and the rest of the world.

I took a large number of possibilities into account - I played devil's
advocate and tried to give a solid justification to their actions, then
tested to see if they were true.  I couldn't find a good engineering reason
for the hack.  That doesn't mean there isn't one (I'm not perfect - how
would you ever guess? :-) just that the real reason probably lies elsewhere.

>I made no attempt to
>reason why beta software might have the int disables in because there are
>many (both good and bad). But I suppose they would only occur to those
>ignorant of system programming. Those in the "know" would correctly assume
>a plot against the world. 

Another small factor in the "plot" scenario is that I would judge its
chances of discovery (had I been on Always' side) to have been rather
small.  The only reason I caught it was that I have an extremely large
"curiosity bump" - if something's odd, I've *got* to know why.

>I still do not want to deal with people who jump to such conclusions based on 
>(in your own words) somewhat circumstantial evidence.

Unfortunately, that's the nature of some of my data.  When I get new data,
I'll reevaluate my conclusions.  However, I can hardly see Always coming up
with a scenario that I would believe.

>[horror story of adaptec customer relations]

>couple of bad things, but, those appear to be fixed). And I am impressed
>by the help provided by Roy Neese on the net. But there is a real (or
>perceived) problem with "little" folks getting info especially if they 
>don't have access to Roy on the net.

I don't know how to help out here.  I can say that if you or anyone ever,
ever gets that kind of treatment from Storage Dimensions (where I work),
just call the president and he'll kick some *ss for you.  You could call
me, and I'd rip the offending party up verbally, but for some reason his
words seem to carry a little more weight ... around here, the customer
isn't king, he's GOD :-).

>>	guess 1: 128 byte FIFO
>
>I believe the FIFO is 1-2K not 128 byte, But whats a factor of 8 or 16, eh?
>
>>	guess 2: 512KB/s (transfer rate for some hard drive somewhere)
>>	divide guess 2 by guess 1 to get ...
>>	512KB/s / 128 bytes = 4096 full FIFOs (or interrupts) per second.
>
>or 512 or 256 full FIFOs (or interrupts) per second if the real FIFO size 
>is being used.

Hmm, I just thought of another reason it probably isn't done like this - if
you wait for the FIFO to fill, you've got to stop taking in data from the
SCSI bus.  Chances are, with a fast drive, that data can come in as fast as
you can pick it up (it comes in on the i/o bus, not the memory bus).

>>magnanimous and say a 386 running Unix can handle 2K interrupts per second.
>
>I know I'm ignorant of system programming. I guess that why I can get a 
>10MHz 286 running 32 ports of synchronous X.25, interrupt driven with
>an average of 15,000 interrupts/sec (no, its not DOS compatible HW). OTOH,
>my lowly 25MHz 386 can easily run a 56K line while running Windows 3. It
>all depends on the OS and the programmer writing the SW.

I never said it was impossible - I once got a 12Mhz 286 to service approx.
56K interrupts/sec. with room to spare.  I was using Unix as an example.
Unless you've got a special real-time multi-tasking OS, the interrupt
overhead is real high.  Under Unix a limit of 2K i/s is pretty real.

Besides, even if they have a full track buffer like most ESDI controllers,
they've still got to carry the bytes into the system by hand.  This is the
same reason multi-drive SCSI beats multi-drive ESDI.

>Sigh. Tim, I am using this SCSI controller in my own personal system. I don't
>have the resources to purchase a number of controllers/ disks/ systems to get
>these numbers and I don't have companies mailing me boards to try out.

Actually, this was the first time I was singled out personally for this
"honor", but we do have lots of hardware lying around.  I may even get
curious enough to look into it on my own time, but I wouldn't count on
anything there for at least a month - it's real hard to measure overhead
on a Unix driver accurately from software, but I might have a way.

>I truely was interested in knowing what kind of difference a bus-master board
>in a multi-tasking system would be. Perhaps someone with both the resources

This I can answer in general: if you have one SCSI disk vs. one ESDI disk,
the ESDI comes out a little bit ahead.  If you have two vs. two, then the
SCSI wins.  This test was done about a year ago with the Adaptec 1542, the
results would probably be much different using a BusTek 540 - their per
command overhead is more than 1ms less than Adaptec's.

>useful to me. After all, not all programs I run spend 100% of their time
>doing disk I/O.

Since you're talking about a home system, you may be satisfied with
something that has 10% or 20% more overhead, especially since you already
have a board that works for you.  If you're buying from scratch, though,
bus-mastering is worth it.

>If you really want to go 10 paces, then draw, be my guest, but don't expect me
>to continue in a flame fest with you.

Actually, my words were aimed at provoking a "flame-fest", but I was hoping
for something more civilized - sorta like a rationality test (you passed,
mostly :->).  

>I still don't like the "obvious" 
>conclusions you draw based on the evidence you presented.

That would be inhuman.  You have an Always board, which implies at least an
emotional interest in vindicating them.  I know I'd have a problem (a very
very small one :-) admitting to liking a board someone on the net was
calling the "Never IN-2000".  If it works for you, fine, but even ignoring
the interrupt issue, the other problems are such that I simply can't give
Always a thumbs-up.

>Gary Mussar  |Internet:  mussar@bnr.ca                |  Phone: (613) 763-4937

mussar@bcars53.uucp (G. Mussar) (06/24/91)

In article <1991Jun23.105753.5484@bang.uucp> iverson@xstor.com writes:
>Well, I don't hold their technical problems against them forever, just their
>sneaky tricks.  If they fix their technical problems that's great, but the
>only thing that would satisfy me on the other is an admission of guilt on
>the interrupt issue (even if it was confidential).

I would be interested in the real reason you had trouble with the floppy
just in case something starts acting up in my system. Speed and interacting
hardware problems are fairly tough to track down especially without lots
of fancy (expensive) equipment to help. I doubt that Always (or most other
companies) would leap out into the inferno and advertise technical problems
(even if they have been solved). And admitting guilt is usually only done
if the party was guilty to start off with. There is the possibility that
they are guilty but I don't believe the evidence presented (to date)
indicates that (IMHO), emotions aside.

>I really would like to hear your reasoning - is it all gut feel or do you
>have something concrete?  I've rejected numerous other scenarios (this one
>gets about a 75% feel, all the rest are at about 5 or 10%), but a different
>plausible explanation would certainly cause me to reevaluate my position.

Having been the one to track down the stack overflow problems in
"professional" software (in one case, drivers provided by Intel), I certainly
can identify with that particular type of problem. I've also tracked 
numerous other "interrupt" related problems caused by people who are
unclear on the concept of critical sections of code or with hardware
which is "flaky" wrt the speed of access (both too fast and too slow).
I am willing to believe these kinds of issues may have been the reason
for the original SW to have a "Chubb security lock" in the form of
disable/enable ints which managed to make it out in beta SW. Again IMHO.
I've even let a few things manage to get out into the field with some
debugging SW turned on (it does happen once in a while).

>>If you really want to go 10 paces, then draw, be my guest, but don't expect me
>>to continue in a flame fest with you.
>
>Actually, my words were aimed at provoking a "flame-fest", but was I hoping
>for something more civilized - sorta like a rationality test (you passed,
>mostly :->).  

I guess that makes me just a cocktail weenie instead of a foot-long. Thanks.

FWIW, the price of the Adaptec (plus SW) came out to twice the cost for
the IN-2000 (way up here in Canada). This ends up being a significant 
factor for some people (as well as the customer support problem for us
little folks who have lousy local reps). If U.S. people are starting from
scratch and have no objection to the price (rare but true sometimes) I
have recommended Adaptec products. I have heard some rumors that Adaptec
might be trying to get out of the "board" business and get more into the
chip business with "Adaptec register compatible" boards showing up on the
market. That could be a problem if true (let's hope not).
--
-------------------------------------------------------------------------------
Gary Mussar  |Internet:  mussar@bnr.ca                |  Phone: (613) 763-4937
BNR Ltd.     |                                        |  FAX:   (613) 763-2626

iverson@xstor.com (Tim Iverson) (06/26/91)

In article <1991Jun24.162514.17437@bigsur.uucp> mussar@bnr.ca (G. Mussar) writes:
>In article <1991Jun23.105753.5484@bang.uucp> iverson@xstor.com writes:
>>only thing that would satisfy me on the other is an admission of guilt on
>>the interrupt issue (even if it was confidential).

>[quote reordered slightly ...] admitting guilt is usually only done
>if the party was guilty to start off with. There is the possibility that
>they are guilty but I don't believe the evidence presented (to date)
>indicates that (IMHO), emotions aside.

They are guilty of not informing us that they had done this prior to giving
us the board.  Why?  Did they not want us to know or was it a matter of a
slight mixup trampling a lot of good intentions?

If the cli/sti hack had been explained up front, my opinion of Always would
be very different than it is now.  In fact, there would have been no reason
for me to have gotten involved at all - the spurious benchmark times would
not have seemed curious.

>I would be interested in the real reason you had trouble with the floppy
>just in case something starts acting up in my system. Speed and interacting
>hardware problems are fairly tough to track down especially without lots
>of fancy (expensive) equipment to help. I doubt that Always (or most other

The floppy didn't work on a Mitsubishi MP386S, a 16Mhz 386SX ISA-bus clone.
It did work on an AST 33Mhz 386 EISA-bus machine.  The MP386S is a known
flaky machine - I used it for the simple reason that it happened to be on
my desk that day.

>I've even let a few things manage to get out into the field with some
>debugging SW turned on (it does happen once in a while).

My software's always perfect, too (except for that one time about 3 years
ago when ... :-).  It could have been a simple oversight, yet why didn't
they explain up front or after the fact?

>FWIW, the price of the Adpatec (plus SW) came out to twice the cost for
>the IN-2000 (way up here in Canada). This ends up being a significant 
>factor for some people (as well as the customer support problem for us
>little folks who have lousy local reps).

That's one of my own peeves - everyone wants support, even the little guys,
but nowadays, you have to pay a premium to get it.  This isn't so much a
problem for Old Deep-Pockets, but the rest have to balance support with
performance, with compatibility, and with price.

It's even worse when you cross over and see that the same Old D-P who knows
good support really does cost money (and is willing to pay for it) never
realizes that bad support costs even more money in lost customers and lost
word-of-mouth sales.

>I have heard some rumors that Adaptec
>might be trying to get out of the "board" business and get more into the
>chip business with "Adaptec register compatible" boards showing up on the
>market. That could be a problem if true (lets hope not).

Actually, it wouldn't be such a big loss - there's already one Adaptec 1542
clone that has much less overhead than the 1542 (BusTek 540), and I've
heard rumors of a drive array adapter that speaks 1542 (from Dell?).  A
stable and accepted interface is far more important to the success of SCSI
in the market than any single vendor's board.

Snazzy new interfaces look neat but consistency is far more important.  Case
in point - Adaptec added a new interface to their EISA card, the 1740.  The
new interface is indeed improved, but it requires a new driver.  Result?
Initially, the new mode was completely ignored, and even now (more than a
year later) it is supported only by a scant handful compared to the support
available for the 1542 interface.

>Gary Mussar  |Internet:  mussar@bnr.ca                |  Phone: (613) 763-4937
>BNR Ltd.     |                                        |  FAX:   (613) 763-2626


- Tim Iverson
  iverson@xstor.com -/- uunet!xstor!iverson