[comp.sys.3b1] more on the HFC saga

mhw@fithp (Marc Weinstein) (05/13/91)

To all those who are still interested...

We've discovered quite a bit over the last few weeks about Hardware Flow
Control on the 3b1.  If the below findings are common knowledge to some,
such is life.  If anything is incorrect, please post a correction!

In my last postings, I stated that a friend of mine and I had tried to get
two Practical Peripherals 9600SA modems to talk, transferring files with
a port rate of 19200.  We were only able to talk consistently at 9600 baud
port rates, but had a lot more to try.  As stated before, we use the UUCP
'e' protocol and V.42bis, so we get a consistent 960 Bps transfer; not bad,
but not the best.

[BTW: If anyone has purchased one of these modems, do yourself a favor and
call the company to get the latest PROMs.  We obtained some V1.31 PROMs from
them, and they help - quicker connects (protocol negotiation) and fewer DCD
problems causing lost lines.]

Well, my friend installed 3.51m (giving us 3.51m on both ends) and we gave
things a try at 19200.  No luck - same problem.  UUCP would get hung during
file transfer - it appeared as though the connect bit stream was still too
fast for the 3b1 port to keep up (tty000).  We had almost given up.

Then, just to see what was going on, we wrote a program to check the port
configuration using an ioctl() call.  We would run it before, during, and
after various types of calls.  What we found was that the Hardware Flow
Control bit (CTSCD) was NOT getting set!!!  Even if we checked immediately
after calling the /etc/hfc_ctl program, the bit was NOT set.  So, we wrote
a program to set the CTSCD bit and did some experimentation.  Here's what
we found...

First and foremost, the /etc/hfc_ctl program appears to ENABLE HFC, not turn
it on.  It seems that this program (somehow) configures the driver so that
it pays attention to the CTSCD bit on the port.  The trick is to get this
bit turned on.  The standard getty (/etc/getty) appears to recognize the
'CTSCD' string in the gettydefs file, but the HoneyDanber uugetty does not.
It, instead, recognizes the string 'HFC', which should be placed in the
gettydefs entry.  It should be placed in both the second and third fields
of the entry, to make sure it gets turned on during the entire connection.
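An entry of the sort described might look like the line below, assuming the
stock gettydefs field layout (label# initial-flags # final-flags # prompt
#next-label); the exact flag set is illustrative, the point is 'HFC' in both
the second and third fields.

```
19200# B19200 HFC # B19200 SANE HFC #login: #19200
```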

However, the uugetty program doesn't apply these settings until a connection
is made.  This means that in the port's 'idle' state, HFC is NOT turned on.
If you happen to try an outgoing call at this time, uucico doesn't set HFC;
it just inherits the current state of the line.  So, the outgoing call
will not have HFC active.  In fact, if the HFC bit is set and you kill
uugetty, when it restarts the bit will be cleared.  The solution (if
that's what you can call it) was to write a program to condition the port,
and have the program do its stuff right after uugetty has started.  The
program is actually invoked in the background from a script called from
the inittab (rather than directly calling uugetty) - it gives uugetty a
certain amount of time to get settled, and then it attempts to condition
the port.  The script then exec's uugetty.  In this way, we can make sure
that the HFC bit will be set during the 'idle' state.  However, this
obviously implies that if an outgoing call were attempted before the
program finished execution, there could be an interval where HFC is not on.

The other thing we discovered was that uugetty doesn't seem to recognize
19200 baud.  We tried using the strings 'B19200' and 'EXTA' in the
gettydefs entry, and yet our program which prints out port settings still
showed B9600 as the baud rate.  So, we added this capability to our port
conditioning program.

We also installed the patch to the kernel which offloads characters from
the interrupt buffer more often - this patch changes the polling of the
interrupt buffer from once every 4/60th of a second to once every 1/60th
of a second.

Unfortunately, with all this in place, we STILL can't get reliable
connections with a port speed of 19200.  We're seeing a very strange
problem right now, where if we try to login, even manually, with the
port speed set to 19200, we always get 'Login incorrect', even though
the login name we type in is echoed back at us as we would expect.  And,
it appears that changing the way the 'init' program starts uugetty is
not without its flakiness - uugetty doesn't get restarted after an outgoing
call (it should, since the modem is configured to reset when DTR goes low).

The last possibility would be to switch to the new getty which was posted
a while back.  We tried using it and saw all sorts of errors.  Has anyone
gotten this working??  If so, could they post the source they ended up
with?

Well, if anyone can offer any advice, we'd welcome it.  I'd love to hear
how those who say they got 19200 HFC to work actually did it.  We're
beginning to think that, yes, HFC does work, but it doesn't matter because
the 3b1 isn't fast enough to administer it properly.  By the time it
detects overflow and reacts to it, it's too late.  Complete conjecture
on our part, though...

-- 
Marc Weinstein
{simon,royko,tellab5}!linac!fithp!mhw		Elmhurst, IL
-or- {internet host}!linac.fnal.gov!fithp!mhw

thad@public.BTR.COM (Thaddeus P. Floryan) (05/14/91)

In article <1991May13.010730.4743@fithp> mhw@fithp (Marc Weinstein) writes:

	To all those who are still interested...

	We've discovered quite a bit over the last few weeks about Hardware
	Flow Control on the 3b1.  If the below findings are common knowledge
	to some, such is life.  If anything is incorrect, please post a
	correction!

	[...]

Sigh, no correction needed.  Details of purchasing a company 6 weeks ago have
kept me away from the net for awhile, so I've a bunch of catching up to do.  As
an aside, as part of that acquisition and attempting to bring up a Sun 3/60 to
SunOS 4.1.1, an SGI IRIS with IRIX 3.whatever, a PS/2-80 with A/IX 1.2.1, etc.
I've gained a new appreciation for how GOOD the 3B1/UNIXPC really is and how
EASY it is to install and/or upgrade system software on the 3B1; those other
systems suck dead bunnies through a straw in that regard, in my opinion.

(I also saw the post re: 3B1 and Sun 3/60 Ethernet, and since I just acquired
and upgraded a Sun 3/60 to SunOS 4.1.1, I'll try out the rcp, et al by taking
one of my 3B1 in to the office and "seeing what happens" :-)

In any event, my original posting started this thread (ref: 1800 cps output
and 75 cps input using HDB at 19200 baud after 6KB or so of uucp'ing; that 75
cps was due to constant "Alarm n" after each subsequent block transferred
followed by a 10-second wait thus driving the cps rate down to the floor,
lower than even the proverbial whale turds :-).   A quick grep on the extant
newsgroup files here at BTR shows many others have confirmed those stats.

My own tests with extremely sophisticated DLM and other RS-232 test equipment
confirm beyond a shadow of a doubt:

	THE 3B1/UNIXPC CANNOT OPERATE AT 19200 BAUD RELIABLY.

I first discovered this in 1987 with 3.51 and re-confirmed this in 1991 with
3.51m; tests were run on ALL of tty000 through tty004, with stock V2 UUCP and
the 3B1 HDB UUCP, and xmodem/ymodem/zmodem and "cat" transfers.

Recent private conversations with "people who should know" (:-) have elicited
the info that HFC on *INPUT* is *NOT* implemented in *ANY* kernel that CT has
produced.  Many (most?) of CT's systems were 68020-based, and the *NEED* for
HFC at 19200 was non-existent as proved by my own tests on two MightyFrames;
as an aside, HFC at 38400 on the MightyFrames is also not needed except for
worst-case zmodem transfers where, in my tests, I encountered only ONE serial
port overrun in over 200MB file transfers during the tests.  And, again, from
a "person who should know", examination of the kernel source code verifies
that INPUT HFC is non-existent; output HFC is compiled-in and works just fine.

And as for the one abusive respondent who claims that 19200 baud works just
fine and that I was full of it, I wonder what he's smoking?  I have read in a
private email (cc'd to me) that with a 3-wire (2,3,7) direct 19200 baud serial
connection between his (the abusive respondent's) 3B1 and an NS32532-based
Minix machine he gets 700-750 cps; yeah, sure, that's a high-quality 19200
baud connection all right; hot damn!  :-)

And for those who've forgotten, here's the crucial (non-abusive) part of the
abusive respondent's posting to comp.sys.3b1 re: my original posting in regards
to 3B1 operation at 19200 baud:

	[...]
	Funny, my TrailBlazer Plus, still with creaky old 4.00 ROMs, has been
	running just fine with hardware flow control and an interface locked
	at 19,200 for several *years* now.  Currently traffic levels are near
	three-quarters of a *gigabyte* per month with nary a trace of data
	loss.  (There may be an occasional HFC hiccup but uucp's g protocol
	obviously deals with it and even at that the data rates show minimal
	impact from retries.  HDB uucp, BTW, not the stock garbage.)

	For the record, I happen to have some numbers for Tuesday of last
	week, an entirely ordinary day for ******** [site name deleted] with
	respect to uucp.  That day a total of 24 megabytes was transferred
	in and out over the modem in just over 7 hours.  Outbound traffic
	outweighed inbound by about 1.24 to 1 and these numbers include a
	non-trivial amount of 2400 baud traffic (perhaps even a trace of 1200)
	along with some PC Pursuit in addition to Telebit (PEP) links.  Despite
	all that when one accounts for g-protocol packet overhead the *average*
	rate is over 1,000 bytes per second.
	[...]

Using my calculator, 24MB during 7 hours equates to 952 cps, which is about
what I get with my T2500 locked at 9600 baud on the 3B1 for a mixture of email
and large uucp file transfers.  Even considering the 2400 and 1200 baud
connects, that is NOWHERE'S near the 1850+ cps over the modem (even faster
using direct hard-wired connections) I do get using systems that CAN and DO
support 19200 baud correctly (e.g. my Amigas (which are 68020-based) or the CT
MightyFrames (also 68020)).

A properly functioning 19200 baud connection is capable of "almost" 7MB per
hour (6.9MB actually) which, for 24MB, should have required only 3-1/2 hours
at most, half the 7 hours (or twice the speed) as stated in the posting extract
included above.

Twice as long is the same as half the speed.  Hmmm, half of 19200 is 9600, and
with 10 bits per character is 960 cps.  Double hmmm.

Phooie!  If you believe that 750 cps (or even 952 or 1000 cps) is a correctly
functioning 19200 baud connection, I have some ocean/beach-front property in
Arizona I'll be happy to sell you, or you can buy my option on the Brooklyn
Bridge for $1,000,000 in small, unmarked bills!  :-)

The 700-750 cps data rate is about the maximum my own VAX-11/780 systems
can pump out at 19200 with NO other load on the system.  If one is going to
test 19200 baud operation, one damn well better be using systems that are
CAPABLE of 19200 baud operation; the 3B1/UNIXPC is NOT.  As for testing, as
posted previously, I do have and use the proper test systems since I sell
commercial modem-interface products of my own design and manufacture to phone
companies, the US Govt, and others.

This is the real world, not a chapter from "Peter Pan"; closing one's eyes and
wishing real, *REAL* hard, is *NOT* going to make one's dreams come true.  As
much as I like the 3B1 (I have a LOT of them! :-), it simply does NOT support
19200 baud in a consistently reliable manner.

Sheesh, haven't YOU ever wondered WHY "19200" isn't in the 3B1's serial port
baud UA setup menu?

Thad Floryan [ thad@btr.com (OR) {decwrl, mips, fernwood}!btr!thad ]

gandrews@netcom.COM (Greg Andrews) (05/15/91)

In article <2793@public.BTR.COM> thad@public.BTR.COM (Thaddeus P. Floryan) writes:
>  [a whole lotta stuff <grin> but here's just a bit of it]
>
>And for those who've forgotten, here's the crucial (non-abusive) part of the
>abusive respondent's posting to comp.sys.3b1 re: my original posting in regards
>to 3B1 operation at 19200 baud:
>
>	[...]
>	Funny, my TrailBlazer Plus, still with creaky old 4.00 ROMs, has been
>	running just fine with hardware flow control and an interface locked
>	at 19,200 for several *years* now.  Currently traffic levels are near
>	three-quarters of a *gigabyte* per month with nary a trace of data
>	loss.  (There may be an occasional HFC hiccup but uucp's g protocol
>	obviously deals with it and even at that the data rates show minimal
>	impact from retries.  HDB uucp, BTW, not the stock garbage.)
>

Yeah, but I've already pointed out to him that saying "my HFC works with
UUCP transfers over a TrailBlazer PEP connection" doesn't prove anything.

Telebit modems in uucp spoofing mode won't feed packets to the receiving
computer faster than it can accept them.  That is, it feeds uucp packets
to the computer, then waits for the acks before it sends more.  The modem
negotiates "normal" size packets and windows (64 byte packets, window size
of 3) so the largest amount of data it could pour into the computer isn't
very large.  The modems flow control each other, and the sender's modem
can flow control the sender by withholding acks.  In other words, the modems
arrange things so it's almost impossible for them to overrun anyone's
buffers - whether hardware flow control is enabled or not.

The modems provide end-to-end flow control through the *uucp protocol*
rather than hardware handshaking.  It matters not whether hardware flow
control is perfect or broken - the modems don't use it when they're
spoofing uucp.

To summarize, using the results of PEP/uucp sessions will not tell you
how well your hardware flow control works.  It only tells you how well
the uucp spoofing works.

>
>Using my calculator, 24MB during 7 hours equates to 952 cps, which is about
>what I get with my T2500 locked at 9600 baud on the 3B1 for a mixture of email
>and large uucp file transfers.  
>

My back-of-the-envelope calculations came up with 1,000 cps, but it's still
the same conclusion - 24 megabytes in 7 hours is much closer to 9600 bps
than 19200...

>
>Thad Floryan [ thad@btr.com (OR) {decwrl, mips, fernwood}!btr!thad ]
>

-- 
 .------------------------------------------------------------------------.
 |  Greg Andrews   |       UUCP: {apple,amdahl,claris}!netcom!gandrews    |
 |                 |   Internet: gandrews@netcom.COM                      |
 `------------------------------------------------------------------------'

daveb@Ingres.COM (Dave Brower) (05/17/91)

In article <2793@public.BTR.COM>, thad@public.BTR.COM (Thaddeus P. Floryan) writes:
>Recent private conversations with "people who should know" (:-) have elicited
>the info that HFC on *INPUT* is *NOT* implemented in *ANY* kernel that CT has
>produced.  Many (most?) of CT's systems were 68020-based, and the *NEED* for
>HFC at 19200 was non-existent as proved by my own tests on two MightyFrames;
>as an aside, HFC at 38400 on the MightyFrames is also not needed except for
>worst-case zmodem transfers where, in my tests, I encountered only ONE serial
>port overrun in over 200MB file transfers during the tests.  And, again, from
>a "person who should know", examination of the kernel source code verifies
>that INPUT HFC is non-existent; output HFC is compiled-in and works just fine

Is there a chance that replacement tty drivers could make HFC work, or
would it be done somewhere else?  Not that re-writing a tty driver is
easy, but given some of the other things people have done, it may not be
out of the question.  I'm assuming by the quoted article that the HFC is
visible to the kernel, but that it is being ignored.

thanks,
-dB

bdb@becker.UUCP (Bruce D. Becker) (05/17/91)

In article <1991May15.002922.20778@netcom.COM> gandrews@netcom.COM (Greg Andrews) writes:
|
|Yeah, but I've already pointed out to him that saying "my HFC works with
|UUCP transfers over a TrailBlazer PEP connection" doesn't prove anything.
|[...]
|The modems provide end-to-end flow control through the *uucp protocol*
|rather than hardware handshaking.  It matters not whether hardware flow
|control is perfect or broken - the modems don't use it when they're
|spoofing uucp.
|[...]
|My back-of-the-envelope calculations came up with 1,000 cps, but it's still
|the same conclusion - 24 megabytes in 7 hours is much closer to 9600 bps
|than 19200...


	I run a TB+ at 19200 without flow control or
	locked interface speed, and with modem
	compression turned off.  The maximum throughput
	I seem to get over a clean line is nearer to
	1200 bytes/sec, but this is hardly ever achieved
	due to line noise and system loading.

	As far as I can see, HFC is just a big source of
	problems and should be avoided if at all possible.
	Even if it works correctly (which for many versions
	of the O/S it doesn't), it's very susceptible to
	"skid" at high baud rates. This means that the
	time between the detection of overrun and the
	assertion of the appropriate control line can
	be long enough under some load conditions that
	characters are still dropped. HFC isn't actually
	useful for UUCP transfers, and wreaks havoc with
	interactive users because the modem's buffering
	system has no concept of interrupt characters
	(like "^C") needing special handling.


	I've run the serial port (actually tty002, but
	that oughtn't to be a real difference) at 19200
	with a direct connection to a faster system
	(with respect to serial speed), using a protocol
	which sends a 1K block and gets an ACK in
	response. The fastest I can send stuff is about
	1300 bytes/sec, which I take to be the maximum
	rate at which characters can be delivered out
	the serial port (probably interrupt service
	overhead). No flow control is in effect during
	these transfers, yet the error rate is fairly
	low (consistent with an overlong cable in an
	electrically noisy environment)...


-- 
  ,u,	 Bruce Becker	Toronto, Ontario
a /i/	 Internet: bdb@becker.UUCP, bruce@gpu.utcs.toronto.edu
 `\o\-e	 UUCP: ...!utai!mnetor!becker!bdb
 _< /_	 "It's the death of the net as we know it (and I feel fine)" - R.A.M.

bruce@balilly (Bruce Lilly) (05/18/91)

In article <1991May13.010730.4743@fithp> mhw@fithp (Marc Weinstein) writes:
>
>First and foremost, the /etc/hfc_ctl program appears to ENABLE HFC, not turn
>it on.  It seems that this program (somehow) configures the driver so that

It uses an ioctl, which is listed in recent /usr/include/sys/termio.h's as:
#define TCHFCCTL	(TIOC|15)
It is not in old termio.h's.

ioctl(fd, TCHFCCTL, 1) is equivalent to /etc/hfc_ctl +/dev/ttyNNN
ioctl(fd, TCHFCCTL, 0) is equivalent to /etc/hfc_ctl -/dev/ttyNNN

-- 
	Bruce Lilly		blilly!balilly!bruce@sonyd1.Broadcast.Sony.COM

burris@highspl (David Burris) (05/18/91)

From article <1991May16.184350.27211@ingres.Ingres.COM>, by daveb@Ingres.COM (Dave Brower):
> In article <2793@public.BTR.COM>, thad@public.BTR.COM (Thaddeus P. Floryan) writes:
>>Recent private conversations with "people who should know" (:-) have elicited
>>the info that HFC on *INPUT* is *NOT* implemented in *ANY* kernel that CT has
>>produced.
> 
> Is there a chance that replacement tty drivers could make HFC work, or
> would it be done somewhere else?

Does anyone have the source code for the existing drivers? Also,
does the HARDWARE support HFC on input? Or more simply, are both the
RTS & CTS lines connected to the RS-232 and the UART? I'm
considering "hacking" up a device driver that uses both input and
output HFC. I simply CAN'T STAND not being able to use 19200 to talk
to a modem that cost half as much as my 7300.

I will be happy to make the driver public domain after it has been
tested, assuming of course I get it working.

-- 
================================================================
David Burris					Aurora,Il.
burris@highspl		           ..!linac!highspl!burris
================================================================

mhw@fithp (Marc Weinstein) (05/19/91)

From article <101141@becker.UUCP>, by bdb@becker.UUCP (Bruce D. Becker):
> In article <1991May15.002922.20778@netcom.COM> gandrews@netcom.COM (Greg Andrews) writes:
> 
> 	As far as I can see, HFC is just a big source of
> 	problems and should be avoided if at all possible.
> 	Even if it works correctly (which for many versions
> 	of the O/S it doesn't), it's very susceptible to
> 	"skid" at high baud rates. This means that the
> 	time between the detection of overrun and the
> 	assertion of the appropriate control line can
> 	be long enough under some load conditions that
> 	characters are still dropped.

We've found this is true at 19200, but not so at 9600.  I've had *lots*
of activity on the PC and can't seem to cause a file transfer to fail.

>       HFC isn't actually
> 	useful for UUCP transfers, and wreaks havoc with
> 	interactive users because the modem's buffering
> 	system has no concept of interrupt characters
> 	(like "^C") needing special handling.

Well, sort of...The place where HFC on the UNIXPC really works is when your
PC can send chars out faster than the remote modem can handle them.  For
instance, if either the DCE-to-DCE speed or the DCE-to-DTE speed on the far
end are less than the host DTE-to-DCE speed, then the modems will apply
HFC and the UNIXPC will properly halt data transmission.  HFC does NOT
seem to work for handling overflow on incoming PC ports.

> 	I've run the serial port (actually tty002, but
> 	that oughtn't to be a real difference) at 19200
> 	with a direct connection to a faster system
> 	(with respect to serial speed), using a protocol
> 	which sends a 1K block and gets an ACK in
> 	response. The fastest I can send stuff is about
> 	1300 bytes/sec, which I take to be the maximum
> 	rate at which characters can be delivered out
> 	the serial port (probably interrupt service
> 	overhead).

Yeah - we'd like to try this, but we only support the 'g' protocol in
UUCP.  Is this a UUCP connection you're talking about, or a UMODEM-type
transfer?

Marc

-- 
Marc Weinstein
{simon,royko,tellab5}!linac!fithp!mhw		Elmhurst, IL
-or- {internet host}!linac.fnal.gov!fithp!mhw

mhw@fithp (Marc Weinstein) (05/19/91)

From article <2793@public.BTR.COM>, by thad@public.BTR.COM (Thaddeus P. Floryan):
> In article <1991May13.010730.4743@fithp> mhw@fithp (Marc Weinstein) writes:
> 
> 	To all those who are still interested...
> 
> 	We've discovered quite a bit over the last few weeks about Hardware
> 	Flow Control on the 3b1.  If the below findings are common knowledge
> 	to some, such is life.  If anything is incorrect, please post a
> 	correction!
> 
> 	[...]
> 
> Sigh, no correction needed.

Hmmm - is this 'sigh' a show of disappointment in the PC, or a subtle flame?

> I've gained a new appreciation for how GOOD the 3B1/UNIXPC really is and how
> EASY it is to install and/or upgrade system software on the 3B1; those other
> systems suck dead bunnies through a straw in that regard, in my opinion.

I've never tried this.  I have some straws, and could find some dead bunnies.
Or, should we move this discussion to alt.sex.sucking.dead.bunnies??

Our findings seem to support most everything you say.  I'll post a "wrapup"
article outside of this reply.

-- 
Marc Weinstein
{simon,royko,tellab5}!linac!fithp!mhw		Elmhurst, IL
-or- {internet host}!linac.fnal.gov!fithp!mhw

mhw@fithp (Marc Weinstein) (05/19/91)

From article <1991May16.184350.27211@ingres.Ingres.COM>, by daveb@Ingres.COM (Dave Brower):
> In article <2793@public.BTR.COM>, thad@public.BTR.COM (Thaddeus P. Floryan) writes:
>>Recent private conversations with "people who should know" (:-) have elicited
>>the info that HFC on *INPUT* is *NOT* implemented in *ANY* kernel that CT has
>>produced.
>> ...
> 
> Is there a chance that replacement tty drivers could make HFC work, or
> would it be done somewhere else?  Not that re-writing a tty driver is
> easy, but given some of the other things people have done, it may not be
> out of the question.

It would really be a question of rewriting the incoming character offloading
code, to make it apply HFC when *some* condition arises.  That could be
when the interrupt buffer is seen to be getting full.  The problem is that
it assumes that the 3B1 can keep up.  If we were to achieve true 19200
throughput, which for simplicity we'll say is ~2000 Bps, this would imply
that the 3B1 would have to be able to keep up with interrupts being
generated 2000 times/sec, leaving some 500 microseconds to service each
interrupt.  My guess is this poses real problems for the microprocessor.
I don't think it can keep up.

The real solution would have to come in the way of buffered tty's or
a DMA for tty activity.

> I'm assuming by the quoted article that the HFC is
> visible to the kernel, but that it is being ignored.

No - with HFC properly turned on (see other articles) the kernel WILL
stop the flow of data on an outgoing port when the CTS signal is negated.
It's incoming data which is the problem.

-- 
Marc Weinstein
{simon,royko,tellab5}!linac!fithp!mhw		Elmhurst, IL
-or- {internet host}!linac.fnal.gov!fithp!mhw

car@ramecs.UUCP (Chris Rende) (05/20/91)

From article <1991May13.010730.4743@fithp>, by mhw@fithp (Marc Weinstein):
> Even if we checked immediately
> after calling the /etc/hfc_ctl program, the bit was NOT set.
> 
> First and foremost, the /etc/hfc_ctl program appears to ENABLE HFC, not turn
> it on.

My 3b1 (3.5 Foundation, 3.51 Development) doesn't have /etc/hfc_ctl.

Where does /etc/hfc_ctl come from? (I've asked before but never received a
response).

car.
-- 
Christopher A. Rende           Central Cartage (Nixdorf/Pyramid/SysVR2/BSD4.3)
uunet!edsews!rphroy!trux!car   Multics,DTSS,Unix,Shortwave,Scanners,UnixPC/3B1
car@trux.mi.org                Minix 1.2,PC/XT,Mac+,TRS-80 Model I,1802 ELF
trux!ramecs!car     "I don't ever remember forgetting anything." - Chris Rende

bdb@becker.UUCP (Bruce D. Becker) (05/20/91)

In article <1991May18.140028.11062@highspl> burris@highspl (David Burris) writes:
|
|Does anyone have the source code for the existing drivers? Also,
|does the HARDWARE support HFC on input? Or more simply, are both the
|RTS & CTS lines connected to the RS-232 and the UART? I'm
|considering "hacking" up a device driver that uses both input and
|output HFC. I simply CAN'T STAND not being able to use 19200 to talk
|to a modem that cost half as much as my 7300.

	I'm puzzled about your belief that HFC is
	a requirement for 19200 bps connections.
	This is certainly not the case as far as
	I can see. I've been using 19200 bps
	successfully for a long time without
	ever using HFC, as have many others.

Cheers,
-- 
  ,u,	 Bruce Becker	Toronto, Ontario
a /i/	 Internet: bdb@becker.UUCP, bruce@gpu.utcs.toronto.edu
 `\o\-e	 UUCP: ...!utai!mnetor!becker!bdb
 _< /_	 "It's the death of the net as we know it (and I feel fine)" - R.A.M.

beyo@beyonet.UUCP (Steve Urich) (05/21/91)

In article <354@ramecs.UUCP>, car@ramecs.UUCP (Chris Rende) writes:
> 
> My 3b1 (3.5 Foundation, 3.51 Development) doesn't have /etc/hfc_ctl.

	<*> /etc/hfc_ctl comes with the 3.51 Foundation.  To get HFC
	    on 3.0 or 3.5 you have to get the following programs from
	    osu-cis.

	    uucp osu-cis!~/att7300/STORE/HFC3.0+IN.Z   (3.0 foundation)
	    uucp osu-cis!~/att7300/STORE/HFC3.5+IN.Z   (3.5 foundation)

	    You might have to change the scripts around to suit your
	    needs.

					Steve WB3FTP
			wells!beyonet!beyo@dsinc.dsi.com

bdb@becker.UUCP (Bruce D. Becker) (05/21/91)

In article <1991May18.170853.2649@fithp> mhw@fithp (Marc Weinstein) writes:
|From article <101141@becker.UUCP>, by bdb@becker.UUCP (Bruce D. Becker):
|[...]
|>       HFC isn't actually
|> 	useful for UUCP transfers, and wreaks havoc with
|> 	interactive users because the modem's buffering
|> 	system has no concept of interrupt characters
|> 	(like "^C") needing special handling.
|
|Well, sort of...The place where HFC on the UNIXPC really works is when your
|PC can send chars out faster than the remote modem can handle them.  For
|instance, if either the DCE-to-DCE speed or the DCE-to-DTE speed on the far
|end are less than the host DTE-to-DCE speed, then the modems will apply
|HFC and the UNIXPC will properly halt data transmission.  HFC does NOT
|seem to work for handling overflow on incoming PC ports.

	I'm having a hard time understanding why
	speed changes are necessary. For most
	things compression is irrelevant, or they
	are done in the host as in news batches
	or uucp files. Doing compression in the
	modem seems wasteful of resources due
	to the fact that uncompressed data gets
	pumped thru the serial interface with
	an interrupt service routine invocation
	for each character!
	
	Naturally on a little beastie like the 3B1
	this is pretty ferocious CPU consumption
	at high baud rates. Better to have direct
	end-to-end transfers at the same speed,
	with no buffering in the modems. 


|> 	I've run the serial port (actually tty002, but
|> 	that oughtn't to be a real difference) at 19200
|> 	with a direct connection to a faster system
|> 	(with respect to serial speed), using a protocol
|> 	which sends a 1K block and gets an ACK in
|> 	response. The fastest I can send stuff is about
|> 	1300 bytes/sec, which I take to be the maximum
|> 	rate at which characters can be delivered out
|> 	the serial port (probably interrupt service
|> 	overhead).
|
|Yeah - we'd like to try this, but we only support the 'g' protocol in
|UUCP.  Is this a UUCP connection you're talking about, or a UMODEM-type
|transfer?

	Both, actually (UUCP 'g'). This is talking to
	an Amiga running Handshake terminal emulator
	(very fast), and Dillon UUCP 1.06.


-- 
  ,u,	 Bruce Becker	Toronto, Ontario
a /i/	 Internet: bdb@becker.UUCP, bruce@gpu.utcs.toronto.edu
 `\o\-e	 UUCP: ...!utai!mnetor!becker!bdb
 _< /_	 "It's the death of the net as we know it (and I feel fine)" - R.A.M.

mhw@fithp (Marc Weinstein) (05/22/91)

From article <354@ramecs.UUCP>, by car@ramecs.UUCP (Chris Rende):
> From article <1991May13.010730.4743@fithp>, by mhw@fithp (Marc Weinstein):
>> Even if we checked immediately
>> after calling the /etc/hfc_ctl program, the bit was NOT set.
>> 
>> First and foremost, the /etc/hfc_ctl program appears to ENABLE HFC, not turn
>> it on.
> 
> My 3b1 (3.5 Foundation, 3.51 Development) doesn't have /etc/hfc_ctl.

True.

> Where does /etc/hfc_ctl come from? (I've asked before but never received a
> response).

It comes with the 3.51 Foundation.

-- 
Marc Weinstein
{simon,royko,tellab5}!linac!fithp!mhw		Elmhurst, IL
-or- {internet host}!linac.fnal.gov!fithp!mhw

mhw@fithp (Marc Weinstein) (05/22/91)

From article <102086@becker.UUCP>, by bdb@becker.UUCP (Bruce D. Becker):
> In article <1991May18.140028.11062@highspl> burris@highspl (David Burris) writes:
> |Does anyone have the source code for the existing drivers? Also,
> |does the HARDWARE support HFC on input? Or more simply, are both the
> |RTS & CTS lines connected to the RS-232 and the UART?
> 
> 	I'm puzzled about your belief that HFC is
> 	a requirement for 19200 bps connections.
> 	This is certainly not the case as far as
> 	I can see.

Lucky you.

>       I've been using 19200 bps
> 	successfully for a long time without
> 	ever using HFC, as have many others.

Sending??  Receiving??  Idle system or lots of activity???

There are many factors here.  If you have very compressible data, and
your port rate is set to 19200 and you are using MNP5 or V.42bis, you
will see transfer rates getting close to 1900 Bps.  The UNIXPC can barely
pump out data that fast, let alone receive it.  However, if you are using
the 'g' protocol in UUCP, or XMODEM (anything which sends blocks of data
with ACKs in between), then you don't start to really push things.

Try using MNP5 plus the UUCP 'e' protocol, and start up two compiles
in the background.  If you can't make it croak, you are indeed lucky.

I'm beginning to think that there *ARE* select PCs out there which may
just be able to handle these higher throughputs.  I know of at least
one 3b1 which can communicate with a Sun using V.42bis, 19200 port rate,
and sees ~1800 Bps with no data corruption.  Mine can't do this, and I'm
not sure what the difference is.  Perhaps the port rate is not exactly
19200 on some systems - perhaps the CPU clock is just a bit faster.

Anyone care to speculate?

-- 
Marc Weinstein
{simon,royko,tellab5}!linac!fithp!mhw		Elmhurst, IL
-or- {internet host}!linac.fnal.gov!fithp!mhw

mhw@fithp (Marc Weinstein) (05/22/91)

From article <103431@becker.UUCP>, by bdb@becker.UUCP (Bruce D. Becker):
> In article <1991May18.170853.2649@fithp> mhw@fithp (Marc Weinstein) writes:
> |From article <101141@becker.UUCP>, by bdb@becker.UUCP (Bruce D. Becker):
> |
> |Well, sort of...The place where HFC on the UNIXPC really works is when your
> |PC can send chars out faster than the remote modem can handle them.  For
> |instance, if either the DCE-to-DCE speed or the DCE-to-DTE speed on the far
> |end are less than the host DTE-to-DCE speed, then the modems will apply
> |HFC and the UNIXPC will properly halt data transmission.  HFC does NOT
> |seem to work for handling overflow on incoming PC ports.
> 
> 	I'm having a hard time understanding why
> 	speed changes are necessary.

Most modems now support the ability to nail your port rate to something 
(9600 or 19200) and vary the modem-to-modem rate to suit the remote 
system.  This makes administration much easier - one rate in gettydefs,
one rate in your Systems file.
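
A locked-port-rate setup might look something like the following.  These
entries are illustrative only, not copied from a working 3b1, and the exact
fields vary with the UUCP flavor:

```
# /etc/gettydefs -- one fixed-speed entry, no speed cycling
19200# B19200 HUPCL # B19200 CS8 SANE HUPCL #login: #19200

# Systems -- dial out at the same fixed DTE rate; the modems
# negotiate the DCE-to-DCE speed on their own
remote Any ACU 19200 5551234 "" \r ogin:--ogin: uucp ssword: secret
```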

>       For most
> 	things compression is irrelevant, or they
> 	are done in the host as in news batches
> 	or uucp files.  Doing compression in the
> 	modem seems wasteful of resources due
> 	to the fact that uncompressed data gets
> 	pumped thru the serial interface with
> 	an interrupt service routine invocation
> 	for each character!

Hmmm - don't understand the logic here.  If I want to send a file to
someone, and I know my modem will compress the file anyway, then I don't
have to bother with compressing the file before the fact.  Less worry,
less bother.

> 	Naturally on a little beastie like the 3B1
> 	this is pretty ferocious CPU consumption
> 	at high baud rates. Better to have direct
> 	end-to-end transfers at the same speed,
> 	with no buffering in the modems. 

True, if both ends use the same speed.

-- 
Marc Weinstein
{simon,royko,tellab5}!linac!fithp!mhw		Elmhurst, IL
-or- {internet host}!linac.fnal.gov!fithp!mhw

burris@highspl (David Burris) (05/22/91)

From article <102086@becker.UUCP>, by bdb@becker.UUCP (Bruce D. Becker):
>
> 	I'm puzzled about your belief that HFC is
> 	a requirement for 19200 bps connections.
> 	This is certainly not the case as far as
> 	I can see. I've been using 19200 bps
> 	successfully for a long time without
> 	ever using HFC, as have many others.
> 

Don't I keep seeing that you are running UUCP 'g' protocol? 

I'm puzzled why you're puzzled! It's simple: my neighbors and I want
reliable transfers of any size file at high baud rates. We have been
experimenting with sending large files using an interface speed of
19200 with V.42bis and the UUCP 'e' protocol, and can demonstrate
repeatedly that it WILL NOT WORK. The receiving system CANNOT KEEP UP.

I have made considerable improvements and can transfer 100K
compressed news batches with no problem. The failure seems to occur
around 150K.

I am currently studying the kernel/driver code (not source) to find
out what's happening. So far, at 19200 data speeds, I've found a few
possible places where the SOFTWARE will drop entire clists on input
overflow.

Also, as was mentioned in a previous article, all that is necessary
is to find the system activities that can cause lost characters and
strategically toggle flow control before and after those activities.

Right now, I'm concentrating on the tty drivers and line discipline
to make sure that they don't silently drop clists any more. Instead
I would like to exert HFC and wait for clists to free.

As a start, make certain that you allocate enough clists for your
system and set the ttyhog tunable parameter to a high value. If the
number of input characters held in clists exceeds ttyhog, the entire
input buffer is silently flushed. On the 7300, the maximum for
ttyhog is 1024. This is less than one second's worth of buffering
using V.42bis at 19200. Although this may not be where the
characters are getting lost, it's a candidate.

-- 
================================================================
David Burris					Aurora,Il.
burris@highspl		           ..!linac!highspl!burris
================================================================

yarvin-norman@cs.yale.edu (Norman Yarvin) (05/23/91)

mhw@fithp (Marc Weinstein) writes:
>I'm beginning to think that there *ARE* select PCs out there which may
>just be able to handle these higher throughputs.  I know of at least
>one 3b1 which can communicate with a Sun using V.42bis, 19200 port rate,
>and sees ~1800 Bps with no data corruption.  Mine can't do this, and I'm
>not sure what the difference is.  Perhaps the port rate is not exactly
>19200 on some systems - perhaps the CPU clock is just a bit faster.

The device driver guide (documents/DDDguide.mm.Z on osu-cis) mentions
something which might be a reason for this.  Interrupts are handled by
having a linked list of interrupt service routines for each priority level;
when an interrupt of that level occurs the routines are called one after
another.  The ordering of this list can make a big difference, and loadable
device drivers install their interrupt service routines at the head of the
list.

On the other hand, the list is presumably uniform among all copies of any
given version of the OS, and is only changed when loadable device drivers
are installed.  Furthermore, loadable device drivers might use different
interrupt levels than those used by the serial port.

bruce@balilly (Bruce Lilly) (05/23/91)

In article <1991May22.041659.13189@fithp> mhw@fithp (Marc Weinstein) writes:
>
>I'm beginning to think that there *ARE* select PCs out there which may
>just be able to handle these higher throughputs.  I know of at least
>one 3b1 which can communicate with a Sun using V.42bis, 19200 port rate,
>and sees ~1800 Bps with no data corruption.  Mine can't do this, and I'm
>not sure what the difference is.  Perhaps the port rate is not exactly
>19200 on some systems - perhaps the CPU clock is just a bit faster.
>
>Anyone care to speculate?

OK (remember, this is speculation, not hard facts): According to the
Device Driver Development Guide, the last driver to be installed has its
interrupt service routine(s) placed at the beginning of the interrupt
"chain". If one is using a combo board, with few other drivers, and if the
cmb driver is last to be installed, it might result in somewhat better
interrupt response than if there are many loadable drivers (particularly
huge ones like the ether driver) with the cmb driver loaded early (so that
its interrupt routine only gets characters after the other routines have
been polled).  I'm not sure how the built-in driver for /dev/tty000 is
linked into the interrupt service chain, but that might be another
pertinent factor.

Other possible factors are the version of the OS, hardware revision
levels, kernel modifications (such as the patch to speed up response to
keyboard input, which might slow down response to tty interrupts), the
number of installed serial boards, the type of serial boards (EIA or
combo).

A few weeks ago I re-ordered the loadable drivers here to load the cmb
driver last, in order to see whether that makes any noticeable difference
in serial response on the combo boards, but I haven't had the time to
test the response yet.  However, the system has stayed up
since the change, with no noticeable problems:

balilly.UUCP  up 17+02:56,     1 user,   load    0.14,    0.23,    0.16

 DEVNAME  ID  BLK CHAR  LINE   SIZE    ADDR     FLAGS
    wind   0   -1    7   -1  0x9000   0x4d000 ALLOC BOUND 
   ether   1   -1   10   -1 0x13000  0x360000 ALLOC BOUND 
    nipc   2   -1   -1   -1  0x7000  0x373000 ALLOC BOUND 
      xt   3   -1    9    1  0x3000  0x37a000 ALLOC BOUND 
     dup   4   -1   18   -1  0x1000   0x5f000 ALLOC BOUND 
     cmb   5   -1   -1   -1  0x3000  0x37d000 ALLOC BOUND 

-- 
	Bruce Lilly		blilly!balilly!bruce@sonyd1.Broadcast.Sony.COM

dnichols@ceilidh.beartrack.com (DoN Nichols) (05/23/91)

In article <1991May22.042143.13250@fithp> mhw@fithp (Marc Weinstein) writes:
>From article <103431@becker.UUCP>, by bdb@becker.UUCP (Bruce D. Becker):

	[ ... ]

>>       For most
>> 	things compression is irrelevant, or they
>> 	are done in the host as in news batches
>> 	or uucp files.  Doing compression in the
>> 	modem seems wasteful of resources due
>> 	to the fact that uncompressed data gets
>> 	pumped thru the serial interface with
>> 	an interrupt service routine invocation
>> 	for each character!
>
>Hmmm - don't understand the logic here.  If I want to send a file to
>someone, and I know my modem will compress the file anyway, then I don't
>have to bother with compressing the file before the fact.  Less worry,
>less bother.

	If you pre-compress the file, that's fewer characters through your
serial interface, and therefore fewer interrupts.  And, since the
transmission speed between modems is fixed (after compression), that means
fewer interrupts per unit time (on average).  It's the high rate of interrupts
that really brings the system to its knees.

	Since the ethernet interface is also a source of interrupts, (I
think at the same or higher priority), having it running at the same time as
a high-speed uucp transfer can cause lost characters.  (I can get away with
running the interface to my TeleBit at 19200 IF the ethernet is stopped, and
the system at the other end is running at 9600, so the average data rate is
slow enough to let the 3B1 empty the buffers.)  If ethernet is running, I
get lots of retries, and eventual timeouts and failures to the same system at
19200, but it survives with the 3B1's interface speed at 9600.  If I try to
go to the uucp 'e' protocol, I get failures at 9600 with the ethernet
running.  I haven't tried it with the ethernet stopped.  The system at the
other end is a PC/RT running AIX, and the serial interface is a dumb
multi-port board, so IT can't keep up with 19200, therefore it doesn't hurt
much to tie my system to 9600 :-)

	[ ... ]

	Good Luck
		DoN.
-- 
Donald Nichols (DoN.)		| Voice (Days):	(703) 664-1585
D&D Data			| Voice (Eves):	(703) 938-4564
Disclaimer: from here - None	| Email:     <dnichols@ceilidh.beartrack.com>
	--- Black Holes are where God is dividing by zero ---

res@colnet.uucp (Rob Stampfli) (05/24/91)

>Well, sort of...The place where HFC on the UNIXPC really works is when your
>PC can send chars out faster than the remote modem can handle them.  For
>instance, if either the DCE-to-DCE speed or the DCE-to-DTE speed on the far
>end are less than the host DTE-to-DCE speed, then the modems will apply
>HFC and the UNIXPC will properly halt data transmission.  HFC does NOT
>seem to work for handling overflow on incoming PC ports.

The above comment seems to have the support of a number of people who have
played around with hardware flow control.  I was just perusing "Managing
UUCP and Usenet" by O'Reilly & Associates, and here is what they have to
say about this (brackets mine):

  "In the RS-232 standard, [ hardware ] flow control is defined only for
  half-duplex connections -- that is, for connections in which data can be
  transmitted only in one direction at a time.  However, the standard has
  been adapted, de-facto, for full-duplex communications as well.

  "In the half-duplex standard, the DTE [ computer ] asserts RTS when it
  wants to send data.  The DCE [ modem ] replies with CTS when it is ready,
  and the DTE begins sending data.  Unless RTS and CTS are both asserted,
  only the DCE can send data.

  "However, in the full-duplex variations, RTS/CTS is used as a kind of
  throttle.  The signals have the opposite meanings than they do for
  half-duplex communications.

  "When a DTE device is able to accept data, it asserts pin 4, Request to
  Send.  If the DCE is ready to accept data, it asserts pin 5, Clear to
  Send.  If the voltage on RTS or CTS drops at any time, this tells the
  sending system that the receiver is not ready for more data...

This seems to agree with what the poster says above.  Could it be that
AT&T implemented the half-duplex standard, which deals only with DTE to DCE
flow control?  I have always assumed HFC worked like what was described as
the full-duplex variation, but maybe this is not the case.   It would be
interesting to hear from someone more well versed in the implementation
of the standards.

PS:  The short excerpt from "Managing UUCP and Usenet" is typical of the
calibre of information contained in this publication.  I would recommend it
highly to anyone owning a Unix-PC.  It is indeed the UUCP "bible".
-- 
Rob Stampfli, 614-864-9377, res@kd8wk.uucp (osu-cis!kd8wk!res), kd8wk@n8jyv.oh

bruce@balilly (Bruce Lilly) (05/25/91)

In article <1991May23.133742.560@ceilidh.beartrack.com> dnichols@ceilidh.beartrack.com (DoN Nichols) writes:
>
>	Since the ethernet interface is also a source of interrupts, (I

Just out of curiosity, do you have SQE enabled or disabled on your MAU (or
AUI, depending on which acronym-of-the-week club you prefer)?
-- 
	Bruce Lilly		blilly!balilly!bruce@sonyd1.Broadcast.Sony.COM

bruce@balilly (Bruce Lilly) (05/25/91)

In article <1991May24.054753.28804@colnet.uucp> res@colnet.uucp (Rob Stampfli) writes:
>
>  "However, in the full-duplex variations, RTS/CTS is used as a kind of
>  throttle.  The signals have the opposite meanings than they do for
>  half-duplex communications.
>
>  "When a DTE device is able to accept data, it asserts pin 4, Request to
>  Send.  If the DCE is ready to accept data, it asserts pin 5, Clear to
>  Send.  If the voltage on RTS or CTS drops at any time, this tells the
>  sending system that the receiver is not ready for more data...
>
>This seems to agree with what the poster says above.  Could it be that
>AT&T implemented the half-duplex standard, which deals only with DTE to DCE
>flow control?  I have always assumed HFC worked like what was described as
>the full-duplex variation, but maybe this is not the case.   It would be
>interesting to hear from someone more well versed in the implementation
>of the standards.

HFC in the 3b1 (at least with 3.51m) works as described above
(full-duplex). There is, however a TCSRTS ioctl which only works in
half-duplex.

-- 
	Bruce Lilly		blilly!balilly!bruce@sonyd1.Broadcast.Sony.COM

mhw@fithp (Marc Weinstein) (05/25/91)

From article <1991May23.003857.8878@blilly.UUCP>, by bruce@balilly (Bruce Lilly):
> In article <1991May22.041659.13189@fithp> mhw@fithp (Marc Weinstein) writes:
>>
>>I'm beginning to think that there *ARE* select PCs out there which may
>>just be able to handle these higher throughputs.  I know of at least
>>one 3b1 which can communicate with a Sun using V.42bis, 19200 port rate,
>>and sees ~1800 Bps with no data corruption.
> 
> OK (remember, this is speculation, not hard facts): According to the
> Device Driver Development Guide, the last driver to be installed has its
> interrupt service routine(s) placed at the beginning of the interrupt
> "chain".

How do you reorder the device drivers in the chain?

> I'm not sure how the built-in driver for /dev/tty000 is
> linked into the interrupt service chain, but that might be another
> pertinent factor.

Hmmm - we use tty000.  We could switch to a combo board tty, but we
figured that tty000 *should* have the best response.

> Other possible factors are the version of the OS, hardware revision
> levels, kernel modifications

Has anyone seen any correlation with any ktune parameters??

> (such as the patch to speed up response to
> keyboard input, which might slow down response to tty interrupts),

We were under the impression that this patch was NOT just for keyboard
input, but was for ANY character I/O device.  Is this not the case?

We could swear our tty I/O improved with the patch in place.

-- 
Marc Weinstein
{simon,royko,tellab5}!linac!fithp!mhw		Elmhurst, IL
-or- {internet host}!linac.fnal.gov!fithp!mhw

mhw@fithp (Marc Weinstein) (05/25/91)

From article <1991May23.133742.560@ceilidh.beartrack.com>, by dnichols@ceilidh.beartrack.com (DoN Nichols):
> In article <1991May22.042143.13250@fithp> mhw@fithp (Marc Weinstein) writes:
>>From article <103431@becker.UUCP>, by bdb@becker.UUCP (Bruce D. Becker):
> 
>>> 	Doing compression in the
>>> 	modem seems wasteful of resources due
>>> 	to the fact that uncompressed data gets
>>> 	pumped thru the serial interface with
>>> 	an interrupt service routine invocation
>>> 	for each character!
>>
>>Hmmm - don't understand the logic here.  If I want to send a file to
>>someone, and I know my modem will compress the file anyway, then I don't
>>have to bother with compressing the file before the fact.  Less worry,
>>less bother.
> 
> 	If you pre-compress the file, that's fewer characters through your
> serial interface, and therefore fewer interrupts.  And, since the
> transmission speed between modems is fixed (after compression), the fewer 
> interrupts per unit time (on the average).  It's the high rate of interrupts
> that really brings the system to its knees.

True enough.  In this context, I agree.  I just don't see this as an
argument for turning off compression in the modem by default.

-- 
Marc Weinstein
{simon,royko,tellab5}!linac!fithp!mhw		Elmhurst, IL
-or- {internet host}!linac.fnal.gov!fithp!mhw

bruce@balilly (Bruce Lilly) (05/25/91)

In article <1991May25.062742.22291@fithp> mhw@fithp (Marc Weinstein) writes:
>From article <1991May23.003857.8878@blilly.UUCP>, by bruce@balilly (Bruce Lilly):
>> 
>> OK (remember, this is speculation, not hard facts): According to the
>> Device Driver Development Guide, the last driver to be installed has its
>> interrupt service routine(s) placed at the beginning of the interrupt
>> "chain".
>
>How do you reorder the devices drivers in the chain?

To set the order of loadable device drivers on bootup, simply edit
/etc/lddrv/drivers, placing the driver names in the desired order. To do
it without rebooting, you'll have to manually run /etc/lddrv/lddrv to
remove the drivers, then reload them. Consult TFM for details.

-- 
	Bruce Lilly		blilly!balilly!bruce@sonyd1.Broadcast.Sony.COM

dnichols@ceilidh.beartrack.com (DoN Nichols) (05/26/91)

In article <1991May24.220705.14489@blilly.UUCP> bruce@balilly (Bruce Lilly) writes:
>In article <1991May23.133742.560@ceilidh.beartrack.com> dnichols@ceilidh.beartrack.com (DoN Nichols) writes:
>>
>>	Since the ethernet interface is also a source of interrupts, (I
>
>Just out of curiosity, do you have SQE enabled or disabled on your MAU (or
>AUI, depending on which acronym-of-the-week club you prefer)?

	If that translates to the ethernet transceiver connected to the
thick-wire cable, it is disabled on every one on my net.  As I understand,
it is used for some systems (e.g. DEC) which are unhappy if it is not
present, (they call it a heartbeat).  Since I have no system which says that
it requires the SQE, I saw no reason to add unnecessary packets to any part
of the system.  (Does it propagate through the net, or is it only back down
the DB-15 cable to the ethernet card in the system?)

	I also have trailers enabled on everything, since nothing is
allergic to them.

	System consists of (at present) two active UNIX-PCs (one 7300, one
3b1), one Tektronix 6130, and one Sun 2/120.  Both the Tektronix and the Sun
are running BSD4.2-derived code, so the WillGoWrong software should be right
at home. :-)

	If I need 19200 with HFC, I guess I'll try moving the TrailBlazer to
the Sun.  Maybe it can handle it.  Same cpu, same clock speed, a bit faster
on the dhrystones, perhaps a more intelligent serial card.
-- 
Donald Nichols (DoN.)		| Voice (Days):	(703) 664-1585
D&D Data			| Voice (Eves):	(703) 938-4564
Disclaimer: from here - None	| Email:     <dnichols@ceilidh.beartrack.com>
	--- Black Holes are where God is dividing by zero ---

bruce@balilly (Bruce Lilly) (05/27/91)

In article <1991May25.195809.10314@ceilidh.beartrack.com> dnichols@ceilidh.beartrack.com (DoN Nichols) writes:
>In article <1991May24.220705.14489@blilly.UUCP> bruce@balilly (Bruce Lilly) writes:
>>Just out of curiosity, do you have SQE enabled or disabled on your MAU (or
>>AUI, depending on which acronym-of-the-week club you prefer)?
>
>	If that translates to the ethernet transceiver connected to the
>thick-wire cable, it is disabled on every one on my net.  As I understand,

Thick, thin, or twisted.

>  (Does it propagate through the net, or is it only back down
>the DB-15 cable to the ethernet card in the system?)

Just to the interface card.

-- 
	Bruce Lilly		blilly!balilly!bruce@sonyd1.Broadcast.Sony.COM

burris@highspl (David Burris) (05/28/91)

From article <1991May23.003857.8878@blilly.UUCP>, by bruce@balilly (Bruce Lilly):
> OK (remember, this is speculation, not hard facts): According to the
> Device Driver Development Guide, the last driver to be installed has its
> interrupt service routine(s) placed at the beginning of the interrupt
> "chain". If one is using a combo board, with few other drivers, and if the
> cmb driver is last to be installed, it might result in somewhat better
> interrupt response than if there are many loadable drivers (particularly
> huge ones like the ether driver) with the cmb driver loaded early (so that
> its interrupt routine only gets characters after the other routines have
> been polled).  I'm not sure how the built-in driver for /dev/tty000 is
> linked into the interrupt service chain, but that might be another
> pertinent factor.
> 

I don't understand this and I invite you to enlighten me.

If we assume a steady stream of data and understand that all the
interrupt routines must be "polled" for each interrupt, where is the
time savings?


-- 
================================================================
David Burris					Aurora,Il.
burris@highspl		           ..!linac!highspl!burris
================================================================

bruce@balilly (Bruce Lilly) (05/30/91)

In article <1991May28.035153.544@highspl> burris@highspl (David Burris) writes:
>
>I don't understand this and I invite you to enlighten me.
>
>If we assume a steady stream of data and understand that all the
>interrupt routines must be "polled" for each interrupt, where is the
>time savings?

Assume drivers are polled in the order
driver1->driver2->driver3->driver4. A finite amount of time is required
for each driver to determine whether it is responsible for handling an
interrupt (i.e. polling the hardware). If driver4 has the interrupt
handler, the interrupt will not be serviced until after driver1, driver2,
and driver3 have determined that there are no interrupts for them. On the
other hand, if the order is driver4->driver3->driver2->driver1, then
driver4 can handle interrupts in a more timely manner.

Note that if there are many interrupts from different devices at the same
interrupt level, some of the earlier drivers in the chain may find that
there are interrupts for them that require servicing, further delaying
drivers that are later in the chain.

Also note that if a non-maskable interrupt, or a high-priority interrupt
occurs while early drivers are checking for an interrupt at a low
priority, later drivers in the low-priority chain will also be delayed
while the high-priority interrupt is serviced.

Once an interrupt has been serviced, there is no need for other drivers to
poll hardware unless there is another device which requires service at the
same interrupt priority.

-- 
	Bruce Lilly		blilly!balilly!bruce@sonyd1.Broadcast.Sony.COM

jmm@eci386.uucp (John Macdonald) (06/01/91)

In article <1991May29.225254.27852@blilly.UUCP> bruce@balilly (Bruce Lilly) writes:
|In article <1991May28.035153.544@highspl> burris@highspl (David Burris) writes:
|>
|>I don't understand this and I invite you to enlighten me.
|>
|>If we assume a steady stream of data and understand that all the
|>interrupt routines must be "polled" for each interrupt, where is the
|>time savings?
|
|Assume drivers are polled in the order
|driver1->driver2->driver3->driver4. A finite amount of time is required
|for each driver to determine whether it is responsible for handling an
|interrupt (i.e. polling the hardware). If driver4 has the interrupt
|handler, the interrupt will not be serviced until after driver1, driver2,
|and driver3 have determined that there are no interrupts for them. On the
|other hand, if the order is driver4->driver3->driver2->driver1, then
|driver4 can handle interrupts in a more timely manner.
|
|Note that if there are many interrupts from different devices at the same
|interrupt level, some of the earlier drivers in the chain may find that
|there are interrupts for them that require servicing, further delaying
|drivers that are later in the chain.

There are some significant choices to be made when the kernel is being
set up.  The routine used to process shared interrupts has to choose
the answer to a number of questions, in particular:

    (1) What order to poll the low level drivers?
	Usually they are polled in the same order every time (scan a
	list in order).  An alternative is to keep the list in a loop
	and have some algorithm choose where to start (usually this
	is either the device that last interrupted or the device that
	has least recently been polled).

    (2) When to quit?
	This can have such possibilities as: quit after the first
	driver has processed an interrupt, quit after all drivers
	have been polled once, quit after all drivers have denied
	having an outstanding interrupt.

(DISCLAIMER: I don't know how these issues were resolved for the
3b1 - I've looked into them for other Unix kernels on similar
hardware.)

In David's original question, I think that the word "all" means that
he is talking only about cases in which "all" drivers get polled for
each interrupt (at least once).

Bruce's answer is right for those cases where the scanner quits
after the first driver admits that the interrupt was for it.

David is right that there is no difference in potential delay if all
devices are scanned for each interrupt.  The time between the point
when a device raises an interrupt and the time when the driver gets
invoked includes the time required to poll all drivers later in the
chain for a previous interrupt that is still being processed as well
as the time to process any outstanding interrupts that there may be
for drivers that are earlier on the chain.  There is some bias based on
position, but it disappears as soon as there are bursts of three or
more fast incoming interrupts for the same device (and if you get
two fast enough to lose a character then you are almost always in a
situation where there will be a third coming at about the same
speed).

The "right" answer to the kernel design choices listed above depends
upon a number of things
    - how much buffering do the hardware devices have
    - are all devices connected to the same interrupt level really
	of equal importance [this is often impossible to determine
	completely until you look at the final layout in the end
	customer's site]
    - can any or all of the devices afford to miss an interrupt (a
	mouse port that occasionally loses an interrupt won't be
	noticed - so what if you sometimes have to move the mouse
	27 millimeters and sometimes only 23 mm - so it could be
	put at the low end of an unfair scanner quite well).
    - are there any special hardware requirements (like having a
	programmable interrupt controller that expects individual
	acknowledgements and then tries to raise another interrupt
	if there are additional devices ready)
-- 
Usenet is [like] the group of people who visit the  | John Macdonald
park on a Sunday afternoon. [...] luckily, most of  |   jmm@eci386
the people are staying on the paths and not pissing |
on the flowers - Gene Spafford

bruce@balilly (Bruce Lilly) (06/01/91)

In article <1991May31.185952.4619@eci386.uucp> jmm@eci386.UUCP (John Macdonald) writes:
>
>    (2) When to quit?
>	This can have such possibilities as: quit after the first
>	driver has processed an interrupt, quit after all drivers
>	have been polled once, quit after all drivers have denied
>	having an outstanding interrupt.
>
>(DISCLAIMER: I don't know how these issues were resolved for the
>3b1 - I've looked into them for other Unix kernels on similar
>hardware.)

Quitting after the first driver has processed an interrupt does not
gracefully handle the case where two or more devices have pending
interrupts at the same interrupt level.

The 68000 series interrupt hardware/software uses 3 interrupt lines to
encode an interrupt priority level (7 possible, as well as a no-interrupt
condition). When a device, such as a serial interface controller chip,
wishes to generate an interrupt, it typically drives one of its pins low.
This signal then usually goes to an interrupt controller, which converts
that one signal into the three which the CPU uses. In the 3B1, according
to the Device Driver Guide, only two interrupt levels are used, so the
interrupt level encoder might not be required. When the interrupting
device has had its interrupt serviced, as in reading the data register on
a serial controller, that device's interrupt output pin is no longer
driven. If there is another device at the same interrupt level, the CPU
will still see an interrupt condition. If all devices at the same level
with pending interrupts have been serviced, the interrupt priority visible
to the CPU will have changed (if there are no pending interrupts at any
priority, the CPU will see a no-interrupt condition -- if a higher-priority
group of interrupts has been serviced, and there exists one or more
devices with pending interrupts at a lower priority, the CPU will see the
lower priority).

Assuming the interrupt servicing code in the kernel does things in a
logical manner, service routines in the interrupt handling chain need only
be polled until there are no more pending interrupts at the priority level
of the chain. If the 3B1's kernel continues through the chain, that could
be a source of poor interrupt response time when many device drivers are
loaded (it could also be fixed).

An earlier poster had raised a question about the kernel modification
which was intended to improve response time. My recollection is that that
involved a change to the clock interrupt, which would cause *output*
characters to be sent more frequently. The extra overhead in so doing
could very well delay handling of *incoming* characters at a lower (serial
card) priority.

Perhaps the fellow who posted a day or so ago that he worked on the 3.51
kernel could confirm how it handles these interrupts, and whether the
interrupt service routines are ordered in a last-loaded, first-serviced
manner as described in the Device Driver Guide (which was written for an
earlier version of the OS).
-- 
	Bruce Lilly		blilly!balilly!bruce@sonyd1.Broadcast.Sony.COM

jmm@eci386.uucp (John Macdonald) (06/06/91)

In article <1991Jun1.113531.5096@blilly.UUCP> bruce@balilly (Bruce Lilly) writes:
|In article <1991May31.185952.4619@eci386.uucp> jmm@eci386.UUCP (John Macdonald) writes:
|>
|>    (2) When to quit?
|>	This can have such possibilities as: quit after the first
|>	driver has processed an interrupt, quit after all drivers
|>	have been polled once, quit after all drivers have denied
|>	having an outstanding interrupt.
|>
|>(DISCLAIMER: I don't know how these issues were resolved for the
|>3b1 - I've looked into them for other Unix kernels on similar
|>hardware.)
|
|Quitting after the first driver has processed an interrupt does not
|gracefully handle the case where two or more devices have pending
|interrupts at the same interrupt level.

As I said, it is a design decision - i.e. there is no right answer,
just the answer that best matches the circumstances.  Quitting after
the first driver has processed an interrupt causes an immediate new
interrupt to occur and another (full) scan of the list of devices [in
the case where there are multiple simultaneous equal level interrupts].
Whether this is a good choice depends upon how often multiple shared
interrupts occur on that particular system/interrupt.  If never, then
the immediate return is a clear win - interrupt processing will take
less time in many cases, and never take more.  If rarely, then it may
still be worth taking the immediate return - the increased delay when
simultaneous interrupts occur may be small compared to the savings
when they don't, as long as there is no danger of losing interrupts
or taking too long to respond when they do occur.
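That tradeoff can be made concrete with a small simulation (invented names throughout; this models the design choice, not the 3B1's actual code). The outer loop plays the part of the level-triggered line re-interrupting the CPU, and the counters show the cost: each re-interrupt restarts a full scan from the head of the chain.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the "quit after the first serviced interrupt"
 * strategy.  On a level-triggered system, returning while another device
 * still asserts the line just causes an immediate new interrupt. */

#define NDEV 3
static bool pending[NDEV];
static int polls;          /* total handler polls made */
static int interrupts;     /* times the "CPU" took the interrupt */

static bool any_pending(void)
{
    for (int i = 0; i < NDEV; i++)
        if (pending[i])
            return true;
    return false;
}

/* One interrupt entry: scan until the first device claims it, then return. */
static void take_interrupt_quit_first(void)
{
    interrupts++;
    for (int i = 0; i < NDEV; i++) {
        polls++;
        if (pending[i]) {
            pending[i] = false;   /* service it */
            return;
        }
    }
}

/* Level-triggered line: re-enter as long as anything is still asserted. */
static void run_until_quiet(void)
{
    while (any_pending())
        take_interrupt_quit_first();
}
```

With three simultaneous interrupts this costs 1+2+3 = 6 polls across three interrupt entries; with the common single-interrupt case it costs exactly one poll, which is the "clear win" described above.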

Unless I am remembering wrong, there is no way to check which interrupt
lines are still pending on the 68000 family (unless there is an interrupt
controller chip driving the lines that can be explicitly interrogated),
so the various choices I outlined are the alternatives available.  (You
could view the immediate return method as a way of checking the interrupt
lines - return from interrupt if the line is clear, restart interrupt
processing if the line is still set.)

|The 68000 series interrupt hardware/software uses 3 interrupt lines to
|encode an interrupt priority level (7 possible, as well as a no-interrupt
|condition). When a device, such as a serial interface controller chip,
|wishes to generate an interrupt, it typically drives one of its pins low.
|This signal then usually goes to an interrupt controller, which converts
|that one signal into the three which the CPU uses. In the 3B1, according
|to the Device Driver Guide, only two interrupt levels are used, so the
|interrupt level encoder might not be required. When the interrupting
|device has had its interrupt serviced, as in reading the data register on
|a serial controller, that device's interrupt output pin is no longer
|driven. If there is another device at the same interrupt level, the CPU
|will still see an interrupt condition. If all devices at the same level
|with pending interrupts have been serviced, the interrupt priority visible
|to the CPU will have changed (if there are no pending interrupts at any
|priority, the CPU will see a no-interrupt condition -- if a higher-priority
|group of interrupts has been serviced, and there exists one or more
|devices with pending interrupts at a lower priority, the CPU will see the
|lower priority).

Yep, the 68000 family uses level triggered interrupts - some other systems
use edge triggered interrupts.  This is a factor in the design choices.
With edge triggered interrupts, it is necessary to check all devices in
case more than one has triggered the interrupt.  With level triggered
interrupts, there is the option of the immediate return because any device
that simultaneously triggered the interrupt will *continue* to hold the
level set until it gets properly handled.
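The edge-versus-level distinction can be shown with a tiny simulation (all names invented). With an edge-triggered line the CPU sees only the transition, so if two devices fire together and the handler returns after servicing one, the second is stranded; with a level-triggered line the still-asserted device simply re-interrupts:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch: why edge-triggered interrupts force a full scan
 * while level-triggered interrupts permit the immediate-return choice. */

static bool dev_pending[2];

/* Service exactly one device, then return ("quit after first"). */
static void handler_quit_first(void)
{
    for (int i = 0; i < 2; i++)
        if (dev_pending[i]) { dev_pending[i] = false; return; }
}

/* Edge-triggered: one edge, one handler invocation, no retrigger. */
static bool edge_loses_interrupt(void)
{
    dev_pending[0] = dev_pending[1] = true;  /* simultaneous assertion */
    handler_quit_first();                    /* single edge seen */
    return dev_pending[0] || dev_pending[1]; /* one device stranded? */
}

/* Level-triggered: handler re-entered while the line stays asserted. */
static bool level_loses_interrupt(void)
{
    dev_pending[0] = dev_pending[1] = true;
    while (dev_pending[0] || dev_pending[1]) /* line still asserted */
        handler_quit_first();
    return dev_pending[0] || dev_pending[1];
}
```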

Such hardware design considerations lead to many software people having
a distinct preference for Motorola processors over Intel.

|Assuming the interrupt servicing code in the kernel does things in a
|logical manner, service routines in the interrupt handling chain need only
|be polled until there are no more pending interrupts at the priority level
|of the chain. If the 3B1's kernel continues through the chain, that could
|be a source of poor interrupt response time when many device drivers are
|loaded (it could also be fixed).

How do you tell if there is another pending interrupt?  I don't recall
any processor status bits showing the state of those lines.  Either you
poll devices until you think you must have gotten them all, or you return
from the interrupt processing and let another interrupt come in again
immediately if there is another outstanding.  (Polling all devices is
no guarantee that there will not be such an immediate interrupt - you
can have a device raise its interrupt line *after* you have polled it,
while you are still polling a later device...)
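That race can also be sketched in a few lines of C (names invented): device A asserts its line while device B, polled later in the same pass, is being serviced, so even a complete poll-all pass ends with the level-triggered line still raised and a new interrupt taken at once.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch: one poll-all pass cannot guarantee the line is
 * quiet afterward, because a device can assert after it was polled. */

static bool a_pending, b_pending;

static bool poll_a(void)
{
    if (a_pending) { a_pending = false; return true; }
    return false;
}

static bool poll_b(void)
{
    if (b_pending) {
        b_pending = false;
        a_pending = true;   /* A interrupts while B is being serviced */
        return true;
    }
    return false;
}

/* One full pass over the (two-entry) chain, A first. */
static bool line_asserted_after_pass(void)
{
    poll_a();
    poll_b();
    return a_pending || b_pending;   /* level-triggered line state */
}
```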
-- 
Usenet is [like] the group of people who visit the  | John Macdonald
park on a Sunday afternoon. [...] luckily, most of  |   jmm@eci386
the people are staying on the paths and not pissing |
on the flowers - Gene Spafford