[comp.protocols.appletalk] does Mac barge into flashtalk?

paul@taniwha.UUCP (Paul Campbell) (02/26/89)

In article <20838@agate.BERKELEY.EDU> jim@insect.berkeley.edu () writes:
>A short piece on page 12 of InfoWorld, Feb. 20, 1989, by Mark Stephens
>entitled: "Tutorial. Appletalk Standard Present in All Macintosh Computers;
>But Access Has Its Price: Network Software is Slow" states that
>
>     "The major reason Appletalk is slow is that the Zilog serial
>     chip can be overwhelmed by clock pulses that arrive at rates
>     faster than 230.8 kbps.  When data is sent any faster than
>     that over the network,  Macs are prone to not detecting the
>     clock pulse, deciding there is no traffic, and barging into
>     conversations."
>
>Is there any truth to this?  If so, what implications does this have for
>protocols like TOPS Flashtalk on the same network as Macs?

What it really means is that the clock recovery circuitry on the SCC
(the phase-locked loop) doesn't work at much over 230k .... I think that
FlashTalk and DaynaTalk are both just external faster PLLs and clocks
(probably 2 chips and a crystal) plus extra software.

Mostly it means that when an AppleTalk packet is received with CRC errors or
timeouts (i.e. a 'faster' packet on a slower interface, or vice versa) it is
simply discarded. When you go to send something on the net, you send a pulse
on the wire just before starting the first packet; this is intended to force
a collision on anyone else who is also trying to start sending a packet.
(This is because LocalTalk hardware can't really do collision detection the
way Ethernet can.)
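
In very rough C, the behaviour looks something like this (just a sketch of
the sequence described above, not Apple's driver code; the scc_*() helpers
are made-up stand-ins for the real hardware interface):

#include <stdio.h>

/* Hypothetical hardware hooks, standing in for the real SCC interface. */
static int  scc_line_idle(void)            { return 1; }
static void scc_sync_pulse(void)           { puts("sync pulse"); }
static void scc_send_frame(const char *f)  { printf("send %s\n", f); }
static int  scc_crc_ok(const char *f)      { return f != 0; }

/* Sender: pulse the wire first so anyone else about to transmit sees a
 * collision and backs off, then send the frame.  There is no collision
 * detection during the frame itself.                                   */
int llap_send(const char *frame)
{
    if (!scc_line_idle())
        return -1;            /* someone else is talking: defer         */
    scc_sync_pulse();         /* force a collision on other would-be    */
                              /* senders that are also about to start   */
    scc_send_frame(frame);
    return 0;
}

/* Receiver: a frame clocked at the wrong speed just shows up as a CRC
 * error (or a timeout) and is quietly thrown away.                     */
void llap_receive(const char *frame)
{
    if (!scc_crc_ok(frame))
        return;               /* bad CRC: discard                       */
    printf("deliver %s\n", frame);
}

int main(void)
{
    llap_send("data frame");
    llap_receive("data frame");
    return 0;
}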

One thing you should be aware of if you are going to run a net at 3 times
the speed is that collision detection (esp. of the type mentioned above)
is only going to work on a net with a MAXIMUM size that is 1/3 that
of the maximum size of a 230k net. Also, problems from terminations
and reflections are potentially worse: look over your net for loose
connections, if you have PhoneNET-style cabling make sure all the
termination resistors are in the correct places, and make sure you don't
have long cable runs near mains wiring or fluorescent lights, etc.


	Paul

-- 
Paul Campbell			..!{unisoft|mtxinu}!taniwha!paul (415)420-8179
Taniwha Systems Design, Oakland CA

    "Read my lips .... no GNU taxes" - as if they could tax free software

desnoyer@Apple.COM (Peter Desnoyers) (02/28/89)

In article <325@taniwha.UUCP> paul@taniwha.UUCP (Paul Campbell) writes:
>In article <20838@agate.BERKELEY.EDU> jim@insect.berkeley.edu () writes:
>>     "The major reason Appletalk is slow is that the Zilog serial
>>     chip can be overwhelmed by clock pulses that arrive at rates
>>     faster than 230.8 kbps.  
>>
>What it really means is that the clock recovery circuitry on the SCC
>(the phase-locked loop) doesn't work at much over 230k .... I think that
>FlashTalk and DaynaTalk are both just external faster PLLs and clocks
>(probably 2 chips and a crystal) plus extra software.
>
>One thing you should be aware of if you are going to run a net at 3 times
>the speed is that collision detection (esp. of the type mentioned above)
>is only going to work on a net with a MAXIMUM size that is 1/3 that
>of the maximum size of a 230k net.

Slight correction. The maximum delay on a LocalTalk network (300m of cable,
velocity factor estimated at 0.5) is about 2 microseconds. LocalTalk does
not do hardware collision detection, but instead uses a CSMA reservation
system - sending out a lapRTS and waiting 200uS for a lapCTS. The 200uS
swamps any possible cable delays. However, the efficiency is lowered
because beta (frame transmission time / slot time) is decreased.
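
The 2 microsecond figure is just the 300m cable length divided by 0.5c.
A few lines of C, using nothing but the estimates above, make the
comparison with the 200uS wait explicit:

#include <stdio.h>

int main(void)
{
    double c      = 3.0e8;      /* speed of light, m/s                  */
    double length = 300.0;      /* max LocalTalk cable run, m           */
    double vf     = 0.5;        /* estimated velocity factor            */
    double cts_us = 200.0;      /* lapRTS -> lapCTS wait, microseconds  */

    double delay_us = length / (vf * c) * 1.0e6;   /* one-way delay     */

    printf("cable delay = %.1f uS\n", delay_us);            /* ~2 uS    */
    printf("lapCTS wait = %.0f uS (%.0f times the delay)\n",
           cts_us, cts_us / delay_us);                      /* 100x     */
    return 0;
}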

If you are interested in the performance of CSMA buses, read on...


For vanilla LocalTalk, beta = 106. The reservation cycle works as
unslotted Aloha, with an efficiency of 1/2e or 18.4%. For every
successful reservation (2e cycles, or 5.4*200uS, or 1.1mS) you can send
one maximum-size frame of 609.5 bytes plus an inter-dialogue gap, or
21.2mS + 0.4mS. Thus raw efficiency (for max-size packets) is
21.2/(21.2+0.4+1.1) = 93.4%. Actual throughput is lowered further by
LAP overhead (1.6%) and DDP overhead (2.3%).
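
If you want to check the arithmetic, a small C program that just plugs in
the figures above reproduces these numbers:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double bit_rate = 230400.0;                  /* bits/s                  */
    double slot     = 200e-6;                    /* reservation slot, 200uS */
    double frame_b  = 609.5;                     /* max frame, bytes        */
    double gap      = 0.4e-3;                    /* inter-dialogue gap      */

    double frame_t  = frame_b * 8.0 / bit_rate;  /* ~21.2 mS                */
    double beta     = frame_t / slot;            /* ~106                    */
    double resv     = 2.0 * exp(1.0) * slot;     /* 2e slots, ~1.1 mS       */
    double eff      = frame_t / (frame_t + gap + resv);

    printf("frame time  = %.1f mS\n", frame_t * 1e3);
    printf("beta        = %.0f\n", beta);
    printf("reservation = %.1f mS\n", resv * 1e3);
    printf("efficiency  = %.1f%%\n", eff * 100.0);          /* ~93.4%       */
    return 0;
}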

If you speed it up 3 times, you get the same reservation period -
1.1mS - and a maximum frame time of 7.1mS, for an efficiency of
7.1/(1.1+7.1+0.4) = 82.6%. So you actually speed up by 2.7, instead of
3.  Still looks like a win to me.
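
The same calculation with the clock tripled (and the 200uS reservation
slot left alone) gives the 82.6% and 2.7x figures:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double slot   = 200e-6;                      /* still a 200uS slot      */
    double resv   = 2.0 * exp(1.0) * slot;       /* still ~1.1 mS           */
    double gap    = 0.4e-3;                      /* inter-dialogue gap      */

    double frame1 = 609.5 * 8.0 / 230400.0;      /* 230.4k frame, ~21.2 mS  */
    double frame3 = frame1 / 3.0;                /* 3x clock, ~7.1 mS       */

    double eff1   = frame1 / (frame1 + gap + resv);    /* ~93.4%            */
    double eff3   = frame3 / (frame3 + gap + resv);    /* ~82.6%            */

    printf("3x efficiency  = %.1f%%\n", eff3 * 100.0);
    printf("actual speedup = %.1fx\n", 3.0 * eff3 / eff1);  /* ~2.7x        */
    return 0;
}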

This analysis assumes worst-case load and best-case frame length
(maximum length frames). If you are not sending maximum-length frames
you are probably interested in delay, rather than throughput, anyway.
This analysis also ignores any non-theoretical, real-world issues. (:-)

				Peter Desnoyers