lyons@cdss.UUCP (Don R Lyons) (02/16/90)
I have purchased a Telebit TrailBlazer+ to reduce my uunet phone bills.
The system I am using is a 16MHz Acer 386 running SCO Unix 3.2.0.  I know
the TB+ is in PEP mode running the uucp protocol because I watched a
transfer with uucico in debug mode.  I have the following entries in the
Systems and Devices files:

Systems -> uunet Any ACU 19200 9-7038765055u ogin: ...
Devices -> ACU tty2a - 300-19200 dialTBIT \T

I am seeing uucp transfers at 300cps and news as low as 20cps.  Is there
something I am missing?  How do I get the modem to only establish a
connection with uunet at 9600bps?  Finally, is it possible that my
transfer stats are misleading?  I don't know.  Any help will be
appreciated.

PS: is {ames,sun,uunet}!telebit!modems alive?
rock@rancho.uucp (Rock Kent) (02/17/90)
In article <125@cdss.UUCP> lyons@cdss.UUCP (Don R Lyons) writes:
Don> I have purchased a telebit trailblazer+ to reduce my uunet phonebills .
Don> I am seeing uucp xfers @ 300cps and news as low as 20cps.
This gets asked often enough that maybe we should have it in a regular
posting somewhere. Let's see how well I remember:
1. Your DUART can have a dramatic impact on your throughput.  Lots of
potential problems with 8250s. I used to regularly get 700cps
with my 16450s. Now that I've replaced them with 16550As my
average runs about 850cps with frequent excursions to 1300cps. If
you're using a smart card of some sort, check your installation
and, especially, your interrupt assignments.
2. Your serial driver can also have a dramatic impact on throughput.
There have been several reported problems with the serial drivers
put out by the various V/386 vendors. I imagine yours is fixed,
but its worth checking with the vendor.
3. Make sure that you are using hardware flow control (RTS/CTS). I
prefer to also lock my modem-serialport interface at 19200 and let
the flow control work with the modem to deal with calls at lesser
speeds.
4. Make sure that you have compression turned off. You don't want to
be compressing already compressed news batches.
Don> The system I am using is a 16MHz, Acer 386 running SCO Unix 3.2.0 . I
I'm running Microport Unix V/386 on a 20MHz 386 no-name clone.  I'm
also running a T2500, so some of my references to registers may not
make sense. I've found, though, that all of the articles about
'blazers have been applicable to my problems.
Don> I have the following entries in the Systems and Devices file :
Don> Systems -> uunet Any ACU 19200 9-7038765055u ogin: ...
Don> Devices -> ACU tty2a - 300-19200 dialTBIT \T
My files, with thanks to eric@snark.uu.net (Eric S. Raymond) and a
posting he made on this subject about a year ago, are as follows.
My Devices file:
#
# --- Telebit Trailblazer/T1000/T2000 devices ------
#
# Devices for access to a 'blazer on tty01
ACUTB tty01 - 19200 tbPEP
ACUTBC tty01 - 19200 tbPEPc
ACUTBAUTO tty01 - 19200 tbauto
ACUTB2400 tty01 - 19200 tb2400
ACUTB2400N tty01 - 19200 tb2400n
ACUTB1200 tty01 - 19200 tb1200
My Dialers file:
##########
# Telebit Trailblazer Plus, T1000 or T2000
#
# assumes Q6 X1 S51=4 S52=2 S53=3 S54=3 S55=3 S58=2 S66=1 S92=1 S95=2 in EEPROM
#
# The magic parts of these scripts are the delays after connection, which hold
# off handing control to uucico so it won't time out during the PEP negotiation.
#
tb1200 =W-, "" \d\K\dATE0 OK ATS92=0S50=2S95=0DT,\T CONNECT\s1200
tb2400 =W-, "" \d\K\dATE0 OK ATS92=0S50=3S95=0DT,\T CONNECT\s2400
tb2400n =W-, "" \d\K\dATE0 OK ATS92=0S50=3DT,\T CONNECT\s2400
tbauto =W-, "" \d\K\dATE0 OK ATS92=0S50=0S95=0DT,\T CONNECT\s
tbPEP =W-, "" \d\K\dATE0 OK ATS92=0S95=0S50=255S7=60S111=30DT,\T\r\n\d\d\d\d\d\d\d\d\c CONNECT\sFAST
tbPEPc =W-, "" \d\K\dATE0 OK ATS92=0S95=0S50=255S7=60S110=1S111=30DT,\T\r\n\d\d\d\d\d\d\d\d\c CONNECT\sFAST
#
My Systems file:
ncr-sd Any ACUTB 19200 nnnnnnn "" \r ogin:-\r-ogin:-\r-ogin: . . .
donner Any ACUTBAUTO 19200 nnnnnnn "" \r ogin:-\r-ogin:-\r-ogin: . . .
serene Any ACUTBC 19200 nnnnnnn ogin:-\K\r-ogin:-\K\r-ogin: . . .
uunet Any ACUTBC 19200 nnnnnnnnnnn ogin:-\r-ogin:-\r-ogin: . . .
killer Any ACUTB2400 19200 nnnnnnnnnnn ogin:-\r-ogin:-\r-ogin: . . .
Finally, my register settings, where interesting, are:
S50=000 Auto Speed Determination & Sequence.
S51:005 Interface at up to 19200
S52:002 DTR controls AutoAnswer and Causes RESET
S58:002 RTS flow control in full duplex.
S66:001 Lock Interface Speed.
S67=000 CTS on when Modem ready.
S68:002 Use CTS flow control in full Duplex.
S92=000 Answering Sequence. PEP,V22,212A,103
S96=001 MNP Compression enabled.
S110:000 PEP Compression disabled.
S111=255 Use protocol specified by remote.
S130:003 DSR on for commands or data.
S131:001 DCD on when carrier detected.
Don> a connection with uunet at 9600bps ?
Set s50=255 as is done above for the tbPEP and tbPEPc devices.
Don> PS : is {ames,sun,uunet}!telebit!modems alive ?
Yup, I received a reply sent to that address about a week ago.
***************************************************************************
*Rock Kent rock@rancho.uucp POB 8964, Rancho Sante Fe, CA. 92067*
***************************************************************************
bret@codonics.COM (Bret Orsburn) (02/23/90)
In article <1990Feb17.063412.18455@rancho.uucp> rock@rancho.uucp (Rock Kent) writes:
>
>4. Make sure that you have compression turned off.  You don't want to
>   be compressing already compressed news batches.
>

OK, I'll bite: Why not?

Surely, compression will be less effective the second time, but is there
any good reason to disable it?  Do you have any data to demonstrate that
throughput is *decreased* by enabling compression for a compressed
newsfeed?
-- 
-------------------
bret@codonics.com
uunet!codonics!bret
Bret Orsburn
larry@nstar.UUCP (Larry Snyder) (02/23/90)
In article <959@codonics.COM>, bret@codonics.COM (Bret Orsburn) writes:
>
> Surely, compression will be less effective the second time, but is there
> any good reason to disable it?  Do you have any data to demonstrate that
> throughput is *decreased* by enabling compression for a compressed newsfeed?
>

In many cases enabling compression in the modem will actually increase
the amount of time needed to transfer the file - i.e. with compressed
and/or archived data.
-- 
Larry Snyder, Northern Star Communications, Notre Dame, IN USA
uucp: larry@nstar -or- ...!iuvax!ndmath!nstar!larry
4 inbound dialup high speed line public access system
jbayer@ispi.UUCP (Jonathan Bayer) (02/23/90)
bret@codonics.COM (Bret Orsburn) writes:

>In article <1990Feb17.063412.18455@rancho.uucp> rock@rancho.uucp (Rock Kent) writes:
>>
>>4. Make sure that you have compression turned off.  You don't want to
>>   be compressing already compressed news batches.
>>

>OK, I'll bite: Why not?

>Surely, compression will be less effective the second time, but is there
>any good reason to disable it?  Do you have any data to demonstrate that
>throughput is *decreased* by enabling compression for a compressed newsfeed?

Yes it is.  Remember, compression takes time.  The idea behind
compression on the fly is to save characters at the cost of time, in the
hope of a net gain.  However, compressed files usually are not
compressible.  Therefore, if the modem tries to compress the data and
can't, it will slow the transfer down because it is still doing the work
of compressing, and may actually send MORE characters because the data
fails to compress.

JB
-- 
Jonathan Bayer            Intelligent Software Products, Inc.
(201) 245-5922            500 Oakwood Ave.
jbayer@ispi.COM           Roselle Park, NJ   07204
lmb@vicom.com (Larry Blair) (02/24/90)
In article <959@codonics.COM> bret@codonics.COM (Bret Orsburn) writes:
=In article <1990Feb17.063412.18455@rancho.uucp> rock@rancho.uucp (Rock Kent) writes:
=>
=>4. Make sure that you have compression turned off.  You don't want to
=>   be compressing already compressed news batches.
=>
=
=OK, I'll bite: Why not?
=
=Surely, compression will be less effective the second time, but is there
=any good reason to disable it?  Do you have any data to demonstrate that
=throughput is *decreased* by enabling compression for a compressed newsfeed?

L-Z compression will not result in a smaller size if the data is random,
which is exactly what compressed output is.  You want proof?  Try
compressing a file and then compress the compressed output.

The TB only does 12 bit compression vs. the 16 bit on the host.  In
addition, the time spent compressing the data is less than the time
spent processing the additional interrupts and transmitting the
additional data.  No matter how much the TB can compress, it is still
limited to 19200 in and out.
-- 
Larry Blair   ames!vsi1!lmb   lmb@vicom.com
kls@ditka.UUCP (Karl Swartz) (02/24/90)
In article <959@codonics.COM> bret@codonics.COM (Bret Orsburn) writes:
>In article <1990Feb17.063412.18455@rancho.uucp> rock@rancho.uucp (Rock Kent) writes:
>>4. Make sure that you have compression turned off.  You don't want to
>>   be compressing already compressed news batches.

>OK, I'll bite: Why not?

>Surely, compression will be less effective the second time, but is there
>any good reason to disable it?  Do you have any data to demonstrate that
>throughput is *decreased* by enabling compression for a compressed newsfeed?

Compressing random binary data will often make a file larger since
there are no patterns for the compression algorithm to eliminate, and,
assuming the algorithm does a good job (which it does), a file that has
already been compressed will almost certainly grow since it now has the
overhead of the compression tables in addition to the uncompressible
data.

Compress is smart enough to recognize that its efforts were wasted and
refrains from replacing the original with a larger compressed file, but
this only works if it has control over input and output files.
Obviously this doesn't apply to a pipeline, which is in essence what's
being done in a Telebit; all it can do is emit the larger stream of
bytes.

Since you wanted hard data, I made up a batch consisting of the usual
"#! cunbatch" header and a compressed (16 bit) copy of the first
Northern California file from comp.mail.maps (u.usa.ca.2).  I then
compressed the file using both 12 and 16 bit compression (I believe the
Telebits use 12 bit compression) with results as follows:

    file            bytes    compression    time @800CPS
    original        35755        n.a.           44.7
    compress -b12   49206      -37.61%        1:01.5
    compress -b16   47053      -31.59%          58.8

Doesn't look like I'll be running off to enable compression on my
TrailBlazer soon.
martin@mwtech.UUCP (Martin Weitzel) (02/24/90)
In article <1318@ispi.UUCP> jbayer@ispi.UUCP (Jonathan Bayer) writes:
}bret@codonics.COM (Bret Orsburn) writes:
}
}>In article <1990Feb17.063412.18455@rancho.uucp> rock@rancho.uucp (Rock Kent) writes:
}>>
}>>4. Make sure that you have compression turned off.  You don't want to
}>>   be compressing already compressed news batches.
}>>
}
}>OK, I'll bite: Why not?
}
}>Surely, compression will be less effective the second time, but is there
}>any good reason to disable it?  Do you have any data to demonstrate that
}>throughput is *decreased* by enabling compression for a compressed newsfeed?
}
}Yes it is.  Remember, compression takes time.  The idea behind
}compression on the fly is to save characters at the cost of time, in the
}hope of a net gain.  However, compressed files usually are not
}compressible.

Quite easy to explain: Suppose you had an algorithm which always
compresses its input-data to *less* output-data.  Why not feed the
output in again and again, until its size is reduced to zero?  Such an
algorithm cannot exist for arbitrary input-data.

Or view it vice versa: If *uncompressing* yields N data-bits in total,
there are 2^N possible bit patterns.  No two of these can have the same
representation when compressed, because uncompressing could not then
decide between the two.  So, if some bit patterns need fewer bits when
compressed, there must be other bit patterns that need more.

A little more mathematics shows that algorithms which tend to give high
compression rates on certain kinds of data are especially prone to fail
on arbitrary data - esp. on the output of their own or other
compression algorithms!
-- 
Martin Weitzel, email: martin@mwtech.UUCP, voice: 49-(0)6151-6 56 83
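Martin's counting argument shows up immediately in practice: hand a
compressor input with no patterns and it must lose.  A small
illustration, with Python's standard zlib module as a stand-in for the
modem's LZW compressor (the sample inputs are made up):

```python
import os
import zlib

repetitive = b"all work and no play makes Jack a dull boy\n" * 1500
random_bytes = os.urandom(len(repetitive))  # "arbitrary input-data"

for label, data in (("repetitive", repetitive), ("random", random_bytes)):
    out = zlib.compress(data, 9)
    change = 100.0 * (len(out) - len(data)) / len(data)
    print(f"{label:10s} {len(data):6d} -> {len(out):6d} bytes ({change:+.1f}%)")
# The repetitive input shrinks dramatically; the random input comes out
# slightly larger, because some bit patterns must cost more bits when
# others cost fewer.
```

This is exactly the trade Martin describes: the savings on patterned
data are paid for by expansion on data with no patterns left in it.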
bret@codonics.COM (Bret Orsburn) (02/24/90)
In article <21377@ditka.UUCP> kls@ditka.UUCP (Karl Swartz) writes:
>In article <959@codonics.COM> bret@codonics.COM (Bret Orsburn) writes:
>>In article <1990Feb17.063412.18455@rancho.uucp> rock@rancho.uucp (Rock Kent) writes:
>>>4. Make sure that you have compression turned off.  You don't want to
>>>   be compressing already compressed news batches.
>
>>Surely, compression will be less effective the second time, but is there
>>any good reason to disable it?
>
>Compressing random binary data will often make a file larger since
>there are no patterns for the compression algorithm to eliminate,
>and assuming the algorithm does a good job (which it does) a file
>that has already been compressed will almost certainly grow since
>it now has the overhead of the compression tables in addition
>to the uncompressible data.

Makes sense to me.  I wasn't aware that the two compression algorithms
(a) are very similar, (b) are nearly optimal, and (c) have significant
overhead [tables].  I will take your word that such is the case.

It would not be necessary to transmit a table if the class of data to be
compressed was agreed upon in advance by both parties.  That would seem
to be a sound and sensible policy for a news feed, though not for a
modem.

The only remaining question is: Why does the little bit of documentation
available on this subject incorrectly recommend using compression?
(That is a rhetorical question. ;-)

Thanks to all who responded.  (I thank you, my phone bill thanks you,
the man who pays my phone bill thanks you,....)
-- 
-------------------
bret@codonics.com
uunet!codonics!bret
Bret Orsburn
chan@chansw.UUCP (Jerry H. Chan) (02/24/90)
In article <959@codonics.COM>, bret@codonics.COM (Bret Orsburn) writes:
> In article <1990Feb17.063412.18455@rancho.uucp> rock@rancho.uucp (Rock Kent) writes:
> >4. Make sure that you have compression turned off.  You don't want to
> >   be compressing already compressed news batches.
>
> Surely, compression will be less effective the second time, but is there
> any good reason to disable it?  Do you have any data to demonstrate that
> throughput is *decreased* by enabling compression for a compressed newsfeed?

No one thus far has posted real data, so here it is -- data for a
Trailblazer Plus (Rev BA5.01 ROMs):

I had been getting consistent 800-950 cps xfer rates for batches on the
order of 250K/file with COMPRESSION ENABLED.  After complaining to my
(ex-)feed about the seemingly slow xfer rates, they suggested that I
turn off compression.  Now my site consistently gets between 1100-1350
cps xfer rates, about a (believe it or not) 40% boost in throughput.
-- 
Jerry Chan                  508-853-0747, Fax 508-853-2262
Chan Smart!Ware Computer Services & Prods  |"My views necessarily reflect the
Worcester, MA 01606                        | views of the Company because
{bu.edu,husc6}!m2c!chansw!chan             | I *am* the Company." :-)
campbell@Thalatta.COM (Bill Campbell) (02/26/90)
In article <959@codonics.COM> bret@codonics.COM (Bret Orsburn) writes:
>In article <1990Feb17.063412.18455@rancho.uucp> rock@rancho.uucp (Rock Kent) writes:
>>
>>4. Make sure that you have compression turned off.  You don't want to
>>   be compressing already compressed news batches.
>>
>OK, I'll bite: Why not?
>
>Surely, compression will be less effective the second time, but is there
>any good reason to disable it?  Do you have any data to demonstrate that
>throughput is *decreased* by enabling compression for a compressed newsfeed?

Compressing an already compressed file typically results in a file
approximately 1/3 LARGER than the original!  All you have to do to prove
this to yourself is to take a fairly large file, compress it, rename the
result to something without the .Z suffix, compress it again and compare
the sizes.
-- 
....microsoft--\                    Bill Campbell; Celestial Software
...uw-beaver-----!thebes!camco!bill 6641 East Mercer Way
....fluke------/                    Mercer Island, Wa 98040
....hplsla----/                     (206) 232-4164
rock@rancho.uucp (Rock Kent) (02/28/90)
On 24 Feb 90 08:37:30 GMT, bret@codonics.COM (Bret Orsburn) said:
>In article <21377@ditka.UUCP> kls@ditka.UUCP (Karl Swartz) writes:
>>In article <959@codonics.COM> bret@codonics.COM (Bret Orsburn) writes:
>>>In article <1990Feb17.063412.18455@rancho.uucp> rock@rancho.uucp (Rock Kent) writes:
>>>>4. Make sure that you have compression turned off.
>>>Surely, compression will be less effective the second time, but is there
>>>any good reason to disable it?
>>Compressing random binary data will often make a file larger since
>Makes sense to me.
>
>(I thank you, my phone bill thanks you, the man who pays my phone bill
>thanks you,....)

And I thank you.  'Bout jumped out of my chair when Bret challenged my
relaying of conventional wisdom.  Made me think.  You all explained it
far better than I would have.
***************************************************************************
*Rock Kent      rock@rancho.uucp     POB 8964, Rancho Sante Fe, CA. 92067*
***************************************************************************