[comp.sys.tandy] tandy 16B and 16 bit compress

ken@csis.dit.csiro.au (Ken Yap) (12/12/90)

I was trying to get 16 bit compress working on my 16B (~= 6000) and I
found I couldn't have a large array in bss. Nor could I malloc or sbrk
any space larger than 245k or so. Now at boot up the machine claims to
have 892k (or so) user memory. Where's all that memory?  Do I have to
reset some limit somewhere?

We're off the net for a few days so mail may bounce. Take your time
with the emailed answer, we'll be back next week. Thanks in advance.

	Ken

mikes@iuvax.cs.indiana.edu (Michael Squires) (12/14/90)

In article <1990Dec12.003253.18391@csis.dit.csiro.au> ken@csis.dit.csiro.au (Ken Yap) writes:

Tandy 6000 compress requires at least 512K of memory to run; this is more than
is available using the standard configuration.  If running an older XENIX you
have to modify the kernel with adb; if 3.2, run the configuration program.
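
Most of that goes into the LZW hash and code tables. A back-of-the-envelope
sketch of the arithmetic, using the constants I remember from the compress 4.0
source built with BITS=16 (check your copy, they may differ):

/* Back-of-the-envelope only - HSIZE and the entry sizes are what I
   recall from the compress 4.0 source at BITS=16; verify against
   your copy. */
#include <stdio.h>

#define HSIZE 69001L                  /* hash table slots at BITS=16 */

int main()
{
    long htab    = HSIZE * 4;         /* count_int (long) per slot   */
    long codetab = HSIZE * 2;         /* unsigned short per slot     */

    printf("tables alone: about %ldk, before text, stack and buffers\n",
           (htab + codetab) / 1024);
    return 0;
}

Add the program text and stack on top of that and you can see why the
default per-process limit isn't anywhere near enough.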

-- 

Mike Squires (mikes@iuvax.cs.indiana.edu)     812 855 3974 (w) 812 333 6564 (h)
mikes@iuvax.cs.indiana.edu          546 N Park Ridge Rd., Bloomington, IN 47408
Under construction: mikes@sir-alan@cica.indiana.edu

jpr@jpradley.jpr.com (Jean-Pierre Radley) (12/15/90)

In article <1990Dec12.003253.18391@csis.dit.csiro.au> ken@csis.dit.csiro.au (Ken Yap) writes:
>I was trying to get 16 bit compress working on my 16B (~= 6000) and I
>found I couldn't have a large array in bss. Nor could I malloc or sbrk
>any space larger than 245k or so. Now at boot up the machine claims to
>have 892k (or so) user memory. Where's all that memory?  Do I have to
>reset some limit somewhere?
>

Where did you cop your source?
When 16-bit compress was being developed on CompuServe's UnixForum, I wrote the
makefile for the t6k, which is the only machine I had at the time, so clearly
it worked - terrifically - on a t6k.

The memory you own is one thing. The memory you're allowed to use is another.
You need to reconfigure the kernel or patch it, to let MAXMEM be larger. A
utility for this was provided with filePro 1.1. It's also do-able with adb;
there's a 512-byte file called MAXMEM.T6K in LIB 9 of CIS' UnixForum which does
the trick.
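
If you want to see the ceiling the kernel is actually enforcing, a little
sbrk loop will show it. A rough sketch (the 16k chunk size is arbitrary):

/* Grow the data segment until the kernel refuses, then report roughly
   how much we got.  Quick probe only, not production code. */
#include <stdio.h>

#define CHUNK (16 * 1024)

extern char *sbrk();

int main()
{
    long total = 0;

    printf("probing...\n");      /* grab stdio's buffer while there's room */

    while (sbrk(CHUNK) != (char *) -1)
        total += CHUNK;

    printf("data segment grew by about %ldk before the kernel said no\n",
           total / 1024);
    return 0;
}

Run it before and after raising MAXMEM and the difference should be plain.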
-- 

 Jean-Pierre Radley	    NYC Public Unix	jpr@jpr.com	CIS: 72160,1341

bill@bilver.uucp (Bill Vermillion) (12/17/90)

In article <1990Dec12.003253.18391@csis.dit.csiro.au> ken@csis.dit.csiro.au (Ken Yap) writes:
>I was trying to get 16 bit compress working on my 16B (~= 6000) and I
>found I couldn't have a large array in bss. Nor could I malloc or sbrk
>any space larger than 245k or so. Now at boot up the machine claims to
>have 892k (or so) user memory. Where's all that memory?  Do I have to
>reset some limit somewhere?

Yup.  If you are using 3.2, use the config (or is it conf) to set the max
memory any process can use.

If you are using one of the older versions you need the configure kit, or a
direct patch, or if you have a copy of filePro, use their patch.

I set my t6k up with about 750k per process.  Be warned though, 16-bit
compress is SLOW and will swap like mad.   I made sure that all the
compressing I did was 12-bit, but had compress set to be able to unpack 16-bit.
I got a 400k news feed one time that was accidentally shipped as 16-bit
compress.   It did about 100k per hour uncompressing.  YIKES!!!


>
>We're off the net for a few days so mail may bounce. Take your time
>with the emailed answer, we'll be back next week. Thanks in advance.
>
>	Ken


-- 
Bill Vermillion - UUCP: uunet!tarpit!bilver!bill
                      : bill@bilver.UUCP

ken@csis.dit.csiro.au (Ken Yap) (12/19/90)

>>I was trying to get 16 bit compress working on my 16B (~= 6000) and I
>>found I couldn't have a large array in bss. Nor could I malloc or sbrk
>>any space larger than 245k or so. Now at boot up the machine claims to
>>have 892k (or so) user memory. Where's all that memory?  Do I have to
>>reset some limit somewhere?
>>
>
>Where did you cop your source?
>When 16-bit compress was being developed on CompuServe's UnixForum, I wrote the
>makefile for the t6k, which is the only machine I had at the time, so clearly
>it worked - terrifically - on a t6k.

I used the standard compress 4.0 distribution. It has a Makefile entry
for 6000 Xenix.

>The memory you own is one thing. The memory you're allowed to use is another.
>You need to reconfigure the kernel or patch it, to let MAXMEM be larger. A
>utility for this was provided with filePro 1.1. It's also do-able with adb;
>there's a 512-byte file called MAXMEM.T6K in LIB 9 of CIS' UnixForum which does
>the trick.

Thanks to all who replied. Somebody told me about cfg in 3.2, but my
release is 3.1. So I used adb to patch the kernel and it works now.

In brief:

# adb -w /xenix
Maxmem?W 888		# 4k for u area
$q

It can also be changed at runtime by poking memsize/4k into "maxmem".
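
I only patched the /xenix file with adb, but a runtime poke in C would look
roughly like the sketch below. It assumes /dev/kmem exists and is writable by
root, that maxmem is a 32-bit quantity, and that you can get its address from
nm /xenix (or adb) - all assumptions, and the program name is made up, so
treat it as a sketch only:

/* Sketch of poking a new value into the running kernel's maxmem via
   /dev/kmem.  Address (hex) and value come from the command line;
   the address would come from something like "nm /xenix | grep maxmem".
   Untested - assumptions as noted above, name is made up. */
#include <stdio.h>

extern long lseek();

int main(argc, argv)
int argc;
char **argv;
{
    long addr, value;
    int fd;

    if (argc != 3 || sscanf(argv[1], "%lx", &addr) != 1
                  || sscanf(argv[2], "%ld", &value) != 1) {
        fprintf(stderr, "usage: pokemaxmem hex-address value\n");
        return 1;
    }

    if ((fd = open("/dev/kmem", 2)) < 0) {      /* 2 = read/write */
        perror("/dev/kmem");
        return 1;
    }
    if (lseek(fd, addr, 0) == -1L ||
        write(fd, (char *) &value, 4) != 4) {
        perror("poke");
        return 1;
    }
    close(fd);
    return 0;
}

You'd run it as root, with the address nm reports and the same number you'd
give adb (888 here).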

Oh and increase the swap area (by a dump and restore). Mine's at 4M
now. I got a swap exhausted panic the first time.

Ironic note: I see no evidence that the previous owner knew about this
limit so he never used all that memory. Imagine all those memory cells
idle.  :-)

While I'm posting, did anybody save that short asm segment to do
alloca() on a 68k? Even one for a Sun will do, I can add the stack
probe instruction needed.
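
Failing that, I can always fall back on the well-known malloc-based emulation,
something like the sketch below (my_alloca is just a placeholder name). It
assumes the stack grows downward and that comparing addresses of locals is a
fair gauge of frame depth - neither is guaranteed, and alignment is only
whatever malloc plus the header happen to give you:

/* Rough malloc-based stand-in for alloca(), not the asm version.
   Each call reclaims blocks requested from deeper stack levels, which
   have presumably returned by now, judged by comparing the address of
   a local against saved markers.  Assumes a downward-growing stack.
   Sketch only. */
extern char *malloc();

struct hdr {
    struct hdr *next;
    char       *depth;          /* stack marker at allocation time */
};

static struct hdr *chain;

char *my_alloca(size)           /* hypothetical name, K&R style */
unsigned size;
{
    char probe;                 /* &probe marks the current depth */
    struct hdr *p;

    /* free blocks made at deeper stack levels (lower addresses on a
       downward-growing stack), since those frames are gone */
    while (chain != 0 && chain->depth < &probe) {
        p = chain->next;
        free((char *) chain);
        chain = p;
    }

    p = (struct hdr *) malloc(sizeof(struct hdr) + size);
    if (p == 0)
        return 0;
    p->next  = chain;
    p->depth = &probe;
    chain = p;
    return (char *) (p + 1);
}

The catch is that reclamation only happens on the next call, so it's no
substitute for the real thing, but it keeps code that expects alloca() limping
along.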

Season's greetings!

nanook@rwing.UUCP (Robert Dinse) (12/21/90)

In article <1990Dec18.225202.4693@csis.dit.csiro.au>, ken@csis.dit.csiro.au (Ken Yap) writes:

> Ironic note: I see no evidence that the previous owner knew about this
> limit so he never used all that memory. Imagine all those memory cells
> idle.  :-)

     Maxmem is a per-process limit, not a total limit on system memory
utilization. At any one time you will have more than one process in memory,
plus the kernel occupies some space, so those memory cells weren't sitting
idle.

     I've been told by some Tandy gurus that Maxmem is to prevent a lock-up
condition from occurring under certain circumstances and have been advised
that it should be set to less than 1/2 of total user memory, though I've had to
violate that rule grossly on my system (Maxmem = 3000k with about 3700k
user memory available (4 megs less kernel)) in order to get gcc not to core
dump. (Some gcc processes have been larger than 2 megs.)

uhclem@trsvax.UUCP (12/21/90)

B>It can also be changed at runtime by poking memsize/4k into "maxmem".
B>
B>Oh and increase the swap area (by a dump and restore). Mine's at 4M
B>now. I got a swap exhausted panic the first time.
B>
B>Ironic note: I see no evidence that the previous owner knew about this
B>limit so he never used all that memory. Imagine all those memory cells
B>idle.  :-)
B>

Maxmem is the maximum amount of memory a SINGLE process can use.  So if
you ran more than one process at the same time (very likely) each
process running would be allowed to get up to that number.  

You can think of maxmem as a sort of anti-memory-hog device, as it
keeps a single program from grabbing more memory than a non-virtual
environment could stand.  So you don't want to set Maxmem so high that
a single process (pathalias?) can get everything.  "It would be bad."

					"Thank you, Uh Clem."
					Frank Durda IV @ <trsvax!uhclem>
				...decvax!microsoft!trsvax!uhclem
				...hal6000!trsvax!uhclem

P. S.:   Get 3.2.

jmh@coyote.uucp (John Hughes) (12/22/90)

In article <1990Dec17.020713.17712@bilver.uucp> bill@bilver.uucp (Bill Vermillion) writes:
>In article <1990Dec12.003253.18391@csis.dit.csiro.au> ken@csis.dit.csiro.au (Ken Yap) writes:
>>I was trying to get 16 bit compress working on my 16B (~= 6000) and I
>>found I couldn't have a large array in bss. Nor could I malloc or sbrk
>>any space larger than 245k or so. Now at boot up the machine claims to
>>have 892k (or so) user memory. Where's all that memory?  Do I have to
>>reset some limit somewhere?
>
>Yup.  If you are using 3.2, use the config (or is it conf) to set the max
>memory any process can use.
>
>If you are using one of the older versions you need the configure kit, or a
>direct patch, or if you have a copy of filePro, use their patch.
>
>I set my t6k up with about 750k per process.  Be warned though, 16-bit
>compress is SLOW and will swap like mad.   I made sure that all the
>compressing I did was 12-bit, but had compress set to be able to unpack 16-bit.
>I got a 400k news feed one time that was accidentally shipped as 16-bit
>compress.   It did about 100k per hour uncompressing.  YIKES!!!


In the process of upgrading and reconfiguring moondog I ran into a problem
similar to Ken's - namely that I couldn't get a compile of compress 4.0
to work in 16-bit mode, even with 2.25 Mb of RAM installed. Using the cfg
utility to up the maxmem size to 1Mb solved the problem, just as Ken
found out (I arrived at my solution at about the same time as the
follow-up with Ken's solution appeared, so I didn't bother to add my two
cents at that time).

However, I've found that compress is reasonably fast. I routinely tar and
then compress whole directory structures, and I've never seen performance
as bad as 100K per hour (sheesh... now THAT's slow...). If compress is
really running that slow, then perhaps something else is in need of some
serious tweaking.


-- 
|     John M. Hughes      | "...unfolding in consciousness at the            |
| noao!jmh%moondog@coyote | deliberate speed of pondering."  - Daniel Dennet |
| jmh%coyote@noao.edu     |--------------------------------------------------|
| noao!coyote!jmh         | P.O. Box 43305  Tucson, AZ  85733                |

ken@csis.dit.csiro.au (Ken Yap) (12/27/90)

>Maxmem is the maximum amount of memory a SINGLE process can use.  So if
>you ran more than one process at the same time (very likely) each
>process running would be allowed to get up to that number.  

Ok, got it.

>You can think of maxmem as a sort of anti-memory-hog device, as it
>keeps a single program from grabbing more memory than a non-virtual
>environment could stand.  So you don't want to set Maxmem so high that
>a single process (pathalias?) can get everything.  "It would be bad."

Ok. But thrashing sounds like a better fate to me than running a pipe
for 5 minutes and then getting "process killed". I can always go watch
a TV documentary while waiting. :-) I should explain that I'm the only
user on the system, I don't have a modem (and don't intend to get one,
too much temptation to waste time and phone money; I bring disks back
from work) and I don't run cron jobs. So far the only program that
hasn't run in 256k is compress-16 and I need that only to unpack
files.

>P. S.:   Get 3.2.

Ok.  Anybody want to sell me a cheap copy? I'm in Oz, by the way, so
offers involving lots of shipping costs aren't attractive. Are there
any Oz 6k owners at all reading this group?  Maybe I should ask the
local Tandy office, but I'd probably get puzzled looks ("what's a 6k?").
:-(