[comp.sys.apollo] SR10.3

waldram@WOLF.UWYO.EDU (10/24/90)

Has anyone gotten the full/official release of SR10.3?  It was scheduled for 9/28, then
rescheduled to 10/15 due to some major bug.  Now I am told we won't receive it until
11/12.  Any info. will help me plan!
                -jjw
Jim Waldram
waldram@grizzly.uwyo.edu
University of Wyoming
 

hanche@imf.unit.no (Harald Hanche-Olsen) (10/25/90)

In article <9010241335.AA06698@wolf.Uwyo.EDU> waldram@WOLF.UWYO.EDU writes:

   Has anyone gotten the full/official release of SR10.3?  It was scheduled for 9/28, then
   rescheduled to 10/15 due to some major bug.  Now I am told we won't receive it until
   11/12.  Any info. will help me plan!
		   -jjw

Yup, we have it.  Only the tapes, though: we got it because we are
beta testing the CR1.0 compilers.  I don't know if the manuals are
ready or how long it will take HPollo to pack it for shipping and get
it out the door.  But at least the software is ready!  (AND that goes
even for the prism version, which is a big step forward...)

- Harald Hanche-Olsen <hanche@imf.unit.no>
  Division of Mathematical Sciences
  The Norwegian Institute of Technology
  N-7034 Trondheim, NORWAY

beierl_c@apollo.HP.COM (Christopher Beierl) (10/25/90)

In article <9010241335.AA06698@wolf.Uwyo.EDU> waldram@WOLF.UWYO.EDU writes:
>Has anyone gotten the full/official release of SR10.3?  It was scheduled for 9/28, then
>rescheduled to 10/15 due to some major bug.  Now I am told we won't receive it until
>11/12.  Any info. will help me plan!

My understanding is that SR10.3 began customer shipments on 15 Oct 90.

-Chris

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 Christopher T. Beierl  Internet: beierl_c@apollo.HP.COM;beierl_c@apollo.com
 Apollo Computer, Inc.      UUCP: {mit-eddie,yale,uw-beaver}!apollo!beierl_c
 A Subsidiary of Hewlett-Packard                       Phone: (508) 256-6600

okay@TAFS.MITRE.ORG ("Okay, S J") (10/26/90)

>From: hanche%sigyn.idt.unit.no%nuug%sunic%hagbard%eru.uucp@bloom-beacon.mit.edu 
> (Harald Hanche-Olsen)
>Subject: Re: SR10.3
>Message-Id: <HANCHE.90Oct25102020@hufsa.imf.unit.no>
>
>In article <9010241335.AA06698@wolf.Uwyo.EDU> waldram@WOLF.UWYO.EDU writes:
>
>Yup, we have it.  Only the tapes, though: We got it because of our
>beta testing the CR1.0 compilers.  I don't know if the manuals are
>ready or how long it will take HPollo to pack it for shipping and get
>it out the door.  But at least the software is ready!  (AND that goes
>even for the prism version, which is a big step forward...)

Well, what we were told in the 10.3 breakout session at ADUS was that 10.3 would
be released officially on 10/15/90 and be, in the words of one HPollo representative,
"in the stores by Christmas", meaning they expect to have widespread distribution by
then.
I personally wouldn't expect to see it for several more weeks.

---Steve 
------------
Stephen Okay       Technical Aide, The MITRE Corporation
sokay@mitre.org             <---work  "Captain, relax, it's only the Prime Directive"
amidillo!steve@uunet.uu.net <---home          
Disclaimer: I get *MYSELF* in enough trouble with my opinions,
            Why inflict them on MITRE?

krowitz@RICHTER.MIT.EDU (David Krowitz) (10/26/90)

Well, even as I was sitting here reading my morning mail wondering about when
we (as a beta-site) would get an officially released copy, the Federal Express
man showed up with a box containing media and new manuals. It'll take me a
couple of days to unpack it and see if the bugs we found have gone away ...


 -- David Krowitz

krowitz@richter.mit.edu   (18.83.0.109)
krowitz%richter.mit.edu@eddie.mit.edu
krowitz%richter.mit.edu@mitvma.bitnet
(in order of decreasing preference)

chen@digital.sps.mot.com (Jinfu Chen) (10/26/90)

In article <9010261249.AA20844@mwunix.mitre.org> okay@TAFS.MITRE.ORG ("Okay, S J") writes:
>
>Well, what we were told in the 10.3 breakout session at ADUS was that 10.3 would
>be released officially on 10/15/90 and be, in the words of one HPollo representative,
>"in the stores by Christmas", meaning they expect to have widespread distribution by
>then.
>I personally wouldn't expect to see it for several more weeks.

I have five SR10.3 tapes sitting in my cubicle, distributed by Mentor
Graphics. We received them along with the rest of the Mentor 7.0 re-release
yesterday. I expect one node in our network will be up on 10.3 sometime today.

CC6.8, or whatever the new name is, is not here yet. From the release notes
it seems this is going to be a real ANSI compiler.

There are some goodies in 10.3, such as a shut_lock file that prevents any
user from shutting down a node. Also, as mentioned by others before, an
increased number of processes.



-- 
Jinfu Chen                  (602)898-5338 
Motorola, Inc.  SPS  Mesa, AZ
 ...uunet!motsps!digital!chen
chen@digital.sps.mot.com
CMS: RXFR30 at MESAVM
----------

mth@cci632.UUCP (Michael Hickman) (10/27/90)

	We received SR10.3 from Mentor Graphics on Monday (10/22).  I have
upgraded 6 of our 11 nodes so far.  

	Last night we had three nodes (one SR10.1, one SR10.2, and one SR10.3!) 
that would lock up with the DM getting 97% of the cpu time (from dspst)
after displaying the 'Welcome to Domain/OS SR10.3' message in the message
window.  No shell, nothing.  I could 'crp' onto the node prior to logging
in on the display and work fine, but it would lock up after the attempted
DM login.

	I discovered that if our main file server (still at SR10.1) was not running
any daemons, this problem did not occur.  The server is normally running:
tcpd, inetd, rwhod, routed, mountd, portmap, nfsd, cron, sendmail, and lpd.
I tried bringing the system up 10 different times with one more of these started
each time, to see which one was causing the other systems to have problems, but
once I got to the point that all the daemons were running, the lock-up problem
hadn't recurred!!!

	I suspect the nfs daemons.  We have had problems with them in the past, and
have found that if machines that we have cross-mounted go down for backups, our
server dies.

	Any ideas???


Michael Hickman  CAE/CAD System Administrator   mth@cci.com
Computer Consoles, Inc.           
Rochester, NY

PS  Thanks to those who gave me advice on copying SR10.2 X11R3 to SR10.1.
	Since I just got SR10.3, I no longer need to try this...  

krowitz@RICHTER.MIT.EDU (David Krowitz) (10/27/90)

I just (literally!) opened up the SR10.3 distribution box I received this
morning ...

Among the many pieces included is an "SR10.3 system software notice" page
(an addendum to the release notes). On the back of this page, there is a
notice that SR10.3 TCP/IP is *not* fully compatible with either SR10.1 or
SR10.2 TCP/IP, and that HP has added a "-c" switch to /etc/tcpd to allow
it to run in networks with pre-SR10.3 nodes.

If TCP/IP doesn't work, NCS doesn't work, and if NCS doesn't work, the registry
daemons, the network license servers, and the SR10 printer support (both the
Apollo prsvr and the Unix lpr stuff) don't work. One symptom of a broken NCS is
an NCS client program that begins to hog the CPU.

I'd check the /etc/rc.local files on your SR10.3 systems for a line reading
"/etc/tcpd -c".


 -- David Krowitz

krowitz@richter.mit.edu   (18.83.0.109)
krowitz%richter.mit.edu@eddie.mit.edu
krowitz%richter.mit.edu@mitvma.bitnet
(in order of decreasing preference)

thompson@PAN.SSEC.HONEYWELL.COM (John Thompson) (11/08/90)

> <<forwarded message>>
> In article <ianh.658013284@morgana>, ianh@bhpmrl.oz.au (Ian Hoyle) writes:
> |> Is sr10.3 vapourware ??? Only joking, but we've heard zilch on when that
> |> will _really_ ship here either.
> |> --
> 
> I believe it has already started shipping. Our local sales office got
> their copy 2 weeks ago (although we are still awaiting our copy, sigh)

We seem to be among the first people to get (or admit to getting) 10.3.  
It's been sitting in my office now for about a week while I juggle higher
priority stuff, and find room for it on our system.  We got it, in fact,
_before_ our (internal) support division received their copy!  Mentor
Graphics shipped it out to us 10/07/90 -- and it appears to be the real
10.3, not the 10.3 pre-release (10.03.a).  I don't normally give kudos
to Mentor, but if nobody else has it, I guess they deserve congratulations
for _somehow_ getting it and sending it to us!

John Thompson (jt)
Honeywell, SSEC
Plymouth, MN  55441
thompson@pan.ssec.honeywell.com

As ever, my opinions do not necessarily agree with Honeywell's or reality's.
(Honeywell's do not necessarily agree with mine or reality's, either)

rand@HWCAE.CFSAT.HONEYWELL.COM (Douglas K. Rand) (11/24/90)

Well, around the end of October (the listing of / reports that the
creation time was October 23, 1990), I volunteered to be the guinea
pig for SR10.3. Since then I have learned a few things.

The biggest gotcha is color maps. At 10.3 HP/Apollo tried to make the
DM polite in dealing with the color map. There is a new feature called
~/user_data/color_map. Just put a copy of your own color map in that
file, and log in. Or so the manual says.

The DM does everything right. I load up a color map that prints all
the text in this pukish orange (supposed to be amber); makes all of
the pad backgrounds black (I hate that white); and loads up a nice set
of paisley pad borders. Great. Everything works as advertised, except
GNU Emacs (18.54 with the Zubkoff mods). It prints all the text in black.
And my pad backgrounds are black too. Oops. (How good a touch typist
are you?)

Got on the horn to HP/Apollo, and found out they are learning, just
like me. They were helpful, and full of questions. This is not a
flame. I guess this is what you get for being on the leading edge:
you get cut sometimes.

Turns out that when the DM reads ~/user_data/color_map it realizes
that the color in slot 0 of your ~/user_data/color_map is the text
color, and then loads that color in ANY color slot it chooses. All of
the text is printed in the right color, but there is no way for a GPR
program to figure out which color that is. Slot 0 is still black (the
system default), and the old (SR10.2 and previous) way to get the
color of the text was via slot 0.

Now for the solution. Log in, and do an lcm -p ~/user_data/color_map.
All of your colors will go screwy, but a CTRL-F will solve most of
them. Now log out, and log back in. Things should work fine till you
reboot the node. (At least, they will work fine for your color map and
you. If anybody else logs in, things will probably break again.)
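
In other words (the color map path here is mine; substitute your own):

	% lcm -p ~/user_data/color_map

then hit CTRL-F to clean up the screwy colors, log out, and log back in.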

I asked HP/Apollo for a gpr call like gpr_$foreground to get the text
color, and an APR is being written. So far, the only solutions are to
use the lcm command, or to create an option to pass to the GPR program
to specify the color to use. I don't really like either, but the lcm
workaround is livable.

The other thing I noticed was rubber-banding windows. I couldn't see the
rubber bands over the black background of my pads. HP/Apollo finally
reproduced the problem, and it turns out that my forcing the color map
with the lcm call caused it. Here is how rubber banding was
described to me by HP/Apollo.

At 10.3, the index of the color of the pad border of the pad you are
moving/resizing is xor'ed with the index of the color the rubber band
is over. The new politeness of the DM causes all black regions of the
screen to use a single entry in the color map. In my case it picked 7
(the text of the pad titles). Well here is a little binary math done
in decimal:
	 9 xor 7 = 14
	11 xor 7 = 12
	13 xor 7 = 10
	15 xor 7 = 8

7 is the index used to print the pad titles (and all of the pad
backgrounds too, because they are all black). 9, 11, 13, and 15 are the
indices for the pad borders (lovely shades of pastel). 14, 12, 10, and 8
are the indices for the pad backgrounds (all black). So rubber banding
over the pad backgrounds caused the rubber bands to be painted black.
As advertised.

My solution was to change the color in slot 7 to a very dark
shade of gray (1,1,1 rgb). That worked better.

(An aside. There is a cool utility in /systest/ssr_util called
color_probe. It draws the current color map in your pad. And you can
point and click on colors and it flashes them on the screen and tells
you the color.)

Now on to more GPR stuff. I've got a stupid bar-charting library that
behaves a little like dspst. It uses auto refresh
(gpr_$set_auto_refresh(TRUE,status)) instead of my doing the refresh
myself. (I'm lazy, what can I say?) At SR10.3 there is a new undocumented
feature for this: GPR creates a file in /tmp called
gpr_autorefresh_<some-number> that is the bitmap for the entire screen.

Well, it won't work if that file does not have the following
privileges:
	u=rwx
Yes, the x is very important. Our /tmp directories have (I believe)
the default initial file ACLs:
	% llacl -L /tmp
	   Object ACL:
	      Network-wide access allowed
	      Required entries:
		root.%.%            	prwx-
		%.staff.%           	-rwx-
		%.%.none            	-----I
		%.%.%               	-rwx-
	      Extended entry mask:	-----
	   Initial Directory ACL:
	      Network-wide access allowed
	      Required entries:
		rand.%.%            	------UP
		%.none.%            	------UP
		%.%.none            	-----I
		%.%.%               	------U
	      Extended entry mask:	-----
	   Initial File ACL:
	      Network-wide access allowed
	      Required entries:
		rand.%.%            	------UP
		%.none.%            	------UP
		%.%.none            	-----I
		%.%.%               	------U
	      Extended entry mask:	-----

Those ACLs use the umask. And mine is of the paranoid school: 177. So my
GPR autorefresh files had only rw. They needed rwx. My solution was to
do a umask just before the gpr_$set_auto_refresh() call, and then set
it back after. A better solution would be to have HP/Apollo change GPR
to set the ACL of the file when it creates it. GPR, after all,
knows what privileges it requires.
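
Here is roughly what my workaround looks like (a sketch only: the
stand-in declarations would really come from the Apollo insert files,
and I'm assuming the C binding passes the status argument by reference):

	/* Sketch of the umask workaround.  status_$t and the GPR routine
	 * are really declared in the Apollo insert files; the stand-ins
	 * below just make the sketch self-contained. */
	#include <sys/types.h>
	#include <sys/stat.h>                 /* umask() */

	typedef long status_$t;               /* stand-in declaration */
	extern void gpr_$set_auto_refresh();  /* stand-in declaration */

	void start_auto_refresh()
	{
	    status_$t status;
	    mode_t saved;

	    /* Loosen the umask so /tmp/gpr_autorefresh_<n> gets u=rwx ... */
	    saved = umask(0);
	    gpr_$set_auto_refresh(1, &status);   /* 1 == TRUE */
	    /* ... then go back to my paranoid 177 for everything else. */
	    (void) umask(saved);
	}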

Now, for some TCP/IP stuff. Our tcpd was running with the -c switch,
just like it should. But we couldn't transfer more than roughly 128
bytes via either ftp or telnet. I'd get the login prompt (via telnet)
and maybe the password prompt, and then it would say
'connection closed by foreign host'. Our solution was to add a -p0
option to tcpd. We never called HP/Apollo on this one; we probably
should, though.
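
So the tcpd line in our /etc/rc.local now reads (the -c being the
pre-SR10.3 compatibility switch mentioned earlier in this thread):

	/etc/tcpd -c -p0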

So much for the complaints. The X performance is a lot better than
before. The F monitor stuff is still kinda slow (I have a 4500, 16
Meg, an F monitor [1280x1024], ethernet and ring).  (X on the new 400's
is good. Almost great.)  The new process limits are great. No more
full tables. It seems a bit quicker, but that might just be my newly
invol'ed disk. (Who says Domain/OS never fragments!)  All in all, it
was pretty painless.

Finally, ANSI-compatible include files! Now I can get rid of my
hand-rolled prototypes for malloc() (see the little sketch after the
list below)! I'm waiting for CR1.0. Here is what is loaded on my node:
	C++	2.0
	C	6.7 (both m and mpx)
	DSEE	3.3.2
	Pascal	8.7 (both m and mpx)
	TECHnet	1.1 (when is VMS post 5.1 support coming?)
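
Just to show what I mean about malloc() (plain ANSI C, nothing
Apollo-specific):

	/* Pre-SR10.3 include files: no prototype, so I carried this around: */
	extern char *malloc();

	/* With the SR10.3 ANSI-compatible headers, this is all I need: */
	#include <stdlib.h>     /* declares void *malloc(size_t) */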

That's all I can think of. Overall it was a very painless process. The
advantages of 10.3 far outweigh the problems. (But you still gotta
fix them!) I would be interested in hearing anything else about 10.3.

--
Douglas Keenan Rand                Honeywell -- Air Transport Systems Division
Phone: +1 602 869 2814               US Snail: P.O. Box 21111 Phoenix AZ 85036
Internet: @cim-vax.honeywell.com:rand@hwcae.cfsat.honeywell.com
UUCP: ...!uunet!hpfce!apciphx!hwcae!rand

chlg1043@uxa.cso.uiuc.edu (Christopher H Lee) (02/23/91)

We are planning on bringing our network up to 10.3 in the
near future.  Do any of you out there with experience updating
have any helpful hints?  I hope this will be less painful than the
9.7 to 10.2 update was.

Oh...and can 10.3 coexist with 10.2 nodes?  We would like
to keep some of the nodes up and running (10.2) as we do the
system updates.

Thanks in advance


Chris Lee
Computational Electronics
University of Illinois

system@alchemy.chem.utoronto.ca (System Admin (Mike Peterson)) (02/24/91)

In article <1991Feb23.025815.28527@ux1.cso.uiuc.edu> chlg1043@uxa.cso.uiuc.edu (Christopher H Lee) writes:
>We are planning on bringing our network up to 10.3 in the
>near future.  Do any of you out there with experience updating
>have any helpful hints?  I hope this will be less painful than
>9.7 to 10.2 updates.

Hardly any pain at all, but as I outlined a few weeks ago, don't put SR10.3
on a DN2500 until the patch for the racing clock is available (sometime in
April, I was told, but I am running a version of it now, and the node
is now functional). If you start X Windows at boot time, be aware that
the options in /etc/rc and /etc/daemons have changed: /etc/daemons/Xapollo
now gets you X in DM-owns-root, and /etc/daemons/X gets you X in
X-owns-root (the latter didn't exist before).
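
(If I remember the convention correctly, the entries in /etc/daemons are
just marker files that tell /etc/rc which servers to start at boot, so
picking a mode is a matter of creating the file you want and removing
the other one before rebooting, e.g.

	% touch /etc/daemons/Xapollo

for DM-owns-root.)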

>Oh...and can 10.3 coexist with 10.2 nodes?  We would like
>to keep some of the nodes up and running (10.2) as we do the
>system updates.

Yes - just keep using the 'tcpd -c' option in /etc/rc.local on the
SR10.3 nodes until they're all at 10.3; then get rid of it and reboot
the whole network (at the same time :-) ).
-- 
Mike Peterson, System Administrator, U/Toronto Department of Chemistry
E-mail: system@alchemy.chem.utoronto.ca
Tel: (416) 978-7094                  Fax: (416) 978-8775