[comp.sys.apollo] some questions for the gurus.

GBOPOLY1@NUSVM.BITNET (fclim) (09/05/88)

hi,
     i have some questions concerning administering and programming
on apollo computers.  but maybe i'd better insert a piece of intro
here first:
     we are an educational institute.  we have 3 labs; each with 21
nodes; dn3010 with sr9.7.  on average, one partner serves 2 diskless
nodes in two of the centres.  the labs are used in a multi-user
environment; ie no user owns any node.  he/she uses the node for
one or two hours a day.  there are one cartridge tape drive, one
floppy disk drive and one dsp90 (with a 500m winchester disk) in
each centre.  in one of the labs, we have an additional dsp90 and a
mag-tape drive.   the main software in this lab is the mentor
graphics packages whereas the other 2 centres are for general use.
we will upgrade our system to sr10 when it is released.  however, i
am not too sure about the lab running the mentor graphics packages;
if mentor has not upgraded their packages to sr10, then this lab
will remain at sr9.7.

      here are the questions:
(1) can we prevent ordinary users from shutting down the system?
    this is a question about the dm.  shut is a built-in dm
    command, so we can't acl it.  we can't close the dm input pad
    as the users need it for cv and lo-ing.
    on a "real" unix system, the lab operators have to log in as
    root (or at least with a uid of 0) in order to shut down.

(2) how do we prevent ordinary users from sigp'ing other users' processes?
    we would like to keep crp open to all ordinary users so that
    they may be able to crp onto the nodes with cartridge or mag-
    tape drives.  however, we don't wish to see users abuse the
    system with "crp -on -n //w1a -cp /com/sigp -s process_1"
    or, worse, "crp -on -n //w1a -cp /com/xdmc shut".
    our users are mostly students and you know how students love
    to hack.  i don't mind any of them melt-ing my screen -- that
    is a harmless practical joke; though at times, an inconvenient
    pain in the rear.
    neither aegis nor domain/ix has accounting facilities, so i
    cannot catch anyone doing such mischief.

(3) i wish to port the unix terminal-locking program -- lock --
    to the apollos.  the only catch is: although i wish to have lock
    usable by all users, i do not wish it to be abused.  my plan
    is that all users may lock their terminals but the program
    will bomb out after one hour.  upon exit, it will log the user
    off.
    locking a terminal is simple.  i just use gpr in borrow-mode.
    but in borrow-mode, how will the program log the user off?
    the dm input pad is hidden as the dm has relinquished the screen
    to my program.

it seems the 3 questions are connected to the dm -- which shouldn't be
there in the first place and shouldn't be so powerful -- and to aegis'
lack of process ownership.

(4) has anybody successfully made gcc and g++?  i would like to have
    gcc because it's ansi even though ansi is not ready with its
    definition of c.  i would also like to try my hand at some oo
    programming.
    i think there will be a problem because /com/cc doesn't generate
    code in a.out(5) format.  will this be fixed at sr10?  or,
    better still, has anybody any patches to get them working at
    sr9.7?

(5) has anybody made the andrew toolkit that comes together with
    the x11.r2 dist tape from mit?  as well as the other one -- gee,
    what's the name of that software -- the one that puts john and
    yoko or brooke shields on the root window (ie the background)?
    i don't really know what andrew is;  i have only peeked at its
    sources.  it seems to have some kind of oo sub-language;  seems
    an interesting thing to play with.  but i saw "/dev/console"
    in the sources.  so i decided to ask first before i waste 3 to
    6 hours compiling it and then finding that it can't work at sr9.7.

i'd like to thank in advance everyone who responds to these questions.
i hope that your replies are in a positive vein.   thank you.


fclim         --- gbopoly1 % nusvm.bitnet @ cunyvm.cuny.edu
computer centre
singapore polytechnic
dover road
singapore 0513




ps.
(6) is there an arpanet or uucp address for adus?  especially,
    the library?

thanks -- fclim.

rees@MAILGW.CC.UMICH.EDU (Jim Rees) (09/06/88)

    (1) can we prevent ordinary users from shutting down the system?

No.  I can always shut down the system by turning the power off, if no
other means is available.  If you require absolute physical security,
go talk to IBM.

    (2) how do we prevent ordinary users from sigp'ing other users' processes?

This is "fixed" (assuming you consider current behavior to be broken)
in sr10.

    (4) has anybody successfully made gcc and g++?

The trick isn't getting gcc to work.  That's easy, it just runs.  The trick
is running the resulting objects.  I've got an a.out to coff converter that
I'll send to anyone who is interested.  It's still missing the ability to
generate good relocs for external data references, and since most addresses
appear in the text section, it won't work with dynamic binding (I bind KGT
references at link time).  But it actually works for simple cases.  Of
course it only helps if your Apollo runs coff (sr10 or later).

I'd like to take this opportunity to flame a bit on the issue of "security."
I don't give my car keys to someone I don't trust.  And I don't give a
computer account to someone I don't trust.  I wouldn't ask a workstation
manufacturer to prevent users from shutting the machine down for the same
reason I don't ask Ford to prevent users from running my car over a cliff. 
I've heard the claim that things are different in an academic environment,
that you can't expect students to exhibit responsible behavior.  Well maybe
I'm just an old fart, but I do expect that of students.  Does the Music
school lock down the tops of the grand pianos so the students won't cut the
strings?  I don't think so.

A timesharing system is different.  If you screw that up, you screw
everyone. But workstations are supposed to put the power into individual
people's hands. I think that's an important distinction.  When you start
treating your workstations as timesharing systems, you've taken power out of
the hands of the people, and put it into the hands of the bureaucrats.  I
think that's bad.  All of you users out there should be worried when people
who run computer labs start asking how they can prevent users from shutting
down the system.  Let them know that's the wrong question to ask.
-------

benoni@ssc-vax.UUCP (Charles L Ditzel) (09/06/88)

in article <8809051853.AA03917@mailgw.cc.umich.edu>, rees@MAILGW.CC.UMICH.EDU (Jim Rees) says:
>     (2) how do we prevent ordinary users from sigp'ing other users' processes?
> This is "fixed" (assuming you consider current behavior to be broken)
> in sr10.

I consider SR9.7 and before to be broken for this very reason.  After all,
what good is root ownership over a process if it can be killed by an ordinary
user????  There are more than a few people I have talked to who regard
this as downright weird!
  
> I'd like to take this opportunity to flame a bit on the issue of "security."
> I don't give my car keys to someone I don't trust.  And I don't give a
> computer account to someone I don't trust.  I wouldn't ask a workstation
>...etc
whoa...it is a matter of trust...it's really a matter of experience and
knowledge about what's going on...and a lot of users just want to get their
work done and not become computer jocks...  Besides, if Apollo takes
elaborate efforts to protect its filesystem with ACLs and Unix permissions...
why not take the time to protect critical root and user processes AND
why not take the time to deny "shut" to novices...

> 
> A timesharing system is different.  If you screw that up, you screw
> everyone. But workstations are supposed to put the power into individual
> people's hands. I think that's an important distinction.  When you start
> treating your workstations as timesharing systems, you've taken power out of
> the hands of the people, and put it into the hands of the bureaucrats.  I
> think that's bad.  All of you users out there should be worried when people
> who run computer labs start asking how they can prevent users from shutting
> down the system.  Let them know that's the wrong question to ask.

One of the chief problems with letting everyone shut down their systems is
that if you have diskless machines that depend on server X ... and the
user at server X shuts it down...OR if you have user Y on a disked
node but his account is on server X and the user on server X shuts it down...
I think you get the picture...I think 'shut' should be a root/sysadmin
command...or a command that can be given out to knowledgeable users.
Taking power out of the hands of bureaucrats and putting it into the hands
of the masses is 'pretty' rhetoric...but i would hate for a "production"
network to depend on this philosophy...or even an academic setting...
---------------
Naturally My Opinions are my own

GBOPOLY1@NUSVM.BITNET (fclim) (09/06/88)

hi,
     (2) i am sorry for misleading people like jeff putsch (gatech!
amdcad!neptune.amd.com!putsch).  i do not wish to lock up crp as well
as sigp.  both are definitely needed once in a while, especially by
ordinary users.
     yes, you are right.  "crp -on //w1a -me -cp xdmc shut" won't
work.  i am sorry that i didn't check this out before mailing my
questions.
     another thing about sigp is that i do not wish to see users
killing the printer server.  but still, i will not edacl sigp to
% -.

     (1) and (2) jim rees (rees@mailgw.cc.umich.edu) writes:

>I've heard the claim that things are different in an academic environment,
>that you can't expect students to exhibit responsible behavior.  Well maybe
>I'm just an old fart, but I do expect that of students.  Does the Music
>school lock down the tops of the grand pianos so the students won't cut the
>strings?  I don't think so.

ok, i am sorry for classifying students as mischievous.  my problem is
that our students are still green about computers.  they do not know
about partners and diskless nodes, or about background (server) processes
such as siomonit, which we use for kermit.

>I don't give my car keys to someone I don't trust.  And I don't give a
>computer account to someone I don't trust.  I wouldn't ask a workstation

gee wheesh, i wish i had your authority -- blocking anyone i don't
like from using the computers.  we have laws here against such
discrimination.

>manufacturer to prevent users from shutting the machine down for the same
>reason I don't ask Ford to prevent users from running my car over a cliff.

don't extrapolate.  and if you do extrapolate, make damned sure that
your extrapolation is still in a straight line -- my stat prof.
you might as well say that people shouldn't write in asking questions
on how to prevent god from sending a lightning strike through an open
door at the workstations.

>everyone. But workstations are supposed to put the power into individual
>people's hands. I think that's an important distinction.  When you start
>treating your workstations as timesharing systems, you've taken power out of
>the hands of the people, and put it into the hands of the bureaucrats.  I
>think that's bad.  All of you users out there should be worried when people
>who run computer labs start asking how they can prevent users from shutting
>down the system.  Let them know that's the wrong question to ask.

my policy has always been to keep an open and flexible system
(in a previous query, i asked about allowing users to be able to have
their default login window changed to their choice of shells as well as
using dm or x as default).
i have kept all sources (including my system shell scripts or c programs)
on the file system.
my philosophy is that an open system will allow experimentation which
leads to better understanding.  but WHY would anyone want to try their
hands at shutting down the system?  is that a real big thrill?  why
can't they write their programs, use the software packages, etc  and
leave the shutdown and system adminstration to the people who are
hired to look after such things?  why can't they understand by doing
so, they will have all the powers of the workstations, or maybe even
more, in their hands?  i can always write shell scripts or c programs
that will monitor them as if they are winston smiths.  but that will
put more load on the cpu.  and will that help them as well as me?

all i am asking is that apollo computers inc. removes some power
from the dm.  they have made the file system tight with acl et al.
i think it's time now for priorities on processes; and shut stops
the grand-daddy process -- init -- so it should be
handled with care.  maybe jim rees is blessed with all disked nodes.
but we can't afford such luxuries.

fclim         --- gbopoly1 % nusvm.bitnet @ cunyvm.cuny.edu
computer centre
singapore polytechnic
dover road
singapore 0513

dave@jplopto.uucp (Dave Hayes) (09/07/88)

> A timesharing system is different.  If you screw that up, you screw
> everyone. But workstations are supposed to put the power into individual
> people's hands. I think that's an important distinction.  

Well what do you have to say about the soon-to-be-released capability
(I believe it's NCS) to distribute large computing jobs over large APOLLO
networks? This amounts to timesharing. Another thing: I used to have to deal
with a user who did the following via an automatic script:

$ crp -on //some_node -me
Connected to node XXXX "//some_node" 
$ ppri -lo 16 -hi 16
$ sigp ?* -stop -qa

No joke, this guy's excuse was that he needed extra node(s) to do compilation!

> I'd like to take this opportunity to flame a bit on the issue of "security." 

Well I would too. In *ANY* multi-user environment where there are a set of
computer resources distributed among several colleagues, there must be some
sort of control and/or arbitration of the allocation of these resources to 
the users. This is the job of the systems administrator, who must resolve 
conflicts and keep the system up and running for _everyone_. If you are lucky 
enough to have a single stand-alone workstation per user, then you need not worry
about security. As soon as you get more than three users competing for the same 
resources, then someone has got to be able to enforce a resource allocation policy 
for those users. System security is the preferred method of enforcement (management
does not take kindly to baseball bats!) of these policies.  

Yes, it can get very bureaucratic. But would you rather have nobody get work done 
because one of the many users of your system got so frustrated that he wrote a worm 
which hops from node to node killing processes that don't match his SID? Furthermore, 
would you leave your $10,000 applications software with world delete privileges?  
Even more to the point, would you let some random user (who maybe has just installed
some hot new software package that requires a reboot to get started) even have the 
ability to shutdown *your* node while your 10 hour simulation run was in progress?

All flame aside, I used to think like you. That is, until I was put on the other 
side of the fence.....

------===<<<(Dave Hayes)>>>===------
dave%jplopto@jpl-mil.jpl.nasa.gov   
{cit-vax,ames}!elroy!jplopto!dave   

pha@zippy.eecs.umich.edu (Paul Anderson) (09/07/88)

In article <8809051853.AA03917@mailgw.cc.umich.edu> rees@caen.engin.umich.edu (Jim Rees) writes:
>
>A timesharing system is different.  If you screw that up, you screw
>everyone. But workstations are supposed to put the power into individual
>people's hands. I think that's an important distinction.  When you start

Jim, this is true, but some machine resources are still expensive enough
that they must, by virtue of their cost, be shared among hundreds
of users.  This means that, workstation or not, they must have the
security, and more importantly, the robustness of a mainframe.

A major flaw in Apollo's past thinking is that their machines
should be used *only* as workstations, therefore, in a number
of cases, justifying flawed implementations of things that
could have and should have worked as well as a mainframe.

I hope that this attitude is changing, because I have yet
to see a workstation better than an Apollo overall, yet
it is clear that some additional effort will allow networks
of Apollos to finally, and truly compete head to head with
more traditional mainframe sites, and blow them out of
the water in every respect, bar none.

>treating your workstations as timesharing systems, you've taken power out of
>the hands of the people, and put it into the hands of the bureaucrats.  I
>think that's bad.  All of you users out there should be worried when people

I think it's bad, too, but neither extreme is really acceptable,
especially if either viewpoint is used to justify sloppy implementation.

>who run computer labs start asking how they can prevent users from shutting
>down the system.  Let them know that's the wrong question to ask.
>-------

Paul Anderson
CAEN

mishkin@apollo.COM (Nathaniel Mishkin) (09/08/88)

In article <9136@elroy.Jpl.Nasa.Gov> dave@jplopto.UUCP (Dave Hayes) writes:
>> A timesharing system is different.  If you screw that up, you screw
>> everyone. But workstations are supposed to put the power into individual
>> people's hands. I think that's an important distinction.  
>
>Well what do you have to say about the soon-to-be-released capability
>(I believe it's NCS) to distribute large computing jobs over large APOLLO
>networks? This amounts to timesharing. ...

There are some subtle issues here, not all of which I will claim we have
addressed.  The fundamental issue is whether you'll allow processing
on behalf of different users to occur simultaneously on the same machine.
I think that unless the machine is in a locked room accessible only by
trusted parties, you really can't with complete safety allow multiple
users to use the same machine at the same time.  (If pressed, I can
[probably] elaborate on why this is the case.)

In a truly secure environment, no server machine does anything interesting
for a client machine unless it has authenticated credentials from the
client.  Once the server knows who the client is, it applies whatever
decision procedure it chooses to decide whether it will execute the client's
request.  The problem with this model is that it doesn't simply accommodate
the sort of "batch" processing that people want to use distributed systems
for:  I'm on machine A and I want machine B to run some big "job" for
me.  Therefore, B must be able to take on my identity so that it can
issue secure network requests (e.g. back to some file server that's holding
data for my job).  To do this, I must transmit my secret key (password)
to B (encrypted using a session key -- some key that only B and I know,
by virtue of having talked to an authentication server).  But now I'm
starting to get unhappy -- I had to tell someone else my secret key.
Do I trust B enough to do that, or is someone right now sitting at B
ready to crash it and stare at a core dump until he finds my key sitting
someplace in the clear?

It's really sort of an unhappy state of affairs and one that most systems
have essentially punted on, but we shouldn't kid ourselves into thinking
that (say) rlogin'ing into some random remote system is actually safe
(either for the person doing the rlogin'ing or the system being rlogin'd
into).

-- 
                    -- Nat Mishkin
                       Apollo Computer Inc., Chelmsford, MA
                       mishkin@apollo.com

casey@admin.cognet.ucla.edu (Casey Leedom) (09/08/88)

In article <3e5437c2.13422@apollo.COM> mishkin@apollo.com
 (Nathaniel Mishkin) writes:
> In a truly secure environment, no server machine does anything interesting
> for a client machine unless it has authenticated credentials from the
> client. ...  The problem with this model is that it doesn't simply
> accommodate the sort of "batch" processing that people want to use
> distributed systems for:  I'm on machine A and I want machine B to run
> some big "job" for me.  Therefore, B must be able to take on my identity
> so that it can issue secure network requests (e.g. back to some file
> server that's holding data for my job).  To do this, I must transmit my
> secret key (password) to B (encrypted using a session key -- some key
> that only B and I know, by virtue of having talked to an authentication
> server).  But now I'm starting to get unhappy -- I had to tell someone
> else my secret key.  Do I trust B enough to do that, or is someone right
> now sitting at B ready to crash it and stare at a core dump until he finds
> my key sitting someplace in the clear?

  A standard model for doing this is to perform authentication negotiation
with your file server, and get a capability handle in response.  You then
pass that capability to machine B to use.  Unfortunately this requires
that capabilities be timed out or someone could still grope through B and
find the capability.  Another mechanism is to negotiate an authentication
capability with your batch processor and hand that off to the file
server, but this requires that the file server keep track of capabilities
which may or may not be used.  A third mechanism is to set up a
sub-authentication server on your protected machine and make the batch
server process negotiate through it for access to your files.
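  The timed-out capability above can be sketched in a few lines of C.
This is purely illustrative -- the names are made up, and a real token
would also be signed or encrypted so a client couldn't forge or alter
it -- but it shows the shape of the idea:

```c
#include <time.h>

/* A toy capability: grants access to one user until an expiry time.
 * Real systems would also cryptographically protect the token. */
struct capability {
    int user_id;
    time_t expires;
};

/* Issued by the file server after authentication negotiation. */
struct capability cap_issue(int user_id, time_t now, long ttl_secs)
{
    struct capability c;
    c.user_id = user_id;
    c.expires = now + ttl_secs;
    return c;
}

/* The server honors the token only for the named user, and only
 * before expiry -- so a copy groped out of machine B goes stale. */
int cap_valid(const struct capability *c, int user_id, time_t now)
{
    return c->user_id == user_id && now < c->expires;
}
```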

  This is a well studied field with no truly definitive answers and many
more cropping up all the time.  As always it's a trade off between cost
(performance, implementation complexity, maintenance) vs. security
(which is never truly total till you melt your disks down in a furnace,
and maybe not then) vs. usability (which is really just a special case
of cost, but useful to break out).

  But this isn't what we're really talking about here.  Or, rather, that
first paragraph isn't what we were talking about.  We're interested in
the pragmatic security issues under Aegis and Unix.  And again, one of
the biggest reasons that I and many others are interested in them is
simply system stability.  If your systems are down one out of every two
days because people are accidentally or maliciously wiping you out, they
aren't very usable to anyone.  One out of two days is a total
exaggeration, but I'm sure you get the point.

  Cost vs. security vs. usability.

Casey

frank@CAEN.ENGIN.UMICH.EDU (Randy Frank) (09/08/88)

It's always fun when flames start up on a list...

I fundamentally disagree with those who treat security as a binary issue:
"if you can't have perfect security, then why have any at all" is b.s.

Security, by definition, is shades of gray.  There IS a fundamental difference
between executing a "SHUT" command and powering off a machine, even though the
end result is a down machine.  Firstly, it's much harder to accidentally power
off a machine, and it's also the case that a user who might not feel at all uneasy
about issuing a shut command might think twice about powering off a machine.

Similarly, the argument that if you don't have physical security for a machine
why bother with logical security is equally lame.   Sure, with a lack of physical
security a very sophisticated user can always get into a console or some equivalent
mode and bypass whatever programmatic security you build in, but the bottom line
is that VERY few users are sophisticated enough to do this, while almost any user
is smart enough to blast away another user's process.

For years most of us have lived with vanilla BSD Unix, which, for many of us, has
GOOD ENOUGH security.  Most of us have also lived with machine rooms with marginal
physical security, and yet at least in my case I don't know of a case where a user
used lack of physical security to break into a system.  All cases I know of are
penetrations of logical/programmatic security.

Despite what Apollo continues to say about "personal" workstations, they are
starting to build a class of machines such as the DN10000 which we can ONLY justify
as multi-user shared resources.  If they want to sell us those machines, they are
going to have to provide security on the same order of magnitude as standard BSD or
we can't run them in our environment.  It's that simple.

Randy

geof@imagen.UUCP (Geoffrey Cooper) (09/09/88)

In article <3e5437c2.13422@apollo.COM>, mishkin@apollo.COM (Nathaniel Mishkin) writes:
> I think that unless the machine is in a locked room accessible only by
> trusted parties, you really can't with complete safety allow multiple
> users to use the same machine at the same time.  (If pressed, I can
> [probably] elaborate on why this is the case.)

My bogometer goeth clang, clang!

Granted, you can make an academic argument about what MIGHT happen in
the most severe case.  You might even convince yourself that timesharing
is too risky to be of any use.  Nevertheless, the risks can be
MINIMIZED to the point where sharing a processor is USEFUL in some
environments -- the gain outweighs the risk.

An existence proof about an unsolvable aspect of a problem does not
preclude a USEFUL solution to the problem.
-- 
UUCP: {decwrl,sun}!imagen!geof ARPA: imagen!geof@decwrl.dec.com

mishkin@apollo.COM (Nathaniel Mishkin) (09/13/88)

In article <8809081356.AA00196@caen.engin.umich.edu> frank@CAEN.ENGIN.UMICH.EDU (Randy Frank) writes:
>It's always fun when flames start up on a list...

Seemed like the fire was going out so I guess I'll have to give the embers
a nudge...

>I fundamentally disagree with those who state that security is a binary issue: if
>you can't have perfect security, then why have any at all is b.s.

I don't really disagree with this statement and the point of my previous
message was more to heighten people's awareness rather than tell them
that the situation is hopeless.  I'm a little uncomfortable with the
"we've lived with vanilla BSD Unix security for years and it's been OK"
argument for two reasons.  First, I think it's really not good enough;
I don't think that you have to be all that much of a wizard to defeat
it.  (I get the willies thinking about PC's that can do TCP/IP plugged
into my internet.  Ports less than 1024 reserved to privileged processes?
Hah.  Keeping a list of "trusted hosts" in a network of 50 or more
machines.  I don't think so.)  Second, I don't see how Apollo (or any
other company selling to people who have any inkling of what's really
required to make a network truly secure) can in good conscience promote
something as being even "pretty secure" when we all know that it would
take someone with only moderate skills a day or so to defeat the system's
"security".

But back to the particular problems raised in this discussion:  the
signalling of processes you don't own and the shutting down of nodes.
As Jim Rees said, the signalling issue is fixed in sr10 so that you have
to be root or the same ID as the target process to be able to signal
it.  As far as shutdown goes, Jim said something like "You can just turn
the power off so what good does it do to require special privileges to
execute the DM's SHUT command." The counters to that seemed to be "But
look at the problems caused if you let randoms shut down nodes", which
misses the mark.  Of *course* it can cause problems, but the fact of
the matter is that the node is sitting on someone's desk and if he *wants*
to cause problems, he'll just shut the power off -- he doesn't need to
be able to issue the SHUT command.  If the retort here is "We have stupid
users that might *accidentally* issue the SHUT command", well, all I can
say is if enough people think that's a real problem, shout now and I'm
sure we'll do something about it.

-- 
                    -- Nat Mishkin
                       Apollo Computer Inc., Chelmsford, MA
                       mishkin@apollo.com

krowitz@testnode.MIT.EDU (David Krowitz) (09/14/88)

Time for me to throw in my two cents worth ...

Actually, it would be nice if the SHUT command checked for some
basic things like:

1) processes CRP'd in from another node
2) diskless partners that were currently booted off the disk
3) files that were opened from other nodes
4) if the node is a gateway

and things like that, and issued a "do you really want to
do this?" message before going ahead and blowing everything
out of the water. If I typed "dlf ?*" the system would ask
me if I really wanted to delete everything. Why not do the
same for SHUT?  The issue of restricting access to the SHUT
command, for me, is not so much one of security as it is one
of simple safety. Does the person really know what they are
doing? If they have sys_admin or root privileges (real or
ill-gotten) they probably have a good idea of what they're
up to. If they don't have the rights, then why trust them
to know what they're doing to other users on the network? Why
would a random user *need* to shut down a node anyhow?
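Such a check could look something like this in C.  The node_state
fields are hypothetical stand-ins for whatever queries the OS would
really need; the point is only that SHUT could gather them and
prompt before blowing everything out of the water:

```c
/* Hypothetical summary of what other nodes depend on this one for. */
struct node_state {
    int remote_processes;    /* processes CRP'd in from other nodes */
    int diskless_partners;   /* partners currently booted off this disk */
    int remote_open_files;   /* files opened from other nodes */
    int is_gateway;          /* node is acting as a gateway */
};

/* Nonzero when a "do you really want to do this?" prompt is warranted. */
int shut_needs_confirmation(const struct node_state *s)
{
    return s->remote_processes || s->diskless_partners ||
           s->remote_open_files || s->is_gateway;
}
```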


 -- David Krowitz

krowitz@richter.mit.edu   (18.83.0.109)
krowitz%richter@eddie.mit.edu
krowitz%richter@athena.mit.edu
krowitz%richter.mit.edu@mitvma.bitnet
(in order of decreasing preference)

pha@zippy.eecs.umich.edu (Paul Anderson) (09/14/88)

In article <3e720dde.13422@apollo.COM> mishkin@apollo.com (Nathaniel Mishkin) writes:
>In article <8809081356.AA00196@caen.engin.umich.edu> frank@CAEN.ENGIN.UMICH.EDU (Randy Frank) writes:
>>It's always fun when flames start up on a list...
>
>Seemed like the fire was going out so I guess I'll have to give the embers
>a nudge...
>
>>I fundamentally disagree with those who state that security is a binary issue: if
>>you can't have perfect security, then why have any at all is b.s.
>
>
>But back to the particular problems raised in this discussion:  the
>signalling of processes you don't own and the shutting down of nodes.
>As Jim Rees said, the signalling issue is fixed in sr10 so that you have
>to be root or the same ID as the target process to be able to signal
>it.  As far as shutdown goes, Jim said something like "You can just turn
>the power off so what good does it do to require special privileges to
>execute the DM's SHUT command." The counters to that seemed to be "But
>look at the problems caused if you let randoms shut down nodes", which
>misses the mark.  Of *course* it can cause problems, but the fact of
>the matter is that the node is sitting on someone's desk and if he *wants*
>to cause problems, he'll just shut the power off -- he doesn't need to
>be able to issue the SHUT command.  If the retort here is "We have stupid

You are missing the point.  At CAEN (~430 apollos), we don't put network-wide services
on student or private faculty nodes - we put them on dedicated fileservers,
dedicated gateways, dedicated mail machines, and others.  We can't afford
to have holes in not only all the student machines, but also in critical
network service machines, too.

Face it (and believe me, you must): a network can't exist for long
unless critical nodes CAN HAVE PROTECTION NOT CURRENTLY SUPPLIED BY
APOLLO.  Yes, it is true that most other machines have security problems
as well, but there is no excuse for using YOUR argument to leave
gaping holes in the system that prevent us from running a tight
ship.

We generally don't leave our critical nodes in physical contact with
general users.  We do this for a variety of reasons, including the
need to supply clean, regulated power, clean, cool air, central
sites for maintenance and backup, and others as well.  Yet, despite
this isolation, there are literally hundreds of ways for users
to hose central services provided by us.  Since some of our servers can represent
tremendous time and money investments, it is NEVER, NEVER a waste
of time for Apollo to allow for tightened security on their
network.

>users that might *accidentally* issue the SHUT command", well, all I can
>say is if enough people think that's a real problem, shout now and I'm
>sure we'll do something about it.

We're shouting...

>
>-- 
>                    -- Nat Mishkin
>                       Apollo Computer Inc., Chelmsford, MA
>                       mishkin@apollo.com

I invite anybody at Apollo who wonders why we complain about
Apollo security, and software stability in general, to call
me at (313)-936-1355.

Before anyone flames me too badly: I have always thought that Apollos
are the best workstations ever built, bar none, but that doesn't mean
there isn't *lots* of room for improvement.

Paul Anderson
CAEN

markl@neptune.AMD.COM (Mark Luedtke) (09/15/88)

I, for one, would (or will) not like to have to switch users to sigp someone's
remote process on my node.  In my environment no one generally crp's onto a
node without knowing in advance that it will not be a problem, or without
explaining that, due to limited copies of software, it is a necessity.
However, there have been a few times when this has occurred, and I see no
reason to have to go through contortions to kill such a process.  (I REALLY
wish that I could suspend such processes, but I don't know any way to do
this.  Does anyone else?)  Needless to say, not everyone on our ring has a
root id, and those without one have no way to deal with this problem,
especially if the node's owner is in the middle of editing and modifying
priorities has no noticeable effect.  Hopefully this is an optional feature,
because it could cause the loss of a lot of work in my area.

Mark Luedtke  markl@neptune.amd.com

Note that these comments are only mine and do not necessarily reflect those of
AMD.

dbfunk@ICAEN.UIOWA.EDU (David B. Funk) (09/15/88)

I would like to add my 2 cents worth to the security issue discussion.
Like Randy Frank and Paul Anderson, I administer Apollos in a university
environment. This environment is unusual in many ways: a wide range of
users (total novice to total hacker), a wide range of sites (private
nodes on faculty desks to public nodes in student labs), and a wide
range of work loads (dedicated file servers to nodes that must support
many types of uses). At our site we have nodes in buildings spread over
a mile of campus (we have 2 miles of cable in one building alone).
We have a staff of only a few people and when anything goes wrong
we are called out to fix it.
In this context there are at least two different types of security issues:
the classical problem of the "hacker attack", and the problem of casual or
unintentional user-created havoc.
   In article <3e720b36.c6f9@apollo.COM>, Nat Mishkin
(mishkin%apollo.uucp@eddie.mit.edu) talks about the "hacker attack"
problem. I agree, any time you have a "real" network connection to an
external public network, you have an almost impossible security problem
on your hands. Then you have to decide what level of risk and paranoia
you are willing to live with and then pay for it.
    The other issue is the one that has caused me the most problems.
Here are some examples: a user mistypes "ex" instead of "ed"
at the DM command window; a new user reads "commands.hlp", sees
the 'shut' command, and decides to try it, not believing that it will
actually do what it says it will; a Unix user, frustrated by the
system's not doing what he expects, decides to kill off some of those
strange jobs that don't make sense to him, like '/sys/spm/spm' and
'/sys/mbx/mbx_helper'.  We have one node with a Danford SEU board;
when 8 students are logged in on it, the probability of one killing
the wrong process is definitely greater than zero.  All of these
have happened here, and I could go on ad nauseam in this vein.
    The point is that one malicious hacker could cause immense
amounts of damage, but the little dumb things are actually our major
time killers.  We went so far as to create 'patches' for sigp &
kill that force them to respect process ownership.  These have
helped, but I haven't been able to deal with "ex" and "shut".
I think that these are the kind of things that Randy Frank had
in mind when he wrote "we've lived with vanilla BSD Unix security
for years and it's been OK".
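(For the curious: the kind of ownership check such a patch enforces can be
sketched as a plain-sh wrapper.  This is a hypothetical illustration only -
it uses POSIX ps/id options, not the actual ICAEN patches or Apollo's
/com/sigp, and "safe_kill" is a made-up name:)

```shell
# Hypothetical sketch: refuse to signal a process owned by someone
# else, approximating what ownership-respecting sigp/kill would do.
safe_kill() {
    pid=$1
    signo=${2:-TERM}
    # Look up the target's owner (POSIX ps; awk trims ps's padding).
    owner=$(ps -o user= -p "$pid" | awk '{print $1}')
    if [ -z "$owner" ]; then
        echo "safe_kill: no such process $pid" >&2
        return 1
    fi
    # Only signal processes belonging to the invoking user.
    if [ "$owner" != "$(id -un)" ]; then
        echo "safe_kill: $pid belongs to $owner, refusing" >&2
        return 1
    fi
    kill -s "$signo" "$pid"
}
```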
    I am pleased to see that SR10 will provide process ownership
and "crp" control. I would like to add my vote to the "shut" control
demand.

    Dave Funk
    Iowa Computer Aided Engineering Network (ICAEN)
    University of Iowa, Iowa City, IA
    dbfunk@icaen.uiowa.edu

krowitz@RICHTER.MIT.EDU (David Krowitz) (09/16/88)

Actually, I believe that the SIGP command can be given a status code
from the /sys/ins/fault.ins.{ftn c pas} file, although I understand
that the process may not be suspended cleanly.  You can use the Unix
kill -STOP and kill -CONT commands to suspend and restart processes.
If you own a node (your personal machine, not just the node you
happen to be logged into), then you can simply not run the
server process manager, to prevent CRP requests from being accepted
(you will also have to avoid running rlogind, rshd, rexecd, etc. if you
have Domain/IX loaded on the node).
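(The kill -STOP / kill -CONT pair can be tried against any harmless
background job; here sleep just stands in for the process you'd want to
suspend:)

```shell
# Demonstrate suspend/resume with the signals named above.
sleep 300 &            # stand-in for the long-running process
pid=$!

kill -STOP "$pid"      # suspend: keeps its state, uses no CPU
kill -CONT "$pid"      # resume right where it left off
kill -TERM "$pid"      # clean up the demo process
```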


 -- David Krowitz

krowitz@richter.mit.edu   (18.83.0.109)
krowitz%richter@eddie.mit.edu
krowitz%richter@athena.mit.edu
krowitz%richter.mit.edu@mitvma.bitnet
(in order of decreasing preference)

achille@cernvax.UUCP (achille) (09/16/88)

In article <8809142353.AA14291@umaxc.weeg.uiowa.edu> dbfunk@ICAEN.UIOWA.EDU (David B. Funk) writes:
[deleted stuff]
>    I am pleased to see that SR10 will provide process ownership
>and "crp" control. I would like to add my vote to the "shut" control
>demand.
>
>    Dave Funk
>    Iowa Computer Aided Engineering Network (ICAEN)
>    University of Iowa, Iowa City, IA
>    dbfunk@icaen.uiowa.edu


I would tend to agree with Dave.
But what I wouldn't want to see is a 'Unix' fix to shut: if you are root
you can, otherwise you can't.
We have a situation similar to the one described by Dave, and we would like
to prevent people from shutting down 'important' nodes, BUT I don't want to
prevent some happy Apollo owner from shutting down his/her machine!
Yes, I'm thinking of something similar to the shutspm kludge; that would
be good enough for us.

Achille Petrilli
Cray & PWS Operations

casey@admin.cognet.ucla.edu (Casey Leedom) (09/16/88)

In article <8809141502.AA01500@testnode.mit.edu> krowitz@testnode.MIT.EDU
 (David Krowitz) writes:
> Why would a random user *need* to shut down a node anyhow?

  Oops!  I suddenly realize I may have been arguing on the wrong side all
this time ... :-)  I shut down my node sometimes two or three times a day
when it gets locked in one weird state or another.  A typical case: our
gateway (also an Apollo) goes down while I have several telnet/rsh
connections running through it.  For some reason this really screws my
node up and I'm forced to reboot.  (This is happening a lot recently - I
think our Apollo gateway is running out of mbufs or something similar.)

  If I couldn't shut my machine down I'd be in a tough spot.  I have the
root password, but what about all the other users?  Since I'm Mr. Support
(can you really believe that? :-)), I'd be getting calls constantly to
come over and reboot so-n-so's node.  Ack!!!

  Seriously, I fully agree with David that shut should be reserved for
root people, just to prevent accidents.  However, since we do need to
reboot the nodes so often ...  This need may go away when we bring up
SR10 (we just got our tape a couple of days ago - yeah!), but since I
have the hanging problem on broken TCP connections with TCP3.1, and as far
as I know SR10's TCP is identical, I doubt it will go away entirely ...

  I'll certainly endorse David's suggestions that shut warn you about
things like:

> 1) processes CRP in from another node
> 2) diskless partners that were currently booted off the disk
> 3) files that were opened from other nodes
> 4) if the node is a gateway

Casey

P.S.  Does anyone know exactly why the broken TCP connections hang a
    node and what can be done about them?  Or even what to do about an
    Apollo node being used as a gateway that seems to be reacting
    unfavorably to increased usage (but it should be pointed out that
    this increased usage hasn't even begun to make the gateway run
    slowly, etc.).

rreed@mntgfx.mentor.com (Robert Reed) (09/21/88)

krowitz@RICHTER.MIT.EDU (David Krowitz) writes:
    Actually, I believe that the SIGP command can be given a status code
    from the /sys/ins/fault.ins.{ftn c pas} file, although I understand
    that the process may not be suspended cleanly. You can use the Unix
    kill -STOP and kill -CONT commands  to suspend and restart processes.

I'm not sure this is always true; from the DOMAIN/IX csh man pages:

   Commands started in-process cannot be suspended or manipulated using
   the csh job-control facilities.

This is a restriction imposed on jobs started under an existing csh. If
the process has been started by doing a crp, even this may not be 
possible.  Do any of our comrades from Apollo know the answer to this?

pato@apollo.COM (Joe Pato) (09/23/88)

In article <1988Sep20.112851.1047@mntgfx.mentor.com> rreed@mntgfx.UUCP (Robert Reed) writes:
. . .
>I'm not sure this is always true, from the DOMAIN/IX csh man pages:
>
>   Commands started in-process cannot be suspended or manipulated using
>   the csh job-control facilities.
>
>This is a restriction imposed on jobs started under an existing csh. If
>the process has been started by doing a crp, even this may not be 
>possible.  Do any of our comrades from Apollo know the answer to this?

When a DOMAIN/IX csh is run in-process, it uses pgm_$invoke to run subcommands
instead of fork/exec.  Subcommands run in this mode are actually running
in the same process as the csh, and therefore the csh job control features
(which control a child process) are inoperable (there is no child process).

It is possible, but clumsy, to use job control when using a csh through crp.
Job control can be enabled by unsetting the INPROCESS environment variable
(or by setting the csh "inprocess" variable to 0).  C shells running in a crp
environment do not default to this state because crp does not catch the tty
stop signal (SIGTSTP).  As a result it never forwards the signal to the remote
process (the crp process itself stops).  If you want to be able to use job
control on the remote node, you have to have some mechanism to send the signal
remotely.  (It is sufficient, but clumsy, to create another remote process and
use kill...)
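(Put together, the workaround reads roughly like this - a hypothetical
session sketch based on the description above; the job name and pid are
made up:)

```shell
# In the csh reached via crp - make subcommands run as child processes:
% unsetenv INPROCESS        (or: set inprocess = 0)
% long_job                  # some job you later want to suspend
# ^Z is swallowed by crp, so from a SECOND remote process on the node:
% kill -TSTP 1234           # 1234 = long_job's pid (hypothetical)
% kill -CONT 1234           # resume it later the same way
```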

  Joe Pato                UUCP: ...{attunix,uw-beaver,brunix}!apollo!pato
  Apollo Computer Inc.  NSFNET: pato@apollo.com