[mod.computers.apollo] Apollo Access Control

JW-Peterson@UTAH-20.ARPA (John W Peterson) (02/18/86)

Geez - and I thought some of our staff were hopelessly paranoid...

> I) TCP/IP is hazardous.
>         1) Apollos do not seem to enforce the privileged-socket aspects of
>            Unix BSD 4.2 TCP/IP.
>         2) Since anyone can bind to such a socket, inbound use is virtually
>            insane...

It is generally acknowledged that BSD's "privileged ports" are a pretty
flimsy security mechanism.  All it takes is any non-Unix box to walk right
through that door.  For example, anybody with an IBM PC and an Ethernet card
can punch holes through this with little trouble.
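
To make that concrete, here is a minimal sketch (the target address and the
rsh example are illustrative assumptions, not a recipe) of why the check is
hollow: any host whose TCP code doesn't enforce the under-1024 rule, or any
root user on a Unix box, can bind a low source port and look exactly like a
"trusted" client to an rsh/rlogin-style server.

/*
 * On a Unix host this bind succeeds only for root, but a PC running
 * its own TCP implementation has no such restriction at all.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in local, remote;

    if (s < 0) {
        perror("socket");
        return 1;
    }
    memset(&local, 0, sizeof local);
    local.sin_family = AF_INET;
    local.sin_port = htons(1023);            /* a "privileged" source port */
    local.sin_addr.s_addr = htonl(INADDR_ANY);

    if (bind(s, (struct sockaddr *)&local, sizeof local) < 0) {
        perror("bind");                      /* non-root Unix stops here */
        return 1;
    }

    memset(&remote, 0, sizeof remote);
    remote.sin_family = AF_INET;
    remote.sin_port = htons(514);            /* rshd, for example */
    remote.sin_addr.s_addr = inet_addr("10.0.0.1");   /* hypothetical target */

    if (connect(s, (struct sockaddr *)&remote, sizeof remote) < 0) {
        perror("connect");
        return 1;
    }
    /* The server now sees a connection from a port below 1024 and
     * assumes it came from a trusted, root-started program. */
    close(s);
    return 0;
}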

> However the pads
> can control the display manager so they can TYPE ON MY TERMINAL
> if they want.  I DON'T WANT ANYONE ELSE TYPING ON MY TERMINAL.
>   ....
>  ...anyone can signal[/debug] anyone's process, even across the net and
> bridges.

This is only true if you run the server process manager (SPM) that allows
people to create arbitrary processes on your node.  If you're that terrified
about people invading your environment, don't run the SPM.  It certainly
isn't required on nodes with displays (although here we find it useful for
situations like graphics programs that run amok - it's much easier to
crp onto the node and kill the runaway process remotely than to wait for
the DM to finally time out).

>       3) Apollo has a trojan horse locksmith account built into login
> with account '<><><><>' and password unknown.  Why should they?

This login mechanism is only enabled when the node can't see the network
registries, and was undoubtedly put there for debugging login itself.
(Haven't you ever locked your keys in the car?)

>       4) Why should copies of setuid and subsystem programs retain their
> privileges, especially copies on floppy diskette?  What would stop
> someone from changing the appropriate bits on a diskette and
> screwing with my system, or more easily just getting on his system
> and creating a setuid or subsystem manager and importing it
> to my system to wreak havoc?

Again, if you're this paranoid you'd better lock up your tape/floppy drives.
Almost every system I've used (including Unix) can be easily compromised by
mounting bogus filesystems or plowing through the tape library.  Even with
access to the media it would still be fairly tricky to use this to
break security under Aegis.  For example, in order to set a privileged
subsystem ACL (from a tape or by hand) you must have rights to that
subsystem in the first place.  Also, it isn't enough for somebody to sit
down and create his/her own "login" subsystem.  Even if the subsystem has
the same name, it won't have the same UID - and that is what the system
cares about.
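
A purely conceptual sketch of that last point (this is not Aegis source; the
structure and the check are invented for illustration): if rights hang off an
object's unique UID rather than its name, a forged subsystem that merely
shares the name "login" never matches.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

struct subsystem {
    char     name[32];      /* human-readable name, e.g. "login"       */
    uint64_t uid;           /* unique object ID (values are arbitrary) */
};

/* Grant subsystem rights only on an exact UID match. */
static int same_subsystem(const struct subsystem *a,
                          const struct subsystem *b)
{
    return a->uid == b->uid;          /* the name is ignored entirely */
}

int main(void)
{
    struct subsystem genuine = { "login", 0x0000000000000001ULL };
    struct subsystem forged  = { "login", 0x0000000000000042ULL };

    printf("names match:    %d\n", strcmp(genuine.name, forged.name) == 0);
    printf("same subsystem: %d\n", same_subsystem(&genuine, &forged));
    return 0;
}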

> Also, by what bogus method has Apollo implemented setuid programs?
> Since program/process management is done mostly by a user library
> which loads programs in user mode (non-privileged), it is bound to
> be insecure and may present a means for programs to change IDs.

A program under Aegis must fork or start a new process to change its
associated user ID.  At this level the process management is done by the
kernel, not in user space.  The DM is a special case (analogous to init);
however, the boot shell is very specific about which programs it allows to
run in that position (i.e., only the DM, the SPM, or login).
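
For comparison, here is the closest everyday Unix analogue (not Aegis
internals): the identity change is bound to process creation, and only a
root-owned parent may switch to an arbitrary uid before exec'ing the target
program.  The uid below is a hypothetical target user.

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    uid_t new_uid = 1001;                 /* hypothetical target user */
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                       /* child takes on the new identity */
        if (setuid(new_uid) < 0) {        /* fails unless we are root */
            perror("setuid");
            _exit(1);
        }
        execl("/bin/id", "id", (char *)NULL);
        perror("execl");                  /* only reached if exec fails */
        _exit(1);
    }
    waitpid(pid, NULL, 0);                /* the parent keeps its own uid */
    return 0;
}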

In general, it amuses me that you seem to hold up Unix as the model of
system security while distrusting something that is quite a bit more
sophisticated.  (Or have you forgotten the 4.2 sendmail bug already?)

-------

apollo@ucbvax.UUCP (02/21/86)

I'm no security expert, but I think that the problem of doing distributed
computing where the units of distribution don't trust each other is something
of an open research problem, and at the very least requires real-time
encryption of just about everything sent over the net, and
authentication-server-based access control.  I am not aware of any
commercial vendors who
currently offer such a system.
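
For what it's worth, the rough shape of authentication-server-based access
control looks like the following.  All the names and the token format here
are hypothetical and the validation is a stub; a real system would verify a
cryptographically sealed ticket rather than trusting the peer node's word.

#include <stdio.h>
#include <string.h>

struct request {
    char user[16];
    char ticket[32];        /* opaque token issued by the auth server */
    char op[16];
};

/* Stand-in for a round trip to the authentication server. */
static int auth_server_validate(const char *user, const char *ticket)
{
    return user[0] != '\0' && strcmp(ticket, "sealed-by-auth-server") == 0;
}

static void handle(const struct request *r)
{
    if (!auth_server_validate(r->user, r->ticket)) {
        printf("refused %s for %s: not vouched for\n", r->op, r->user);
        return;
    }
    printf("performing %s for %s\n", r->op, r->user);
}

int main(void)
{
    struct request ok  = { "dan", "sealed-by-auth-server", "read" };
    struct request bad = { "eve", "guessed-token",         "read" };

    handle(&ok);
    handle(&bad);
    return 0;
}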

Many of your complaints have to do with intra-node security.  Obviously,
anyone who has physical control over a machine can do whatever they want with
it.  This problem isn't unique to Apollo nodes.  How secure would you feel if
your malicious students were able to configure new kernels and reboot the VAX
that you do your work on?

I won't go into a lot of detail about so-called privileged ports, but I
think it is generally agreed in the industry that at best they only provide
the illusion of security, and at worst they encourage breaches of security,
as every server (and every programmer who develops server software) needs
to have superuser rights.
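
A small sketch of what that means in practice on a BSD-derived Unix (port 513
is rlogind's): the bind below fails with EACCES for an ordinary user, so
anyone writing or testing such a server ends up running as root.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    if (s < 0) {
        perror("socket");
        return 1;
    }
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(513);                  /* rlogind's port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");                          /* EACCES unless root */
        close(s);
        return 1;
    }
    printf("bound port 513; we must be running as root\n");
    close(s);
    return 0;
}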

Security is a problem on any tightly coupled set of machines, but to single
out Apollo and imply that the problem is worse for us is bogus.  If it seems
worse for us, that's only because we do a better job than anyone else at
integrating our network.
-------

DAN@MC.LCS.MIT.EDU (Dan Blumenfeld) (02/22/86)

I think that Jim Rees has zeroed in on the problem.  The real issue is not
one of "absolute security", but rather how secure apollo systems are in
comparison to other systems.  While I too am no security expert (and have
no desire to be one), I have seen many breaches of security, etc. that
have been committed by students on a wide range of machines.  On vanilla-
flavored UNIX boxes, like VAXes and Suns running 4.2bsd, a student has
many opportunities to screw things up and/or make things unpleasant for
other users.  Two that immediately come to mind are "b vmunix -s" and
(on a VAX) "echo '...' > /dev/tty?", which are both breaches of security
in different ways.  There is also the issue of a student going up to a
file server and pressing the write-protect button on the disk, or turning
the server off, or screwing around with network cables (e.g. removing
Ethernet terminators), which are very serious breaches of security because
many users are immediately affected.  
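
To show why the /dev/tty trick above works at all, here is a minimal sketch
(the device name is hypothetical): the only thing protecting a terminal is
the mode on its device file, and if it is left world-writable any process can
scribble on it, escape sequences included.

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    const char *victim = "/dev/ttyp3";    /* some other user's terminal */
    const char *msg = "greetings from another login session\r\n";
    int fd = open(victim, O_WRONLY);

    if (fd < 0) {                         /* fails if the mode is sane */
        perror(victim);
        return 1;
    }
    if (write(fd, msg, strlen(msg)) < 0)
        perror("write");
    close(fd);
    return 0;
}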

In "secure" computing facilities (e.g. Dod agencies), the computer and all of
it's terminals (or nodes if it's a net of workstations) are behind
locked doors, with heavy-duty physical control over who has access to
the machine and who doesn't.  The machines behind these doors communicate
NOWHERE, except between themselves.  There are no dial-ups, no Ethernet
cables running between buildings, no fiber optics, zippo.  You can't get
magnetic media in and out of the room, let alone the building, without
special passes.  True, the data on these machines is classified, but
part of this complex security scheme is to prevent tampering and trashing.
How are people allowed to use this equipment?  The magic word is "trust".
The people that can gain access have demonstrated that they are trustworthy
enough to use the system without pilfering data and/or attempting to 
compromise its integrity.  Security clearances are, in the final analysis,
only a measure of trust.

So, unless you want to run a University computer lab like a secure facility,
there is no way you're going to be able to prevent students (or anyone else
for that matter) from "playing around", especially with the kind of machines
and operating systems in use.  There's also the issue of students ripping
off licensed source or object code, but that's a different kind of security
problem.

Dan Blumenfeld
University of Pennsylvania